Why This Kubectl Cheat Sheet Is Different
Welcome to the most comprehensive kubectl cheat sheet designed specifically for platform engineers, DevOps teams, and Kubernetes practitioners in 2025. Unlike traditional cheat sheets that simply list commands, this guide is organized by real-world scenarios you encounter daily.
- Scenario-Based Organization: commands grouped by actual use cases
- Difficulty Levels: Beginner, Intermediate, and Advanced sections
- Pro Tips & Gotchas: expert insights you won't find in the docs
- Troubleshooting Workflows: complete debugging sequences
- Performance Best Practices: when and why to use each command
Installation & Setup
Installing kubectl (Beginner)
# macOS (via Homebrew)
brew install kubectl
# Linux (via curl)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# Windows (via Chocolatey)
choco install kubernetes-cli
# Verify installation
kubectl version --client
Configuring kubectl Context
# View current context
kubectl config current-context
# List all contexts
kubectl config get-contexts
# Switch context
kubectl config use-context <context-name>
# Set default namespace for current context
kubectl config set-context --current --namespace=<namespace>
# View full kubeconfig
kubectl config view
Pro Tip: Use kubectx and kubens for faster context and namespace switching. Install via: brew install kubectx
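A quick usage sketch; the context and namespace names below are placeholders for your own:
# Switch cluster context
kubectx <context-name>
# Switch the active namespace in the current context
kubens <namespace>
# Jump back to the previous context or namespace
kubectx -
kubens -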
Cluster Information & Context
Scenario (Beginner): You need to inspect cluster health and get basic information
# Get cluster information
kubectl cluster-info
# Get cluster nodes with detailed info
kubectl get nodes -o wide
# Describe a specific node
kubectl describe node <node-name>
# View control plane component status (deprecated in recent Kubernetes versions)
kubectl get componentstatuses
# Check API server health
kubectl get --raw /healthz
# View cluster events
kubectl get events --all-namespaces --sort-by='.lastTimestamp'
Resource Discovery
# List all resource types in cluster
kubectl api-resources
# List all resources in current namespace
kubectl get all
# List all resources across all namespaces
kubectl get all --all-namespaces
# Get API versions supported by the cluster
kubectl api-versions
# Explain a resource type
kubectl explain pods
kubectl explain pod.spec.containers
Pod Management & Debugging
Scenario (Intermediate): Your application pods are crashing, and you need to debug them
# List all pods in current namespace
kubectl get pods
# List pods with more details (IP, Node, etc.)
kubectl get pods -o wide
# Watch pods in real-time
kubectl get pods --watch
# Get pods by label
kubectl get pods -l app=nginx
# Get pods from all namespaces
kubectl get pods --all-namespaces
# Describe a pod (shows events, volumes, conditions)
kubectl describe pod <pod-name>
# Get pod logs
kubectl logs <pod-name>
# Get logs from previous crashed container
kubectl logs <pod-name> --previous
# Stream logs in real-time
kubectl logs -f <pod-name>
# Get logs from specific container in multi-container pod
kubectl logs <pod-name> -c <container-name>
# Get logs with timestamps
kubectl logs <pod-name> --timestamps=true
# Get last 100 lines of logs
kubectl logs <pod-name> --tail=100
# Get logs from last 1 hour
kubectl logs <pod-name> --since=1h
Pro Tip: If a pod has restarted, check kubectl logs <pod> --previous. This shows you what happened before the restart.
Interactive Debugging
# Execute command in a running pod
kubectl exec <pod-name> -- <command>
# Get shell access to a pod
kubectl exec -it <pod-name> -- /bin/bash
kubectl exec -it <pod-name> -- /bin/sh
# Execute in specific container (multi-container pod)
kubectl exec -it <pod-name> -c <container-name> -- /bin/bash
# Copy files to/from pod
kubectl cp /local/path/to/file <pod-name>:/path/to/destination
kubectl cp <pod-name>:/path/to/file /local/path/to/destination
# Port forward to access pod locally
kubectl port-forward <pod-name> 8080:80
# Port forward for a deployment
kubectl port-forward deployment/<deployment-name> 8080:80
Scenario: Create a temporary debug pod in the same network namespace
# Create ephemeral debug container (K8s 1.23+)
kubectl debug -it <pod-name> --image=busybox --target=<container-name>
# Debug with different image
kubectl debug -it <pod-name> --image=nicolaka/netshoot
# Create debug pod on specific node
kubectl debug node/<node-name> -it --image=ubuntu
# Run temporary pod for debugging
kubectl run debug-pod --rm -it --image=busybox -- sh
Pod Management
# Create a pod imperatively
kubectl run nginx --image=nginx:latest
# Create pod with specific labels
kubectl run nginx --image=nginx --labels="app=nginx,env=prod"
# Create pod and expose it as a service
kubectl run nginx --image=nginx --port=80 --expose
# Delete a pod
kubectl delete pod <pod-name>
# Delete all pods with label
kubectl delete pods -l app=nginx
# Force delete a stuck pod
kubectl delete pod <pod-name> --grace-period=0 --force
# Delete all pods in current namespace
kubectl delete pods --all
Warning: Avoid using --force --grace-period=0 in production unless absolutely necessary. It can lead to data loss and orphaned resources.
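Before force-deleting, it is worth checking what is blocking termination; a quick sketch (the pod name is a placeholder):
# Inspect finalizers and the deletion timestamp on a stuck pod
kubectl get pod <pod-name> -o jsonpath='{.metadata.finalizers}{"\n"}{.metadata.deletionTimestamp}{"\n"}'
# Look for events that explain why termination is hanging
kubectl get events --field-selector involvedObject.name=<pod-name> --sort-by='.lastTimestamp'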
Deployments & Scaling
Scenario (Intermediate): Deploy and scale your application
# Create a deployment
kubectl create deployment nginx --image=nginx:latest
# Create deployment with replicas
kubectl create deployment nginx --image=nginx:latest --replicas=3
# Scale deployment
kubectl scale deployment nginx --replicas=5
# Autoscale deployment (HPA)
kubectl autoscale deployment nginx --cpu-percent=70 --min=2 --max=10
# Update deployment image (rollout)
kubectl set image deployment/nginx nginx=nginx:1.21
# View rollout status
kubectl rollout status deployment/nginx
# View rollout history
kubectl rollout history deployment/nginx
# Rollback to previous version
kubectl rollout undo deployment/nginx
# Rollback to specific revision
kubectl rollout undo deployment/nginx --to-revision=2
# Pause rollout
kubectl rollout pause deployment/nginx
# Resume rollout
kubectl rollout resume deployment/nginx
# Restart deployment (recreate all pods)
kubectl rollout restart deployment/nginx
Pro Tip: Always run kubectl rollout status after deployment updates in CI/CD pipelines. Add --timeout=5m to fail fast if the rollout takes too long.
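A minimal CI step sketch built from the commands above; the deployment name, image tag, and timeout are illustrative:
# deploy step: update the image, then fail the job if the rollout stalls
set -euo pipefail
kubectl set image deployment/nginx nginx=nginx:1.21
if ! kubectl rollout status deployment/nginx --timeout=5m; then
  echo "Rollout failed, rolling back" >&2
  kubectl rollout undo deployment/nginx
  exit 1
fi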
Deployment Management
# Get all deployments
kubectl get deployments
# Get deployment with details
kubectl describe deployment <deployment-name>
# Edit deployment in-place
kubectl edit deployment <deployment-name>
# Export deployment YAML
kubectl get deployment <deployment-name> -o yaml > deployment.yaml
# Delete deployment
kubectl delete deployment <deployment-name>
# View resource usage of a deployment's pods (kubectl top supports pods and nodes only)
kubectl top pods -l app=<deployment-name>
Scenario: Zero-downtime deployment with careful rollout
# Update with custom rollout strategy
kubectl patch deployment nginx -p '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
# Set deployment annotations for change tracking
kubectl annotate deployment nginx kubernetes.io/change-cause="Updated to nginx 1.21"
# Watch pod updates during rollout
kubectl get pods -l app=nginx --watch
# Verify all replicas are ready
kubectl wait --for=condition=available --timeout=300s deployment/nginx
Services & Networking
Scenario (Intermediate): Expose your application and debug network connectivity
# Create ClusterIP service (default)
kubectl expose deployment nginx --port=80 --target-port=80
# Create NodePort service
kubectl expose deployment nginx --type=NodePort --port=80
# Create LoadBalancer service
kubectl expose deployment nginx --type=LoadBalancer --port=80
# Get all services
kubectl get services
kubectl get svc
# Describe service
kubectl describe service nginx
# Get service endpoints
kubectl get endpoints nginx
# Test service DNS resolution from pod
kubectl run debug --rm -it --image=busybox -- nslookup nginx.default.svc.cluster.local
# Test service connectivity
kubectl run debug --rm -it --image=nicolaka/netshoot -- curl http://nginx
Ingress Management
# Get all ingress resources
kubectl get ingress
# Describe ingress
kubectl describe ingress <ingress-name>
# Get ingress with details
kubectl get ingress <ingress-name> -o yaml
# Test ingress from within cluster
kubectl run debug --rm -it --image=curlimages/curl -- curl -H "Host: example.com" http://<ingress-controller-service>
Pro Tip: Use kubectl get svc -o wide --all-namespaces | grep LoadBalancer to quickly find all externally exposed services and their IPs.
Network Policies
# Get network policies
kubectl get networkpolicies
kubectl get netpol
# Describe network policy
kubectl describe networkpolicy <policy-name>
# Test network policy (requires network policy debugging tools)
kubectl run source-pod --rm -it --image=busybox -- wget -O- http://target-service
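For reference, a minimal default-deny-ingress policy you might apply while testing; this is a sketch, and the namespace is an assumption:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
With this applied, the wget test above should time out until you add an allow rule.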
ConfigMaps & Secrets
ConfigMaps (Intermediate)
# Create ConfigMap from literal values
kubectl create configmap app-config \
  --from-literal=APP_ENV=production \
  --from-literal=LOG_LEVEL=info
# Create ConfigMap from file
kubectl create configmap app-config --from-file=config.properties
# Create ConfigMap from directory
kubectl create configmap app-config --from-file=./config-dir/
# Create ConfigMap from env file
kubectl create configmap app-config --from-env-file=.env
# Get ConfigMaps
kubectl get configmaps
kubectl get cm
# Describe ConfigMap
kubectl describe configmap app-config
# View ConfigMap data
kubectl get configmap app-config -o yaml
# Edit ConfigMap
kubectl edit configmap app-config
# Delete ConfigMap
kubectl delete configmap app-config
Secrets
# Create generic secret from literal
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='P@ssw0rd!'
# Create secret from file
kubectl create secret generic ssh-key --from-file=ssh-privatekey=~/.ssh/id_rsa
# Create Docker registry secret
kubectl create secret docker-registry regcred \
  --docker-server=docker.io \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=<your-email>
# Create TLS secret
kubectl create secret tls tls-secret \
  --cert=path/to/cert.crt \
  --key=path/to/key.key
# Get secrets
kubectl get secrets
# Describe secret (doesn't show values)
kubectl describe secret db-credentials
# View secret data (base64 encoded)
kubectl get secret db-credentials -o yaml
# Decode secret value
kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 --decode
# Edit secret
kubectl edit secret db-credentials
# Delete secret
kubectl delete secret db-credentials
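To wire the examples above into a workload, a hedged pod spec sketch that loads app-config and db-credentials as environment variables (the pod and container names are illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "env | sort && sleep 3600"]
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: db-credentials
EOF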
Storage & Volumes (Advanced)
# Get Persistent Volumes
kubectl get pv
# Get Persistent Volume Claims
kubectl get pvc
# Describe PVC
kubectl describe pvc <pvc-name>
# Get storage classes
kubectl get storageclass
kubectl get sc
# Describe storage class
kubectl describe storageclass
# Check PVC status and binding
kubectl get pvc -o wide
# View PVC events
kubectl describe pvc <pvc-name> | grep -A 10 Events
# Delete PVC (careful: may delete data)
kubectl delete pvc <pvc-name>
# Force delete stuck PVC
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
Scenario: Debug storage issues and PVC binding problems
# Check if PV is bound to PVC
kubectl get pv,pvc
# View PV details including claim reference
kubectl get pv -o yaml | grep -A 5 claimRef
# Check volume attachment to nodes
kubectl get volumeattachments
# Describe volume attachment
kubectl describe volumeattachment
# Check CSI drivers
kubectl get csidrivers
# Check CSI nodes
kubectl get csinodes
RBAC & Security
Scenario (Advanced): Check permissions and debug RBAC issues
# Check if you can perform an action
kubectl auth can-i create pods
kubectl auth can-i create deployments --namespace=production
kubectl auth can-i '*' '*' --all-namespaces
# Check permissions for another user
kubectl auth can-i get pods --as=<user-email>
kubectl auth can-i create services --as=system:serviceaccount:default:my-sa
# List your permissions
kubectl auth can-i --list
# List permissions for a specific namespace
kubectl auth can-i --list --namespace=production
Service Accounts
# Get service accounts
kubectl get serviceaccounts
kubectl get sa
# Describe service account
kubectl describe sa <service-account-name>
# Create service account
kubectl create serviceaccount <service-account-name>
# Get service account token
kubectl create token <service-account-name>
# Get service account token (legacy method)
kubectl get secret $(kubectl get sa <service-account-name> -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode
Roles and RoleBindings
# Get roles and cluster roles
kubectl get roles
kubectl get clusterroles
# Get role bindings
kubectl get rolebindings
kubectl get clusterrolebindings
# Describe role
kubectl describe role <role-name>
# View who can perform action
kubectl get rolebindings,clusterrolebindings --all-namespaces -o json | jq '.items[] | select(.subjects[]?.kind=="User" and .subjects[]?.name=="<user-email>")'
# Create role
kubectl create role pod-reader --verb=get,list,watch --resource=pods
# Create role binding
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=<user-email>
# Create cluster role binding
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<user-email>
Pro Tip: Use the kubectl access-matrix plugin for visual RBAC debugging. Install it with kubectl krew install access-matrix, then run kubectl access-matrix.
Advanced Troubleshooting
Scenario (Advanced): Complete troubleshooting workflow for a failing application
# Step 1: Check pod status
kubectl get pods -l app=myapp
# Step 2: Describe pod for events and conditions
kubectl describe pod <pod-name>
# Step 3: Check logs from all containers
kubectl logs <pod-name> --all-containers=true
# Step 4: Check previous logs if pod restarted
kubectl logs <pod-name> --previous
# Step 5: Check resource usage
kubectl top pod <pod-name>
# Step 6: Check node health
kubectl get nodes
kubectl describe node <node-name>
# Step 7: Check events in namespace
kubectl get events --sort-by='.lastTimestamp' | grep <pod-name>
# Step 8: Check if service endpoints exist
kubectl get endpoints <service-name>
# Step 9: Test DNS resolution
kubectl run debug --rm -it --image=busybox -- nslookup <service-name>
# Step 10: Test network connectivity
kubectl run debug --rm -it --image=nicolaka/netshoot -- curl http://<service-name>
Resource Usage Analysis
# Get pod resource usage
kubectl top pods
# Get pod resource usage with containers
kubectl top pods --containers
# Get node resource usage
kubectl top nodes
# Get resource usage for specific namespace
kubectl top pods -n production
# Sort pods by CPU usage
kubectl top pods --sort-by=cpu
# Sort pods by memory usage
kubectl top pods --sort-by=memory
Note: kubectl top requires metrics-server to be installed in your cluster.
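If metrics-server is not installed, the upstream manifest is the usual starting point; verify the URL and version against your cluster before applying:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Confirm the metrics API has registered
kubectl get apiservice v1beta1.metrics.k8s.io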
Event Analysis
# Get all events sorted by time
kubectl get events --all-namespaces --sort-by='.lastTimestamp'
# Get pod-related events across all namespaces, sorted by time (header row stripped)
kubectl get events --all-namespaces --field-selector involvedObject.kind=Pod --sort-by='.lastTimestamp' | awk '{if(NR>1)print}'
# Watch events in real-time
kubectl get events --watch
# Get events for specific pod
kubectl get events --field-selector involvedObject.name=<pod-name>
# Get warning events only
kubectl get events --field-selector type=Warning
# Get events from specific namespace
kubectl get events -n production --sort-by='.lastTimestamp'
Advanced Debugging Tools
# Use stern for multi-pod log tailing (requires stern)
stern <pod-name-or-pattern>
# Tail logs from all pods with label
stern -l app=nginx
# Debug with different tools based on issue:
# Network debugging (nicolaka/netshoot image)
kubectl run netshoot --rm -it --image=nicolaka/netshoot -- bash
# Inside netshoot, you can use:
# - tcpdump, wireshark
# - curl, wget
# - nslookup, dig
# - traceroute, mtr
# - netstat, ss
# - iperf3
# Kubernetes API debugging
kubectl proxy --port=8080
# Then access: http://localhost:8080/api/v1
# Get raw API response
kubectl get --raw /api/v1/namespaces/default/pods
Performance & Optimization
Resource Quotas & Limits (Advanced)
# Get resource quotas
kubectl get resourcequota
kubectl get quota
# Describe resource quota
kubectl describe quota
# Get limit ranges
kubectl get limitrange
kubectl get limits
# Describe limit range
kubectl describe limitrange
# Check namespace resource usage vs quota
kubectl describe namespace <namespace>
Horizontal Pod Autoscaler (HPA)
# Get HPA status
kubectl get hpa
# Describe HPA
kubectl describe hpa
# Create HPA
kubectl autoscale deployment nginx --cpu-percent=70 --min=2 --max=10
# Note: kubectl autoscale only supports a CPU target; for memory or custom metrics,
# create an autoscaling/v2 HorizontalPodAutoscaler manifest (see the sketch after this block)
# Delete HPA
kubectl delete hpa <hpa-name>
# Watch HPA scaling in real-time
kubectl get hpa --watch
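As referenced above, a minimal autoscaling/v2 HPA sketch for memory-based scaling; the deployment name and thresholds are illustrative:
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
EOF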
Vertical Pod Autoscaler (VPA)
# Get VPA recommendations
kubectl get vpa
# Describe VPA
kubectl describe vpa
# View VPA recommendations
kubectl get vpa <vpa-name> -o jsonpath='{.status.recommendation}'
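These VPA commands only work once the Vertical Pod Autoscaler components are installed in the cluster. A minimal recommendation-only object, as a sketch (the target name is illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  updatePolicy:
    updateMode: "Off"   # recommend only; do not evict pods
EOF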
GitOps & CI/CD Integration
Scenario (Advanced): Integrate kubectl in CI/CD pipelines safely
# Wait for deployment to be ready (use in CI/CD)
kubectl wait --for=condition=available --timeout=300s deployment/nginx
# Check rollout status with exit code (for CI/CD)
kubectl rollout status deployment/nginx --timeout=300s
# Perform smoke test after deployment
kubectl run smoke-test --rm -it --image=curlimages/curl --restart=Never -- curl http://nginx
# Validate YAML before applying (dry-run)
kubectl apply --dry-run=client -f deployment.yaml
# Validate on server side
kubectl apply --dry-run=server -f deployment.yaml
# Diff before applying
kubectl diff -f deployment.yaml
# Apply with --record for rollout history (deprecated; prefer the change-cause annotation shown below)
kubectl apply -f deployment.yaml --record
# Set image with change cause annotation
kubectl set image deployment/nginx nginx=nginx:1.21
kubectl annotate deployment/nginx kubernetes.io/change-cause="Updated to nginx 1.21 via CI/CD"
ArgoCD/FluxCD Integration
# Check Application health (ArgoCD)
kubectl get applications -n argocd
# Describe Application
kubectl describe application -n argocd
# Check HelmRelease (FluxCD)
kubectl get helmreleases -n flux-system
# Describe HelmRelease
kubectl describe helmrelease -n flux-system
# Check GitRepository source
kubectl get gitrepositories -n flux-system
Pro Tips for Platform Engineers
# Install krew (kubectl plugin manager)
# Visit: https://krew.sigs.k8s.io/docs/user-guide/setup/install/
# Useful plugins:
kubectl krew install ctx # Context switching
kubectl krew install ns # Namespace switching
kubectl krew install tree # Resource hierarchy visualization
kubectl krew install neat # Clean YAML output
kubectl krew install access-matrix # RBAC visualization
kubectl krew install tail # Multi-pod log tailing
kubectl krew install sniff # Network packet capture
kubectl krew install node-shell # Get shell on node
# Add to ~/.bashrc or ~/.zshrc:
# Kubectl aliases
alias k='kubectl'
alias kgp='kubectl get pods'
alias kgs='kubectl get svc'
alias kgd='kubectl get deployments'
alias kdp='kubectl describe pod'
alias kl='kubectl logs'
alias klf='kubectl logs -f'
alias kex='kubectl exec -it'
alias kaf='kubectl apply -f'
alias kdelf='kubectl delete -f'
alias kgpw='kubectl get pods --watch'
alias kctx='kubectl config use-context'
alias kns='kubectl config set-context --current --namespace'
# Enable kubectl auto-completion
source <(kubectl completion bash) # for bash
source <(kubectl completion zsh) # for zsh
# Make aliases work with completion
complete -F __start_kubectl k
# JSON output with jq filtering
kubectl get pods -o json | jq '.items[] | {name:.metadata.name, status:.status.phase}'
# Custom columns output
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
# JSONPath output
kubectl get pods -o jsonpath='{.items[*].metadata.name}'
# Wide output (more details)
kubectl get pods -o wide
# YAML output (for backup/restore)
kubectl get deployment nginx -o yaml > nginx-deployment.yaml
# Get specific field
kubectl get pod nginx-pod -o jsonpath='{.status.podIP}'
# Sort output
kubectl get pods --sort-by=.metadata.creationTimestamp
# Watch multiple resources at once
kubectl get pods,svc,deploy --watch
# Use labels effectively for organization
kubectl label pods nginx-pod env=production tier=frontend
kubectl get pods -l env=production,tier=frontend
# Use annotations for metadata
kubectl annotate pod nginx-pod description="Production nginx server"
# Patch resources quickly
kubectl patch deployment nginx -p '{"spec":{"replicas":5}}'
# Replace resources (instead of apply)
kubectl replace -f deployment.yaml
# Convert manifests between API versions (requires the kubectl-convert plugin in recent kubectl releases)
kubectl convert -f old-deployment.yaml --output-version apps/v1
# Generate YAML without creating resources
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deployment.yaml
# Create resources from stdin
cat <<EOF | kubectl apply -f -
<your-resource-manifest>
EOF
- Always use RBAC with least privilege principle
- Use separate kubeconfigs for different environments (dev/staging/prod)
- Enable audit logging for kubectl API calls
- Use admission controllers (OPA/Kyverno) to enforce policies
- Never run kubectl apply directly in production; use GitOps
- Use --dry-run and kubectl diff before applying changes
- Implement network policies to restrict pod communication
- Rotate service account tokens regularly
- Use Pod Security Admission/Standards for workload security
- Use --field-selector and -l (label selector) to reduce API load
- Avoid kubectl get all --all-namespaces in production; it is too resource-intensive
- Use kubectl top regularly to identify resource bottlenecks
- Set proper resource requests and limits on all workloads
- Enable HPA for stateless applications
- Use node affinity and pod anti-affinity for better distribution
- Implement pod disruption budgets (PDB) for high availability; a PDB sketch follows this list
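As a concrete example of the PDB recommendation above, a hedged sketch; the label and minAvailable value are placeholders:
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx
EOF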
Quick Reference Tables
Common Command Patterns
| Task | Command |
|---|---|
| Get resources | kubectl get <resource> |
| Describe resource | kubectl describe <resource> <name> |
| Create resource | kubectl create <resource> <name> |
| Delete resource | kubectl delete <resource> <name> |
| Edit resource | kubectl edit <resource> <name> |
| Apply YAML | kubectl apply -f file.yaml |
| Export YAML | kubectl get <resource> <name> -o yaml |
| Watch resources | kubectl get <resource> --watch |
Resource Short Names
| Full Name | Short Name |
|---|---|
| pods | po |
| services | svc |
| deployments | deploy |
| replicasets | rs |
| statefulsets | sts |
| daemonsets | ds |
| namespaces | ns |
| configmaps | cm |
| persistentvolumes | pv |
| persistentvolumeclaims | pvc |
| ingresses | ing |
| networkpolicies | netpol |
| serviceaccounts | sa |
Output Formats
| Format | Usage |
|---|---|
| -o wide | Additional details (IP, Node, etc.) |
| -o yaml | YAML format (for backup/restore) |
| -o json | JSON format (for parsing) |
| -o jsonpath | Extract specific fields |
| -o custom-columns | Custom column output |
| -o name | Resource name only |
Integration with Atmosly Platform
How Atmosly Enhances Kubectl Workflows
While kubectl is powerful, managing Kubernetes clusters at scale requires additional platform capabilities. Atmosly provides:
- Visual Pipeline Builder: build CI/CD pipelines without memorizing kubectl commands
- GitOps Integration: automated kubectl operations through Git workflows
- AI Copilot: get kubectl command suggestions based on natural language
- Multi-Cluster Management: manage kubectl contexts across multiple clusters from one interface
- Cost Intelligence: see cost implications before scaling with kubectl
- Environment Cloning: clone entire Kubernetes environments with one command
- Audit & Compliance: track all kubectl operations with a full audit trail
Additional Resources
- Official Kubernetes Cheat Sheet
- Kubectl Reference Documentation
- Kubectl Plugins (Krew)
- Stern - Multi-pod Log Tailing
- K9s - Terminal UI for Kubernetes
Conclusion
This comprehensive kubectl cheat sheet covered more than 150 commands organized by real-world scenarios. Whether you're debugging a production incident, deploying new applications, or managing RBAC policies, you now have a complete reference guide.
- Always use kubectl describe and kubectl logs for debugging
- Leverage kubectl apply --dry-run and kubectl diff before changes
- Use labels and selectors for efficient resource management
- Implement RBAC with least privilege principle
- Monitor resource usage with kubectl top
- Automate with GitOps instead of direct kubectl in production
- Use kubectl plugins (krew) to enhance functionality
Next Steps:
- Bookmark this page for quick reference
- Practice these commands in a safe environment (minikube, kind)
- Set up kubectl aliases and completion in your shell
- Install useful kubectl plugins via krew
- Integrate kubectl best practices into your CI/CD pipelines
- Explore Atmosly for platform-level Kubernetes management
Happy Kuberneting!