Introduction to Kubernetes Environment Cloning
Kubernetes environment cloning is the practice of creating a complete replica of an existing Kubernetes environment, including all deployments, services, configurations, secrets, persistent volumes, and networking, in order to establish identical staging environments for testing, ephemeral preview environments for feature branches, disaster recovery environments in other regions or clouds, or development environments that mirror production exactly. Manual cloning traditionally takes hours: copying dozens of YAML manifests, updating namespace references, adjusting resource allocations, recreating secrets, provisioning storage volumes, configuring ingress rules, and verifying that all service dependencies are correctly replicated. The process is tedious and error-prone, and it often produces subtle configuration drift in which clones differ from the source in ways that cause bugs to manifest differently across environments.
Effective environment cloning transforms development velocity and quality:
- Developers can test changes in production-like environments without risking actual production
- QA teams can spin up isolated test environments per feature branch, eliminating test interference
- Debugging production issues becomes easier when you can reproduce the exact production state in a safe environment
- Disaster recovery is simpler when production is pre-cloned to standby regions ready for instant failover
- Compliance testing can run in dedicated environments that replicate production exactly, without touching customer data or production systems
This comprehensive guide covers Kubernetes environment cloning from manual approaches through automated solutions:
- What environment cloning means and why it's valuable beyond simple namespace creation
- Manual cloning using kubectl and kustomize, with complete step-by-step procedures
- Automated cloning with Helm and templating strategies
- Handling secrets and sensitive data during cloning without exposing credentials
- Managing persistent data: whether to clone volume snapshots or start with fresh volumes
- Implementing ephemeral preview environments for pull requests using tools like the Argo CD Pull Request Generator or Flux
- Scaling cloning to dozens of dynamic environments without overwhelming cluster capacity
- Cost optimization for cloned environments to avoid budget explosions from running many replicas of production
- How Atmosly's one-click environment cloning automates the entire process: select a source environment and target cluster in the web interface, and Atmosly clones all services with their deployments and configurations, copies environment variables while securely handling secrets (masking sensitive values or prompting for new ones), replicates CI/CD pipeline configurations, maintains service dependencies and ingress routing, and provisions the entire stack in minutes rather than hours, enabling teams to create staging, per-developer, or feature preview environments on demand without DevOps bottlenecks or manual YAML editing
What is Environment Cloning? Beyond Simple Namespace Copy
Understanding Kubernetes Environments
A Kubernetes "environment" typically consists of:
- Namespace: Logical isolation boundary (production, staging, dev, feature-xyz)
- Workloads: Deployments, StatefulSets, DaemonSets, Jobs, CronJobs
- Services: ClusterIP, LoadBalancer, NodePort services for network access
- Configuration: ConfigMaps, Secrets, environment variables
- Storage: PersistentVolumeClaims, StorageClasses
- Networking: Ingress rules, NetworkPolicies, Service entries
- RBAC: ServiceAccounts, Roles, RoleBindings specific to environment
- Resource Quotas: CPU/memory limits for namespace
Cloning an environment means replicating ALL of these components, not just creating an empty namespace.
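As a quick way to see everything a full clone must cover, the sketch below enumerates every namespaced resource type in an environment (the namespace name is illustrative):
# List every namespaced resource in the production namespace
# (covers workloads, config, storage, networking, and RBAC in one pass)
for kind in $(kubectl api-resources --namespaced=true --verbs=list -o name); do
  kubectl get "$kind" -n production --ignore-not-found
done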
Why Clone Environments?
Use Case 1: Staging Environment from Production
Create production-identical staging for testing changes safely:
- Same microservices architecture (if production has 12 services, staging has same 12)
- Same service versions (testing upgrade from v1.2 to v1.3 in environment matching production)
- Same configuration structure (different values like database hosts, but same keys)
- Same ingress routing and load balancer setup
- But with reduced resources (staging 1-2 replicas vs production 10)
Use Case 2: Preview Environments for Feature Branches
Ephemeral environments per pull request:
- Developer creates PR for new feature
- Automation clones production environment
- Deploys PR's code in cloned environment
- QA tests feature in isolation (doesn't affect other testing)
- Environment auto-deleted when PR merged or closed
Use Case 3: Disaster Recovery
Pre-cloned production in different region or cloud:
- Primary production in us-east-1
- DR environment cloned to eu-west-1
- Same services, ingress, configuration
- Data replication via database replication or volume snapshots
- DNS failover switches traffic if primary fails
Manual Environment Cloning: Step-by-Step Guide
Step 1: Export Source Environment
# Export all resources from source namespace
kubectl get all,ingress,configmap,secret,pvc -n production -o yaml > production-export.yaml
# This exports: Deployments, Services, Ingresses, ConfigMaps, Secrets, PVCs
# But includes cluster-specific fields that must be cleaned
Step 2: Clean Exported YAML
Remove cluster-specific metadata:
# Remove these fields (they're auto-generated):
# - metadata.uid
# - metadata.resourceVersion
# - metadata.creationTimestamp
# - metadata.selfLink
# - status sections
# - ownerReferences
# Use yq (v4) or manual editing; note the kubectl export is a v1 List,
# so the fields to delete live under .items[]
yq eval 'del(.items[].metadata.uid) | del(.items[].metadata.resourceVersion) | del(.items[].metadata.creationTimestamp) | del(.items[].metadata.selfLink) | del(.items[].metadata.ownerReferences) | del(.items[].status)' \
  production-export.yaml > production-clean.yaml
Step 3: Transform for Target Environment
Update namespace references:
# Replace namespace: production with namespace: staging
sed 's/namespace: production/namespace: staging/g' production-clean.yaml > staging.yaml
# Update resource allocations (staging uses less)
# Change replicas: 10 to replicas: 2
# Change CPU: 2000m to CPU: 500m
# Requires manual YAML editing or scripts
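Scripting beats hand-editing here. A minimal yq (v4) sketch that downsizes every Deployment in the transformed export (the replica count is an example value):
# Set all Deployments in the exported List to 2 replicas for staging
yq eval '(.items[] | select(.kind == "Deployment") | .spec.replicas) = 2' \
  staging.yaml > staging-sized.yaml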
Step 4: Handle Secrets Separately
Never copy production secrets to staging!
# Extract secret structure (keys, not values)
kubectl get secrets -n production -o yaml | \
  yq eval '.items[] | .data |= with_entries(.value = "PLACEHOLDER")' > secret-templates.yaml
# Manually fill in staging-appropriate values
# Or use different secret source (staging database password, not production)
Step 5: Create Target Namespace and Apply
# Create staging namespace
kubectl create namespace staging
# Apply all resources
kubectl apply -f staging.yaml -n staging
# Verify all pods running
kubectl get pods -n staging
# Typical issues:
# - ImagePullSecrets missing (copy from production namespace)
# - PVCs pending (different StorageClass needed)
# - Ingress conflicts (different hostnames required)
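Rather than eyeballing pod status, you can block until every Deployment reports ready (the timeout value is illustrative):
# Wait up to 5 minutes for all Deployments in staging to become available
kubectl wait --for=condition=Available deployment --all -n staging --timeout=300s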
Manual cloning problems:
- ⏰ Time-consuming: 1-3 hours for complex environments
- 🐛 Error-prone: Easy to miss resources or misconfigure
- 📝 Not repeatable: Each clone requires manual work
- 🔄 Drift: Clones diverge from source over time
Automated Cloning with Kustomize
Repository Structure for Multiple Environments
k8s-manifests/
├── base/                          # Shared configuration
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   └── kustomization.yaml
└── overlays/
    ├── production/
    │   ├── kustomization.yaml     # 10 replicas, high resources
    │   ├── ingress-patch.yaml     # app.example.com
    │   └── resources-patch.yaml
    ├── staging/
    │   ├── kustomization.yaml     # 2 replicas, medium resources
    │   ├── ingress-patch.yaml     # staging.example.com
    │   └── resources-patch.yaml
    └── preview-pr-123/            # Ephemeral preview environment
        ├── kustomization.yaml     # 1 replica, low resources
        └── ingress-patch.yaml     # pr-123.preview.example.com
Base kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
commonLabels:
  app: myapp
Staging overlay (overlays/staging/kustomization.yaml):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging              # Deploy to staging namespace
resources:
  - ../../base                  # reference the base ("bases" is deprecated)
patches:
  - path: resources-patch.yaml  # Reduce resources
  - path: ingress-patch.yaml    # Different hostname
replicas:
  - name: myapp
    count: 2                    # Staging uses 2 replicas vs prod 10
Deploy staging environment:
# Build and apply staging configuration
kubectl apply -k overlays/staging
# This creates complete staging environment from base templates
# With staging-specific overrides (replicas, resources, ingress)
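To inspect what will be applied without touching the cluster, render the overlay first:
# Render the staging manifests locally for review
kubectl kustomize overlays/staging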
Benefits:
- ✅ DRY (Don't Repeat Yourself): Base config shared
- ✅ Consistent: All environments use same base
- ✅ Versioned: Overlays in Git track environment differences
- ✅ Repeatable: kubectl apply -k creates environment consistently
Ephemeral Preview Environments
The Preview Environment Pattern
Create temporary environment per pull request for testing:
- Developer creates PR with new feature
- CI/CD detects PR creation
- Automation clones environment with PR's code
- Unique URL generated (pr-456.preview.example.com)
- QA/stakeholders test feature in isolation
- Environment destroyed when PR merged/closed
Implementation with ArgoCD ApplicationSet:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview-environments
spec:
  generators:
    - pullRequest:
        github:
          owner: myorg
          repo: myapp
          labels:
            - preview                  # Only PRs with "preview" label
  template:
    metadata:
      name: 'pr-{{number}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/myorg/myapp
        targetRevision: '{{head_sha}}'
        path: k8s/overlays/preview
      destination:
        server: https://kubernetes.default.svc
        namespace: 'preview-{{number}}'
      syncPolicy:
        automated:
          prune: true                  # Auto-delete when PR closed
        syncOptions:
          - CreateNamespace=true       # Create the per-PR namespace automatically
# ArgoCD automatically:
# - Creates namespace per PR
# - Deploys PR's code
# - Deletes namespace when PR merged/closed
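The k8s/overlays/preview path referenced above needs a preview-sized overlay checked into the repo; a minimal sketch matching the repository layout shown earlier (file contents are an assumption):
# k8s/overlays/preview/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: myapp
    count: 1        # preview environments run minimal replicas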
Managing Secrets During Cloning
Challenge: Secrets Cannot Be Copied Directly
Production secrets (database passwords, API keys) should NEVER be used in staging/dev.
Solution 1: Separate Secrets Per Environment
# Production (real customer database)
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: production
data:
  password: cHJvZHVjdGlvblBhc3N3b3Jk   # base64: productionPassword
---
# Staging (test database)
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: staging
data:
  password: c3RhZ2luZ1Bhc3N3b3Jk       # base64: stagingPassword
# Same secret name, different values
# Applications work in both environments without code changes
Solution 2: External Secrets Operator
# Store secrets externally (AWS Secrets Manager, Vault)
# and reference different paths per environment
# Production ExternalSecret (secret store name is illustrative)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  secretStoreRef:
    name: aws-secrets-manager      # a ClusterSecretStore you have configured
    kind: ClusterSecretStore
  target:
    name: db-credentials           # Kubernetes Secret to create
  data:
    - secretKey: password
      remoteRef:
        key: production/database/password   # Prod secret
# Staging ExternalSecret: identical except namespace: staging and
# remoteRef key: staging/database/password
# Cloning creates an ExternalSecret with the correct path for the target environment
Persistent Data Handling in Cloned Environments
Option 1: Fresh Volumes (Most Common)
Cloned environment gets new empty volumes:
- Databases start empty or with seed data
- Faster cloning (no data copy)
- Lower cost (no duplicate storage)
- Safe (no production data exposure in staging)
Option 2: Snapshot and Restore (For Testing with Real Data)
# Create a snapshot of the production volume (VolumeSnapshot is a CRD,
# applied as a manifest; there is no "kubectl create volumesnapshot" command)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: prod-db-snapshot
  namespace: production
spec:
  volumeSnapshotClassName: csi-snapclass    # name depends on your CSI driver
  source:
    persistentVolumeClaimName: postgres-data
---
# Create a PVC from the snapshot; note a PVC can only reference a VolumeSnapshot
# in its own namespace, so restore the snapshot into staging (or use a
# pre-provisioned VolumeSnapshotContent) before binding it here
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: staging
spec:
  dataSource:
    name: prod-db-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
# Staging database starts with production data snapshot
# Remember to scrub sensitive data for compliance!
Option 3: Logical Data Cloning
For databases, use database-specific tools:
# PostgreSQL: pg_dump from prod, restore to staging
kubectl exec -n production postgres-0 -- \
  pg_dump -U postgres mydb > prod-backup.sql
# Sanitize sensitive data (the pattern below is illustrative)
sed 's/@customer-domain\.com/@example.com/g' prod-backup.sql > sanitized.sql
# Restore to staging (-i is required to stream the file over stdin)
kubectl exec -i -n staging postgres-0 -- \
  psql -U postgres mydb < sanitized.sql
Cost Optimization for Cloned Environments
Challenge: Cloning Multiplies Costs
If production costs $10,000/month, having 10 clones (staging, 5 developer environments, 3 preview environments) could cost $100,000/month—unsustainable.
Solution: Right-Size Clones
Resource Reduction Strategy:
- Production: 10 replicas, 2 CPU, 4Gi RAM per pod = 20 CPU, 40Gi total
- Staging: 2 replicas, 1 CPU, 2Gi RAM per pod = 2 CPU, 4Gi total (90% cost reduction)
- Dev: 1 replica, 500m CPU, 1Gi RAM = 500m CPU, 1Gi total (~97% reduction)
- Preview: 1 replica, 250m CPU, 512Mi RAM = minimal cost
Automated scaling in Kustomize:
# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: frontend
    count: 1
  - name: backend
    count: 1
patches:                        # JSON6902 patches ("patchesJson6902" is deprecated)
  - target:
      kind: Deployment
      name: frontend
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/cpu
        value: 250m
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/memory
        value: 512Mi
Schedule-Based Scaling
Shut down non-production environments nights and weekends:
# Scale down staging at 6 PM (CronJob)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down-staging
  namespace: staging
spec:
  schedule: "0 18 * * *"        # 6 PM daily
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: staging-scaler   # needs RBAC to scale deployments (see below)
          restartPolicy: OnFailure             # required for Job pods
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - /bin/sh
                - -c
                - kubectl scale deployment --all --replicas=0 -n staging
---
# Scale up at 8 AM
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-up-staging
  namespace: staging
spec:
  schedule: "0 8 * * 1-5"       # 8 AM weekdays
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: staging-scaler
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - /bin/sh
                - -c
                - kubectl scale deployment --all --replicas=2 -n staging
# Savings: 16 hours/day × 5 days + 48 hours weekend = 128/168 hours off = 76% cost reduction
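These CronJobs only work if their ServiceAccount is allowed to scale Deployments; a minimal RBAC sketch (the staging-scaler name matches the assumption in the manifests above):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: staging-scaler
  namespace: staging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-scaler
  namespace: staging
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "deployments/scale"]
    verbs: ["get", "list", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: staging-scaler-binding
  namespace: staging
subjects:
  - kind: ServiceAccount
    name: staging-scaler
    namespace: staging
roleRef:
  kind: Role
  name: deployment-scaler
  apiGroup: rbac.authorization.k8s.io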
How Atmosly's One-Click Environment Cloning Works
The Atmosly Environment Cloning Feature
While manual cloning requires hours of kubectl commands, YAML editing, and troubleshooting, Atmosly provides one-click environment cloning through its web interface that automates the entire process.
What Atmosly Environment Cloning Does
1. Comprehensive Environment Replication:
- Clones all services (applications and data sources like databases) from source environment
- Copies deployment configurations including container images, resource requests/limits, replica counts
- Replicates service networking setup including ingress rules and load balancer configurations
- Maintains service dependencies and ordering (databases deploy before applications using them)
2. Intelligent Configuration Handling:
- Copies environment variables with smart handling: non-sensitive values copied directly, sensitive values (marked as secrets) either masked or prompt for new values specific to target environment
- Transforms ingress hostnames automatically: production.example.com → staging.example.com → pr-123.preview.example.com
- Adjusts resource allocations based on target environment type (staging gets fewer replicas than production)
- Updates namespace references throughout all configurations
3. CI/CD Pipeline Replication:
- If source environment has configured CI/CD pipelines, Atmosly replicates pipeline configuration for cloned environment
- Adjusts deployment targets (cloned environment points to staging namespace instead of production)
- Maintains build and test configurations
4. Resource Optimization for Clones:
- Automatically reduces resources for non-production clones (production 10 replicas → staging 2 replicas)
- Suggests appropriate resource limits based on environment type
- Estimates monthly cost of cloned environment before creation
5. Rapid Deployment:
- Complete environment cloning typically completes in 5-10 minutes
- All services deploy in correct dependency order
- Health checks verify successful deployment
- Provides clone status and any issues encountered
Using Atmosly for Environment Cloning
Workflow in Atmosly Platform:
- Navigate to source environment (e.g., "production")
- Click "Clone Environment" button
- Select target cluster (same cluster or different)
- Choose environment name ("staging-v2", "qa-environment", "pr-456-preview")
- Review cloning payload showing what will be cloned
- Adjust resource allocations if needed (reduce replicas, CPU, memory for cost savings)
- Handle secrets (use different secrets appropriate for target environment)
- Click "Clone" to initiate
- Monitor cloning progress (services deploying, health checks, networking setup)
- Access cloned environment when ready
Atmosly vs Manual Cloning:
| Aspect | Manual Cloning | Atmosly One-Click |
|---|---|---|
| Time Required | 1-3 hours (complex environments) | 5-10 minutes (automated) |
| Technical Knowledge | Deep kubectl, YAML expertise required | No technical knowledge needed (web UI) |
| Error Risk | High (easy to miss resources, misconfigure) | Low (automated validation) |
| Secret Handling | Manual (risk of copying prod secrets) | Automated masking, prompts for values |
| Resource Optimization | Manual calculation and adjustment | Auto-suggests appropriate sizing |
| Repeatability | Each clone requires manual work | Repeatable (same process every time) |
Common Use Cases with Atmosly Cloning
Use Case 1: Create Staging from Production
Monthly or before major releases, clone production to refresh the staging environment and keep the two in sync:
- Clone production environment
- Atmosly automatically reduces replicas (10 → 2), resources (2 CPU → 500m)
- Updates ingress (production.example.com → staging.example.com)
- Prompts for staging-specific secrets (staging database password)
- Deploys to staging namespace or separate staging cluster
- Result: Staging now matches production architecture with appropriate resource sizing
Use Case 2: Developer Personal Environments
Each developer gets personal environment for testing:
- Clone production environment
- Name: "dev-alice", "dev-bob"
- Minimal resources (1 replica, 250m CPU)
- Developer can test changes without affecting team
- Delete when no longer needed
Use Case 3: Feature Preview for Stakeholders
Non-technical stakeholders (PM, designers) review features:
- Clone environment per feature branch
- Deploy feature code to cloned environment
- Share URL: feature-new-ui.preview.example.com
- Stakeholders test and provide feedback
- No need for local development setup
Best Practices for Environment Cloning
1. Automate Cloning (Don't Do Manually)
Use Kustomize overlays, Helm values, or platforms like Atmosly for repeatable cloning.
2. Never Clone Production Secrets
Always use environment-specific secrets. Putting production secrets in staging/dev is a security violation.
3. Right-Size Cloned Environments
Staging doesn't need 10 replicas just because production has 10. Running 1-2 replicas saves 80-90% of the cost.
4. Implement Auto-Cleanup
Preview environments should auto-delete after PR merged. Set TTL (time-to-live) for dev environments.
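A hedged sketch of a cleanup pass, assuming preview namespaces follow a preview- naming convention (adjust the prefix and age threshold to your setup):
# Delete preview namespaces older than 7 days (604800 seconds)
kubectl get namespaces -o json | jq -r '
  .items[]
  | select(.metadata.name | startswith("preview-"))
  | select((now - (.metadata.creationTimestamp | fromdate)) > 604800)
  | .metadata.name' | xargs -r kubectl delete namespace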
5. Monitor Clone Costs
Track spending on non-production environments; surprises like "we have 15 forgotten preview environments costing $2,000/month" are common.
Conclusion: Environment Cloning for Agile Development
Kubernetes environment cloning enables rapid iteration, safe testing, and better collaboration. Whether using manual kubectl and kustomize, automated tools like ArgoCD ApplicationSets, or platforms like Atmosly's one-click cloning, the key is making environment creation fast, repeatable, and cost-effective.
Key Takeaways:
- Environment cloning replicates entire stack (workloads, config, networking, storage)
- Manual cloning with kubectl: Time-consuming but possible (use kustomize for efficiency)
- Automated cloning with ArgoCD/Flux: Scales to many environments
- Always use separate secrets per environment (never clone production secrets)
- Right-size clones (staging 2 replicas vs prod 10 saves 80% cost)
- Implement auto-cleanup for ephemeral environments (prevent cost accumulation)
- Atmosly one-click cloning: Automates entire process in 5-10 minutes via web UI
Ready to clone Kubernetes environments effortlessly? Start with Atmosly's one-click environment cloning to create staging, development, or preview environments in minutes with automated configuration handling, secret management, and cost optimization.