Introduction to Kubernetes Network Policies: Implementing Zero-Trust Networking
By default, Kubernetes uses a completely flat network model: any pod can communicate with any other pod across all namespaces, with no restrictions. That is convenient for development and learning environments, but it is an extremely dangerous configuration for production clusters handling sensitive data or multi-tenant workloads. If a single container is compromised, whether through an application vulnerability, SQL injection, a remote code execution exploit, or a supply chain attack, the attacker immediately gains network access to potentially every other service in the cluster: databases storing customer personally identifiable information, internal APIs exposing business logic and data, payment services handling credit cards, authentication services managing user credentials, and backend systems that should never be reachable from frontend application tiers.
Without Network Policies providing network segmentation and micro-segmentation, your Kubernetes cluster is effectively one large, flat security zone. The compromise of any single pod, perhaps a vulnerable WordPress instance or an unpatched Node.js application with a known CVE, can lead to cluster-wide compromise as attackers move laterally from the initial foothold to progressively more sensitive systems, exfiltrating data, deploying cryptominers, or establishing persistent backdoors for long-term access.
Kubernetes Network Policies are the firewall rules for your cluster. They provide Layer 3/4 segmentation that controls which pods can communicate with which other pods (via label selectors), which namespaces can reach which other namespaces (via namespace selectors), and which external IP addresses or CIDR blocks pods may connect to (via egress filtering). Properly implemented, Network Policies deliver critical defense-in-depth:
- Zero-trust networking: nothing communicates unless explicitly allowed.
- Reduced blast radius: a compromised pod stays contained within its intended network segment.
- Blocked lateral movement: attackers cannot pivot to other services after an initial compromise.
- Compliance: network segmentation is required by PCI-DSS, HIPAA, and other regulatory frameworks.
- Multi-tenant isolation: different teams or customers are separated from each other at the network layer.
- Insider-threat defense: malicious insiders or compromised service accounts with valid credentials still lack network access to everything.
This comprehensive technical guide takes you from foundational concepts to advanced production patterns, covering:
- How Network Policies actually work under the hood, including CNI plugin integration
- Which CNI plugins support Network Policies and which don't (critically, Flannel doesn't!)
- Implementing default deny-all policies as the zero-trust security foundation
- Creating explicit allow policies for legitimate traffic patterns while blocking everything else
- Ingress versus egress policy types and when to use each
- Pod selectors, namespace selectors for cross-namespace control, and IP block selectors for external traffic
- Common patterns that solve real security requirements (frontend-to-backend, database isolation, egress restrictions)
- Advanced techniques such as combining selectors with AND/OR logic
- Testing Network Policies before enforcement so you don't break applications
- Troubleshooting connectivity issues caused by overly restrictive policies
- How Atmosly analyzes actual pod-to-pod traffic patterns in your cluster over time and automatically generates Network Policy YAML that allows only observed legitimate traffic, a data-driven approach that avoids both over-permissive policies (which provide no real security) and over-restrictive policies (which break application functionality)
By mastering Network Policies and implementing the patterns in this guide, you'll establish network-layer security that significantly hardens your Kubernetes infrastructure against attacks, reduces the blast radius of incidents, enables safe multi-tenancy, and meets compliance requirements while maintaining application functionality.
How Kubernetes Network Policies Work: Architecture and CNI Integration
The Fundamental Network Policy Model
Kubernetes Network Policies specify allowed network traffic, but Kubernetes itself doesn't enforce them; that's the job of your Container Network Interface (CNI) plugin. This separation of policy specification from enforcement is important to understand because it means Network Policies are declarative intent ("I want to allow traffic from frontend to backend on port 8080") but whether they actually work depends entirely on whether your CNI plugin supports and correctly implements Network Policy enforcement.
Key Concepts:
- Ingress: Incoming traffic TO the selected pods. Controls what can connect to your application.
- Egress: Outgoing traffic FROM the selected pods. Controls what your application can connect to.
- Pod Selector: Determines which pods the policy applies to using label matching (app=backend, tier=database, etc.)
- Policy Types: Ingress only, Egress only, or both. Specifying policyTypes: [Ingress] without any ingress rules = deny all ingress.
- Namespace Scope: Network Policies are namespaced resources that affect only pods in the same namespace (unless using a namespaceSelector for cross-namespace rules).
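To tie these concepts together, here is a minimal sketch (the namespace and labels are hypothetical) showing all of them in a single manifest:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: concepts-example
  namespace: demo            # Namespace scope: only affects pods in "demo"
spec:
  podSelector:               # Pod selector: which pods this policy governs
    matchLabels:
      app: api
  policyTypes:               # Policy types: which directions are restricted
  - Ingress
  - Egress
  ingress:                   # Ingress: who may connect TO the selected pods
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 8080
  egress:                    # Egress: where the selected pods may connect
  - to:
    - podSelector:
        matchLabels:
          app: cache
    ports:
    - protocol: TCP
      port: 6379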
CNI Plugin Support: Critical Prerequisite
Network Policies ONLY work if your CNI plugin supports them. This is a hard requirement: without CNI support, NetworkPolicy resources are accepted by the Kubernetes API but never enforced, creating a dangerous false sense of security.
CNI Plugins WITH Network Policy Support:
- ✅ Calico: Most popular choice specifically for Network Policy support. High-performance eBPF dataplane option. Supports advanced features like global network policies, egress gateways, and encryption. Used by the majority of production clusters requiring Network Policies.
- ✅ Cilium: eBPF-based CNI with advanced observability via Hubble. Supports Layer 7 policies (HTTP method/path filtering), DNS-based policies, and service mesh integration. Best for teams wanting network visibility alongside enforcement.
- ✅ Weave Net: Simple to deploy with automatic encryption. Good Network Policy support, but less performant than Calico/Cilium at very large scale.
- ✅ Antrea: VMware-backed CNI with good Windows support. Solid Network Policy implementation.
- ✅ kube-router: Lightweight CNI using standard Linux networking (iptables/ipvs). Basic but functional Network Policy support.
CNI Plugins WITHOUT Network Policy Support:
- ❌ Flannel: Very popular for its simplicity, but does NOT support Network Policies. If you need Network Policies, don't use Flannel alone (it can be combined with Calico for policy enforcement, a setup commonly known as Canal).
- ❌ Host-local: Simple IPAM, no policy support
- ❌ Bridge: Basic bridge networking, no policies
Verify Network Policy support in your cluster:
# The NetworkPolicy API is always served by the API server, regardless of CNI:
kubectl api-resources | grep networkpolicies
# networkpolicies   netpol   networking.k8s.io/v1   true   NetworkPolicy
# Its presence does NOT prove enforcement, so check which CNI is actually running:
kubectl get pods -n kube-system | grep -E "calico|cilium|weave|antrea|kube-router|flannel"
# If only Flannel (or another non-enforcing CNI) is present, policies are silently ignored
# Solution: install Calico or Cilium, then confirm enforcement with a quick test (see below)
AWS EKS, Google GKE, Azure AKS support: All major managed Kubernetes services can enforce Network Policies when configured with an appropriate dataplane (EKS with the VPC CNI's network policy feature or Calico, GKE with Dataplane V2 or its Calico-based network policy add-on enabled, AKS with Azure CNI plus Azure Network Policy Manager, Calico, or Cilium).
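Because the API server accepts NetworkPolicy objects even when nothing enforces them, the most reliable verification is an end-to-end smoke test. A sketch using a throwaway namespace and hypothetical pod names:
# Create a scratch namespace with a server and confirm it is reachable
kubectl create namespace np-smoke-test
kubectl -n np-smoke-test run web --image=nginx --port=80 --expose
kubectl -n np-smoke-test wait --for=condition=Ready pod/web --timeout=60s
kubectl -n np-smoke-test run client --rm -it --image=busybox --restart=Never -- \
  wget -T 3 -qO- http://web   # should print the nginx welcome page

# Apply a deny-all-ingress policy and repeat the test
kubectl -n np-smoke-test apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
EOF
kubectl -n np-smoke-test run client2 --rm -it --image=busybox --restart=Never -- \
  wget -T 3 -qO- http://web   # should now time out if your CNI enforces policies

kubectl delete namespace np-smoke-test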
Implementing Default Deny Network Policies: Security Foundation
Why Default Deny is Critical
The single most important security principle for Network Policies is default deny: start by blocking all traffic, then explicitly allow only legitimate communication patterns. This whitelist approach is vastly superior to default allow with blacklist exceptions. You can't possibly enumerate every bad traffic pattern to block, because attackers are creative and constantly evolve their techniques, but you CAN enumerate the legitimate traffic your applications need to function.
Default Deny: Deny All Ingress Traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}  # Empty selector = applies to ALL pods in namespace
  policyTypes:
  - Ingress
  # No ingress rules = deny all ingress
What this does: Blocks all incoming traffic to all pods in the production namespace. Pods cannot receive connections from other pods, from other namespaces, or from external sources. Services stop working until you add explicit allow policies.
Default Deny: Deny All Egress Traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  # No egress rules = deny all egress
What this does: Blocks all outgoing traffic from all pods. Pods cannot initiate connections to other pods, databases, external APIs, or even DNS. This will break everything until you add allow rules, which is exactly the point: it forces you to explicitly define what's allowed.
⚠️ Critical Note: Applying these default deny policies will immediately break all services in the namespace. Only apply after you have allow policies ready for critical services, or apply in a test namespace first to validate your allow rules work correctly.
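The two deny-all manifests above can also be combined into a single policy covering both directions; a minimal sketch:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # All pods in the namespace
  policyTypes:
  - Ingress
  - Egress
  # No ingress or egress rules = deny all traffic in both directions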
Common Network Policy Patterns for Production
Pattern 1: Allow Frontend to Backend Communication
Use case: Three-tier application: frontend web tier calls backend API tier, which calls database tier. Frontend should never directly access the database (violates tier separation).
# Allow frontend pods to connect to backend pods on port 8080 only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend  # This policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend  # Allow traffic from frontend pods
    ports:
    - protocol: TCP
      port: 8080  # Only on this port
What's allowed: Frontend pods (tier=frontend label) can connect to backend pods (tier=backend label) on TCP port 8080 only.
What's blocked: Everything else to the backend pods. Pods without the tier=frontend label (including tier=database) cannot connect to the backend, other namespaces cannot connect (no namespaceSelector), and external traffic cannot reach the backend directly (it must come through the ingress controller). Note that the allow is one-directional ingress into the backend; it grants nothing in the reverse direction. Also remember that a policy only restricts the pods it selects, so the database itself remains unprotected until it gets its own policy (see Pattern 2).
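One way to sanity-check this pattern from inside the cluster (deployment, service, and port names are hypothetical):
# From a frontend pod: should succeed
kubectl -n production exec deploy/frontend -- wget -T 5 -qO- http://backend:8080/health
# From a throwaway pod without the tier=frontend label: should time out
kubectl -n production run probe --rm -it --image=busybox --restart=Never -- \
  wget -T 5 -qO- http://backend:8080/health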
Pattern 2: Database Isolation (Backend Access Only)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-access-control
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
      tier: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend      # Only backend tier can access database
          access-db: "true"  # Additional label for fine-grained control
    ports:
    - protocol: TCP
      port: 5432  # PostgreSQL port only
Security benefit: Even if the frontend is compromised, it cannot connect to the database and steal customer data. Defense-in-depth.
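Because access also requires the access-db=true label, only explicitly marked backend workloads can reach PostgreSQL. One way to grant it, assuming a hypothetical orders-api deployment:
# Add the label to the pod template so only this workload matches the policy (triggers a rollout)
kubectl -n production patch deployment orders-api --type=merge -p \
  '{"spec":{"template":{"metadata":{"labels":{"access-db":"true"}}}}}'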
Pattern 3: Allow Cross-Namespace Monitoring
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-scraping
  namespace: production
spec:
  podSelector:
    matchLabels:
      prometheus.io/scrape: "true"  # Pods exposing metrics
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring  # Prometheus pods in monitoring namespace
      podSelector:
        matchLabels:
          app: prometheus
    ports:
    - protocol: TCP
      port: 9090  # Metrics endpoint
Explanation: Allows Prometheus pods from the monitoring namespace to scrape metrics from production pods, while blocking all other cross-namespace traffic.
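For the namespaceSelector above to match, the monitoring namespace must actually carry the referenced label (see also the namespace-labeling best practice later in this guide):
kubectl label namespace monitoring name=monitoring
kubectl get namespace monitoring --show-labels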
Pattern 4: Always Allow DNS (Critical!)
The #1 Network Policy Mistake: Blocking DNS and wondering why nothing works.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-access
  namespace: production
spec:
  podSelector: {}  # ALL pods in namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system  # CoreDNS/kube-dns runs in kube-system
    ports:
    - protocol: UDP
      port: 53  # DNS port
    - protocol: TCP
      port: 53  # DNS over TCP (rare but can happen)
Why this matters: Service discovery, external API calls, and database connections all require DNS resolution. Without this policy, DNS queries fail and applications cannot resolve service names to IPs.
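The rule can be tightened so port 53 is only open toward the DNS pods themselves rather than everything in kube-system. A sketch assuming CoreDNS carries the common k8s-app=kube-dns label (verify in your distribution) and a cluster recent enough to set the kubernetes.io/metadata.name namespace label automatically:
egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system  # set automatically by recent Kubernetes versions
    podSelector:
      matchLabels:
        k8s-app: kube-dns  # common label on CoreDNS/kube-dns pods; confirm in your cluster
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53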
Pattern 5: Allow Traffic from Ingress Controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend  # Frontend pods receive external traffic
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx  # Ingress controller namespace
      podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
Pattern 6: Egress to External APIs (IP-Based Filtering)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-payment-api
spec:
  podSelector:
    matchLabels:
      app: payment-service
  policyTypes:
  - Egress
  egress:
  # Allow internal cluster traffic
  - to:
    - podSelector: {}
  # Allow specific external payment gateway
  - to:
    - ipBlock:
        cidr: 52.32.123.45/32  # Stripe API IP (example)
    ports:
    - protocol: TCP
      port: 443  # HTTPS only
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
Security benefit: Payment service can only connect to internal cluster pods, DNS, and the specific payment gateway IP. If compromised, cannot exfiltrate data to attacker-controlled servers.
Understanding Selector Logic: AND vs OR Conditions
The Confusing Part of Network Policies
Network Policy selector logic is subtle and frequently misunderstood, leading to policies that don't work as intended. The key is understanding when selectors combine with AND logic versus OR logic.
AND Logic (Both conditions must match):
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: production
    podSelector:
      matchLabels:
        app: frontend
# This means: pods from (namespace with env=production) AND (pod with app=frontend)
# Both the namespace AND the pod selector must match the same source
OR Logic (Either condition matches):
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        env: production
  - podSelector:
      matchLabels:
        app: frontend
# This means: (any pod in a namespace with env=production) OR (pods with app=frontend in this namespace)
# Two separate list items = OR relationship
Common mistake example:
# WRONG: intending "pods labeled app=x in namespace a", but the extra dash
# makes podSelector its own list item, so this is an OR:
# (any pod in namespace a) OR (pods labeled app=x in this namespace)
- from:
  - namespaceSelector: {matchLabels: {name: a}}
  - podSelector: {matchLabels: {app: x}}
# CORRECT: one list item combining both selectors = AND
- from:
  - namespaceSelector: {matchLabels: {name: a}}
    podSelector: {matchLabels: {app: x}}
# To allow (namespace a AND app=x) OR (namespace b AND app=y), use two combined items:
- from:
  - namespaceSelector: {matchLabels: {name: a}}
    podSelector: {matchLabels: {app: x}}
  - namespaceSelector: {matchLabels: {name: b}}
    podSelector: {matchLabels: {app: y}}
Advanced Network Policy Patterns
Pattern 7: Allow Specific External CIDR Ranges
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/8       # Allow RFC 1918 private range...
      except:
      - 10.10.5.0/24         # ...except this subnet, carved out of the same ipBlock
  - ipBlock:
      cidr: 172.16.0.0/12
  - ipBlock:
      cidr: 192.168.0.0/16
# Note: except only removes traffic from its own ipBlock. If a separate rule
# allowed 10.10.5.0/24, that allow would still apply, because policies are additive.
Use case: Allow internal corporate network access but block specific DMZ subnet.
Pattern 8: Multi-Port Access
ingress:
- from:
  - podSelector:
      matchLabels:
        app: backend
  ports:
  - protocol: TCP
    port: 5432  # PostgreSQL
  - protocol: TCP
    port: 6379  # Redis
Allows: Backend pods can connect on port 5432 or port 6379. Entries in the ports list are alternatives (OR), not a combined requirement.
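For a contiguous range of ports, the endPort field (stable in recent Kubernetes releases) avoids listing every port individually; a sketch:
ingress:
- from:
  - podSelector:
      matchLabels:
        app: backend
  ports:
  - protocol: TCP
    port: 30000
    endPort: 30100  # Allows the whole TCP 30000-30100 range as a single entry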
Pattern 9: Named Ports (Flexible and Maintainable)
# Container spec (in the backend Deployment or Pod) defining named ports
containers:
- name: api
  ports:
  - name: http
    containerPort: 8080
  - name: metrics
    containerPort: 9090
---
# Network Policy rule referencing the named container port
ingress:
- from:
  - podSelector:
      matchLabels:
        app: frontend
  ports:
  - protocol: TCP
    port: http  # References the container port named "http", not a hardcoded number
Advantage: Named ports in a Network Policy resolve against the container port names in the pod spec, so if you renumber the containerPort the policy keeps working as long as the name stays the same. Note that it is the pod's port names that matter here, not the Service's.
Testing Network Policies Before Production Enforcement
Pre-Deployment Validation Strategy
Step 1: Deploy in Test Namespace First
# Create test namespace with same labels as production
kubectl create namespace production-test
kubectl label namespace production-test name=production-test
# Deploy your application
kubectl apply -f app-deployment.yaml -n production-test
# Apply Network Policies
kubectl apply -f network-policies.yaml -n production-test
# Test functionality thoroughly
Step 2: Test Allowed Traffic
# Should succeed
kubectl exec -n production-test frontend-pod -- \
  wget -O- http://backend-service:8080/health
# Expected: 200 OK
# Should fail (blocked by policy)
kubectl exec -n production-test frontend-pod -- \
  wget -O- http://database-service:5432
# Expected: timeout or connection refused
Step 3: Monitor for Dropped Connections (Cilium example)
# If using Cilium as the CNI (run inside a cilium-agent pod in kube-system)
cilium monitor --type policy-verdict
# Prints a verdict line for each flow, showing whether it was
# allowed (forwarded) or denied (dropped) by policy
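If your CNI has no verdict-level monitor, a throwaway client pod is a CNI-agnostic way to spot-check individual paths (service names are the hypothetical ones from the steps above):
# Note: this probe pod carries no tier labels, so it exercises the "unlabeled client" case
kubectl -n production-test run np-probe --rm -it --image=busybox --restart=Never -- sh -c '
  wget -T 5 -qO- http://backend-service:8080/health && echo "backend reachable";
  wget -T 5 -qO- http://database-service:5432 || echo "database blocked (expected)"'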
Common Testing Mistakes to Avoid
Mistake 1: Wrong Label Selectors
# Policy targets app=fronted (typo!)
podSelector:
  matchLabels:
    app: fronted  # Typo: should be "frontend"
# Verify labels on actual pods
kubectl get pods --show-labels -n production | grep frontend
# Check that labels match exactly
Mistake 2: Forgetting policyTypes
# Dangerous: no policyTypes specified
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
# What happens:
# Ingress: explicitly allowed from frontend (correct)
# Egress: NOT controlled (defaults to allow because policyTypes omits Egress)
# Pods can still exfiltrate data via egress!
# Correct: always specify policyTypes
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress  # Now egress is also controlled (denied unless explicitly allowed)
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
Troubleshooting Network Policy Issues
Issue: Traffic Blocked Unexpectedly
Debugging steps:
# 1. Verify Network Policies exist and target correct pods
kubectl get networkpolicies -n production
kubectl describe networkpolicy allow-frontend-to-backend
# 2. Check pod labels match policy selectors
kubectl get pods --show-labels | grep backend
# Verify app=backend label exists
# 3. Check namespace labels for cross-namespace policies
kubectl get namespace production --show-labels
# Verify name=production label if policies use namespaceSelector
# 4. Test connectivity
kubectl exec frontend-pod -- wget -T 5 -O- http://backend:8080
# -T 5 = 5 second timeout (don't wait forever)
# 5. Check CNI plugin logs (example for Calico)
kubectl logs -n kube-system -l k8s-app=calico-node
# Look for policy-related drop/deny messages (Calico only logs per-packet
# verdicts when logging is enabled for the relevant rules)
Issue: Policies Not Enforcing (Silent Failure)
Most common cause: CNI doesn't support Network Policies
# The NetworkPolicy API exists even when nothing enforces it, so check the CNI itself
kubectl get pods -n kube-system | grep -E "calico|cilium|weave|flannel"
# If only Flannel: add Calico for policy enforcement
# (Calico can run in policy-only mode alongside Flannel networking, often called Canal)
Network Policy Best Practices
1. Start with Default Deny
✅ Always implement deny-all first, then add explicit allows
2. Test in Staging First
✅ Never apply untested policies to production
3. Always Allow DNS
✅ UDP:53 to kube-system namespace—required for everything
4. Document Why Each Policy Exists
✅ Add annotations explaining purpose
metadata:
  annotations:
    description: "Allows frontend to access backend API only on port 8080"
    owner: "platform-team"
    reviewed: "2025-01-15"
5. Use Namespace Labels Consistently
# Label all namespaces
kubectl label namespace production name=production env=prod
kubectl label namespace staging name=staging env=staging
kubectl label namespace monitoring name=monitoring
kubectl label namespace kube-system name=kube-system
# Makes namespaceSelector policies work reliably
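On recent Kubernetes versions, every namespace also carries an immutable kubernetes.io/metadata.name label set by the control plane, which cannot drift the way hand-applied labels can; a sketch of a selector relying on it:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring  # applied automatically by Kubernetes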
6. Combine with Service Mesh for Layer 7 Policies
Network Policies operate at Layer 3/4 (IP, port). For Layer 7 (HTTP method, path, headers), use service mesh:
- Istio: AuthorizationPolicy for HTTP-level rules (allow POST to /api/users, block DELETE from untrusted sources)
- Linkerd: Server and ServerAuthorization for HTTP policy
- Cilium: Native Layer 7 Network Policy support
7. Monitor Denied Connections
Set up logging/alerting for policy denials to detect:
- Legitimate traffic blocked by misconfigured policy (fix policy)
- Attack attempts or compromised pods (security incident)
- New application features requiring policy updates
8. Version Control Network Policies
✅ Store in Git with application code
✅ Review changes via pull requests
✅ Deploy via GitOps (ArgoCD, Flux)
Conclusion: Zero-Trust Networking for Kubernetes
Kubernetes Network Policies enable zero-trust networking: verify every connection, allow nothing by default, and explicitly permit only required traffic. This dramatically reduces the attack surface and limits the impact of breaches.
Critical Implementation Steps:
- Verify your CNI supports Network Policies (Calico, Cilium, Weave)
- Label all namespaces and pods consistently
- Implement default deny-all in production namespaces
- Create allow policies for legitimate traffic
- Always allow DNS (UDP:53 to kube-system)
- Test thoroughly in staging before production
- Use Atmosly for traffic-based policy generation
- Monitor and iterate as applications evolve
Ready to implement zero-trust networking without breaking your applications? Start your free Atmosly trial for intelligent Network Policy recommendations based on actual traffic analysis.