Kubernetes Network Policies are one of those features that everyone knows they should use but few actually understand. The YAML looks intimidating, the behavior is non-intuitive, and the mental model takes time to develop.
I’ve spent hours debugging policies that “should work” but didn’t. Let me save you that pain with a visual approach to understanding Network Policies.
The Default: Everything Talks to Everything
By default, Kubernetes allows all pod-to-pod communication. Any pod can reach any other pod across any namespace. This is convenient for getting started but terrible for security.
```mermaid
flowchart TD
    subgraph cluster["Cluster"]
        Pod1["Pod"] <--> Pod2["Pod"]
        Pod2 <--> Pod3["Pod"]
        Pod3 <--> Pod4["Pod"]
        Pod1 <--> Pod3
        Pod1 <--> Pod4
        Pod2 <--> Pod4
    end
    note["Everyone can talk"]
```
If an attacker compromises one pod, they can reach every other pod in the cluster. This violates zero trust principles — we should assume breach and limit blast radius.
Network Policies: The Firewall for Pods
A NetworkPolicy is a specification of how pods are allowed to communicate. Think of it as a firewall rule that applies to groups of pods.
Key concepts:
- Ingress: Incoming traffic to a pod
- Egress: Outgoing traffic from a pod
- Selectors: How to identify pods (by labels)
- Policy Types: Whether to control ingress, egress, or both
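These concepts map directly onto the manifest structure. Here is an annotated skeleton (names and labels are placeholders, not a working policy):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy        # placeholder name
spec:
  podSelector:                # Selectors: which pods this policy applies to
    matchLabels:
      app: my-app             # placeholder label
  policyTypes:                # Policy Types: which directions this policy controls
    - Ingress
    - Egress
  ingress: []                 # Ingress: rules for incoming traffic (empty = deny all)
  egress: []                  # Egress: rules for outgoing traffic (empty = deny all)
```

Everything that follows is a variation on this shape: pick the pods, pick the directions, then list what's allowed.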
The Most Important Rule: Default Deny
The first policy you should apply to any namespace is default deny. This blocks all traffic, then you explicitly allow what’s needed.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {} # Empty = all pods
  policyTypes:
    - Ingress
    - Egress
```
```mermaid
flowchart TD
    subgraph production["production namespace"]
        Pod1["Pod"] x--x Pod2["Pod"]
        Pod2 x--x Pod3["Pod"]
        Pod3 x--x Pod4["Pod"]
    end
    note["All traffic blocked by default"]
```
Now nothing can communicate. We build up from zero.
Allowing Ingress: Who Can Talk TO This Pod?
Let’s say we have a frontend that needs to receive traffic:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Both selectors in ONE list entry: the source pod must match
        # BOTH the namespace AND the pod labels
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
          podSelector:
            matchLabels:
              app: ingress-controller
      ports:
        - protocol: TCP
          port: 8080
```
```mermaid
flowchart LR
    subgraph ingress-nginx["ingress-nginx ns"]
        IC["ingress controller"]
    end
    subgraph production["production ns"]
        FE["frontend<br/>app:frontend"]
        Other["other pods"]
    end
    IC -->|"✓"| FE
    Other x--x|"✗"| FE
```
The frontend can only receive traffic from the ingress controller on port 8080.
Allowing Egress: What Can This Pod Talk TO?
The frontend needs to talk to the backend API:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 3000
    - to: # Allow DNS
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
```
```mermaid
flowchart TD
    subgraph production["production namespace"]
        FE["frontend"] -->|"✓"| BE["backend"]
        FE x--x|"✗"| DB["database"]
        FE -->|"✓ DNS only"| DNS["kube-dns"]
    end
```
The frontend can only reach the backend (port 3000) and DNS. It cannot directly access the database.
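One subtlety in the DNS rule above is worth calling out: when namespaceSelector and podSelector appear under the *same* `to` entry, a peer must match both (AND). Written as separate entries, they are OR'd, which is a far broader rule. The two fragments below illustrate the difference; only the indentation and the extra dash change:

```yaml
# AND: kube-dns pods in any namespace (both selectors in ONE list entry)
egress:
  - to:
      - namespaceSelector: {}
        podSelector:
          matchLabels:
            k8s-app: kube-dns

# OR: ANY pod in ANY namespace, plus any local pod labeled k8s-app: kube-dns
# (the extra dash makes podSelector a SEPARATE list entry)
egress:
  - to:
      - namespaceSelector: {}
      - podSelector:
          matchLabels:
            k8s-app: kube-dns
```

The OR version effectively allows all egress, silently defeating the policy. Lint for this dash carefully in code review.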
The Complete Three-Tier Example
Here’s a realistic setup: ingress → frontend → backend → database.
```yaml
# Frontend policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress
      ports:
        - port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - port: 8080
    - to: # DNS
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
---
# Backend policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: database
      ports:
        - port: 5432
    - to: # DNS
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
---
# Database policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - port: 5432
  egress: [] # Database doesn't need to initiate connections
```
```mermaid
flowchart LR
    Internet --> Ingress
    Ingress --> Frontend
    Frontend --> Backend
    Backend --> Database
    Frontend -.->|"DNS only"| DNS["kube-dns"]
    Backend -.->|"DNS only"| DNS
```
Each tier can only talk to its adjacent tier. The frontend cannot reach the database. The database cannot reach the internet.
Common Gotchas
Gotcha 1: Don’t Forget DNS
Your pods need DNS to resolve service names. Always allow egress to kube-dns:
```yaml
egress:
  - to:
      - namespaceSelector: {}
        podSelector:
          matchLabels:
            k8s-app: kube-dns
    ports:
      - port: 53
        protocol: UDP
```
Without this, your pods can’t resolve `backend.production.svc.cluster.local`. (DNS also falls back to TCP on port 53 for large responses; if resolution is flaky, allow both protocols.)
Gotcha 2: Policies Are Additive
Multiple policies on the same pod are combined with OR logic. If policy A allows traffic from X, and policy B allows traffic from Y, the pod receives traffic from both X and Y.
You cannot create a “deny” policy that overrides an “allow” policy. The only way to deny is to not allow.
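For example, these two policies (labels and names are illustrative) both select pods labeled `app: api`:

```yaml
# Policy A: allow traffic from the frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
---
# Policy B: allow traffic from a metrics scraper
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-metrics
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: metrics
```

With both applied, the api pods accept traffic from frontend pods *and* metrics pods; neither policy narrows the other.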
Gotcha 3: Empty Selector Means “All”
```yaml
podSelector: {}       # All pods in this namespace
namespaceSelector: {} # All namespaces
```
This catches many people. An empty selector doesn’t mean “nothing” — it means “everything.”
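A related trap: an absent (or empty) rules list and a single empty rule look similar but mean opposite things. The spec fragments below (metadata omitted for brevity) show both:

```yaml
# Deny all ingress: the policy controls Ingress but allows nothing,
# because no ingress: rules are given (ingress: [] behaves the same)
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow all ingress: one EMPTY rule matches every source on every port
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - {}
```

The rules list follows "nothing listed = nothing allowed," while an empty rule follows "no constraints = everything allowed." Keep both readings in mind when scanning a policy.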
Gotcha 4: Policies Only Affect Selected Pods
A NetworkPolicy only affects pods that match its podSelector. Pods without any policies selecting them have no restrictions.
This is why default-deny is important: it ensures every pod has at least one policy.
Testing Network Policies
Always test your policies. Here’s a quick way:
```bash
# Create a throwaway test pod and drop into its shell
kubectl run test-pod --image=busybox --rm -it -- sh

# Inside the pod: try to reach an allowed service (should succeed)
wget -qO- --timeout=2 http://backend.production:8080/health

# Inside the pod: try to reach the database directly (should fail)
nc -zv database.production 5432
```
Or use a tool like netshoot:
```bash
kubectl run netshoot --image=nicolaka/netshoot -it --rm -- bash
```
CNI Requirements
Not all CNIs support Network Policies. You need a CNI that implements the NetworkPolicy API:
- Calico: Full support, plus extends with more features
- Cilium: Full support, plus L7 policies
- Weave: Full support
- Flannel: No support (needs Calico on top)
Check your CNI before relying on Network Policies.
Beyond Basic Policies
For more advanced use cases, consider:
- Cilium Network Policies: L7 (HTTP) filtering, DNS-based policies
- Calico Network Policies: Host protection, global policies
- Service Mesh: Istio/Linkerd for mutual TLS and L7 policies
The Kubernetes NetworkPolicy API is intentionally simple. More complex requirements often need CNI-specific extensions.
My Recommendation
Start simple:
- Apply default-deny to every namespace
- Add policies that allow only necessary communication
- Test thoroughly before going to production
- Use tools like Cilium’s policy editor to visualize
Network Policies are one of the best security improvements you can make in Kubernetes. They’re free (no extra infrastructure), they’re declarative (infrastructure as code), and they dramatically limit blast radius.
The mental model takes practice, but once it clicks, you’ll wonder how you ever ran a cluster without them.
Network Policies implement the principle of least privilege at the network level. Every connection that doesn’t need to exist shouldn’t exist.
