I write a lot about Kubernetes. I use it daily. I’m a fan.

But Kubernetes isn’t always the answer.

In fact, for many teams and projects, Kubernetes is the wrong choice. Too complex, too expensive, too much overhead for what they’re trying to achieve.

This is the post I’m writing for everyone considering Kubernetes adoption. Not to discourage you, but to help you make a conscious choice.

The Kubernetes hype

Kubernetes has won. It’s the de facto standard for container orchestration. Every cloud provider offers managed Kubernetes. Every DevOps job posting asks for Kubernetes experience.

But “everyone does it” isn’t a good reason to do something.

Kubernetes solves specific problems:

  • Orchestrating containers across multiple nodes
  • Automatically scaling based on load
  • Self-healing when containers crash
  • Declarative infrastructure configuration
  • Service discovery and load balancing

If you don’t have these problems, Kubernetes doesn’t solve anything for you. It only adds complexity.

When Kubernetes is overkill

You have one application

One monolith. One database. Maybe a Redis cache. Runs fine on one server.

Why would you need Kubernetes?

Your application → Kubernetes cluster → 3+ nodes → etcd →
control plane → ingress controller → cert-manager →
monitoring stack → ...

Versus:

Your application → Docker Compose → 1 server

The first option costs you 10x more time, money, and cognitive load. For exactly the same functionality.

Your team is small

Kubernetes requires knowledge. Not just of Kubernetes itself, but of:

  • Networking (CNI, service mesh, ingress)
  • Storage (CSI, persistent volumes)
  • Security (RBAC, network policies, pod security)
  • Observability (metrics, logs, traces)
  • GitOps (ArgoCD, Flux)

A team of 2-3 developers who also do ops? They don’t have time to learn and maintain all of this.

Your traffic is predictable

Kubernetes’ killer feature is automatic scaling. But if your traffic is predictable — consistent 100 requests per second, no spikes — you don’t need that.

Overprovision a bit and you’re done.
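
If a fixed amount of capacity covers your peak, you can simply pin it. A minimal sketch with Docker Compose (assuming a stateless service named app; recent Compose versions honor deploy.replicas outside Swarm mode):

```yaml
services:
  app:
    image: my-app:latest
    deploy:
      replicas: 3   # fixed headroom above a steady ~100 req/s, no autoscaler needed
```

Bump the number when your baseline grows. That’s the whole scaling strategy.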

Your budget is limited

Managed Kubernetes (EKS, GKE, AKS) costs money:

  • Control plane fees (~$70-150/month)
  • Minimum 3 worker nodes for HA
  • Load balancer costs
  • Persistent volume costs
  • Egress costs

For a simple application you’re quickly looking at $300-500/month. The same app on a $20 VPS? Also works.

You don’t have microservices

Kubernetes is designed for microservices. Many small, independent services that scale and fail independently of each other.

Putting a monolith in Kubernetes is like buying a Ferrari to get groceries. Technically possible, but you’re paying for capabilities you’ll never use.

The hidden costs of Kubernetes

Operational overhead

Even with managed Kubernetes:

  • Cluster upgrades (new version every 3-4 months)
  • Node pool management
  • Capacity planning
  • Incident response when pods don’t start
  • Debugging networking issues

This is work. Continuous work. Someone has to do it.

Learning curve

Kubernetes concepts you need to understand:

  • Pods, Deployments, StatefulSets, DaemonSets
  • Services, Ingress, NetworkPolicies
  • ConfigMaps, Secrets
  • PersistentVolumes, StorageClasses
  • RBAC, ServiceAccounts
  • Helm, Kustomize
  • CRDs, Operators

And I haven’t even mentioned the ecosystem tools: Prometheus, Grafana, ArgoCD, cert-manager, external-dns, …

YAML hell

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v1.0.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
# ... even more YAML

This is the minimal config for one application. Compare with Docker Compose:

services:
  my-app:
    image: my-app:v1.0.0
    ports:
      - "80:8080"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]

Debugging complexity

Container won’t start? In Docker:

docker logs my-app

In Kubernetes:

kubectl describe pod my-app-7d9f8b6c5d-x2k9p
kubectl logs my-app-7d9f8b6c5d-x2k9p
kubectl logs my-app-7d9f8b6c5d-x2k9p --previous
kubectl get events --field-selector involvedObject.name=my-app-7d9f8b6c5d-x2k9p

And then you find out it’s an ImagePullBackOff because of a typo in the image name, or a resource limit issue, or a node that’s full, or…

Alternatives to Kubernetes

Docker Compose

When: One server, multiple containers, simple setup.

services:
  app:
    image: my-app:latest
    ports:
      - "80:8080"
    depends_on:
      - db
      - redis
    environment:
      - DATABASE_URL=postgres://db:5432/app
    restart: unless-stopped

  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7
    restart: unless-stopped

volumes:
  postgres_data:

docker compose up -d and you’re done. Updates? docker compose pull && docker compose up -d.

Pros:

  • Simple to understand
  • No cluster overhead
  • Works anywhere Docker runs
  • Perfect developer experience

Cons:

  • No automatic scaling
  • No high availability (single node)
  • Limited orchestration

Docker Swarm

When: Need multiple nodes, but Kubernetes is too complex.

Docker Swarm is Kubernetes’ forgotten sibling. Simpler, fewer features, but often enough.

# Init swarm
docker swarm init

# Deploy stack
docker stack deploy -c docker-compose.yml myapp

# Scale
docker service scale myapp_web=5
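
Because Swarm reads Compose files, replica counts and rolling-update behavior live in the same YAML under a deploy key. A hedged sketch (service name and values are illustrative):

```yaml
services:
  web:
    image: my-app:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1   # replace one task at a time
        delay: 10s       # wait between replacements
      restart_policy:
        condition: on-failure
```

Deploy it with docker stack deploy as above; plain docker compose ignores most of the deploy section, so the same file still works on a laptop.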

Pros:

  • Same Docker Compose files
  • Built into Docker
  • Easy setup
  • Multi-node support

Cons:

  • Smaller ecosystem
  • Limited community (Docker focuses on Kubernetes)
  • Fewer features than Kubernetes

HashiCorp Nomad

When: You want orchestration without Kubernetes complexity, or you run more than just containers.

Nomad is a lightweight alternative that can orchestrate containers, VMs, and standalone executables.

job "my-app" {
  datacenters = ["dc1"]

  group "web" {
    count = 3

    task "app" {
      driver = "docker"

      config {
        image = "my-app:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}

Pros:

  • Simpler than Kubernetes
  • Can do more than containers (raw exec, Java, QEMU)
  • Good integration with other HashiCorp tools (Consul, Vault)
  • Single binary, easy to deploy

Cons:

  • Smaller ecosystem
  • Fewer managed cloud options
  • Less “standard” (fewer resources, tutorials)

Platform-as-a-Service

When: You just want to deploy code, not manage infrastructure.

  • Render: Simple deploys, managed PostgreSQL
  • Railway: Developer-friendly, good free tier
  • Fly.io: Edge deployment, good performance
  • Heroku: The original, still solid

# Fly.io example
fly launch
fly deploy

No YAML. No clusters. No ops.

Pros:

  • Zero infrastructure management
  • Push code, app runs
  • Often cheaper for small projects

Cons:

  • Vendor lock-in
  • Less control
  • Can get expensive at scale

Plain VMs

When: You don’t need containers.

Seriously. Not everything needs to be a container.

A Python app with systemd:

[Unit]
Description=My App
After=network.target

[Service]
User=app
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/venv/bin/python app.py
Restart=always

[Install]
WantedBy=multi-user.target

Ansible for configuration management. No Docker, no Kubernetes.
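
The Ansible side can stay equally small. A hedged sketch using the builtin copy and systemd modules (host group and paths are illustrative):

```yaml
- hosts: appservers
  become: true
  tasks:
    - name: Install the systemd unit
      ansible.builtin.copy:
        src: myapp.service
        dest: /etc/systemd/system/myapp.service

    - name: Enable and (re)start the service
      ansible.builtin.systemd:
        name: myapp
        enabled: true
        state: restarted
        daemon_reload: true
```

One playbook, one server, and deployment is ansible-playbook away.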

Pros:

  • Simple
  • No container overhead
  • Familiar to most engineers
  • Debugging is easy

Cons:

  • Less isolation than containers
  • Dependency management is harder
  • Less reproducible

Serverless / FaaS

When: Event-driven workloads, variable load, don’t want to manage servers.

  • AWS Lambda
  • Google Cloud Functions
  • Azure Functions
  • Cloudflare Workers

// AWS Lambda
export const handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello!" })
  };
};

Pros:

  • Pay per execution
  • Automatic scaling to zero
  • No server management

Cons:

  • Cold starts
  • Vendor lock-in
  • Difficult for long-running processes
  • Debugging is complex

Decision framework

Ask yourself:

1. How many services do I have?

  • 1-3 services: Docker Compose or PaaS
  • 4-10 services: Swarm, Nomad, or Kubernetes
  • 10+ services: Kubernetes becomes more relevant

2. Do I need dynamic scaling?

  • No: Almost anything except Kubernetes
  • Yes, but predictable: Scheduled scaling, no K8s needed
  • Yes, unpredictable: Kubernetes or serverless

3. What’s my team’s expertise?

  • No Kubernetes experience: Don’t start with Kubernetes
  • Some experience: Managed Kubernetes (EKS, GKE)
  • Lots of experience: Whatever you want

4. What’s my budget?

  • Small (<$100/month): VPS, PaaS, or serverless
  • Medium ($100-1000/month): Depends on requirements
  • Large (>$1000/month): Kubernetes becomes economically viable

5. Do I need multi-cloud or hybrid?

  • No: Use what the cloud offers
  • Yes: Kubernetes is the best option for portability

When Kubernetes IS the right choice

To be fair: there are good reasons to choose Kubernetes.

  • Many microservices that scale independently
  • Team with Kubernetes expertise that can maintain it
  • Complex deployment requirements (canary, blue-green)
  • Multi-tenant platform for multiple teams
  • Portability between clouds is important
  • Ecosystem (Prometheus, ArgoCD, etc.) has value for you

If this applies to you, Kubernetes is probably a good choice.

My rule of thumb

Start simple. Add complexity when you need it, not when you think you might need it someday.

Docker Compose for development and small production workloads. Kubernetes when you have the problems that Kubernetes solves — not before.

And don’t be afraid to say: “We don’t need Kubernetes.” That’s not failure. That’s engineering wisdom.

Conclusion

Kubernetes is a great tool. For the right use cases.

But it’s not the only tool. And for many projects it’s not the best tool.

Before adopting Kubernetes, ask yourself:

  • What problem am I solving?
  • Is Kubernetes the simplest solution for this problem?
  • Do I have the resources (time, money, expertise) to maintain Kubernetes?

If the answer is “no” to any of these questions, look at alternatives. Docker Compose, Nomad, PaaS, VMs — they all exist for a reason.

The best infrastructure is infrastructure that solves your problems with as little complexity as possible. Sometimes that’s Kubernetes. Often it’s not.

Choose wisely.