Kubernetes Secrets are not secrets. They’re base64-encoded plaintext, stored in etcd unencrypted unless you configure encryption at rest, and visible to anyone with sufficient cluster access. This is the default, and it’s terrifying.
Every cloud provider offers a managed secrets service: AWS has Secrets Manager, Google has Secret Manager, Azure has Key Vault. They work fine — until you need to migrate, or you want to understand what actually happens to your secrets, or you simply don’t want your most sensitive data sitting in someone else’s infrastructure.
HashiCorp Vault is the self-hosted alternative. You run it, you control it, you understand it.
Why Self-Hosted Secrets?
Using a cloud KMS means:
- Your secrets exist in a system you don’t control
- You can’t audit what happens internally
- Migration becomes a nightmare (secrets are the stickiest lock-in)
- You pay per secret, per request
Self-hosted Vault means:
- Full visibility into how secrets are stored and accessed
- No vendor lock-in for your most critical data
- Works identically across clouds, on-prem, and edge
- Your secrets never leave infrastructure you control
This is sovereignty. Not because cloud KMS is bad, but because secrets are too important not to understand completely.
Vault Architecture Basics
Vault has three core concepts:
Secrets Engines — Where secrets are stored
- `kv` (key-value) for static secrets
- `database` for dynamic database credentials
- `pki` for certificate generation
Auth Methods — How clients prove their identity
- `kubernetes` for pods
- `token` for direct access
- `userpass`, `ldap`, `oidc` for humans
Policies — What authenticated clients can access
```mermaid
flowchart TD
    subgraph vault["Vault"]
        subgraph auth["Auth Methods"]
            K8S["kubernetes"]
            USER["userpass"]
        end
        K8S --> POL["Policies"]
        USER --> POL
        subgraph engines["Secrets Engines"]
            KV["KV"]
            PKI["PKI"]
            DB["DB"]
        end
        POL --> KV
        POL --> PKI
        POL --> DB
    end
```
Installing Vault on Kubernetes
Using the official Helm chart:
```bash
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

helm install vault hashicorp/vault \
  --namespace vault \
  --create-namespace \
  --set server.ha.enabled=true \
  --set server.ha.replicas=3
```
For GitOps with ArgoCD:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vault
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://helm.releases.hashicorp.com
    chart: vault
    targetRevision: 0.27.0
    helm:
      values: |
        server:
          ha:
            enabled: true
            replicas: 3
            raft:
              enabled: true
          dataStorage:
            size: 10Gi
            storageClass: longhorn
        injector:
          enabled: true
  destination:
    server: https://kubernetes.default.svc
    namespace: vault
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Initializing and Unsealing
Vault starts sealed. Initialize it:
```bash
kubectl exec -it vault-0 -n vault -- vault operator init
```
This outputs:
- 5 unseal keys (keep these safe!)
- Initial root token
Store these securely. If you lose the unseal keys, you lose access to all secrets.
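The share count and threshold are configurable at init time. A sketch using the standard `vault operator init` flags; the output file name is illustrative, and the file should be moved somewhere safe immediately:

```shell
# Sketch: initialize with explicit Shamir parameters (5 shares, 3 required)
# and capture the keys and root token as JSON for safekeeping
kubectl exec vault-0 -n vault -- \
  vault operator init -key-shares=5 -key-threshold=3 -format=json \
  > vault-init.json
```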
Unseal each Vault pod (any 3 of the 5 keys will do, with the default threshold):
```bash
kubectl exec -it vault-0 -n vault -- vault operator unseal <key1>
kubectl exec -it vault-0 -n vault -- vault operator unseal <key2>
kubectl exec -it vault-0 -n vault -- vault operator unseal <key3>
```
Repeat for vault-1 and vault-2.
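In practice this is easy to script. A sketch, assuming the three keys are exported as hypothetical `UNSEAL_KEY_1..3` environment variables:

```shell
# Unseal every pod in the cluster with three of the five keys
for pod in vault-0 vault-1 vault-2; do
  for key in "$UNSEAL_KEY_1" "$UNSEAL_KEY_2" "$UNSEAL_KEY_3"; do
    kubectl exec "$pod" -n vault -- vault operator unseal "$key"
  done
done
```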
Auto-Unseal
Manual unsealing doesn’t scale. For production, use auto-unseal with a KMS or transit key:
```yaml
# Using another Vault for transit auto-unseal
server:
  ha:
    enabled: true
    config: |
      seal "transit" {
        address         = "https://vault-primary.example.com:8200"
        disable_renewal = "false"
        key_name        = "autounseal"
        mount_path      = "transit/"
        tls_skip_verify = "false"
      }
```
Or with cloud KMS (yes, for this one thing the trade-off makes sense):
```yaml
server:
  config: |
    seal "awskms" {
      region     = "eu-west-1"
      kms_key_id = "alias/vault-unseal"
    }
```
Enabling the KV Secrets Engine
```bash
# Login with root token
kubectl exec -it vault-0 -n vault -- vault login

# Enable KV v2 secrets engine
kubectl exec -it vault-0 -n vault -- vault secrets enable -path=secret kv-v2

# Create a secret
kubectl exec -it vault-0 -n vault -- vault kv put secret/myapp/config \
  database_url="postgres://user:pass@db:5432/myapp" \
  api_key="super-secret-key"
```
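To confirm the write, read the secret back; `vault kv get` also supports `-field` for extracting a single value in scripts:

```shell
# Read the whole secret back
kubectl exec vault-0 -n vault -- vault kv get secret/myapp/config

# Or extract a single field for use in scripts
kubectl exec vault-0 -n vault -- vault kv get -field=api_key secret/myapp/config
```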
Kubernetes Authentication
Enable pods to authenticate with Vault:
```bash
# Enable Kubernetes auth
kubectl exec -it vault-0 -n vault -- vault auth enable kubernetes

# Configure it
kubectl exec -it vault-0 -n vault -- vault write auth/kubernetes/config \
  kubernetes_host="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
```
Create a role for your application:
```bash
kubectl exec -it vault-0 -n vault -- vault write auth/kubernetes/role/myapp \
  bound_service_account_names=myapp \
  bound_service_account_namespaces=default \
  policies=myapp-policy \
  ttl=1h
```
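A quick sanity check is to log in manually from a pod running as the `myapp` service account, using its projected token. A sketch; the token path is the Kubernetes default, and `VAULT_ADDR` is assumed to point at the Vault service:

```shell
# Run from inside a pod using the myapp service account
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# A successful login returns a client token scoped to myapp-policy
vault write auth/kubernetes/login role=myapp jwt="$JWT"
```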
Creating Policies
Policies define what authenticated clients can access:
```hcl
# myapp-policy.hcl
path "secret/data/myapp/*" {
  capabilities = ["read"]
}

path "secret/metadata/myapp/*" {
  capabilities = ["list"]
}
```
Apply it:
```bash
kubectl exec -i vault-0 -n vault -- vault policy write myapp-policy - <<EOF
path "secret/data/myapp/*" {
  capabilities = ["read"]
}
path "secret/metadata/myapp/*" {
  capabilities = ["list"]
}
EOF
```
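To verify the policy behaves as intended, you can mint a token carrying only `myapp-policy` and try both an allowed and a denied operation. A sketch using standard `vault token create` flags:

```shell
# Mint a token that carries only myapp-policy
TOKEN=$(kubectl exec vault-0 -n vault -- \
  vault token create -policy=myapp-policy -field=token)

# Allowed: read under secret/data/myapp/*
kubectl exec vault-0 -n vault -- \
  env VAULT_TOKEN="$TOKEN" vault kv get secret/myapp/config

# Denied: write is not in the policy, so this should fail with 403
kubectl exec vault-0 -n vault -- \
  env VAULT_TOKEN="$TOKEN" vault kv put secret/myapp/config foo=bar
```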
Injecting Secrets into Pods
The Vault Agent Injector automatically injects secrets into pods via annotations:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "myapp"
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp/config"
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/myapp/config" -}}
          export DATABASE_URL="{{ .Data.data.database_url }}"
          export API_KEY="{{ .Data.data.api_key }}"
          {{- end }}
    spec:
      serviceAccountName: myapp
      containers:
        - name: app
          image: myapp:v1.0.0
          command: ["/bin/sh", "-c"]
          args:
            - source /vault/secrets/config && ./myapp
```
The injector:
- Adds an init container that authenticates with Vault
- Writes secrets to `/vault/secrets/config`
- Your app reads them at startup
External Secrets Operator Alternative
The Vault Agent Injector works well, but it adds a sidecar to every pod and hides secrets from tooling that expects native Secrets. External Secrets Operator is an alternative that syncs Vault secrets into native Kubernetes Secrets:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "http://vault.vault:8200"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "myapp"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: myapp-secrets
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: myapp/config
        property: database_url
    - secretKey: API_KEY
      remoteRef:
        key: myapp/config
        property: api_key
```
This creates a regular Kubernetes Secret that your pods can use normally:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          envFrom:
            - secretRef:
                name: myapp-secrets
```
Dynamic Database Credentials
Static secrets are fine, but dynamic secrets are better. Vault can generate unique database credentials per pod:
```bash
# Enable database secrets engine
kubectl exec -it vault-0 -n vault -- vault secrets enable database

# Configure PostgreSQL connection
kubectl exec -it vault-0 -n vault -- vault write database/config/mydb \
  plugin_name=postgresql-database-plugin \
  connection_url="postgresql://{{username}}:{{password}}@postgres.default:5432/myapp" \
  allowed_roles="myapp-db" \
  username="vault_admin" \
  password="vault_admin_password"

# Create role that generates credentials
kubectl exec -it vault-0 -n vault -- vault write database/roles/myapp-db \
  db_name=mydb \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"
```
Now pods can request unique credentials:
```yaml
annotations:
  vault.hashicorp.com/agent-inject-secret-db: "database/creds/myapp-db"
  vault.hashicorp.com/agent-inject-template-db: |
    {{- with secret "database/creds/myapp-db" -}}
    export PGUSER="{{ .Data.username }}"
    export PGPASSWORD="{{ .Data.password }}"
    {{- end }}
```
Each pod gets unique credentials that:
- Auto-rotate
- Can be revoked individually
- Create an audit trail per pod
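You can exercise the role by hand before wiring it into pods. `vault read` returns a `lease_id` along with the generated username and password, and `vault lease revoke` kills a credential early:

```shell
# Request a credential; the response includes lease_id, username, password
kubectl exec vault-0 -n vault -- vault read database/creds/myapp-db

# Revoke a specific lease early (lease_id taken from the output above)
kubectl exec vault-0 -n vault -- \
  vault lease revoke database/creds/myapp-db/<lease_id>
```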
PKI: Certificate Authority
Vault can be your internal CA:
```bash
# Enable PKI
kubectl exec -it vault-0 -n vault -- vault secrets enable pki

# Configure max TTL
kubectl exec -it vault-0 -n vault -- vault secrets tune -max-lease-ttl=87600h pki

# Generate root CA
kubectl exec -it vault-0 -n vault -- vault write -field=certificate pki/root/generate/internal \
  common_name="My Org Root CA" \
  ttl=87600h > CA_cert.crt

# Create role for issuing certs
kubectl exec -it vault-0 -n vault -- vault write pki/roles/internal-certs \
  allowed_domains="internal,svc.cluster.local" \
  allow_subdomains=true \
  max_ttl=72h
```
Integrate with cert-manager for automatic certificate management.
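Before wiring up cert-manager, you can issue a certificate directly against the role to confirm the CA works. The common name below is an example that matches the allowed domains above:

```shell
# Issue a short-lived cert; the response contains certificate,
# private_key, and issuing_ca
kubectl exec vault-0 -n vault -- vault write pki/issue/internal-certs \
  common_name="myapp.default.svc.cluster.local" \
  ttl=24h
```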
High Availability
For production, run Vault in HA mode with Raft storage:
```yaml
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        cluster_name = "vault-cluster"

        storage "raft" {
          path = "/vault/data"

          retry_join {
            leader_api_addr = "http://vault-0.vault-internal:8200"
          }
          retry_join {
            leader_api_addr = "http://vault-1.vault-internal:8200"
          }
          retry_join {
            leader_api_addr = "http://vault-2.vault-internal:8200"
          }
        }
```
This creates a 3-node Raft cluster:
- One leader, two followers
- Automatic leader election on failure
- Data replicated across all nodes
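To verify the cluster formed correctly, inspect the Raft peer set; exactly one node should report as leader:

```shell
# One leader, two followers expected
kubectl exec vault-0 -n vault -- vault operator raft list-peers

# General health: seal status, HA mode, active node
kubectl exec vault-0 -n vault -- vault status
```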
Backup Strategy
Vault data is critical. Back it up:
```bash
# Create snapshot
kubectl exec -it vault-0 -n vault -- vault operator raft snapshot save /tmp/vault-snapshot.snap

# Copy to local
kubectl cp vault/vault-0:/tmp/vault-snapshot.snap ./vault-snapshot.snap
```
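Restoring is the mirror image: copy a snapshot into a pod and load it. Note that a restore replaces the current data:

```shell
# Copy the snapshot back and load it (overwrites current Vault data)
kubectl cp ./vault-snapshot.snap vault/vault-0:/tmp/vault-snapshot.snap
kubectl exec vault-0 -n vault -- \
  vault operator raft snapshot restore /tmp/vault-snapshot.snap
```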
Automate with a CronJob:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-backup
spec:
  schedule: "0 */6 * * *" # Every 6 hours
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: vault-backup
          containers:
            - name: backup
              image: hashicorp/vault:1.15
              command:
                - /bin/sh
                - -c
                - |
                  vault operator raft snapshot save /backup/vault-$(date +%Y%m%d-%H%M%S).snap
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: vault-backup
          restartPolicy: OnFailure
```
Audit Logging
Enable audit logging to track all access:
```bash
kubectl exec -it vault-0 -n vault -- vault audit enable file file_path=/vault/audit/audit.log
```
Every secret access is logged:
```json
{
  "time": "2025-07-02T10:00:00Z",
  "type": "response",
  "auth": {
    "token_type": "service",
    "policies": ["myapp-policy"]
  },
  "request": {
    "path": "secret/data/myapp/config",
    "operation": "read"
  }
}
```
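Because the file audit device writes one JSON object per line, plain `grep` is enough for quick queries. A sketch against a sample log; the entries below are illustrative, not real Vault output:

```shell
# Sample audit lines (illustrative), one JSON object per line
cat > /tmp/audit-sample.log <<'EOF'
{"time":"2025-07-02T10:00:00Z","type":"response","request":{"path":"secret/data/myapp/config","operation":"read"}}
{"time":"2025-07-02T10:01:00Z","type":"response","request":{"path":"auth/token/lookup-self","operation":"read"}}
EOF

# Keep only entries that touched the myapp secret tree
grep '"path":"secret/data/myapp/' /tmp/audit-sample.log
```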
My Production Setup
Here’s my actual Vault configuration:
```yaml
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
  dataStorage:
    size: 10Gi
    storageClass: longhorn
  auditStorage:
    enabled: true
    size: 10Gi
    storageClass: longhorn
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 512Mi
      cpu: 500m
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-tls/ca.crt

injector:
  enabled: true
  resources:
    requests:
      memory: 64Mi
      cpu: 50m
    limits:
      memory: 128Mi
      cpu: 100m

ui:
  enabled: true
```
Key decisions:
- HA with Raft — No external storage dependency
- Audit storage — Compliance and debugging
- Resource limits — Vault is lightweight
- UI enabled — For occasional manual operations
Migration from Cloud KMS
Already using AWS Secrets Manager or similar? Migrate gradually:
- Install Vault alongside your existing KMS
- Sync existing secrets to Vault
- Update apps one by one to use Vault
- Decommission cloud KMS when empty
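Step 2 can be scripted. A hypothetical sketch that copies flat string secrets from AWS Secrets Manager into a `secret/migrated/` prefix, assuming the `aws` CLI is configured and `vault` is authenticated against your cluster:

```shell
# Hypothetical one-shot sync; handles only flat string secret values
for name in $(aws secretsmanager list-secrets \
    --query 'SecretList[].Name' --output text); do
  value=$(aws secretsmanager get-secret-value --secret-id "$name" \
    --query SecretString --output text)
  vault kv put "secret/migrated/${name}" value="$value"
done
```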
Use External Secrets Operator to read from both during transition:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault
spec:
  provider:
    vault:
      server: "http://vault.vault:8200"
```
Why This Matters
Secrets are the keys to your kingdom. They deserve infrastructure you understand and control.
Self-hosted Vault gives you:
- Visibility — See exactly how secrets are stored and accessed
- Portability — Same secrets management everywhere
- Control — No surprise pricing, no vendor decisions affecting you
- Understanding — When something breaks, you can fix it
The operational overhead is real but manageable. The peace of mind from controlling your most sensitive data? Invaluable.
Your secrets are too important to be someone else’s problem. Self-hosted Vault puts you in control of your most critical infrastructure.
