Vault with Kubernetes: Injecting Secrets into Pods
The Problem with Kubernetes Secrets
Kubernetes has a built-in Secrets resource, but it has serious limitations for production use. Secrets are base64-encoded (not encrypted) in etcd by default. Even with etcd encryption at rest enabled, the protection model is limited. RBAC for Secrets is coarse-grained, meaning anyone with access to read Secrets in a namespace can read all Secrets in that namespace. There is no audit trail for individual secret access at the application level. Rotation requires redeploying workloads or building custom controllers. There is no support for dynamic credential generation. And perhaps most critically, Secrets often end up committed to Git repositories as part of Kubernetes manifests, Helm values files, or Kustomize overlays.
The scale of the problem becomes clear when you audit a typical cluster. Count the number of Kubernetes Secrets, note how many contain database passwords or API keys, check when they were last rotated, and ask whether anyone can tell you which pods accessed which secrets in the last 24 hours. In most organizations, the answers are concerning.
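You can run this audit yourself in a few commands (a sketch; assumes kubectl access to the cluster and jq installed locally):

```shell
# Total number of Secrets across all namespaces
kubectl get secrets -A --no-headers | wc -l

# Break them down by type (Opaque, kubernetes.io/tls, token secrets, ...)
kubectl get secrets -A -o json | \
  jq -r '.items[].type' | sort | uniq -c | sort -rn

# Oldest Secrets by creation timestamp -- a rough proxy for "never rotated"
kubectl get secrets -A -o json | \
  jq -r '.items[] | "\(.metadata.creationTimestamp) \(.metadata.namespace)/\(.metadata.name)"' | \
  sort | head -20
```

The creation timestamp is only an approximation of rotation age, since updating a Secret in place does not change it; but if a Secret is years old and nobody can say otherwise, that usually answers the question.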
Vault solves all of these problems, but the challenge becomes: how do you get Vault secrets into running pods without baking credentials into container images, environment variables, or Kubernetes Secret objects? The answer is one of three integration patterns: the Vault Agent Injector, the Vault CSI Provider, or the Vault Secrets Operator. Each has different trade-offs in terms of resource overhead, feature support, and operational complexity. Understanding these trade-offs is essential for choosing the right approach for your environment.
Architecture Overview
Before diving into specific integration methods, it helps to understand the overall architecture of Vault-Kubernetes integration:
                   +-------------------+
                   |   Vault Server    |
                   |   (External or    |
                   |    In-Cluster)    |
                   +---------+---------+
                             |
            +----------------+----------------+
            |                |                |
   +--------+-------+  +-----+------+  +------+-------+
   | Agent Injector |  |    CSI     |  |   Secrets    |
   |   (Webhook)    |  |  Provider  |  |   Operator   |
   +--------+-------+  +-----+------+  +------+-------+
            |                |                |
   +--------+-------+  +-----+------+  +------+-------+
   |   Sidecar in   |  |  DaemonSet |  |  Controller  |
   |   each Pod     |  |  on Nodes  |  |  in Cluster  |
   +--------+-------+  +-----+------+  +------+-------+
            |                |                |
   +--------+----------------+----------------+------+
   |                Application Pods                 |
   |    Secrets as files / env vars / K8s Secrets    |
   +-------------------------------------------------+
All three methods rely on the Kubernetes auth method in Vault, which allows pods to authenticate using their service account tokens. The key difference is how secrets are delivered to the application.
Kubernetes Auth Method Setup
Before any integration pattern works, Vault needs to trust your Kubernetes cluster. The Kubernetes auth method allows pods to authenticate using their Kubernetes service account JWT tokens. Vault validates these tokens by calling the Kubernetes TokenReview API.
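Every integration pattern performs the same login exchange under the hood. You can reproduce it manually with curl from inside a pod (a sketch; assumes VAULT_ADDR is set, jq is available, and a role named webapp exists):

```shell
# The JWT that kubelet mounts into every pod
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Exchange the JWT for a Vault token via the Kubernetes auth login endpoint.
# Vault validates the JWT against the TokenReview API before answering.
curl -s --request POST \
  --data "{\"role\": \"webapp\", \"jwt\": \"${JWT}\"}" \
  "${VAULT_ADDR}/v1/auth/kubernetes/login" | jq -r '.auth.client_token'
```

The injector, CSI provider, and operator all issue exactly this request on your behalf; debugging any of them usually starts with checking whether this call succeeds.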
Configuring Vault (Vault Running Inside the Cluster)
When Vault runs as a pod inside the same Kubernetes cluster, configuration is straightforward because it can use the in-cluster service account:
# Enable the Kubernetes auth method
vault auth enable kubernetes
# Configure it using the in-cluster service account
vault write auth/kubernetes/config \
kubernetes_host="https://kubernetes.default.svc:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
token_reviewer_jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token
Configuring Vault (Vault Running Outside the Cluster)
When Vault runs on external infrastructure (VMs, a different cluster, or a managed service), you need to provide the cluster's API server endpoint and a service account token with permissions to call the TokenReview API:
# Create a service account in the cluster for Vault to use
kubectl create serviceaccount vault-auth -n vault
# Create a ClusterRoleBinding for token review
kubectl create clusterrolebinding vault-auth-binding \
--clusterrole=system:auth-delegator \
--serviceaccount=vault:vault-auth
# Get the service account token (Kubernetes 1.24+)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: vault-auth-token
  namespace: vault
  annotations:
    kubernetes.io/service-account.name: vault-auth
type: kubernetes.io/service-account-token
EOF
# Extract the token and CA certificate
SA_TOKEN=$(kubectl get secret vault-auth-token -n vault -o jsonpath='{.data.token}' | base64 -d)
K8S_CA_CERT=$(kubectl get secret vault-auth-token -n vault -o jsonpath='{.data.ca\.crt}' | base64 -d)
K8S_HOST=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
# Configure Vault with external cluster details
vault auth enable kubernetes
vault write auth/kubernetes/config \
kubernetes_host="$K8S_HOST" \
kubernetes_ca_cert="$K8S_CA_CERT" \
token_reviewer_jwt="$SA_TOKEN" \
issuer="https://kubernetes.default.svc.cluster.local"
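The issuer value must match the iss claim inside your cluster's service account tokens, or logins fail with a claim mismatch. A small helper (a sketch; assumes only base64, cut, tr, and sed) decodes that claim from any token so you can check before configuring Vault:

```shell
# Extract the "iss" claim from a JWT's payload segment.
# JWTs use unpadded base64url, so translate and re-pad before decoding.
jwt_issuer() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d 2>/dev/null | \
    sed -n 's/.*"iss" *: *"\([^"]*\)".*/\1/p'
}

# Against a real cluster you would feed it a freshly minted token:
#   jwt_issuer "$(kubectl create token vault-auth -n vault)"
```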
Multi-Cluster Configuration
For organizations running multiple Kubernetes clusters, you can mount multiple Kubernetes auth backends:
# Enable a separate auth mount for each cluster
vault auth enable -path=kubernetes-prod kubernetes
vault auth enable -path=kubernetes-staging kubernetes
vault auth enable -path=kubernetes-dev kubernetes
# Configure each mount with the respective cluster's details
vault write auth/kubernetes-prod/config \
kubernetes_host="https://prod-cluster.example.com:6443" \
kubernetes_ca_cert=@prod-ca.crt \
token_reviewer_jwt=@prod-token.jwt
vault write auth/kubernetes-staging/config \
kubernetes_host="https://staging-cluster.example.com:6443" \
kubernetes_ca_cert=@staging-ca.crt \
token_reviewer_jwt=@staging-token.jwt
Creating Vault Roles for Kubernetes Service Accounts
Vault roles map Kubernetes identities (service accounts in namespaces) to Vault policies:
# Create a policy for the application
vault policy write webapp - <<'EOF'
path "secret/data/webapp/production" {
capabilities = ["read"]
}
path "secret/metadata/webapp/production" {
capabilities = ["read", "list"]
}
path "database/creds/webapp-readonly" {
capabilities = ["read"]
}
path "sys/leases/renew" {
capabilities = ["update"]
}
path "auth/token/renew-self" {
capabilities = ["update"]
}
EOF
# Create a Kubernetes auth role
vault write auth/kubernetes/role/webapp \
bound_service_account_names="webapp-sa" \
bound_service_account_namespaces="production" \
policies="webapp" \
ttl="1h" \
max_ttl="4h" \
audience="vault"
# Create a role that allows multiple service accounts
vault write auth/kubernetes/role/monitoring \
bound_service_account_names="prometheus-sa,grafana-sa,alertmanager-sa" \
bound_service_account_namespaces="monitoring" \
policies="monitoring-read" \
ttl="2h" \
max_ttl="8h"
# Create a role scoped to multiple namespaces
vault write auth/kubernetes/role/shared-infra \
bound_service_account_names="infra-sa" \
bound_service_account_namespaces="production,staging" \
policies="shared-infra" \
ttl="1h"
This role says: any pod running as service account webapp-sa in the production namespace can authenticate and receive a token with the webapp policy. The binding is precise, preventing pods in other namespaces or with different service accounts from impersonating the webapp.
Service Account Setup
# service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-sa
  namespace: production
  labels:
    app: webapp
    vault-integration: "true"
kubectl apply -f service-account.yaml
Vault Agent Injector
The Agent Injector is a Kubernetes mutating admission webhook. When you add specific annotations to a pod spec, the injector automatically adds a Vault Agent sidecar (or init container) that authenticates to Vault, retrieves secrets, and writes them to a shared in-memory volume. This is the most mature and feature-rich integration method.
How It Works Internally
- You deploy a pod with Vault annotations.
- The Kubernetes API server sends the pod spec to the Vault Agent Injector webhook.
- The injector mutates the pod spec, adding an init container and/or sidecar container.
- The init container authenticates to Vault using the pod's service account token.
- It retrieves the requested secrets and writes them to a shared tmpfs volume at /vault/secrets/.
- Your application container starts and reads secrets from the shared volume.
- If a sidecar is present, it continuously watches for secret changes and re-renders templates.
Installing the Injector
The injector is included in the Vault Helm chart:
# Install only the injector (pointing to an external Vault server)
helm install vault hashicorp/vault \
--namespace vault \
--create-namespace \
--set injector.enabled=true \
--set server.enabled=false \
--set injector.externalVaultAddr="https://vault.example.com:8200" \
--set injector.authPath="auth/kubernetes" \
--set injector.replicas=2 \
--set injector.resources.requests.cpu=100m \
--set injector.resources.requests.memory=64Mi \
--set injector.resources.limits.cpu=250m \
--set injector.resources.limits.memory=128Mi
# Install Vault server AND injector together
helm install vault hashicorp/vault \
--namespace vault \
--create-namespace \
--set server.ha.enabled=true \
--set server.ha.replicas=3 \
--set server.ha.raft.enabled=true \
--set injector.enabled=true \
--set injector.replicas=2
# Verify the injector is running
kubectl get pods -n vault -l app.kubernetes.io/name=vault-agent-injector
kubectl get mutatingwebhookconfigurations | grep vault
Setting server.enabled=false installs only the injector, pointing it at an external Vault cluster. Running two injector replicas provides redundancy since the webhook is critical for pod creation.
Annotations Reference
Here are the key annotations that control injection behavior:
| Annotation | Purpose | Example |
|---|---|---|
| vault.hashicorp.com/agent-inject | Enable injection | "true" |
| vault.hashicorp.com/role | Vault Kubernetes auth role | "webapp" |
| vault.hashicorp.com/agent-inject-secret-NAME | Secret path to inject | "secret/data/webapp/production" |
| vault.hashicorp.com/agent-inject-template-NAME | Go template for rendering | See below |
| vault.hashicorp.com/agent-pre-populate-only | Init container only (no sidecar) | "true" |
| vault.hashicorp.com/agent-pre-populate | Disable init container | "false" |
| vault.hashicorp.com/agent-inject-command-NAME | Command to run after render | "kill -HUP 1" |
| vault.hashicorp.com/agent-inject-status | Set to "update" to re-inject | "update" |
| vault.hashicorp.com/agent-cache-enable | Enable response caching | "true" |
| vault.hashicorp.com/agent-limits-cpu | CPU limit for sidecar | "250m" |
| vault.hashicorp.com/agent-limits-mem | Memory limit for sidecar | "128Mi" |
| vault.hashicorp.com/agent-requests-cpu | CPU request for sidecar | "50m" |
| vault.hashicorp.com/agent-requests-mem | Memory request for sidecar | "64Mi" |
| vault.hashicorp.com/tls-skip-verify | Skip TLS verification | "false" |
| vault.hashicorp.com/ca-cert | Path to CA cert in pod | "/vault/tls/ca.crt" |
Basic Example: Static Secrets
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "webapp"
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/webapp/production"
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/webapp/production" -}}
          DB_HOST={{ .Data.data.db_host }}
          DB_PORT={{ .Data.data.db_port }}
          DB_USER={{ .Data.data.db_user }}
          DB_PASS={{ .Data.data.db_pass }}
          API_KEY={{ .Data.data.api_key }}
          {{- end }}
        vault.hashicorp.com/agent-pre-populate-only: "true"
    spec:
      serviceAccountName: webapp-sa
      containers:
        - name: webapp
          image: myregistry/webapp:latest
          command: ["/bin/sh", "-c"]
          args:
            - "source /vault/secrets/config && exec node server.js"
          ports:
            - containerPort: 3000
Secrets are written to /vault/secrets/NAME inside the pod, where NAME matches the suffix you used in the annotation key. The template annotation lets you control the output format to match whatever your application expects.
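After deploying, it is worth confirming that the webhook actually mutated the pod and that the file rendered as expected (a sketch; the label selector and container names follow the example above):

```shell
# The pod should list vault-agent-init (and vault-agent, unless running
# in pre-populate-only mode) alongside your application container
kubectl get pod -n production -l app=webapp \
  -o jsonpath='{.items[0].spec.initContainers[*].name} {.items[0].spec.containers[*].name}'

# Inspect the rendered file from the application container's point of view
kubectl exec -n production deploy/webapp -c webapp -- cat /vault/secrets/config
```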
Template Formats for Different Applications
Different applications expect secrets in different formats. Here are templates for common patterns:
# JSON format for Node.js/Python applications
annotations:
  vault.hashicorp.com/agent-inject-template-config: |
    {{- with secret "secret/data/webapp/production" -}}
    {
      "database": {
        "host": "{{ .Data.data.db_host }}",
        "port": {{ .Data.data.db_port }},
        "username": "{{ .Data.data.db_user }}",
        "password": "{{ .Data.data.db_pass }}"
      },
      "api_key": "{{ .Data.data.api_key }}"
    }
    {{- end }}

# .env format for dotenv-based applications
annotations:
  vault.hashicorp.com/agent-inject-template-dotenv: |
    {{- with secret "secret/data/webapp/production" -}}
    DB_HOST={{ .Data.data.db_host }}
    DB_PORT={{ .Data.data.db_port }}
    DB_USER={{ .Data.data.db_user }}
    DB_PASS={{ .Data.data.db_pass }}
    {{- end }}

# YAML format for Spring Boot or similar
annotations:
  vault.hashicorp.com/agent-inject-template-application: |
    {{- with secret "secret/data/webapp/production" -}}
    spring:
      datasource:
        url: jdbc:postgresql://{{ .Data.data.db_host }}:{{ .Data.data.db_port }}/webapp
        username: {{ .Data.data.db_user }}
        password: {{ .Data.data.db_pass }}
    {{- end }}

# Java properties format
annotations:
  vault.hashicorp.com/agent-inject-template-props: |
    {{- with secret "secret/data/webapp/production" -}}
    db.host={{ .Data.data.db_host }}
    db.port={{ .Data.data.db_port }}
    db.user={{ .Data.data.db_user }}
    db.pass={{ .Data.data.db_pass }}
    {{- end }}

# Nginx config snippet for TLS certificates
annotations:
  vault.hashicorp.com/agent-inject-template-cert: |
    {{- with secret "pki_int/issue/web-servers" "common_name=api.internal.example.com" "ttl=72h" -}}
    {{ .Data.certificate }}
    {{ .Data.ca_chain }}
    {{- end }}
  vault.hashicorp.com/agent-inject-template-key: |
    {{- with secret "pki_int/issue/web-servers" "common_name=api.internal.example.com" "ttl=72h" -}}
    {{ .Data.private_key }}
    {{- end }}
Init Container vs Sidecar
By default, the injector adds both an init container (to populate secrets before the app starts) and a sidecar (to keep secrets updated). You can control this behavior:
annotations:
  # Init container only -- secrets are fetched once at startup
  # Use for static secrets that do not change during the pod lifetime
  vault.hashicorp.com/agent-pre-populate-only: "true"

  # Sidecar only -- no init container, secrets appear after startup
  # Use when the app can handle delayed secret availability
  vault.hashicorp.com/agent-pre-populate: "false"
Use init-only mode for secrets that do not change during the pod's lifetime. This reduces resource overhead since no sidecar process runs after initialization. Use the sidecar for dynamic secrets (like database credentials) that need lease renewal, or for secrets that rotate and need to be re-rendered.
Dynamic Secrets with Templates and Lease Renewal
For database credentials that are dynamically generated and need continuous renewal:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "webapp"
        # Dynamic database credentials
        vault.hashicorp.com/agent-inject-secret-db: "database/creds/webapp-readonly"
        vault.hashicorp.com/agent-inject-template-db: |
          {{- with secret "database/creds/webapp-readonly" -}}
          postgresql://{{ .Data.username }}:{{ .Data.password }}@db.internal:5432/webapp?sslmode=require
          {{- end }}
        # Static application config
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/webapp/production"
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/webapp/production" -}}
          STRIPE_KEY={{ .Data.data.stripe_api_key }}
          JWT_SECRET={{ .Data.data.jwt_secret }}
          {{- end }}
        # Run a command when secrets change (signal the app to reload)
        vault.hashicorp.com/agent-inject-command-db: "/bin/sh -c 'kill -USR1 $(pidof node) || true'"
        # Resource limits for the sidecar
        vault.hashicorp.com/agent-limits-cpu: "100m"
        vault.hashicorp.com/agent-limits-mem: "64Mi"
        vault.hashicorp.com/agent-requests-cpu: "50m"
        vault.hashicorp.com/agent-requests-mem: "32Mi"
    spec:
      serviceAccountName: webapp-sa
      containers:
        - name: webapp
          image: myregistry/webapp:v2.1.0
          command: ["/bin/sh", "-c"]
          args:
            - |
              while [ ! -f /vault/secrets/db ]; do sleep 1; done
              export DATABASE_URL=$(cat /vault/secrets/db)
              source /vault/secrets/config
              exec node server.js
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
The sidecar automatically renews the database lease and re-renders the template when the credentials change. The agent-inject-command annotation signals the application to reload its configuration when the file changes.
Vault Agent Configuration (Advanced)
For more complex scenarios, you can provide a full Vault Agent configuration file instead of using annotations:
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-configmap: "vault-agent-config"
# vault-agent-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-agent-config
  namespace: production
data:
  config.hcl: |
    auto_auth {
      method "kubernetes" {
        mount_path = "auth/kubernetes"
        config = {
          role = "webapp"
        }
      }
      sink "file" {
        config = {
          path = "/home/vault/.vault-token"
        }
      }
    }

    cache {
      use_auto_auth_token = true
    }

    listener "tcp" {
      address     = "127.0.0.1:8100"
      tls_disable = true
    }

    template {
      source      = "/vault/configs/db.ctmpl"
      destination = "/vault/secrets/db"
    }

    template {
      source      = "/vault/configs/config.ctmpl"
      destination = "/vault/secrets/config"
      command     = "kill -HUP $(pidof nginx) 2>/dev/null || true"
    }
This approach enables Vault Agent caching, which allows the application to make Vault API calls through a local proxy without managing tokens directly.
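With the cache and listener stanzas above, application code can talk to the local agent instead of Vault directly; with use_auto_auth_token enabled, the agent attaches its own token to proxied requests. A sketch of what that looks like from inside the application container (the listener address and secret path match the configuration above):

```shell
# No VAULT_TOKEN needed -- the agent injects its auto-auth token and
# proxies the request to the real Vault server, caching the response
curl -s http://127.0.0.1:8100/v1/secret/data/webapp/production | \
  jq -r '.data.data.db_host'
```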
Vault CSI Provider
The Vault CSI Provider uses the Kubernetes Container Storage Interface (CSI) to mount secrets as volumes. This approach is more aligned with native Kubernetes patterns and does not require a sidecar container in each pod. Instead, a DaemonSet runs on each node and handles secret retrieval.
Installation
# Install the Secrets Store CSI Driver (prerequisite)
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi secrets-store-csi-driver/secrets-store-csi-driver \
--namespace kube-system \
--set syncSecret.enabled=true \
--set enableSecretRotation=true \
--set rotationPollInterval=120s
# Install the Vault CSI Provider
helm install vault hashicorp/vault \
--namespace vault \
--create-namespace \
--set server.enabled=false \
--set injector.enabled=false \
--set csi.enabled=true \
--set csi.resources.requests.cpu=50m \
--set csi.resources.requests.memory=64Mi
# Verify both are running
kubectl get pods -n kube-system -l app=secrets-store-csi-driver
kubectl get pods -n vault -l app.kubernetes.io/name=vault-csi-provider
SecretProviderClass
The SecretProviderClass defines which Vault secrets to fetch and how to expose them:
# secret-provider-class.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-webapp-secrets
  namespace: production
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200"
    vaultCACertPath: "/vault/tls/ca.crt"
    roleName: "webapp"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/webapp/production"
        secretKey: "db_pass"
      - objectName: "api-key"
        secretPath: "secret/data/webapp/production"
        secretKey: "stripe_api_key"
      - objectName: "jwt-secret"
        secretPath: "secret/data/webapp/production"
        secretKey: "jwt_secret"
  # Optionally sync to a Kubernetes Secret for use in env vars
  secretObjects:
    - secretName: webapp-secrets
      type: Opaque
      data:
        - objectName: db-password
          key: DB_PASS
        - objectName: api-key
          key: API_KEY
        - objectName: jwt-secret
          key: JWT_SECRET
    # Requires corresponding tls-cert/tls-key entries under objects above
    - secretName: webapp-tls
      type: kubernetes.io/tls
      data:
        - objectName: tls-cert
          key: tls.crt
        - objectName: tls-key
          key: tls.key
Mounting in a Pod
# deployment-csi.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      serviceAccountName: webapp-sa
      containers:
        - name: webapp
          image: myregistry/webapp:latest
          volumeMounts:
            - name: secrets
              mountPath: "/mnt/secrets"
              readOnly: true
          env:
            # Use synced Kubernetes Secret for environment variables
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: webapp-secrets
                  key: DB_PASS
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: webapp-secrets
                  key: API_KEY
          ports:
            - containerPort: 3000
      volumes:
        - name: secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: vault-webapp-secrets
Secrets appear as files in /mnt/secrets/ and optionally as a synced Kubernetes Secret for use in environment variables. The CSI driver handles rotation when enableSecretRotation is enabled.
Vault Secrets Operator (VSO)
The Vault Secrets Operator is the newest integration method. It runs as a controller in the cluster and syncs Vault secrets into Kubernetes Secrets using custom resources. This approach is the most Kubernetes-native and works well with GitOps workflows because the CRDs can be stored in Git.
Installation
helm install vault-secrets-operator hashicorp/vault-secrets-operator \
--namespace vault-secrets-operator-system \
--create-namespace \
--set defaultVaultConnection.enabled=true \
--set defaultVaultConnection.address="https://vault.example.com:8200" \
--set defaultAuthMethod.enabled=true \
--set defaultAuthMethod.method=kubernetes \
--set defaultAuthMethod.mount=kubernetes
# Verify the operator is running
kubectl get pods -n vault-secrets-operator-system
kubectl get crds | grep secrets.hashicorp.com
Custom Resources
# vault-connection.yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
  name: vault-connection
  namespace: production
spec:
  address: https://vault.example.com:8200
  caCertSecretRef: vault-ca-cert
  skipTLSVerify: false
---
# vault-auth.yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: vault-auth
  namespace: production
spec:
  method: kubernetes
  mount: kubernetes
  kubernetes:
    role: webapp
    serviceAccount: webapp-sa
    audiences:
      - vault
---
# vault-static-secret.yaml (for KV secrets)
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: webapp-secrets
  namespace: production
spec:
  type: kv-v2
  mount: secret
  path: webapp/production
  destination:
    name: webapp-k8s-secret
    create: true
    labels:
      app: webapp
    type: Opaque
  refreshAfter: 30s
  vaultAuthRef: vault-auth
---
# vault-dynamic-secret.yaml (for dynamic database credentials)
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: webapp-db-creds
  namespace: production
spec:
  mount: database
  path: creds/webapp-readonly
  destination:
    name: webapp-db-secret
    create: true
    type: Opaque
  renewalPercent: 67
  vaultAuthRef: vault-auth
---
# vault-pki-secret.yaml (for PKI certificates)
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultPKISecret
metadata:
  name: webapp-tls
  namespace: production
spec:
  mount: pki_int
  role: web-servers
  commonName: api.internal.example.com
  altNames:
    - api-v2.internal.example.com
  ttl: 72h
  destination:
    name: webapp-tls-secret
    create: true
    type: kubernetes.io/tls
  vaultAuthRef: vault-auth
The operator watches these CRDs and keeps the destination Kubernetes Secrets in sync with Vault. When a secret changes in Vault, the operator updates the Kubernetes Secret automatically. For dynamic secrets, the operator handles lease renewal and re-creates credentials before they expire.
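Because VSO materializes ordinary Kubernetes Secrets, applications consume them through standard mechanisms and need no Vault awareness at all. For example, the webapp-k8s-secret created above can be loaded wholesale as environment variables (a sketch of the relevant pod template fragment; image name is illustrative):

```yaml
# Pod template fragment: every key in the synced Secret becomes an env var
spec:
  serviceAccountName: webapp-sa
  containers:
    - name: webapp
      image: myregistry/webapp:latest
      envFrom:
        - secretRef:
            name: webapp-k8s-secret
```

Note that a pod only sees updated env var values after a restart, which is why the rollout-restart feature below matters for this consumption style.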
Rollout Restart on Secret Change
The VSO can automatically trigger rolling restarts when secrets change:
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: webapp-secrets
  namespace: production
spec:
  type: kv-v2
  mount: secret
  path: webapp/production
  destination:
    name: webapp-k8s-secret
    create: true
  refreshAfter: 30s
  rolloutRestartTargets:
    - kind: Deployment
      name: webapp
  vaultAuthRef: vault-auth
Comparing Integration Methods
| Feature | Agent Injector | CSI Provider | Vault Secrets Operator |
|---|---|---|---|
| Sidecar required | Yes (optional init-only) | No | No |
| Dynamic secret renewal | Yes (built-in) | Limited | Yes |
| Template rendering | Yes (Go templates) | No | Limited (transformation) |
| Kubernetes Secret sync | No | Yes | Yes (primary method) |
| Resource overhead per pod | Higher (sidecar) | Lower (DaemonSet) | Lower (single controller) |
| Maturity | Most mature | Stable | Newer but actively developed |
| GitOps friendly | No (annotations only) | Partial (SecretProviderClass) | Yes (CRDs in Git) |
| Best for | Dynamic secrets, complex templates | Static secrets, env vars | GitOps workflows, CRD-based |
| Caching support | Yes | No | No |
| Secret format control | Full (Go templates) | Key extraction only | Key mapping |
| Deployment model | Webhook + sidecar per pod | DaemonSet + CSI driver | Controller + CRDs |
Recommendation: Use the Agent Injector when you need dynamic secrets with continuous lease renewal and complex template rendering. Use the CSI Provider when you primarily need static secrets as environment variables with minimal overhead. Use the Vault Secrets Operator when you want a GitOps-friendly approach and are comfortable with newer tooling.
Practical Example: Complete PostgreSQL Deployment
Let us put it all together with a complete example using the Agent Injector to provide dynamic PostgreSQL credentials to a web application.
Vault Configuration
# 1. Enable and configure the database engine
vault secrets enable database
vault write database/config/webapp-db \
plugin_name="postgresql-database-plugin" \
allowed_roles="webapp-creds" \
connection_url="postgresql://{{username}}:{{password}}@pg.internal:5432/webapp?sslmode=require" \
username="vault_admin" \
password="admin-pass"
vault write -f database/rotate-root/webapp-db
# 2. Create the database role
vault write database/roles/webapp-creds \
db_name="webapp-db" \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\"; \
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO \"{{name}}\";" \
revocation_statements="REVOKE ALL ON ALL TABLES IN SCHEMA public FROM \"{{name}}\"; DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="8h"
# 3. Create the policy
vault policy write webapp-db-creds - <<'EOF'
path "database/creds/webapp-creds" {
capabilities = ["read"]
}
path "sys/leases/renew" {
capabilities = ["update"]
}
path "auth/token/renew-self" {
capabilities = ["update"]
}
EOF
# 4. Create the Kubernetes auth role
vault write auth/kubernetes/role/webapp-db \
bound_service_account_names="webapp-sa" \
bound_service_account_namespaces="production" \
policies="webapp-db-creds" \
ttl="2h" \
max_ttl="8h"
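Before wiring this into Kubernetes, verify the Vault side end to end; if credential generation fails here, it will also fail inside the init container, just with less helpful logs:

```shell
# Should return a fresh username/password pair with a 1h lease
vault read database/creds/webapp-creds

# Confirm the service account binding Vault will enforce for the pods
vault read auth/kubernetes/role/webapp-db
```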
Kubernetes Manifests
# full-deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: webapp-sa
  namespace: production
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "webapp-db"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/webapp-creds"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/webapp-creds" -}}
          DATABASE_URL=postgresql://{{ .Data.username }}:{{ .Data.password }}@pg.internal:5432/webapp?sslmode=require
          DB_USER={{ .Data.username }}
          DB_PASS={{ .Data.password }}
          {{- end }}
        vault.hashicorp.com/agent-limits-cpu: "100m"
        vault.hashicorp.com/agent-limits-mem: "64Mi"
    spec:
      serviceAccountName: webapp-sa
      containers:
        - name: webapp
          image: myregistry/webapp:v2.1.0
          command: ["/bin/sh", "-c"]
          args:
            - |
              while [ ! -f /vault/secrets/db-creds ]; do sleep 1; done
              export $(cat /vault/secrets/db-creds | xargs)
              exec node server.js
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
  namespace: production
spec:
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
The sidecar handles authentication, credential fetching, lease renewal, and re-rendering the template when credentials rotate. Your application just reads a file.
Network Policies for Vault Access
Restrict which pods can communicate with Vault:
# Allow only labeled namespaces to access Vault
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vault-access
  namespace: vault
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: vault
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              vault-access: "true"
      ports:
        - port: 8200
          protocol: TCP
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: vault
      ports:
        - port: 8200
          protocol: TCP
        - port: 8201
          protocol: TCP
Label the namespaces that should have Vault access:
kubectl label namespace production vault-access=true
kubectl label namespace staging vault-access=true
Troubleshooting Common Issues
Pod Stuck in Init
The most common issue is the Vault Agent init container failing to authenticate. Always check the init container logs first:
# Check init container logs
kubectl logs webapp-pod-abc123 -c vault-agent-init -n production
# Check events on the pod
kubectl describe pod webapp-pod-abc123 -n production
# Common error messages and their causes:
# "permission denied" -- policy does not grant access to the secret path
# "invalid role" -- service account name or namespace does not match the Vault role
# "connection refused" -- Vault is unreachable from the pod network
# "x509: certificate signed by unknown authority" -- CA cert mismatch
Common causes and fixes:
- Service account mismatch: The pod's service account name or namespace does not match what is configured in the Vault role. Verify with kubectl get sa -n production and compare with vault read auth/kubernetes/role/webapp.
- Vault unreachable: The pod cannot reach the Vault server. Test with kubectl run debug --image=curlimages/curl -it --rm -- curl -k https://vault.example.com:8200/v1/sys/health.
- Kubernetes auth misconfigured: The CA cert or API server URL in the Kubernetes auth config is wrong. Re-check with vault read auth/kubernetes/config.
- Token reviewer permissions: The service account Vault uses for token review does not have the system:auth-delegator ClusterRole.
Permission Denied on Secret Access
# Check what policies the authenticated token has
kubectl exec webapp-pod-abc123 -c vault-agent -- cat /home/vault/.vault-token
# Use that token to check capabilities
vault token capabilities hvs.tokenHere secret/data/webapp/production
# Common mistake: forgetting "data/" in KV v2 paths
# WRONG: path "secret/webapp/production"
# RIGHT: path "secret/data/webapp/production"
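The CLI hides this mapping, which is why the mistake is so common: the kv subcommand inserts data/ into the path for you, while vault read takes the literal API path. Comparing the two makes the difference visible:

```shell
# Equivalent reads -- the kv subcommand inserts "data/" automatically
vault kv get secret/webapp/production
vault read secret/data/webapp/production

# Policies, however, always match against the full API path:
#   path "secret/data/webapp/production" { capabilities = ["read"] }
```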
Sidecar Not Updating Secrets
If the sidecar is running but secrets are not updating:
# Check sidecar logs for errors
kubectl logs webapp-pod-abc123 -c vault-agent -n production
# Verify the lease is still valid and renewable
kubectl exec webapp-pod-abc123 -c vault-agent -- cat /home/vault/.vault-token
# Check if agent-pre-populate-only is accidentally set to true
kubectl get pod webapp-pod-abc123 -o jsonpath='{.metadata.annotations}' | jq .
Ensure agent-pre-populate-only is not set to true if you need ongoing renewal.
Resource Pressure from Sidecars
Each injected pod gets a sidecar container. In large clusters with hundreds or thousands of pods, this adds up significantly. Monitor the resource usage:
# Check resource usage of vault-agent sidecars
kubectl top pods -n production -l app=webapp --containers | grep vault-agent
# Count vault-agent sidecar containers across the cluster
kubectl get pods --all-namespaces -o json | \
  jq '[.items[].spec.containers[] | select(.name == "vault-agent")] | length'
Mitigation strategies:
- Set low resource requests and limits on sidecars using annotations
- Use init-only mode for static secrets (eliminates sidecar)
- Switch to the CSI Provider or VSO for static secrets
- Use Vault Agent caching to reduce API calls
Summary
The Vault-Kubernetes integration gives you centralized secret management with zero long-lived secrets stored in Kubernetes etcd. The three integration methods serve different needs: the Agent Injector excels at dynamic secrets with continuous lease renewal and complex template rendering; the CSI Provider provides static secrets with minimal resource overhead through its DaemonSet architecture; and the Vault Secrets Operator brings GitOps-native secret management through Kubernetes CRDs. Whichever method you choose, always start by setting up the Kubernetes auth method and testing authentication with a simple static secret before building out dynamic credentials, PKI certificates, or multi-cluster configurations. The investment in proper Vault-Kubernetes integration pays dividends in security posture, operational simplicity, and compliance readiness.