Encrypting Kubernetes Secrets at Rest: Because Base64 Is Not Encryption
Your Kubernetes Secrets Are Not Encrypted
Here's the thing that still catches experienced engineers off guard: by default, Kubernetes secrets are stored in etcd as base64-encoded plaintext. Not encrypted. Not protected. Base64, which is an encoding scheme, not a security mechanism.
Anyone with read access to etcd — whether through a compromised node, a backup tape, or an overly permissive RBAC policy — can read every secret in your cluster. Database passwords, API keys, TLS certificates, all of it.
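To make that concrete: base64 decodes with one command and no key. A quick demonstration with a made-up password (the value here is obviously a placeholder):

```shell
# base64 is reversible by design; no key is involved anywhere
encoded=$(printf 'hunter2' | base64)
echo "$encoded"                       # aHVudGVyMg==
printf '%s' "$encoded" | base64 -d    # hunter2
```

If your "protection" can be undone by a one-liner on any machine, it isn't protection.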
Let me tell you why this matters and how to fix it with encryption at rest using KMS providers.
What "At Rest" Actually Means
Encryption at rest protects data as it's stored on disk. For Kubernetes, that means the data written to etcd. This is distinct from encryption in transit (which TLS handles between the API server and etcd) and from encryption in use (which is a much harder problem).
Without encryption at rest, here's what an attacker sees if they get access to etcd data:
# Reading a secret directly from etcd (this is what your backup contains)
ETCDCTL_API=3 etcdctl get /registry/secrets/production/database-credentials
# Output includes base64-encoded values that trivially decode to:
# DB_PASSWORD=super_secret_production_password_2026
With encryption at rest enabled, that same etcd read returns ciphertext that's useless without the encryption key.
The Encryption Configuration
Kubernetes supports encryption at rest through an EncryptionConfiguration resource that you pass to the API server. This tells the API server how to encrypt and decrypt resources before writing them to or reading them from etcd.
Here's a basic configuration using AES-CBC:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
      - configmaps
    providers:
      - aescbc:
          keys:
            - name: key-2026-03
              secret: c2VjcmV0LWtleS1oZXJlLTMyLWJ5dGVzLWxvbmc=  # must decode to a 16-, 24-, or 32-byte key
      - identity: {}
Let me break this down because every field matters:
- resources: Which Kubernetes resource types to encrypt. At minimum, encrypt secrets. I also encrypt configmaps because teams frequently put sensitive data in them despite being told not to.
- providers: Ordered list of encryption providers. The first provider is used for encryption. All providers are tried for decryption (which is how you do key rotation).
- identity: The plaintext provider. Having it last means the API server can still read old unencrypted data, but all new writes will use AES-CBC.
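The secret values in the config are base64-encoded random keys. One way to generate a valid one (illustrative, any cryptographically random source works):

```shell
# aescbc accepts 16-, 24-, or 32-byte keys; generate a 32-byte one
key=$(head -c 32 /dev/urandom | base64)
echo "$key"

# sanity check: the decoded key must be exactly 32 bytes
printf '%s' "$key" | base64 -d | wc -c   # 32 (GNU coreutils; BSD wc pads with spaces)
```

Never reuse a key across clusters, and never commit it to version control alongside the config.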
Why AES-CBC Isn't Good Enough
The basic aescbc provider works, but it has a significant problem: the encryption key is stored in a file on the control plane node's filesystem. If an attacker compromises the control plane, they get both the encrypted data (from etcd) and the key to decrypt it (from the config file). That's not meaningful security — it's a speed bump.
Here's the thing — for real security, you need the encryption key to live outside the cluster entirely. That's where KMS providers come in.
KMS Provider Architecture
The KMS (Key Management Service) provider delegates encryption to an external service. The API server never sees the actual encryption key. Instead:
- The API server generates a DEK (Data Encryption Key) locally (with KMS v1, a fresh DEK per object; KMS v2 can cache and reuse DEKs)
- The DEK is sent to the KMS plugin, which encrypts it using a KEK (Key Encryption Key) that lives in the external KMS
- The encrypted DEK is stored alongside the encrypted data in etcd
- On decryption, the encrypted DEK is sent to the KMS plugin, which decrypts it using the KEK
- The API server uses the decrypted DEK to decrypt the actual data
Write path:
Secret data → [DEK encrypts data] → [KMS encrypts DEK] → etcd stores encrypted data + encrypted DEK
Read path:
etcd → encrypted DEK → [KMS decrypts DEK] → [DEK decrypts data] → Secret data
This is envelope encryption, and it's the industry standard approach. The KEK never leaves the KMS, so compromising the cluster doesn't give you the ability to decrypt anything.
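The flow above can be sketched with openssl. This is a toy illustration of envelope encryption, not the actual KMS plugin protocol, and the file names are made up:

```shell
# toy envelope encryption: the KEK wraps the DEK, the DEK encrypts the data
workdir=$(mktemp -d) && cd "$workdir"
printf 'db-password' > plaintext
openssl rand -hex 32 > dek        # data encryption key
openssl rand -hex 32 > kek        # key encryption key (this is what lives in the KMS)

# write path: encrypt data with the DEK, then wrap the DEK with the KEK
openssl enc -aes-256-cbc -pbkdf2 -pass file:dek -in plaintext -out data.enc
openssl enc -aes-256-cbc -pbkdf2 -pass file:kek -in dek -out dek.enc
rm dek                            # only the wrapped DEK is stored alongside the data

# read path: unwrap the DEK with the KEK, then decrypt the data
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:kek -in dek.enc -out dek
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:dek -in data.enc -out plaintext.dec
cmp plaintext plaintext.dec && echo "round trip OK"
```

Note what an attacker with only `data.enc` and `dek.enc` has: nothing usable. The KEK is the single point they'd need, and it never leaves the KMS.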
Setting Up AWS KMS Provider
For AWS-managed Kubernetes (EKS), KMS integration is relatively straightforward. For self-managed clusters on AWS, here's the full setup.
Step 1: Create the KMS Key
aws kms create-key \
--description "Kubernetes secrets encryption key" \
--key-usage ENCRYPT_DECRYPT \
--key-spec SYMMETRIC_DEFAULT \
--tags TagKey=Environment,TagValue=production
# Note the KeyId from the output
# arn:aws:kms:us-east-1:123456789012:key/abcd1234-5678-90ef-ghij-klmnopqrstuv
Step 2: Deploy the KMS Plugin
The AWS KMS plugin runs as a gRPC server on each control plane node. It communicates with the API server over a Unix socket.
# /etc/kubernetes/manifests/aws-encryption-provider.yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-encryption-provider
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: aws-encryption-provider
      image: registry.k8s.io/provider-aws/aws-encryption-provider:v1.0.0
      command:
        - /aws-encryption-provider
        - --key=arn:aws:kms:us-east-1:123456789012:key/abcd1234-5678-90ef-ghij-klmnopqrstuv
        - --region=us-east-1
        - --listen=/var/run/kmsplugin/socket.sock
      volumeMounts:
        - name: kmsplugin
          mountPath: /var/run/kmsplugin
  volumes:
    - name: kmsplugin
      hostPath:
        path: /var/run/kmsplugin
        type: DirectoryOrCreate
Step 3: Configure the API Server
# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
      - configmaps
    providers:
      - kms:
          apiVersion: v2
          name: aws-kms-provider
          endpoint: unix:///var/run/kmsplugin/socket.sock
          timeout: 3s
      - identity: {}
Add the encryption config flag to the API server:
# In the kube-apiserver manifest
spec:
  containers:
    - command:
        - kube-apiserver
        - --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
        # ... other flags
      volumeMounts:
        - name: encryption-config
          mountPath: /etc/kubernetes/encryption-config.yaml
          readOnly: true
        - name: kmsplugin
          mountPath: /var/run/kmsplugin
  volumes:
    # hostPath volumes backing the mounts above (paths assume a kubeadm-style layout)
    - name: encryption-config
      hostPath:
        path: /etc/kubernetes/encryption-config.yaml
        type: File
    - name: kmsplugin
      hostPath:
        path: /var/run/kmsplugin
Verifying Encryption Is Working
After enabling encryption, you need to verify it's actually encrypting data. Here's how:
# Create a test secret
kubectl create secret generic encryption-test \
--from-literal=verify=encrypted-at-rest \
-n default
# Read it directly from etcd
ETCDCTL_API=3 etcdctl get /registry/secrets/default/encryption-test \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
# If encryption is working, you should see binary/ciphertext
# prefixed with "k8s:enc:kms:v2:aws-kms-provider:"
# NOT the plaintext value "encrypted-at-rest"
If you see the plaintext value, encryption isn't active. Check the API server logs for errors related to the encryption provider.
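For scripting that check across many values, a small helper. The `k8s:enc:` prefixes are the ones Kubernetes writes; the provider name after them comes from your own EncryptionConfiguration:

```shell
# classify a raw etcd value by its storage prefix
check_encrypted() {
  case "$1" in
    k8s:enc:kms:*)    echo "encrypted (kms)" ;;
    k8s:enc:aescbc:*) echo "encrypted (aescbc)" ;;
    *)                echo "PLAINTEXT" ;;
  esac
}

check_encrypted 'k8s:enc:kms:v2:aws-kms-provider:...'    # encrypted (kms)
check_encrypted '{"apiVersion":"v1","kind":"Secret"}'    # PLAINTEXT
```

Pipe etcd values through this in an audit script and alert on any `PLAINTEXT` hit.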
Encrypting Existing Secrets
Here's a critical detail that most guides bury in a footnote: enabling encryption at rest only affects new writes. All your existing secrets are still stored unencrypted in etcd. You need to re-encrypt them:
# Re-encrypt all secrets in the cluster
kubectl get secrets --all-namespaces -o json | \
kubectl replace -f -
# Verify by checking etcd directly for a few secrets
# They should now show the encrypted prefix
This triggers a read-then-write for every secret, which encrypts them with the new provider. For large clusters, do this namespace by namespace during a maintenance window to avoid overloading the API server.
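A namespace-by-namespace version of that re-encryption might look like this. It's a sketch: the sleep is an arbitrary throttle, so tune it to your cluster's size and API server headroom:

```shell
# re-encrypt secrets one namespace at a time to limit API server load
for ns in $(kubectl get namespaces -o name | cut -d/ -f2); do
  echo "re-encrypting secrets in $ns"
  kubectl get secrets -n "$ns" -o json | kubectl replace -f -
  sleep 5   # arbitrary pause between namespaces
done
```

If a `replace` fails because a controller mutated the secret mid-flight, just re-run that namespace; the operation is idempotent.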
Key Rotation
Encryption keys should be rotated periodically. With KMS providers, key rotation happens in the KMS itself, and it's significantly simpler than rotating local keys.
For AWS KMS, enable automatic key rotation:
aws kms enable-key-rotation \
--key-id arn:aws:kms:us-east-1:123456789012:key/abcd1234-5678-90ef-ghij-klmnopqrstuv
AWS rotates the key material annually and handles the decryption of old data transparently. The key ARN doesn't change, so no Kubernetes configuration updates are needed.
For manual rotation or rotation of local aescbc keys, the process is more involved:
# 1. Add the new key as the FIRST in the list (used for new encryptions)
# 2. Keep the old key SECOND (used to decrypt existing data)
providers:
  - aescbc:
      keys:
        - name: key-2026-06
          secret: bmV3LXNlY3JldC1rZXktaGVyZS0zMi1ieXRlcw==
        - name: key-2026-03
          secret: c2VjcmV0LWtleS1oZXJlLTMyLWJ5dGVzLWxvbmc=
  - identity: {}
# 3. Restart the API server
# 4. Re-encrypt all secrets (they'll use the new key)
# 5. Remove the old key from the config
# 6. Restart the API server again
Let me tell you why the ordering matters: the API server uses the first key for encryption and tries all keys for decryption. If you put the new key second, everything continues to be encrypted with the old key. I've seen teams think they rotated their keys when they actually just added a backup decryption key.
What Encryption at Rest Doesn't Protect Against
Let me be honest about the threat model here. Encryption at rest protects against:
- Physical theft of etcd storage (disks, backups)
- Unauthorized direct access to etcd
- Data leakage through etcd snapshots
It does not protect against:
- Someone with kubectl get secret permissions (they get the decrypted value through the API)
- A compromised API server process (it has access to decrypt everything)
- Secrets exposed as environment variables in pod specs (visible in kubectl describe pod)
Encryption at rest is one layer in a defense-in-depth strategy. You still need proper RBAC, network policies, audit logging, and ideally an external secrets manager like HashiCorp Vault or AWS Secrets Manager for truly sensitive credentials.
Monitoring and Alerting
Once encryption is in place, monitor it:
# Check encryption provider health (KMS v2 registers a named health check per provider)
kubectl get --raw '/healthz?verbose' | grep kms-provider
# Monitor API server metrics for KMS latency and DEK cache behavior
# apiserver_envelope_encryption_dek_cache_fill_percent
# apiserver_envelope_encryption_dek_cache_inter_arrival_time_seconds
Set up alerts for KMS provider failures. If the KMS becomes unreachable, the API server cannot encrypt new secrets or decrypt existing ones. This can cascade into pod scheduling failures if pods reference secrets that can't be read.
Final Thoughts
If you're running Kubernetes in any environment where compliance matters — and that's most environments these days — encryption at rest for secrets is table stakes. The default behavior of storing base64 plaintext in etcd is a compliance finding waiting to happen.
KMS providers are the right approach because they keep the encryption keys outside the blast radius of a cluster compromise. The setup takes an afternoon, key rotation can be automated, and you get to check a box on your next security audit with actual substance behind it.
Start by enabling encryption for secrets and configmaps. Verify it's working. Re-encrypt existing data. Set up key rotation. Then move on to the harder problems of secrets management that encryption at rest doesn't solve.
Senior Kubernetes Architect
10+ years orchestrating containers in production. Battle-tested opinions on everything from pod scheduling to service mesh. I've seen clusters burn and helped rebuild them better.
Related Articles
Zero-Trust Networking in Kubernetes with Network Policies
How to implement zero-trust networking in Kubernetes using NetworkPolicies — deny by default, allow by exception, and sleep better at night.
Kubernetes Pod Security Standards: A Complete Guide
Learn everything about Kubernetes Pod Security Standards (PSS) and Pod Security Admission (PSA) — from baseline to restricted profiles with practical examples.
The Complete Guide to Kubernetes Deployment Strategies: Rolling, Blue-Green, Canary, and Progressive Delivery
A comprehensive guide to every Kubernetes deployment strategy — rolling updates, blue-green, canary, and progressive delivery with Argo Rollouts and Flagger.