Kubernetes Pod Security Standards: A Complete Guide
What Are Pod Security Standards?
Kubernetes Pod Security Standards (PSS) define three levels of security policies that cover a broad spectrum of security needs. They are designed to be simple and straightforward, giving cluster administrators a common language for pod security.
The three levels are:
- Privileged — Unrestricted, providing the widest possible level of permissions
- Baseline — Minimally restrictive, preventing known privilege escalations
- Restricted — Heavily restricted, following current pod hardening best practices
Why Pod Security Matters
I've seen production clusters compromised because someone deployed a privileged container that didn't need to be privileged. It takes one misconfigured pod to give an attacker node-level access. Pod Security Standards exist to prevent exactly this.
Before PSS, we had PodSecurityPolicies (PSP), which were deprecated in Kubernetes 1.21 and removed in 1.25. If you're still running PSP — it's time to migrate.
Pod Security Admission Controller
The Pod Security Admission (PSA) controller is the built-in mechanism for enforcing Pod Security Standards. It has shipped enabled by default since Kubernetes 1.23 (as beta) and graduated to stable in 1.25.
Configuration Modes
PSA operates in three modes per namespace:
| Mode | Behavior |
|---|---|
| enforce | Rejects pods that violate the policy |
| audit | Logs violations but allows the pod |
| warn | Sends warnings to the user but allows the pod |
Labeling Namespaces
Apply security standards to namespaces using labels:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
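For an existing namespace, the same labels can be applied imperatively, which is often more convenient in scripts (a sketch using standard `kubectl label` syntax):

```bash
# Imperative equivalent for an existing namespace
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted \
  --overwrite
```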
Baseline Profile Deep Dive
The baseline profile prevents known privilege escalations. Here's what it restricts:
Prohibited Fields
```yaml
# These are NOT allowed under baseline:
spec:
  hostNetwork: true       # Denied
  hostPID: true           # Denied
  hostIPC: true           # Denied
  containers:
  - securityContext:
      privileged: true    # Denied
      capabilities:
        add:
        - NET_RAW         # Denied: only a specific allowlist of capabilities may be added
```
What Baseline Allows
- Running as any user (including root)
- Most volume types
- Default capabilities
- Non-privileged containers
This is your minimum viable security. If you're not at least running baseline, you're running naked.
Baseline-Compliant Pod Example
Here's a pod that passes baseline but not restricted:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: baseline-pod
spec:
  containers:
  - name: app
    image: myapp:v1.2.3
    securityContext:
      privileged: false   # The default; baseline rejects pods that set this to true
    ports:
    - containerPort: 8080
```
Notice what's missing: no runAsNonRoot, no capabilities.drop, no seccomp profile. Baseline is the floor, not the target. It blocks the most dangerous configurations (privileged containers, host namespaces) but still allows a lot of insecure patterns.
Restricted Profile Deep Dive
The restricted profile follows current hardening best practices. This is what I recommend for all production workloads.
Required Security Context
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
```
Key Restrictions
- Must run as non-root — `runAsNonRoot: true`
- Must drop all capabilities — `capabilities.drop: ["ALL"]`
- No privilege escalation — `allowPrivilegeEscalation: false`
- Seccomp profile required — `RuntimeDefault` or `Localhost`
- Restricted volume types only — ConfigMap, Secret, PVC, EmptyDir, etc.
Migration Strategy
Moving from no security to restricted doesn't happen overnight. Here's the approach I've used across multiple clusters:
Phase 1: Audit Everything
```bash
# Label all namespaces with audit mode first
kubectl label ns --all \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
```
Check your audit logs to see what would break.
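Note that audit-mode violations land in the API server's audit log (as the `pod-security.kubernetes.io/audit-violations` annotation on audit events), not in `kubectl get events`. Here's a sketch of summarizing them with `jq` — the inline heredoc is a stand-in sample; in practice you'd point `jq` at wherever your `--audit-log-path` writes:

```bash
# Sketch: pull PSS audit violations out of a JSON-lines audit log.
# The heredoc below is a stand-in sample; replace it with your real
# audit log file (requires audit logging to be enabled on the API server).
AUDIT_LOG="$(mktemp)"
cat > "$AUDIT_LOG" <<'EOF'
{"objectRef":{"namespace":"prod","name":"web-1"},"annotations":{"pod-security.kubernetes.io/audit-violations":"would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false"}}
{"objectRef":{"namespace":"prod","name":"web-2"},"annotations":{}}
EOF
jq -r '
  select(.annotations["pod-security.kubernetes.io/audit-violations"] != null)
  | "\(.objectRef.namespace)/\(.objectRef.name): \(.annotations["pod-security.kubernetes.io/audit-violations"])"
' "$AUDIT_LOG"
```

Pipe the result through `sort | uniq -c` to see which workloads generate the most violations.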
Phase 2: Fix Violations
Most violations come from:
- Running as root (add `runAsNonRoot: true`)
- Missing seccomp profiles
- Not dropping capabilities
- Using host networking unnecessarily
Phase 3: Enforce
```bash
# Enforce baseline first, then restricted
kubectl label ns production \
  pod-security.kubernetes.io/enforce=baseline

# Once clean, upgrade to restricted
kubectl label ns production \
  pod-security.kubernetes.io/enforce=restricted --overwrite
```
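Before flipping enforcement, you can ask the API server which existing pods would break: `kubectl label` with `--dry-run=server` evaluates the change without persisting it and returns one warning per violating pod.

```bash
# Preview the impact of enforcement without changing anything.
# Each existing pod that would violate the level comes back as a warning.
kubectl label --dry-run=server --overwrite ns production \
  pod-security.kubernetes.io/enforce=restricted
```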
Configuring PSA at the Cluster Level
Namespace labels are great for targeted enforcement, but in a production cluster with dozens of namespaces, you want a cluster-wide default. The PSA admission controller can be configured with a default configuration that applies when namespaces don't have explicit labels.
Create an admission configuration file:
```yaml
# /etc/kubernetes/psa-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces:
      - kube-system
      - kube-public
      - kube-node-lease
      - monitoring
```
Pass this to the API server with the --admission-control-config-file flag. On managed Kubernetes services like EKS or GKE, you'll rely on namespace labels instead since you don't control the API server flags directly.
The key insight here: I set the default enforcement to baseline while auditing and warning at restricted. This means every new namespace gets baseline enforcement automatically, while your logs show you what would break under restricted. When a namespace is ready, you upgrade its label to enforce: restricted.
Exemptions: When You Genuinely Need Privilege
Some workloads legitimately need elevated privileges. CNI plugins, log collectors, and monitoring agents often need host-level access. The right approach isn't to weaken the policy — it's to use targeted exemptions.
Namespace-Level Exemptions
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Setting enforce to privileged but auditing at restricted means monitoring tools run without interference, but you still get visibility into which pods could be tightened.
Workload-Specific Security Contexts for Exempted Namespaces
Even in exempted namespaces, apply security contexts where you can:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentbit
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: fluentbit
  template:
    metadata:
      labels:
        app: fluentbit
    spec:
      serviceAccountName: fluentbit
      hostNetwork: false
      dnsPolicy: ClusterFirst
      containers:
      - name: fluentbit
        image: fluent/fluent-bit:3.2
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
            add: ["DAC_READ_SEARCH"]   # Only what's needed for log reading
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
```
This DaemonSet needs hostPath volumes (which both baseline and restricted deny), but it still drops all capabilities except the one it actually needs, sets readOnlyRootFilesystem, and blocks privilege escalation. The principle: request the minimum elevation required, not a blanket exemption from all security.
Validating Workloads Before Deployment
Don't wait until deployment time to discover PSS violations. Shift this left into your CI pipeline.
Using kubectl dry-run for Pre-Deployment Checks
```bash
# Check whether a manifest would be accepted under the namespace's policy
kubectl apply --dry-run=server -f deployment.yaml --namespace production

# The output will include warnings for any violations. Example:
# Warning: would violate PodSecurity "restricted:latest":
#   allowPrivilegeEscalation != false,
#   unrestricted capabilities,
#   runAsNonRoot != true
```
Automated CI Validation with Kyverno CLI
```bash
# Install the Kyverno CLI (macOS; see the Kyverno docs for other platforms)
brew install kyverno

# Create a PSS-equivalent policy file
cat > pss-restricted.yaml <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pss-restricted-check
spec:
  validationFailureAction: Audit
  rules:
  - name: restricted-volumes
    match:
      any:
      - resources:
          kinds: ["Pod"]
    validate:
      message: "Only specific volume types are allowed under restricted."
      deny:
        conditions:
          any:
          - key: "{{ request.object.spec.volumes[].hostPath || '' }}"
            operator: NotEquals
            value: ""
  - name: drop-all-capabilities
    match:
      any:
      - resources:
          kinds: ["Pod"]
    validate:
      message: "Containers must drop ALL capabilities."
      pattern:
        spec:
          containers:
          - securityContext:
              capabilities:
                drop: ["ALL"]
EOF

# Validate your manifests against the policy
kyverno apply pss-restricted.yaml --resource deployment.yaml
```
Integration into GitHub Actions
```yaml
- name: Validate Pod Security Standards
  run: |
    # Validate all manifests against restricted PSS
    for file in $(find k8s/ -name '*.yaml' -o -name '*.yml'); do
      echo "Checking $file..."
      kubectl apply --dry-run=server -f "$file" \
        --namespace pss-test 2>&1 | tee -a pss-report.txt
    done
    # Fail if any warnings were found
    if grep -q "would violate PodSecurity" pss-report.txt; then
      echo "PSS violations found. See report above."
      exit 1
    fi
```
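One caveat: `--dry-run=server` needs a live API server, which a bare CI runner doesn't have. A common workaround, sketched here assuming the `kind` CLI and Docker are available on the runner, is to spin up a throwaway cluster with a labeled scratch namespace before the validation loop runs:

```bash
# Throwaway cluster for server-side PSS validation in CI
kind create cluster --name pss-ci --wait 120s
kubectl create namespace pss-test
kubectl label namespace pss-test \
  pod-security.kubernetes.io/enforce=restricted

# ... run the validation loop, then tear down:
kind delete cluster --name pss-ci
```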
Migrating from PodSecurityPolicy to PSA
If you're still on PSP (or recently migrated from a cluster that used them), here's the mapping between common PSP configurations and their PSA equivalents.
PSP to PSS Mapping Reference
| PSP Field | PSS Level | Notes |
|---|---|---|
| `privileged: false` | Baseline | Baseline denies privileged containers |
| `hostNetwork: false` | Baseline | Baseline denies host networking |
| `hostPID: false` | Baseline | Baseline denies the host PID namespace |
| `runAsUser.rule: MustRunAsNonRoot` | Restricted | Restricted requires non-root |
| `requiredDropCapabilities: [ALL]` | Restricted | Restricted requires dropping all caps |
| `allowPrivilegeEscalation: false` | Restricted | Restricted denies privilege escalation |
| `volumes: [configMap, secret, pvc, emptyDir]` | Restricted | Restricted limits volume types |
| `readOnlyRootFilesystem: true` | Neither | PSS doesn't enforce this — use admission controllers |
The biggest difference: PSP was a cluster-level resource applied via RBAC. PSA is namespace-level via labels. This means your migration needs to touch every namespace, not just the RBAC bindings.
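For controls PSS doesn't cover — `readOnlyRootFilesystem` being the notable one in the mapping table — a policy engine fills the gap. Here's a sketch of a Kyverno rule for it; the policy and rule names are my own placeholders:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ro-rootfs          # placeholder name
spec:
  validationFailureAction: Audit   # flip to Enforce once workloads are clean
  rules:
  - name: require-read-only-root-filesystem
    match:
      any:
      - resources:
          kinds: ["Pod"]
    validate:
      message: "Containers must set readOnlyRootFilesystem: true."
      pattern:
        spec:
          containers:
          - securityContext:
              readOnlyRootFilesystem: true
```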
Step-by-Step Migration Script
```bash
#!/bin/bash
set -euo pipefail

echo "=== PSP to PSA Migration ==="

# Step 1: Identify all namespaces and their current PSP bindings
echo "--- Current PSP Bindings ---"
kubectl get psp 2>/dev/null || echo "No PSPs found (already removed?)"

# Step 2: Label all application namespaces with audit+warn first
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep -v '^kube-'); do
  echo "Labeling $ns with audit=restricted, warn=restricted"
  kubectl label namespace "$ns" \
    pod-security.kubernetes.io/audit=restricted \
    pod-security.kubernetes.io/warn=restricted \
    --overwrite
done

# Step 3: Check audit logs after 48 hours
echo ""
echo "Wait 48 hours, then review the API server audit log for events carrying"
echo "the pod-security.kubernetes.io/audit-violations annotation."
echo ""

# Step 4: After fixing violations, enforce baseline
echo "When ready, enforce baseline on all namespaces:"
echo "kubectl label namespace <name> pod-security.kubernetes.io/enforce=baseline --overwrite"
echo ""
echo "Then upgrade individual namespaces to restricted as they become compliant."
```
Monitoring PSA Violations
Set up alerts for PSA violations so you know when workloads are being blocked or would be blocked:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: psa-violations
  namespace: monitoring
spec:
  groups:
  - name: pod-security
    rules:
    - alert: PodSecurityViolationWarning
      expr: |
        increase(
          apiserver_audit_event_total{
            verb=~"create|update",
            resource="pods",
            annotations_authorization_k8s_io_decision="allow",
            annotations_pod_security_kubernetes_io_audit_violations!=""
          }[1h]
        ) > 10
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High rate of pod security audit violations"
        description: "More than 10 pods in the last hour would violate the restricted profile. Review and remediate before enforcing."
```
Real-World Security Context Templates
Here are battle-tested security context configurations for common workload types.
Web Application (Node.js, Python, Go)
```yaml
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    volumeMounts:
    - name: tmp
      mountPath: /tmp
  volumes:
  - name: tmp
    emptyDir:
      sizeLimit: 100Mi
```
Java Application (Needs Writable Temp and PID Files)
```yaml
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: heap-dumps
      mountPath: /app/dumps
  volumes:
  - name: tmp
    emptyDir:
      sizeLimit: 500Mi
  - name: heap-dumps
    emptyDir:
      sizeLimit: 2Gi
```
NGINX Reverse Proxy
```yaml
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 101    # nginx user
    runAsGroup: 101
    fsGroup: 101
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: nginx
    image: nginxinc/nginx-unprivileged:1.27
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    ports:
    - containerPort: 8080   # Unprivileged port
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: cache
      mountPath: /var/cache/nginx
    - name: run
      mountPath: /var/run
  volumes:
  - name: tmp
    emptyDir: {}
  - name: cache
    emptyDir: {}
  - name: run
    emptyDir: {}
```
Note the use of nginxinc/nginx-unprivileged instead of the standard nginx image. The standard image tries to bind to port 80 and run as root. The unprivileged variant runs as user 101 on port 8080. Always prefer images that are designed to run as non-root.
Common Pitfalls
Pitfall 1: Init containers forgotten. Security contexts apply to init containers too. I've seen deployments fail because the init container ran as root while the main container was restricted.
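A sketch of what that fix looks like — the init container's name and image are placeholders — showing that the container-level fields must be repeated on every init container:

```yaml
spec:
  initContainers:
  - name: db-migrate                # placeholder name and image
    image: myapp-migrate:v1.2.3
    securityContext:                # restricted checks these here too
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
  containers:
  - name: app
    image: myapp:v1.2.3
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
```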
Pitfall 2: Helm chart defaults. Many Helm charts don't set security contexts. Always check and override with your own values.
```yaml
# Override Helm chart security contexts in values.yaml
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
  seccompProfile:
    type: RuntimeDefault
containerSecurityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
```
Pitfall 3: Exemptions creep. Once you start exempting namespaces, the list grows. Document every exemption with a timeline for removal.
Pitfall 4: Ephemeral containers for debugging. When you use kubectl debug to attach an ephemeral container, PSA evaluates it against the namespace's policy like any other container — so a root debugging image gets rejected in a restricted namespace. If you need root access for debugging, copy the pod into a less restrictive namespace (kubectl debug --copy-to) or use node-level debugging:
```bash
# Debug at the node level instead of injecting into restricted pods
kubectl debug node/my-node -it --image=busybox
```
Pitfall 5: Image UID mismatch. If your container image runs as user 1000 but your security context specifies runAsUser: 10001, the process can't read files owned by user 1000 inside the image. Always match the runAsUser to the user baked into the image, or ensure file permissions are set to group-readable with a matching fsGroup.
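To find the UID actually baked into an image before pinning runAsUser, inspect its config — a quick check; the image and pod names below are placeholders:

```bash
# Shows the USER set at image build time; empty output means root (UID 0)
docker inspect --format '{{.Config.User}}' myapp:v1.2.3

# Or check what a running pod's process actually runs as
kubectl exec -it my-pod -- id -u
```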
Third-Party Tools for PSS Enforcement
While PSA is the built-in mechanism, there are cases where you need more flexibility.
Kyverno for Fine-Grained Pod Security
Kyverno lets you enforce PSS-equivalent policies with exceptions that PSA can't express:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restricted-with-exceptions
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: require-run-as-non-root
    match:
      any:
      - resources:
          kinds: ["Pod"]
    exclude:
      any:
      - resources:
          namespaces: ["monitoring"]
          selector:
            matchLabels:
              app.kubernetes.io/name: node-exporter
    validate:
      message: "Pods must run as non-root."
      pattern:
        spec:
          securityContext:
            runAsNonRoot: true
          containers:
          - securityContext:
              runAsNonRoot: true
```
This policy enforces runAsNonRoot everywhere except for node-exporter in the monitoring namespace. PSA labels can only exempt entire namespaces — Kyverno lets you exempt specific workloads within an otherwise restricted namespace.
OPA Gatekeeper Constraints
If your organization prefers Gatekeeper over Kyverno, here's the equivalent:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: psp-privileged-container
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]
  parameters:
    exemptImages:
    - "docker.io/calico/*"
    - "quay.io/prometheus/*"
```
Both tools give you audit trails, mutation capabilities, and policy reporting that PSA doesn't provide natively. For production clusters with complex exemption requirements, I recommend running one of these alongside PSA for the additional flexibility.
PSS Profiles Reference Card
Here's a quick reference for what each profile allows and denies. Print this out and keep it near your desk:
| Control | Privileged | Baseline | Restricted |
|---|---|---|---|
| Privileged containers | Allowed | Denied | Denied |
| Host namespaces (PID, IPC, Network) | Allowed | Denied | Denied |
| Host ports | Allowed | Limited | Limited |
| hostPath volumes | Allowed | Denied | Denied |
| Privilege escalation | Allowed | Allowed | Denied |
| Running as root | Allowed | Allowed | Denied |
| Seccomp profile | Any | Anything except Unconfined | RuntimeDefault or Localhost |
| Capabilities | Any | Add from a fixed allowlist only | Drop ALL (only NET_BIND_SERVICE may be added) |
| Volume types | Any | Any except hostPath | ConfigMap, Secret, PVC, EmptyDir, etc. |
| AppArmor | Any | Anything except Unconfined | RuntimeDefault or Localhost |
Troubleshooting PSA Rejections
When a pod gets rejected by PSA, the error message tells you exactly what failed. Here's how to decode and fix the most common ones.
```bash
# Example rejection message:
# Error from server (Forbidden): error when creating "deploy.yaml":
# pods "my-pod" is forbidden: violates PodSecurity "restricted:latest":
#   allowPrivilegeEscalation != false
#     (container "app" must set securityContext.allowPrivilegeEscalation=false),
#   unrestricted capabilities
#     (container "app" must set securityContext.capabilities.drop=["ALL"]),
#   runAsNonRoot != true
#     (pod or container "app" must set securityContext.runAsNonRoot=true)
```
Each violation maps directly to a field you need to set. Fix them one by one:
```bash
# Quick check: validate a manifest against a specific PSS level
kubectl label namespace test-pss pod-security.kubernetes.io/enforce=restricted --overwrite

# Try to apply your manifest
kubectl apply -f manifest.yaml -n test-pss --dry-run=server

# Fix violations, re-run until clean
```
For persistent issues, use this diagnostic script:
```bash
#!/bin/bash
# pss-check.sh - Check all pods in a namespace for restricted compliance
NAMESPACE="${1:-default}"

echo "Checking pods in namespace: $NAMESPACE"
echo "========================================="

# Note: jq's // operator treats false like null, so it would report an
# explicit allowPrivilegeEscalation: false as "NOT SET". Test for null instead.
kubectl get pods -n "$NAMESPACE" -o json | jq -r '
  def show: if . == null then "NOT SET" else tostring end;
  .items[] |
  .metadata.name as $pod |
  .spec.containers[] |
  {
    pod: $pod,
    container: .name,
    runAsNonRoot: (.securityContext.runAsNonRoot | show),
    allowPrivEsc: (.securityContext.allowPrivilegeEscalation | show),
    dropAll: (if (.securityContext.capabilities.drop // []) | map(ascii_downcase) | contains(["all"]) then "YES" else "NO" end),
    readOnlyFS: (.securityContext.readOnlyRootFilesystem | show),
    seccomp: (.securityContext.seccompProfile.type | show)
  } |
  "\(.pod)/\(.container): runAsNonRoot=\(.runAsNonRoot) allowPrivEsc=\(.allowPrivEsc) dropAll=\(.dropAll) readOnlyFS=\(.readOnlyFS) seccomp=\(.seccomp)"
'
```
Conclusion
Pod Security Standards aren't optional in 2026. If you're running Kubernetes without PSA enforcement, you're one misconfigured deployment away from a security incident. Start with audit mode, fix your workloads, and enforce restricted wherever possible.
The effort is worth it. I've migrated clusters with hundreds of workloads to restricted profiles, and every single time, the team found security issues they didn't know existed. Containers running as root that didn't need to. Host networking enabled for services that only needed cluster-internal communication. Capabilities granted that were never used.
The migration path is clear: audit first to see the violations, fix the common ones (security contexts, capabilities, non-root), enforce baseline as the safety net, then graduate namespaces to restricted as they become compliant. Build PSS validation into your CI pipeline so new violations don't sneak in. And monitor your audit logs — they're telling you exactly where your next improvement needs to happen.
If you take one thing from this guide, let it be this: the restricted profile is achievable for the vast majority of production workloads. The exceptions are real but narrow — CNI plugins, log collectors, and a handful of system agents. Everything else can and should run restricted. The security posture improvement is substantial, and the migration effort is a one-time cost that pays dividends for the lifetime of the cluster.