
Vault Dynamic Secrets: Short-Lived Credentials on Demand

Amara Okafor · 21 min read

Why Dynamic Secrets Matter

Static secrets are a ticking time bomb. A database password stored in Vault is better than one stored in a .env file, but it still has fundamental problems: multiple services share the same credential, nobody rotates it, and if it leaks, you have no idea which system was compromised. Even with diligent rotation policies, static credentials create a window of vulnerability between the time a credential is compromised and the time it is rotated. In most organizations, that window is measured in months or years.

Dynamic secrets flip the model entirely. Instead of Vault storing a credential that already exists, Vault generates a unique credential on demand, with a built-in time-to-live (TTL). When the lease expires, Vault automatically revokes the credential by connecting to the target system and deleting the user, revoking the certificate, or invalidating the token. Every consumer gets its own credential, so you get per-client attribution in your database logs, cloud audit trails, and network traffic. If a credential leaks, you revoke that single lease without disrupting every other service that depends on the same system.

This approach eliminates three major risks at once: credential sharing (every consumer gets a unique identity), forgotten rotation (TTLs enforce automatic expiration), and blast radius from compromise (revoking one lease affects only one consumer).

Consider the math: if you have 50 microservices all sharing one database password, a single leak compromises all 50 services and you cannot tell which one was the source. With dynamic secrets, each service has its own credential, the credential lives for one hour, and revoking it takes a single API call. The security posture improvement is not incremental; it is transformational.

How Dynamic Secrets Work Internally

When a client requests a dynamic secret, Vault executes the following sequence:

  1. The client sends a read request to the secret engine's credential generation endpoint (e.g., database/creds/myapp-readonly).
  2. Vault authenticates the request using the client's token and checks policies.
  3. The secret engine connects to the target system (database, cloud provider, etc.) using its pre-configured administrative credentials.
  4. The engine creates a new credential on the target system according to the role's configuration (SQL statements, IAM policy documents, certificate parameters, etc.).
  5. Vault creates a lease that tracks the credential's TTL and renewal status.
  6. Vault returns the credential and lease information to the client.
  7. When the lease expires (or is revoked), Vault connects to the target system again and revokes the credential.

This means Vault needs administrative access to every system it generates credentials for. The trade-off is deliberate: you concentrate privileged access in one hardened system (Vault) instead of distributing it across dozens of applications.
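The lifecycle above can be sketched as a toy simulation. Everything here is hypothetical stand-in code (an associative array plays the target database, another plays Vault's lease store); it illustrates steps 4-7 only, not a real Vault integration:

```shell
#!/usr/bin/env bash
# Toy model of the request flow: issue (steps 4-6), then expire and revoke (step 7).
declare -A target_users   # username -> expiry epoch (stands in for the database)
declare -A leases         # lease_id -> username    (stands in for Vault's lease store)

issue_credential() {      # $1 = role name, $2 = ttl in seconds
  local user="v-demo-$1-$RANDOM"
  local lease_id="database/creds/$1/$RANDOM"
  target_users["$user"]=$(( $(date +%s) + $2 ))   # step 4: create user on the target
  leases["$lease_id"]="$user"                     # step 5: record the lease and TTL
  echo "issued $lease_id for $user"               # step 6: return to the client
}

revoke_expired() {        # step 7: revoke credentials whose lease has ended
  local now id user
  now=$(date +%s)
  for id in "${!leases[@]}"; do
    user="${leases[$id]}"
    if (( ${target_users[$user]:-0} <= now )); then
      unset "target_users[$user]" "leases[$id]"
      echo "revoked $id"
    fi
  done
}

issue_credential myapp-readonly 0   # ttl of 0 seconds: expired immediately
revoke_expired
echo "remaining leases: ${#leases[@]}"
```

The point of the model is that revocation is Vault's job, not the client's: the client only ever sees steps 1-6.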

Database Secret Engine

The database secret engine is typically the first dynamic engine teams adopt, because database credentials are the most commonly shared and least frequently rotated secrets in most organizations. Vault supports PostgreSQL, MySQL, MariaDB, MongoDB, Microsoft SQL Server, Oracle, Elasticsearch, Redis, Snowflake, Cassandra, Couchbase, and InfluxDB through its plugin system.

Enabling and Configuring for PostgreSQL

# Enable the database secret engine
vault secrets enable database

# Configure the PostgreSQL connection
# The {{username}} and {{password}} templates are replaced by Vault
vault write database/config/myapp-db \
  plugin_name="postgresql-database-plugin" \
  allowed_roles="myapp-readonly,myapp-readwrite,myapp-admin" \
  connection_url="postgresql://{{username}}:{{password}}@db.internal:5432/myapp?sslmode=require" \
  username="vault_admin" \
  password="vault-admin-password" \
  max_open_connections=5 \
  max_idle_connections=3 \
  max_connection_lifetime="5m"

# Rotate the root credentials so nobody knows the admin password
vault write -f database/rotate-root/myapp-db

After rotating root credentials, Vault is the only entity that knows the database admin password. This is a one-way operation. You cannot retrieve the rotated password from Vault. If you need to regain access outside of Vault, you will need to use the database's own recovery mechanisms.

The connection pool settings (max_open_connections, max_idle_connections, max_connection_lifetime) are important for production. Vault maintains persistent connections to target databases, and these settings prevent connection exhaustion under heavy credential generation loads.

Creating Roles with SQL Statements

Roles define the SQL statements that Vault executes to create and revoke database users. The role configuration is where you control exactly what permissions the generated credentials will have.

# Read-only role for reporting and analytics services
vault write database/roles/myapp-readonly \
  db_name="myapp-db" \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\"; \
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO \"{{name}}\";" \
  revocation_statements="REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\"; \
    REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public FROM \"{{name}}\"; \
    REVOKE USAGE ON SCHEMA public FROM \"{{name}}\"; \
    DROP ROLE IF EXISTS \"{{name}}\";" \
  renew_statements="ALTER ROLE \"{{name}}\" VALID UNTIL '{{expiration}}';" \
  default_ttl="1h" \
  max_ttl="24h"

# Read-write role for application services
vault write database/roles/myapp-readwrite \
  db_name="myapp-db" \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\"; \
    GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\"; \
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO \"{{name}}\"; \
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE, SELECT ON SEQUENCES TO \"{{name}}\";" \
  revocation_statements="REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\"; \
    REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public FROM \"{{name}}\"; \
    REVOKE USAGE ON SCHEMA public FROM \"{{name}}\"; \
    DROP ROLE IF EXISTS \"{{name}}\";" \
  renew_statements="ALTER ROLE \"{{name}}\" VALID UNTIL '{{expiration}}';" \
  default_ttl="1h" \
  max_ttl="8h"

# Admin role for migration jobs (limited to schema changes)
vault write database/roles/myapp-admin \
  db_name="myapp-db" \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' CREATEROLE; \
    GRANT ALL PRIVILEGES ON DATABASE myapp TO \"{{name}}\"; \
    GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO \"{{name}}\"; \
    GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";" \
  revocation_statements="REVOKE ALL PRIVILEGES ON DATABASE myapp FROM \"{{name}}\"; \
    REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\"; \
    REVOKE ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public FROM \"{{name}}\"; \
    DROP ROLE IF EXISTS \"{{name}}\";" \
  default_ttl="15m" \
  max_ttl="1h"

Notice the ALTER DEFAULT PRIVILEGES statements. Without them, tables created after the role is set up would not be accessible to dynamically generated users. The renew_statements extend the database-level expiration when a lease is renewed in Vault, keeping the two systems in sync.

Generating and Using Credentials

# Generate a set of short-lived credentials
vault read database/creds/myapp-readonly
# Key                Value
# ---                -----
# lease_id           database/creds/myapp-readonly/abcd-1234-efgh-5678
# lease_duration     1h
# lease_renewable    true
# password           A1b2-C3d4-E5f6-G7h8
# username           v-approle-myapp-re-abcdef1234-1234567890

# Use the credentials directly
psql "postgresql://v-approle-myapp-re-abcdef1234-1234567890:A1b2-C3d4-E5f6-G7h8@db.internal:5432/myapp?sslmode=require"

# Generate credentials and extract them for scripting
DB_CREDS=$(vault read -format=json database/creds/myapp-readwrite)
DB_USER=$(echo "$DB_CREDS" | jq -r '.data.username')
DB_PASS=$(echo "$DB_CREDS" | jq -r '.data.password')
DB_LEASE=$(echo "$DB_CREDS" | jq -r '.lease_id')
LEASE_DURATION=$(echo "$DB_CREDS" | jq -r '.lease_duration')

echo "Username: $DB_USER"
echo "Lease expires in: $LEASE_DURATION seconds"

# Use in a connection string
export DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@db.internal:5432/myapp?sslmode=require"

Every call to database/creds/ generates a completely new database user with a unique password. When the lease expires, Vault connects to the database and executes the revocation_statements, dropping that user entirely. The username format (v-approle-myapp-re-...) is designed to be traceable back to the auth method, role, and timestamp, making database-level audit logs meaningful.
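A quick way to make those audit logs actionable is to split a generated username back into its parts. This sketch assumes the default layout shown above (v-<auth method>-<truncated role>-<random>-<unix epoch>); Vault's username_template can change the format, in which case the field positions below no longer apply:

```shell
# Split a Vault-style dynamic username into its traceable parts.
# Parsing works from the ends inward, since the role segment may contain hyphens.
parse_vault_username() {
  local u="$1"
  local epoch="${u##*-}"        # trailing field: creation timestamp
  local rest="${u%-*}"
  local random="${rest##*-}"    # unique random suffix
  rest="${rest%-*}"
  rest="${rest#v-}"             # drop the "v-" prefix
  local auth="${rest%%-*}"      # auth method that issued the token
  local role="${rest#*-}"       # truncated role name
  echo "auth=$auth role=$role random=$random created=$epoch"
}

parse_vault_username "v-approle-myapp-re-abcdef1234-1234567890"
# prints: auth=approle role=myapp-re random=abcdef1234 created=1234567890
```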

MySQL Configuration

The same pattern applies to MySQL with slightly different SQL syntax:

vault write database/config/orders-db \
  plugin_name="mysql-database-plugin" \
  allowed_roles="orders-app,orders-readonly" \
  connection_url="{{username}}:{{password}}@tcp(mysql.internal:3306)/orders" \
  username="vault_admin" \
  password="vault-admin-password" \
  max_open_connections=5

vault write database/roles/orders-app \
  db_name="orders-db" \
  creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; \
    GRANT SELECT, INSERT, UPDATE ON orders.* TO '{{name}}'@'%';" \
  revocation_statements="DROP USER IF EXISTS '{{name}}'@'%';" \
  default_ttl="2h" \
  max_ttl="12h"

vault write database/roles/orders-readonly \
  db_name="orders-db" \
  creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; \
    GRANT SELECT ON orders.* TO '{{name}}'@'%';" \
  revocation_statements="DROP USER IF EXISTS '{{name}}'@'%';" \
  default_ttl="1h" \
  max_ttl="8h"

MongoDB Configuration

For MongoDB, the plugin uses MongoDB's user management API:

vault write database/config/analytics-db \
  plugin_name="mongodb-database-plugin" \
  allowed_roles="analytics-reader" \
  connection_url="mongodb://{{username}}:{{password}}@mongo1.internal:27017,mongo2.internal:27017,mongo3.internal:27017/admin?replicaSet=rs0&ssl=true" \
  username="vault_admin" \
  password="vault-admin-password"

vault write database/roles/analytics-reader \
  db_name="analytics-db" \
  creation_statements='{"db": "analytics", "roles": [{"role": "read"}]}' \
  revocation_statements='{"db": "analytics"}' \
  default_ttl="2h" \
  max_ttl="24h"

Microsoft SQL Server Configuration

vault write database/config/erp-db \
  plugin_name="mssql-database-plugin" \
  allowed_roles="erp-app" \
  connection_url="sqlserver://{{username}}:{{password}}@mssql.internal:1433/erp" \
  username="vault_admin" \
  password="vault-admin-password"

vault write database/roles/erp-app \
  db_name="erp-db" \
  creation_statements="CREATE LOGIN [{{name}}] WITH PASSWORD = '{{password}}'; \
    USE erp; CREATE USER [{{name}}] FOR LOGIN [{{name}}]; \
    GRANT SELECT, INSERT, UPDATE, DELETE TO [{{name}}];" \
  revocation_statements="USE erp; DROP USER IF EXISTS [{{name}}]; \
    DROP LOGIN [{{name}}];" \
  default_ttl="1h" \
  max_ttl="8h"

AWS Secret Engine

The AWS secret engine generates IAM credentials on demand, replacing long-lived access keys that sit in CI/CD systems, developer laptops, and configuration files. This is particularly important because AWS access keys do not expire on their own and are one of the most commonly leaked credential types.

Configuration

# Enable the AWS secret engine
vault secrets enable aws

# Configure root credentials that Vault uses to create IAM users/roles
# Use an IAM user with minimal permissions needed to manage other users
vault write aws/config/root \
  access_key="AKIAIOSFODNN7EXAMPLE" \
  secret_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" \
  region="us-east-1" \
  max_retries=3

# Rotate the root keys so the original keys are no longer valid
vault write -f aws/config/rotate-root

# Configure the default lease TTL for AWS credentials
vault write aws/config/lease \
  lease="1h" \
  lease_max="24h"

The IAM user that Vault uses needs the following permissions to manage other IAM users and assume roles:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateUser",
        "iam:DeleteUser",
        "iam:CreateAccessKey",
        "iam:DeleteAccessKey",
        "iam:PutUserPolicy",
        "iam:DeleteUserPolicy",
        "iam:ListAccessKeys",
        "iam:ListUserPolicies",
        "iam:AttachUserPolicy",
        "iam:DetachUserPolicy",
        "iam:ListAttachedUserPolicies",
        "iam:ListGroupsForUser",
        "iam:RemoveUserFromGroup",
        "iam:GetUser",
        "iam:TagUser",
        "iam:UntagUser"
      ],
      "Resource": "arn:aws:iam::*:user/vault-*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole"
      ],
      "Resource": "arn:aws:iam::*:role/vault-*"
    }
  ]
}

IAM User Credentials

IAM user credentials create actual IAM users in your AWS account. They are the most flexible but also the slowest to generate and clean up.

# Create a role that generates IAM users with an inline policy
vault write aws/roles/s3-readonly \
  credential_type="iam_user" \
  policy_document=-<<'POLICY'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::my-app-bucket",
        "arn:aws:s3:::my-app-bucket/*"
      ]
    }
  ]
}
POLICY

# Create a role using existing AWS managed policies
vault write aws/roles/ec2-admin \
  credential_type="iam_user" \
  policy_arns="arn:aws:iam::aws:policy/AmazonEC2FullAccess"

# Generate credentials
vault read aws/creds/s3-readonly
# Key                Value
# ---                -----
# lease_id           aws/creds/s3-readonly/abcd-1234
# lease_duration     1h
# access_key         AKIAIOSFODNN7NEWKEY
# secret_key         wJalrXUtnFEMI/newkey123

Assumed Role Credentials (STS AssumeRole)

For cross-account access or time-limited STS credentials, assumed_role is the preferred credential type. It does not create IAM users and provides temporary credentials with a session token.

# Create a role for assuming an existing AWS IAM role
vault write aws/roles/deploy-role \
  credential_type="assumed_role" \
  role_arns="arn:aws:iam::123456789012:role/DeployRole" \
  default_sts_ttl="1h" \
  max_sts_ttl="4h"

# You can also scope down permissions with a policy
vault write aws/roles/deploy-role-scoped \
  credential_type="assumed_role" \
  role_arns="arn:aws:iam::123456789012:role/DeployRole" \
  policy_document=-<<'POLICY'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecs:UpdateService", "ecs:DescribeServices"],
      "Resource": "arn:aws:ecs:us-east-1:123456789012:service/prod/*"
    }
  ]
}
POLICY

# Generate credentials
vault read aws/creds/deploy-role
# Returns: access_key, secret_key, and security_token (session token)

The target IAM role must have a trust policy that allows Vault's IAM user to assume it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::VAULT_ACCOUNT_ID:user/vault-aws-engine"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "vault-dynamic-secrets"
        }
      }
    }
  ]
}

Federation Token Credentials

Federation tokens provide temporary credentials scoped down from Vault's own IAM user permissions. They are useful for CI/CD pipelines that need short-lived access without assuming separate roles.

vault write aws/roles/ci-deploy \
  credential_type="federation_token" \
  policy_document=-<<'POLICY'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:UpdateService",
        "ecs:DescribeServices",
        "ecs:DescribeTaskDefinition",
        "ecs:RegisterTaskDefinition",
        "ecr:GetAuthorizationToken",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "*"
    }
  ]
}
POLICY

vault read aws/creds/ci-deploy

Credential Type Comparison

Credential Type     Use Case                         Max TTL            Creates IAM User   Speed
iam_user            Long-running jobs, legacy apps   Vault lease TTL    Yes                Slowest (IAM propagation)
assumed_role        Cross-account, short tasks       12h (STS limit)    No                 Fast
federation_token    CI/CD pipelines                  36h (STS limit)    No                 Fast

For most use cases, assumed_role is the best choice. It creates no persistent IAM resources, generates credentials quickly, and supports cross-account access patterns. Use iam_user only when you need credentials that outlive STS limits or need to interact with services that do not support session tokens.
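That guidance condenses into a rule-of-thumb helper. This is a heuristic reading of the table above, not official sizing advice, and the flag names are made up for the sketch:

```shell
# Heuristic chooser for the AWS credential_type, based on the comparison table.
# Args: max lifetime needed (hours), cross-account access (yes/no),
#       whether the caller can use STS session tokens (yes/no).
choose_aws_credential_type() {
  local needs_hours="$1" cross_account="$2" session_token_ok="$3"
  if [ "$session_token_ok" = "no" ] || [ "$needs_hours" -gt 36 ]; then
    echo "iam_user"          # only type without STS session limits or tokens
  elif [ "$cross_account" = "yes" ] || [ "$needs_hours" -le 12 ]; then
    echo "assumed_role"      # fast, no persistent IAM resources
  else
    echo "federation_token"  # 12-36h, same-account, scoped from Vault's user
  fi
}

choose_aws_credential_type 1  yes yes   # prints: assumed_role
choose_aws_credential_type 24 no  yes   # prints: federation_token
choose_aws_credential_type 48 no  yes   # prints: iam_user
```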

PKI Secret Engine

The PKI engine turns Vault into a certificate authority, issuing short-lived TLS certificates on demand. This eliminates the operational burden of manual certificate management, enables mutual TLS (mTLS) between services, and reduces the risk window from certificate compromise to minutes instead of months.

Architecture: Root CA and Intermediate CA

In production, you should never issue end-entity certificates directly from a root CA. Instead, create a root CA that signs an intermediate CA, and use the intermediate for all certificate issuance. If the intermediate is compromised, you can revoke it and create a new one without replacing the root CA across your entire trust chain.

Root CA (long-lived, offline if possible)
  |
  +-- Intermediate CA (medium-lived, issues certificates)
        |
        +-- Service Certificate (short-lived, 72h or less)
        +-- Service Certificate
        +-- Service Certificate

Setting Up the Root CA

# Enable the PKI engine for the root CA
vault secrets enable pki

# Set the maximum TTL for the root CA to 10 years
vault secrets tune -max-lease-ttl=87600h pki

# Generate a root certificate
vault write -field=certificate pki/root/generate/internal \
  common_name="My Organization Root CA" \
  issuer_name="root-2026" \
  ttl=87600h \
  key_type="ec" \
  key_bits=384 > root_ca.crt

# Configure the CA and CRL URLs
vault write pki/config/urls \
  issuing_certificates="https://vault.example.com:8200/v1/pki/ca" \
  crl_distribution_points="https://vault.example.com:8200/v1/pki/crl" \
  ocsp_servers="https://vault.example.com:8200/v1/pki/ocsp"

Setting Up the Intermediate CA

# Enable a second PKI mount for the intermediate
vault secrets enable -path=pki_int pki

# Set the maximum TTL for the intermediate to 5 years
vault secrets tune -max-lease-ttl=43800h pki_int

# Generate the intermediate CSR
vault write -format=json pki_int/intermediate/generate/internal \
  common_name="My Organization Intermediate CA" \
  issuer_name="intermediate-2026" \
  key_type="ec" \
  key_bits=384 \
  | jq -r '.data.csr' > intermediate.csr

# Sign the intermediate with the root CA
vault write -format=json pki/root/sign-intermediate \
  issuer_ref="root-2026" \
  csr=@intermediate.csr \
  format=pem_bundle \
  ttl=43800h \
  | jq -r '.data.certificate' > intermediate.crt

# Import the signed intermediate certificate
vault write pki_int/intermediate/set-signed certificate=@intermediate.crt

# Configure URLs for the intermediate
vault write pki_int/config/urls \
  issuing_certificates="https://vault.example.com:8200/v1/pki_int/ca" \
  crl_distribution_points="https://vault.example.com:8200/v1/pki_int/crl" \
  ocsp_servers="https://vault.example.com:8200/v1/pki_int/ocsp"

Creating Certificate Roles

Roles define the constraints for issued certificates:

# Role for internal web servers
vault write pki_int/roles/web-servers \
  allowed_domains="example.com,internal.example.com" \
  allow_subdomains=true \
  allow_bare_domains=false \
  max_ttl="720h" \
  key_type="ec" \
  key_bits=256 \
  require_cn=true \
  server_flag=true \
  client_flag=false \
  enforce_hostnames=true \
  allow_ip_sans=true \
  allowed_uri_sans="spiffe://cluster.local/*"

# Role for mTLS client certificates
vault write pki_int/roles/service-mesh \
  allowed_domains="mesh.internal" \
  allow_subdomains=true \
  max_ttl="24h" \
  key_type="ec" \
  key_bits=256 \
  server_flag=true \
  client_flag=true \
  enforce_hostnames=true \
  allow_ip_sans=false

# Role for short-lived internal certificates (for CI/CD, testing)
vault write pki_int/roles/ephemeral \
  allowed_domains="ci.internal" \
  allow_subdomains=true \
  max_ttl="4h" \
  key_type="ec" \
  key_bits=256 \
  no_store=true \
  generate_lease=false

The no_store=true option on the ephemeral role tells Vault not to store the issued certificate. This is useful for high-volume issuance where you do not need revocation capability and want to avoid storage growth.

Issuing Certificates

# Issue a certificate for a web server
vault write -format=json pki_int/issue/web-servers \
  common_name="api.internal.example.com" \
  alt_names="api-v2.internal.example.com" \
  ip_sans="10.0.1.50" \
  ttl="72h" > cert.json

# Extract the certificate components
cat cert.json | jq -r '.data.certificate' > api.crt
cat cert.json | jq -r '.data.private_key' > api.key
cat cert.json | jq -r '.data.ca_chain[]' > ca-chain.crt

# Issue a certificate for mTLS
vault write -format=json pki_int/issue/service-mesh \
  common_name="payment-service.mesh.internal" \
  ttl="12h" > mtls-cert.json

# Verify a certificate
openssl x509 -in api.crt -text -noout

Each issued certificate gets its own lease. When the lease expires or is revoked, the certificate appears on the Certificate Revocation List (CRL).

Automating Certificate Rotation

For services that need automatic certificate rotation, use Vault Agent with a template:

# vault-agent-config.hcl
auto_auth {
  method "kubernetes" {
    mount_path = "auth/kubernetes"
    config = {
      role = "web-server"
    }
  }
}

template {
  source      = "/etc/vault-agent/templates/cert.tpl"
  destination = "/etc/tls/server.crt"
  perms       = "0644"
  command     = "systemctl reload nginx"
}

template {
  source      = "/etc/vault-agent/templates/key.tpl"
  destination = "/etc/tls/server.key"
  perms       = "0600"
  command     = "systemctl reload nginx"
}

The certificate template (cert.tpl); the matching key.tpl renders {{ .Data.private_key }} in the same way. Note that ca_chain is a list and must be ranged over:

{{- with secret "pki_int/issue/web-servers" "common_name=api.internal.example.com" "ttl=72h" -}}
{{ .Data.certificate }}
{{- range .Data.ca_chain }}
{{ . }}
{{- end }}
{{- end -}}
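Before reloading nginx on a fresh render, it is worth confirming the certificate and private key actually pair up. A minimal openssl check, demonstrated here on a locally generated EC pair standing in for /etc/tls/server.crt and server.key:

```shell
# Verify a certificate and key match by comparing their public keys.
set -euo pipefail
tmp=$(mktemp -d)

# Generate a throwaway EC key and self-signed cert for the demonstration.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
  -keyout "$tmp/server.key" -out "$tmp/server.crt" \
  -nodes -days 1 -subj "/CN=api.internal.example.com" 2>/dev/null

# Extract the public key from each artifact; identical PEM means they pair up.
cert_pub=$(openssl x509 -in "$tmp/server.crt" -pubkey -noout)
key_pub=$(openssl pkey -in "$tmp/server.key" -pubout 2>/dev/null)

if [ "$cert_pub" = "$key_pub" ]; then
  echo "cert/key match: OK"
else
  echo "cert/key MISMATCH -- do not reload" >&2
  exit 1
fi
rm -rf "$tmp"
```

Wiring this check into the Vault Agent template's command (before the service reload) prevents a bad render from taking down TLS.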

Lease Management and Revocation

Every dynamic secret is associated with a lease. Leases track the TTL and provide a mechanism for renewal and revocation. Understanding lease management is essential for operating dynamic secrets at scale.

Lease Operations

# List active leases under a prefix
vault list sys/leases/lookup/database/creds/myapp-readonly/

# Look up a specific lease to see its details
vault lease lookup database/creds/myapp-readonly/abcd-1234-efgh-5678

# Renew a lease (extend its TTL by the default increment)
vault lease renew database/creds/myapp-readonly/abcd-1234-efgh-5678

# Renew with a specific increment
vault lease renew -increment=2h database/creds/myapp-readonly/abcd-1234-efgh-5678

# Revoke a single lease (immediately revokes the credential)
vault lease revoke database/creds/myapp-readonly/abcd-1234-efgh-5678

# Revoke all leases under a prefix (emergency use)
vault lease revoke -prefix database/creds/myapp-readonly/

# Revoke all leases for an entire engine
vault lease revoke -prefix database/

# Force revocation (clears Vault's lease state without calling the target system;
# -force must be combined with -prefix)
vault lease revoke -force -prefix database/creds/myapp-readonly/

Force revocation should only be used when the target system is unreachable and you need to clean up Vault's lease state. The credential will still exist on the target system and must be cleaned up manually.
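A cautious wrapper can make that cleanup debt visible: attempt a normal revoke first, and only force-revoke while recording the orphaned credential for manual deletion. In this sketch the vault CLI is stubbed with a shell function so the flow runs as-is; delete the stub in real use:

```shell
# STUB standing in for the vault CLI: normal revokes "fail" (target unreachable),
# forced revokes "succeed" (Vault-side state only). Remove this in real use.
vault() {
  case "$*" in
    *-force*) return 0 ;;
    *)        return 1 ;;
  esac
}

ORPHAN_LOG=$(mktemp)

revoke_or_record() {        # $1 = lease id
  if vault lease revoke "$1"; then
    echo "revoked $1"
  else
    # Target system unreachable: clear Vault's lease state, but the credential
    # still exists on the target and must be deleted there by hand.
    vault lease revoke -force -prefix "$1"
    echo "$1" >> "$ORPHAN_LOG"
    echo "force-revoked $1 (recorded for manual cleanup)"
  fi
}

revoke_or_record "database/creds/myapp-readonly/abcd-1234-efgh-5678"
echo "orphans recorded: $(wc -l < "$ORPHAN_LOG")"
```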

TTL and Max-TTL Hierarchy

TTLs operate at multiple levels, and the most restrictive value always wins:

System max TTL (default 768h / 32 days)
  |
  +-- Mount max TTL (vault secrets tune -max-lease-ttl)
        |
        +-- Role max TTL (max_ttl parameter on the role)
              |
              +-- Requested TTL (ttl parameter on the read/issue call)

# System-wide default and max TTLs are set in the Vault server's config file
# (there is no API endpoint for them):
#   default_lease_ttl = "768h"
#   max_lease_ttl     = "768h"

# Set mount-level max TTL
vault secrets tune -max-lease-ttl=24h database/
vault secrets tune -default-lease-ttl=1h database/

# Role-level TTL (set during role creation)
vault write database/roles/myapp-readonly \
  default_ttl="1h" \
  max_ttl="8h"

# Request a shorter TTL at generation time where the engine supports it
# (e.g., AWS STS-backed roles; the database creds endpoint always uses the role's TTL)
vault read aws/creds/deploy-role ttl=30m

A client can renew a lease repeatedly, but the total lifetime cannot exceed the max TTL. After that, the client must authenticate again and request a new credential. This ensures that even well-behaved clients cycle their credentials regularly.
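The hierarchy reduces to taking the minimum of every level that is set. A minimal sketch (values in seconds, with 0 meaning "not set at this level"):

```shell
# The effective TTL is the most restrictive value in the chain:
# system max, mount max, role max_ttl, and the requested TTL.
effective_ttl() {
  local ttl=$1 level
  shift
  for level in "$@"; do
    # A level of 0 means "unset"; otherwise the smallest value wins.
    if [ "$level" -gt 0 ] && [ "$level" -lt "$ttl" ]; then
      ttl=$level
    fi
  done
  echo "$ttl"
}

# system=768h, mount=24h, role max_ttl=8h, requested=30m -> the 30m request wins
effective_ttl $((768*3600)) $((24*3600)) $((8*3600)) $((30*60))   # prints: 1800
```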

Lease Renewal Patterns

For applications that need to maintain database connections across lease renewals, implement a renewal loop:

#!/bin/bash
# lease-renewer.sh -- runs alongside your application

LEASE_ID="$1"
RENEW_INTERVAL=1800  # 30 minutes

while true; do
  echo "$(date): Renewing lease ${LEASE_ID}"
  RESULT=$(vault lease renew -format=json "$LEASE_ID" 2>&1)

  if echo "$RESULT" | jq -e '.lease_id' > /dev/null 2>&1; then
    NEW_TTL=$(echo "$RESULT" | jq -r '.lease_duration')
    echo "$(date): Lease renewed, new TTL: ${NEW_TTL}s"
  else
    echo "$(date): Lease renewal failed, requesting new credentials"
    # Signal application to reconnect with new credentials
    kill -USR1 $(cat /var/run/myapp.pid)
    break
  fi

  sleep $RENEW_INTERVAL
done

Credential Rotation for Static Accounts

For systems that only support a single database user (legacy applications, shared third-party services), Vault offers static role rotation. Instead of creating and destroying users, Vault periodically rotates the password of an existing user.

# Configure a static role
vault write database/static-roles/legacy-app \
  db_name="myapp-db" \
  username="legacy_app_user" \
  rotation_period="24h" \
  rotation_statements="ALTER USER \"{{name}}\" WITH PASSWORD '{{password}}';"

# Read the current password (Vault rotates it on the configured schedule)
vault read database/static-creds/legacy-app

# Manually trigger a rotation
vault write -f database/rotate-role/legacy-app

Vault rotates the password on the configured schedule and stores the current value. Your application reads the latest password from Vault every time it connects. The previous password becomes invalid immediately after rotation.

Practical Workflow: CI/CD Pipeline with Dynamic Secrets

Here is a complete CI/CD pipeline that uses dynamic secrets for both AWS deployment and database migrations:

#!/bin/bash
set -euo pipefail
# ci-deploy.sh -- runs in your CI/CD pipeline

# 1. Authenticate to Vault using AppRole
# VAULT_ROLE_ID and VAULT_SECRET_ID are injected by the CI system
export VAULT_ADDR="https://vault.internal:8200"
VAULT_TOKEN=$(vault write -field=token auth/approle/login \
  role_id="$VAULT_ROLE_ID" \
  secret_id="$VAULT_SECRET_ID")
export VAULT_TOKEN

echo "Authenticated to Vault successfully"

# 2. Get short-lived AWS credentials for deployment
echo "Fetching AWS deployment credentials..."
AWS_CREDS=$(vault read -format=json aws/creds/ci-deploy)
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDS" | jq -r '.data.access_key')
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDS" | jq -r '.data.secret_key')
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDS" | jq -r '.data.security_token')
AWS_LEASE=$(echo "$AWS_CREDS" | jq -r '.lease_id')

# Wait for IAM credential propagation if using iam_user type
# (Not needed for assumed_role or federation_token)
# sleep 10

# 3. Get short-lived database credentials for migrations
echo "Fetching database migration credentials..."
DB_CREDS=$(vault read -format=json database/creds/myapp-admin)
DB_USER=$(echo "$DB_CREDS" | jq -r '.data.username')
DB_PASS=$(echo "$DB_CREDS" | jq -r '.data.password')
export DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@db.internal:5432/myapp?sslmode=require"
DB_LEASE=$(echo "$DB_CREDS" | jq -r '.lease_id')

# 4. Run database migrations
echo "Running database migrations..."
npx prisma migrate deploy

# 5. Revoke database credentials immediately (no longer needed)
echo "Revoking database credentials..."
vault lease revoke "$DB_LEASE"

# 6. Build and push the container image
echo "Building and pushing container image..."
docker build -t myregistry/webapp:${GIT_SHA} .
docker push myregistry/webapp:${GIT_SHA}

# 7. Deploy the application
echo "Deploying to ECS..."
aws ecs update-service \
  --cluster prod \
  --service myapp \
  --force-new-deployment \
  --task-definition "myapp:latest"

# 8. AWS credentials will auto-expire when their STS session ends
# Optionally revoke early
vault lease revoke "$AWS_LEASE"

echo "Deployment complete"

GitLab CI Integration Example

# .gitlab-ci.yml
deploy:
  stage: deploy
  image: hashicorp/vault:1.17.3
  variables:
    VAULT_ADDR: "https://vault.internal:8200"
  id_tokens:
    VAULT_ID_TOKEN:
      aud: "https://vault.internal:8200"
  script:
    # Authenticate using JWT/OIDC (no static secrets needed)
    - export VAULT_TOKEN=$(vault write -field=token auth/jwt/login role=gitlab-deploy jwt=$VAULT_ID_TOKEN)

    # Fetch deployment secrets
    - export AWS_CREDS=$(vault read -format=json aws/creds/deploy-role)
    - export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDS | jq -r '.data.access_key')
    - export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDS | jq -r '.data.secret_key')
    - export AWS_SESSION_TOKEN=$(echo $AWS_CREDS | jq -r '.data.security_token')

    # Deploy
    - aws ecs update-service --cluster prod --service myapp --force-new-deployment
  only:
    - main

GitHub Actions Integration Example

# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Import Vault Secrets
        uses: hashicorp/vault-action@v3
        with:
          url: https://vault.internal:8200
          method: jwt
          role: github-deploy
          jwtGithubAudience: https://vault.internal:8200
          secrets: |
            aws/creds/deploy-role access_key | AWS_ACCESS_KEY_ID ;
            aws/creds/deploy-role secret_key | AWS_SECRET_ACCESS_KEY ;
            aws/creds/deploy-role security_token | AWS_SESSION_TOKEN ;
            secret/data/myapp/production api_key | API_KEY

      - name: Deploy
        run: |
          aws ecs update-service --cluster prod --service myapp --force-new-deployment

Monitoring Dynamic Secrets at Scale

As your dynamic secrets usage grows, monitoring becomes critical to avoid hitting resource limits and detecting anomalies.

Lease Count Monitoring

# Count active leases per engine
vault list -format=json sys/leases/lookup/database/creds/myapp-readonly/ | jq length
vault list -format=json sys/leases/lookup/database/creds/myapp-readwrite/ | jq length
vault list -format=json sys/leases/lookup/aws/creds/ci-deploy/ | jq length

# Monitor total lease count via metrics
curl -s --header "X-Vault-Token: $VAULT_TOKEN" \
  "${VAULT_ADDR}/v1/sys/metrics?format=prometheus" | grep "vault_expire_num_leases"

Alert Thresholds

Set up alerts for the following conditions:

Condition                        Threshold                             Action
Active database leases           More than 80% of max_connections      Investigate leak
AWS IAM user count               More than 4500 (AWS limit is 5000)    Switch to assumed_role
Lease creation rate spike        5x normal rate in 5 minutes           Check for credential leak
Lease renewal failure rate       More than 5%                          Check target system health
PKI certificate issuance rate    More than 1000/hour                   Verify automation is correct

Database Connection Monitoring

A common problem is applications that request new credentials on every request instead of reusing connections. Monitor your database for connection count trends:

-- PostgreSQL: check active Vault-generated connections
SELECT usename, count(*) as connections, state
FROM pg_stat_activity
WHERE usename LIKE 'v-%'
GROUP BY usename, state
ORDER BY connections DESC;

-- Check for connection limit pressure
SELECT max_conn, used, max_conn - used AS available
FROM (SELECT count(*) AS used FROM pg_stat_activity) t1,
     (SELECT setting::int AS max_conn FROM pg_settings WHERE name = 'max_connections') t2;
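On the application side, the fix for per-request credential churn is a small cache that reuses a credential until most of its lease has elapsed. In this sketch, fetch_creds is a stand-in for the real vault read call so the logic runs without a Vault server:

```shell
# Cache dynamic credentials; refresh only after ~80% of the lease has elapsed.
FETCH_COUNT=0
CACHED_CREDS=""
CACHE_EXPIRES=0
LEASE_SECONDS=3600

fetch_creds() {           # stand-in for: vault read -format=json database/creds/...
  FETCH_COUNT=$((FETCH_COUNT + 1))
  CACHED_CREDS="user-$FETCH_COUNT"
}

get_creds() {
  local now
  now=$(date +%s)
  if [ -z "$CACHED_CREDS" ] || [ "$now" -ge "$CACHE_EXPIRES" ]; then
    fetch_creds
    CACHE_EXPIRES=$((now + LEASE_SECONDS * 80 / 100))   # refresh at 80% of TTL
  fi
  echo "$CACHED_CREDS"
}

get_creds; get_creds; get_creds         # three application "requests"...
echo "vault calls made: $FETCH_COUNT"   # prints: vault calls made: 1
```

The same shape works for any engine: one Vault round trip per lease window, not per request.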

Policies for Dynamic Secrets

Write specific policies that grant access only to the credential generation endpoints, not the configuration endpoints:

# policy: webapp-dynamic-creds.hcl

# Allow generating database credentials
path "database/creds/myapp-readonly" {
  capabilities = ["read"]
}

path "database/creds/myapp-readwrite" {
  capabilities = ["read"]
}

# Allow generating AWS credentials
path "aws/creds/s3-readonly" {
  capabilities = ["read"]
}

# Allow requesting PKI certificates
path "pki_int/issue/web-servers" {
  capabilities = ["create", "update"]
}

# Allow lease management (renew and revoke own leases)
path "sys/leases/renew" {
  capabilities = ["update"]
}

path "sys/leases/revoke" {
  capabilities = ["update"]
}

# Allow token self-management
path "auth/token/renew-self" {
  capabilities = ["update"]
}

# DENY access to engine configuration
path "database/config/*" {
  capabilities = ["deny"]
}

path "database/roles/*" {
  capabilities = ["deny"]
}

path "aws/config/*" {
  capabilities = ["deny"]
}

Summary

Dynamic secrets fundamentally change your security posture. Instead of managing the credential lifecycle yourself through rotation scripts, shared password documents, and manual revocation procedures, you delegate that responsibility to Vault. Credentials are unique per consumer (enabling attribution), short-lived (enforcing automatic expiration), automatically revoked (eliminating forgotten credentials), and fully audited (providing compliance evidence). Start with the database engine for the biggest immediate impact, since database credentials are the most commonly shared secrets in most organizations; then expand to AWS for CI/CD pipelines and PKI for service mesh and internal TLS. Combined with Vault's policy system and audit logging, dynamic secrets produce a posture far stronger than any static credential management approach.
