HashiCorp Vault Fundamentals: Installation and First Secrets
What Problem Does Vault Solve?
Every production system needs credentials: database passwords, API keys, TLS certificates, cloud provider tokens, SSH keys, encryption keys, and service-to-service authentication tokens. The traditional approach of storing these in environment variables, config files, CI/CD pipeline variables, or Kubernetes ConfigMaps creates a sprawl that grows more dangerous over time. Credentials end up scattered across dozens of systems with no central audit trail, no rotation strategy, and no revocation mechanism. When a developer leaves the team, nobody knows which credentials they had access to. When a breach occurs, there is no way to determine which secrets were exposed or to revoke them in a coordinated fashion.
HashiCorp Vault provides a unified secrets management platform that addresses every one of these problems. It centralizes secret storage behind a single API, enforces fine-grained access policies written in HCL, generates dynamic short-lived credentials that are automatically revoked, encrypts data in transit and at rest, and produces a detailed audit log of every single secret access. Vault also provides an encryption-as-a-service capability through its Transit engine, allowing applications to encrypt and decrypt data without ever managing encryption keys directly.
Beyond storage, Vault fundamentally changes the security model. Instead of distributing long-lived credentials to every system that needs them, Vault becomes the single source of truth. Applications authenticate to Vault, receive short-lived tokens scoped to exactly the permissions they need, and those tokens expire automatically. If a system is compromised, you revoke its token and every credential it obtained, instantly cutting off access without affecting any other system.
If your team currently manages secrets by copying .env files around, embedding credentials in Kubernetes ConfigMaps, or storing API keys in CI/CD pipeline variables, Vault replaces all of that with a system designed for security from day one.
Architecture Overview
Before installing Vault, it helps to understand its core architecture. Vault operates as a client-server application. The server component manages all secret operations, policy enforcement, and audit logging. Clients interact with the server through a REST API, a CLI tool, or one of the many client libraries available for languages like Go, Python, Java, Ruby, and Node.js.
Core Components
Vault's architecture consists of several key subsystems that work together:
Storage Backend: The persistence layer where Vault stores its encrypted data. Vault supports multiple backends including Raft integrated storage (recommended), Consul, and cloud-managed options. The storage backend never sees plaintext data because Vault encrypts everything before writing.
Barrier: The cryptographic barrier is the central security mechanism. All data that passes through Vault is encrypted by the barrier before being written to storage. The barrier is unlocked during the unseal process and locked during the seal process.
Secret Engines: Pluggable components that store, generate, or encrypt data. Each engine is mounted at a specific path and handles requests to that path. The KV engine stores static secrets, the database engine generates dynamic credentials, the PKI engine issues certificates, and the Transit engine provides encryption as a service.
Auth Methods: Pluggable components that verify the identity of users and machines. Each auth method is mounted at a path and, upon successful authentication, returns a Vault token with attached policies.
Audit Devices: Logging backends that record every request and response. Vault supports file, syslog, and socket audit devices. Critically, if all audit devices fail, Vault refuses to serve any requests as a security measure.
Policies: Written in HCL, policies define which paths a token can access and which operations (create, read, update, delete, list) it can perform. Vault uses a deny-by-default model.
Data Flow
When a client makes a request to Vault, the following sequence occurs:
- The client sends an HTTP request to the Vault API with a token in the X-Vault-Token header.
- Vault validates the token and looks up its attached policies.
- Vault checks whether the policies permit the requested operation on the requested path.
- If permitted, Vault routes the request to the appropriate secret engine.
- The secret engine processes the request and returns data.
- Vault logs the request and response to all enabled audit devices.
- Vault returns the response to the client.
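Steps 2 and 3 above (token validation and the policy check) can be sketched as a toy authorization model. All names here are illustrative; real Vault evaluates HCL policies with its own glob rules, deny precedence, and much more.

```python
# Toy model of Vault's token -> policies -> capability check.
from fnmatch import fnmatchcase

# token -> attached policy names (illustrative values)
TOKENS = {"hvs.example": ["myapp-readonly"]}

# policy name -> list of (path pattern, granted capabilities)
POLICIES = {
    "myapp-readonly": [("secret/data/myapp/*", {"read", "list"})],
}

def is_allowed(token: str, path: str, capability: str) -> bool:
    """Return True if any policy attached to the token grants the capability."""
    for policy in TOKENS.get(token, []):
        for pattern, caps in POLICIES.get(policy, []):
            if fnmatchcase(path, pattern) and capability in caps:
                return True
    return False  # deny-by-default: no matching grant means no access

print(is_allowed("hvs.example", "secret/data/myapp/config", "read"))    # True
print(is_allowed("hvs.example", "secret/data/myapp/config", "delete"))  # False
```

Note that an unknown token or an unmatched path falls through to `False`, mirroring Vault's deny-by-default model.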
Installing Vault
Option 1: Binary Installation (Linux)
Download the latest release for your platform from the official HashiCorp releases page. This is the recommended approach for production servers because it gives you full control over the binary version and update process.
# Set the desired version
export VAULT_VERSION="1.17.3"
# Download and verify the binary
wget https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip
wget https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_SHA256SUMS
wget https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_SHA256SUMS.sig
# Verify the checksum
grep "linux_amd64" vault_${VAULT_VERSION}_SHA256SUMS | sha256sum -c -
# Extract and install
unzip vault_${VAULT_VERSION}_linux_amd64.zip
sudo mv vault /usr/local/bin/
vault version
Enable autocomplete for a better CLI experience:
vault -autocomplete-install
complete -C /usr/local/bin/vault vault
For a production systemd service, create the following unit file:
# /etc/systemd/system/vault.service
[Unit]
Description=HashiCorp Vault
Documentation=https://www.vaultproject.io/docs
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault.d/vault.hcl
[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
LimitNOFILE=65536
LimitMEMLOCK=infinity
[Install]
WantedBy=multi-user.target
Create the vault user, directories, and set permissions:
# Create a dedicated vault user
sudo useradd --system --home /opt/vault --shell /bin/false vault
# Create required directories
sudo mkdir -p /etc/vault.d /opt/vault/data /opt/vault/tls /var/log/vault
sudo chown -R vault:vault /opt/vault /var/log/vault
sudo chown -R vault:vault /etc/vault.d
sudo chmod 750 /opt/vault/data
Option 2: Docker
For local development and testing, Docker is the fastest path to a running Vault instance:
# Run Vault in dev mode with Docker
docker run --rm -d \
--name vault \
-p 8200:8200 \
-e 'VAULT_DEV_ROOT_TOKEN_ID=my-dev-token' \
-e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' \
hashicorp/vault:1.17.3 server -dev
# Note: dev mode always stores data in memory, so secrets do not survive a
# container restart even if you mount a volume. For persistence, run in
# server mode with a configuration file as shown below.
You can also run a production-like Docker setup with a configuration file:
# Create a config directory
mkdir -p ./vault-config
# Mount the config and run in server mode
docker run --rm -d \
--name vault \
-p 8200:8200 \
-v "$(pwd)/vault-config":/vault/config \
-v vault-data:/vault/data \
--cap-add=IPC_LOCK \
hashicorp/vault:1.17.3 server -config=/vault/config/vault.hcl
Option 3: Helm Chart on Kubernetes
For production Kubernetes deployments, use the official Helm chart. This is the most common deployment model for organizations already running Kubernetes:
# Add the HashiCorp Helm repository
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
# Install with high availability enabled
helm install vault hashicorp/vault \
--namespace vault \
--create-namespace \
--set server.ha.enabled=true \
--set server.ha.replicas=3 \
--set server.ha.raft.enabled=true \
--set server.dataStorage.size=10Gi \
--set server.auditStorage.enabled=true \
--set server.auditStorage.size=10Gi
# Verify the installation
kubectl get pods -n vault
kubectl get svc -n vault
Option 4: Package Manager Installation
On macOS and some Linux distributions, you can use package managers:
# macOS with Homebrew
brew tap hashicorp/tap
brew install hashicorp/tap/vault
# Ubuntu/Debian
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vault
# RHEL/CentOS/Fedora
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo yum -y install vault
Dev Server vs Production Mode
Vault ships with two distinct operating modes. Understanding the difference is critical before you go anywhere near production.
| Feature | Dev Server | Production |
|---|---|---|
| Storage | In-memory only | Persistent (Raft, Consul, etc.) |
| TLS | Disabled | Required |
| Initialization | Automatic | Manual |
| Unsealing | Automatic | Manual or auto-unseal |
| Root token | Printed to stdout | Generated once at init |
| Audit logging | Disabled by default | Must be enabled |
| Secret engines | KV v2 enabled at secret/ | Nothing enabled by default |
| Suitable for | Learning, testing | Real workloads |
Start a dev server for learning:
vault server -dev -dev-root-token-id="my-dev-token"
In a separate terminal, configure your client:
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='my-dev-token'
vault status
The output of vault status tells you everything you need to know about the server state:
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.17.3
Build Date 2026-01-15T14:36:24Z
Storage Type inmem
Cluster Name vault-cluster-abc123
Cluster ID 12345678-abcd-efgh-ijkl-123456789012
HA Enabled false
Never run a dev server in production. It stores everything in memory with no encryption at rest and no authentication requirements beyond the root token.
Initialization and Unsealing
When you start Vault in production mode for the first time, it must be initialized. Initialization generates the master encryption key and splits it into key shares using Shamir's Secret Sharing algorithm. This algorithm allows you to divide a secret into multiple parts such that a minimum threshold of parts is required to reconstruct the original secret.
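A toy implementation makes the threshold property concrete: the secret becomes the constant term of a random polynomial over a prime field, each share is a point on that polynomial, and any `threshold` points reconstruct the constant term by Lagrange interpolation while fewer reveal nothing. This is a demonstration only; Vault's real implementation operates byte-wise over GF(2^8).

```python
# Toy Shamir's Secret Sharing: split an integer secret into 5 shares
# with a reconstruction threshold of 3.
import random

PRIME = 2**127 - 1  # a prime large enough for a demo secret

def split(secret: int, shares: int, threshold: int):
    # Random polynomial of degree threshold-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 123456789
shares = split(secret, shares=5, threshold=3)
print(combine(shares[:3]) == secret)   # True: any 3 of the 5 shares suffice
print(combine(shares[2:5]) == secret)  # True: a different subset also works
```

With only two shares, the interpolated polynomial is underdetermined, so every possible secret remains equally likely.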
Production Configuration File
Before initializing, you need a configuration file:
# /etc/vault.d/vault.hcl
# Storage backend using Raft integrated storage
storage "raft" {
path = "/opt/vault/data"
node_id = "vault-1"
}
# TCP listener with TLS enabled
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_cert_file = "/opt/vault/tls/vault.crt"
tls_key_file = "/opt/vault/tls/vault.key"
tls_min_version = "tls12"
}
# Advertised API address
api_addr = "https://vault.example.com:8200"
cluster_addr = "https://vault.example.com:8201"
# Enable the web UI
ui = true
# Logging
log_level = "info"
log_file = "/var/log/vault/vault.log"
# Disable memory lock only if you cannot set the capability
# disable_mlock = true
Initialization
# Start the production server
sudo systemctl start vault
# Point your client at the server
export VAULT_ADDR='https://vault.example.com:8200'
# Initialize with 5 key shares and a threshold of 3
vault operator init -key-shares=5 -key-threshold=3
The output looks like this:
Unseal Key 1: s3cr3tK3y1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
Unseal Key 2: s3cr3tK3y2BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=
Unseal Key 3: s3cr3tK3y3CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC=
Unseal Key 4: s3cr3tK3y4DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD=
Unseal Key 5: s3cr3tK3y5EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE=
Initial Root Token: hvs.rootTokenValueHere123456789
Vault initialized with 5 key shares and a key threshold of 3.
Distribute the unseal keys to different trusted operators. Store them securely using hardware security modules, separate password managers, or physical safes. No single person should hold enough keys to unseal Vault alone. The root token should be used only for initial setup and then revoked.
Seal/Unseal Mechanics in Detail
Vault uses a layered encryption model that is critical to understand:
- Data Encryption Key (DEK): A randomly generated AES-256-GCM key that encrypts all data stored by Vault. This key is stored in the storage backend, encrypted by the master key.
- Master Key: Used to encrypt/decrypt the DEK. The master key itself is split using Shamir's Secret Sharing and is never stored anywhere in its complete form.
- Unseal Keys: The shares of the master key distributed to operators during initialization.
When Vault starts, it loads the encrypted DEK from storage but cannot decrypt it because it does not have the master key. This state is called sealed: until it is unsealed, Vault cannot read anything meaningful from its storage backend.
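The layering can be sketched as envelope encryption: the DEK encrypts the data, and the master key encrypts only the DEK. The toy XOR keystream below stands in for AES-256-GCM purely for illustration; never use it as a real cipher.

```python
# Sketch of Vault's envelope encryption (toy cipher, demonstration only).
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher.
    # XOR is symmetric, so the same function encrypts and decrypts.
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

master_key = os.urandom(32)   # in Vault: reconstructed from unseal key shares
dek = os.urandom(32)          # in Vault: the data encryption key

stored_secret = keystream_xor(dek, b"db_pass=s3cur3")   # data -> storage
stored_dek = keystream_xor(master_key, dek)             # encrypted DEK -> storage

# Sealed: storage holds stored_dek and stored_secret, both unreadable.
# Unsealing reconstructs master_key, which unlocks the DEK, then the data:
recovered_dek = keystream_xor(master_key, stored_dek)
print(keystream_xor(recovered_dek, stored_secret))  # b'db_pass=s3cur3'
```

Sealing is simply discarding `master_key` from memory: the two ciphertexts in storage are useless without it.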
To unseal, you provide enough key shares to reconstruct the master key:
# Each command provides one key share
# You need to reach the threshold (3 in our example)
vault operator unseal s3cr3tK3y1AAAAAAAAAA...
# Unseal Progress: 1/3
vault operator unseal s3cr3tK3y2BBBBBBBBBB...
# Unseal Progress: 2/3
vault operator unseal s3cr3tK3y3CCCCCCCCCC...
# Vault is now unsealed
You can check the seal status at any time:
vault status
# Key fields to check:
# Sealed: false
# Unseal Progress: 0/3
To re-seal Vault in an emergency such as a suspected breach:
vault operator seal
This immediately protects all data. No secrets can be read or written until the threshold of unseal keys is provided again. The master key is discarded from memory, and Vault returns to a state where it cannot decrypt anything.
Auto-Unseal Overview
For production environments, manual unsealing is impractical. Vault supports auto-unseal using cloud Key Management Services (KMS). With auto-unseal, the master key is encrypted by a KMS key instead of being split with Shamir's algorithm. When Vault starts, it calls the KMS to decrypt the master key automatically.
# Add to vault.hcl for AWS KMS auto-unseal
seal "awskms" {
region = "us-east-1"
kms_key_id = "arn:aws:kms:us-east-1:123456789012:key/abcd-1234-efgh-5678"
}
With auto-unseal configured, vault operator init produces recovery keys instead of unseal keys. Recovery keys are used for certain administrative operations but are not needed for routine unsealing.
Secret Engines
Secret engines are the components that store, generate, or encrypt data. Each engine is mounted at a specific path in the Vault namespace, and requests to that path are routed to the corresponding engine. Vault ships with dozens of engines, but most teams start with the KV (Key-Value) engine.
Enabling and Mounting Engines
Every secret engine must be explicitly enabled and mounted at a path:
# Enable KV v2 at the default "secret/" path
vault secrets enable -path=secret -version=2 kv
# Enable a second KV engine at a different path
vault secrets enable -path=team-alpha -version=2 kv
# Enable the database engine
vault secrets enable database
# Enable the PKI engine
vault secrets enable pki
# Enable the Transit engine (encryption as a service)
vault secrets enable transit
# List all enabled engines
vault secrets list -detailed
KV v1 vs KV v2
| Feature | KV v1 | KV v2 |
|---|---|---|
| Versioning | No | Yes (configurable max versions) |
| Soft delete | No | Yes (undelete within retention) |
| Check-and-set | No | Yes (CAS for optimistic locking) |
| Metadata | No | Yes (custom metadata per secret) |
| Delete behavior | Permanent | Soft delete with destroy option |
| Performance | Slightly faster | Slightly more overhead |
KV v2 is recommended for nearly all use cases because versioning provides a safety net against accidental overwrites and deletions.
# Enable KV v2 with custom configuration
vault secrets enable -path=secret -version=2 kv
# Configure max versions to retain
vault write secret/config max_versions=10 cas_required=false delete_version_after="768h"
The delete_version_after setting automatically cleans up old versions after the specified duration, preventing unbounded storage growth.
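The versioning and `max_versions` behavior can be modeled in a few lines. This is an illustrative sketch of the semantics, not KV v2's actual implementation.

```python
# Minimal model of KV v2 versioning: each write creates a new version,
# reads default to the latest, and max_versions prunes the oldest.
class KvV2Secret:
    def __init__(self, max_versions=10):
        self.max_versions = max_versions
        self.versions = {}          # version number -> data dict
        self.current_version = 0

    def put(self, data):
        self.current_version += 1
        self.versions[self.current_version] = data
        # Prune the oldest versions beyond the retention limit
        while len(self.versions) > self.max_versions:
            del self.versions[min(self.versions)]
        return self.current_version

    def get(self, version=None):
        return self.versions[version or self.current_version]

s = KvV2Secret(max_versions=2)
s.put({"db_pass": "v1"})
s.put({"db_pass": "v2"})
s.put({"db_pass": "v3"})        # version 1 is pruned
print(s.get())                  # {'db_pass': 'v3'}
print(sorted(s.versions))       # [2, 3]
```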
Writing and Reading Secrets
Writing Secrets
# Write a simple secret with multiple key-value pairs
vault kv put secret/myapp/config \
db_host="db.internal" \
db_port="5432" \
db_user="appuser" \
db_pass="s3cur3P@ss!"
# Write from a JSON file (useful for complex secrets)
vault kv put secret/myapp/config @config.json
# Write from stdin to avoid shell history exposure
echo -n '{"api_key":"abc123"}' | vault kv put secret/myapp/apikey -
# Write with check-and-set to prevent accidental overwrites
# cas=0 means "only write if the key does not exist"
vault kv put -cas=0 secret/myapp/new-secret value="first-write"
# cas=1 means "only write if current version is 1"
vault kv put -cas=1 secret/myapp/new-secret value="second-write"
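The check-and-set semantics from the last two commands can be sketched as follows: a write carrying a `cas` value succeeds only if it matches the secret's current version, which gives you optimistic locking against concurrent writers. Illustrative model, not the real KV v2 code.

```python
# Sketch of KV v2 check-and-set (CAS) semantics.
class CasError(Exception):
    pass

class CasSecret:
    def __init__(self):
        self.version = 0            # version 0 means "does not exist yet"
        self.data = None

    def put(self, data, cas=None):
        if cas is not None and cas != self.version:
            raise CasError(f"check-and-set failed: current version is {self.version}")
        self.version += 1
        self.data = data
        return self.version

s = CasSecret()
s.put({"value": "first-write"}, cas=0)     # succeeds: key did not exist
s.put({"value": "second-write"}, cas=1)    # succeeds: version matched
try:
    s.put({"value": "stale-write"}, cas=1)  # fails: version is now 2
except CasError as e:
    print(e)
```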
Reading Secrets
# Read a full secret (displays all fields and metadata)
vault kv get secret/myapp/config
# Read as JSON (useful for scripting and automation)
vault kv get -format=json secret/myapp/config
# Read a specific field only
vault kv get -field=db_pass secret/myapp/config
# Read a specific version
vault kv get -version=2 secret/myapp/config
# Read metadata only (no secret values)
vault kv metadata get secret/myapp/config
Adding Custom Metadata
KV v2 supports custom metadata that does not count as secret data but helps with organization:
# Add custom metadata to a secret
vault kv metadata put \
-custom-metadata=owner="platform-team" \
-custom-metadata=environment="production" \
-custom-metadata=last-rotated="2026-03-23" \
secret/myapp/config
Listing, Deleting, and Destroying
# List secrets at a path
vault kv list secret/myapp/
# Soft delete (KV v2 -- can be undeleted)
vault kv delete secret/myapp/config
# Undelete a specific version
vault kv undelete -versions=3 secret/myapp/config
# Permanently destroy specific versions (cannot be undone)
vault kv destroy -versions=1,2 secret/myapp/config
# Delete all versions and metadata permanently
vault kv metadata delete secret/myapp/config
Policies
Policies are the authorization mechanism in Vault. They are written in HCL (HashiCorp Configuration Language) and define what paths a token can access and what operations it can perform. Vault uses a deny-by-default model, meaning any path not explicitly granted in a policy is denied.
Policy Syntax and Capabilities
# policy: myapp-readonly.hcl
# Allow reading secrets under the myapp path
# Note: KV v2 requires "data/" in the path
path "secret/data/myapp/*" {
capabilities = ["read", "list"]
}
# Allow listing the myapp path itself
path "secret/metadata/myapp/*" {
capabilities = ["read", "list"]
}
# Allow the app to renew its own token
path "auth/token/renew-self" {
capabilities = ["update"]
}
# Allow the app to look up its own token info
path "auth/token/lookup-self" {
capabilities = ["read"]
}
# Deny access to admin secrets explicitly
path "secret/data/admin/*" {
capabilities = ["deny"]
}
Available capabilities:
- create: Create new data at a path (HTTP POST/PUT to a path that does not exist)
- read: Read data at a path (HTTP GET)
- update: Modify data at a path (HTTP POST/PUT to a path that exists)
- delete: Delete data at a path (HTTP DELETE)
- list: List entries at a path (HTTP LIST)
- sudo: Allows access to paths that are root-protected
- deny: Explicitly denies access, overrides all other capabilities
Advanced Policy Patterns
# policy: team-lead.hcl
# Wildcard matching: allow access to any secret under team-alpha
path "secret/data/team-alpha/+" {
capabilities = ["create", "read", "update", "delete"]
}
# The + glob matches a single path segment
# secret/data/team-alpha/app1 -- matches
# secret/data/team-alpha/app1/nested -- does not match
# The * glob matches everything after
path "secret/data/team-alpha/*" {
capabilities = ["read", "list"]
}
# Templated policies using identity information
# This allows each user to manage their own secrets
path "secret/data/users/{{identity.entity.name}}/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Required parameters: restrict what values can be written
path "secret/data/production/*" {
capabilities = ["create", "update"]
required_parameters = ["owner", "ttl"]
allowed_parameters = {
"owner" = []
"ttl" = ["1h", "4h", "24h"]
"env" = ["production"]
}
denied_parameters = {
"admin_override" = []
}
}
# Min/max wrapping TTL for response wrapping
path "secret/data/sensitive/*" {
capabilities = ["read"]
min_wrapping_ttl = "1m"
max_wrapping_ttl = "30m"
}
Writing and Managing Policies
# Write a policy from a file
vault policy write myapp-readonly myapp-readonly.hcl
# Write a policy inline using a heredoc
vault policy write ci-deploy - <<'EOF'
path "secret/data/ci/*" {
capabilities = ["read", "list"]
}
path "aws/creds/ci-deploy" {
capabilities = ["read"]
}
EOF
# List all policies
vault policy list
# Read a policy
vault policy read myapp-readonly
# Delete a policy
vault policy delete myapp-readonly
# Test a policy: check what capabilities a token has on a path
vault token capabilities secret/data/myapp/config
Policy Precedence and Merging
When multiple policies are attached to a token, Vault merges capabilities using these rules:
- All capabilities from all policies are combined (union).
- The deny capability always wins, regardless of what other policies grant.
- If no policy grants access to a path, access is denied (deny-by-default).
- The root policy bypasses all checks and should never be used for regular operations.
For example, if Policy A grants ["read", "list"] on secret/data/myapp/* and Policy B grants ["create", "update"] on the same path, a token with both policies gets ["read", "list", "create", "update"]. But if Policy C has ["deny"] on that path, the token gets nothing.
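The merge rules in that example reduce to a few lines of logic: union all grants, except that a single deny empties the result. Illustrative model of the precedence rules, not Vault's implementation.

```python
# Sketch of Vault's policy capability merging: union of grants, deny wins.
def merge_capabilities(*policy_caps):
    merged = set()
    for caps in policy_caps:
        if "deny" in caps:
            return set()        # deny overrides everything else
        merged |= set(caps)
    return merged

policy_a = ["read", "list"]
policy_b = ["create", "update"]
policy_c = ["deny"]

print(sorted(merge_capabilities(policy_a, policy_b)))
# ['create', 'list', 'read', 'update']
print(sorted(merge_capabilities(policy_a, policy_b, policy_c)))
# []
```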
Authentication Methods
Authentication methods verify the identity of users and machines, then map that identity to a set of Vault policies. Vault supports many auth methods, each designed for different use cases.
Token Authentication
Tokens are Vault's core authentication primitive. Every other auth method ultimately creates a token. Understanding tokens is essential.
# Create a token with specific policies and a TTL
vault token create -policy="myapp-readonly" -ttl="1h"
# Create a token with multiple policies
vault token create \
-policy="myapp-readonly" \
-policy="monitoring-read" \
-ttl="4h" \
-display-name="ci-pipeline"
# Create an orphan token (not tied to parent token lifecycle)
vault token create -orphan -policy="myapp-readonly" -ttl="24h"
# Create a periodic token (can be renewed indefinitely)
vault token create -policy="myapp-readonly" -period="1h"
# Look up your current token details
vault token lookup
# Look up a specific token
vault token lookup -accessor "accessor_value_here"
# Renew your current token
vault token renew
# Revoke a specific token and all its children
vault token revoke hvs.tokenValueHere
Token types: Vault has two token types. Service tokens are the default and are persisted to storage, support renewal, and can create child tokens. Batch tokens are lightweight, not persisted, cannot be renewed, and are ideal for high-volume ephemeral operations.
# Create a batch token
vault token create -type=batch -policy="myapp-readonly" -ttl="1h"
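The revoke-with-children behavior of service tokens, and the orphan exception, can be modeled directly. Illustrative sketch of the token hierarchy semantics described above, not Vault's token store.

```python
# Sketch of the service-token hierarchy: revoking a token cascades to the
# tokens it created, unless a child was created as an orphan.
class Token:
    def __init__(self, parent=None, orphan=False):
        self.children = []
        self.revoked = False
        if parent is not None and not orphan:
            parent.children.append(self)   # orphans are detached from the tree

    def revoke(self):
        self.revoked = True
        for child in self.children:
            child.revoke()

root = Token()
ci = Token(parent=root)
job = Token(parent=ci)
standalone = Token(parent=ci, orphan=True)

ci.revoke()
print(job.revoked)         # True: children go down with the parent
print(standalone.revoked)  # False: orphan tokens survive
```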
Userpass Authentication
For human users during development or in smaller teams:
# Enable the userpass auth method
vault auth enable userpass
# Create a user with policies
vault write auth/userpass/users/jdoe \
password="changeme" \
policies="myapp-readonly,monitoring-read" \
token_ttl="4h" \
token_max_ttl="24h"
# Log in as the user
vault login -method=userpass username=jdoe password=changeme
# Update a user's policies
vault write auth/userpass/users/jdoe \
policies="team-lead,monitoring-read"
# Delete a user
vault delete auth/userpass/users/jdoe
# List all users
vault list auth/userpass/users
AppRole Authentication
AppRole is designed for machine-to-machine authentication. It uses a two-part credential: a role_id (like a username, relatively static) and a secret_id (like a password, should be ephemeral). This split credential model allows you to distribute the two parts through different secure channels.
# Enable AppRole
vault auth enable approle
# Create a role with specific constraints
vault write auth/approle/role/myapp \
token_policies="myapp-readonly" \
token_ttl="1h" \
token_max_ttl="4h" \
secret_id_ttl="10m" \
secret_id_num_uses=1 \
token_num_uses=0 \
bind_secret_id=true
# Get the role ID (this is stable and can be baked into config)
vault read auth/approle/role/myapp/role-id
# Generate a secret ID (this is ephemeral)
vault write -f auth/approle/role/myapp/secret-id
# Generate a secret ID with metadata for audit tracking
vault write auth/approle/role/myapp/secret-id \
metadata="env=production,pipeline=deploy"
# Log in with AppRole
vault write auth/approle/login \
role_id="your-role-id-here" \
secret_id="your-secret-id-here"
The secret_id_num_uses=1 setting ensures each secret ID can only be used once, limiting the blast radius of a compromised credential. The secret_id_ttl="10m" means the secret ID expires even if it is not used, providing a second layer of protection.
LDAP Authentication
For organizations with existing directory services:
# Enable LDAP auth
vault auth enable ldap
# Configure the LDAP connection
vault write auth/ldap/config \
url="ldaps://ldap.example.com:636" \
userattr="sAMAccountName" \
userdn="ou=Users,dc=example,dc=com" \
groupdn="ou=Groups,dc=example,dc=com" \
groupattr="cn" \
binddn="cn=vault-bind,ou=ServiceAccounts,dc=example,dc=com" \
bindpass="bind-password-here" \
certificate=@ldap-ca.crt \
insecure_tls=false \
starttls=false
# Map an LDAP group to Vault policies
vault write auth/ldap/groups/devops policies="devops-admin,monitoring-read"
vault write auth/ldap/groups/developers policies="dev-readonly"
# Log in with LDAP
vault login -method=ldap username=jdoe
OIDC Authentication
For single sign-on with identity providers like Okta, Azure AD, or Google Workspace:
# Enable OIDC auth
vault auth enable oidc
# Configure OIDC with your provider
vault write auth/oidc/config \
oidc_discovery_url="https://accounts.google.com" \
oidc_client_id="your-client-id" \
oidc_client_secret="your-client-secret" \
default_role="default"
# Create a role that maps OIDC claims to policies
vault write auth/oidc/role/default \
allowed_redirect_uris="https://vault.example.com:8200/ui/vault/auth/oidc/oidc/callback" \
allowed_redirect_uris="http://localhost:8250/oidc/callback" \
user_claim="email" \
groups_claim="groups" \
policies="default" \
ttl="4h"
# Log in via OIDC (opens a browser window)
vault login -method=oidc
Vault CLI Essentials
Here is a comprehensive reference for the most-used CLI commands:
# Server management
vault status # Check server status
vault operator init # Initialize a new Vault
vault operator unseal # Provide an unseal key
vault operator seal # Seal the Vault
vault operator step-down # Force leader to step down (HA)
# Secret engine management
vault secrets list # List enabled engines
vault secrets enable -path=kv kv # Enable an engine
vault secrets disable kv/ # Disable an engine
vault secrets tune -max-lease-ttl=24h kv/ # Tune engine settings
# Auth method management
vault auth list # List enabled auth methods
vault auth enable approle # Enable an auth method
vault auth disable approle/ # Disable an auth method
# KV v2 secret operations
vault kv put secret/path key=value # Write a secret
vault kv get secret/path # Read a secret
vault kv get -field=key secret/path # Read a specific field
vault kv list secret/ # List secrets at a path
vault kv delete secret/path # Soft delete
vault kv undelete -versions=1 secret/path # Restore a soft-deleted version
vault kv destroy -versions=1 secret/path # Permanently destroy
vault kv metadata get secret/path # View metadata
# Token management
vault token lookup # Look up current token
vault token renew # Renew current token
vault token revoke hvs.tokenHere # Revoke a token
vault token capabilities secret/path # Check capabilities
# Lease management
vault lease lookup lease-id-here # Look up a lease
vault lease renew lease-id-here # Renew a lease
vault lease revoke lease-id-here # Revoke a lease
vault lease revoke -prefix secret/ # Revoke all leases under a prefix
# Policy management
vault policy list # List all policies
vault policy read policy-name # Read a policy
vault policy write name file.hcl # Write a policy from file
vault policy delete policy-name # Delete a policy
# Audit management
vault audit list # List audit devices
vault audit enable file file_path=/var/log/vault-audit.log
vault audit disable file/ # Disable an audit device
Environment Variables
# Essential environment variables
export VAULT_ADDR='https://vault.example.com:8200' # Vault server address
export VAULT_TOKEN='hvs.yourTokenHere' # Authentication token
export VAULT_CACERT='/path/to/ca.crt' # CA cert for TLS verification
export VAULT_CLIENT_CERT='/path/to/client.crt' # Client cert for mTLS
export VAULT_CLIENT_KEY='/path/to/client.key' # Client key for mTLS
export VAULT_SKIP_VERIFY='false' # Never set to true in production
export VAULT_FORMAT='json' # Default output format
export VAULT_NAMESPACE='admin' # Namespace (Enterprise)
UI Overview
Vault ships with a built-in web UI accessible at the server address (e.g., https://vault.example.com:8200/ui). The UI provides a graphical interface for:
- Browsing and editing secrets in any mounted engine
- Managing policies with a built-in syntax-highlighted editor
- Enabling and configuring auth methods
- Viewing and revoking leases
- Wrapping and unwrapping response-wrapped tokens
- Monitoring Vault status and replication state
For the dev server, log into the UI using the root token. In production, configure an OIDC or LDAP auth method so operators can log in with their existing corporate credentials instead of managing separate Vault usernames.
Audit Logging Setup
Even during initial setup, enabling audit logging should be one of your first actions after unsealing:
# Enable file-based audit logging
vault audit enable file file_path=/var/log/vault/audit.log
# Enable a second audit device for redundancy
vault audit enable -path=syslog syslog tag="vault" facility="AUTH"
# Verify audit devices are enabled
vault audit list -detailed
Every request to Vault is now logged with the identity of the caller, the operation performed, the path accessed, and a timestamp. Secret values in the audit log are HMAC-hashed, so the log itself does not contain plaintext secrets but can still be used to correlate access patterns.
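The HMAC hashing of audit log values works roughly like this: each value is replaced by an HMAC-SHA256 under a per-device salt, so identical secrets produce identical log entries (enabling correlation) without ever exposing the plaintext. The `hmac-sha256:` prefix mirrors Vault's audit log format; the salt handling here is simplified for illustration.

```python
# Sketch of audit-device value hashing: correlate, never reveal.
import hashlib, hmac, os

audit_salt = os.urandom(32)   # per-audit-device key, never written to the log

def audit_hash(value: str) -> str:
    digest = hmac.new(audit_salt, value.encode(), hashlib.sha256).hexdigest()
    return f"hmac-sha256:{digest}"

a = audit_hash("s3cur3P@ss!")
b = audit_hash("s3cur3P@ss!")
c = audit_hash("other-secret")
print(a == b)   # True: same secret yields the same log entry
print(a == c)   # False: different secrets are distinguishable
```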
Practical Example: Complete Application Secrets Workflow
Let us walk through a complete workflow for a web application that needs database credentials, an API key, and an encryption key.
# 1. Start dev server (for this example only)
vault server -dev -dev-root-token-id="root"
# 2. Set environment
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='root'
# 3. Enable audit logging (good practice even in dev)
vault audit enable file file_path=/tmp/vault-audit.log
# 4. Store the application's secrets
vault kv put secret/webapp/production \
db_url="postgresql://db.prod.internal:5432/webapp" \
db_username="webapp_svc" \
db_password="Pr0d-P@ssw0rd-2026!" \
stripe_api_key="sk_live_abc123def456" \
jwt_secret="super-secret-jwt-signing-key" \
encryption_key="aes256-base64-encoded-key-here"
# 5. Add metadata for tracking
vault kv metadata put \
  -custom-metadata=owner="platform-team" \
  -custom-metadata=environment="production" \
  -custom-metadata=rotation-schedule="quarterly" \
  secret/webapp/production
# 6. Create a read-only policy for the app
vault policy write webapp-prod - <<'EOF'
path "secret/data/webapp/production" {
  capabilities = ["read"]
}
path "secret/metadata/webapp/production" {
  capabilities = ["read"]
}
path "auth/token/renew-self" {
  capabilities = ["update"]
}
path "auth/token/lookup-self" {
  capabilities = ["read"]
}
EOF
# 7. Create an AppRole for the app
vault auth enable approle
vault write auth/approle/role/webapp-prod \
  token_policies="webapp-prod" \
  token_ttl="30m" \
  token_max_ttl="2h" \
  secret_id_ttl="10m" \
  secret_id_num_uses=1
# 8. Retrieve credentials for the app
ROLE_ID=$(vault read -field=role_id auth/approle/role/webapp-prod/role-id)
SECRET_ID=$(vault write -f -field=secret_id auth/approle/role/webapp-prod/secret-id)
# 9. The app authenticates and reads secrets
APP_TOKEN=$(vault write -field=token auth/approle/login \
  role_id="$ROLE_ID" secret_id="$SECRET_ID")
VAULT_TOKEN=$APP_TOKEN vault kv get -format=json secret/webapp/production
# 10. Verify the access was logged (each audit entry is one JSON object per line,
#     so pretty-print a single line rather than piping the whole file)
tail -n 1 /tmp/vault-audit.log | python3 -m json.tool
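Since each audit line is a standalone JSON object, a short script can summarize who performed which operation on which path. This sketch assumes the `type`/`auth`/`request` fields of a standard request entry; the sample line below is fabricated for illustration:

```python
import json

# A simplified, fabricated audit entry in the general shape the file device emits
sample_line = json.dumps({
    "time": "2026-03-23T10:00:00Z",
    "type": "request",
    "auth": {"display_name": "approle", "policies": ["webapp-prod"]},
    "request": {"operation": "read", "path": "secret/data/webapp/production"},
})

def summarize(line: str) -> str:
    # Pull the caller identity, operation, and path out of one audit entry
    entry = json.loads(line)
    who = entry.get("auth", {}).get("display_name", "unknown")
    op = entry["request"]["operation"]
    path = entry["request"]["path"]
    return f"{entry['time']} {who} {op} {path}"

print(summarize(sample_line))
# -> 2026-03-23T10:00:00Z approle read secret/data/webapp/production
```

Running a loop like this over the full log is the starting point for the access-pattern correlation described above.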
Integrating with Application Code
Here is how an application would use the Vault API directly in different languages:
# Using the HTTP API with curl
curl --header "X-Vault-Token: ${APP_TOKEN}" \
  --request GET \
  "${VAULT_ADDR}/v1/secret/data/webapp/production" | jq '.data.data'
# Using the Vault Agent to provide secrets as a file
# The agent authenticates and renders templates automatically
# See the Kubernetes integration article for details
For Python applications using the hvac library:
import hvac
import os
client = hvac.Client(
    url=os.environ['VAULT_ADDR'],
    token=os.environ['VAULT_TOKEN'],
)
secret = client.secrets.kv.v2.read_secret_version(
    path='webapp/production',
    mount_point='secret',
)
db_password = secret['data']['data']['db_password']
For Node.js applications using node-vault:
const vault = require("node-vault")({
  apiVersion: "v1",
  endpoint: process.env.VAULT_ADDR,
  token: process.env.VAULT_TOKEN,
});

// Inside an async function (or an ES module with top-level await):
const { data } = await vault.read("secret/data/webapp/production");
const dbPassword = data.data.db_password;
Response Wrapping
Response wrapping lets you deliver a secret securely to a system that needs it exactly once. Instead of returning the secret directly, Vault wraps it in a single-use token with a short TTL; only the holder of that token can retrieve the secret, and only one time.
# Wrap a secret read in a single-use token valid for 5 minutes
vault kv get -wrap-ttl=5m secret/webapp/production
# Output:
# Key                             Value
# ---                             -----
# wrapping_token                  hvs.wrappedTokenHere
# wrapping_accessor               abc123
# wrapping_token_ttl              5m
# wrapping_token_creation_time    2026-03-23T10:00:00Z
# The recipient unwraps to get the actual secret
VAULT_TOKEN=hvs.wrappedTokenHere vault unwrap
# The wrapping token is now invalid and cannot be used again
This is particularly useful for initial secret delivery during application bootstrapping. You generate a wrapped token, pass it to the new application through a secure channel, and the application unwraps it to get its initial credentials.
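Over the raw HTTP API, unwrapping is a POST to `sys/wrapping/unwrap`, authenticated with the wrapping token itself. A minimal standard-library Python sketch of how a bootstrapping application would build that call (the address and token are placeholders, and no request is actually sent here):

```python
import urllib.request

def build_unwrap_request(vault_addr: str, wrapping_token: str) -> urllib.request.Request:
    # The single-use wrapping token is presented as the caller's token;
    # the JSON response body's "data" field holds the original secret.
    return urllib.request.Request(
        url=f"{vault_addr}/v1/sys/wrapping/unwrap",
        method="POST",
        headers={"X-Vault-Token": wrapping_token},
    )

req = build_unwrap_request("http://127.0.0.1:8200", "hvs.wrappedTokenHere")
print(req.get_method(), req.full_url)
# To actually unwrap, send it with urllib.request.urlopen(req) and read
# the "data" field of the JSON response. A second attempt with the same
# wrapping token fails, because the token is consumed on first use.
```

The single-use property is what makes this safe for bootstrapping: if the wrapped token is intercepted and unwrapped by an attacker, the legitimate application's unwrap fails, which itself signals the compromise.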
Next Steps
Once you are comfortable with static secrets, policies, and authentication methods, move on to dynamic secrets. With dynamic secrets, Vault generates short-lived database credentials, AWS IAM users, and TLS certificates on demand, eliminating the most dangerous class of secret: the long-lived credential that nobody remembers to rotate. The combination of dynamic secrets, fine-grained policies, and comprehensive audit logging transforms Vault from a password manager into a complete security infrastructure platform.
DevSecOps Lead
Security-first mindset in everything I ship. From zero-trust architectures to supply chain security, I make sure your pipeline doesn't become your weakest link.
Related Articles
Vault with Kubernetes: Injecting Secrets into Pods
Inject HashiCorp Vault secrets into Kubernetes pods using the Agent Injector and CSI provider — with practical examples for database credentials and TLS certificates.
HashiCorp Vault and Kubernetes: Secrets Management That Actually Works
Integrate HashiCorp Vault with Kubernetes to eliminate static secrets from your cluster — with working manifests, threat models, and pipeline automation.
Vault Dynamic Secrets: Short-Lived Credentials on Demand
Generate short-lived database credentials, AWS IAM roles, and PKI certificates with Vault dynamic secrets — eliminating long-lived credentials from your infrastructure.