Docker Agents in Jenkins: Reproducible Builds Every Time
The classic Jenkins setup -- a controller with a handful of static agents -- breaks down quickly. Agents accumulate state from previous builds, different projects need conflicting tool versions, and debugging "works on my agent" issues wastes hours. Docker agents fix all of this by spinning up a fresh container for every build or stage, running the work, and throwing the container away.
This guide covers everything you need to run Jenkins builds inside Docker containers: plugin setup, basic and advanced agent configurations, custom Dockerfiles, caching strategies, Docker-in-Docker versus socket mounting, Kaniko for daemonless builds, sidecar containers for integration tests, multi-stage builds, Kubernetes pod agents, security hardening, and troubleshooting the issues you will inevitably hit.
Why Docker Agents
Static Jenkins agents suffer from several problems that get worse as your team and project portfolio grow:
| Problem | Static Agents | Docker Agents |
|---|---|---|
| State pollution | Build artifacts and caches from previous builds leak into new ones | Fresh container every time -- no leftover state |
| Tool version conflicts | Project A needs Node 18, Project B needs Node 20 | Each project specifies its own image |
| Snowflake agents | Manual tool installs cause configuration drift | Image is the single source of truth |
| Scaling | Adding capacity means provisioning VMs, installing tools, connecting them | Start more containers -- Docker handles scheduling |
| Reproducibility | "Works on my agent" issues | Same image, same result, every time |
| Security isolation | Builds share the same file system and processes | Container isolation between builds |
Docker agents eliminate these issues. Each build gets a pristine container from a defined image. When the build ends, the container is destroyed. Need a different toolchain? Change the image name. Need more capacity? The Docker host handles container scheduling.
When NOT to Use Docker Agents
Docker agents are not always the right choice:
- Builds that need GPU access -- GPU passthrough to containers is possible but adds complexity
- Windows builds -- Windows Docker containers have significant limitations compared to Linux
- Builds that need persistent local state -- Some tools (like Bazel) benefit from persistent caches that are harder to maintain with ephemeral containers
- Very small teams with simple needs -- The overhead of maintaining Docker images may not be worth it for a team of three with two Node.js projects
Docker Plugin Setup
You need the Docker Pipeline plugin installed on your Jenkins controller. This is usually included in the suggested plugins during initial setup, but verify it:
- Go to Manage Jenkins, then Plugins, then Installed plugins
- Search for "Docker Pipeline" (artifact ID:
docker-workflow) - If not installed, go to Available plugins and install it
The Jenkins controller (or the node running the pipeline) needs Docker installed and the Jenkins user must have access to the Docker socket:
# On the Jenkins host
sudo apt-get update
sudo apt-get install -y docker.io
# Add the jenkins user to the docker group
sudo usermod -aG docker jenkins
# Restart Jenkins to pick up the group change
sudo systemctl restart jenkins
# Verify Docker access
sudo -u jenkins docker ps
For Docker-based Jenkins installations, mount the Docker socket into the Jenkins container:
# docker-compose.yml
services:
  jenkins:
    image: jenkins/jenkins:lts-jdk17
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    # Either run as root or match the Docker group GID
    user: root
volumes:
  jenkins_home:
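One caveat worth noting: the stock jenkins/jenkins image does not bundle the docker CLI, so mounting the socket alone is not enough -- the container also needs a docker binary (bake it into a custom image or install it at startup). A quick sanity check that the socket mount itself is in place:

```bash
# Verify the socket is visible inside the Jenkins container
docker-compose exec jenkins ls -l /var/run/docker.sock

# If a docker CLI is present in the image, this should list the host's containers
docker-compose exec jenkins docker ps
```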
Verifying the Setup
After installation, test Docker access from a pipeline:
pipeline {
agent {
docker { image 'alpine:3.19' }
}
stages {
stage('Test') {
steps {
sh 'echo "Docker agent works!"'
sh 'cat /etc/os-release'
sh 'whoami'
}
}
}
}
If this pipeline succeeds, your Docker agent setup is working.
Basic Docker Agent Usage
The simplest form: run an entire pipeline in a Docker container.
pipeline {
agent {
docker {
image 'node:20-alpine'
}
}
stages {
stage('Info') {
steps {
sh 'node --version'
sh 'npm --version'
sh 'whoami'
sh 'pwd'
}
}
stage('Install') {
steps {
sh 'npm ci'
}
}
stage('Test') {
steps {
sh 'npm test'
}
}
stage('Build') {
steps {
sh 'npm run build'
}
}
}
}
Jenkins pulls the node:20-alpine image (if not cached), starts a container, mounts the workspace into the container, runs all stages inside it, and destroys the container when done. The workspace mount means your source code is available inside the container automatically.
How Jenkins Docker Agents Work Internally
Understanding what happens behind the scenes helps with debugging:
- Jenkins checks out your code to the workspace on the host
- Jenkins runs `docker pull node:20-alpine` (if not cached)
- Jenkins runs something like: `docker run -d -v /workspace:/workspace -w /workspace node:20-alpine cat`
- For each `sh` step, Jenkins runs: `docker exec container-id sh -c 'your-command'`
- When the pipeline ends, Jenkins runs: `docker stop container-id && docker rm container-id`
The workspace is bind-mounted from the host, so files persist between steps within the same stage or pipeline. But the container itself is ephemeral.
Docker Agent Options
agent {
docker {
image 'node:20-alpine'
label 'docker-host' // Run on nodes with this label
args '-v /tmp:/tmp -e FOO=bar' // Additional docker run arguments
registryUrl 'https://registry.example.com'
registryCredentialsId 'docker-reg-creds'
reuseNode true // Reuse the workspace node instead of allocating a new one
alwaysPull true // Always pull the latest image
}
}
Docker Run Arguments Reference
The args parameter passes arbitrary flags to docker run. These are the flags you will use most often:
| Flag | Purpose | Example |
|---|---|---|
| `-v /host:/container` | Mount host directories | `-v /tmp/.npm:/root/.npm` |
| `-e VAR=value` | Set environment variables | `-e NODE_ENV=ci` |
| `--network host` | Use host networking | `--network host` |
| `--network my-net` | Use a custom Docker network | `--network jenkins-build-net` |
| `--memory 4g` | Limit container memory | `--memory 4g` |
| `--cpus 2` | Limit CPU usage | `--cpus 2` |
| `-u root` | Run as root inside the container | `-u root` |
| `-u 1000:1000` | Run as specific UID/GID | `-u 1000:1000` |
| `--dns 8.8.8.8` | Set DNS servers | `--dns 8.8.8.8 --dns 8.8.4.4` |
| `--add-host` | Add host entries | `--add-host db:192.168.1.100` |
| `--tmpfs /tmp` | Mount a tmpfs filesystem | `--tmpfs /tmp:rw,size=1g` |
| `--shm-size 2g` | Set shared memory size (useful for Chrome/Playwright) | `--shm-size 2g` |
Per-Stage Docker Agents
Different stages often need different tools. Use per-stage agents with agent none at the top level:
pipeline {
agent none
stages {
stage('Build Frontend') {
agent {
docker { image 'node:20-alpine' }
}
steps {
dir('frontend') {
sh 'npm ci'
sh 'npm run build'
}
stash includes: 'frontend/dist/**', name: 'frontend-build'
}
}
stage('Build Backend') {
agent {
docker { image 'golang:1.22-alpine' }
}
steps {
dir('backend') {
sh 'go build -o ../app ./cmd/server'
}
stash includes: 'app', name: 'backend-build'
}
}
stage('Build Docker Image') {
agent { label 'docker-host' }
steps {
unstash 'frontend-build'
unstash 'backend-build'
sh 'docker build -t my-app:${BUILD_NUMBER} .'
}
}
}
}
Use stash/unstash to pass artifacts between stages running on different agents. Keep stash sizes small -- they are stored on the controller and large stashes can cause performance issues.
Custom Dockerfiles for Build Agents
Pre-built images from Docker Hub rarely have everything you need. Custom Dockerfiles let you create tailored build environments that include exactly the tools your project requires.
Basic Custom Build Image
# ci/Dockerfile
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine
# Install additional tools
RUN apk add --no-cache \
git \
python3 \
make \
g++ \
curl \
jq \
bash \
openssh-client
# Install specific npm global packages
RUN npm install -g pnpm@9 typescript@5
# Install Chrome for E2E tests
RUN apk add --no-cache chromium chromium-chromedriver \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont
ENV CHROME_BIN=/usr/bin/chromium-browser
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# Non-root user for security
RUN adduser -D -u 1000 builder
USER builder
WORKDIR /home/builder/app
Reference it in your Jenkinsfile:
pipeline {
agent {
dockerfile {
filename 'ci/Dockerfile'
additionalBuildArgs '--build-arg NODE_VERSION=20'
args '-v /tmp/.npm:/home/builder/.npm' // Cache npm packages
}
}
stages {
stage('Install') {
steps {
sh 'pnpm install --frozen-lockfile'
}
}
stage('Lint') {
steps {
sh 'pnpm run lint'
}
}
stage('Test') {
steps {
sh 'pnpm test'
}
}
stage('Build') {
steps {
sh 'pnpm run build'
}
}
}
}
Jenkins builds the Dockerfile the first time and caches the image. Subsequent builds reuse the cached image unless the Dockerfile changes.
Multi-Tool Build Image for Monorepos
For monorepo projects that need multiple languages in a single build:
# ci/Dockerfile.multi
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
# Base tools
RUN apt-get update && apt-get install -y \
curl git wget unzip jq \
build-essential \
ca-certificates \
gnupg \
&& rm -rf /var/lib/apt/lists/*
# Node.js 20
RUN mkdir -p /etc/apt/keyrings \
&& curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key \
| gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
&& echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" \
| tee /etc/apt/sources.list.d/nodesource.list \
&& apt-get update && apt-get install -y nodejs \
&& rm -rf /var/lib/apt/lists/*
# Go 1.22
RUN wget -q https://go.dev/dl/go1.22.0.linux-amd64.tar.gz \
&& tar -C /usr/local -xzf go1.22.0.linux-amd64.tar.gz \
&& rm go1.22.0.linux-amd64.tar.gz
ENV PATH="/usr/local/go/bin:${PATH}"
ENV GOPATH="/go"
ENV PATH="${GOPATH}/bin:${PATH}"
# Python 3.11
RUN apt-get update && apt-get install -y \
python3.11 python3-pip python3-venv \
&& rm -rf /var/lib/apt/lists/*
# kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x kubectl && mv kubectl /usr/local/bin/
# Helm
RUN curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# AWS CLI v2
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
&& unzip awscliv2.zip && ./aws/install && rm -rf aws awscliv2.zip
# Create non-root build user
RUN useradd -m -u 1000 builder
USER builder
WORKDIR /home/builder/workspace
This is a bigger image (typically 1-2 GB), but it eliminates the need for per-stage agents in monorepo pipelines. The trade-off: build time of the image itself versus pipeline complexity.
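Using the multi-tool image works the same way as any other dockerfile agent; a minimal sketch, assuming the file lives at ci/Dockerfile.multi as above:

```groovy
pipeline {
    agent {
        dockerfile {
            filename 'ci/Dockerfile.multi'
        }
    }
    stages {
        stage('Toolchain Check') {
            steps {
                // All toolchains are available in one container
                sh 'node --version && go version && python3 --version'
            }
        }
    }
}
```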
Best Practices for Build Images
| Practice | Reason |
|---|---|
| Pin base image versions (`node:20.11-alpine`, not `node:latest`) | Reproducibility -- `latest` can change without warning |
| Use `--no-cache` flags in package managers | Smaller image size |
| Combine `RUN` commands with `&&` | Fewer layers, smaller image |
| Run as non-root user | Security -- limit blast radius of container escape |
| Include `git` in the image | Jenkins needs it for checkout operations |
| Keep images in a private registry | Faster pulls, no Docker Hub rate limits |
| Tag images with a version and rebuild periodically | Security patches in base images |
Pre-building and Publishing Build Images
Instead of building from a Dockerfile every time, pre-build and push your CI images:
// Jenkinsfile for building the CI image itself
pipeline {
agent { label 'docker' }
triggers {
cron('H 4 * * 1') // Rebuild weekly to pick up security patches
}
stages {
stage('Build CI Image') {
steps {
sh '''
docker build \
-t registry.example.com/ci-images/node-ci:20 \
-f ci/Dockerfile \
.
docker push registry.example.com/ci-images/node-ci:20
'''
}
}
}
}
Then reference the pre-built image in application pipelines:
agent {
docker {
image 'registry.example.com/ci-images/node-ci:20'
registryUrl 'https://registry.example.com'
registryCredentialsId 'registry-creds'
}
}
This is faster than building from a Dockerfile on every pipeline run and ensures consistency across all projects.
Mounting Volumes for Caching
Docker agents start clean every time, which is great for reproducibility but terrible for build speed. Without caching, every build downloads all dependencies from scratch. Mount volumes to cache package manager data:
pipeline {
agent {
docker {
image 'node:20-alpine'
args '-v npm-cache:/root/.npm -v pnpm-store:/root/.local/share/pnpm/store'
}
}
stages {
stage('Install') {
steps {
sh 'npm ci' // Uses cached packages from npm-cache volume
}
}
}
}
Language-Specific Cache Mounts
| Language/Tool | Cache Directory | Volume Mount |
|---|---|---|
| npm | ~/.npm | -v npm-cache:/root/.npm |
| pnpm | ~/.local/share/pnpm/store | -v pnpm-store:/root/.local/share/pnpm/store |
| Yarn | ~/.cache/yarn | -v yarn-cache:/root/.cache/yarn |
| Go modules | /go/pkg/mod | -v go-mod-cache:/go/pkg/mod |
| Go build cache | ~/.cache/go-build | -v go-build-cache:/root/.cache/go-build |
| Maven | ~/.m2/repository | -v maven-cache:/root/.m2/repository |
| Gradle | ~/.gradle/caches | -v gradle-cache:/root/.gradle/caches |
| pip | ~/.cache/pip | -v pip-cache:/root/.cache/pip |
| Rust/Cargo | ~/.cargo/registry | -v cargo-cache:/root/.cargo/registry |
| Composer (PHP) | ~/.cache/composer | -v composer-cache:/root/.cache/composer |
Use Docker named volumes rather than bind mounts to host directories. Named volumes are managed by Docker and work across container restarts without permission issues. Bind mounts require you to manage permissions manually and can cause UID/GID conflicts.
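Named volumes are created implicitly on first use, but you can also create and inspect them explicitly:

```bash
# Create the cache volume up front (docker run would also create it on demand)
docker volume create npm-cache

# Docker manages the backing directory and its permissions
docker volume inspect npm-cache
```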
Cache Cleanup
Named volumes grow over time. Set up periodic cleanup:
#!/bin/bash
# cleanup-build-caches.sh
# Run weekly via cron
# List volumes and their sizes
docker system df -v 2>/dev/null | grep -A 100 "Local Volumes"
# Remove all volumes not currently attached to a container
# (on Docker 23+, named volumes are skipped unless you add -a)
docker volume prune -af
# Or selectively remove specific caches
# docker volume rm npm-cache go-mod-cache
Docker-in-Docker vs Docker Socket Mounting
When your pipeline needs to build Docker images inside a Docker agent, you have two choices. This is one of the most common decision points in Jenkins Docker setups.
Docker Socket Mounting (Recommended)
Mount the host's Docker socket into the build container:
agent {
docker {
image 'docker:24-cli'
args '-v /var/run/docker.sock:/var/run/docker.sock'
}
}
How it works: The build container uses the host's Docker daemon to build images. The docker commands inside the container talk to the same daemon that manages the container itself.
| Aspect | Details |
|---|---|
| Setup complexity | Simple -- just mount the socket |
| Image caching | Shared with host -- images are cached across builds |
| Performance | Fast -- no nested virtualization overhead |
| Security | Container can see and manipulate all containers on the host |
| Privileged mode | Not required |
| Network access | Built images can access host network |
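Because the container talks to the host daemon, commands run inside it see the host's image cache and containers. A throwaway pipeline using the socket-mounted agent above makes this visible -- its docker ps output includes the agent container itself:

```groovy
pipeline {
    agent {
        docker {
            image 'docker:24-cli'
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Verify Shared Daemon') {
            steps {
                sh 'docker version'  // client in the container, daemon on the host
                sh 'docker ps'       // lists the host's containers, including this one
            }
        }
    }
}
```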
Docker-in-Docker (DinD)
Run a full Docker daemon inside the build container:
agent {
docker {
image 'docker:24-dind'
args '--privileged' // Required for DinD
}
}
| Aspect | Details |
|---|---|
| Setup complexity | Moderate -- requires privileged mode and storage driver configuration |
| Image caching | Isolated -- no cache reuse between builds by default |
| Performance | Slower -- nested daemon overhead, no shared cache |
| Security | Complete isolation from host Docker, but requires --privileged |
| Privileged mode | Required (significant security concern) |
| Network access | Isolated from host network |
Side-by-Side Comparison
| Factor | Socket Mount | DinD |
|---|---|---|
| Build speed | Fast (shared cache) | Slower (cold cache) |
| Isolation | Low (shared daemon) | High (separate daemon) |
| Security risk | Medium (socket access) | High (privileged mode) |
| Multi-tenant safety | Not suitable | Better (but still risky) |
| Debugging ease | Easy (host tools work) | Harder (nested Docker) |
| Storage usage | Shared with host | Additional storage per build |
Recommendation: Use Docker socket mounting for most cases. The security surface is manageable with proper agent isolation. Use DinD only when you genuinely need full isolation, such as in multi-tenant environments where builds from different teams must not see each other's containers.
Rootless Docker Socket Mounting
For improved security with socket mounting, use rootless Docker:
# Install rootless Docker (on the Jenkins host)
dockerd-rootless-setuptool.sh install
# Mount the rootless socket instead
docker run -d \
-v /run/user/1000/docker.sock:/var/run/docker.sock \
jenkins/jenkins:lts-jdk17
This limits the blast radius of container escapes because the Docker daemon itself runs without root privileges.
Kaniko: Building Images Without Docker
Kaniko builds Docker images from a Dockerfile without needing a Docker daemon at all. It runs entirely in userspace, making it ideal for:
- Kubernetes-based Jenkins agents where you cannot mount the Docker socket
- Environments where Docker socket access is prohibited by security policy
- CI environments that need to build images without privileged containers
Kaniko on Kubernetes
pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:debug
command: ['sleep']
args: ['infinity']
volumeMounts:
- name: docker-config
mountPath: /kaniko/.docker
- name: node
image: node:20-alpine
command: ['sleep']
args: ['infinity']
volumes:
- name: docker-config
secret:
secretName: docker-registry-creds
items:
- key: .dockerconfigjson
path: config.json
'''
}
}
stages {
stage('Test') {
steps {
container('node') {
sh 'npm ci && npm test'
}
}
}
stage('Build and Push') {
steps {
container('kaniko') {
sh '''
/kaniko/executor \
--context=dir://${WORKSPACE} \
--destination=registry.example.com/my-app:${BUILD_NUMBER} \
--destination=registry.example.com/my-app:latest \
--cache=true \
--cache-repo=registry.example.com/my-app/cache \
--snapshot-mode=redo \
--use-new-run
'''
}
}
}
}
}
Kaniko with Docker-based Jenkins
For non-Kubernetes Jenkins, you can still use Kaniko as a Docker container:
stage('Build with Kaniko') {
steps {
withCredentials([file(credentialsId: 'docker-config-json', variable: 'DOCKER_CONFIG')]) {
sh """
docker run --rm \
-v \${WORKSPACE}:/workspace \
-v \${DOCKER_CONFIG}:/kaniko/.docker/config.json:ro \
gcr.io/kaniko-project/executor:latest \
--context=/workspace \
--destination=registry.example.com/my-app:${BUILD_NUMBER} \
--cache=true \
--cache-repo=registry.example.com/my-app/cache
"""
}
}
}
Kaniko Options Reference
| Flag | Purpose |
|---|---|
| `--context` | Build context directory |
| `--dockerfile` | Path to Dockerfile (default: `Dockerfile` in context) |
| `--destination` | Registry destination (can specify multiple times) |
| `--cache` | Enable layer caching |
| `--cache-repo` | Repository for cached layers |
| `--cache-ttl` | Cache time-to-live (default: 336h / 14 days) |
| `--snapshot-mode=redo` | Faster snapshots (may miss some file changes) |
| `--use-new-run` | Improved RUN command handling |
| `--skip-tls-verify` | Skip TLS verification (insecure registries) |
| `--build-arg` | Pass build arguments |
| `--target` | Build a specific stage in a multi-stage Dockerfile |
Kaniko supports layer caching via a remote registry (--cache=true --cache-repo=...), which helps with build speed even though there is no local Docker cache.
Sidecar Containers for Integration Tests
Some tests need services running alongside the build -- a database, a Redis instance, a message queue, or a mock API. Docker provides several ways to run sidecar containers.
Using docker.image().withRun()
pipeline {
agent { label 'docker' }
stages {
stage('Integration Tests') {
steps {
script {
docker.image('postgres:16-alpine').withRun(
'-e POSTGRES_PASSWORD=testpass ' +
'-e POSTGRES_DB=testdb ' +
'-e POSTGRES_USER=testuser'
) { db ->
docker.image('redis:7-alpine').withRun('') { redis ->
docker.image('node:20-alpine').inside(
"--link ${db.id}:postgres " +
"--link ${redis.id}:redis " +
"-e DATABASE_URL=postgresql://testuser:testpass@postgres:5432/testdb " +
"-e REDIS_URL=redis://redis:6379"
) {
sh 'npm ci'
sh 'npm run db:migrate'
sh 'npm run test:integration'
}
}
}
}
}
}
}
}
The withRun method starts a container in the background. The inside method runs commands in a container linked to the sidecars. When the block exits, all containers are stopped and removed automatically.
Using Docker Compose
For more complex service topologies, Docker Compose is often more readable and maintainable:
# docker-compose.test.yml
version: "3.9"
services:
postgres:
image: postgres:16-alpine
environment:
POSTGRES_DB: testdb
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
healthcheck:
test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 5
app:
build:
context: .
dockerfile: ci/Dockerfile
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
environment:
DATABASE_URL: postgresql://testuser:testpass@postgres:5432/testdb
REDIS_URL: redis://redis:6379
NODE_ENV: test
command: sh -c "npm ci && npm run db:migrate && npm run test:integration"
stage('Integration Tests') {
agent { label 'docker-host' }
steps {
sh 'docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit --exit-code-from app'
}
post {
always {
sh 'docker-compose -f docker-compose.test.yml logs --no-color > integration-logs.txt'
archiveArtifacts artifacts: 'integration-logs.txt', allowEmptyArchive: true
sh 'docker-compose -f docker-compose.test.yml down -v --remove-orphans'
}
}
}
Using Docker Networks
For better control over service discovery without --link (which is deprecated):
stage('Integration Tests') {
steps {
script {
def networkName = "jenkins-${env.BUILD_TAG}".replaceAll('[^a-zA-Z0-9_.-]', '-')
try {
sh "docker network create ${networkName}"
// Start services
sh """
docker run -d --name postgres-${BUILD_NUMBER} \
--network ${networkName} \
--network-alias postgres \
-e POSTGRES_PASSWORD=test \
postgres:16-alpine
"""
// Wait for Postgres to be ready
sh """
for i in \$(seq 1 30); do
docker exec postgres-${BUILD_NUMBER} pg_isready && break
sleep 1
done
"""
// Run tests
docker.image('node:20-alpine').inside(
"--network ${networkName} " +
"-e DATABASE_URL=postgresql://postgres:test@postgres:5432/postgres"
) {
sh 'npm ci && npm run test:integration'
}
} finally {
sh "docker rm -f postgres-${BUILD_NUMBER} || true"
sh "docker network rm ${networkName} || true"
}
}
}
}
Multi-Stage Builds in Pipelines
Combine Docker multi-stage builds with Jenkins pipelines for efficient image creation:
pipeline {
agent { label 'docker-host' }
environment {
REGISTRY = 'registry.example.com'
IMAGE = 'my-service'
TAG = "${GIT_COMMIT.take(8)}"
DOCKER_BUILDKIT = '1'
}
stages {
stage('Test') {
agent {
docker { image 'node:20-alpine' }
}
steps {
sh 'npm ci'
sh 'npm run lint'
sh 'npm test -- --ci --coverage'
}
post {
always {
junit '**/junit.xml'
}
}
}
stage('Build Production Image') {
steps {
sh """
docker build \
--target production \
--build-arg BUILD_DATE=\$(date -u +%Y-%m-%dT%H:%M:%SZ) \
--build-arg GIT_COMMIT=${GIT_COMMIT} \
--build-arg VERSION=${TAG} \
--cache-from ${REGISTRY}/${IMAGE}:latest \
-t ${REGISTRY}/${IMAGE}:${TAG} \
-t ${REGISTRY}/${IMAGE}:latest \
.
"""
}
}
stage('Security Scan') {
steps {
sh """
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
aquasec/trivy:latest image \
--exit-code 1 \
--severity HIGH,CRITICAL \
--ignore-unfixed \
${REGISTRY}/${IMAGE}:${TAG}
"""
}
}
stage('Push') {
steps {
withCredentials([usernamePassword(
credentialsId: 'registry-creds',
usernameVariable: 'REG_USER',
passwordVariable: 'REG_PASS'
)]) {
sh '''
echo "$REG_PASS" | docker login $REGISTRY -u "$REG_USER" --password-stdin
docker push ${REGISTRY}/${IMAGE}:${TAG}
docker push ${REGISTRY}/${IMAGE}:latest
docker logout $REGISTRY
'''
}
}
}
}
}
With a corresponding multi-stage Dockerfile:
# Stage 1: Dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Stage 2: Build
FROM deps AS builder
COPY . .
RUN npm run build
# Stage 3: Test (can be targeted with --target test)
FROM builder AS test
RUN npm test
# Stage 4: Production
FROM node:20-alpine AS production
ARG BUILD_DATE
ARG GIT_COMMIT
ARG VERSION
LABEL org.opencontainers.image.created=$BUILD_DATE
LABEL org.opencontainers.image.revision=$GIT_COMMIT
LABEL org.opencontainers.image.version=$VERSION
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=builder /app/dist ./dist
RUN adduser -D -u 1001 appuser
USER appuser
EXPOSE 3000
HEALTHCHECK CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]
Caching Strategies
Build speed with Docker agents depends heavily on caching. Here are the strategies ranked by effectiveness, from simplest to most advanced:
1. Docker Layer Caching
Ensure your Dockerfile is ordered so that infrequently changing layers come first:
# GOOD: package.json changes rarely, source code changes often
COPY package*.json ./ # Cached until package.json changes
RUN npm ci # Cached until package.json changes
COPY . . # Changes often -- invalidates only this and later layers
RUN npm run build
# BAD: source code change invalidates npm install
COPY . .
RUN npm ci
RUN npm run build
2. Named Volume Caching
Mount persistent volumes for package manager caches (covered above in the Mounting Volumes section). This is the single most impactful optimization for build speed.
3. BuildKit Cache Mounts
Docker BuildKit supports cache mounts that persist across builds without cluttering the image:
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build
Enable BuildKit in your pipeline:
environment {
DOCKER_BUILDKIT = '1'
}
4. Registry-Based Caching
Use --cache-from to pull cache layers from a registry. This works even on agents that have never built the image before:
sh """
docker pull ${REGISTRY}/${IMAGE}:latest || true
docker build \
--cache-from ${REGISTRY}/${IMAGE}:latest \
-t ${REGISTRY}/${IMAGE}:${TAG} \
.
"""
5. BuildKit Inline Caching
Embed cache metadata in the pushed image so other builders can use it:
sh """
docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from ${REGISTRY}/${IMAGE}:latest \
-t ${REGISTRY}/${IMAGE}:${TAG} \
.
docker push ${REGISTRY}/${IMAGE}:${TAG}
"""
Caching Strategy Comparison
| Strategy | Setup Effort | Speed Improvement | Works Across Agents |
|---|---|---|---|
| Layer ordering | None (Dockerfile best practice) | Moderate | No (cache lives on the build host) |
| Named volumes | Low | High | No (host-specific) |
| BuildKit cache mounts | Low | High | No (host-specific) |
| Registry-based cache | Medium | Moderate-High | Yes |
| BuildKit inline cache | Medium | Moderate-High | Yes |
Combine these strategies for the best results. A well-cached Docker build can be as fast as a build on a static agent, with all the reproducibility benefits of containers.
Security Considerations
Docker agents introduce new security surfaces. Understanding and mitigating them is critical for production Jenkins installations.
Docker Socket Security
Mounting the Docker socket gives the build container the ability to:
- List, start, and stop any container on the host
- Pull and push images to any registry the daemon is configured for
- Mount any host directory into a new container
- Effectively gain root access to the host (the one-liner below makes this concrete)
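To make that last point concrete: anything that can reach the socket can mount the host's root filesystem into a new container and read or modify it as root. A one-line illustration (do not run this on a shared host):

```bash
# Socket access is root-equivalent: mount the host filesystem and read anything
docker run --rm -v /:/host alpine chroot /host cat /etc/shadow
```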
Mitigations:
| Mitigation | How |
|---|---|
| Dedicated build hosts | Do not run builds on machines that host production workloads |
| Rootless Docker | Run the Docker daemon without root privileges |
| Network policies | Restrict what containers can access on the network |
| User namespaces | Map container root to unprivileged host user |
| Read-only socket proxy | Use docker-socket-proxy to limit API access |
Docker Socket Proxy
Instead of mounting the raw Docker socket, use a proxy that limits which Docker API endpoints are available:
# docker-compose.yml
services:
docker-proxy:
image: tecnativa/docker-socket-proxy
environment:
CONTAINERS: 1
IMAGES: 1
NETWORKS: 1
VOLUMES: 1
# Needed for builds: create containers, build images, run exec steps
POST: 1
BUILD: 1
EXEC: 1
# Disable dangerous endpoints
SWARM: 0
SECRETS: 0
CONFIGS: 0
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- "2375:2375"
jenkins:
image: jenkins/jenkins:lts-jdk17
environment:
DOCKER_HOST: tcp://docker-proxy:2375
Image Security
| Practice | Reason |
|---|---|
| Use official base images | Maintained, scanned, documented |
| Pin image digests for critical builds | Tags can be overwritten; digests are immutable |
| Scan images with Trivy or Grype | Catch known vulnerabilities before deployment |
| Use minimal base images (Alpine, distroless) | Smaller attack surface |
| Do not run containers as root | Limit blast radius of container escape |
| Do not store secrets in images | Use build secrets or runtime injection -- see the sketch below |
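BuildKit secret mounts are one way to satisfy that last row: credentials are available during a build step without ever being written to a layer. A minimal sketch, assuming an .npmrc carrying a registry token (the id npmrc is arbitrary):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# Mounted only for this RUN step; never baked into an image layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
```

Pass the file at build time with `docker build --secret id=npmrc,src=$HOME/.npmrc -t my-app .` (BuildKit must be enabled).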
Pinning by Digest
For maximum reproducibility and security:
agent {
docker {
image 'node@sha256:abc123def456...'
}
}
This ensures you always get the exact same image, even if someone pushes a new image with the same tag.
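To resolve a tag you already trust to its immutable digest:

```bash
# Pull the tag, then read its repo digest
docker pull node:20-alpine
docker inspect --format '{{index .RepoDigests 0}}' node:20-alpine
# prints something like: node@sha256:4c2b...
```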
Kubernetes Pod Agents
For elastic scaling, use Kubernetes to dynamically provision Jenkins agents as pods:
pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
metadata:
labels:
jenkins-agent: "true"
spec:
containers:
- name: node
image: node:20-alpine
command: ['sleep']
args: ['infinity']
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "2Gi"
cpu: "1"
- name: docker
image: docker:24-cli
command: ['sleep']
args: ['infinity']
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
'''
defaultContainer 'node'
}
}
stages {
stage('Test') {
steps {
sh 'npm ci && npm test'
}
}
stage('Build Image') {
steps {
container('docker') {
sh 'docker build -t my-app:${BUILD_NUMBER} .'
}
}
}
}
}
Kubernetes Agent Benefits
| Benefit | Details |
|---|---|
| Elastic scaling | Pods are created on demand and destroyed after use |
| Resource management | Kubernetes handles scheduling and resource limits |
| Multi-container pods | Run tests in one container, build images in another |
| Node selectors and tolerations | Target specific node pools (GPU, high-memory, etc.) -- see the sketch below |
| Service accounts | Fine-grained RBAC for Kubernetes API access |
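As the table notes, targeting a dedicated node pool only requires scheduling fields in the pod spec. A sketch, assuming a build pool labeled pool=ci-builds with a ci-only taint (both names are placeholders):

```yaml
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    pool: ci-builds        # assumed node label -- adjust to your cluster
  tolerations:
    - key: ci-only         # assumed taint on the build pool
      operator: Exists
      effect: NoSchedule
  containers:
    - name: node
      image: node:20-alpine
      command: ['sleep']
      args: ['infinity']
```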
Troubleshooting Common Issues
Permission Denied on Docker Socket
Symptom: permission denied while trying to connect to the Docker daemon socket
# Check Docker socket permissions
ls -la /var/run/docker.sock
# Expected: srw-rw---- 1 root docker ...
# Add jenkins user to docker group
sudo usermod -aG docker jenkins
# For Docker-in-Docker Jenkins, match the host's docker GID
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
docker run -d \
--group-add ${DOCKER_GID} \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkins/jenkins:lts-jdk17
Workspace Not Found in Container
Symptom: Files from the checkout are missing inside the Docker container
Jenkins mounts the workspace automatically, but the mount path must match. Check:
- The workspace path on the host exists and is readable
- The Docker user has permission to read the mounted directory
- If using per-stage Docker agents, set `reuseNode true` to keep the same workspace
agent {
docker {
image 'node:20'
reuseNode true // Use the same workspace as the parent agent
}
}
Container Runs as Wrong User
Symptom: Permission denied when writing to the workspace, or npm/yarn fails with EACCES
// Option 1: Run as root (simple but less secure)
args '-u root'
// Option 2: Match the host user's UID/GID
args '-u $(id -u):$(id -g)'
// Option 3: Run as root and fix permissions at the start
steps {
sh 'chown -R node:node .'
sh 'su - node -c "npm ci"'
}
Network Issues Inside Containers
Symptom: Containers cannot reach the internet, npm install times out, or DNS resolution fails
// Use host networking
args '--network host'
// Or specify DNS servers explicitly
args '--dns 8.8.8.8 --dns 8.8.4.4'
// Or use a custom Docker network
args '--network jenkins-build-net'
Out of Memory in Docker Containers
Symptom: Build processes get OOM-killed, node process exits with code 137
// Increase container memory limit
args '--memory 4g --memory-swap 4g'
// For Node.js, also increase V8 heap size
environment {
NODE_OPTIONS = '--max-old-space-size=3072'
}
Chrome/Playwright Crashes in Containers
Symptom: E2E tests crash with "No usable sandbox" or shared memory errors
args '--shm-size 2g' // Increase shared memory (default is 64MB)
// Or use /dev/shm as tmpfs
args '--shm-size 2g --cap-add SYS_ADMIN' // SYS_ADMIN for Chrome sandbox
// Alternative: disable Chrome sandbox (less secure, but works)
environment {
CHROME_FLAGS = '--no-sandbox --disable-dev-shm-usage'
}
Docker Image Pull Failures
Symptom: "toomanyrequests: You have reached your pull rate limit" from Docker Hub
// Use a private registry mirror
agent {
docker {
image 'registry.example.com/mirror/node:20-alpine'
}
}
// Or authenticate to Docker Hub to get higher rate limits
agent {
docker {
image 'node:20-alpine'
registryUrl 'https://index.docker.io/v1/'
registryCredentialsId 'dockerhub-creds'
}
}
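A mirror can also be configured once at the daemon level in /etc/docker/daemon.json on each build host, so every pull is redirected without touching any Jenkinsfile (restart the Docker daemon afterwards); the mirror URL below is a placeholder:

```json
{
  "registry-mirrors": ["https://registry.example.com"]
}
```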
Performance Optimization Checklist
| Optimization | Impact | Effort |
|---|---|---|
| Mount package manager cache volumes | High | Low |
| Pre-build and push CI images to a registry | High | Medium |
| Use BuildKit with cache mounts | High | Low |
| Order Dockerfile layers by change frequency | Medium | Low |
| Use `--cache-from` with registry-based caching | Medium | Medium |
| Use `reuseNode true` where appropriate | Medium | Low |
| Use `alwaysPull false` for stable images | Low | Low |
| Set appropriate memory and CPU limits | Medium | Low |
| Use parallel stages for independent work (sketch below) | High | Medium |
| Clean up Docker images and volumes periodically | Low (prevents disk issues) | Low |
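Parallel stages are the one optimization in this table without an example elsewhere in the guide; a minimal sketch running lint and tests concurrently in separate containers:

```groovy
stage('Checks') {
    parallel {
        stage('Lint') {
            agent { docker { image 'node:20-alpine' } }
            steps { sh 'npm ci && npm run lint' }
        }
        stage('Test') {
            agent { docker { image 'node:20-alpine' } }
            steps { sh 'npm ci && npm test' }
        }
    }
}
```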
Docker agents are not a silver bullet -- they add complexity to your Jenkins setup and require Docker infrastructure. But for teams that need reproducible builds, multi-language support, and clean isolation between projects, they are the right approach. Start with simple agent { docker { image '...' } } declarations and add complexity only when you need it. The goal is reproducible builds, not the most sophisticated Docker setup possible.