
Docker Agents in Jenkins: Reproducible Builds Every Time

Sarah Chen · 27 min read

The classic Jenkins setup -- a controller with a handful of static agents -- breaks down quickly. Agents accumulate state from previous builds, different projects need conflicting tool versions, and debugging "works on my agent" issues wastes hours. Docker agents fix all of this by spinning up a fresh container for every build or stage, running the work, and throwing the container away.

This guide covers everything you need to run Jenkins builds inside Docker containers: plugin setup, basic and advanced agent configurations, custom Dockerfiles, caching strategies, Docker-in-Docker versus socket mounting, Kaniko for daemonless builds, sidecar containers for integration tests, multi-stage builds, Kubernetes pod agents, security hardening, and troubleshooting the issues you will inevitably hit.

Why Docker Agents

Static Jenkins agents suffer from several problems that get worse as your team and project portfolio grow:

| Problem | Static Agents | Docker Agents |
|---|---|---|
| State pollution | Build artifacts and caches from previous builds leak into new ones | Fresh container every time -- no leftover state |
| Tool version conflicts | Project A needs Node 18, Project B needs Node 20 | Each project specifies its own image |
| Snowflake agents | Manual tool installs cause configuration drift | Image is the single source of truth |
| Scaling | Adding capacity means provisioning VMs, installing tools, connecting them | Start more containers -- Docker handles scheduling |
| Reproducibility | "Works on my agent" issues | Same image, same result, every time |
| Security isolation | Builds share the same file system and processes | Container isolation between builds |

Docker agents eliminate these issues. Each build gets a pristine container from a defined image. When the build ends, the container is destroyed. Need a different toolchain? Change the image name. Need more capacity? The Docker host handles container scheduling.

When NOT to Use Docker Agents

Docker agents are not always the right choice:

  • Builds that need GPU access -- GPU passthrough to containers is possible but adds complexity
  • Windows builds -- Windows Docker containers have significant limitations compared to Linux
  • Builds that need persistent local state -- Some tools (like Bazel) benefit from persistent caches that are harder to maintain with ephemeral containers
  • Very small teams with simple needs -- The overhead of maintaining Docker images may not be worth it for a team of three with two Node.js projects

Docker Plugin Setup

You need the Docker Pipeline plugin installed on your Jenkins controller. This is usually included in the suggested plugins during initial setup, but verify it:

  1. Go to Manage Jenkins, then Plugins, then Installed plugins
  2. Search for "Docker Pipeline" (artifact ID: docker-workflow)
  3. If not installed, go to Available plugins and install it

The Jenkins controller (or the node running the pipeline) needs Docker installed, and the Jenkins user must have access to the Docker socket:

# On the Jenkins host
sudo apt-get update
sudo apt-get install -y docker.io

# Add the jenkins user to the docker group
sudo usermod -aG docker jenkins

# Restart Jenkins to pick up the group change
sudo systemctl restart jenkins

# Verify Docker access
sudo -u jenkins docker ps

For Docker-based Jenkins installations, mount the Docker socket into the Jenkins container:

# docker-compose.yml
services:
  jenkins:
    image: jenkins/jenkins:lts-jdk17
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    # Either run as root or match the Docker group GID
    user: root

Verifying the Setup

After installation, test Docker access from a pipeline:

pipeline {
    agent {
        docker { image 'alpine:3.19' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'echo "Docker agent works!"'
                sh 'cat /etc/os-release'
                sh 'whoami'
            }
        }
    }
}

If this pipeline succeeds, your Docker agent setup is working.

Basic Docker Agent Usage

The simplest form: run an entire pipeline in a Docker container.

pipeline {
    agent {
        docker {
            image 'node:20-alpine'
        }
    }

    stages {
        stage('Info') {
            steps {
                sh 'node --version'
                sh 'npm --version'
                sh 'whoami'
                sh 'pwd'
            }
        }
        stage('Install') {
            steps {
                sh 'npm ci'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
        stage('Build') {
            steps {
                sh 'npm run build'
            }
        }
    }
}

Jenkins pulls the node:20-alpine image (if not cached), starts a container, mounts the workspace into the container, runs all stages inside it, and destroys the container when done. The workspace mount means your source code is available inside the container automatically.

How Jenkins Docker Agents Work Internally

Understanding what happens behind the scenes helps with debugging:

  1. Jenkins checks out your code to the workspace on the host
  2. Jenkins runs docker pull node:20-alpine (if not cached)
  3. Jenkins runs something like: docker run -d -v /workspace:/workspace -w /workspace node:20-alpine cat
  4. For each sh step, Jenkins runs: docker exec container-id sh -c 'your-command'
  5. When the pipeline ends, Jenkins runs: docker stop container-id && docker rm container-id

The workspace is bind-mounted from the host, so files persist between steps within the same stage or pipeline. But the container itself is ephemeral.

Docker Agent Options

agent {
    docker {
        image 'node:20-alpine'
        label 'docker-host'                  // Run on nodes with this label
        args '-v /tmp:/tmp -e FOO=bar'       // Additional docker run arguments
        registryUrl 'https://registry.example.com'
        registryCredentialsId 'docker-reg-creds'
        reuseNode true                       // Reuse the workspace node instead of allocating a new one
        alwaysPull true                      // Always pull the latest image
    }
}

Docker Run Arguments Reference

The args parameter passes arbitrary flags to docker run. These are the flags you will use most often:

| Flag | Purpose | Example |
|---|---|---|
| -v /host:/container | Mount host directories | -v /tmp/.npm:/root/.npm |
| -e VAR=value | Set environment variables | -e NODE_ENV=ci |
| --network host | Use host networking | --network host |
| --network my-net | Use a custom Docker network | --network jenkins-build-net |
| --memory 4g | Limit container memory | --memory 4g |
| --cpus 2 | Limit CPU usage | --cpus 2 |
| -u root | Run as root inside the container | -u root |
| -u 1000:1000 | Run as specific UID/GID | -u 1000:1000 |
| --dns 8.8.8.8 | Set DNS servers | --dns 8.8.8.8 --dns 8.8.4.4 |
| --add-host | Add host entries | --add-host db:192.168.1.100 |
| --tmpfs /tmp | Mount a tmpfs filesystem | --tmpfs /tmp:rw,size=1g |
| --shm-size 2g | Set shared memory size (useful for Chrome/Playwright) | --shm-size 2g |
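As a concrete example, a browser-test stage often needs larger shared memory and explicit resource limits. A sketch combining several flags from the table (the Playwright image tag is illustrative -- pin whatever your tests actually need):

```groovy
pipeline {
    agent {
        docker {
            // Illustrative image -- any browser-capable test image works
            image 'mcr.microsoft.com/playwright:v1.42.0-jammy'
            // Larger /dev/shm for Chromium, plus memory/CPU limits
            args '--shm-size 2g --memory 4g --cpus 2 -e CI=true'
        }
    }
    stages {
        stage('E2E') {
            steps {
                sh 'npx playwright test'
            }
        }
    }
}
```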

Per-Stage Docker Agents

Different stages often need different tools. Use per-stage agents with agent none at the top level:

pipeline {
    agent none

    stages {
        stage('Build Frontend') {
            agent {
                docker { image 'node:20-alpine' }
            }
            steps {
                dir('frontend') {
                    sh 'npm ci'
                    sh 'npm run build'
                }
                stash includes: 'frontend/dist/**', name: 'frontend-build'
            }
        }

        stage('Build Backend') {
            agent {
                docker { image 'golang:1.22-alpine' }
            }
            steps {
                dir('backend') {
                    sh 'go build -o ../app ./cmd/server'
                }
                stash includes: 'app', name: 'backend-build'
            }
        }

        stage('Build Docker Image') {
            agent { label 'docker-host' }
            steps {
                unstash 'frontend-build'
                unstash 'backend-build'
                sh 'docker build -t my-app:${BUILD_NUMBER} .'
            }
        }
    }
}

Use stash/unstash to pass artifacts between stages running on different agents. Keep stash sizes small -- they are stored on the controller and large stashes can cause performance issues.

Custom Dockerfiles for Build Agents

Pre-built images from Docker Hub rarely have everything you need. Custom Dockerfiles let you create tailored build environments that include exactly the tools your project requires.

Basic Custom Build Image

# ci/Dockerfile
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine

# Install additional tools
RUN apk add --no-cache \
    git \
    python3 \
    make \
    g++ \
    curl \
    jq \
    bash \
    openssh-client

# Install specific npm global packages
RUN npm install -g pnpm@9 typescript@5

# Install Chrome for E2E tests
RUN apk add --no-cache chromium chromium-chromedriver \
    nss \
    freetype \
    harfbuzz \
    ca-certificates \
    ttf-freefont
ENV CHROME_BIN=/usr/bin/chromium-browser
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser

# Non-root user for security
RUN adduser -D -u 1000 builder
USER builder

WORKDIR /home/builder/app

Reference it in your Jenkinsfile:

pipeline {
    agent {
        dockerfile {
            filename 'ci/Dockerfile'
            additionalBuildArgs '--build-arg NODE_VERSION=20'
            args '-v npm-cache:/home/builder/.npm'  // Cache npm packages in a named volume
        }
    }

    stages {
        stage('Install') {
            steps {
                sh 'pnpm install --frozen-lockfile'
            }
        }
        stage('Lint') {
            steps {
                sh 'pnpm run lint'
            }
        }
        stage('Test') {
            steps {
                sh 'pnpm test'
            }
        }
        stage('Build') {
            steps {
                sh 'pnpm run build'
            }
        }
    }
}

Jenkins builds the Dockerfile the first time and caches the image. Subsequent builds reuse the cached image unless the Dockerfile changes.
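Note that additionalBuildArgs only takes effect when the Dockerfile declares a matching ARG; if the argument is never consumed, Docker prints a warning and the image is unchanged. The pairing looks like this:

```dockerfile
# An ARG used in FROM must be declared before the FROM line
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine
```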

Multi-Tool Build Image for Monorepos

For monorepo projects that need multiple languages in a single build:

# ci/Dockerfile.multi
FROM ubuntu:22.04

ENV DEBIAN_FRONTEND=noninteractive

# Base tools
RUN apt-get update && apt-get install -y \
    curl git wget unzip jq \
    build-essential \
    ca-certificates \
    gnupg \
    && rm -rf /var/lib/apt/lists/*

# Node.js 20
RUN mkdir -p /etc/apt/keyrings \
    && curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key \
       | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
    && echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_20.x nodistro main" \
       | tee /etc/apt/sources.list.d/nodesource.list \
    && apt-get update && apt-get install -y nodejs \
    && rm -rf /var/lib/apt/lists/*

# Go 1.22
RUN wget -q https://go.dev/dl/go1.22.0.linux-amd64.tar.gz \
    && tar -C /usr/local -xzf go1.22.0.linux-amd64.tar.gz \
    && rm go1.22.0.linux-amd64.tar.gz
ENV PATH="/usr/local/go/bin:${PATH}"
ENV GOPATH="/go"
ENV PATH="${GOPATH}/bin:${PATH}"

# Python 3.11
RUN apt-get update && apt-get install -y \
    python3.11 python3-pip python3-venv \
    && rm -rf /var/lib/apt/lists/*

# kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" \
    && chmod +x kubectl && mv kubectl /usr/local/bin/

# Helm
RUN curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# AWS CLI v2
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip && ./aws/install && rm -rf aws awscliv2.zip

# Create non-root build user
RUN useradd -m -u 1000 builder
USER builder
WORKDIR /home/builder/workspace

This is a bigger image (typically 1-2 GB), but it eliminates the need for per-stage agents in monorepo pipelines. The trade-off: build time of the image itself versus pipeline complexity.

Best Practices for Build Images

| Practice | Reason |
|---|---|
| Pin base image versions (node:20.11-alpine, not node:latest) | Reproducibility -- latest can change without warning |
| Use --no-cache flag in package managers | Smaller image size |
| Combine RUN commands with && | Fewer layers, smaller image |
| Run as non-root user | Security -- limit blast radius of container escape |
| Include git in the image | Jenkins needs it for checkout operations |
| Keep images in a private registry | Faster pulls, no Docker Hub rate limits |
| Tag images with a version and rebuild periodically | Security patches in base images |
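A Dockerfile skeleton that applies these practices (the pinned version is an example, not a recommendation):

```dockerfile
# Pinned base tag for reproducibility
FROM node:20.11-alpine

# One RUN layer, --no-cache, and git included for Jenkins checkouts
RUN apk add --no-cache git curl bash

# Non-root build user
RUN adduser -D -u 1000 builder
USER builder
WORKDIR /home/builder/app
```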

Pre-building and Publishing Build Images

Instead of building from a Dockerfile every time, pre-build and push your CI images:

// Jenkinsfile for building the CI image itself
pipeline {
    agent { label 'docker' }

    triggers {
        cron('H 4 * * 1')  // Rebuild weekly to pick up security patches
    }

    stages {
        stage('Build CI Image') {
            steps {
                sh '''
                    docker build \
                      -t registry.example.com/ci-images/node-ci:20 \
                      -f ci/Dockerfile \
                      .
                    docker push registry.example.com/ci-images/node-ci:20
                '''
            }
        }
    }
}

Then reference the pre-built image in application pipelines:

agent {
    docker {
        image 'registry.example.com/ci-images/node-ci:20'
        registryUrl 'https://registry.example.com'
        registryCredentialsId 'registry-creds'
    }
}

This is faster than building from a Dockerfile on every pipeline run and ensures consistency across all projects.

Mounting Volumes for Caching

Docker agents start clean every time, which is great for reproducibility but terrible for build speed. Without caching, every build downloads all dependencies from scratch. Mount volumes to cache package manager data:

pipeline {
    agent {
        docker {
            image 'node:20-alpine'
            args '-v npm-cache:/root/.npm -v pnpm-store:/root/.local/share/pnpm/store'
        }
    }

    stages {
        stage('Install') {
            steps {
                sh 'npm ci'  // Uses cached packages from npm-cache volume
            }
        }
    }
}

Language-Specific Cache Mounts

| Language/Tool | Cache Directory | Volume Mount |
|---|---|---|
| npm | ~/.npm | -v npm-cache:/root/.npm |
| pnpm | ~/.local/share/pnpm/store | -v pnpm-store:/root/.local/share/pnpm/store |
| Yarn | ~/.cache/yarn | -v yarn-cache:/root/.cache/yarn |
| Go modules | /go/pkg/mod | -v go-mod-cache:/go/pkg/mod |
| Go build cache | ~/.cache/go-build | -v go-build-cache:/root/.cache/go-build |
| Maven | ~/.m2/repository | -v maven-cache:/root/.m2/repository |
| Gradle | ~/.gradle/caches | -v gradle-cache:/root/.gradle/caches |
| pip | ~/.cache/pip | -v pip-cache:/root/.cache/pip |
| Rust/Cargo | ~/.cargo/registry | -v cargo-cache:/root/.cargo/registry |
| Composer (PHP) | ~/.cache/composer | -v composer-cache:/root/.cache/composer |

Use Docker named volumes rather than bind mounts to host directories. Named volumes are managed by Docker and work across container restarts without permission issues. Bind mounts require you to manage permissions manually and can cause UID/GID conflicts.
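For example, a Go pipeline would mount both the module and build caches from the table above (the volume names are arbitrary):

```groovy
pipeline {
    agent {
        docker {
            image 'golang:1.22-alpine'
            // Named volumes persist across builds on the same host
            args '-v go-mod-cache:/go/pkg/mod -v go-build-cache:/root/.cache/go-build'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'go build ./...'
                sh 'go test ./...'
            }
        }
    }
}
```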

Cache Cleanup

Named volumes grow over time. Set up periodic cleanup:

#!/bin/bash
# cleanup-build-caches.sh
# Run weekly via cron

# Show volumes and their sizes
docker system df -v

# Remove all volumes not referenced by any container
# (Docker does not track last-used time, so age-based
# cleanup is not possible with volume prune alone)
docker volume prune -f

# Or selectively remove specific caches
# docker volume rm npm-cache go-mod-cache

Docker-in-Docker vs Docker Socket Mounting

When your pipeline needs to build Docker images inside a Docker agent, you have two choices: mount the host's Docker socket, or run Docker-in-Docker (DinD). This is one of the most common decision points in Jenkins Docker setups.

Docker Socket Mounting

Mount the host's Docker socket into the build container:

agent {
    docker {
        image 'docker:24-cli'
        args '-v /var/run/docker.sock:/var/run/docker.sock'
    }
}

How it works: The build container uses the host's Docker daemon to build images. The docker commands inside the container talk to the same daemon that manages the container itself.

| Aspect | Details |
|---|---|
| Setup complexity | Simple -- just mount the socket |
| Image caching | Shared with host -- images are cached across builds |
| Performance | Fast -- no nested virtualization overhead |
| Security | Container can see and manipulate all containers on the host |
| Privileged mode | Not required |
| Network access | Built images can access host network |

Docker-in-Docker (DinD)

Run a full Docker daemon inside the build container:

agent {
    docker {
        image 'docker:24-dind'
        args '--privileged'  // Required for DinD
    }
}

| Aspect | Details |
|---|---|
| Setup complexity | Moderate -- requires privileged mode and storage driver configuration |
| Image caching | Isolated -- no cache reuse between builds by default |
| Performance | Slower -- nested daemon overhead, no shared cache |
| Security | Complete isolation from host Docker, but requires --privileged |
| Privileged mode | Required (significant security concern) |
| Network access | Isolated from host network |

Side-by-Side Comparison

| Factor | Socket Mount | DinD |
|---|---|---|
| Build speed | Fast (shared cache) | Slower (cold cache) |
| Isolation | Low (shared daemon) | High (separate daemon) |
| Security risk | Medium (socket access) | High (privileged mode) |
| Multi-tenant safety | Not suitable | Better (but still risky) |
| Debugging ease | Easy (host tools work) | Harder (nested Docker) |
| Storage usage | Shared with host | Additional storage per build |

Recommendation: Use Docker socket mounting for most cases. The security surface is manageable with proper agent isolation. Use DinD only when you genuinely need full isolation, such as in multi-tenant environments where builds from different teams must not see each other's containers.

Rootless Docker Socket Mounting

For improved security with socket mounting, use rootless Docker:

# Install rootless Docker (on the Jenkins host)
dockerd-rootless-setuptool.sh install

# Mount the rootless socket instead
docker run -d \
  -v /run/user/1000/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts-jdk17

This limits the blast radius of container escapes because the Docker daemon itself runs without root privileges.

Kaniko: Building Images Without Docker

Kaniko builds Docker images from a Dockerfile without needing a Docker daemon at all. It runs entirely in userspace, making it ideal for:

  • Kubernetes-based Jenkins agents where you cannot mount the Docker socket
  • Environments where Docker socket access is prohibited by security policy
  • CI environments that need to build images without privileged containers

Kaniko on Kubernetes

pipeline {
    agent {
        kubernetes {
            yaml '''
                apiVersion: v1
                kind: Pod
                spec:
                  containers:
                  - name: kaniko
                    image: gcr.io/kaniko-project/executor:debug
                    command: ['sleep']
                    args: ['infinity']
                    volumeMounts:
                    - name: docker-config
                      mountPath: /kaniko/.docker
                  - name: node
                    image: node:20-alpine
                    command: ['sleep']
                    args: ['infinity']
                  volumes:
                  - name: docker-config
                    secret:
                      secretName: docker-registry-creds
                      items:
                      - key: .dockerconfigjson
                        path: config.json
            '''
        }
    }

    stages {
        stage('Test') {
            steps {
                container('node') {
                    sh 'npm ci && npm test'
                }
            }
        }
        stage('Build and Push') {
            steps {
                container('kaniko') {
                    sh '''
                        /kaniko/executor \
                          --context=dir:///home/jenkins/agent/workspace/my-job \
                          --destination=registry.example.com/my-app:${BUILD_NUMBER} \
                          --destination=registry.example.com/my-app:latest \
                          --cache=true \
                          --cache-repo=registry.example.com/my-app/cache \
                          --snapshot-mode=redo \
                          --use-new-run
                    '''
                }
            }
        }
    }
}

Kaniko with Docker-based Jenkins

For non-Kubernetes Jenkins, you can still use Kaniko as a Docker container:

stage('Build with Kaniko') {
    steps {
        withCredentials([file(credentialsId: 'docker-config-json', variable: 'DOCKER_CONFIG')]) {
            sh """
                docker run --rm \
                  -v \${WORKSPACE}:/workspace \
                  -v \${DOCKER_CONFIG}:/kaniko/.docker/config.json:ro \
                  gcr.io/kaniko-project/executor:latest \
                  --context=/workspace \
                  --destination=registry.example.com/my-app:${BUILD_NUMBER} \
                  --cache=true \
                  --cache-repo=registry.example.com/my-app/cache
            """
        }
    }
}

Kaniko Options Reference

| Flag | Purpose |
|---|---|
| --context | Build context directory |
| --dockerfile | Path to Dockerfile (default: Dockerfile in context) |
| --destination | Registry destination (can specify multiple times) |
| --cache | Enable layer caching |
| --cache-repo | Repository for cached layers |
| --cache-ttl | Cache time-to-live (default: 336h / 14 days) |
| --snapshot-mode=redo | Faster snapshots (may miss some file changes) |
| --use-new-run | Improved RUN command handling |
| --skip-tls-verify | Skip TLS verification (insecure registries) |
| --build-arg | Pass build arguments |
| --target | Build a specific stage in a multi-stage Dockerfile |

Kaniko supports layer caching via a remote registry (--cache=true --cache-repo=...), which helps with build speed even though there is no local Docker cache.
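Putting several of these flags together, building only the production stage of a multi-stage Dockerfile might look like this (the context path and registry are illustrative):

```shell
/kaniko/executor \
  --context=dir:///workspace \
  --dockerfile=ci/Dockerfile \
  --target=production \
  --destination=registry.example.com/my-app:latest \
  --cache=true \
  --cache-repo=registry.example.com/my-app/cache
```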

Sidecar Containers for Integration Tests

Some tests need services running alongside the build -- a database, a Redis instance, a message queue, or a mock API. Docker provides several ways to run sidecar containers.

Using docker.image().withRun()

pipeline {
    agent { label 'docker' }

    stages {
        stage('Integration Tests') {
            steps {
                script {
                    docker.image('postgres:16-alpine').withRun(
                        '-e POSTGRES_PASSWORD=testpass ' +
                        '-e POSTGRES_DB=testdb ' +
                        '-e POSTGRES_USER=testuser'
                    ) { db ->
                        docker.image('redis:7-alpine').withRun('') { redis ->
                            docker.image('node:20-alpine').inside(
                                "--link ${db.id}:postgres " +
                                "--link ${redis.id}:redis " +
                                "-e DATABASE_URL=postgresql://testuser:testpass@postgres:5432/testdb " +
                                "-e REDIS_URL=redis://redis:6379"
                            ) {
                                sh 'npm ci'
                                sh 'npm run db:migrate'
                                sh 'npm run test:integration'
                            }
                        }
                    }
                }
            }
        }
    }
}

The withRun method starts a container in the background. The inside method runs commands in a container linked to the sidecars. When the block exits, all containers are stopped and removed automatically.

Using Docker Compose

For more complex service topologies, Docker Compose is often more readable and maintainable:

# docker-compose.test.yml
version: "3.9"
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: testdb
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpass
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

  app:
    build:
      context: .
      dockerfile: ci/Dockerfile
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      DATABASE_URL: postgresql://testuser:testpass@postgres:5432/testdb
      REDIS_URL: redis://redis:6379
      NODE_ENV: test
    command: sh -c "npm ci && npm run db:migrate && npm run test:integration"

Run it from a pipeline stage and tear everything down afterwards:

stage('Integration Tests') {
    agent { label 'docker-host' }
    steps {
        sh 'docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit --exit-code-from app'
    }
    post {
        always {
            sh 'docker-compose -f docker-compose.test.yml logs --no-color > integration-logs.txt'
            archiveArtifacts artifacts: 'integration-logs.txt', allowEmptyArchive: true
            sh 'docker-compose -f docker-compose.test.yml down -v --remove-orphans'
        }
    }
}

Using Docker Networks

For better control over service discovery without --link (which is deprecated):

stage('Integration Tests') {
    steps {
        script {
            def networkName = "jenkins-${env.BUILD_TAG}".replaceAll('[^a-zA-Z0-9_.-]', '-')

            try {
                sh "docker network create ${networkName}"

                // Start services
                sh """
                    docker run -d --name postgres-${BUILD_NUMBER} \
                      --network ${networkName} \
                      --network-alias postgres \
                      -e POSTGRES_PASSWORD=test \
                      postgres:16-alpine
                """

                // Wait for Postgres to be ready
                sh """
                    for i in \$(seq 1 30); do
                        docker exec postgres-${BUILD_NUMBER} pg_isready && break
                        sleep 1
                    done
                """

                // Run tests
                docker.image('node:20-alpine').inside(
                    "--network ${networkName} " +
                    "-e DATABASE_URL=postgresql://postgres:test@postgres:5432/postgres"
                ) {
                    sh 'npm ci && npm run test:integration'
                }
            } finally {
                sh "docker rm -f postgres-${BUILD_NUMBER} || true"
                sh "docker network rm ${networkName} || true"
            }
        }
    }
}

Multi-Stage Builds in Pipelines

Combine Docker multi-stage builds with Jenkins pipelines for efficient image creation:

pipeline {
    agent { label 'docker-host' }

    environment {
        REGISTRY = 'registry.example.com'
        IMAGE = 'my-service'
        TAG = "${GIT_COMMIT.take(8)}"
        DOCKER_BUILDKIT = '1'
    }

    stages {
        stage('Test') {
            agent {
                docker { image 'node:20-alpine' }
            }
            steps {
                sh 'npm ci'
                sh 'npm run lint'
                sh 'npm test -- --ci --coverage'
            }
            post {
                always {
                    junit '**/junit.xml'
                }
            }
        }

        stage('Build Production Image') {
            steps {
                sh """
                    docker build \
                      --target production \
                      --build-arg BUILD_DATE=\$(date -u +%Y-%m-%dT%H:%M:%SZ) \
                      --build-arg GIT_COMMIT=${GIT_COMMIT} \
                      --build-arg VERSION=${TAG} \
                      --cache-from ${REGISTRY}/${IMAGE}:latest \
                      -t ${REGISTRY}/${IMAGE}:${TAG} \
                      -t ${REGISTRY}/${IMAGE}:latest \
                      .
                """
            }
        }

        stage('Security Scan') {
            steps {
                sh """
                    docker run --rm \
                      -v /var/run/docker.sock:/var/run/docker.sock \
                      aquasec/trivy:latest image \
                      --exit-code 1 \
                      --severity HIGH,CRITICAL \
                      --ignore-unfixed \
                      ${REGISTRY}/${IMAGE}:${TAG}
                """
            }
        }

        stage('Push') {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'registry-creds',
                    usernameVariable: 'REG_USER',
                    passwordVariable: 'REG_PASS'
                )]) {
                    sh '''
                        echo "$REG_PASS" | docker login $REGISTRY -u "$REG_USER" --password-stdin
                        docker push ${REGISTRY}/${IMAGE}:${TAG}
                        docker push ${REGISTRY}/${IMAGE}:latest
                        docker logout $REGISTRY
                    '''
                }
            }
        }
    }
}

With a corresponding multi-stage Dockerfile:

# Stage 1: Dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Stage 2: Build
FROM deps AS builder
COPY . .
RUN npm run build

# Stage 3: Test (can be targeted with --target test)
FROM builder AS test
RUN npm test

# Stage 4: Production
FROM node:20-alpine AS production
ARG BUILD_DATE
ARG GIT_COMMIT
ARG VERSION
LABEL org.opencontainers.image.created=$BUILD_DATE
LABEL org.opencontainers.image.revision=$GIT_COMMIT
LABEL org.opencontainers.image.version=$VERSION
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=builder /app/dist ./dist
RUN adduser -D -u 1001 appuser
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]

Caching Strategies

Build speed with Docker agents depends heavily on caching. Here are the strategies, ordered from simplest to most advanced:

1. Docker Layer Caching

Ensure your Dockerfile is ordered so that infrequently changing layers come first:

# GOOD: package.json changes rarely, source code changes often
COPY package*.json ./     # Cached until package.json changes
RUN npm ci                # Cached until package.json changes
COPY . .                  # Changes often -- invalidates only this and later layers
RUN npm run build

# BAD: source code change invalidates npm install
COPY . .
RUN npm ci
RUN npm run build

2. Named Volume Caching

Mount persistent volumes for package manager caches (covered above in the Mounting Volumes section). This is the single most impactful optimization for build speed.

3. BuildKit Cache Mounts

Docker BuildKit supports cache mounts that persist across builds without cluttering the image:

# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build

Enable BuildKit in your pipeline:

environment {
    DOCKER_BUILDKIT = '1'
}
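Alternatively, enable BuildKit per invocation; on Docker 23 and later it is the default builder, so the variable is only needed on older engines:

```shell
# Scoped to a single build command
DOCKER_BUILDKIT=1 docker build -t my-app:dev .
```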

4. Registry-Based Caching

Use --cache-from to pull cache layers from a registry. This works even on agents that have never built the image before:

sh """
    docker pull ${REGISTRY}/${IMAGE}:latest || true
    docker build \
      --cache-from ${REGISTRY}/${IMAGE}:latest \
      -t ${REGISTRY}/${IMAGE}:${TAG} \
      .
"""

5. BuildKit Inline Caching

Embed cache metadata in the pushed image so other builders can use it:

sh """
    docker build \
      --build-arg BUILDKIT_INLINE_CACHE=1 \
      --cache-from ${REGISTRY}/${IMAGE}:latest \
      -t ${REGISTRY}/${IMAGE}:${TAG} \
      .
    docker push ${REGISTRY}/${IMAGE}:${TAG}
"""

Caching Strategy Comparison

| Strategy | Setup Effort | Speed Improvement | Works Across Agents |
|---|---|---|---|
| Layer ordering | None (Dockerfile best practice) | Moderate | Yes (same host) |
| Named volumes | Low | High | No (host-specific) |
| BuildKit cache mounts | Low | High | No (host-specific) |
| Registry-based cache | Medium | Moderate-High | Yes |
| BuildKit inline cache | Medium | Moderate-High | Yes |

Combine these strategies for the best results. A well-cached Docker build can be as fast as a build on a static agent, with all the reproducibility benefits of containers.
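Tying several strategies together, a build stage can seed its layer cache from the registry and publish inline cache metadata for the next agent. This is a sketch: `REGISTRY`, `IMAGE`, and `TAG` are assumed to be defined elsewhere in the pipeline, and `DOCKER_BUILDKIT=1` is assumed to be set as shown earlier.

```groovy
stage('Build with layered caching') {
    steps {
        sh """
            # Seed the layer cache from the last published image (strategy 4)
            docker pull ${REGISTRY}/${IMAGE}:latest || true

            # Embed inline cache metadata so future agents benefit (strategy 5)
            docker build \
              --build-arg BUILDKIT_INLINE_CACHE=1 \
              --cache-from ${REGISTRY}/${IMAGE}:latest \
              -t ${REGISTRY}/${IMAGE}:${TAG} \
              .

            # Publish both the build tag and the cache source for the next run
            docker push ${REGISTRY}/${IMAGE}:${TAG}
            docker tag ${REGISTRY}/${IMAGE}:${TAG} ${REGISTRY}/${IMAGE}:latest
            docker push ${REGISTRY}/${IMAGE}:latest
        """
    }
}
```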

Security Considerations

Docker agents introduce new security surfaces. Understanding and mitigating them is critical for production Jenkins installations.

Docker Socket Security

Mounting the Docker socket gives the build container the ability to:

  • List, start, and stop any container on the host
  • Pull and push images to any registry the daemon is configured for
  • Mount any host directory into a new container
  • Effectively gain root access to the host

Mitigations:

| Mitigation | How |
|---|---|
| Dedicated build hosts | Do not run builds on machines that host production workloads |
| Rootless Docker | Run the Docker daemon without root privileges |
| Network policies | Restrict what containers can access on the network |
| User namespaces | Map container root to unprivileged host user |
| Read-only socket proxy | Use docker-socket-proxy to limit API access |

Docker Socket Proxy

Instead of mounting the raw Docker socket, use a proxy that limits which Docker API endpoints are available:

# docker-compose.yml
services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1
      IMAGES: 1
      NETWORKS: 1
      VOLUMES: 1
      # POST, BUILD, and EXEC stay enabled because builds need to
      # create containers, build images, and exec into containers
      POST: 1
      BUILD: 1
      EXEC: 1
      # Disable endpoints builds should never touch
      SWARM: 0
      SECRETS: 0
      CONFIGS: 0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "2375:2375"

  jenkins:
    image: jenkins/jenkins:lts-jdk17
    environment:
      DOCKER_HOST: tcp://docker-proxy:2375

Image Security

| Practice | Reason |
|---|---|
| Use official base images | Maintained, scanned, documented |
| Pin image digests for critical builds | Tags can be overwritten; digests are immutable |
| Scan images with Trivy or Grype | Catch known vulnerabilities before deployment |
| Use minimal base images (Alpine, distroless) | Smaller attack surface |
| Do not run containers as root | Limit blast radius of container escape |
| Do not store secrets in images | Use build secrets or runtime injection |
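A scan can run as its own gate before any push. This sketch assumes Trivy is installed on the agent and that `REGISTRY`, `IMAGE`, and `TAG` are defined elsewhere in the pipeline; the severity threshold is illustrative.

```groovy
stage('Scan Image') {
    steps {
        // Fail the build on HIGH or CRITICAL findings; tune the
        // threshold to your own risk appetite
        sh """
            trivy image \
              --exit-code 1 \
              --severity HIGH,CRITICAL \
              ${REGISTRY}/${IMAGE}:${TAG}
        """
    }
}
```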

Pinning by Digest

For maximum reproducibility and security:

agent {
    docker {
        image 'node@sha256:abc123def456...'
    }
}

This ensures you always get the exact same image, even if someone pushes a new image with the same tag.

Kubernetes Pod Agents

For elastic scaling, use Kubernetes to dynamically provision Jenkins agents as pods:

pipeline {
    agent {
        kubernetes {
            yaml '''
                apiVersion: v1
                kind: Pod
                metadata:
                  labels:
                    jenkins-agent: "true"
                spec:
                  containers:
                  - name: node
                    image: node:20-alpine
                    command: ['sleep']
                    args: ['99d']    # '99d' is safe in busybox images, where 'infinity' may not parse
                    resources:
                      requests:
                        memory: "1Gi"
                        cpu: "500m"
                      limits:
                        memory: "2Gi"
                        cpu: "1"
                  - name: docker
                    image: docker:24-cli
                    command: ['sleep']
                    args: ['99d']
                    volumeMounts:
                    - name: docker-sock
                      mountPath: /var/run/docker.sock
                  volumes:
                  - name: docker-sock
                    hostPath:
                      path: /var/run/docker.sock
            '''
            defaultContainer 'node'
        }
    }

    stages {
        stage('Test') {
            steps {
                sh 'npm ci && npm test'
            }
        }
        stage('Build Image') {
            steps {
                container('docker') {
                    sh 'docker build -t my-app:${BUILD_NUMBER} .'
                }
            }
        }
    }
}

Kubernetes Agent Benefits

| Benefit | Details |
|---|---|
| Elastic scaling | Pods are created on demand and destroyed after use |
| Resource management | Kubernetes handles scheduling and resource limits |
| Multi-container pods | Run tests in one container, build images in another |
| Node selectors and tolerations | Target specific node pools (GPU, high-memory, etc.) |
| Service accounts | Fine-grained RBAC for Kubernetes API access |
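For example, targeting a dedicated high-memory node pool looks like this in the pod YAML. The label, taint, and service account names here are illustrative and must match your cluster's actual configuration:

```yaml
spec:
  # Schedule only onto nodes labeled for CI workloads
  nodeSelector:
    workload: ci-highmem
  # Tolerate the taint that keeps other pods off the pool
  tolerations:
  - key: dedicated
    operator: Equal
    value: ci
    effect: NoSchedule
  # RBAC-scoped account for any Kubernetes API access the build needs
  serviceAccountName: jenkins-agent
```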

Troubleshooting Common Issues

Permission Denied on Docker Socket

Symptom: permission denied while trying to connect to the Docker daemon socket

# Check Docker socket permissions
ls -la /var/run/docker.sock
# Expected: srw-rw---- 1 root docker ...

# Add jenkins user to docker group
sudo usermod -aG docker jenkins

# For a containerized Jenkins with a mounted socket, match the host's docker GID
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
docker run -d \
  --group-add ${DOCKER_GID} \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts-jdk17

Workspace Not Found in Container

Symptom: Files from the checkout are missing inside the Docker container

Jenkins mounts the workspace automatically, but the mount path must match. Check:

  1. The workspace path on the host exists and is readable
  2. The Docker user has permission to read the mounted directory
  3. If using per-stage Docker agents, set reuseNode true to keep the same workspace
agent {
    docker {
        image 'node:20'
        reuseNode true  // Use the same workspace as the parent agent
    }
}

Container Runs as Wrong User

Symptom: Permission denied when writing to the workspace, or npm/yarn fails with EACCES

// Option 1: Run as root (simple but less secure)
args '-u root'

// Option 2: Match the host user's UID/GID explicitly
// (note: $(id -u) is not shell-expanded in agent args, so resolve
// the IDs up front and pass literal values)
args '-u 1000:1000'  // substitute your Jenkins user's actual UID:GID

// Option 3: Run as root and fix permissions at the start
steps {
    sh 'chown -R node:node .'
    // plain "su" (no dash) keeps the current directory; "su - node"
    // would switch to node's home and out of the workspace
    sh 'su node -c "npm ci"'
}

Network Issues Inside Containers

Symptom: Containers cannot reach the internet, npm install times out, or DNS resolution fails

// Use host networking
args '--network host'

// Or specify DNS servers explicitly
args '--dns 8.8.8.8 --dns 8.8.4.4'

// Or use a custom Docker network
args '--network jenkins-build-net'

Out of Memory in Docker Containers

Symptom: Build processes get OOM-killed, node process exits with code 137

// Increase container memory limit
args '--memory 4g --memory-swap 4g'

// For Node.js, also increase V8 heap size
environment {
    NODE_OPTIONS = '--max-old-space-size=3072'
}
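A rough rule is to leave headroom between the V8 heap and the container limit, since Node also allocates memory outside the heap. This sketch derives the value used above; the 25% headroom figure is a starting assumption, not a hard rule.

```shell
#!/bin/sh
# Container memory limit in MB (matches the --memory 4g example above)
LIMIT_MB=4096

# Give V8 ~75% of the limit; the rest covers native memory, buffers, etc.
HEAP_MB=$(( LIMIT_MB * 3 / 4 ))

echo "NODE_OPTIONS=--max-old-space-size=${HEAP_MB}"
# prints NODE_OPTIONS=--max-old-space-size=3072
```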

Chrome/Playwright Crashes in Containers

Symptom: E2E tests crash with "No usable sandbox" or shared memory errors

args '--shm-size 2g'  // Increase shared memory (default is 64MB)

// Or use /dev/shm as tmpfs
args '--shm-size 2g --cap-add SYS_ADMIN'  // SYS_ADMIN for Chrome sandbox

// Alternative: disable Chrome sandbox (less secure, but works)
environment {
    CHROME_FLAGS = '--no-sandbox --disable-dev-shm-usage'
}

Docker Image Pull Failures

Symptom: "toomanyrequests: You have reached your pull rate limit" from Docker Hub

// Use a private registry mirror
agent {
    docker {
        image 'registry.example.com/mirror/node:20-alpine'
    }
}

// Or authenticate to Docker Hub to get higher rate limits
agent {
    docker {
        image 'node:20-alpine'
        registryUrl 'https://index.docker.io/v1/'
        registryCredentialsId 'dockerhub-creds'
    }
}

Performance Optimization Checklist

| Optimization | Impact | Effort |
|---|---|---|
| Mount package manager cache volumes | High | Low |
| Pre-build and push CI images to a registry | High | Medium |
| Use BuildKit with cache mounts | High | Low |
| Order Dockerfile layers by change frequency | Medium | Low |
| Use --cache-from with registry-based caching | Medium | Medium |
| Use reuseNode true where appropriate | Medium | Low |
| Use alwaysPull false for stable images | Low | Low |
| Set appropriate memory and CPU limits | Medium | Low |
| Use parallel stages for independent work | High | Medium |
| Clean up Docker images and volumes periodically | Low (prevents disk issues) | Low |

Docker agents are not a silver bullet -- they add complexity to your Jenkins setup and require Docker infrastructure. But for teams that need reproducible builds, multi-language support, and clean isolation between projects, they are the right approach. Start with simple agent { docker { image '...' } } declarations and add complexity only when you need it. The goal is reproducible builds, not the most sophisticated Docker setup possible.

Sarah Chen

CI/CD Engineering Lead

Automation evangelist who believes no deployment should require a human. I write pipelines, break pipelines, and write about both. Code-first, always.
