Jenkins Declarative Pipelines: The Complete Jenkinsfile Guide

Sarah Chen · 26 min read

Declarative pipelines are the standard way to define CI/CD workflows in Jenkins. They give you a structured, readable syntax that sits in a Jenkinsfile right next to your application code. If you have been writing freestyle jobs or ad-hoc scripted pipelines, switching to declarative will make your pipelines easier to write, review, and maintain. This guide covers every major feature of the declarative pipeline syntax with real-world examples, production patterns, and the gotchas that the official documentation glosses over.

Declarative vs. Scripted Pipelines

Before diving in, it helps to understand why declarative pipelines exist alongside scripted pipelines.

| Feature | Declarative | Scripted |
|---|---|---|
| Syntax | Structured, opinionated | Free-form Groovy |
| Learning curve | Lower | Higher |
| Validation | Linted before execution | Errors at runtime |
| Blue Ocean support | Full visualization | Limited |
| Flexibility | Covers 90% of use cases | Unlimited |
| Error handling | post blocks | try/catch/finally |
| Restart from stage | Supported | Not supported |

The rule of thumb: start declarative and stay declarative as long as you can. When you hit its limits -- dynamic stage generation, complex error handling, or heavy Groovy logic -- you can drop into a script block within a declarative pipeline. Only reach for a fully scripted pipeline when declarative truly cannot express what you need.

Pipeline Structure at a Glance

Every declarative pipeline follows the same skeleton:

pipeline {
    agent any

    options { ... }
    parameters { ... }
    environment { ... }
    triggers { ... }

    stages {
        stage('Stage Name') {
            steps {
                // your build commands
            }
        }
    }

    post {
        always { ... }
        success { ... }
        failure { ... }
    }
}

The pipeline block is the top-level container. Everything else nests inside it. The order of the sections does not matter syntactically, but keeping them in a consistent order helps your team read Jenkinsfiles quickly. I recommend: agent, options, parameters, environment, triggers, stages, post.

Validation and Linting

Declarative pipelines are validated before they run. If you have a syntax error, Jenkins tells you immediately instead of failing halfway through a build. You can also validate Jenkinsfiles without running them:

# Validate a Jenkinsfile using the Jenkins API
curl -X POST -F "jenkinsfile=@Jenkinsfile" \
  http://localhost:8080/pipeline-model-converter/validate

This is useful in a pre-commit hook or as part of a PR check. Catch syntax errors before they hit the main branch.

Agent Directives

The agent directive tells Jenkins where to run the pipeline or a specific stage. Choosing the right agent strategy is one of the most important decisions in pipeline design.

Common Agent Types

// Run on any available executor
agent any

// Run on a node with a specific label
agent { label 'linux && docker' }

// Run inside a Docker container
agent {
    docker {
        image 'node:20-alpine'
        args '-v /tmp:/tmp'
    }
}

// Run inside a container built from a Dockerfile in the repo
agent {
    dockerfile {
        filename 'Dockerfile.ci'
        dir 'build'
        additionalBuildArgs '--build-arg APP_ENV=ci'
    }
}

// No agent at top level -- define per stage
agent none

// Run inside a Kubernetes pod
agent {
    kubernetes {
        yaml '''
            apiVersion: v1
            kind: Pod
            spec:
              containers:
              - name: maven
                image: maven:3.9-eclipse-temurin-17
                command: ['sleep']
                args: ['infinity']
        '''
    }
}

When to Use Each Agent Type

| Agent Type | Use Case | Trade-offs |
|---|---|---|
| any | Simple pipelines, single-tool builds | No environment control |
| label | When specific nodes have required tools or hardware | Requires agent provisioning |
| docker | Most pipelines -- clean, reproducible environments | Requires Docker on the agent |
| dockerfile | Custom build environments defined in the repo | Slower first build (image build) |
| kubernetes | Elastic scaling, multi-container pods | Requires Kubernetes cluster |
| none | Multi-platform or multi-environment pipelines | Each stage must declare its own agent |
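
One docker-agent detail worth knowing: by default, a docker agent can run on any node with a fresh workspace, which means files from a top-level checkout are not automatically visible inside the container stage. The reuseNode flag addresses this. A minimal sketch (image and label are illustrative):

```groovy
pipeline {
    agent { label 'linux' }

    stages {
        stage('Build in Container') {
            agent {
                docker {
                    image 'node:20-alpine'
                    // Run the container on the same node and in the same
                    // workspace as the top-level agent, so files checked out
                    // at the pipeline level are visible inside the container
                    reuseNode true
                }
            }
            steps {
                sh 'npm ci && npm run build'
            }
        }
    }
}
```

Without reuseNode, Jenkins is free to schedule the container stage on a different node entirely, and you would need stash/unstash to move files in.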

Per-Stage Agents

When you use agent none at the top level, each stage must declare its own agent. This is common in multi-platform builds or when different stages need different environments:

pipeline {
    agent none

    stages {
        stage('Build Frontend') {
            agent { docker { image 'node:20' } }
            steps {
                sh 'npm ci && npm run build'
                stash includes: 'dist/**', name: 'frontend'
            }
        }
        stage('Build Backend') {
            agent { docker { image 'golang:1.22' } }
            steps {
                sh 'go build -o app ./cmd/server'
                stash includes: 'app', name: 'backend'
            }
        }
        stage('Package') {
            agent { label 'docker' }
            steps {
                unstash 'frontend'
                unstash 'backend'
                sh 'docker build -t my-app:${BUILD_NUMBER} .'
            }
        }
    }
}

Each stage spins up its own container, runs, and tears it down. Use stash/unstash to pass artifacts between stages on different agents. Keep stash sizes small -- they are stored on the controller.

Stages and Steps

Stages are the logical groupings of your pipeline. Steps are the individual commands within a stage.

stages {
    stage('Checkout') {
        steps {
            checkout scm
        }
    }
    stage('Install Dependencies') {
        steps {
            sh 'npm ci'
        }
    }
    stage('Lint') {
        steps {
            sh 'npm run lint'
        }
    }
    stage('Test') {
        steps {
            sh 'npm test -- --coverage --reporters=default --reporters=jest-junit'
            junit 'test-results/**/*.xml'
            publishHTML(target: [
                allowMissing: false,
                alwaysLinkToLastBuild: true,
                keepAll: true,
                reportDir: 'coverage/lcov-report',
                reportFiles: 'index.html',
                reportName: 'Coverage Report'
            ])
        }
    }
    stage('Build') {
        steps {
            sh 'npm run build'
            archiveArtifacts artifacts: 'dist/**/*', fingerprint: true
        }
    }
}

Stage Design Guidelines

  • Keep stages focused on one logical step. A stage called "Build and Test and Deploy" is doing too much.
  • Name them clearly. Stage names show up in the Blue Ocean UI, the classic stage view, and pipeline logs. Good names make debugging faster.
  • Do not combine unrelated work in the same stage just to reduce the number of stages. Five focused stages are better than two bloated ones.
  • Order stages by dependency. Later stages should depend on earlier ones completing successfully.

Common Step Reference

| Step | Purpose | Example |
|---|---|---|
| sh | Execute a shell command | sh 'make build' |
| bat | Execute a Windows batch command | bat 'msbuild /p:Configuration=Release' |
| powershell | Execute PowerShell | powershell 'Get-ChildItem' |
| checkout | Check out source code | checkout scm |
| echo | Print a message | echo 'Starting build...' |
| dir | Change working directory | dir('subdir') { sh 'make' } |
| stash | Save files for later stages | stash includes: 'dist/**', name: 'build' |
| unstash | Restore saved files | unstash 'build' |
| archiveArtifacts | Save build artifacts | archiveArtifacts 'dist/**' |
| junit | Publish JUnit test results | junit '**/test-results/*.xml' |
| retry | Retry a block on failure | retry(3) { sh 'flaky-test.sh' } |
| sleep | Wait for a duration | sleep(time: 30, unit: 'SECONDS') |
| timeout | Fail if block exceeds time | timeout(5) { sh 'long-task.sh' } |
| error | Fail the build with a message | error 'Missing required file' |
| writeFile | Write content to a file | writeFile file: 'out.txt', text: 'hello' |
| readFile | Read a file's content | def c = readFile 'version.txt' |

Environment Variables

The environment block sets environment variables available to all steps. You can define them at the pipeline level or the stage level. Stage-level variables override pipeline-level ones.

pipeline {
    agent any

    environment {
        APP_NAME = 'my-service'
        APP_VERSION = sh(script: 'git describe --tags --always', returnStdout: true).trim()
        NODE_ENV = 'ci'
        // Credentials helper -- creates APP_CREDS_USR and APP_CREDS_PSW
        APP_CREDS = credentials('app-credentials')
    }

    stages {
        stage('Build') {
            environment {
                // Stage-level variables
                BUILD_TIMESTAMP = sh(script: 'date -u +%Y%m%d%H%M%S', returnStdout: true).trim()
                BUILD_TAG = "${APP_VERSION}-${BUILD_TIMESTAMP}"
            }
            steps {
                sh 'echo "Building $APP_NAME version $BUILD_TAG"'
                sh 'docker build -t $APP_NAME:$BUILD_TAG .'
            }
        }
    }
}

Built-in Variables

Jenkins provides many built-in variables. The most useful ones:

| Variable | Description | Example Value |
|---|---|---|
| BUILD_NUMBER | Current build number | 42 |
| BUILD_URL | Full URL of the build | http://jenkins/job/foo/42/ |
| BUILD_ID | Build identifier (same as BUILD_NUMBER) | 42 |
| JOB_NAME | Name of the job | my-pipeline |
| JOB_BASE_NAME | Short name without folder | my-pipeline |
| WORKSPACE | Absolute path of the workspace | /var/jenkins_home/workspace/my-pipeline |
| GIT_COMMIT | Current Git commit hash | a1b2c3d4e5f6... |
| GIT_BRANCH | Current Git branch | origin/main |
| BRANCH_NAME | Branch name (multibranch pipelines) | main |
| CHANGE_ID | Pull request number (multibranch) | 123 |
| CHANGE_TARGET | PR target branch | main |
| TAG_NAME | Tag name (if building a tag) | v1.2.3 |
| NODE_NAME | Name of the agent running the build | docker-agent-1 |
| EXECUTOR_NUMBER | Executor slot on the agent | 0 |
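
The multibranch-only variables (BRANCH_NAME, CHANGE_ID, CHANGE_TARGET, TAG_NAME) are simply absent in other job types and build contexts, so read them through env, which returns null instead of failing. A sketch of branch/PR/tag detection:

```groovy
stage('Context Info') {
    steps {
        script {
            // env.* lookups return null for unset variables instead of throwing
            if (env.CHANGE_ID) {
                echo "Building PR #${env.CHANGE_ID} targeting ${env.CHANGE_TARGET}"
            } else if (env.TAG_NAME) {
                echo "Building tag ${env.TAG_NAME}"
            } else {
                // GIT_BRANCH as a fallback for non-multibranch jobs
                echo "Building branch ${env.BRANCH_NAME ?: env.GIT_BRANCH}"
            }
        }
    }
}
```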

Dynamic Environment Variables

You can compute environment variables dynamically:

environment {
    // From a shell command
    GIT_SHORT = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()

    // From Groovy expressions
    IS_MAIN = "${env.BRANCH_NAME == 'main' ? 'true' : 'false'}"

    // From a file
    VERSION = readFile('VERSION').trim()
}

Important: Environment variable values are always strings. Even if you assign a boolean expression, it becomes the string "true" or "false". Always compare with string equality in shell scripts.
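
For example, the IS_MAIN variable above holds the string "true", never a boolean, so it must be compared as a string in both Groovy and shell:

```groovy
stage('Publish') {
    steps {
        script {
            // Wrong: env.IS_MAIN == true is always false, because the
            // value is the string "true", not the boolean true
            if (env.IS_MAIN == 'true') {
                echo 'On main branch'
            }
        }
        // In shell, compare as a string too
        sh 'if [ "$IS_MAIN" = "true" ]; then echo "On main branch"; fi'
    }
}
```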

Credentials Binding

Never put secrets in your Jenkinsfile. Use the Credentials Binding plugin to inject them at runtime.

Environment-Level Credentials

pipeline {
    agent any

    environment {
        // Username/Password type -- creates DOCKER_CREDS_USR and DOCKER_CREDS_PSW
        DOCKER_CREDS = credentials('dockerhub-credentials')

        // Secret text type -- creates the variable directly
        SONAR_TOKEN = credentials('sonarqube-token')

        // Secret file type -- creates a path to a temp file
        KUBECONFIG = credentials('kubeconfig-prod')
    }

    stages {
        stage('Push Image') {
            steps {
                sh '''
                    echo "$DOCKER_CREDS_PSW" | docker login -u "$DOCKER_CREDS_USR" --password-stdin
                    docker push myregistry/myapp:${BUILD_NUMBER}
                    docker logout
                '''
            }
        }
        stage('Scan') {
            steps {
                sh 'sonar-scanner -Dsonar.token=$SONAR_TOKEN'
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl --kubeconfig=$KUBECONFIG apply -f k8s/'
            }
        }
    }
}

Step-Level Credentials with withCredentials

For more granular control and to limit the scope of credential exposure:

stage('Deploy') {
    steps {
        withCredentials([
            string(credentialsId: 'slack-webhook', variable: 'SLACK_URL'),
            file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG'),
            usernamePassword(
                credentialsId: 'aws-creds',
                usernameVariable: 'AWS_ACCESS_KEY_ID',
                passwordVariable: 'AWS_SECRET_ACCESS_KEY'
            ),
            sshUserPrivateKey(
                credentialsId: 'deploy-ssh-key',
                keyFileVariable: 'SSH_KEY',
                usernameVariable: 'SSH_USER'
            )
        ]) {
            sh '''
                kubectl --kubeconfig=$KUBECONFIG apply -f k8s/
                ssh -i $SSH_KEY $SSH_USER@production-host "sudo systemctl restart app"
            '''
        }
        // Credentials are NOT available here -- scope is limited to the block
    }
}

Credential Types Reference

| Type | Credential Class | Variables Created |
|---|---|---|
| Username/Password | usernamePassword | usernameVariable, passwordVariable |
| Secret text | string | variable |
| Secret file | file | variable (path to temp file) |
| SSH key | sshUserPrivateKey | keyFileVariable, usernameVariable, passphraseVariable |
| Certificate | certificate | variable (path to PKCS12 keystore) |

Credentials are masked in the console output automatically. Jenkins replaces the secret value with **** in logs. However, be careful with commands that might encode or transform secrets -- base64-encoded secrets will not be masked.
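
To illustrate that caveat -- a hedged sketch, with an assumed credential id, not something to run against a real secret:

```groovy
stage('Masking Demo') {
    environment {
        // 'api-token' is an assumed secret-text credential id
        API_TOKEN = credentials('api-token')
    }
    steps {
        // Masked: Jenkins replaces the literal secret value with ****
        sh 'echo "token is $API_TOKEN"'
        // NOT masked: Jenkins only matches the literal secret string,
        // so the base64-encoded form appears in the log in plain text
        sh 'echo "$API_TOKEN" | base64'
    }
}
```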

Post Actions

The post block runs after stages complete, regardless of the outcome (or specifically based on it). This is where you handle notifications, cleanup, reporting, and artifact publishing.

post {
    always {
        // Runs no matter what
        junit allowEmptyResults: true, testResults: '**/test-results/*.xml'
        publishHTML(target: [
            reportDir: 'coverage',
            reportFiles: 'index.html',
            reportName: 'Coverage'
        ])
        cleanWs()
    }
    success {
        slackSend(channel: '#deployments', color: 'good',
            message: "SUCCESS: ${JOB_NAME} #${BUILD_NUMBER} on ${BRANCH_NAME}")
    }
    failure {
        slackSend(channel: '#deployments', color: 'danger',
            message: "FAILED: ${JOB_NAME} #${BUILD_NUMBER} - ${BUILD_URL}")
        emailext(
            subject: "FAILED: ${JOB_NAME} #${BUILD_NUMBER}",
            body: "Check console output at ${BUILD_URL}",
            to: 'team@example.com',
            recipientProviders: [requestor(), culprits()]
        )
    }
    unstable {
        // Test failures marked build as unstable
        slackSend(channel: '#ci-alerts', color: 'warning',
            message: "UNSTABLE: ${JOB_NAME} #${BUILD_NUMBER} - some tests failed")
    }
    changed {
        // Status changed from previous build (e.g., failure to success)
        echo "Build status changed from ${currentBuild.previousBuild?.result} to ${currentBuild.currentResult}"
    }
    fixed {
        // Previous build failed, this one succeeded
        slackSend(channel: '#deployments', color: 'good',
            message: "FIXED: ${JOB_NAME} is green again!")
    }
    regression {
        // Previous build succeeded, this one failed
        slackSend(channel: '#ci-alerts', color: 'danger',
            message: "REGRESSION: ${JOB_NAME} broke on build #${BUILD_NUMBER}")
    }
    aborted {
        echo 'Build was manually aborted.'
        cleanWs()
    }
}

Post Condition Execution Order

When multiple post conditions match, they execute in this order:

  1. always
  2. changed
  3. fixed or regression
  4. success, unstable, failure, or aborted
  5. cleanup (always runs last, even if other post blocks fail)
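
The cleanup condition from step 5 does not appear in the larger example above, but it is the right home for workspace deletion precisely because it still runs when another post block throws. A sketch:

```groovy
post {
    failure {
        // If this notification step itself fails...
        slackSend(channel: '#ci-alerts', color: 'danger',
            message: "FAILED: ${JOB_NAME} #${BUILD_NUMBER}")
    }
    cleanup {
        // ...cleanup still runs last, so the workspace is always released
        cleanWs()
    }
}
```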

Stage-Level Post Blocks

You can define post blocks at the stage level for stage-specific cleanup:

stage('Integration Tests') {
    steps {
        sh 'docker-compose -f docker-compose.test.yml up -d'
        sh 'sleep 10'  // Wait for services to be ready
        sh 'npm run test:integration'
    }
    post {
        always {
            sh 'docker-compose -f docker-compose.test.yml down -v'
            sh 'docker-compose -f docker-compose.test.yml rm -f'
        }
        success {
            echo 'Integration tests passed.'
        }
        failure {
            sh 'docker-compose -f docker-compose.test.yml logs --no-color > integration-logs.txt'
            archiveArtifacts artifacts: 'integration-logs.txt', allowEmptyArchive: true
        }
    }
}

When Conditions

The when directive controls whether a stage executes. This is how you create conditional pipelines -- running deployment stages only on certain branches, skipping expensive tests on draft PRs, or enabling feature flags.

Basic When Conditions

stage('Deploy to Staging') {
    when {
        branch 'develop'
    }
    steps {
        sh './deploy.sh staging'
    }
}

stage('Deploy to Production') {
    when {
        branch 'main'
        beforeAgent true  // Evaluate before allocating an agent
    }
    steps {
        sh './deploy.sh production'
    }
}

stage('PR Checks') {
    when {
        changeRequest()  // Only on pull requests
    }
    steps {
        sh 'npm run lint'
        sh 'npm test'
    }
}

Complete When Conditions Reference

// Branch name match (exact)
when { branch 'main' }

// Branch name match (glob pattern)
when { branch pattern: 'release-*', comparator: 'GLOB' }

// Branch name match (regex)
when { branch pattern: 'feature/.*', comparator: 'REGEXP' }

// Environment variable check
when { environment name: 'DEPLOY_TO', value: 'production' }

// Expression (arbitrary Groovy)
when { expression { return params.RUN_TESTS == true } }

// Tag builds only
when { tag 'v*' }

// Tag with comparator
when { tag pattern: 'v\\d+\\.\\d+\\.\\d+', comparator: 'REGEXP' }

// Triggered by specific cause
when { triggeredBy 'TimerTrigger' }
when { triggeredBy cause: 'UserIdCause', detail: 'admin' }

// Only when specific files changed
when { changeset '**/*.java' }
when { changeset pattern: 'frontend/.*', comparator: 'REGEXP' }

// Pull request targeting a specific branch
when { changeRequest target: 'main' }

// Generic equality check (here: did the previous build succeed?)
when { equals expected: 'SUCCESS', actual: currentBuild.previousBuild?.result }

Combining Conditions with Logic Operators

// AND logic (all conditions must be true)
when {
    allOf {
        branch 'main'
        environment name: 'DEPLOY_ENABLED', value: 'true'
        not { triggeredBy 'TimerTrigger' }
    }
}

// OR logic (any condition must be true)
when {
    anyOf {
        branch 'main'
        branch 'develop'
        tag 'v*'
    }
}

// NOT logic
when {
    not {
        branch 'feature/*'
    }
}

// Nested logic
when {
    allOf {
        anyOf {
            branch 'main'
            branch 'release/*'
        }
        not {
            environment name: 'SKIP_DEPLOY', value: 'true'
        }
    }
}

The beforeAgent Option

By default, when conditions are evaluated after the agent is allocated. This means Jenkins spins up a Docker container or connects to an agent node, then checks the condition -- wasting resources if the condition is false.

Add beforeAgent true to evaluate the condition first:

stage('Deploy') {
    when {
        branch 'main'
        beforeAgent true  // Check branch BEFORE allocating the agent
    }
    agent { label 'deploy-node' }
    steps {
        sh './deploy.sh'
    }
}

Always use beforeAgent true when the condition is based on branch names, parameters, or environment variables -- anything that does not require the workspace.
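
The same idea extends to a sibling flag: beforeInput true evaluates the when condition before the stage's input directive (covered in the next section) prompts anyone, so approvers are never asked about stages that would be skipped anyway. A sketch:

```groovy
stage('Deploy to Production') {
    when {
        branch 'main'
        beforeAgent true   // skip agent allocation on other branches
        beforeInput true   // skip the approval prompt on other branches
    }
    input {
        message 'Deploy to production?'
    }
    agent { label 'deploy-node' }
    steps {
        sh './deploy.sh production'
    }
}
```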

Input Steps and Manual Approval Gates

The input directive pauses the pipeline and waits for manual approval. This is essential for production deployments and any action that requires human sign-off.

Stage-Level Input

stage('Deploy to Production') {
    when { branch 'main' }
    input {
        message 'Deploy to production?'
        ok 'Yes, deploy it'
        submitter 'admin,release-managers'
        parameters {
            choice(name: 'TARGET_REGION',
                   choices: ['us-east-1', 'eu-west-1', 'ap-southeast-1'],
                   description: 'Select the deployment region')
            booleanParam(name: 'RUN_SMOKE_TESTS',
                         defaultValue: true,
                         description: 'Run smoke tests after deployment?')
        }
    }
    steps {
        echo "Deploying to ${TARGET_REGION}..."
        sh "./deploy.sh production ${TARGET_REGION}"
        script {
            if (RUN_SMOKE_TESTS == 'true') {
                sh './smoke-tests.sh'
            }
        }
    }
}

Put input at the stage level (not inside steps) so that no agent is held while waiting for approval. When input is at the stage level, Jenkins releases the agent and only re-acquires it after approval.

Step-Level Input

For simpler approval gates:

stage('Confirm Delete') {
    steps {
        script {
            def userInput = input(
                message: 'This will delete the staging database. Are you sure?',
                ok: 'Proceed',
                submitter: 'admin',
                parameters: [
                    string(name: 'CONFIRM', defaultValue: '',
                           description: 'Type DELETE to confirm')
                ]
            )
            if (userInput != 'DELETE') {
                error 'Confirmation text did not match. Aborting.'
            }
        }
        sh './delete-staging-db.sh'
    }
}

Timeout for Input

Avoid pipelines that wait forever for approval:

stage('Deploy') {
    options {
        timeout(time: 4, unit: 'HOURS')
    }
    input {
        message 'Approve deployment?'
    }
    steps {
        sh './deploy.sh'
    }
}

If no one approves within 4 hours, the pipeline is aborted.

Parallel Stages

Run independent stages simultaneously to cut pipeline duration. This is one of the most impactful optimizations you can make.

stage('Tests') {
    failFast true  // Abort all parallel branches if any fails
    parallel {
        stage('Unit Tests') {
            agent { docker { image 'node:20' } }
            steps {
                sh 'npm ci'
                sh 'npm run test:unit -- --ci'
            }
            post {
                always { junit '**/junit-unit.xml' }
            }
        }
        stage('Integration Tests') {
            agent { docker { image 'node:20' } }
            steps {
                sh 'npm ci'
                sh 'npm run test:integration -- --ci'
            }
            post {
                always { junit '**/junit-integration.xml' }
            }
        }
        stage('E2E Tests') {
            agent { docker { image 'cypress/included:13.6.0' } }
            steps {
                sh 'npm ci'
                sh 'npx cypress run --reporter junit --reporter-options mochaFile=results/e2e-[hash].xml'
            }
            post {
                always {
                    junit 'results/*.xml'
                    archiveArtifacts artifacts: 'cypress/screenshots/**', allowEmptyArchive: true
                }
            }
        }
        stage('Security Scan') {
            agent { docker { image 'aquasec/trivy:latest' } }
            steps {
                sh 'trivy fs --exit-code 1 --severity HIGH,CRITICAL .'
            }
        }
    }
}

Parallel Stage Best Practices

  • Each parallel branch should have its own agent. Sharing an agent defeats the purpose of parallelism.
  • Use failFast true to abort remaining branches when one fails. No point running E2E tests if unit tests already failed.
  • Keep parallel branches independent. If stage B depends on stage A, they cannot be parallel.
  • Monitor resource consumption. Four parallel Docker agents need four times the resources. Make sure your infrastructure can handle it.
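
If you want fail-fast behavior for every parallel block in the pipeline without repeating failFast true on each stage, there is a pipeline-level option for it:

```groovy
pipeline {
    agent any

    options {
        // Applies failFast behavior to every parallel block in the pipeline
        parallelsAlwaysFailFast()
    }

    stages {
        stage('Checks') {
            parallel {
                stage('Unit') { steps { sh 'npm run test:unit' } }
                stage('Lint') { steps { sh 'npm run lint' } }
            }
        }
    }
}
```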

Matrix Builds

For testing across multiple configurations (Node versions, OS variants, etc.), use the matrix directive:

stage('Cross-Version Tests') {
    matrix {
        axes {
            axis {
                name 'NODE_VERSION'
                values '18', '20', '22'
            }
            axis {
                name 'OS'
                values 'alpine', 'slim'
            }
        }
        excludes {
            exclude {
                axis {
                    name 'NODE_VERSION'
                    values '18'
                }
                axis {
                    name 'OS'
                    values 'slim'
                }
            }
        }
        agent {
            docker { image "node:${NODE_VERSION}-${OS}" }
        }
        stages {
            stage('Test') {
                steps {
                    sh 'node --version'
                    sh 'npm ci'
                    sh 'npm test'
                }
            }
        }
    }
}

This generates 5 combinations (3x2 minus 1 exclusion) and runs them all in parallel.
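
Stages inside a matrix can also use when against the axis variables (they are exposed as environment variables), which is handy when a cell should run only part of the work rather than be excluded outright. A sketch:

```groovy
matrix {
    axes {
        axis {
            name 'NODE_VERSION'
            values '18', '20', '22'
        }
    }
    agent {
        docker { image "node:${NODE_VERSION}-alpine" }
    }
    stages {
        stage('Test') {
            steps {
                sh 'npm ci'
                sh 'npm test'
            }
        }
        stage('Extended Tests') {
            // Axis values are environment variables, so when can see them;
            // here the slow suite runs only on the newest version
            when { environment name: 'NODE_VERSION', value: '22' }
            steps {
                sh 'npm run test:extended'
            }
        }
    }
}
```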

Options

The options directive configures pipeline-level behavior.

pipeline {
    agent any

    options {
        timeout(time: 30, unit: 'MINUTES')
        retry(2)
        timestamps()
        disableConcurrentBuilds()
        buildDiscarder(logRotator(numToKeepStr: '20', artifactNumToKeepStr: '5'))
        skipDefaultCheckout()
        ansiColor('xterm')
        quietPeriod(10)
        checkoutToSubdirectory('src')
    }

    stages {
        stage('Build') {
            options {
                timeout(time: 10, unit: 'MINUTES')  // Stage-level timeout
                retry(3)  // Retry this specific stage up to 3 times
            }
            steps {
                checkout scm  // Manual checkout since we skipped default
                sh 'make build'
            }
        }
    }
}

Options Reference

| Option | Purpose | Scope |
|---|---|---|
| timeout | Kill the build if it exceeds the time limit | Pipeline or stage |
| retry | Retry the entire pipeline or stage on failure | Pipeline or stage |
| timestamps | Prefix console output with timestamps | Pipeline |
| disableConcurrentBuilds | Queue builds instead of running in parallel | Pipeline |
| buildDiscarder | Automatically delete old builds | Pipeline |
| skipDefaultCheckout | Do not check out SCM automatically | Pipeline or stage |
| skipStagesAfterUnstable | Stop executing stages after one goes unstable | Pipeline |
| ansiColor | Enable ANSI color code processing in console | Pipeline |
| quietPeriod | Wait N seconds before starting a triggered build | Pipeline |
| checkoutToSubdirectory | Check out code into a subdirectory | Pipeline |
| preserveStashes | Keep stashes from completed builds for restarted stages | Pipeline |
| durabilityHint | Trade durability for performance | Pipeline |

Performance Optimization with durabilityHint

For pipelines where speed matters more than crash recovery:

options {
    durabilityHint('PERFORMANCE_OPTIMIZED')
}

| Durability Hint | Behavior |
|---|---|
| MAX_SURVIVABILITY | Default. Survives controller restarts. Slower. |
| SURVIVABLE_NONATOMIC | Faster. May lose some progress on restart. |
| PERFORMANCE_OPTIMIZED | Fastest. Build state is lost if the controller restarts mid-build. |

Parameters

Parameters let users provide input when triggering a build manually or programmatically.

pipeline {
    agent any

    parameters {
        string(name: 'DEPLOY_ENV',
               defaultValue: 'staging',
               description: 'Target environment (staging, production)')
        booleanParam(name: 'RUN_INTEGRATION_TESTS',
                     defaultValue: true,
                     description: 'Run integration tests?')
        choice(name: 'LOG_LEVEL',
               choices: ['info', 'debug', 'warn', 'error'],
               description: 'Application log level')
        text(name: 'RELEASE_NOTES',
             defaultValue: '',
             description: 'Release notes for this deployment')
        password(name: 'EXTERNAL_API_KEY',
                 defaultValue: '',
                 description: 'External API key (prefer credentials over this)')
    }

    stages {
        stage('Info') {
            steps {
                echo "Deploying to ${params.DEPLOY_ENV} with log level ${params.LOG_LEVEL}"
                echo "Integration tests: ${params.RUN_INTEGRATION_TESTS}"
            }
        }
        stage('Integration Tests') {
            when {
                expression { return params.RUN_INTEGRATION_TESTS }
            }
            steps {
                sh 'npm run test:integration'
            }
        }
        stage('Deploy') {
            steps {
                sh "./deploy.sh ${params.DEPLOY_ENV}"
            }
        }
    }
}

Jenkins only learns about a parameters block by running the pipeline, so the first build after adding or changing parameters runs with the default values. Subsequent builds show the updated parameter form in the UI.

Security note: Avoid the password parameter type for real secrets. It is visible in the build configuration XML and not encrypted at rest. Use Jenkins credentials instead.

Triggers

Automate when pipelines run without manual intervention.

pipeline {
    agent any

    triggers {
        // Poll SCM every 5 minutes (with hash-based jitter to spread load)
        pollSCM('H/5 * * * *')

        // Run on a cron schedule
        cron('H 2 * * 1-5')  // Approximately 2 AM on weekdays

        // Trigger when another job completes
        upstream(upstreamProjects: 'build-base-image',
                 threshold: hudson.model.Result.SUCCESS)
    }

    stages {
        stage('Nightly Build') {
            steps {
                sh 'make full-build'
            }
        }
    }
}

Cron Syntax Reference

MINUTE HOUR DOM MONTH DOW
  |      |   |    |    |
  |      |   |    |    +--- Day of week (0-7, 0 and 7 are Sunday)
  |      |   |    +-------- Month (1-12)
  |      |   +------------- Day of month (1-31)
  |      +------------------ Hour (0-23)
  +------------------------- Minute (0-59)

| Expression | Meaning |
|---|---|
| H/15 * * * * | Every 15 minutes (with jitter) |
| H 2 * * * | Once daily around 2 AM |
| H 2 * * 1-5 | Weekdays around 2 AM |
| H H(0-3) * * * | Once daily between midnight and 3 AM |
| H 8,12,16 * * 1-5 | Three times daily on weekdays |

The H symbol distributes builds evenly. H/15 means "every 15 minutes, but offset by a hash of the job name." This prevents all jobs from running at exactly the same time.

Webhook-Based Triggers

For GitHub-based workflows, prefer webhooks over polling. Configure a GitHub webhook pointing to https://jenkins.example.com/github-webhook/ and use the GitHub plugin. This gives you instant builds on push and pull request events instead of up-to-5-minute delays with polling.

For generic webhook triggers, install the Generic Webhook Trigger plugin:

triggers {
    GenericTrigger(
        genericVariables: [
            [key: 'PAYLOAD_REF', value: '$.ref'],
            [key: 'PAYLOAD_ACTION', value: '$.action']
        ],
        token: 'my-webhook-token',
        causeString: 'Triggered by webhook',
        printContributedVariables: true,
        regexpFilterText: '$PAYLOAD_REF',
        regexpFilterExpression: 'refs/heads/(main|develop)'
    )
}

Script Blocks: Escaping to Groovy

When declarative syntax is not enough, use a script block to write arbitrary Groovy:

stage('Dynamic Steps') {
    steps {
        script {
            // Read a config file and act on it
            def config = readJSON file: 'deploy-config.json'

            config.services.each { service ->
                echo "Deploying ${service.name} to ${service.target}"
                sh "kubectl set image deployment/${service.name} app=${service.image}:${env.BUILD_NUMBER}"
            }

            // Store a value for later stages
            env.DEPLOYED_SERVICES = config.services.collect { it.name }.join(',')
        }
    }
}

When to Use Script Blocks

| Use Case | Script Block Needed? |
|---|---|
| Conditional logic (if/else) | Usually yes, unless when is sufficient |
| Loops over dynamic data | Yes |
| Complex string manipulation | Yes |
| Calling shared library classes | Yes |
| Setting environment variables dynamically | Yes |
| Simple shell commands | No, use sh directly |

Resist the temptation to put everything in script blocks. The more Groovy you write, the less benefit you get from the declarative structure. If you find yourself writing more script blocks than declarative steps, consider whether a shared library would be a better home for that logic.
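
As a taste of what that refactoring looks like: a shared library exposes custom steps from a vars/ directory, so the deployment loop from the script block above could move into a file like vars/deployServices.groovy (the file and step names here are illustrative):

```groovy
// vars/deployServices.groovy in a shared library repository
def call(String configFile = 'deploy-config.json') {
    def config = readJSON file: configFile
    config.services.each { service ->
        echo "Deploying ${service.name} to ${service.target}"
        sh "kubectl set image deployment/${service.name} app=${service.image}:${env.BUILD_NUMBER}"
    }
    // Return the deployed service names for the caller to record
    return config.services.collect { it.name }.join(',')
}
```

The Jenkinsfile then shrinks to a library import and a one-line call:

```groovy
@Library('my-shared-library') _   // library name as configured in Jenkins

pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                script {
                    env.DEPLOYED_SERVICES = deployServices('deploy-config.json')
                }
            }
        }
    }
}
```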

Putting It All Together: Production Jenkinsfile

Here is a comprehensive real-world Jenkinsfile for a Node.js microservice that builds, tests, creates a Docker image, and deploys with all the patterns covered in this guide:

pipeline {
    agent none

    options {
        timeout(time: 45, unit: 'MINUTES')
        timestamps()
        buildDiscarder(logRotator(numToKeepStr: '30', artifactNumToKeepStr: '5'))
        disableConcurrentBuilds(abortPrevious: true)
        ansiColor('xterm')
    }

    parameters {
        booleanParam(name: 'SKIP_TESTS', defaultValue: false,
                     description: 'Skip tests (emergency deploys only)')
        choice(name: 'LOG_LEVEL', choices: ['info', 'debug'],
               description: 'Log level for the deployed service')
    }

    environment {
        REGISTRY = 'registry.example.com'
        IMAGE_NAME = 'user-service'
        IMAGE_TAG = "${GIT_COMMIT.take(8)}"
    }

    stages {
        stage('Validate') {
            agent { docker { image 'node:20-alpine' } }
            when {
                not { expression { return params.SKIP_TESTS } }
                beforeAgent true
            }
            steps {
                sh 'npm ci'
                sh 'npm run lint'
                sh 'npm run type-check'
            }
        }

        stage('Test') {
            when {
                not { expression { return params.SKIP_TESTS } }
                beforeAgent true
            }
            failFast true
            parallel {
                stage('Unit Tests') {
                    agent { docker { image 'node:20-alpine' } }
                    steps {
                        sh 'npm ci'
                        sh 'npm run test:unit -- --ci --coverage'
                    }
                    post {
                        always {
                            junit 'test-results/unit/*.xml'
                        }
                    }
                }
                stage('Integration Tests') {
                    agent { docker { image 'node:20-alpine' } }
                    steps {
                        sh 'npm ci'
                        sh 'npm run test:integration -- --ci'
                    }
                    post {
                        always {
                            junit 'test-results/integration/*.xml'
                        }
                    }
                }
            }
        }

        stage('Build Image') {
            when {
                anyOf { branch 'main'; branch 'develop'; tag 'v*' }
                beforeAgent true
            }
            agent { label 'docker' }
            steps {
                sh """
                    docker build \
                      --build-arg LOG_LEVEL=${params.LOG_LEVEL} \
                      --build-arg GIT_COMMIT=${GIT_COMMIT} \
                      -t ${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} \
                      -t ${REGISTRY}/${IMAGE_NAME}:${BRANCH_NAME} \
                      .
                """
                withCredentials([usernamePassword(
                    credentialsId: 'registry-creds',
                    usernameVariable: 'REG_USER',
                    passwordVariable: 'REG_PASS'
                )]) {
                    sh '''
                        echo "$REG_PASS" | docker login $REGISTRY -u "$REG_USER" --password-stdin
                        docker push ${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}
                        docker push ${REGISTRY}/${IMAGE_NAME}:${BRANCH_NAME}
                        docker logout $REGISTRY
                    '''
                }
            }
        }

        stage('Deploy Staging') {
            when {
                branch 'develop'
                beforeAgent true
            }
            agent { label 'deploy' }
            steps {
                withCredentials([file(credentialsId: 'kubeconfig-staging', variable: 'KUBECONFIG')]) {
                    sh """
                        kubectl set image deployment/${IMAGE_NAME} \
                          ${IMAGE_NAME}=${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} \
                          --namespace=staging
                        kubectl rollout status deployment/${IMAGE_NAME} \
                          --namespace=staging --timeout=300s
                    """
                }
            }
        }

        stage('Deploy Production') {
            when {
                branch 'main'
                beforeAgent true
            }
            options {
                timeout(time: 4, unit: 'HOURS')
            }
            input {
                message 'Deploy to production?'
                ok 'Deploy'
                submitter 'release-team,admin'
            }
            agent { label 'deploy' }
            steps {
                withCredentials([file(credentialsId: 'kubeconfig-prod', variable: 'KUBECONFIG')]) {
                    sh """
                        kubectl set image deployment/${IMAGE_NAME} \
                          ${IMAGE_NAME}=${REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG} \
                          --namespace=production
                        kubectl rollout status deployment/${IMAGE_NAME} \
                          --namespace=production --timeout=600s
                    """
                }
            }
            post {
                success {
                    slackSend(channel: '#releases', color: 'good',
                        message: "DEPLOYED: ${IMAGE_NAME}:${IMAGE_TAG} to production")
                }
                failure {
                    slackSend(channel: '#releases', color: 'danger',
                        message: "DEPLOY FAILED: ${IMAGE_NAME}:${IMAGE_TAG} to production")
                }
            }
        }
    }

    post {
        failure {
            slackSend(channel: '#ci-alerts', color: 'danger',
                message: "FAILED: ${JOB_NAME} #${BUILD_NUMBER}\n${BUILD_URL}")
        }
        fixed {
            slackSend(channel: '#ci-alerts', color: 'good',
                message: "FIXED: ${JOB_NAME} is green again after build #${BUILD_NUMBER}")
        }
    }
}

This pipeline uses per-stage agents so nothing sits idle, conditional deployment stages based on branch, parallel test execution, manual approval for production, credential injection with limited scope, and comprehensive post-build notifications. Every piece is version-controlled alongside the application code.

Troubleshooting Declarative Pipelines

Common Errors and Fixes

"No such DSL method" errors: This usually means a required plugin is not installed. The step name in the error tells you which plugin is missing. Install the plugin and restart Jenkins.

"Expected a stage" or "Not a valid section definition": Declarative syntax is strict. Make sure every stage is inside stages (or parallel), every step is inside steps, and you have not accidentally put a Groovy statement outside a script block.
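You can catch these structural errors before they ever break a build by running the Jenkinsfile through the built-in declarative linter over HTTP. This sketch assumes your controller is reachable at $JENKINS_URL and that you have generated an API token:

```shell
# POST the Jenkinsfile to the declarative linter endpoint.
# USER, API_TOKEN, and JENKINS_URL are placeholders for your instance.
curl --silent --user "$USER:$API_TOKEN" -X POST \
  -F "jenkinsfile=<Jenkinsfile" \
  "$JENKINS_URL/pipeline-model-converter/validate"
```

Wiring this into a pre-commit hook or a lightweight CI check means syntax mistakes surface in seconds instead of after a queued build.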

"Scripts not permitted to use method": Jenkins' script security sandbox blocks unapproved method calls. Navigate to Manage Jenkins, then In-process Script Approval and approve the pending signature. In production, prefer moving the offending code into a trusted shared library, which runs outside the sandbox, rather than accumulating one-off script approvals.

Credentials not available in parallel stages: Credentials bound at the pipeline environment level are available everywhere. But withCredentials blocks are scoped to their enclosing block. If parallel stages need the same credentials, bind them in each parallel branch or at the environment level.
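The environment-level binding looks like this sketch, where 'registry-creds' is the same hypothetical credential ID used in the production Jenkinsfile above:

```groovy
environment {
    // credentials() binds once for the whole pipeline, so every parallel
    // branch can read it. For username/password credentials Jenkins also
    // exposes REGISTRY_CREDS_USR and REGISTRY_CREDS_PSW automatically.
    REGISTRY_CREDS = credentials('registry-creds')
}
```

The trade-off is scope: environment-level credentials are live for the whole build, while withCredentials keeps the secret in memory only for the steps that need it.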

Pipeline restarts fail: Restart from stage only works with declarative pipelines and only for stages that use agent at the stage level. Stages that rely on state from previous stages may not work correctly after restart.
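One way to make restarted stages self-sufficient is to hand artifacts between stages with stash/unstash and keep those stashes alive with the preserveStashes option. A sketch, assuming the build produces a dist/ directory:

```groovy
options {
    // Keep stashes from the last 5 completed builds so a restarted
    // stage can still unstash what an earlier stage produced.
    preserveStashes(buildCount: 5)
}
// ... other sections elided ...
stage('Build') {
    agent { label 'build' }
    steps {
        sh 'make dist'
        stash name: 'dist', includes: 'dist/**'
    }
}
stage('Deploy') {
    agent { label 'deploy' }
    steps {
        unstash 'dist'   // recovers the artifacts even after Restart from Stage
        sh './deploy.sh'
    }
}
```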

Debugging Tips

// Print all environment variables
steps {
    sh 'env | sort'
}

// Print the current working directory and its contents
steps {
    sh 'pwd && ls -la'
}

// Inspect a variable's value and type
// (note: getClass() is blocked by the script security sandbox by default
// and may require a one-time script approval)
steps {
    script {
        echo "BUILD_NUMBER type: ${BUILD_NUMBER.getClass().name}"
        echo "params: ${params}"
    }
}

Declarative pipelines cover the vast majority of CI/CD use cases. They are structured enough to be readable by anyone on the team, flexible enough to handle complex real-world workflows, and integrated deeply enough with Jenkins to provide excellent visualization and management capabilities. Master the patterns in this guide and you will be able to build pipelines for virtually any project.

Sarah Chen

CI/CD Engineering Lead

Automation evangelist who believes no deployment should require a human. I write pipelines, break pipelines, and write about both. Code-first, always.
