
Artifact Repositories in CI/CD: Push, Pull, and Promote

Sarah Chen · 20 min read

An artifact repository without CI/CD integration is just a file server. The real value emerges when every build automatically pushes versioned artifacts to your repository, every pipeline pulls dependencies from your local cache, and promotion workflows move tested artifacts from development to production repositories. This is the operational heart of a mature software delivery pipeline, where the combination of automated builds, immutable artifacts, and controlled promotion gates transforms how teams ship software. This guide covers the practical integration patterns for Nexus and Artifactory with the three most popular CI/CD platforms, along with advanced patterns for multi-stage promotion, security scanning gates, dependency caching, and operational best practices.

Why Artifact Management Matters in CI/CD

Without centralized artifact management, teams hit these problems repeatedly:

  • Unrepeatable builds --- Pulling dependencies directly from the internet means builds can break when upstream packages are removed or modified. The left-pad incident, the ua-parser-js supply chain attack, and countless other incidents demonstrate this risk.
  • Slow pipelines --- Every build downloads dependencies from scratch instead of using cached copies. For a large Java project with hundreds of dependencies, this can add 5-10 minutes to every build.
  • No audit trail --- You cannot determine which exact binaries are running in production. When a security incident occurs, traceability is critical.
  • Manual promotion --- Moving artifacts between environments involves copying files and hoping for the best, with no guarantee that what was tested is what gets deployed.
  • Rate limiting --- Docker Hub, npm, and other registries impose rate limits that can throttle your CI pipelines during peak hours.

A properly integrated artifact repository solves all of these by acting as the single source of truth for both dependencies and build outputs.
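Making the repository the single source of truth is mostly client configuration; a minimal sketch for npm and pip (hostnames and repository names are assumptions, matching the examples later in this guide):

```ini
# .npmrc -- route all npm traffic through the internal proxy
registry=http://nexus.internal:8081/repository/npm-group/

# pip.conf -- the same idea for Python
[global]
index-url = http://nexus.internal:8081/repository/pypi-group/simple/
```

Once every build agent carries this configuration, dependency downloads never leave your network and upstream outages or rate limits stop affecting pipelines.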

The Build-Once-Deploy-Everywhere Principle

The foundational principle of artifact-based CI/CD is: build the artifact once, then deploy the same binary to every environment. This is the opposite of the anti-pattern where each environment triggers a fresh build from source.

Source Code
     |
     v
[ Build ] ----push----> [ Dev Repository ]
                              |
                         (integration tests pass)
                              |
                         promote
                              |
                              v
                         [ Staging Repository ]
                              |
                         (UAT + security scan pass)
                              |
                         promote
                              |
                              v
                         [ Production Repository ]
                              |
                         deploy to production

Every artifact in the production repository has been through every quality gate. There is no possibility of "it works in staging but not in production" caused by a different build. The binary is identical.
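Because the same binary flows through every gate, a deploy step can verify this cheaply by comparing content digests. A minimal sketch of such a gate (the helper name and inline digests are illustrative; in a real pipeline the digests would come from the registry, e.g. `docker inspect --format '{{index .RepoDigests 0}}' <image>` after a pull):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Pre-deploy gate: refuse to deploy unless the production image has the
# same content digest as the image that passed staging tests.
assert_same_build() {
  local staging_digest="$1" prod_digest="$2"
  if [ -z "$staging_digest" ] || [ "$staging_digest" != "$prod_digest" ]; then
    echo "refusing deploy: production image differs from tested image" >&2
    return 1
  fi
  echo "digests match: $prod_digest"
}

assert_same_build "sha256:abc" "sha256:abc"
```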

Docker Push and Pull in Jenkins Pipelines

Jenkins with Nexus Docker Registry

// Jenkinsfile
pipeline {
    agent any

    environment {
        NEXUS_URL = 'nexus.internal:8082'
        DOCKER_IMAGE = "${NEXUS_URL}/myapp"
        NEXUS_CREDENTIALS = credentials('nexus-docker-creds')
        GIT_COMMIT_SHORT = "${GIT_COMMIT.take(7)}"
    }

    stages {
        stage('Build') {
            steps {
                script {
                    docker.build("${DOCKER_IMAGE}:${GIT_COMMIT_SHORT}",
                        "--build-arg BUILD_NUMBER=${BUILD_NUMBER} " +
                        "--build-arg GIT_COMMIT=${GIT_COMMIT} " +
                        "--label build.number=${BUILD_NUMBER} " +
                        "--label git.commit=${GIT_COMMIT} " +
                        "--label build.timestamp=${BUILD_TIMESTAMP} .")
                }
            }
        }

        stage('Unit Tests') {
            steps {
                script {
                    docker.image("${DOCKER_IMAGE}:${GIT_COMMIT_SHORT}").inside {
                        sh 'npm test -- --ci --coverage'
                    }
                }
            }
            post {
                always {
                    junit 'test-results/**/*.xml'
                    publishHTML(target: [
                        reportDir: 'coverage',
                        reportFiles: 'index.html',
                        reportName: 'Coverage Report'
                    ])
                }
            }
        }

        stage('Push to Nexus') {
            steps {
                script {
                    docker.withRegistry("https://${NEXUS_URL}", 'nexus-docker-creds') {
                        def image = docker.image("${DOCKER_IMAGE}:${GIT_COMMIT_SHORT}")
                        image.push()
                        image.push("${BUILD_NUMBER}")
                        // Only tag as latest on main branch
                        if (env.BRANCH_NAME == 'main') {
                            image.push('latest')
                        }
                    }
                }
            }
        }

        stage('Security Scan') {
            steps {
                sh """
                    trivy image --exit-code 0 --severity LOW,MEDIUM \
                        --format json --output trivy-report.json \
                        ${DOCKER_IMAGE}:${GIT_COMMIT_SHORT}

                    trivy image --exit-code 1 --severity CRITICAL,HIGH \
                        --ignore-unfixed \
                        ${DOCKER_IMAGE}:${GIT_COMMIT_SHORT}
                """
            }
            post {
                always {
                    archiveArtifacts artifacts: 'trivy-report.json'
                }
            }
        }

        stage('Deploy to Staging') {
            when {
                branch 'main'
            }
            steps {
                sh """
                    kubectl set image deployment/myapp \
                        myapp=${DOCKER_IMAGE}:${GIT_COMMIT_SHORT} \
                        --namespace=staging
                    kubectl rollout status deployment/myapp \
                        --namespace=staging --timeout=300s
                """
            }
        }
    }

    post {
        failure {
            slackSend(
                channel: '#deployments',
                color: 'danger',
                message: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
            )
        }
    }
}

Jenkins with Artifactory Plugin

JFrog provides a dedicated Jenkins plugin that adds build info tracking, which creates a detailed record of every artifact, dependency, and environment variable involved in a build:

// Jenkinsfile with JFrog plugin
pipeline {
    agent any

    environment {
        ARTIFACTORY_SERVER = 'artifactory-prod'
    }

    stages {
        stage('Configure JFrog CLI') {
            steps {
                script {
                    // Double quotes so Groovy interpolates the values;
                    // JF_ACCESS_TOKEN should come from a Jenkins credentials binding
                    jf "c add ${ARTIFACTORY_SERVER} --url=https://artifactory.company.com --access-token=${JF_ACCESS_TOKEN}"
                }
            }
        }

        stage('Build Docker Image') {
            steps {
                script {
                    sh "docker build -t artifactory.company.com/docker-local/myapp:${BUILD_NUMBER} ."
                }
            }
        }

        stage('Push with Build Info') {
            steps {
                script {
                    def server = Artifactory.server(ARTIFACTORY_SERVER)
                    def rtDocker = Artifactory.docker server: server

                    // Build info objects are created via the plugin factory
                    def buildInfo = Artifactory.newBuildInfo()

                    // Capture environment variables in build info
                    buildInfo.env.capture = true
                    buildInfo.env.collect()

                    // Push the image and record it in the build info
                    rtDocker.push(
                        "artifactory.company.com/docker-local/myapp:${BUILD_NUMBER}",
                        'docker-local',
                        buildInfo
                    )

                    // Git details are attached automatically from the
                    // pipeline's SCM checkout

                    // Publish build info to Artifactory
                    server.publishBuildInfo(buildInfo)
                }
            }
        }

        stage('Xray Scan') {
            steps {
                script {
                    def server = Artifactory.server(ARTIFACTORY_SERVER)
                    def scanConfig = [
                        'buildName': env.JOB_NAME,
                        'buildNumber': env.BUILD_NUMBER,
                        'failBuild': true
                    ]
                    def scanResult = server.xrayScan(scanConfig)
                    echo "Xray scan result: ${scanResult}"
                }
            }
        }

        stage('Promote to Staging') {
            when {
                branch 'main'
            }
            steps {
                script {
                    def server = Artifactory.server(ARTIFACTORY_SERVER)
                    def promotionConfig = [
                        'buildName': env.JOB_NAME,
                        'buildNumber': env.BUILD_NUMBER,
                        'targetRepo': 'docker-staging',
                        'sourceRepo': 'docker-local',
                        'status': 'Staging',
                        'copy': true,
                        'failFast': true
                    ]
                    server.promote(promotionConfig)
                }
            }
        }
    }
}

Docker Push and Pull in GitHub Actions

GitHub Actions with Nexus

# .github/workflows/build-push.yml
name: Build and Push to Nexus

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  NEXUS_URL: nexus.internal:8082
  IMAGE_NAME: myapp

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}

    steps:
      - uses: actions/checkout@v4

      - name: Generate image metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.NEXUS_URL }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,format=long,prefix=
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Nexus Docker Registry
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          registry: ${{ env.NEXUS_URL }}
          username: ${{ secrets.NEXUS_USERNAME }}
          password: ${{ secrets.NEXUS_PASSWORD }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=registry,ref=${{ env.NEXUS_URL }}/${{ env.IMAGE_NAME }}:buildcache
          cache-to: ${{ github.event_name != 'pull_request' && format('type=registry,ref={0}/{1}:buildcache,mode=max', env.NEXUS_URL, env.IMAGE_NAME) || '' }}
          build-args: |
            BUILD_NUMBER=${{ github.run_number }}
            GIT_COMMIT=${{ github.sha }}

  security-scan:
    needs: build-and-test
    if: github.event_name != 'pull_request'
    runs-on: ubuntu-latest

    steps:
      - name: Login to Nexus
        uses: docker/login-action@v3
        with:
          registry: ${{ env.NEXUS_URL }}
          username: ${{ secrets.NEXUS_USERNAME }}
          password: ${{ secrets.NEXUS_PASSWORD }}

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: "${{ env.NEXUS_URL }}/${{ env.IMAGE_NAME }}:${{ github.sha }}"
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'

      - name: Upload Trivy scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'

  deploy-staging:
    needs: [build-and-test, security-scan]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: staging

    steps:
      - uses: actions/checkout@v4

      - name: Deploy to staging
        run: |
          kubectl set image deployment/myapp \
            myapp=${{ env.NEXUS_URL }}/${{ env.IMAGE_NAME }}:${{ github.sha }} \
            --namespace=staging
          kubectl rollout status deployment/myapp \
            --namespace=staging --timeout=300s

GitHub Actions with Artifactory

# .github/workflows/build-push-artifactory.yml
name: Build and Push to Artifactory

on:
  push:
    branches: [main]
    tags: ['v*']

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Setup JFrog CLI
        uses: jfrog/setup-jfrog-cli@v4
        env:
          JF_URL: ${{ secrets.JF_URL }}
          JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Authenticate Docker with Artifactory
        run: |
          jf docker-login artifactory.company.com

      - name: Build Docker image
        run: |
          docker build \
            --label "org.opencontainers.image.revision=${{ github.sha }}" \
            --label "org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}" \
            -t artifactory.company.com/docker-local/myapp:${{ github.sha }} \
            -t artifactory.company.com/docker-local/myapp:${{ github.run_number }} .

      - name: Push with build info
        run: |
          jf docker push artifactory.company.com/docker-local/myapp:${{ github.sha }} \
            --build-name=myapp \
            --build-number=${{ github.run_number }}

          jf docker push artifactory.company.com/docker-local/myapp:${{ github.run_number }} \
            --build-name=myapp \
            --build-number=${{ github.run_number }}

      - name: Collect and publish build info
        run: |
          jf rt build-collect-env myapp ${{ github.run_number }}
          jf rt build-add-git myapp ${{ github.run_number }}
          jf rt build-publish myapp ${{ github.run_number }}

      - name: Xray scan
        run: |
          jf build-scan myapp ${{ github.run_number }} --fail=true --vuln

      - name: Promote to staging (on main branch)
        if: github.ref == 'refs/heads/main'
        run: |
          jf rt build-promote myapp ${{ github.run_number }} \
            docker-staging \
            --status="Staging" \
            --copy \
            --props="deployed.env=staging;deployed.by=github-actions"

GitLab CI Integration

Complete GitLab CI Pipeline with Nexus

# .gitlab-ci.yml
stages:
  - build
  - test
  - scan
  - push
  - promote

variables:
  NEXUS_URL: nexus.internal:8082
  IMAGE_NAME: myapp
  DOCKER_IMAGE: "${NEXUS_URL}/${IMAGE_NAME}"

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build
        --build-arg BUILD_NUMBER=${CI_PIPELINE_IID}
        --build-arg GIT_COMMIT=${CI_COMMIT_SHA}
        --label "git.commit=${CI_COMMIT_SHA}"
        --label "git.branch=${CI_COMMIT_REF_NAME}"
        --label "pipeline.id=${CI_PIPELINE_ID}"
        -t "${DOCKER_IMAGE}:${CI_COMMIT_SHORT_SHA}"
        -t "${DOCKER_IMAGE}:${CI_PIPELINE_IID}" .
    - docker save "${DOCKER_IMAGE}:${CI_COMMIT_SHORT_SHA}" > image.tar
  artifacts:
    paths:
      - image.tar
    expire_in: 1 hour

test:
  stage: test
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker load < image.tar
    - docker run --rm "${DOCKER_IMAGE}:${CI_COMMIT_SHORT_SHA}" npm test -- --ci

scan:
  stage: scan
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    - trivy image --input image.tar
        --exit-code 1
        --severity CRITICAL,HIGH
        --ignore-unfixed
        --format json
        --output trivy-report.json
  artifacts:
    paths:
      - trivy-report.json
    when: always
  allow_failure: false

push:
  stage: push
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo "${NEXUS_PASSWORD}" | docker login -u "${NEXUS_USERNAME}" --password-stdin "${NEXUS_URL}"
  script:
    - docker load < image.tar
    - docker push "${DOCKER_IMAGE}:${CI_COMMIT_SHORT_SHA}"
    - docker push "${DOCKER_IMAGE}:${CI_PIPELINE_IID}"
    - |
      if [ "$CI_COMMIT_BRANCH" = "main" ]; then
        docker tag "${DOCKER_IMAGE}:${CI_COMMIT_SHORT_SHA}" "${DOCKER_IMAGE}:latest"
        docker push "${DOCKER_IMAGE}:latest"
      fi
  rules:
    - if: $CI_COMMIT_BRANCH

promote-to-staging:
  stage: promote
  image: curlimages/curl:latest
  script:
    - |
      # The staging move endpoint requires Nexus Pro; on OSS you must
      # pull, retag, and push the image between repositories instead
      curl --fail -u "${NEXUS_USERNAME}:${NEXUS_PASSWORD}" -X POST \
        "http://nexus.internal:8081/service/rest/v1/staging/move/docker-staging?repository=docker-dev&tag=${CI_COMMIT_SHORT_SHA}&name=${IMAGE_NAME}"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  environment:
    name: staging

Maven and Gradle Builds with Repository Proxying

Maven in CI

Configure the CI pipeline to use Nexus as a mirror for all Maven dependencies:

# GitHub Actions example
- name: Cache Maven dependencies
  uses: actions/cache@v4
  with:
    path: ~/.m2/repository
    key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      ${{ runner.os }}-maven-

- name: Build with Maven
  run: |
    # The deploy phase already runs package; no need to list both
    mvn clean deploy -s ci-settings.xml \
      -DskipTests=false \
      -Dmaven.test.failure.ignore=false
  env:
    NEXUS_USERNAME: ${{ secrets.NEXUS_USERNAME }}
    NEXUS_PASSWORD: ${{ secrets.NEXUS_PASSWORD }}

The ci-settings.xml referenced above:

<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.internal:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>nexus</id>
      <username>${env.NEXUS_USERNAME}</username>
      <password>${env.NEXUS_PASSWORD}</password>
    </server>
    <server>
      <id>nexus-releases</id>
      <username>${env.NEXUS_USERNAME}</username>
      <password>${env.NEXUS_PASSWORD}</password>
    </server>
    <server>
      <id>nexus-snapshots</id>
      <username>${env.NEXUS_USERNAME}</username>
      <password>${env.NEXUS_PASSWORD}</password>
    </server>
  </servers>
</settings>
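The nexus-releases and nexus-snapshots server ids above only take effect if they match the distributionManagement ids in the project's pom.xml, which could look like this (repository URLs are assumptions consistent with the settings above):

```xml
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>http://nexus.internal:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>http://nexus.internal:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```

Maven selects the repository by version suffix: -SNAPSHOT versions go to the snapshot repository, everything else to releases.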

Gradle in CI

// build.gradle
repositories {
    maven {
        url "http://nexus.internal:8081/repository/maven-public/"
        credentials {
            username = System.getenv("NEXUS_USERNAME")
            password = System.getenv("NEXUS_PASSWORD")
        }
        allowInsecureProtocol = true // Only for internal HTTP
    }
}

publishing {
    repositories {
        maven {
            def releasesUrl = "http://nexus.internal:8081/repository/maven-releases/"
            def snapshotsUrl = "http://nexus.internal:8081/repository/maven-snapshots/"
            url = version.endsWith('SNAPSHOT') ? snapshotsUrl : releasesUrl
            credentials {
                username = System.getenv("NEXUS_USERNAME")
                password = System.getenv("NEXUS_PASSWORD")
            }
            allowInsecureProtocol = true
        }
    }
}

# GitHub Actions for Gradle
- name: Cache Gradle dependencies
  uses: actions/cache@v4
  with:
    path: |
      ~/.gradle/caches
      ~/.gradle/wrapper
    key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle*', '**/gradle-wrapper.properties') }}
    restore-keys: |
      ${{ runner.os }}-gradle-

- name: Build and publish
  run: ./gradlew clean build publish
  env:
    NEXUS_USERNAME: ${{ secrets.NEXUS_USERNAME }}
    NEXUS_PASSWORD: ${{ secrets.NEXUS_PASSWORD }}

npm Publish to Private Registry

GitHub Actions

# .github/workflows/npm-publish.yml
name: Publish npm Package

on:
  push:
    tags: ['v*']

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'

      - name: Configure npm registry
        run: |
          echo "//nexus.internal:8081/repository/npm-hosted/:_authToken=${NPM_TOKEN}" > .npmrc
          echo "registry=http://nexus.internal:8081/repository/npm-group/" >> .npmrc
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Publish
        run: npm publish --registry=http://nexus.internal:8081/repository/npm-hosted/

GitLab CI

# GitLab CI npm publish
publish-npm:
  stage: publish
  image: node:20
  script:
    - echo "//nexus.internal:8081/repository/npm-hosted/:_authToken=${NPM_TOKEN}" > .npmrc
    - echo "registry=http://nexus.internal:8081/repository/npm-group/" >> .npmrc
    - npm ci
    - npm test
    - npm version ${CI_COMMIT_TAG} --no-git-tag-version || true
    - npm publish --registry=http://nexus.internal:8081/repository/npm-hosted/
  rules:
    - if: $CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+/

Python Package Publishing

# GitHub Actions for Python packages
name: Publish Python Package

on:
  push:
    tags: ['v*']

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install build tools
        run: pip install build twine

      - name: Build package
        run: python -m build

      - name: Upload to Nexus
        run: |
          twine upload \
            --repository-url http://nexus.internal:8081/repository/pypi-hosted/ \
            -u "${NEXUS_USERNAME}" -p "${NEXUS_PASSWORD}" \
            dist/*
        env:
          NEXUS_USERNAME: ${{ secrets.NEXUS_USERNAME }}
          NEXUS_PASSWORD: ${{ secrets.NEXUS_PASSWORD }}

Promotion Workflows

Promotion is the practice of moving artifacts between repositories as they pass quality gates. Instead of rebuilding for each environment, you promote the same binary from dev to staging to production, which guarantees that what reaches production is exactly what passed every test.

The Promotion Pattern

Build stage:
  Build artifact --> push to docker-dev

Integration test stage:
  Pull from docker-dev --> run tests
  If pass: promote to docker-staging

UAT + Security scan stage:
  Pull from docker-staging --> run UAT
  Scan with Xray/Trivy
  If pass: promote to docker-release

Production deployment:
  Pull from docker-release --> deploy
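The gating logic itself is platform-agnostic: run a check, and promote only on success. A minimal bash sketch (the function name is illustrative; the real gate command would be your test suite or scanner):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run a named quality gate (any command); promote only when it succeeds.
promote_if() {
  local gate_name="$1"; shift
  if "$@"; then
    echo "gate '${gate_name}' passed: promoting"
  else
    echo "gate '${gate_name}' failed: artifact stays put" >&2
    return 1
  fi
}

# In a pipeline the command would be real, e.g.:
#   promote_if "security-scan" trivy image --exit-code 1 --severity CRITICAL,HIGH "$IMAGE"
promote_if "integration-tests" true
```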

Promotion in Nexus

Nexus OSS has no built-in promotion. Nexus Pro exposes a staging move API; on OSS you must pull, retag, and push artifacts yourself. The script below targets the Pro API:

#!/bin/bash
# promote.sh - Promote a Docker image between Nexus repositories
set -euo pipefail

SOURCE_REPO="${1:?Usage: promote.sh SOURCE_REPO TARGET_REPO IMAGE TAG}"
TARGET_REPO="${2:?}"
IMAGE="${3:?}"
TAG="${4:?}"

NEXUS_URL="http://nexus.internal:8081"

echo "Promoting ${IMAGE}:${TAG} from ${SOURCE_REPO} to ${TARGET_REPO}..."

# Use Nexus Pro staging move API
RESPONSE=$(curl -s -w "%{http_code}" -u "${NEXUS_USER}:${NEXUS_PASS}" -X POST \
  "${NEXUS_URL}/service/rest/v1/staging/move/${TARGET_REPO}?repository=${SOURCE_REPO}&tag=${TAG}&name=${IMAGE}")

HTTP_CODE="${RESPONSE: -3}"
if [ "$HTTP_CODE" -ne 200 ] && [ "$HTTP_CODE" -ne 204 ]; then
    echo "ERROR: Promotion failed with HTTP ${HTTP_CODE}"
    echo "Response: ${RESPONSE}"
    exit 1
fi

echo "Successfully promoted ${IMAGE}:${TAG} to ${TARGET_REPO}"

# Record promotion metadata as a tag (tagging is a Nexus Pro feature;
# check the tagging API reference for your Nexus version)
curl -s -u "${NEXUS_USER}:${NEXUS_PASS}" -X POST \
  "${NEXUS_URL}/service/rest/v1/tags" \
  -H "Content-Type: application/json" \
  -d "{
    \"name\": \"promoted-${TAG}\",
    \"attributes\": {
      \"promoted_from\": \"${SOURCE_REPO}\",
      \"promoted_at\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",
      \"promoted_by\": \"${CI_USER:-manual}\"
    }
  }"
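On Nexus OSS, the equivalent of the Pro move API is a pull, retag, and push between Docker connectors. A sketch, assuming ports 8082 and 8083 are the dev and staging connector ports (each hosted Docker repository in Nexus gets its own HTTP connector):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Map a dev image reference to its staging equivalent by swapping the
# registry host:port.
staging_ref() {
  local ref="$1"
  echo "${ref/nexus.internal:8082/nexus.internal:8083}"
}

# OSS "promotion": re-push the identical image to the staging connector.
promote_oss() {
  local tag="$1"
  local src="nexus.internal:8082/myapp:${tag}"
  local dst; dst="$(staging_ref "$src")"
  docker pull "$src"
  docker tag "$src" "$dst"
  docker push "$dst"
}

staging_ref "nexus.internal:8082/myapp:abc1234"
```

The image bytes are unchanged by the retag, so the build-once guarantee still holds; only the repository it lives in differs.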

Promotion in Artifactory

Artifactory has native promotion support with full audit trail:

# Promote a build from dev to staging
curl -u admin:password -X POST \
  "http://artifactory:8082/artifactory/api/build/promote/myapp/42" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "staging",
    "comment": "Passed integration tests, promoting to staging",
    "sourceRepo": "docker-dev",
    "targetRepo": "docker-staging",
    "copy": true,
    "artifacts": true,
    "dependencies": false,
    "properties": {
      "promoted.by": ["ci-pipeline"],
      "promoted.at": ["2026-03-23T10:30:00Z"],
      "quality.gate": ["integration-tests-passed"]
    }
  }'

# Promote from staging to production
curl -u admin:password -X POST \
  "http://artifactory:8082/artifactory/api/build/promote/myapp/42" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "production",
    "comment": "Passed UAT and security scan, promoting to production",
    "sourceRepo": "docker-staging",
    "targetRepo": "docker-release",
    "copy": true,
    "artifacts": true,
    "properties": {
      "promoted.by": ["release-manager"],
      "quality.gate": ["uat-passed", "security-scan-clear"]
    }
  }'

Promotion in GitHub Actions

# .github/workflows/promote.yml
name: Promote to Production

on:
  workflow_dispatch:
    inputs:
      build_number:
        description: 'Build number to promote'
        required: true
      source_env:
        description: 'Source environment'
        required: true
        type: choice
        options:
          - dev
          - staging
      target_env:
        description: 'Target environment'
        required: true
        type: choice
        options:
          - staging
          - production

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - name: Validate promotion path
        run: |
          SOURCE="${{ github.event.inputs.source_env }}"
          TARGET="${{ github.event.inputs.target_env }}"

          # Enforce promotion order: dev -> staging -> production
          if [ "$SOURCE" = "dev" ] && [ "$TARGET" != "staging" ]; then
            echo "ERROR: Can only promote from dev to staging"
            exit 1
          fi
          if [ "$SOURCE" = "staging" ] && [ "$TARGET" != "production" ]; then
            echo "ERROR: Can only promote from staging to production"
            exit 1
          fi

  promote:
    needs: validate
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.target_env }}  # Requires approval for production

    steps:
      - name: Setup JFrog CLI
        uses: jfrog/setup-jfrog-cli@v4
        env:
          JF_URL: ${{ secrets.JF_URL }}
          JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}

      - name: Promote build
        run: |
          SOURCE_REPO="docker-${{ github.event.inputs.source_env }}"
          TARGET_REPO="docker-${{ github.event.inputs.target_env }}"

          jf rt build-promote myapp ${{ github.event.inputs.build_number }} \
            ${TARGET_REPO} \
            --copy \
            --status="${{ github.event.inputs.target_env }}" \
            --comment="Promoted by ${{ github.actor }} via GitHub Actions" \
            --props="promoted.by=${{ github.actor }};promoted.workflow=${{ github.run_id }}"

      - name: Notify Slack
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "Build myapp#${{ github.event.inputs.build_number }} promoted from ${{ github.event.inputs.source_env }} to ${{ github.event.inputs.target_env }} by ${{ github.actor }}"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

Build Info and Traceability

Build info connects your CI metadata to the artifacts stored in your repository. This allows you to answer questions like "which commit produced this Docker image?" and "what dependencies were included in build 42?"

Capturing Build Info with JFrog CLI

# Full build info workflow
export BUILD_NAME="myapp"
export BUILD_NUMBER="${CI_PIPELINE_IID}"

# Collect environment variables (filtered for safety)
jf rt build-collect-env ${BUILD_NAME} ${BUILD_NUMBER}

# Add git information
jf rt build-add-git ${BUILD_NAME} ${BUILD_NUMBER}

# Add dependency info from a file pattern
jf rt build-add-dependencies ${BUILD_NAME} ${BUILD_NUMBER} "libs/*.jar"

# Upload artifacts with build info association
jf rt upload "target/*.jar" maven-releases/ \
  --build-name=${BUILD_NAME} \
  --build-number=${BUILD_NUMBER}

# Publish build info to Artifactory
jf rt build-publish ${BUILD_NAME} ${BUILD_NUMBER}

# Query build info later
jf rt curl -XGET "/api/build/${BUILD_NAME}/${BUILD_NUMBER}"
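The JSON returned by that last query can answer "which commit produced this build?" directly. A deliberately naive extraction sketch (the document is inlined here so the logic is self-contained; in a pipeline it would come from the `jf rt curl` call above, and a real script would use jq):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Minimal build-info shape, following the Artifactory build-info schema.
BUILD_INFO='{"buildInfo":{"number":"42","vcs":[{"revision":"abc1234","branch":"main"}]}}'

# Pull the VCS revision out of the build-info JSON.
revision="$(echo "$BUILD_INFO" | grep -o '"revision":"[^"]*"' | cut -d'"' -f4)"
echo "build 42 was produced by commit ${revision}"
```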

Build Info in Nexus

Nexus does not have a native build info concept. Implement traceability through:

  • Docker labels in your Dockerfile
  • Component tags via the REST API
  • External metadata stored in a database or CI system

# Dockerfile: add traceability labels using OCI standard labels
ARG BUILD_NUMBER
ARG GIT_COMMIT
ARG BUILD_TIMESTAMP
ARG PIPELINE_URL

LABEL org.opencontainers.image.revision="${GIT_COMMIT}"
LABEL org.opencontainers.image.created="${BUILD_TIMESTAMP}"
LABEL org.opencontainers.image.source="https://github.com/company/myapp"
LABEL org.opencontainers.image.version="${BUILD_NUMBER}"
LABEL com.company.build.number="${BUILD_NUMBER}"
LABEL com.company.build.pipeline="${PIPELINE_URL}"

Query labels later:

# Inspect labels on a pushed image (pull it first so it exists locally)
docker pull nexus.internal:8082/myapp:42
docker inspect nexus.internal:8082/myapp:42 --format '{{json .Config.Labels}}' | jq .

# Or via the registry API: the manifest holds the config blob digest,
# and the config blob is what actually contains the labels
curl -s -u "${NEXUS_USER}:${NEXUS_PASS}" \
  "http://nexus.internal:8082/v2/myapp/manifests/42" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" | jq .config.digest

Caching Dependencies for Faster Builds

Multi-Layer Caching Strategy

The fastest builds use multiple cache layers:

  1. CI platform cache (GitHub Actions cache, GitLab cache) --- Caches the local dependency directory
  2. Repository proxy (Nexus/Artifactory) --- Caches remote dependencies for all pipelines
  3. Docker layer cache --- Caches build layers in the registry

# GitHub Actions with all three cache layers
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Layer 1: CI platform cache for npm
      - name: Cache node_modules
        uses: actions/cache@v4
        with:
          path: |
            ~/.npm
            node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      # Layer 2: npm registry points to Nexus proxy
      - name: Install dependencies via Nexus proxy
        run: |
          npm config set registry http://nexus.internal:8081/repository/npm-group/
          npm ci
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

      # Layer 3: Docker layer caching via registry
      - name: Build with Docker layer cache
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: nexus.internal:8082/myapp:${{ github.sha }}
          cache-from: type=registry,ref=nexus.internal:8082/myapp:buildcache
          cache-to: type=registry,ref=nexus.internal:8082/myapp:buildcache,mode=max

Maven Cache with Nexus Proxy

- name: Cache Maven dependencies
  uses: actions/cache@v4
  with:
    path: ~/.m2/repository
    key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
    restore-keys: |
      ${{ runner.os }}-maven-

- name: Build with cached deps
  run: |
    mvn clean package -s ci-settings.xml \
      -T 1C \
      -Dmaven.artifact.threads=10

The -T 1C flag enables parallel builds (1 thread per CPU core) and -Dmaven.artifact.threads=10 allows parallel dependency downloads.

Security Scanning Integration

Integrate vulnerability scanning into your promotion workflow to gate artifact progression:

# Reusable scan workflow
name: Security Scan Gate

on:
  workflow_call:
    inputs:
      image:
        required: true
        type: string
      fail-on-critical:
        required: false
        type: boolean
        default: true

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - name: Run Trivy scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ inputs.image }}
          format: 'table'
          exit-code: ${{ inputs.fail-on-critical && '1' || '0' }}
          severity: 'CRITICAL,HIGH'
          ignore-unfixed: true

      - name: Run Grype scanner (second opinion)
        uses: anchore/scan-action@v3
        with:
          image: ${{ inputs.image }}
          fail-build: ${{ inputs.fail-on-critical }}
          severity-cutoff: 'high'

      # Checkout is required here: this step reads the Dockerfile from the caller's repo
      - uses: actions/checkout@v4

      - name: Check for base image updates
        run: |
          # Extract base image from Dockerfile
          BASE_IMAGE=$(grep "^FROM" Dockerfile | head -1 | awk '{print $2}')
          echo "Base image: ${BASE_IMAGE}"

          # Report CRITICAL CVEs in the base image (assumes the trivy CLI is on
          # the runner's PATH); --exit-code 0 keeps this step informational only
          trivy image --severity CRITICAL "${BASE_IMAGE}" --exit-code 0
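A caller pipeline can then gate promotion on this reusable workflow. A sketch of the calling side, assuming the reusable workflow above is saved at .github/workflows/security-scan.yml in the same repository (the file path and job names are assumptions):

```yaml
# Hypothetical caller workflow: run the scan gate, then promote on success
name: Scan and Promote
on:
  push:
    branches: [main]

jobs:
  scan-gate:
    uses: ./.github/workflows/security-scan.yml   # assumed path of the reusable workflow
    with:
      image: nexus.internal:8082/myapp:${{ github.sha }}
      fail-on-critical: true

  promote:
    needs: scan-gate   # only reached when both scanners passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/promote.sh docker-dev docker-staging myapp ${{ github.sha }}
```

Because the gate is a separate job, a failed scan blocks promotion without any conditional logic in the promote job itself.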

Jenkins Pipeline with Scanning Gate

stage('Security Scan') {
    parallel {
        stage('Trivy Scan') {
            steps {
                sh """
                    trivy image --exit-code 1 --severity CRITICAL,HIGH \
                        --ignore-unfixed \
                        --format json --output trivy-report.json \
                        nexus.internal:8082/myapp:\${BUILD_NUMBER}
                """
            }
        }
        stage('License Check') {
            steps {
                sh """
                    trivy image --security-checks license \
                        --severity UNKNOWN,HIGH,CRITICAL \
                        --format json --output license-report.json \
                        nexus.internal:8082/myapp:\${BUILD_NUMBER}
                """
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: '*-report.json'
        }
    }
}
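Since the pipeline archives the raw JSON reports, a short helper can turn them into a per-severity summary for build logs or chat notifications. A sketch assuming Trivy's JSON report layout (Results[].Vulnerabilities[].Severity) and that jq is available; the function name is an invention:

```shell
#!/usr/bin/env bash
# summarize_report: count vulnerabilities per severity in a Trivy JSON report.
# Assumes the report follows Trivy's schema: .Results[].Vulnerabilities[].Severity
summarize_report() {
    jq -r '[.Results[]?.Vulnerabilities[]? | .Severity]
           | group_by(.)
           | map({(.[0]): length})
           | add // {}' "$1"
}
```

Feeding trivy-report.json through this yields a compact object such as {"CRITICAL": 2, "HIGH": 5}, which is easy to post to Slack or attach to the build description.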

stage('Promote to Staging') {
    when {
        expression { currentBuild.result == null || currentBuild.result == 'SUCCESS' }
    }
    steps {
        sh "./scripts/promote.sh docker-dev docker-staging myapp \${BUILD_NUMBER}"
    }
}
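The promote.sh invoked above is not shown in the pipeline. A minimal sketch of a pull/retag/push promotion script, assuming each Nexus repository is exposed on its own Docker connector port (8082 for dev and 8083 for staging are assumptions), with a DRY_RUN switch for safe testing:

```shell
#!/usr/bin/env bash
# promote: move a tested image between repositories by pull/retag/push.
# The per-repository connector ports (8082 dev, 8083 staging) are assumptions.
set -euo pipefail

declare -A REPO_PORT=( [docker-dev]=8082 [docker-staging]=8083 )

promote() {
    local src_repo=$1 dst_repo=$2 image=$3 tag=$4
    local src="nexus.internal:${REPO_PORT[$src_repo]}/${image}:${tag}"
    local dst="nexus.internal:${REPO_PORT[$dst_repo]}/${image}:${tag}"

    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "would promote ${src} -> ${dst}"
        return 0
    fi

    docker pull "${src}"
    docker tag "${src}" "${dst}"
    docker push "${dst}"
}
```

Note that pull/retag/push re-uploads the image bytes through the CI runner; a server-side move or copy API avoids that transfer for large images, so keep this pattern as a fallback.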

Cleanup Automation

Automate cleanup to prevent storage from growing unbounded:

#!/bin/bash
# cleanup-artifacts.sh - Automated artifact cleanup
set -euo pipefail

NEXUS_URL="http://nexus.internal:8081"
NEXUS_CREDS="${NEXUS_USER}:${NEXUS_PASS}"

echo "=== Cleanup started at $(date) ==="

# Remove Docker images not downloaded in the last 30 days from the dev repo
# (2592000 s = 30 days). Never-downloaded components have lastDownloaded == null
# and are skipped. Note: the search API is paginated, so a production script
# should follow continuationToken to cover all pages.
echo "Cleaning docker-dev..."
COMPONENTS=$(curl -s -u "${NEXUS_CREDS}" \
  "${NEXUS_URL}/service/rest/v1/search?repository=docker-dev" | \
  jq -r '.items[] | select(.lastDownloaded != null) |
    select((.lastDownloaded | fromdateiso8601) < (now - 2592000)) | .id')

for id in ${COMPONENTS}; do
    echo "Deleting component: ${id}"
    curl -s -u "${NEXUS_CREDS}" -X DELETE \
      "${NEXUS_URL}/service/rest/v1/components/${id}"
done

# Remove Maven snapshots older than 14 days by invoking a pre-uploaded
# Groovy script named "cleanup-snapshots" via the Nexus script API
echo "Cleaning maven-snapshots..."
curl -s -u "${NEXUS_CREDS}" -X POST \
  "${NEXUS_URL}/service/rest/v1/script/cleanup-snapshots/run" \
  -H 'Content-Type: text/plain' \
  -d '{"repositoryName": "maven-snapshots", "olderThanDays": 14}'

# Compact blob stores to reclaim disk space: soft-deleted blobs are only freed
# by compaction. If your Nexus version lacks a direct compact endpoint, create
# the built-in "Compact blob store" task per store and trigger it via
# POST /service/rest/v1/tasks/{taskId}/run instead.
echo "Compacting blob stores..."
for STORE in docker-blobs npm-blobs maven-blobs; do
    curl -s -u "${NEXUS_CREDS}" -X POST \
      "${NEXUS_URL}/service/rest/v1/blobstores/${STORE}/compact"
done

echo "=== Cleanup completed at $(date) ==="

Artifactory Cleanup with AQL

#!/bin/bash
# artifactory-cleanup.sh
set -euo pipefail

ARTIFACTORY_URL="http://artifactory:8082/artifactory"
CREDS="${JF_USER}:${JF_TOKEN}"

# Find and delete Docker images not downloaded in 30 days
echo "Finding stale Docker images..."
STALE_IMAGES=$(curl -s -u "${CREDS}" -X POST \
  "${ARTIFACTORY_URL}/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d 'items.find({
    "repo": "docker-dev",
    "type": "folder",
    "$or": [
      {"stat.downloaded": {"$before": "30d"}},
      {"stat.downloads": {"$eq": 0}}
    ]
  }).include("repo", "path", "name", "stat.downloaded", "stat.downloads")')

echo "${STALE_IMAGES}" | jq -r '.results[] | "\(.repo)/\(.path)/\(.name)"' | while read -r path; do
    echo "Deleting: ${path}"
    curl -s -u "${CREDS}" -X DELETE "${ARTIFACTORY_URL}/${path}"
done

# Run garbage collection
curl -s -u "${CREDS}" -X POST \
  "${ARTIFACTORY_URL}/api/system/storage/gc"

echo "Cleanup complete"

Schedule cleanup in your CI system or as a cron job:

# Run weekly on Sunday at 02:00
0 2 * * 0 /opt/scripts/cleanup-artifacts.sh >> /var/log/artifact-cleanup.log 2>&1

Monitoring CI/CD Integration Health

Track these metrics to ensure your artifact repository integration is healthy:

Metric                         | Healthy Range    | Alert Threshold
-------------------------------|------------------|----------------
Proxy cache hit rate           | Above 80%        | Below 60%
Artifact upload time (P95)     | Under 30 seconds | Over 60 seconds
Dependency download time (P95) | Under 5 seconds  | Over 15 seconds
Failed promotions per day      | 0                | More than 0
Stale artifacts in dev repos   | Under 1000       | Over 5000
Storage growth rate            | Predictable      | Sudden spikes
# Check Nexus repository health
curl -s -u "${NEXUS_USER}:${NEXUS_PASS}" \
  "http://nexus.internal:8081/service/rest/v1/status/check" | jq .

# Check Artifactory health
curl -s -u "${JF_USER}:${JF_TOKEN}" \
  "http://artifactory:8082/artifactory/api/system/ping"
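For alerting, those endpoints can be wrapped in a check that converts the HTTP status into an exit code, suitable for a cron job or monitoring probe. A sketch (the helper name and the example address are inventions):

```shell
#!/usr/bin/env bash
# http_healthy: succeed (exit 0) only when the endpoint answers HTTP 200
# within the timeout. curl reports "000" as the status code when the
# connection itself fails, which correctly counts as unhealthy here.
http_healthy() {
    local url=$1
    local code
    code=$(curl -s -o /dev/null -m 10 -w '%{http_code}' "$url" || true)
    [ "$code" = "200" ]
}

# Example cron-style usage (hypothetical recipient):
# http_healthy "http://nexus.internal:8081/service/rest/v1/status" \
#   || mail -s "Nexus health check failed" oncall@example.com < /dev/null
```

The same wrapper works for the Artifactory ping endpoint, so one script can cover every repository instance in the fleet.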

Summary

Artifact repository integration is not optional for mature CI/CD pipelines. Start with Docker push/pull in your existing pipelines, then add dependency proxying to speed up builds, and finally implement promotion workflows to ensure only tested artifacts reach production. The specific API calls differ between Nexus and Artifactory, but the patterns are universal: build once, store in dev, scan for vulnerabilities, promote to staging, validate with acceptance tests, promote to production, and deploy. Layer caching at every level --- CI platform, repository proxy, and Docker build cache --- to minimize pipeline execution time. Automate cleanup to prevent storage bloat, and monitor your integration health to catch issues before they affect developer productivity. The investment in proper artifact management pays for itself many times over through faster builds, reliable deployments, and the confidence that comes from knowing exactly what binary is running in every environment.
