
Nexus Repository Manager: Setup and Configuration Guide

Sarah Chen · 20 min read

Sonatype Nexus Repository Manager is the backbone of artifact management for many engineering teams. It acts as a central hub where you store your own artifacts, cache external dependencies, and enforce governance across your software supply chain. Whether you are running a small team or managing hundreds of microservices, Nexus provides the infrastructure to keep builds fast, reproducible, and secure. Across industries from financial services to healthcare, organizations rely on Nexus to maintain control over the binaries that make it into production environments.

This guide walks through deploying Nexus in multiple environments, configuring repositories for the most common package formats, tuning performance for high-throughput workloads, setting up the operational policies you need for production use, and hardening the instance for enterprise-grade security.

Why Nexus Repository Manager

Every time a CI pipeline runs npm install, mvn package, or docker pull, it fetches dependencies from the internet. Without a local proxy, you are at the mercy of upstream availability, network latency, and potential supply-chain attacks. The left-pad incident of 2016, where a single unpublished npm package broke thousands of builds worldwide, demonstrated just how fragile direct dependency fetching can be. Nexus solves this by acting as a local cache and a private registry simultaneously.

Key benefits include:

  • Faster builds --- Dependencies are cached locally after the first fetch, reducing download times from minutes to milliseconds for large dependency trees
  • Availability --- Builds succeed even if npmjs.org, Docker Hub, or Maven Central experiences downtime
  • Security --- Centralized scanning and access control for all artifacts, with the ability to block known-vulnerable components
  • Governance --- Cleanup policies, retention rules, and audit logs that satisfy compliance requirements
  • Cost reduction --- Fewer egress bandwidth charges when dependencies are served from an internal cache
  • Reproducibility --- Pinned dependencies cached locally guarantee that the same build input produces the same output every time

Architecture Overview

Before installing, it helps to understand how Nexus fits into your infrastructure. Nexus Repository Manager 3 runs as a Java application backed by an embedded database (OrientDB in older releases, H2 in newer OSS versions) or PostgreSQL (Pro). Artifact binary data is stored in blob stores, which can be file-based or backed by S3-compatible object storage.

A typical production deployment looks like this:

                    Developers / CI Pipelines
                           |
                    [ Load Balancer / Reverse Proxy ]
                           |
                    [ Nexus Repository Manager ]
                     /           |           \
           [ Blob Store ]  [ Blob Store ]  [ Blob Store ]
           (Docker)        (Maven/npm)     (PyPI/Generic)
                           |
                    [ Database ]
                     (OrientDB / H2 / PostgreSQL)

For high availability, Nexus Pro supports clustering with a shared blob store and database. The OSS edition is limited to single-node deployments but can be made resilient through automated recovery and regular backups.

Installation

The fastest way to get Nexus running is with Docker. This is suitable for both evaluation and production use when combined with persistent storage and proper resource allocation.

# Create a persistent volume for Nexus data
docker volume create nexus-data

# Run Nexus Repository Manager 3
docker run -d \
  --name nexus \
  -p 8081:8081 \
  -p 8082:8082 \
  -p 8083:8083 \
  -v nexus-data:/nexus-data \
  --restart unless-stopped \
  -e INSTALL4J_ADD_VM_PARAMS="-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m" \
  sonatype/nexus3:3.68.0

Ports 8082 and 8083 are reserved for Docker registries (hosted and group, respectively). The main UI and API run on 8081. The JVM parameters are critical for production: Nexus needs at least 2GB of heap and 2GB of direct memory for stable operation under load.

Retrieve the initial admin password:

docker exec nexus cat /nexus-data/admin.password

After logging in for the first time, you will be prompted to change the admin password and configure anonymous access. Disable anonymous access for any internet-facing or production instance.
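Nexus can take a minute or more to finish starting, so automation that runs immediately after `docker run` tends to race it. A small polling helper avoids flaky scripts (a sketch; the URL, retry count, and delay are assumptions to adjust for your environment):

```shell
#!/usr/bin/env bash
# Poll the status endpoint until Nexus responds, with a retry cap so a
# broken instance surfaces as a failure instead of hanging forever.
wait_for_nexus() {
  local base_url="$1" retries="${2:-30}" delay="${3:-10}"
  local i
  for ((i = 1; i <= retries; i++)); do
    if curl -sf "${base_url}/service/rest/v1/status" > /dev/null; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# Example: wait_for_nexus "http://localhost:8081" 30 10 && echo "Nexus is up"
```

Call this before any scripted configuration (repository creation, password change) in provisioning pipelines.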

Docker Compose for Production

For a more reproducible production setup, use Docker Compose with proper resource limits:

# docker-compose.yml
version: "3.8"

services:
  nexus:
    image: sonatype/nexus3:3.68.0
    container_name: nexus
    restart: unless-stopped
    ports:
      - "8081:8081"
      - "8082:8082"
      - "8083:8083"
    volumes:
      - nexus-data:/nexus-data
    environment:
      - INSTALL4J_ADD_VM_PARAMS=-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g -Djava.util.prefs.userRoot=/nexus-data/javaprefs
    deploy:
      resources:
        limits:
          cpus: "4.0"
          memory: 12G
        reservations:
          cpus: "2.0"
          memory: 8G
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8081/service/rest/v1/status"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 120s

volumes:
  nexus-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/nexus

Standalone Installation

For bare-metal or VM deployments where Docker is not available:

# Install Java 8 or 11 (required)
sudo apt-get install -y openjdk-11-jre-headless

# Download and extract
wget https://download.sonatype.com/nexus/3/nexus-3.68.0-04-unix.tar.gz
tar -xzf nexus-3.68.0-04-unix.tar.gz
sudo mv nexus-3.68.0-04 /opt/nexus
sudo mv sonatype-work /opt/sonatype-work

# Create a dedicated user
sudo useradd -r -s /bin/false -d /opt/nexus nexus
sudo chown -R nexus:nexus /opt/nexus /opt/sonatype-work

# Configure to run as the nexus user
echo 'run_as_user="nexus"' > /opt/nexus/bin/nexus.rc

# Tune JVM settings
cat > /opt/nexus/bin/nexus.vmoptions <<'OPTS'
-Xms4g
-Xmx4g
-XX:MaxDirectMemorySize=4g
-XX:+UnlockDiagnosticVMOptions
-XX:+LogVMOutput
-XX:LogFile=../sonatype-work/nexus3/log/jvm.log
-XX:-OmitStackTraceInFastThrow
-Djava.net.preferIPv4Stack=true
-Dkaraf.home=.
-Dkaraf.base=.
-Dkaraf.etc=etc/karaf
-Djava.util.logging.config.file=etc/karaf/java.util.logging.properties
-Dkaraf.data=../sonatype-work/nexus3
-Dkaraf.log=../sonatype-work/nexus3/log
-Djava.io.tmpdir=../sonatype-work/nexus3/tmp
OPTS

# Start Nexus
/opt/nexus/bin/nexus start

Create a systemd service for production:

# /etc/systemd/system/nexus.service
[Unit]
Description=Nexus Repository Manager
After=network.target

[Service]
Type=forking
LimitNOFILE=65536
LimitNPROC=65536
ExecStart=/opt/nexus/bin/nexus start
ExecStop=/opt/nexus/bin/nexus stop
User=nexus
Restart=on-abort
TimeoutStartSec=180

[Install]
WantedBy=multi-user.target

Enable and start the service:

sudo systemctl daemon-reload
sudo systemctl enable nexus
sudo systemctl start nexus

Kubernetes Deployment with Helm

For teams running Kubernetes, deploy Nexus using a Helm chart:

helm repo add sonatype https://sonatype.github.io/helm3-charts/
helm repo update

kubectl create namespace nexus

helm install nexus sonatype/nexus-repository-manager \
  --namespace nexus \
  --set persistence.enabled=true \
  --set persistence.storageSize=500Gi \
  --set nexus.resources.requests.cpu=2 \
  --set nexus.resources.requests.memory=8Gi \
  --set nexus.resources.limits.cpu=4 \
  --set nexus.resources.limits.memory=12Gi \
  --set nexus.env[0].name=INSTALL4J_ADD_VM_PARAMS \
  --set nexus.env[0].value="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=4g"

Understanding Repository Types

Nexus organizes repositories into three types. Understanding these is critical to a clean setup.

Type   | Purpose                                                | Example
Hosted | Stores your own artifacts (internal builds, releases)  | Your Docker images, internal npm packages
Proxy  | Caches artifacts from external registries              | Docker Hub, npmjs.org, Maven Central
Group  | Combines hosted and proxy repos behind a single URL    | One URL for all npm packages (internal + public)

The group repository is what you point your build tools at. It transparently searches hosted repos first, then proxies upstream registries. This means your developers and CI pipelines use a single URL regardless of whether a package is internal or external.

A best practice for organizations with release management requirements is to maintain separate hosted repositories for different stages:

Repository       | Purpose                                | Who Pushes
docker-snapshots | Development builds, overwritable tags  | CI pipelines on feature branches
docker-releases  | Immutable release builds               | CI pipelines on tagged commits
docker-proxy     | Cached upstream images                 | Nobody (automatic caching)
docker-group     | Unified pull endpoint                  | Nobody (read-only aggregation)

Configuring a Docker Registry

Hosted Docker Repository

This stores your internally built Docker images.

  1. Navigate to Settings then Repositories then Create Repository then docker (hosted)
  2. Configure:
    • Name: docker-hosted
    • HTTP port: 8082
    • Enable Docker V1 API: unchecked (use V2 only)
    • Allow redeploy: unchecked for release repos, checked for snapshot repos
    • Blob store: select or create a dedicated blob store
  3. Save

You can also create repositories via the Nexus REST API, which is useful for automation:

# Create a hosted Docker repository via REST API
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/repositories/docker/hosted' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "docker-hosted",
    "online": true,
    "storage": {
      "blobStoreName": "docker-blobs",
      "strictContentTypeValidation": true,
      "writePolicy": "ALLOW_ONCE"
    },
    "docker": {
      "v1Enabled": false,
      "forceBasicAuth": true,
      "httpPort": 8082
    }
  }'

Proxy Docker Repository

This caches images pulled from Docker Hub:

# Create a Docker proxy repository
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/repositories/docker/proxy' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "docker-proxy",
    "online": true,
    "storage": {
      "blobStoreName": "docker-blobs",
      "strictContentTypeValidation": true
    },
    "proxy": {
      "remoteUrl": "https://registry-1.docker.io",
      "contentMaxAge": 1440,
      "metadataMaxAge": 1440
    },
    "negativeCache": {
      "enabled": true,
      "timeToLive": 1440
    },
    "httpClient": {
      "blocked": false,
      "autoBlock": true
    },
    "docker": {
      "v1Enabled": false,
      "forceBasicAuth": true
    },
    "dockerProxy": {
      "indexType": "HUB",
      "indexUrl": "https://index.docker.io/"
    }
  }'

Docker Group Repository

Combine both behind a single endpoint:

# Create a Docker group repository
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/repositories/docker/group' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "docker-group",
    "online": true,
    "storage": {
      "blobStoreName": "docker-blobs",
      "strictContentTypeValidation": true
    },
    "group": {
      "memberNames": ["docker-hosted", "docker-proxy"]
    },
    "docker": {
      "v1Enabled": false,
      "forceBasicAuth": true,
      "httpPort": 8083
    }
  }'

Client Configuration

Configure Docker to use your Nexus registry:

{
  "insecure-registries": ["nexus.internal:8082", "nexus.internal:8083"],
  "registry-mirrors": ["http://nexus.internal:8083"]
}

Save this to /etc/docker/daemon.json and restart Docker. For production, always use TLS with a reverse proxy instead of insecure registries.

# Push to hosted registry
docker tag myapp:1.0 nexus.internal:8082/myapp:1.0
docker login nexus.internal:8082
docker push nexus.internal:8082/myapp:1.0

# Pull through group registry (searches hosted first, then Docker Hub)
docker pull nexus.internal:8083/nginx:latest

Reverse Proxy with TLS

Production deployments should always sit behind a reverse proxy with TLS termination. Here is an Nginx configuration that handles both the UI and Docker registries:

# /etc/nginx/conf.d/nexus.conf
upstream nexus {
    server 127.0.0.1:8081;
}

upstream docker-hosted {
    server 127.0.0.1:8082;
}

upstream docker-group {
    server 127.0.0.1:8083;
}

# Main Nexus UI and API
server {
    listen 443 ssl http2;
    server_name nexus.company.com;

    ssl_certificate     /etc/ssl/certs/nexus.company.com.crt;
    ssl_certificate_key /etc/ssl/private/nexus.company.com.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    client_max_body_size 10G;
    proxy_read_timeout 600;
    proxy_send_timeout 600;

    location / {
        proxy_pass http://nexus;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Docker hosted registry
server {
    listen 443 ssl http2;
    server_name docker.company.com;

    ssl_certificate     /etc/ssl/certs/docker.company.com.crt;
    ssl_certificate_key /etc/ssl/private/docker.company.com.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    client_max_body_size 10G;

    location / {
        proxy_pass http://docker-hosted;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Docker group registry (pull-through cache)
server {
    listen 443 ssl http2;
    server_name docker-mirror.company.com;

    ssl_certificate     /etc/ssl/certs/docker-mirror.company.com.crt;
    ssl_certificate_key /etc/ssl/private/docker-mirror.company.com.key;
    ssl_protocols       TLSv1.2 TLSv1.3;

    client_max_body_size 10G;

    location / {
        proxy_pass http://docker-group;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

After configuring the reverse proxy, update Nexus's base URL (Settings then System then Capabilities then Base URL) to match your public HTTPS URL.

npm Repository Setup

Create the Repositories

Create three npm repositories following the hosted/proxy/group pattern:

  • npm-hosted --- for your private packages
  • npm-proxy --- proxy to https://registry.npmjs.org
  • npm-group --- combines both

# Create npm proxy
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/repositories/npm/proxy' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "npm-proxy",
    "online": true,
    "storage": {
      "blobStoreName": "npm-blobs",
      "strictContentTypeValidation": true
    },
    "proxy": {
      "remoteUrl": "https://registry.npmjs.org",
      "contentMaxAge": 1440,
      "metadataMaxAge": 1440
    },
    "negativeCache": {
      "enabled": true,
      "timeToLive": 1440
    },
    "httpClient": {
      "blocked": false,
      "autoBlock": true
    }
  }'

# Create npm hosted
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/repositories/npm/hosted' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "npm-hosted",
    "online": true,
    "storage": {
      "blobStoreName": "npm-blobs",
      "strictContentTypeValidation": true,
      "writePolicy": "ALLOW_ONCE"
    }
  }'

# Create npm group
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/repositories/npm/group' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "npm-group",
    "online": true,
    "storage": {
      "blobStoreName": "npm-blobs",
      "strictContentTypeValidation": true
    },
    "group": {
      "memberNames": ["npm-hosted", "npm-proxy"]
    }
  }'

Client Configuration

# Set the registry globally
npm config set registry http://nexus.internal:8081/repository/npm-group/

# Authenticate for publishing
npm login --registry=http://nexus.internal:8081/repository/npm-hosted/

# Or use .npmrc per project
cat > .npmrc <<'EOF'
registry=http://nexus.internal:8081/repository/npm-group/
//nexus.internal:8081/repository/npm-hosted/:_authToken=YOUR_TOKEN
always-auth=true
EOF
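In CI, avoid committing a token at all by letting npm expand it from the environment; npm substitutes `${VAR}` references in .npmrc at read time. A small sketch (NPM_TOKEN is just a conventional secret name, not something Nexus mandates):

```shell
# Generate a CI-friendly .npmrc that defers the publish token to an
# environment variable; npm expands ${NPM_TOKEN} when it reads the file.
cat > .npmrc <<'EOF'
registry=http://nexus.internal:8081/repository/npm-group/
//nexus.internal:8081/repository/npm-hosted/:_authToken=${NPM_TOKEN}
always-auth=true
EOF
```

Export NPM_TOKEN from your CI secret store before running npm publish.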

Publish internal packages:

npm publish --registry=http://nexus.internal:8081/repository/npm-hosted/

For scoped packages, configure the scope to point at your hosted repo:

npm config set @company:registry http://nexus.internal:8081/repository/npm-hosted/

Maven Repository Setup

Maven is Nexus's native strength. The default installation includes maven-central (proxy), maven-releases (hosted), maven-snapshots (hosted), and the maven-public group that combines them.

Configure your ~/.m2/settings.xml:

<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.internal:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>nexus-releases</id>
      <username>deployer</username>
      <password>deploy-password</password>
    </server>
    <server>
      <id>nexus-snapshots</id>
      <username>deployer</username>
      <password>deploy-password</password>
    </server>
  </servers>
</settings>

In your pom.xml, configure deployment:

<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>http://nexus.internal:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>http://nexus.internal:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>

For Gradle projects, configure the repository in build.gradle.kts:

repositories {
    maven {
        url = uri("http://nexus.internal:8081/repository/maven-public/")
        credentials {
            username = System.getenv("NEXUS_USERNAME") ?: "deployer"
            password = System.getenv("NEXUS_PASSWORD") ?: "deploy-password"
        }
    }
}

publishing {
    repositories {
        maven {
            val releasesUrl = uri("http://nexus.internal:8081/repository/maven-releases/")
            val snapshotsUrl = uri("http://nexus.internal:8081/repository/maven-snapshots/")
            url = if (version.toString().endsWith("SNAPSHOT")) snapshotsUrl else releasesUrl
            credentials {
                username = System.getenv("NEXUS_USERNAME") ?: "deployer"
                password = System.getenv("NEXUS_PASSWORD") ?: "deploy-password"
            }
        }
    }
}

PyPI Repository Setup

For Python teams, create a PyPI proxy and hosted repository:

  • pypi-proxy --- proxy to https://pypi.org
  • pypi-hosted --- for internal Python packages
  • pypi-group --- combines both

Configure pip:

# ~/.pip/pip.conf (Linux/macOS) or %APPDATA%\pip\pip.ini (Windows)
[global]
index-url = http://nexus.internal:8081/repository/pypi-group/simple/
trusted-host = nexus.internal

Publish with twine:

twine upload \
  --repository-url http://nexus.internal:8081/repository/pypi-hosted/ \
  -u deployer -p deploy-password \
  dist/*

For Poetry users:

poetry config repositories.nexus http://nexus.internal:8081/repository/pypi-hosted/
poetry config http-basic.nexus deployer deploy-password
poetry publish --repository nexus

Blob Stores

Blob stores define where Nexus physically stores artifact data. By default, everything goes to a single file-based blob store. For production, separate blob stores per format improve management, allow different storage backends, and make capacity planning clearer.

Blob Store   | Type | Path/Bucket                | Purpose
docker-blobs | File | /nexus-data/blobs/docker   | All Docker images
npm-blobs    | File | /nexus-data/blobs/npm      | npm packages
maven-blobs  | File | /nexus-data/blobs/maven    | Maven artifacts
default      | File | /nexus-data/blobs/default  | Everything else

Create blob stores via the REST API:

curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/blobstores/file' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "docker-blobs",
    "path": "/nexus-data/blobs/docker",
    "softQuota": {
      "type": "spaceRemainingQuota",
      "limit": 107374182400
    }
  }'
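The softQuota limit is expressed in bytes; the 107374182400 above is 100 GiB. A tiny helper (a sketch) keeps the arithmetic out of quota-provisioning scripts:

```shell
# Convert GiB to the byte values the blob store API expects.
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}

gib_to_bytes 100   # 107374182400, the limit used above
```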

Nexus Pro supports S3 blob stores for offloading large artifact data to object storage:

curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/blobstores/s3' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "docker-s3",
    "bucketConfiguration": {
      "bucket": {
        "region": "us-east-1",
        "name": "nexus-docker-artifacts",
        "prefix": "docker",
        "expiration": 3
      },
      "encryption": {
        "encryptionType": "s3ManagedEncryption"
      }
    },
    "softQuota": {
      "type": "spaceUsedQuota",
      "limit": 1099511627776
    }
  }'

Cleanup Policies

Without cleanup policies, your Nexus storage will grow indefinitely. Define policies under Settings then Repository then Cleanup Policies.

Common policies:

Policy Name            | Criteria                                           | Applied To
Remove old snapshots   | Last downloaded more than 30 days ago              | maven-snapshots
Remove old Docker tags | Last downloaded more than 14 days ago, not latest  | docker-hosted
Remove stale npm       | Last downloaded more than 60 days ago              | npm-hosted
Remove pre-release     | Regex match on pre-release versions                | All hosted repos

Create cleanup policies via REST API:

# Create a cleanup policy for old Docker images
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/lifecycle/cleanup-policy' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "cleanup-old-docker",
    "format": "docker",
    "mode": "delete",
    "criteria": {
      "lastDownloaded": 14,
      "lastBlobUpdated": 30
    }
  }'

After creating policies, attach them to repositories and schedule the Admin - Cleanup repositories using their associated policies task to run nightly, typically during low-traffic hours.

Compact blob stores afterward by scheduling Admin - Compact blob store to reclaim disk space. Without compaction, deleted artifacts continue to consume disk space because Nexus uses soft deletes.
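Scheduled tasks can also be triggered on demand through the tasks API, which is useful before an unplanned maintenance window. The helper below only builds the run URL (a sketch; task IDs come from GET /service/rest/v1/tasks, and the curl is commented out because it needs a live instance):

```shell
# Build the on-demand run URL for a scheduled task.
task_run_url() {
  echo "$1/service/rest/v1/tasks/$2/run"
}

# Trigger a compaction run immediately (requires a real task id):
# curl -u admin:admin123 -X POST "$(task_run_url http://nexus.internal:8081 <taskId>)"

task_run_url "http://nexus.internal:8081" "abc123"
```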

A recommended task schedule:

Task                     | Schedule         | Purpose
Cleanup repositories     | Daily at 02:00   | Mark artifacts for deletion
Compact blob store       | Daily at 04:00   | Reclaim disk space
Rebuild repository index | Weekly on Sunday | Fix search index inconsistencies
Database backup          | Daily at 01:00   | Export database for recovery

User Management and Roles

Nexus ships with several built-in roles. For production, create custom roles following least-privilege principles.

Role            | Permissions                  | Assigned To
ci-deployer     | Push to hosted repos only    | CI service accounts
developer-read  | Pull from group repos        | Developer machines
release-manager | Push/delete in release repos | Release engineering team
admin-full      | Full administrative access   | Platform team

Create roles via the REST API for repeatable configuration:

# Create a CI deployer role
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/security/roles' \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "ci-deployer",
    "name": "CI Deployer",
    "description": "Push access to hosted repositories",
    "privileges": [
      "nx-repository-view-docker-docker-hosted-add",
      "nx-repository-view-docker-docker-hosted-edit",
      "nx-repository-view-docker-docker-hosted-read",
      "nx-repository-view-npm-npm-hosted-add",
      "nx-repository-view-npm-npm-hosted-edit",
      "nx-repository-view-npm-npm-hosted-read",
      "nx-repository-view-maven2-maven-releases-add",
      "nx-repository-view-maven2-maven-releases-edit",
      "nx-repository-view-maven2-maven-releases-read"
    ]
  }'

# Create a CI service account
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/security/users' \
  -H 'Content-Type: application/json' \
  -d '{
    "userId": "ci-pipeline",
    "firstName": "CI",
    "lastName": "Pipeline",
    "emailAddress": "ci@company.com",
    "password": "secure-generated-password",
    "status": "active",
    "roles": ["ci-deployer"]
  }'

LDAP Integration

Nexus supports LDAP for centralized authentication:

  1. Navigate to Settings then Security then LDAP
  2. Add your LDAP server connection (host, port, search base)
  3. Configure user and group mapping
  4. Map LDAP groups to Nexus roles

LDAP Group          Nexus Role
devops-team         nx-admin
developers          developer-read
ci-systems          ci-deployer
release-eng         release-manager

For Active Directory environments:

Connection:
  Protocol: ldaps
  Host: ldap.company.com
  Port: 636
  Search base: DC=company,DC=com

User mapping:
  User subtree: OU=Users,DC=company,DC=com
  Object class: user
  User ID attribute: sAMAccountName
  Name attribute: cn
  Email attribute: mail

Group mapping:
  Group type: Dynamic groups
  Group subtree: OU=Groups,DC=company,DC=com
  Group object class: group
  Group member attribute: member

Performance Tuning

For high-throughput environments serving hundreds of CI pipelines:

JVM Tuning

-Xms8g
-Xmx8g
-XX:MaxDirectMemorySize=8g
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:InitiatingHeapOccupancyPercent=45
-XX:+ParallelRefProcEnabled

The rule of thumb is to allocate at least one-third of available system memory to the JVM heap, one-third to direct memory, and leave one-third for the operating system and file system cache.
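The thirds rule is easy to script when provisioning hosts of different sizes. A sketch (round down, and sanity-check the result against Sonatype's sizing guidance for very large hosts):

```shell
# Split total system RAM (in MB) into thirds: JVM heap, direct memory,
# and the remainder left for the OS and file system cache.
suggest_jvm_params() {
  local total_mb="$1"
  local third=$(( total_mb / 3 ))
  echo "-Xms${third}m -Xmx${third}m -XX:MaxDirectMemorySize=${third}m"
}

suggest_jvm_params 12288   # for a 12 GB host
```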

Connection Pool Tuning

Edit nexus.properties to increase connection limits:

# /nexus-data/etc/nexus.properties
nexus.scripts.allowCreation=true
jetty.request.header.size=65536
nexus.datastore.nexus.maximumPoolSize=100

File System Optimization

# Increase file descriptor limits
echo "nexus soft nofile 65536" >> /etc/security/limits.conf
echo "nexus hard nofile 65536" >> /etc/security/limits.conf

# Use XFS for blob store volumes (better for large files)
mkfs.xfs /dev/sdb1
mount -o noatime,nodiratime /dev/sdb1 /nexus-data/blobs

# Add to /etc/fstab for persistence
echo "/dev/sdb1 /nexus-data/blobs xfs noatime,nodiratime 0 0" >> /etc/fstab

Storage Considerations

Plan storage capacity based on your artifact volume:

Format          | Typical Size  | Growth Rate | Notes
Docker images   | 500GB - 5TB   | High        | Multi-layer images are large
Maven artifacts | 50GB - 500GB  | Medium      | Snapshots accumulate quickly
npm packages    | 20GB - 200GB  | Medium      | Transitive dependencies add up fast
PyPI packages   | 10GB - 100GB  | Low         | Wheels are compact

Key practices:

  • Monitor blob store usage via the System then Status page or the REST API
  • Use separate mount points or volumes for each blob store
  • Set up alerts when disk usage exceeds 80%
  • Consider S3 blob stores for Docker images (the largest consumer)
  • Enable blob store soft quotas to get warnings before hitting hard limits

# Monitor storage via REST API
curl -u admin:admin123 \
  'http://nexus.internal:8081/service/rest/v1/status/check' | jq .

# Check blob store sizes
curl -u admin:admin123 \
  'http://nexus.internal:8081/service/rest/v1/blobstores' | jq '.[] | {name, totalSizeInBytes, availableSpaceInBytes}'
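Those two fields feed naturally into the 80% alert mentioned above. A minimal cron-friendly check, assuming totalSizeInBytes is the space the store has used and availableSpaceInBytes is the free space left on the underlying volume (verify against your Nexus version's response):

```shell
# Compute usage as used / (used + available), in integer percent.
usage_pct() {
  echo $(( $1 * 100 / ($1 + $2) ))
}

# Print an alert line when usage crosses the threshold.
check_blobstore() {  # args: used_bytes available_bytes threshold_pct
  local pct
  pct=$(usage_pct "$1" "$2")
  if [ "$pct" -ge "$3" ]; then
    echo "ALERT: blob store ${pct}% full"
  fi
}

check_blobstore 900 100 80   # prints: ALERT: blob store 90% full
```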

Backup Strategy

Nexus data lives in the sonatype-work directory. A proper backup includes:

  1. Database backup --- Schedule the built-in Admin - Export databases for backup task
  2. Blob store backup --- Use filesystem snapshots or rsync for file-based stores
  3. Configuration export --- Script the REST API to export repository and security config

#!/bin/bash
# nexus-backup.sh - Complete Nexus backup script

BACKUP_DIR="/backup/nexus/$(date +%Y%m%d)"
NEXUS_URL="http://nexus.internal:8081"
NEXUS_CREDS="admin:backup-password"

mkdir -p "${BACKUP_DIR}"

# Export repository configuration
curl -s -u "${NEXUS_CREDS}" \
  "${NEXUS_URL}/service/rest/v1/repositories" \
  | jq '.' > "${BACKUP_DIR}/repositories.json"

# Export security configuration
curl -s -u "${NEXUS_CREDS}" \
  "${NEXUS_URL}/service/rest/v1/security/roles" \
  | jq '.' > "${BACKUP_DIR}/roles.json"

curl -s -u "${NEXUS_CREDS}" \
  "${NEXUS_URL}/service/rest/v1/security/users" \
  | jq '.' > "${BACKUP_DIR}/users.json"

# Backup blob stores (use rsync for incremental backups)
rsync -av --delete /opt/sonatype-work/nexus3/blobs/ "${BACKUP_DIR}/blobs/"

# Backup the database export (created by scheduled task)
rsync -av /opt/sonatype-work/nexus3/backup/ "${BACKUP_DIR}/db/"

# Compress and upload to offsite storage
tar -czf "${BACKUP_DIR}.tar.gz" -C /backup/nexus "$(date +%Y%m%d)"
aws s3 cp "${BACKUP_DIR}.tar.gz" s3://company-backups/nexus/

# Retain only last 7 local backups
find /backup/nexus -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;
find /backup/nexus -maxdepth 1 -name "*.tar.gz" -mtime +7 -delete

echo "Backup completed: ${BACKUP_DIR}"

Schedule database exports to run before your blob store backup to ensure consistency. Add this script to cron:

# Run backup daily at 03:00
0 3 * * * /opt/scripts/nexus-backup.sh >> /var/log/nexus-backup.log 2>&1

Monitoring and Health Checks

Monitor Nexus health using the built-in status endpoints:

# Basic health check
curl -s http://nexus.internal:8081/service/rest/v1/status

# Detailed system status (requires authentication)
curl -s -u admin:admin123 http://nexus.internal:8081/service/rest/v1/status/check

# Writability check (useful for load balancer health checks)
curl -s http://nexus.internal:8081/service/rest/v1/status/writable

For Prometheus monitoring, Nexus exposes metrics at /service/metrics/prometheus (scrape as an authenticated user with the metrics privilege). Create alerting rules for critical conditions:

# prometheus-rules.yml
groups:
  - name: nexus
    rules:
      - alert: NexusDiskUsageHigh
        expr: nexus_blobstore_usage_bytes / nexus_blobstore_total_bytes > 0.85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Nexus blob store usage above 85%"

      - alert: NexusDown
        expr: up{job="nexus"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Nexus Repository Manager is unreachable"

Security Hardening

For production deployments, apply these security measures:

  1. Disable anonymous access unless you have a specific use case for public reads
  2. Use HTTPS everywhere with TLS 1.2+ via a reverse proxy
  3. Rotate credentials regularly for service accounts
  4. Enable audit logging to track who accessed or modified what
  5. Restrict admin access to a small group using LDAP group mapping
  6. Network segmentation --- expose only the reverse proxy, not the Nexus ports directly
  7. Content selectors --- restrict access to specific paths within repositories

# Create a content selector for production artifacts only
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/security/content-selectors' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "production-images",
    "description": "Only production Docker images",
    "expression": "format == \"docker\" and path =~ \"/v2/production/.*\""
  }'
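A selector only takes effect once a privilege references it. The payload below sketches a repository-content-selector privilege (field names follow that privilege type's REST API; the curl is commented out since it needs a live instance):

```shell
# Privilege payload binding the selector to read access on docker-group.
payload='{
  "name": "production-images-read",
  "description": "Read-only access to production images",
  "actions": ["READ"],
  "format": "docker",
  "repository": "docker-group",
  "contentSelector": "production-images"
}'

# curl -u admin:admin123 -X POST \
#   'http://nexus.internal:8081/service/rest/v1/security/privileges/repository-content-selector' \
#   -H 'Content-Type: application/json' -d "$payload"

echo "$payload"
```

Attach the resulting privilege to a role, then assign that role to the users or LDAP groups that should see production images.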

Troubleshooting Common Issues

Problem                  | Cause                         | Solution
Slow Docker pulls        | Blob store on slow disk       | Move blob store to SSD or S3
Out of memory errors     | JVM heap too small            | Increase Xmx and MaxDirectMemorySize
"Repository is offline"  | Upstream registry unreachable | Check proxy settings and auto-block status
Disk space exhausted     | No cleanup policies           | Create and schedule cleanup tasks
Authentication failures  | LDAP connection timeout       | Verify LDAP server connectivity and timeout settings
Corrupt blob store       | Unclean shutdown              | Run the "Repair - Reconcile component database from blob store" task

# Check Nexus logs for errors
docker logs nexus --tail 200

# Check task execution history
curl -s -u admin:admin123 \
  'http://nexus.internal:8081/service/rest/v1/tasks' | jq '.items[] | {name, lastRunResult}'

# Force re-index a repository
curl -u admin:admin123 -X POST \
  'http://nexus.internal:8081/service/rest/v1/repositories/docker-hosted/rebuild-index'

Summary

Nexus Repository Manager is a foundational piece of DevOps infrastructure. A well-configured Nexus instance with proper repository types, cleanup policies, access control, and monitoring will make your builds faster, more reliable, and more secure. Start with Docker and npm repositories since those see the highest pull volume, then expand to cover Maven, PyPI, and other formats as your team adopts them. Invest early in automation through the REST API to ensure your Nexus configuration is repeatable and version-controlled. The time spent on proper setup, performance tuning, and backup procedures pays dividends when your team scales from a handful of services to hundreds.
