JFrog Artifactory: Repository Setup and Access Control
JFrog Artifactory is an enterprise-grade universal artifact repository manager that supports virtually every mainstream package format. It serves as the single source of truth for all your binaries, container images, and packages throughout the software development lifecycle. Unlike simpler registries, Artifactory offers deep integration with JFrog's security scanning (Xray), distribution (Distribution), and pipeline (Pipelines) products, making it a popular choice for organizations that need end-to-end software supply chain management. With over 30 supported package types and features like build info tracking, artifact promotion, and federated repositories, Artifactory handles use cases from startup-scale development to globally distributed enterprise deployments.
This guide covers installation across multiple environments, repository configuration for the most common formats, the access control model you need to secure your artifacts, performance optimization, security hardening, and operational best practices for running Artifactory in production.
Artifactory Editions
Before you deploy, understand what you are getting:
| Edition | License | Key Features | Package Types |
|---|---|---|---|
| OSS | AGPL 3.0 | Maven, Gradle, Ivy, SBT, generic repos only | 6 |
| JCR (JFrog Container Registry) | Free | Docker, Helm, generic repos | 3 |
| Pro | Commercial | All package types, replication, Xray integration | 30+ |
| Enterprise | Commercial | Multi-site, high availability, federation | 30+ |
| Enterprise+ | Commercial | Full JFrog Platform, Pipelines, Distribution | 30+ |
For most DevOps teams, Pro is the practical minimum. The OSS edition is too limited for real-world use since it lacks Docker, npm, and PyPI support. JCR is a solid free option if you only need container images and Helm charts. Enterprise editions add high availability and multi-site federation, which become necessary when you have teams across multiple geographic regions.
Architecture Overview
Understanding Artifactory's architecture helps with capacity planning and troubleshooting.
        Developers / CI Pipelines
                   |
           [ Load Balancer ]
            /             \
[ Artifactory Node 1 ]  [ Artifactory Node 2 ]
            \             /
          [ Shared Storage ]
           (NFS / S3 / GCS)
                   |
             [ Database ]
    (PostgreSQL / MySQL / Oracle / MSSQL)
Key components:
- JFrog Router (port 8082) --- Handles all incoming requests and routes them to the appropriate microservice
- Artifactory Service (port 8081) --- Core artifact storage and retrieval
- Access Service --- Authentication, authorization, and token management
- Metadata Service --- Handles artifact properties and build info
- Event Service --- Webhook and event processing
For production, JFrog recommends PostgreSQL as the database backend and S3 or GCS for the filestore. The database stores metadata (artifact coordinates, properties, permissions) while the filestore holds the actual binary data.
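The split matters because Artifactory's filestore is checksum-addressed: a binary is stored once under its hash, and every artifact path that shares the same content points at the same blob. A rough shell sketch of the idea (the `demo` and `filestore` directories here are illustrative, not Artifactory's real on-disk layout):

```shell
# Two "artifacts" with identical content...
mkdir -p demo
printf 'same bytes' > demo/app-1.0.jar
printf 'same bytes' > demo/app-1.0-copy.jar

# ...hash to the same SHA-256, so a checksum-addressed store keeps one blob
h1=$(sha256sum demo/app-1.0.jar | cut -d' ' -f1)
h2=$(sha256sum demo/app-1.0-copy.jar | cut -d' ' -f1)

mkdir -p filestore
cp demo/app-1.0.jar "filestore/$h1"        # first copy creates the blob
cp demo/app-1.0-copy.jar "filestore/$h2"   # same hash, same path: no new blob

ls filestore | wc -l   # one blob, despite two artifacts
```

This is why re-tagging a Docker image or re-deploying an identical artifact costs almost no additional storage.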
Installation
Docker Deployment
The quickest path to a running Artifactory instance:
# Create data directory owned by the container user (the Artifactory image runs as UID/GID 1030)
mkdir -p /opt/jfrog/artifactory/var
chown -R 1030:1030 /opt/jfrog/artifactory/var
# Run Artifactory Pro (use jfrog/artifactory-oss for OSS edition).
# With no database settings, Artifactory starts with its embedded Derby database --
# fine for evaluation; use an external PostgreSQL (see the Compose example below) for anything real.
docker run -d \
--name artifactory \
-p 8081:8081 \
-p 8082:8082 \
-v /opt/jfrog/artifactory/var:/var/opt/jfrog/artifactory \
--restart unless-stopped \
releases-docker.jfrog.io/jfrog/artifactory-pro:7.77.0
Port 8081 serves the Artifactory UI and the legacy API. Port 8082 is the JFrog Platform router that handles all repository endpoints under a unified URL scheme.
Access the UI at http://your-host:8082/ui/ and log in with admin / password. You will be prompted to change the password and configure a base URL.
Docker Compose for Production
A more complete setup with PostgreSQL:
# docker-compose.yml
version: "3.8"

services:
  postgresql:
    image: postgres:15
    container_name: artifactory-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: artifactory
      POSTGRES_USER: artifactory
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U artifactory"]
      interval: 10s
      timeout: 5s
      retries: 5

  artifactory:
    image: releases-docker.jfrog.io/jfrog/artifactory-pro:7.77.0
    container_name: artifactory
    restart: unless-stopped
    depends_on:
      postgresql:
        condition: service_healthy
    ports:
      - "8081:8081"
      - "8082:8082"
    volumes:
      - artifactory-data:/var/opt/jfrog/artifactory
    environment:
      - JF_SHARED_DATABASE_TYPE=postgresql
      - JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
      - "JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql:5432/artifactory"
      - JF_SHARED_DATABASE_USERNAME=artifactory
      - "JF_SHARED_DATABASE_PASSWORD=${POSTGRES_PASSWORD}"
      - "JF_SHARED_NODE_IP=${HOST_IP}"
      - "JF_SHARED_EXTRAJAVAOPTS=-Xms4g -Xmx8g"
    deploy:
      resources:
        limits:
          cpus: "4.0"
          memory: 12G
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000

volumes:
  postgres-data:
  artifactory-data:
Helm Chart Deployment
For Kubernetes environments, JFrog provides an official Helm chart with extensive customization options:
# Add the JFrog Helm repository
helm repo add jfrog https://charts.jfrog.io
helm repo update
# Create namespace
kubectl create namespace artifactory
# Install with custom values
helm install artifactory jfrog/artifactory \
--namespace artifactory \
--values artifactory-values.yaml \
--wait --timeout 10m
Production values.yaml:
# artifactory-values.yaml
artifactory:
  persistence:
    enabled: true
    size: 500Gi
    storageClassName: gp3
  resources:
    requests:
      cpu: "4"
      memory: "8Gi"
    limits:
      cpu: "8"
      memory: "16Gi"
  javaOpts:
    xms: "6g"
    xmx: "10g"
  # Configure S3 filestore for scalability
  configMapOverrides:
    binarystore: |
      <config version="2">
        <chain template="s3-storage-v3"/>
        <provider id="s3-storage-v3" type="s3-storage-v3">
          <endpoint>s3.amazonaws.com</endpoint>
          <bucketName>company-artifactory-filestore</bucketName>
          <path>artifactory/filestore</path>
          <region>us-east-1</region>
          <!-- Prefer IAM instance credentials; never hardcode access keys in a values file -->
          <useInstanceCredentials>true</useInstanceCredentials>
        </provider>
      </config>
  # Enable access logging
  accessLogging:
    enabled: true
  # Custom system YAML overrides
  systemYaml: |
    shared:
      logging:
        consoleLog:
          enabled: true
      database:
        type: postgresql
        driver: org.postgresql.Driver
        url: "jdbc:postgresql://artifactory-postgresql:5432/artifactory"

postgresql:
  enabled: true
  persistence:
    size: 100Gi
    storageClassName: gp3
  resources:
    requests:
      cpu: "1"
      memory: "2Gi"
    limits:
      cpu: "2"
      memory: "4Gi"

nginx:
  enabled: true
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
  tlsSecretName: artifactory-tls
  resources:
    requests:
      cpu: "250m"
      memory: "256Mi"
    limits:
      cpu: "1"
      memory: "512Mi"
Standalone Installation
# Download and extract
wget https://releases.jfrog.io/artifactory/bintray-artifactory/org/artifactory/pro/jfrog-artifactory-pro/7.77.0/jfrog-artifactory-pro-7.77.0-linux.tar.gz
tar -xzf jfrog-artifactory-pro-7.77.0-linux.tar.gz
sudo mv artifactory-pro-7.77.0 /opt/jfrog/artifactory
# Create dedicated user
sudo useradd -r -s /bin/false -d /opt/jfrog artifactory
sudo chown -R artifactory:artifactory /opt/jfrog
# Configure system YAML (unquoted heredoc so the shell expands the secret variables at write time)
sudo tee /opt/jfrog/artifactory/var/etc/system.yaml > /dev/null <<YAML
shared:
  javaHome: /opt/jfrog/artifactory/app/third-party/java
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: "jdbc:postgresql://localhost:5432/artifactory"
    username: artifactory
    password: "${ARTIFACTORY_DB_PASSWORD}"
  node:
    id: "art-node-1"
  security:
    joinKey: "${ARTIFACTORY_JOIN_KEY}"
YAML
# Create systemd service (writing to /etc requires root)
sudo tee /etc/systemd/system/artifactory.service > /dev/null <<'SERVICE'
[Unit]
Description=JFrog Artifactory
After=network.target postgresql.service
[Service]
Type=forking
User=artifactory
Group=artifactory
ExecStart=/opt/jfrog/artifactory/app/bin/artifactoryctl start
ExecStop=/opt/jfrog/artifactory/app/bin/artifactoryctl stop
LimitNOFILE=65536
LimitNPROC=65536
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
SERVICE
# Start Artifactory
sudo systemctl daemon-reload
sudo systemctl enable artifactory
sudo systemctl start artifactory
Repository Types
Artifactory uses different terminology than Nexus but the concepts are equivalent:
| Artifactory Term | Nexus Equivalent | Purpose | Example |
|---|---|---|---|
| Local | Hosted | Stores your internal artifacts | Company Docker images, private npm packages |
| Remote | Proxy | Caches artifacts from external registries | Docker Hub, npmjs.org, Maven Central |
| Virtual | Group | Aggregates local and remote repos behind one URL | Single npm URL for internal and public packages |
| Federated | N/A (Pro+) | Mirrors local repos across multiple Artifactory instances | Multi-site artifact distribution |
The virtual repository is your single endpoint. Point all build tools at the virtual repo, and Artifactory handles routing between local and remote sources. When a client pushes an artifact, the virtual repo's defaultDeploymentRepo setting determines which local repo receives the upload.
Repository Naming Conventions
A consistent naming convention makes administration easier:
Format: {team}-{format}-{type}-{qualifier}
Examples: platform-docker-local
platform-docker-remote-dockerhub
platform-docker-virtual
backend-maven-local-releases
backend-maven-local-snapshots
shared-npm-virtual
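A convention like this is easy to enforce in automation. A hypothetical helper (the function name and argument order are illustrative, not part of any JFrog tooling) that assembles repository keys and drops an empty qualifier:

```shell
# Build a repository key from {team}-{format}-{type}-{qualifier};
# the qualifier is optional and omitted when empty.
repo_key() {
  team=$1; format=$2; type=$3; qualifier=$4
  if [ -n "$qualifier" ]; then
    echo "${team}-${format}-${type}-${qualifier}"
  else
    echo "${team}-${format}-${type}"
  fi
}

repo_key platform docker local            # -> platform-docker-local
repo_key platform docker remote dockerhub # -> platform-docker-remote-dockerhub
repo_key backend maven local releases     # -> backend-maven-local-releases
```

Using a generator like this in your provisioning scripts guarantees no hand-typed repo key ever drifts from the convention.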
Docker Registry Configuration
Local Docker Repository
Create via the UI or REST API:
# Create local Docker repository via REST API
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/docker-local" \
-H "Content-Type: application/json" \
-d '{
"key": "docker-local",
"rclass": "local",
"packageType": "docker",
"dockerApiVersion": "V2",
"description": "Internal Docker images",
"xrayIndex": true,
"blockPushingSchema1": true,
"maxUniqueTags": 10,
"tagRetention": 100,
"propertySets": ["artifactory"]
}'
The maxUniqueTags setting is critical for storage management. It limits how many tags a single image can have, automatically removing the oldest tags when the limit is exceeded.
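The pruning behavior can be pictured with a toy sketch (keep_newest is a hypothetical helper, not an Artifactory command): given tags ordered oldest-first, everything beyond the newest N is discarded.

```shell
# Given newline-separated tags ordered oldest-first on stdin,
# print only the newest N -- what a maxUniqueTags-style limit would keep.
keep_newest() {
  tail -n "$1"
}

printf '1.0\n1.1\n1.2\n1.3\n' | keep_newest 2   # 1.2 and 1.3 survive
```

In practice CI pipelines that tag every build (e.g. by commit SHA) are the usual reason this limit matters.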
Remote Docker Repository
# Proxy Docker Hub
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/docker-remote" \
-H "Content-Type: application/json" \
-d '{
"key": "docker-remote",
"rclass": "remote",
"packageType": "docker",
"url": "https://registry-1.docker.io/",
"externalDependenciesEnabled": true,
"enableTokenAuthentication": true,
"blockPushingSchema1": true,
"missedRetrievalCachePeriodSecs": 1800,
"unusedArtifactsCleanupPeriodHours": 720,
"retrievalCachePeriodSecs": 7200,
"assumedOfflinePeriodSecs": 300,
"storeArtifactsLocally": true
}'
# Proxy GitHub Container Registry
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/ghcr-remote" \
-H "Content-Type: application/json" \
-d '{
"key": "ghcr-remote",
"rclass": "remote",
"packageType": "docker",
"url": "https://ghcr.io/",
"externalDependenciesEnabled": true,
"enableTokenAuthentication": true,
"username": "github-username",
"password": "ghp_token"
}'
# Proxy AWS ECR Public
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/ecr-remote" \
-H "Content-Type: application/json" \
-d '{
"key": "ecr-remote",
"rclass": "remote",
"packageType": "docker",
"url": "https://public.ecr.aws/",
"externalDependenciesEnabled": true
}'
Virtual Docker Repository
# Create virtual repository combining local and remote
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/docker" \
-H "Content-Type: application/json" \
-d '{
"key": "docker",
"rclass": "virtual",
"packageType": "docker",
"repositories": ["docker-local", "docker-remote", "ghcr-remote", "ecr-remote"],
"defaultDeploymentRepo": "docker-local",
"resolveDockerTagsByTimestamp": true
}'
The resolveDockerTagsByTimestamp setting ensures that when the same tag exists in multiple repositories, the most recently published one wins.
Docker Client Usage
# Login to Artifactory Docker registry
docker login artifactory.company.com
# Push an image
docker tag myapp:1.0 artifactory.company.com/docker/myapp:1.0
docker push artifactory.company.com/docker/myapp:1.0
# Pull through virtual repo (checks local first, then Docker Hub)
docker pull artifactory.company.com/docker/nginx:latest
# Use with buildkit for faster builds
DOCKER_BUILDKIT=1 docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from artifactory.company.com/docker/myapp:latest \
-t artifactory.company.com/docker/myapp:2.0 .
Helm Chart Repository
# Create local Helm repo
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/helm-local" \
-H "Content-Type: application/json" \
-d '{
"key": "helm-local",
"rclass": "local",
"packageType": "helm",
"xrayIndex": true
}'
# Create remote Helm repo for public charts
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/helm-bitnami-remote" \
-H "Content-Type: application/json" \
-d '{
"key": "helm-bitnami-remote",
"rclass": "remote",
"packageType": "helm",
"url": "https://charts.bitnami.com/bitnami"
}'
# Create virtual Helm repo
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/helm-virtual" \
-H "Content-Type: application/json" \
-d '{
"key": "helm-virtual",
"rclass": "virtual",
"packageType": "helm",
"repositories": ["helm-local", "helm-bitnami-remote"],
"defaultDeploymentRepo": "helm-local"
}'
# Add as Helm repo
helm repo add company https://artifactory.company.com/artifactory/helm-virtual \
--username deployer --password deploy-token
# Push a chart using the JFrog CLI
jf rt upload "mychart-1.0.0.tgz" helm-local/ \
--build-name=mychart --build-number=1.0.0
npm and PyPI Repositories
npm Setup
Create the local/remote/virtual trio, then configure your client:
# Create npm local
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/npm-local" \
-H "Content-Type: application/json" \
-d '{
"key": "npm-local",
"rclass": "local",
"packageType": "npm",
"xrayIndex": true
}'
# Create npm remote
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/npm-remote" \
-H "Content-Type: application/json" \
-d '{
"key": "npm-remote",
"rclass": "remote",
"packageType": "npm",
"url": "https://registry.npmjs.org",
"missedRetrievalCachePeriodSecs": 1800
}'
# Create npm virtual
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/npm-virtual" \
-H "Content-Type: application/json" \
-d '{
"key": "npm-virtual",
"rclass": "virtual",
"packageType": "npm",
"repositories": ["npm-local", "npm-remote"],
"defaultDeploymentRepo": "npm-local",
"externalDependenciesEnabled": false
}'
Client configuration:
# .npmrc
registry=https://artifactory.company.com/artifactory/api/npm/npm-virtual/
//artifactory.company.com/artifactory/api/npm/npm-virtual/:_authToken=YOUR_TOKEN
always-auth=true
Publish internal packages:
npm publish --registry=https://artifactory.company.com/artifactory/api/npm/npm-local/
For scoped packages:
# .npmrc for scoped packages
@company:registry=https://artifactory.company.com/artifactory/api/npm/npm-local/
//artifactory.company.com/artifactory/api/npm/npm-local/:_authToken=YOUR_TOKEN
registry=https://artifactory.company.com/artifactory/api/npm/npm-virtual/
PyPI Setup
# Create PyPI repositories
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/pypi-local" \
-H "Content-Type: application/json" \
-d '{
"key": "pypi-local",
"rclass": "local",
"packageType": "pypi"
}'
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/pypi-remote" \
-H "Content-Type: application/json" \
-d '{
"key": "pypi-remote",
"rclass": "remote",
"packageType": "pypi",
"url": "https://files.pythonhosted.org"
}'
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/repositories/pypi-virtual" \
-H "Content-Type: application/json" \
-d '{
"key": "pypi-virtual",
"rclass": "virtual",
"packageType": "pypi",
"repositories": ["pypi-local", "pypi-remote"],
"defaultDeploymentRepo": "pypi-local"
}'
# ~/.pip/pip.conf
[global]
index-url = https://artifactory.company.com/artifactory/api/pypi/pypi-virtual/simple/
trusted-host = artifactory.company.com
Upload with twine:
twine upload \
--repository-url https://artifactory.company.com/artifactory/api/pypi/pypi-local/ \
-u deployer -p deploy-token \
dist/*
Repository Layouts
Artifactory uses repository layouts to understand artifact coordinates (group, artifact, version). Each package type has a default layout, but you can customize them for non-standard structures.
Common layouts:
| Layout | Pattern | Used By |
|---|---|---|
| maven-2-default | [orgPath]/[module]/[baseRev]/[module]-[baseRev].[ext] | Maven, Gradle |
| npm-default | [orgPath]/[module]/[module]-[baseRev].[ext] | npm |
| simple-default | [orgPath]/[module]/[baseRev]/[module]-[baseRev].[ext] | Generic |
| nuget-default | [orgPath]/[module].[baseRev].[ext] | NuGet |
Custom layouts are useful when you have legacy artifact naming conventions that do not match standard patterns. You can define them in the UI under Administration > Repositories > Layouts.
Permission Targets
Artifactory's permission model is based on permission targets that map repositories and paths to users and groups. Understanding this model is essential for securing your artifacts.
Permission Model Hierarchy
Groups (collections of users)
|
v
Permission Targets (define what repos/paths and what actions)
|
v
Actions: Read, Annotate, Deploy, Delete, Manage
Creating a Permission Target
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/v2/security/permissions/ci-deploy" \
-H "Content-Type: application/json" \
-d '{
"name": "ci-deploy",
"repo": {
"include-patterns": ["**"],
"exclude-patterns": [],
"repositories": ["docker-local", "npm-local", "helm-local"],
"actions": {
"users": {
"ci-deployer": ["read", "write", "annotate"]
},
"groups": {
"ci-systems": ["read", "write", "annotate"]
}
}
},
"build": {
"include-patterns": ["**"],
"exclude-patterns": [],
"repositories": ["artifactory-build-info"],
"actions": {
"groups": {
"ci-systems": ["read", "write", "managedXrayMeta", "distribute"]
}
}
}
}'
Recommended Permission Structure
| Permission Target | Repositories | Users/Groups | Actions |
|---|---|---|---|
| dev-read | All virtual repos | developers group | Read |
| ci-deploy | All local repos | ci-systems group | Read, Write, Annotate |
| ci-build-info | Build info repo | ci-systems group | Read, Write, Manage |
| release-promote | Release repos | release-managers group | Read, Write, Delete |
| security-scan | All repos | security-team group | Read, Manage Xray Meta |
| admin-all | All repos | platform-admins group | Admin |
Path-Based Permissions
You can restrict access to specific paths within a repository:
# Allow team-a to only push to their namespace
curl -u admin:password -X PUT \
"http://artifactory:8082/artifactory/api/v2/security/permissions/team-a-docker" \
-H "Content-Type: application/json" \
-d '{
"name": "team-a-docker",
"repo": {
"include-patterns": ["team-a/**"],
"exclude-patterns": [],
"repositories": ["docker-local"],
"actions": {
"groups": {
"team-a": ["read", "write", "annotate"]
}
}
}
}'
Access Tokens
For CI/CD systems, use access tokens instead of passwords:
# Create a scoped access token
curl -u admin:password -X POST \
"http://artifactory:8082/access/api/v1/tokens" \
-H "Content-Type: application/json" \
-d '{
"subject": "ci-deployer",
"scope": "applied-permissions/groups:ci-systems",
"expires_in": 31536000,
"description": "CI pipeline deploy token",
"refreshable": true,
"audience": "jfrt@*"
}'
Store the returned token in your CI system's secrets manager. Tokens can be revoked without affecting the user account, making them ideal for automated systems.
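The token endpoint returns JSON with an `access_token` field. A sketch of wiring it into clients (the sed-based extraction is a convenience for environments without jq; with jq installed, `jq -r .access_token` is the cleaner choice):

```shell
# Extract access_token from a token-creation response.
extract_token() {
  sed -n 's/.*"access_token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# Example response shape (abbreviated, values are placeholders):
resp='{"token_id":"abc","access_token":"eyJ2ZXIi.example.token","expires_in":3600}'
TOKEN=$(printf '%s' "$resp" | extract_token)

# The same token works as a Bearer header or as a Docker/npm password:
# curl -H "Authorization: Bearer $TOKEN" https://artifactory.company.com/artifactory/api/system/ping
# docker login artifactory.company.com -u ci-deployer -p "$TOKEN"
echo "$TOKEN"
```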
For short-lived tokens in CI/CD:
# Create a token that expires in 1 hour (for a single pipeline run)
curl -u admin:password -X POST \
"http://artifactory:8082/access/api/v1/tokens" \
-H "Content-Type: application/json" \
-d '{
"subject": "ci-deployer",
"scope": "applied-permissions/groups:ci-systems",
"expires_in": 3600,
"description": "Short-lived pipeline token"
}'
Token Best Practices
| Practice | Reason |
|---|---|
| Use group-scoped tokens | Permissions change without reissuing tokens |
| Set short expiration for CI | Limits blast radius if token leaks |
| Use refreshable tokens for long-running services | Avoids downtime during token rotation |
| Store tokens in secrets managers | Never hardcode in pipelines |
| Audit token usage regularly | Detect unauthorized access patterns |
Xray Integration
JFrog Xray provides security scanning and license compliance for artifacts stored in Artifactory. When integrated, it becomes your first line of defense against vulnerable dependencies.
Configuring Xray Indexing
# Enable Xray indexing on an existing repository
# (configuration updates use POST; PUT is for creating repositories)
curl -u admin:password -X POST \
"http://artifactory:8082/artifactory/api/repositories/docker-local" \
-H "Content-Type: application/json" \
-d '{
"key": "docker-local",
"xrayIndex": true
}'
Creating Security Policies
# Create a security policy
curl -u admin:password -X POST \
"http://artifactory:8082/xray/api/v2/policies" \
-H "Content-Type: application/json" \
-d '{
"name": "block-critical-vulns",
"description": "Block artifacts with critical vulnerabilities",
"type": "security",
"rules": [{
"name": "critical-rule",
"criteria": {
"min_severity": "Critical",
"fix_version_dependant": false
},
"actions": {
"block_download": {
"active": true,
"unscanned": true
},
"block_release_bundle_distribution": true,
"fail_build": true,
"notify_deployer": true,
"notify_watch_recipients": true,
"create_ticket_enabled": false
},
"priority": 1
}]
}'
Creating Watches
# Create an Xray watch
curl -u admin:password -X POST \
"http://artifactory:8082/xray/api/v2/watches" \
-H "Content-Type: application/json" \
-d '{
"general_data": {
"name": "docker-security-watch",
"description": "Monitor Docker images for vulnerabilities",
"active": true
},
"project_resources": {
"resources": [{
"type": "repository",
"bin_mgr_id": "default",
"name": "docker-local",
"filters": [{
"type": "regex",
"value": ".*"
}]
}]
},
"assigned_policies": [{
"name": "block-critical-vulns",
"type": "security"
}]
}'
Storage Management
Monitor storage usage and plan for growth:
# Check storage summary
curl -u admin:password \
"http://artifactory:8082/artifactory/api/storageinfo"
# Get detailed storage breakdown by repository
curl -u admin:password \
"http://artifactory:8082/artifactory/api/storageinfo" | \
jq '.repositoriesSummaryList[] | {repoKey, usedSpace, filesCount, percentage}'
Artifact Cleanup with AQL
Artifactory Query Language (AQL) is powerful for finding and cleaning up artifacts:
# Find Docker image tags (folders) that were never downloaded,
# or not downloaded in the last 30 days
curl -u admin:password -X POST \
"http://artifactory:8082/artifactory/api/search/aql" \
-H "Content-Type: text/plain" \
-d 'items.find({
"repo": "docker-local",
"type": "folder",
"$or": [
{"stat.downloads": {"$eq": 0}},
{"stat.downloaded": {"$before": "30d"}}
]
}).include("repo", "path", "name", "stat.downloaded")'
# Find artifacts larger than 1GB
curl -u admin:password -X POST \
"http://artifactory:8082/artifactory/api/search/aql" \
-H "Content-Type: text/plain" \
-d 'items.find({
"size": {"$gt": 1073741824}
}).include("repo", "path", "name", "size").sort({"$desc": ["size"]}).limit(50)'
Key practices:
- Set up artifact cleanup --- Use the built-in artifact cleanup feature or schedule AQL-based cleanup scripts
- Monitor binaries count --- A sudden spike may indicate a misconfigured CI pipeline publishing too many artifacts
- Leverage checksum-based storage --- Artifactory deduplicates identical binaries automatically via checksum-based storage, saving significant space
- Use properties for lifecycle management --- Tag artifacts with properties like retention.keep=true to exclude them from cleanup
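The pieces above combine into a scheduled cleanup job. A hedged sketch (the repo names and endpoint match the earlier examples; ARTIFACTORY_URL, ARTIFACTORY_TOKEN, and the helper names are assumptions for this script, and jq must be installed):

```shell
#!/bin/sh
# Sketch of an AQL-driven cleanup job: find stale artifacts, then delete them.
ARTIFACTORY_URL=${ARTIFACTORY_URL:-http://artifactory:8082/artifactory}

build_aql() {
  repo=$1; age=$2
  # Note: AQL property negation has caveats around items that lack the
  # property entirely -- verify the query against your data before deleting.
  cat <<EOF
items.find({
  "repo": "$repo",
  "stat.downloaded": {"\$before": "$age"},
  "@retention.keep": {"\$ne": "true"}
}).include("repo", "path", "name")
EOF
}

cleanup() {
  build_aql "$1" "$2" |
    curl -sf -H "Authorization: Bearer $ARTIFACTORY_TOKEN" \
      -H "Content-Type: text/plain" -d @- "$ARTIFACTORY_URL/api/search/aql" |
    jq -r '.results[] | "\(.repo)/\(.path)/\(.name)"' |
    while read -r item; do
      curl -sf -X DELETE -H "Authorization: Bearer $ARTIFACTORY_TOKEN" \
        "$ARTIFACTORY_URL/$item"
    done
}

# cleanup docker-local 30d
```

Run a job like this with the delete step commented out first, and review the candidate list before enabling deletion.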
Replication Between Instances
For multi-site deployments, Artifactory supports push and pull replication:
# Configure push replication from primary to secondary
curl -u admin:password -X PUT \
"http://primary:8082/artifactory/api/replications/docker-local" \
-H "Content-Type: application/json" \
-d '{
"url": "https://secondary.company.com/artifactory/docker-local",
"username": "replication-user",
"password": "replication-password",
"enabled": true,
"cronExp": "0 0 */2 * * ?",
"syncDeletes": true,
"syncProperties": true,
"syncStatistics": false,
"enableEventReplication": true,
"pathPrefix": "",
"socketTimeoutMillis": 15000
}'
Replication strategies:
| Strategy | Use Case | Direction | Latency |
|---|---|---|---|
| Push replication | Primary writes, secondaries read | Primary to Secondary | Scheduled (minutes-hours) |
| Pull replication | Secondary pulls what it needs | Secondary from Primary | Scheduled |
| Event-based replication | Real-time sync for critical artifacts | Bidirectional | Near real-time (seconds) |
| Federated repositories | Multi-master, full sync | All nodes bidirectional | Near real-time |
For global deployments, federated repositories (Enterprise+ edition) provide the best experience since developers at any site can both push and pull with low latency.
Performance Tuning
JVM Configuration
# system.yaml
shared:
  javaHome: /opt/java
  extraJavaOpts: >-
    -Xms8g -Xmx12g
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=200
    -XX:+ParallelRefProcEnabled
    -XX:InitiatingHeapOccupancyPercent=45
    -Dartifactory.maxUploadSizeMb=25000
Database Tuning
For PostgreSQL backends:
-- Recommended PostgreSQL settings for Artifactory.
-- Note: max_connections and shared_buffers only take effect after a server
-- restart; pg_reload_conf() applies the reloadable settings only.
ALTER SYSTEM SET max_connections = 200;
ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET effective_cache_size = '12GB';
ALTER SYSTEM SET work_mem = '256MB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET wal_buffers = '64MB';
ALTER SYSTEM SET default_statistics_target = 100;
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
Filestore Configuration
For high-throughput environments, use S3 with a local cache:
<!-- binarystore.xml -->
<config version="2">
  <chain template="s3-storage-v3-direct"/>
  <provider id="s3-storage-v3-direct" type="s3-storage-v3">
    <endpoint>s3.amazonaws.com</endpoint>
    <bucketName>company-artifactory</bucketName>
    <path>filestore</path>
    <region>us-east-1</region>
    <useInstanceCredentials>true</useInstanceCredentials>
    <maxConnections>200</maxConnections>
    <multiPartLimit>104857600</multiPartLimit> <!-- 100 MB -->
  </provider>
  <provider id="cache-fs" type="cache-fs">
    <maxCacheSize>100000000000</maxCacheSize> <!-- ~100 GB local cache -->
  </provider>
</config>
Monitoring and Health Checks
# System health check (no auth required)
curl -s "http://artifactory:8082/artifactory/api/system/ping"
# Detailed system health (auth required)
curl -s -u admin:password "http://artifactory:8082/artifactory/api/system/health"
# Version info
curl -s -u admin:password "http://artifactory:8082/artifactory/api/system/version"
# Active users and connections
curl -s -u admin:password "http://artifactory:8082/artifactory/api/system/usage"
Artifactory exposes Prometheus-compatible metrics via the /artifactory/api/v1/system/metrics endpoint (metrics must first be enabled under shared.metrics.enabled in system.yaml). Use these to build dashboards and alerts for upload/download throughput, cache hit rates, storage growth, and error rates.
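A minimal Prometheus scrape job for that endpoint might look like this (the job name, target host, and credential are placeholders; the endpoint requires an authenticated identity):

```yaml
# prometheus.yml (fragment) -- assumes an admin-scoped identity token
scrape_configs:
  - job_name: artifactory
    metrics_path: /artifactory/api/v1/system/metrics
    scheme: https
    static_configs:
      - targets: ["artifactory.company.com"]
    authorization:
      type: Bearer
      credentials: "<identity-token>"
```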
Troubleshooting Common Issues
| Problem | Cause | Solution |
|---|---|---|
| Slow artifact resolution | Cache miss on remote repos | Increase retrieval cache period, check network to upstream |
| Upload fails with 413 | Client body too large for reverse proxy | Increase client_max_body_size in Nginx config |
| Docker push fails | Token authentication issue | Verify access tokens and Docker client login |
| High disk usage | No cleanup policies | Configure artifact cleanup and set retention policies |
| Database connection pool exhausted | Too many concurrent requests | Increase PostgreSQL max_connections and pool size |
| Replication lag | Network bandwidth or scheduling | Switch to event-based replication for critical repos |
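For the 413 case, the relevant Nginx directive looks like this (server name and size limit are placeholders; the limit must exceed your largest artifact):

```nginx
# nginx.conf fragment -- allow large artifact uploads through the reverse proxy
server {
    server_name artifactory.company.com;
    client_max_body_size 10g;    # 0 disables the body-size check entirely
    proxy_read_timeout 2400s;    # long uploads/downloads need generous timeouts
    location / {
        proxy_pass http://localhost:8082;
    }
}
```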
# Check Artifactory logs
docker logs artifactory --tail 200
# Check JFrog Router logs
cat /opt/jfrog/artifactory/var/log/router-service.log
# Run system diagnostics
curl -u admin:password -X POST \
"http://artifactory:8082/artifactory/api/system/support/bundle" \
-H "Content-Type: application/json" \
-d '{"parameters": {"thread_dump": true, "configuration": true, "system": true}}'
Summary
JFrog Artifactory is a comprehensive artifact management platform that scales from small teams to enterprise deployments across multiple regions. Start by setting up the local/remote/virtual repository pattern for your most-used package types, configure permission targets that follow least-privilege principles, and integrate Xray scanning for security from day one. The REST API makes everything automatable, so codify your repository and permission configuration from the start. Invest in proper storage planning with S3-backed filestores and cleanup policies to keep costs under control. For multi-site teams, federated repositories provide the best developer experience with low-latency access to artifacts regardless of location. The combination of Artifactory's universal format support, Xray's security scanning, and the JFrog CLI's automation capabilities makes it a powerful foundation for any software supply chain.