ArgoCD Application Patterns: App of Apps, ApplicationSets, and Beyond
GitOps Means Git Is the Only API
If someone SSHs into a cluster and runs kubectl apply manually, that change is a ghost. It exists, but your Git repo doesn't know about it. ArgoCD's job is to make the cluster match Git — and only Git. But as your cluster count and application count grow, you need patterns that scale beyond "one Application manifest per service."
Here's how I structure ArgoCD for teams running 20+ services across multiple environments.
Pattern 0: The Single Application (Where Everyone Starts)
# apps/api-service.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/acme/k8s-manifests.git
    targetRevision: main
    path: services/api-service/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: api
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 3
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
This works for 5 services. At 20 services across 3 environments, you're maintaining 60 nearly identical YAML files. That's not IaC — that's YAML farming.
Pattern 1: App of Apps
The App of Apps pattern uses one root Application that points to a directory of Application manifests. ArgoCD syncs the root app, discovers the child apps, and syncs those too.
Directory Structure
gitops-repo/
├── root-apps/
│   └── prod-root.yaml          # The root Application
├── apps/
│   ├── api-service.yaml        # Child Application
│   ├── worker-service.yaml
│   ├── web-frontend.yaml
│   └── monitoring-stack.yaml
└── services/
    ├── api-service/
    │   └── overlays/
    │       ├── dev/
    │       ├── staging/
    │       └── prod/
    ├── worker-service/
    │   └── overlays/
    │       ├── dev/
    │       ├── staging/
    │       └── prod/
    └── monitoring-stack/
        └── base/
Root Application
# root-apps/prod-root.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-root
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://github.com/acme/gitops-config.git
    targetRevision: main
    path: apps/
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Add a new service? Drop an Application YAML into apps/. The root app picks it up on the next sync. Remove a service? Delete the file. ArgoCD prunes the child Application and all its resources.
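One subtlety worth knowing before relying on prune here: deleting a child Application manifest removes the Application object, but cascading deletion of the workloads it manages only happens if the child carries ArgoCD's resources finalizer. A sketch of the relevant excerpt (the api-service child from above):

```yaml
# apps/api-service.yaml (excerpt)
# Without this finalizer, pruning the child Application orphans its
# Deployments, Services, etc. instead of deleting them.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-service
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
```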
When App of Apps breaks down
- You still maintain one YAML file per application
- Templating across apps requires Helm or Kustomize on the apps directory
- Scaling to 100+ apps means a large apps/ directory with lots of copy-paste
Pattern 2: ApplicationSets (The Scalable Answer)
ApplicationSets are ArgoCD's native solution for generating Applications from templates and data sources. One ApplicationSet can produce hundreds of Applications.
Git Directory Generator
Automatically create an Application for every directory under services/:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: prod-services
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - git:
        repoURL: https://github.com/acme/gitops-config.git
        revision: main
        directories:
          - path: "services/*/overlays/prod"
  template:
    metadata:
      name: "prod-{{ index .path.segments 1 }}"
    spec:
      project: applications
      source:
        repoURL: https://github.com/acme/gitops-config.git
        targetRevision: main
        path: "{{ .path.path }}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{ index .path.segments 1 }}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
New service? Create a services/new-service/overlays/prod/ directory with your Kustomize overlay. The ApplicationSet detects it and generates the Application. Zero ArgoCD config changes.
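For concreteness, here is what that new overlay might contain for a hypothetical checkout-service (the service name and image are placeholders):

```yaml
# services/checkout-service/overlays/prod/kustomization.yaml
# The bare minimum the git directory generator needs to see:
# a valid Kustomize overlay pointing back at the service's base.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: ghcr.io/acme/checkout-service   # pin the prod image tag here
    newTag: v1.0.0
```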
Matrix Generator: Multi-Cluster + Multi-Service
Deploy every service to every cluster:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: all-services-all-clusters
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - matrix:
        generators:
          - clusters:
              selector:
                matchLabels:
                  env: prod
          - git:
              repoURL: https://github.com/acme/gitops-config.git
              revision: main
              directories:
                - path: "services/*"
  template:
    metadata:
      name: "{{ .name }}-{{ index .path.segments 1 }}"
    spec:
      project: applications
      source:
        repoURL: https://github.com/acme/gitops-config.git
        targetRevision: main
        path: "{{ .path.path }}/overlays/prod"
      destination:
        server: "{{ .server }}"
        namespace: "{{ index .path.segments 1 }}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
3 clusters x 10 services = 30 Applications generated from one manifest. Add a cluster? Label it with env: prod and every service deploys automatically.
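The clusters generator matches labels on the declarative cluster Secret in the argocd namespace, so "label it" means labeling that Secret. A sketch, with the cluster name, server URL, and credentials as placeholders:

```yaml
# Declarative cluster registration; the clusters generator's
# matchLabels selector matches the labels on this Secret.
apiVersion: v1
kind: Secret
metadata:
  name: cluster-prod-us-east
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this as a cluster Secret
    env: prod                                 # matched by the generator above
type: Opaque
stringData:
  name: prod-us-east
  server: https://prod-us-east.example.com:6443
  config: |
    {
      "bearerToken": "<redacted>",
      "tlsClientConfig": { "caData": "<base64 CA>" }
    }
```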
List Generator: Explicit Control
When you need per-app overrides that don't fit neatly into directory structures:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: core-services
  namespace: argocd
spec:
  goTemplate: true
  goTemplateOptions: ["missingkey=error"]
  generators:
    - list:
        elements:
          - name: api-service
            namespace: api
            chartVersion: "2.4.1"
          - name: worker-service
            namespace: workers
            chartVersion: "1.8.0"
          - name: web-frontend
            namespace: web
            chartVersion: "3.1.2"
  template:
    metadata:
      name: "prod-{{ .name }}"
    spec:
      project: applications
      source:
        repoURL: https://github.com/acme/helm-charts.git
        targetRevision: "{{ .chartVersion }}"
        path: "charts/{{ .name }}"
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{ .namespace }}"
Pattern 3: Project-Based Isolation
Don't run everything in default. Use AppProjects to enforce boundaries:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-payments
  namespace: argocd
spec:
  description: "Payment team applications"
  sourceRepos:
    - "https://github.com/acme/payments-*"
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "payments-*"
  clusterResourceWhitelist: []
  namespaceResourceWhitelist:
    - group: ""
      kind: "*"
    - group: "apps"
      kind: "*"
  roles:
    - name: admin
      policies:
        - p, proj:team-payments:admin, applications, *, team-payments/*, allow
      groups:
        - payments-team
The payments team can deploy anything to payments-* namespaces. They cannot touch kube-system, they cannot create ClusterRoles, they cannot deploy from repos outside their org. Guardrails in code.
My Production Layout
gitops-config/
├── applicationsets/
│   ├── prod-services.yaml      # Git directory generator
│   ├── staging-services.yaml
│   └── monitoring.yaml         # List generator for obs stack
├── projects/
│   ├── platform.yaml
│   ├── team-payments.yaml
│   └── team-search.yaml
├── root-app.yaml               # Points to applicationsets/
└── services/
    ├── api-service/
    │   ├── base/
    │   │   ├── deployment.yaml
    │   │   ├── service.yaml
    │   │   └── kustomization.yaml
    │   └── overlays/
    │       ├── dev/
    │       ├── staging/
    │       └── prod/
    └── worker-service/
        ├── base/
        └── overlays/
The root app syncs applicationsets/. Each ApplicationSet generates Applications from the services/ directory. Adding a service is mkdir + kustomize files + git push. Removing a service is rm -rf + git push. No ArgoCD UI clicking, no imperative commands.
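The mkdir-plus-push workflow can be sketched as a few shell commands. This is illustrative only: checkout-service is a hypothetical name, and the base kustomization is stubbed out where a real service would list its deployment and service manifests.

```shell
# Scaffold a new service the ApplicationSet will discover on its next reconcile.
mkdir -p services/checkout-service/base services/checkout-service/overlays/prod

# Stub base; a real service would list deployment.yaml, service.yaml, etc.
cat > services/checkout-service/base/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: []
EOF

# Prod overlay pointing back at the base.
cat > services/checkout-service/overlays/prod/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
EOF

# git add / commit / push would follow; no change to ArgoCD itself is needed.
```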
Sync Strategies and Waves
When deploying multiple applications, order matters. You can't deploy an application before its database migration runs. ArgoCD supports sync waves and hooks for this.
Sync Waves
# CRDs first (wave -1), then namespace (wave 0), then app (wave 1)
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  annotations:
    argocd.argoproj.io/sync-wave: "1"
Lower waves sync first. Use negative numbers for prerequisites. Positive numbers for application resources. ArgoCD waits for each wave to be healthy before starting the next.
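The comment in the snippet mentions CRDs at wave -1 without showing one; as a sketch, pinning a CRD ahead of everything else looks like this (widgets.example.com is a placeholder):

```yaml
# Synced in wave -1, before the namespace and workloads above.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
```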
Sync Hooks for Migrations
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: ghcr.io/org/api:v1.5.0
          command: ["python", "manage.py", "migrate", "--noinput"]
      restartPolicy: Never
  backoffLimit: 3
PreSync hooks run before the main sync. The migration runs, succeeds, and the Job is cleaned up. Only then does ArgoCD update the Deployment. If the migration fails, the sync stops — your application never sees an incompatible schema.
Health Checks and Custom Health Assessments
ArgoCD tracks resource health. By default, it knows about Deployments, StatefulSets, and Services. For custom resources, define health checks.
-- Custom health check for a CronJob (configmap in argocd-cm)
hs = {}
if obj.status ~= nil then
  if obj.status.lastSuccessfulTime ~= nil then
    hs.status = "Healthy"
    hs.message = "Last successful run: " .. obj.status.lastSuccessfulTime
  else
    hs.status = "Progressing"
    hs.message = "Waiting for first successful run"
  end
else
  hs.status = "Unknown"
end
return hs
Add it to the ArgoCD ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.health.batch_CronJob: |
    hs = {}
    if obj.status ~= nil then
      if obj.status.lastSuccessfulTime ~= nil then
        hs.status = "Healthy"
      end
    end
    return hs
Without a health check, ArgoCD has no way to assess a custom resource's state: syncs report success while the resource silently fails. That pollutes your dashboard and hides real issues.
Notifications for Deployment Events
ArgoCD Notifications sends alerts when syncs succeed, fail, or need attention.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token
  template.app-sync-succeeded: |
    slack:
      attachments: |
        [{
          "color": "#18be52",
          "title": "{{.app.metadata.name}} synced successfully",
          "text": "Application {{.app.metadata.name}} is now running revision {{.app.status.sync.revision}}",
          "fields": [{
            "title": "Sync Status",
            "value": "{{.app.status.sync.status}}",
            "short": true
          }]
        }]
  template.app-sync-failed: |
    slack:
      attachments: |
        [{
          "color": "#E96D76",
          "title": "{{.app.metadata.name}} sync FAILED",
          "text": "{{.app.status.operationState.message}}"
        }]
  trigger.on-sync-succeeded: |
    - when: app.status.operationState.phase in ['Succeeded']
      send: [app-sync-succeeded]
  trigger.on-sync-failed: |
    - when: app.status.operationState.phase in ['Error', 'Failed']
      send: [app-sync-failed]
Subscribe an Application to notifications:
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.slack: deployments
    notifications.argoproj.io/subscribe.on-sync-failed.slack: deployments-alerts
Successful deploys go to #deployments. Failures go to #deployments-alerts. Keep the channels separate so failures don't get buried in success messages.
Troubleshooting
Problem: Application stuck in "OutOfSync" after sync.
Fix: Run argocd app diff &lt;app&gt; to see exactly what differs. The usual culprit is a field being rewritten after apply, by a mutating webhook, an HPA, or a controller writing back defaults, rather than anything wrong in Git.
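When the drift comes from a field a controller legitimately owns, the fix is to tell ArgoCD to ignore that field. A sketch, assuming an HPA (or similar) manages replica counts:

```yaml
# Application excerpt: exclude controller-owned fields from the diff
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas   # owned by the HPA, not Git
```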
Problem: ApplicationSet generates duplicate Applications.
Fix: Ensure your template generates unique names. Use the {{ index .path.segments N }} accessor to pull unique identifiers from directory paths.
Problem: Sync takes too long with many resources.
Fix: Increase the repo-server timeout (controller.repo.server.timeout.seconds). For applications with 100+ resources, 300s is a reasonable starting point. Also check whether the Git repo is large — the repo-server has to fetch and render it on every refresh, so a bloated repo slows every app that uses it.
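That timeout lives in the argocd-cmd-params-cm ConfigMap; a sketch (verify the key name against your ArgoCD version's documentation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Seconds the application controller waits on the repo-server (default 60).
  controller.repo.server.timeout.seconds: "300"
```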
Problem: Pruning deletes resources it shouldn't.
Fix: Annotate resources that must survive pruning with argocd.argoproj.io/sync-options: Prune=false. Note that argocd.argoproj.io/compare-options: IgnoreExtraneous only hides a resource from the diff; it does not protect it from deletion. Common for PVCs and secrets created by operators.
Conclusion
Start with single Applications to learn ArgoCD's model. Move to App of Apps when you hit 10+ services and want a single root of truth. Graduate to ApplicationSets when you need multi-cluster, multi-environment generation without maintaining one YAML per app per env. Layer in AppProjects from day one for team isolation. Use sync waves for ordering, hooks for migrations, and notifications to keep teams informed. The goal is always the same: Git is the only interface to your cluster. Everything else is just a sync loop making reality match your repo.