Ansible Fundamentals: Your First Playbook to Production
Why Ansible and Why Agentless Matters
Ansible stands apart from configuration management tools like Puppet, Chef, and SaltStack because it is entirely agentless. There is no daemon to install on target hosts, no certificate authority to manage, no message bus to configure, and no persistent connection to maintain. Ansible connects over SSH (or WinRM for Windows hosts), executes tasks through a thin Python layer on the remote machine, and disconnects. That simplicity means you can start automating infrastructure in minutes rather than spending days setting up a control plane.
The agentless model also significantly reduces your attack surface. Every agent you install on a server is another process that needs patching, monitoring, log rotation, and firewall rules. With Ansible, the control node is the only machine that needs the Ansible software installed. Your managed nodes just need Python (which ships with virtually every Linux distribution) and an SSH server, both of which you almost certainly already have.
From an operational standpoint, the agentless approach eliminates an entire class of failures. There is no agent to crash, no heartbeat to lose, and no drift between what the agent thinks the state should be and what the server actually is. Ansible evaluates the desired state at runtime every time you execute a playbook, which means you always get a fresh assessment of your infrastructure.
How Ansible Compares to Other Tools
| Feature | Ansible | Puppet | Chef | SaltStack |
|---|---|---|---|---|
| Architecture | Agentless (SSH) | Agent-based | Agent-based | Agent or agentless |
| Language | YAML (declarative) | Puppet DSL | Ruby DSL | YAML + Jinja2 |
| Learning curve | Low | Medium | High | Medium |
| Push vs Pull | Push (default) | Pull | Pull | Push or Pull |
| Requires agent | No | Yes | Yes | Optional |
| Windows support | WinRM / SSH | Yes | Yes | Yes |
| Execution order | Sequential (top to bottom) | Dependency graph | Sequential | Sequential |
The sequential execution model is particularly important. When you read an Ansible playbook, tasks run in the exact order they appear. There is no dependency graph to reason about, no resource ordering to debug. What you see is what you get.
Installing Ansible
Using pip (Recommended for Latest Version)
The pip installation gives you the most recent stable release and is the recommended approach for production control nodes:
python3 -m pip install --user ansible
ansible --version
If you need a specific version for compatibility:
python3 -m pip install --user ansible==9.3.0
To install only the core engine without the full collection bundle:
python3 -m pip install --user ansible-core
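If you prefer to keep Ansible isolated from your system Python packages, a virtual environment works just as well (the directory name below is only an example):

```shell
# Create an isolated environment for Ansible and activate it
python3 -m venv ~/.venvs/ansible
source ~/.venvs/ansible/bin/activate
pip install ansible
ansible --version
```

Remember to activate the environment in any shell (or CI job) that runs playbooks.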
Using apt on Debian/Ubuntu
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
Using dnf on RHEL/Fedora
sudo dnf install -y ansible
Using Homebrew on macOS
brew install ansible
Verifying Installation
After installation, verify everything works:
ansible --version
ansible localhost -m ping
The ping module does not send ICMP packets. It verifies that Ansible can connect to the target, execute Python, and return a JSON result. If you see "pong" in the output, your installation is working correctly.
Configuring ansible.cfg
Ansible looks for its configuration file in this order: the ANSIBLE_CONFIG environment variable, ansible.cfg in the current directory, ~/.ansible.cfg, and finally /etc/ansible/ansible.cfg. For project-specific settings, create an ansible.cfg at the root of your project:
[defaults]
inventory = inventory/
remote_user = deploy
private_key_file = ~/.ssh/ansible_ed25519
host_key_checking = False
retry_files_enabled = False
stdout_callback = yaml
timeout = 30
forks = 20
[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o PreferredAuthentications=publickey
The pipelining = True setting is one of the most impactful performance optimizations. It reduces the number of SSH operations required to execute a module by sending the module code over the existing SSH connection rather than creating a temporary file. This can speed up playbook execution by 2 to 5 times.
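Pipelining can also be toggled for a single run, without editing ansible.cfg, via the corresponding environment variable:

```shell
# One-off run with pipelining enabled (overrides ansible.cfg for this invocation)
ANSIBLE_PIPELINING=True ansible-playbook -i inventory/ site.yml
```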
The forks = 20 setting tells Ansible to manage up to 20 hosts simultaneously. The default is 5, which is conservative. Increase this based on the resources of your control node and the number of hosts you manage.
Inventory Files
The inventory tells Ansible which hosts to manage, how to connect to them, and how to organize them into groups. You can write inventories in INI or YAML format.
INI Format
[webservers]
web1.example.com
web2.example.com ansible_port=2222
web3.example.com ansible_host=10.0.1.53
[dbservers]
db1.example.com ansible_user=dbadmin
db2.example.com ansible_user=dbadmin
[loadbalancers]
lb1.example.com
[production:children]
webservers
dbservers
loadbalancers
[webservers:vars]
http_port=80
document_root=/var/www/html
[all:vars]
ansible_python_interpreter=/usr/bin/python3
YAML Format
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
          ansible_port: 2222
        web3.example.com:
          ansible_host: 10.0.1.53
      vars:
        http_port: 80
        document_root: /var/www/html
    dbservers:
      hosts:
        db1.example.com:
          ansible_user: dbadmin
        db2.example.com:
          ansible_user: dbadmin
    loadbalancers:
      hosts:
        lb1.example.com:
    production:
      children:
        webservers:
        dbservers:
        loadbalancers:
Group Variables and Host Variables
For larger inventories, define variables in separate files instead of inline:
inventory/
  hosts.yml
  group_vars/
    all.yml
    webservers.yml
    dbservers.yml
    production/
      vars.yml
      vault.yml
  host_vars/
    web1.example.com.yml
    db1.example.com.yml
Ansible automatically loads variable files that match group or host names. This keeps your inventory file clean and your variables organized.
# inventory/group_vars/webservers.yml
http_port: 80
document_root: /var/www/html
nginx_worker_processes: auto
nginx_worker_connections: 1024
ssl_certificate_path: /etc/ssl/certs/server.crt
ssl_key_path: /etc/ssl/private/server.key
# inventory/host_vars/web1.example.com.yml
nginx_server_name: web1.example.com
custom_vhost_config: true
The default inventory location is /etc/ansible/hosts, but you should always specify yours explicitly with -i:
ansible -i inventory/ all -m ping
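Before pointing playbooks at a new inventory, it is worth checking how Ansible actually parsed it. The ansible-inventory command renders the merged view of groups, hosts, and variables:

```shell
# Show the group hierarchy
ansible-inventory -i inventory/ --graph

# Dump the full merged inventory as YAML
ansible-inventory -i inventory/ --list --yaml

# Show all variables resolved for a single host
ansible-inventory -i inventory/ --host web1.example.com
```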
Ad-Hoc Commands
Before you write any playbooks, get comfortable with ad-hoc commands: one-off tasks run across your fleet straight from the command line. They are invaluable for troubleshooting, gathering information, and making quick changes:
# Check uptime on all web servers
ansible webservers -i inventory/ -m command -a "uptime"
# Check disk usage across all servers
ansible all -i inventory/ -m command -a "df -h"
# Install a package on web servers
ansible webservers -i inventory/ -m apt -a "name=nginx state=present" --become
# Copy a file to all servers
ansible all -i inventory/ -m copy -a "src=./motd dest=/etc/motd owner=root mode=0644" --become
# Restart a service
ansible webservers -i inventory/ -m service -a "name=nginx state=restarted" --become
# Gather facts about a specific host
ansible web1.example.com -i inventory/ -m setup
# Run a shell command with pipes
ansible dbservers -i inventory/ -m shell -a "ps aux | grep postgres"
# Create a user across all production servers
ansible production -i inventory/ -m user -a "name=deploy state=present shell=/bin/bash" --become
# Reboot servers (use with caution)
ansible webservers -i inventory/ -m reboot -a "reboot_timeout=300" --become
The --become flag tells Ansible to escalate privileges (sudo by default). Ad-hoc commands are useful for quick checks and emergency fixes, but anything repeatable belongs in a playbook.
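Ad-hoc commands also accept host patterns, so you can target intersections and exclusions of groups without editing the inventory:

```shell
# Hosts in webservers AND production (intersection)
ansible 'webservers:&production' -i inventory/ -m ping

# All hosts EXCEPT the database servers (exclusion)
ansible 'all:!dbservers' -i inventory/ -m ping

# Simple wildcards work too
ansible 'web*.example.com' -i inventory/ -m ping
```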
Common Ad-Hoc Module Reference
| Module | Purpose | Example |
|---|---|---|
| ping | Test connectivity | ansible all -m ping |
| command | Run a command (no shell features) | ansible all -m command -a "uptime" |
| shell | Run a command (with pipes, redirects) | ansible all -m shell -a "cat /etc/os-release" |
| copy | Copy files to remote hosts | ansible all -m copy -a "src=f.txt dest=/tmp/" |
| fetch | Copy files from remote to local | ansible all -m fetch -a "src=/var/log/syslog dest=./logs/" |
| apt / yum | Package management | ansible all -m apt -a "name=htop state=present" |
| service | Manage services | ansible all -m service -a "name=nginx state=started" |
| setup | Gather system facts | ansible all -m setup |
| file | Manage files and directories | ansible all -m file -a "path=/tmp/test state=directory" |
| user | Manage user accounts | ansible all -m user -a "name=deploy state=present" |
Playbook Structure
A playbook is a YAML file containing one or more plays. Each play targets a group of hosts and defines an ordered list of tasks. The playbook is the fundamental unit of Ansible automation.
---
- name: Configure web servers
  hosts: webservers
  become: true
  gather_facts: true
  vars:
    http_port: 80
    document_root: /var/www/html
  pre_tasks:
    - name: Update apt cache
      apt:
        update_cache: true
        cache_valid_time: 3600
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
    - name: Copy Nginx configuration
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/sites-available/default
        owner: root
        group: root
        mode: "0644"
        validate: "nginx -t -c %s"
      notify: Restart Nginx
    - name: Deploy index page
      copy:
        src: files/index.html
        dest: "{{ document_root }}/index.html"
        owner: www-data
        group: www-data
        mode: "0644"
    - name: Ensure Nginx is running
      service:
        name: nginx
        state: started
        enabled: true
  post_tasks:
    - name: Verify Nginx is responding
      uri:
        url: "http://localhost:{{ http_port }}"
        status_code: 200
      register: result
      retries: 3
      delay: 5
      until: result.status == 200
  handlers:
    - name: Restart Nginx
      service:
        name: nginx
        state: restarted
Key Playbook Components
| Component | Purpose |
|---|---|
| hosts | Which inventory group to target |
| become | Escalate privileges for the entire play |
| gather_facts | Whether to collect system facts before running tasks |
| vars | Variables available to all tasks in the play |
| pre_tasks | Tasks that run before roles |
| tasks | The main ordered list of actions to execute |
| post_tasks | Tasks that run after roles and tasks |
| handlers | Tasks that only run when notified by other tasks |
| notify | Triggers a handler when the notifying task changes something |
Handlers are critical for efficiency. If you notify "Restart Nginx" from three different tasks, the handler only runs once at the end of the play, not three times. Handlers execute in the order they are defined in the handlers section, not the order they were notified.
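As a sketch of the deduplication behavior (task and file names here are illustrative): both tasks below notify the same handler, yet Nginx restarts only once at the end of the play. If you need notified handlers to run earlier, the meta module can flush them mid-play:

```yaml
tasks:
  - name: Update main config
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: Restart Nginx

  - name: Update vhost config            # notifies the same handler
    template:
      src: vhost.conf.j2
      dest: /etc/nginx/sites-available/default
    notify: Restart Nginx

  # Optional: run all pending handlers right now instead of at play end
  - name: Flush handlers mid-play
    meta: flush_handlers

handlers:
  - name: Restart Nginx                  # runs at most once per flush
    service:
      name: nginx
      state: restarted
```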
The validate Parameter
Notice the validate parameter on the template task. This tells Ansible to validate the rendered configuration before deploying it. If nginx -t -c %s fails (where %s is replaced with the temporary file path), Ansible aborts the task and leaves the existing configuration intact. This prevents deploying broken configurations that would crash your service.
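The same pattern works for any configuration format that ships a syntax checker. Two hedged examples (the file paths are illustrative):

```yaml
# visudo -c checks sudoers syntax; a bad file here can lock you out of sudo
- name: Deploy sudoers drop-in, refusing syntactically invalid files
  copy:
    src: files/deploy-sudoers
    dest: /etc/sudoers.d/deploy
    mode: "0440"
    validate: "visudo -cf %s"

# sshd -t -f performs a config test against the rendered file
- name: Deploy sshd configuration with validation
  template:
    src: templates/sshd_config.j2
    dest: /etc/ssh/sshd_config
    validate: "sshd -t -f %s"
  notify: Restart sshd
```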
Common Modules in Depth
Ansible ships with thousands of modules. These are the ones you will use constantly in production playbooks.
Package Management
# Debian/Ubuntu with apt
- name: Install multiple packages
  apt:
    name:
      - nginx
      - curl
      - htop
      - unzip
      - jq
    state: present
    update_cache: true
    cache_valid_time: 3600

# Pin a specific version
- name: Install specific Nginx version
  apt:
    name: nginx=1.24.0-1~jammy
    state: present

# Remove a package
- name: Remove Apache (if accidentally installed)
  apt:
    name: apache2
    state: absent
    purge: true

# RHEL/CentOS/Fedora with dnf
- name: Install packages on RHEL
  dnf:
    name:
      - httpd
      - curl
      - vim
    state: present

# OS-agnostic with package module
- name: Install curl regardless of distro
  package:
    name: curl
    state: present
File Operations
- name: Create a directory tree
  file:
    path: /opt/myapp/{{ item }}
    state: directory
    owner: deploy
    group: deploy
    mode: "0755"
  loop:
    - ""
    - config
    - logs
    - data
    - tmp

- name: Copy a static file with backup
  copy:
    src: files/app.conf
    dest: /etc/myapp/app.conf
    owner: root
    group: root
    mode: "0644"
    backup: true

- name: Write content directly (no source file needed)
  copy:
    content: |
      # Application Environment
      APP_ENV=production
      APP_PORT=8080
      LOG_LEVEL=info
    dest: /opt/myapp/.env
    owner: deploy
    group: deploy
    mode: "0600"

- name: Render a Jinja2 template
  template:
    src: templates/app.conf.j2
    dest: /etc/myapp/app.conf
    owner: root
    group: root
    mode: "0644"
  notify: Restart App

- name: Download a file from the internet
  get_url:
    url: https://github.com/prometheus/node_exporter/releases/download/v1.8.1/node_exporter-1.8.1.linux-amd64.tar.gz
    dest: /tmp/node_exporter.tar.gz
    checksum: sha256:fbadb376afa7c883f87f70795700a8a200f7fd45412532571f

- name: Extract an archive
  unarchive:
    src: /tmp/node_exporter.tar.gz
    dest: /opt/
    remote_src: true
    creates: /opt/node_exporter-1.8.1.linux-amd64

- name: Create a symlink
  file:
    src: /opt/node_exporter-1.8.1.linux-amd64/node_exporter
    dest: /usr/local/bin/node_exporter
    state: link

- name: Manage a line in a file
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: "^PermitRootLogin"
    line: "PermitRootLogin no"
    state: present
  notify: Restart sshd

- name: Insert a block of text
  blockinfile:
    path: /etc/hosts
    block: |
      10.0.1.10 app-server-1
      10.0.1.11 app-server-2
      10.0.1.12 app-server-3
    marker: "# {mark} ANSIBLE MANAGED BLOCK - app servers"
User and Service Management
- name: Create application user with SSH key
  user:
    name: deploy
    shell: /bin/bash
    groups: sudo
    append: true
    create_home: true
    generate_ssh_key: true
    ssh_key_type: ed25519   # ed25519 keys have a fixed size; ssh_key_bits only applies to RSA

- name: Add authorized key for deploy user
  authorized_key:
    user: deploy
    state: present
    key: "{{ lookup('file', '~/.ssh/deploy_ed25519.pub') }}"

- name: Manage systemd service
  systemd:
    name: myapp
    state: started
    enabled: true
    daemon_reload: true

- name: Create a systemd unit file
  copy:
    content: |
      [Unit]
      Description=My Application
      After=network.target

      [Service]
      Type=simple
      User=deploy
      WorkingDirectory=/opt/myapp
      ExecStart=/opt/myapp/bin/server
      Restart=always
      RestartSec=5
      Environment=APP_ENV=production

      [Install]
      WantedBy=multi-user.target
    dest: /etc/systemd/system/myapp.service
    owner: root
    group: root
    mode: "0644"
  notify:
    - Reload systemd
    - Restart myapp
Firewall and Network
- name: Allow SSH through UFW
  ufw:
    rule: allow
    port: "22"
    proto: tcp

- name: Allow HTTP and HTTPS
  ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "80"
    - "443"

- name: Enable UFW with default deny
  ufw:
    state: enabled
    policy: deny

- name: Wait for a port to become available
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 8080
    timeout: 30
    state: started
Variables and Facts
Variables can be defined in many places. Ansible evaluates them with a specific precedence order (from lowest to highest priority):
- Role defaults (defaults/main.yml)
- Inventory file or script group variables
- Inventory group_vars/all
- Inventory group_vars/*
- Inventory file or script host variables
- Inventory host_vars/*
- Play vars
- Play vars_prompt
- Play vars_files
- Role vars/main.yml
- Block variables
- Task variables (including include_vars)
- set_fact / registered variables
- Extra variables (-e on the command line), which always win
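A minimal illustration of the top and bottom of that list (the variable values are hypothetical): suppose inventory group_vars/webservers.yml sets http_port: 80. A play-level var overrides it, and -e on the command line overrides everything:

```yaml
# group_vars/webservers.yml defines http_port: 80
- hosts: webservers
  vars:
    http_port: 8080   # play vars beat inventory group_vars
  tasks:
    - name: Show the effective value
      debug:
        msg: "http_port is {{ http_port }}"   # 8080 here; 9090 if run with -e http_port=9090
```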
Working with Facts
Facts are variables automatically gathered from managed hosts. They contain extensive information about the system:
- name: Display OS information
  debug:
    msg: >
      This host runs {{ ansible_distribution }} {{ ansible_distribution_version }}
      on {{ ansible_architecture }} with {{ ansible_memtotal_mb }}MB RAM
      and {{ ansible_processor_vcpus }} CPUs

- name: Use facts for conditional logic
  apt:
    name: nginx
    state: present
  when: ansible_distribution == "Ubuntu" and ansible_distribution_major_version | int >= 22

- name: Set variable based on available memory
  set_fact:
    nginx_worker_connections: "{{ 1024 if ansible_memtotal_mb < 2048 else 4096 }}"
You can inspect all facts with:
ansible web1.example.com -m setup
ansible web1.example.com -m setup -a "filter=ansible_distribution*"
Custom Facts
You can create custom facts by placing scripts or JSON/INI files in /etc/ansible/facts.d/ on managed hosts:
- name: Deploy custom fact script
  copy:
    content: |
      #!/bin/bash
      echo '{"app_version": "2.5.1", "deployed_at": "'$(date -Iseconds)'"}'
    dest: /etc/ansible/facts.d/app.fact
    mode: "0755"

- name: Re-gather facts to pick up custom fact
  setup:

- name: Use custom fact
  debug:
    msg: "App version is {{ ansible_local.app.app_version }}"
Conditionals with when
Use when to skip tasks based on conditions. The when clause accepts raw Jinja2 expressions without the double curly braces:
- name: Install Nginx on Debian systems
  apt:
    name: nginx
    state: present
  when: ansible_os_family == "Debian"

- name: Install httpd on RedHat systems
  yum:
    name: httpd
    state: present
  when: ansible_os_family == "RedHat"

- name: Run only on Ubuntu 22.04 or later
  debug:
    msg: "Modern Ubuntu detected"
  when:
    - ansible_distribution == "Ubuntu"
    - ansible_distribution_major_version | int >= 22

- name: Skip if variable is not defined
  template:
    src: custom.conf.j2
    dest: /etc/myapp/custom.conf
  when: custom_config is defined and custom_config | length > 0

- name: Restart only if config changed
  service:
    name: myapp
    state: restarted
  when: config_result is changed

- name: Act on registered variable
  command: /opt/myapp/bin/migrate
  register: migration_result
  changed_when: "'Already up to date' not in migration_result.stdout"
  failed_when: migration_result.rc not in [0, 2]
Loops
Iterate over lists with loop:
- name: Create multiple users
  user:
    name: "{{ item.name }}"
    groups: "{{ item.groups }}"
    shell: "{{ item.shell | default('/bin/bash') }}"
    state: present
  loop:
    - { name: alice, groups: sudo }
    - { name: bob, groups: developers }
    - { name: carol, groups: "sudo,developers" }

- name: Install packages from a variable list
  apt:
    name: "{{ item }}"
    state: present
  loop: "{{ required_packages }}"

- name: Create directories with specific permissions
  file:
    path: "{{ item.path }}"
    state: directory
    owner: "{{ item.owner }}"
    mode: "{{ item.mode }}"
  loop:
    - { path: /opt/myapp, owner: deploy, mode: "0755" }
    - { path: /opt/myapp/logs, owner: deploy, mode: "0755" }
    - { path: /opt/myapp/config, owner: deploy, mode: "0700" }
    - { path: /opt/myapp/data, owner: deploy, mode: "0750" }

- name: Template multiple configuration files
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    mode: "0644"
  loop:
    - { src: templates/nginx.conf.j2, dest: /etc/nginx/nginx.conf }
    - { src: templates/app.conf.j2, dest: /etc/myapp/app.conf }
    - { src: templates/logrotate.j2, dest: /etc/logrotate.d/myapp }
  notify: Reload Nginx
Looping Over Dictionaries
- name: Create users from a dictionary
  user:
    name: "{{ item.key }}"
    uid: "{{ item.value.uid }}"
    groups: "{{ item.value.groups }}"
  loop: "{{ users | dict2items }}"
  vars:
    users:
      alice:
        uid: 1001
        groups: sudo
      bob:
        uid: 1002
        groups: developers
For older playbooks you may see with_items instead of loop. Both work, but loop is the modern syntax.
Tags
Tags let you run a subset of tasks from a playbook, which is essential for large playbooks where you might want to run only the configuration step or only the deployment step:
- name: Install Nginx
  apt:
    name: nginx
    state: present
  tags:
    - packages
    - nginx
    - install

- name: Configure Nginx
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  tags:
    - configuration
    - nginx

- name: Deploy application code
  git:
    repo: "{{ app_repo }}"
    dest: /opt/myapp
    version: "{{ app_version }}"
  tags:
    - deploy
    - app
Run only tagged tasks:
# Run only configuration tasks
ansible-playbook site.yml --tags "configuration"
# Run everything except package installation
ansible-playbook site.yml --skip-tags "packages"
# Run tasks tagged with either nginx or deploy
ansible-playbook site.yml --tags "nginx,deploy"
# List all tags in a playbook
ansible-playbook site.yml --list-tags
# List all tasks that would run with specific tags
ansible-playbook site.yml --tags "configuration" --list-tasks
Error Handling and Resilience
Production playbooks need robust error handling:
- name: Attempt to download latest release
  get_url:
    url: "https://releases.example.com/latest.tar.gz"
    dest: /tmp/app.tar.gz
  register: download_result
  ignore_errors: true

- name: Fall back to cached version
  copy:
    src: files/app-fallback.tar.gz
    dest: /tmp/app.tar.gz
  when: download_result is failed

- name: Run database migration with retry
  command: /opt/myapp/bin/migrate
  register: migrate_result
  retries: 3
  delay: 10
  until: migrate_result.rc == 0

- name: Use block for grouped error handling
  block:
    - name: Deploy new application version
      git:
        repo: "{{ app_repo }}"
        dest: /opt/myapp
        version: "{{ app_version }}"
    - name: Run post-deploy checks
      uri:
        url: "http://localhost:8080/health"
        status_code: 200
  rescue:
    - name: Roll back to previous version
      git:
        repo: "{{ app_repo }}"
        dest: /opt/myapp
        version: "{{ previous_version }}"
    - name: Notify team of failure
      uri:
        url: "{{ slack_webhook }}"
        method: POST
        body_format: json
        body:
          text: "Deployment of {{ app_version }} failed on {{ inventory_hostname }}. Rolled back to {{ previous_version }}."
  always:
    - name: Ensure application is running
      service:
        name: myapp
        state: started
Check Mode (Dry Run)
Run a playbook without making changes:
ansible-playbook site.yml --check
Check mode tells each module to report what it would change without actually doing it. Combine with --diff to see file content differences:
ansible-playbook site.yml --check --diff
You can force certain tasks to always run in check mode or to always execute even during a check run:
- name: This always runs, even in check mode
  command: /opt/myapp/bin/version
  check_mode: false
  register: app_version
  changed_when: false

- name: This is always checked, even in normal mode
  apt:
    name: nginx
    state: present
  check_mode: true
  register: nginx_check
Not all modules support check mode perfectly, but the built-in ones (apt, copy, template, file, service) all handle it well.
Practical Example: Production Web Server Deployment
Here is a complete, production-ready playbook that installs Nginx, deploys a static site, hardens SSH, configures a firewall, sets up log rotation, and deploys monitoring:
---
- name: Deploy production web server
  hosts: webservers
  become: true
  vars:
    domain: example.com
    document_root: "/var/www/{{ domain }}"
    ssh_port: 22
    allowed_ssh_networks:
      - "10.0.0.0/8"
      - "172.16.0.0/12"

  pre_tasks:
    - name: Validate required variables
      assert:
        that:
          - domain is defined
          - domain | length > 0
        fail_msg: "The 'domain' variable must be defined"
      tags: always

  tasks:
    - name: Update apt cache
      apt:
        update_cache: true
        cache_valid_time: 3600
      tags: packages

    - name: Install required packages
      apt:
        name:
          - nginx
          - ufw
          - logrotate
          - fail2ban
          - unattended-upgrades
        state: present
      tags: packages

    - name: Harden SSH configuration
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        state: present
      loop:
        - { regexp: "^PermitRootLogin", line: "PermitRootLogin no" }
        - { regexp: "^PasswordAuthentication", line: "PasswordAuthentication no" }
        - { regexp: "^X11Forwarding", line: "X11Forwarding no" }
        - { regexp: "^MaxAuthTries", line: "MaxAuthTries 3" }
      notify: Restart sshd
      tags: security

    - name: Create document root
      file:
        path: "{{ document_root }}"
        state: directory
        owner: www-data
        group: www-data
        mode: "0755"
      tags: deploy

    - name: Deploy site content
      synchronize:
        src: files/site/
        dest: "{{ document_root }}/"
        delete: true
        rsync_opts:
          - "--exclude=.git"
      notify: Reload Nginx
      tags: deploy

    - name: Deploy Nginx virtual host
      template:
        src: templates/vhost.conf.j2
        dest: "/etc/nginx/sites-available/{{ domain }}"
        owner: root
        group: root
        mode: "0644"
        validate: "nginx -t -c %s"
      notify: Reload Nginx
      tags: configuration

    - name: Enable virtual host
      file:
        src: "/etc/nginx/sites-available/{{ domain }}"
        dest: "/etc/nginx/sites-enabled/{{ domain }}"
        state: link
      notify: Reload Nginx
      tags: configuration

    - name: Remove default site
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent
      notify: Reload Nginx
      tags: configuration

    - name: Deploy logrotate configuration
      copy:
        content: |
          /var/log/nginx/*.log {
              daily
              missingok
              rotate 30
              compress
              delaycompress
              notifempty
              create 0640 www-data adm
              sharedscripts
              postrotate
                  [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
              endscript
          }
        dest: /etc/logrotate.d/nginx
        owner: root
        group: root
        mode: "0644"
      tags: logging

    - name: Configure UFW defaults
      ufw:
        state: enabled
        policy: deny
        logging: "on"
      tags: firewall

    - name: Allow SSH through UFW
      ufw:
        rule: allow
        port: "{{ ssh_port | string }}"
        proto: tcp
        src: "{{ item }}"
      loop: "{{ allowed_ssh_networks }}"
      tags: firewall

    - name: Allow HTTP and HTTPS through UFW
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      loop:
        - "80"
        - "443"
      tags: firewall

    - name: Ensure Nginx is started and enabled
      service:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Reload Nginx
      service:
        name: nginx
        state: reloaded

    - name: Restart sshd
      service:
        name: sshd
        state: restarted
Run the playbook:
# Full deployment
ansible-playbook -i inventory/ deploy-webserver.yml
# Only deploy new content
ansible-playbook -i inventory/ deploy-webserver.yml --tags deploy
# Only update firewall rules
ansible-playbook -i inventory/ deploy-webserver.yml --tags firewall
# Dry run with diff output
ansible-playbook -i inventory/ deploy-webserver.yml --check --diff
# Target only staging servers
ansible-playbook -i inventory/ deploy-webserver.yml --limit staging
# Verbose output for debugging
ansible-playbook -i inventory/ deploy-webserver.yml -vvv
Integrating Ansible Playbooks with CI/CD
Running playbooks manually works for small teams, but production deployments should be automated through CI/CD pipelines.
GitHub Actions Example
# .github/workflows/deploy.yml
name: Deploy Infrastructure

on:
  push:
    branches: [main]
  workflow_dispatch:
    inputs:
      environment:
        description: "Target environment"
        required: true
        default: "staging"
        type: choice
        options:
          - staging
          - production

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.environment || 'staging' }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install Ansible
        run: pip install ansible boto3

      - name: Configure SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/deploy_key
          chmod 600 ~/.ssh/deploy_key
          echo "${{ secrets.SSH_KNOWN_HOSTS }}" > ~/.ssh/known_hosts

      - name: Run playbook
        env:
          ANSIBLE_VAULT_PASSWORD: ${{ secrets.VAULT_PASSWORD }}
          ANSIBLE_HOST_KEY_CHECKING: "False"
        run: |
          echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
          chmod 600 .vault_pass
          ansible-playbook -i inventory/ site.yml \
            --vault-password-file .vault_pass \
            --limit ${{ github.event.inputs.environment || 'staging' }}
          rm -f .vault_pass
Troubleshooting Common Issues
SSH Connection Problems
# Test SSH manually first
ssh -i ~/.ssh/key.pem deploy@target-host
# Increase Ansible verbosity
ansible all -m ping -vvvv
# Check if Python exists on the remote host
ansible all -m raw -a "which python3"
Module Errors
# Check module documentation
ansible-doc apt
ansible-doc template
# Validate YAML syntax before running
python3 -c "import yaml; yaml.safe_load(open('site.yml'))"
# Use ansible-lint for best practices
pip install ansible-lint
ansible-lint site.yml
Performance Tuning
| Setting | Default | Recommended | Impact |
|---|---|---|---|
| forks | 5 | 20-50 | Parallel host execution |
| pipelining | False | True | Reduces SSH operations |
| gather_facts | True | As needed | Saves 2-5 seconds per host |
| fact_caching | memory | jsonfile or redis | Avoids re-gathering facts |
| poll_interval | 15 | 5-10 | Faster async task completion |
To cache facts between playbook runs:
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts_cache
fact_caching_timeout = 86400
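With the jsonfile backend, the cache is simply one JSON file per inventory hostname, which makes it easy to inspect (this assumes jq is installed and the cache path from the config above):

```shell
# One cache file per inventory hostname
ls /tmp/ansible_facts_cache/

# Peek at a cached fact for one host
jq '.ansible_distribution' /tmp/ansible_facts_cache/web1.example.com
```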
Next Steps
Once you are comfortable writing playbooks, the next areas to explore are roles (for organizing playbooks into reusable components), Ansible Vault (for encrypting secrets), and dynamic inventories (for cloud environments where hosts come and go). Each of these builds on the fundamentals covered here and is essential for running Ansible at production scale.