Ansible Automation Platform 2.4 Installation Guide
Ansible Automation Platform 2.4 is the latest enterprise-ready distribution from Red Hat, bundling the Ansible engine, the automation controller (formerly Tower), and a suite of ready-to-use collections. Installing it correctly ensures a stable foundation for all your automation workflows. This guide walks you through every step, from prerequisites to post-install verification, so you can get your environment up and running confidently.
Introduction
Ansible Automation Platform 2.4 combines the power of the open-source Ansible engine with a graphical user interface, role-based access control, and advanced scheduling. Whether you're deploying on a single control node or a multi-node cluster, this guide covers the full installation process on supported Linux distributions (Red Hat Enterprise Linux, CentOS Stream, and Ubuntu). By following the steps below, you'll achieve a production-ready deployment that can scale with your organization's needs.
Prerequisites
Before you begin, make sure the following conditions are met:
| Item | Details |
|---|---|
| Operating System | RHEL 8, CentOS 8/Stream 8, or Ubuntu 22.04 LTS |
| Hardware | Minimum 4 CPU cores, 8 GB RAM, 50 GB disk (recommended 100 GB) |
| Network | Internet access for pulling packages; DNS resolution for your domain |
| User | Root or a user with sudo privileges |
| Firewall | Ports 80, 443, 4433, 8443, 8000, 9090 open |
| SELinux | Enforcing mode (RHEL/CentOS) or disabled (Ubuntu) |
| Python | Python 3.8+ installed (RHEL/CentOS includes it) |
| Red Hat Subscription | For RHEL/CentOS, a valid subscription to access the repositories |
Tip: If you're using a virtual machine or container, allocate enough resources to avoid performance bottlenecks.
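The firewall requirement from the table can be satisfied in one pass with firewalld on RHEL/CentOS (Ubuntu users would use ufw equivalents instead); a sketch, assuming all six listed ports should be reachable:

```shell
# Open the ports from the prerequisites table (requires root/sudo and firewalld)
for port in 80 443 4433 8443 8000 9090; do
  sudo firewall-cmd --permanent --add-port=${port}/tcp
done
# Apply the permanent rules to the running firewall
sudo firewall-cmd --reload
```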
Step 1 – Register the System (RHEL/CentOS)
subscription-manager register --auto-attach
subscription-manager repos --enable rhel-8-server-ansible-automation-platform-2.4-rpms
On CentOS Stream, use the equivalent subscription-manager commands after adding the Red Hat repositories.
Step 2 – Install the Ansible Automation Platform Packages
dnf install -y ansible-automation-platform-2.4
For Ubuntu, use the official Red Hat repository or the ansible-automation-platform-2.4 PPA if available.
Step 3 – Configure the Database
Ansible Tower uses PostgreSQL. The installer can set up a local PostgreSQL instance or use an external one. For a local instance:
ansible-tower-setup.sh
During the wizard:
- Set the PostgreSQL password – choose a strong password.
- Configure the database host – the default is localhost.
- Select the desired storage – use the default unless you have specific storage requirements.
If you prefer an external database, edit /etc/tower/conf.d/database.yml with your connection details before running the setup.
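For the external-database case, a minimal database.yml might look like the sketch below; the key names are an assumption, so verify them against the comments shipped in the file on your system:

```yaml
# /etc/tower/conf.d/database.yml – external PostgreSQL (key names assumed; check your installed file)
database:
  host: db.example.com        # external PostgreSQL server
  port: 5432
  name: tower                 # database to use
  user: tower
  password: "<strong-password>"
```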
Step 4 – Configure the Tower Service
The installer also configures the web service, reverse proxy, and SSL. Key settings:
| Setting | Default | Recommendation |
|---|---|---|
| tower_web_port | 4433 | Keep default for internal access |
| tower_web_ssl_certificate | auto-generated | Replace with your own cert in production |
| tower_web_ssl_certificate_key | auto-generated | Same as above |
| tower_web_proxy | none | Configure if behind an external reverse proxy |
Edit /etc/tower/conf.d/tower_web.yml as needed, then restart the service:
systemctl restart ansible-tower.service
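To replace the auto-generated certificate from the table above with a CA-signed one, copy your files over the paths referenced by tower_web.yml (the paths below are assumptions; match them to your configuration) and restart:

```shell
# Install a CA-signed certificate and key, then restart the web service
sudo cp mycert.pem /etc/tower/tower.cert
sudo cp mykey.pem /etc/tower/tower.key
# Keep the private key readable only by privileged users
sudo chmod 0600 /etc/tower/tower.key
sudo systemctl restart ansible-tower.service
```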
Step 5 – Set Up the Automation Controller
The Automation Controller (formerly Tower) is the heart of the platform. To initialize it:
ansible-tower-setup.sh
Follow the prompts:
- Admin password – set a strong password for the default admin user.
- Inventory – optionally load a sample inventory.
- Playbooks – load the default collection of playbooks for quick testing.
Once finished, the controller will be accessible at https://<your-server-ip>/.
Step 6 – Verify the Installation
- Log in to the web interface with the admin credentials.
- Check the Dashboard – you should see the “Welcome to Ansible Automation Platform” page.
- Run a Test Job – create a simple job template that pings localhost:

- name: Ping localhost
  hosts: localhost
  tasks:
    - name: Ping
      ping:
- Execute the job template and confirm the status turns successful.
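The same check can be made from the command line: /api/v2/ping/ is an unauthenticated health endpoint. Replace the placeholder host with your own, and drop -k once a trusted certificate is installed:

```shell
# Returns JSON with the controller version and instance list
curl -sk https://<your-server-ip>/api/v2/ping/
```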
Step 7 – Install AWX (Optional)
AWX is the upstream open‑source project that powers Tower. If you need to run AWX for development or testing, deploy it via Docker Compose:
git clone https://github.com/ansible/awx.git
cd awx/installer
ansible-playbook -i inventory install.yml
AWX will appear at https://<awx-host>/.
Step 8 – Enable Automation Hub (Optional)
Automation Hub provides access to certified collections. To enable it:
- Create a Hub token via your Red Hat account.
- Configure the token in /etc/ansible/ansible.cfg:
[galaxy]
hub_token =
- Sync Collections:
ansible-galaxy collection install community.general
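To pull certified collections from Automation Hub rather than public Galaxy, the [galaxy] section is normally expanded into a server list; the URLs below assume the hosted hub on console.redhat.com:

```ini
[galaxy]
server_list = automation_hub, galaxy

[galaxy_server.automation_hub]
url=https://console.redhat.com/api/automation-hub/content/published/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=<your-hub-token>

[galaxy_server.galaxy]
url=https://galaxy.ansible.com/
```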
Step 9 – Configure Credentials and Inventories
- Navigate to Inventories → Add.
- Create an inventory (e.g., Production).
- Add Hosts manually or via a dynamic inventory script.
- Create Credentials (SSH keys, Vault passwords) under Credentials.
- Link credentials to your job templates.
Step 10 – Set Up Scheduling and Notifications
- Scheduling: Under a job template, click Schedule and set a cron-like schedule.
- Notifications: Configure email or webhook notifications in Settings → Notifications.
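Schedules created in the UI are stored as iCal recurrence rules, and the same schedule can be created through the REST API; in this sketch the server name, job template ID, and token are placeholders:

```shell
# Create a nightly 02:00 UTC schedule on job template 42 (IDs and token are placeholders)
curl -s -X POST "https://<your-server-ip>/api/v2/job_templates/42/schedules/" \
  -H "Authorization: Bearer <api-token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "nightly-deploy", "rrule": "DTSTART:20240101T020000Z RRULE:FREQ=DAILY;INTERVAL=1"}'
```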
Step 11 – Backup and Disaster Recovery
Regular backups protect your automation data:
ansible-tower-setup.sh --backup
Store the backup file in a secure, off‑site location. To restore:
ansible-tower-setup.sh --restore /path/to/backup.tar.gz
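Backups only protect you if they run on a schedule; a cron entry such as the following (script path and log location are placeholders) automates a nightly backup:

```
# /etc/cron.d/tower-backup – nightly backup at 02:00, output logged for auditing
0 2 * * * root /path/to/ansible-tower-setup.sh --backup >> /var/log/tower-backup.log 2>&1
```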
FAQ
Q1: Why is the installation failing with “No such file or directory” errors?
A: Ensure the ansible-automation-platform-2.4 package is fully downloaded. Run dnf clean all && dnf update to refresh the metadata.
Q2: How do I upgrade to a newer version after installing 2.4?
A: Use dnf update ansible-automation-platform-2.4 or the Red Hat subscription manager to pull the latest updates.
Q3: Can I run the platform in a containerized environment?
A: Yes. Red Hat provides official Docker images for the Automation Controller. Refer to the official container documentation for detailed steps.
Q4: What if my firewall blocks port 4433?
A: Expose port 4433 in your firewall or configure the web service to use a different port (e.g., 8443) by editing /etc/tower/conf.d/tower_web.yml.
Q5: How do I secure the web interface with corporate SSO?
A: Configure OAuth2 or LDAP in Settings → Access → SSO following the platform’s authentication documentation.
Conclusion
Installing Ansible Automation Platform 2.4 is a structured process that, once mastered, unlocks powerful automation capabilities across your organization. From setting up the database and web service to configuring inventories, credentials, and schedules, every step builds a strong foundation. By following this guide, you’ll have a production-ready environment that scales, stays secure, and integrates cleanly with your existing infrastructure. Happy automating!
Step 12 – Monitoring, Logging, and Alerting
A production‑grade Automation Controller must be observable. Red Hat provides built‑in metrics and integrates with external monitoring stacks.
| Component | What to monitor | Typical tools |
|---|---|---|
| Controller API | Request latency, error rates, token expirations | Prometheus + Grafana, OpenTelemetry |
| Web UI | User login attempts, session timeouts | ELK stack, Splunk |
| Execution Environments | Job success/failure, runtime duration, resource consumption | Ansible Tower metrics endpoint (/api/v2/metrics) |
| Database | Connection pool usage, query latency | PostgreSQL exporter for Prometheus |
Configuring alerts
- Enable the tower.metrics endpoint in /etc/tower/tower.yml.
- Scrape the endpoint with a Prometheus server.
- Create alert rules such as “Job failure rate > 5 % over 5 min” or “CPU > 80 % on controller node for > 2 min”.
- Route alerts to Slack, PagerDuty, or email via the Notifications channel you defined earlier.
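The scrape half of the steps above can be sketched in prometheus.yml; the metrics path comes from the monitoring table, while the bearer token is an assumption (generate an API token for a read-only monitoring user):

```yaml
# prometheus.yml – scrape the controller's metrics endpoint
scrape_configs:
  - job_name: "automation-controller"
    metrics_path: /api/v2/metrics
    scheme: https
    authorization:
      type: Bearer
      credentials: "<api-token>"
    static_configs:
      - targets: ["controller.example.com:443"]
```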
Step 13 – High‑Availability (HA) Deployment
For mission‑critical workloads, run the Controller in an HA pair:
- Deploy two controller nodes behind a load balancer (e.g., HAProxy or AWS ELB).
- Synchronize the database using PostgreSQL streaming replication.
- Enable shared storage for /var/lib/ansible-tower (NFS or a clustered filesystem).
- Configure the load balancer to distribute API traffic across both nodes, preserving sticky sessions for the UI.
- Set up health checks that query /api/v2/ping and mark a node unhealthy if it returns a non-200 response.
Red Hat’s Automation Controller HA guide details each step and includes scripts for automated failover.
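The load-balancer half of this topology can be sketched for HAProxy; the node addresses and certificate path below are placeholders:

```
# haproxy.cfg sketch – terminate TLS, keep UI sessions sticky, health-check /api/v2/ping/
frontend controller_fe
    bind *:443 ssl crt /etc/haproxy/controller.pem
    default_backend controller_be

backend controller_be
    balance roundrobin
    cookie SRV insert indirect nocache      # sticky sessions for the web UI
    option httpchk GET /api/v2/ping/
    server node1 10.0.0.11:443 ssl verify none check cookie node1
    server node2 10.0.0.12:443 ssl verify none check cookie node2
```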
Step 14 – Scaling Execution Environments
Execution environments (EEs) are the runtime containers that actually run your playbooks. Scaling them improves throughput and isolates dependencies.
- Create a custom EE with the required packages (e.g., python-dateutil, docker) and push it to a private registry.
- Define multiple EE variants (e.g., ee-python-3.9, ee-go-1.22) for different language runtimes.
- Assign EEs to job templates based on inventory tags or project requirements.
- Automate scaling with the Controller’s Dynamic Runner Pool feature: set a minimum and maximum number of workers, and the platform will spin up additional pods on demand when the queue length exceeds a threshold.
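Custom EEs are usually built with ansible-builder; a minimal definition for the image described in the first bullet might look like this (base image and package names are illustrative):

```yaml
# execution-environment.yml – input for `ansible-builder build`
version: 3
images:
  base_image:
    name: quay.io/ansible/ansible-runner:latest
dependencies:
  python:
    - python-dateutil
  system:
    - docker [platform:rpm]
```

Build and push it with something like `ansible-builder build -t registry.example.com/ee-custom:1.0` followed by `podman push`.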
Step 15 – Integration with CI/CD Pipelines
Embedding the Controller into a CI/CD workflow accelerates delivery:
- Expose a webhook endpoint (/api/v2/job_templates/<id>/launch) that can be called from Jenkins, GitLab CI, or GitHub Actions.
- Pass credentials securely using OAuth2 client tokens generated in the Credentials section.
- Capture job status via the REST API and feed the result back into the pipeline as a step result.
- Use the Project feature to store source code references, enabling traceability from commit to execution.
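Before wiring curl calls into a pipeline, it helps to sanity-check your jq filters against a saved response; the JSON below imitates the count/results envelope that controller list endpoints return:

```shell
# Fake a saved response from /api/v2/jobs/ and extract the status field
cat > /tmp/jobs.json <<'EOF'
{"count": 1, "results": [{"id": 42, "name": "CI Deployment", "status": "successful"}]}
EOF
STATUS=$(jq -r '.results[0].status' /tmp/jobs.json)
echo "$STATUS"
```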
Step 15 – Integration with CI/CD Pipelines (Continued)
To fully integrate the Controller into CI/CD workflows, consider the following enhancements:
5. **Automate job template creation** within your pipeline. As an example, use a script to dynamically generate job templates based on code changes, ensuring consistency across deployments.
6. **Implement feedback loops** by using the Controller’s API to trigger remediation actions if a job fails. For example, a failed deployment could automatically roll back to a previous version or trigger a notification for manual intervention.
7. **Centralize credential management** by syncing credentials from your CI/CD platform (e.g., GitLab or GitHub) directly into the Controller’s *Credentials* section, reducing manual setup and security risks.
Here’s a completed GitLab CI example demonstrating end-to-end integration:
```yaml
# $CONTROLLER_TOKEN is an OAuth2 token for the controller, stored as a masked CI/CD variable
deploy:
  image: quay.io/ansible/ansible-runner:latest
  script:
    # Look up the job template ID by name (controller list endpoints wrap data in a "results" array)
    - JOB_TEMPLATE_ID=$(curl -s -H "Authorization: Bearer $CONTROLLER_TOKEN" "https://tower.example.com/api/v2/job_templates/?name=deploy-app" | jq -r '.results[0].id')
    # Launch the job template with extra variables; the response includes the new job's ID
    - JOB_ID=$(curl -s -X POST "https://tower.example.com/api/v2/job_templates/$JOB_TEMPLATE_ID/launch/" -H "Authorization: Bearer $CONTROLLER_TOKEN" -H "Content-Type: application/json" -d '{"extra_vars":{"branch":"main"}}' | jq -r '.job')
    # Check the job status (in practice, poll until it leaves "pending"/"running")
    - STATUS=$(curl -s -H "Authorization: Bearer $CONTROLLER_TOKEN" "https://tower.example.com/api/v2/jobs/$JOB_ID/" | jq -r '.status')
    # Fail the pipeline if the job did not succeed (the controller reports "successful", not "success")
    - if [ "$STATUS" != "successful" ]; then echo "Deployment failed with status $STATUS"; exit 1; fi
```
This example demonstrates an integration in which the pipeline automatically fails if the job does not complete successfully. Dynamic job template lookup and token-based credential handling minimize manual intervention and improve security.
Conclusion
Integrating the Controller into CI/CD pipelines transforms deployment workflows by automating job management, enforcing secure credential handling, and enabling scalable execution. By embedding the Controller into CI/CD, organizations achieve faster, more reliable deployments with reduced operational overhead. Features like the dynamic runner pool allow scaling to handle fluctuating workloads, while integration with platforms like GitLab or GitHub Actions reduces setup complexity. This streamlines development pipelines and aligns with modern DevOps practices, fostering a culture of continuous delivery and resilience. The Project feature further enhances traceability, linking code commits directly to execution outcomes. As CI/CD evolves, the Controller’s flexibility and scalability make it a critical tool for organizations aiming to optimize their software delivery processes.