Project Overview
This project provides a broad, multi-environment, multi-component infrastructure-as-code setup that uses Terraform for provisioning and Ansible for configuration management. It manages development, staging, and production environments and covers core cloud services, including compute, storage, and security. Through automation, the project keeps deployments consistent and scalable across all environments while following best practices for effective and robust infrastructure management.
Step-by-Step Instructions:
Setting Up the Ubuntu OS Environment
First, you need an Ubuntu Linux environment in which to install and configure Terraform and Ansible. The environment can be any of the following:
Local Machine: If you have a computer running Ubuntu, you can set up the tools directly.
AWS EC2 Instance: Create an EC2 instance with Ubuntu as the operating system. Ensure it has a public IP and security groups configured for SSH access.
Virtual Machine: Use VirtualBox, VMware, WSL, or a similar tool to run an Ubuntu virtual machine.
AWS CLI
Install and configure the AWS CLI to interact with AWS services:
```bash
sudo apt update                  # update the package index
sudo apt install curl unzip      # install curl and unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"   # download the AWS CLI installer
unzip awscliv2.zip               # unzip the installer
sudo ./aws/install               # run the installation script
aws --version                    # check the version of the AWS CLI
```
Go to the AWS IAM Console
After that, go to Users and click Add User.
Assign the necessary permissions. Once the user is created, you'll be provided with an Access Key ID and Secret Access Key.
Open your terminal and run:
```bash
aws configure
```
Then enter your credentials.
Now, Install Terraform
```bash
sudo apt-get update                                         # update the package index
sudo apt-get install -y gnupg software-properties-common    # install dependencies
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg   # add the HashiCorp GPG key
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list   # add the repository
sudo apt-get update && sudo apt-get install terraform       # update the package index and install Terraform
terraform --version                                         # verify the installation
```
and Ansible
```bash
sudo apt-add-repository ppa:ansible/ansible   # add the Ansible PPA
sudo apt update                               # update the package index
sudo apt install ansible                      # install Ansible
ansible --version                             # check the installed version
```
Creating Directories for Terraform and Ansible
```bash
mkdir <your-project-name> && cd <your-project-name>
mkdir terraform   # create a directory for Terraform
mkdir ansible     # create a directory for Ansible
tree              # verify the structure of your directory
```
```
<your-project-name>/
├── terraform/
└── ansible/
```
Navigate to the Terraform directory
```bash
mkdir infra   # create the Terraform files here: ec2.tf, bucket.tf, dynamodb.tf, variable.tf, output.tf
tree
```
```
infra/
├── bucket.tf
├── dynamodb.tf
├── ec2.tf
├── output.tf
└── variable.tf
```
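The contents of these module files are not listed in this guide, so here is a minimal, hedged sketch of what the infra module might contain. The variable names (env, instance_count, instance_type, ami_id, bucket_name, table_name) and the output name ec2_public_ips are illustrative assumptions; align them with whatever your own module actually declares.
```hcl
# infra/variable.tf -- illustrative sketch; all variable names are assumptions
variable "env"            { type = string }
variable "instance_count" { type = number }
variable "instance_type"  { type = string }
variable "ami_id"         { type = string }
variable "bucket_name"    { type = string }
variable "table_name"     { type = string }

# infra/ec2.tf -- EC2 instances (and a key pair) for one environment
resource "aws_key_pair" "deployer" {
  key_name   = "devops-key-${var.env}"
  public_key = file("${path.root}/devops-key.pub")
}

resource "aws_instance" "app" {
  count         = var.instance_count
  ami           = var.ami_id
  instance_type = var.instance_type
  key_name      = aws_key_pair.deployer.key_name

  tags = {
    Name        = "${var.env}-app-${count.index + 1}"
    Environment = var.env
  }
}

# infra/bucket.tf -- an S3 bucket for the environment
resource "aws_s3_bucket" "app" {
  bucket = var.bucket_name
}

# infra/dynamodb.tf -- a DynamoDB table for the environment
resource "aws_dynamodb_table" "app" {
  name         = var.table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

# infra/output.tf -- public IPs consumed later by update_inventories.sh
output "ec2_public_ips" {
  value = aws_instance.app[*].public_ip
}
```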
```bash
cd ..   # go back to the terraform directory
```
Create the main.tf File (Using Modules for a Multi-Environment Setup)
In main.tf, we define three modules: dev, stage, and prod. Each one reuses the same infra module but can pass different settings for the EC2 instance type, AMI ID, S3 bucket name, and DynamoDB table name, and exposes the public IP addresses of the created EC2 instances as outputs.
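The actual main.tf is not reproduced here, so the block below is only a rough sketch of the idea. It reuses the hypothetical infra module variables from the earlier sketch (env, instance_count, instance_type, ami_id, bucket_name, table_name); the AMI ID, bucket names, and table names are placeholders. The root-level output names match the ones read later by update_inventories.sh.
```hcl
# main.tf -- illustrative sketch; all values and variable names are placeholders
module "dev_infra" {
  source         = "./infra"
  env            = "dev"
  instance_count = 2
  instance_type  = "t2.micro"
  ami_id         = "ami-0abcdef1234567890"   # replace with a valid Ubuntu AMI for your region
  bucket_name    = "dev-example-bucket"
  table_name     = "dev-example-table"
}

module "stg_infra" {
  source         = "./infra"
  env            = "stg"
  instance_count = 2
  instance_type  = "t2.micro"
  ami_id         = "ami-0abcdef1234567890"
  bucket_name    = "stg-example-bucket"
  table_name     = "stg-example-table"
}

module "prd_infra" {
  source         = "./infra"
  env            = "prd"
  instance_count = 3
  instance_type  = "t2.medium"
  ami_id         = "ami-0abcdef1234567890"
  bucket_name    = "prd-example-bucket"
  table_name     = "prd-example-table"
}

# Root-level outputs with the names that update_inventories.sh reads later
output "dev_infra_ec2_public_ips" {
  value = module.dev_infra.ec2_public_ips
}

output "stg_infra_ec2_public_ips" {
  value = module.stg_infra.ec2_public_ips
}

output "prd_infra_ec2_public_ips" {
  value = module.prd_infra.ec2_public_ips
}
```
The instance counts above mirror the inventories shown later (two dev servers, two staging servers, three production servers), but you can size each environment however you like.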
Create the providers.tf file (AWS provider configuration)
The providers.tf file configures the AWS provider and specifies the region. Authentication relies on the default credentials or profile you configured with the AWS CLI.
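As a minimal sketch (the region below is an assumption; use whichever region you configured with aws configure):
```hcl
# providers.tf -- illustrative sketch; adjust the region to your own setup
provider "aws" {
  region = "eu-west-1"   # placeholder region; credentials come from the AWS CLI configuration
}
```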
Create the terraform.tf file
The terraform.tf file initializes Terraform (required version and providers) and configures a remote backend to store the state file. This setup enables a robust, collaborative workflow.
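A hedged sketch of terraform.tf follows. The backend bucket and DynamoDB lock table names are placeholders and must refer to resources that already exist in your account; if you prefer to start simpler, omit the backend block and Terraform keeps a local state file instead.
```hcl
# terraform.tf -- illustrative sketch; backend names are placeholders
terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Remote backend for shared state (optional but recommended for teams).
  # The S3 bucket and DynamoDB lock table must already exist.
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # placeholder
    key            = "multi-env/terraform.tfstate"
    region         = "eu-west-1"                   # placeholder
    dynamodb_table = "terraform-state-lock"        # placeholder; enables state locking
  }
}
```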
Generate SSH Keys (devops-key and devops-key.pub)
Note: I have used the key name devops-key here; you can pick any name, just replace the old name everywhere it appears. Create the key pair with the ssh-keygen command:
```bash
ssh-keygen -t rsa -b 2048 -f devops-key -N ""
```
```bash
tree   # to see the structure
```
```
├── devops-key          # Private SSH key
├── devops-key.pub      # Public SSH key
├── infra
│   ├── bucket.tf
│   ├── dynamodb.tf
│   ├── ec2.tf
│   ├── output.tf
│   └── variable.tf
├── main.tf             # Defines environment-based modules
├── providers.tf        # AWS provider configuration
└── terraform.tf        # Backend configuration for state management
```
After that, run the following commands:
terraform init: Initialize Terraform with the required providers and modules.
terraform plan: Review the planned changes before applying them.
terraform apply: Apply the changes and provision the infrastructure.
You can see below that all the instances, buckets, and DynamoDB tables created through Terraform are up and running:
Instances:
Buckets:
DynamoDB tables:
Secure the Private Key
Before using the private key, set proper permissions so that other users cannot read it. Use the following command to restrict access:
```bash
chmod 400 devops-key   # set read-only permissions for the owner to ensure security
```
Access EC2 Instances
```bash
ssh -i devops-key ubuntu@<your-ec2-ip>
```
Create the dynamic inventories in the Ansible directory
```bash
mkdir -p inventories                                      # create the inventories directory
touch inventories/dev inventories/prod inventories/stg    # one inventory file per environment
```
For the dev inventory:
```ini
[servers]
server1 ansible_host=3.249.218.238
server2 ansible_host=34.241.195.105

[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/devops-key
ansible_python_interpreter=/usr/bin/python3
```
For the prod inventory:
```ini
[servers]
server1 ansible_host=3.252.144.3
server2 ansible_host=63.34.12.124
server3 ansible_host=34.244.48.139

[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/devops-key
ansible_python_interpreter=/usr/bin/python3
```
For the stg (staging) inventory:
```ini
[servers]
server1 ansible_host=34.244.89.121
server2 ansible_host=34.242.151.189

[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/devops-key
ansible_python_interpreter=/usr/bin/python3
```
Directory Structure
```
inventories
├── dev
├── prod
└── stg
```
Creating a playbook for installing Nginx on all servers
```bash
cd ../ansible   # navigate to the ansible directory
```
Create the playbooks directory inside the Ansible directory:
```bash
mkdir playbooks
```
Navigate to the playbooks directory:
```bash
cd playbooks
```
Create the install_nginx_playbook.yml file with the following content to install Nginx and render a webpage using the nginx-role:
```yaml
---
- name: Install Nginx and render a webpage to it
  hosts: servers
  become: yes
  roles:
    - nginx-role
```
Verify the directory structure:
```
ansible
├── inventories
│   ├── dev
│   ├── prod
│   └── stg
└── playbooks
    └── install_nginx_playbook.yml
```
Now initialize a role for Nginx, named nginx-role, from Ansible Galaxy
After doing this, use Ansible Galaxy to initialize the nginx-role. This will create a folder structure for all tasks, files, handlers, templates, and variables related to the Nginx role.
Navigate to the playbooks directory:
```bash
cd ansible/playbooks
```
Now, use the ansible-galaxy command to initialize the nginx-role:
```bash
ansible-galaxy role init nginx-role
```
This will create the following directory structure within nginx-role:
```
nginx-role
├── README.md
├── defaults
│   └── main.yml
├── files
│   └── index.html
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml
```
Add Custom Tasks and Files to Your nginx-role
Now that your role structure is ready, you can add your custom tasks and files.
Create a tasks/main.yml file under the nginx-role/tasks/ directory. This file contains the steps to install, configure, and manage the Nginx service. Here's the content for your tasks/main.yml:
```yaml
---
# tasks file for nginx-role
- name: Install nginx
  apt:
    name: nginx
    state: latest

- name: Enable nginx
  service:
    name: nginx
    enabled: yes

- name: Deploy webpage
  copy:
    src: index.html
    dest: /var/www/html
```
This will ensure that:
Nginx is installed with the latest version.
The Nginx service is enabled so it starts automatically on boot.
The index.html file is copied to the /var/www/html directory, from which the default Nginx webpage is served.
You can add an index.html file under the nginx-role/files/ directory and customize it as needed.
Note: You can add any HTML content for your custom webpage. The aim here is to serve a simple webpage as part of the Nginx configuration.
To add the update_inventories.sh script to your Ansible directory and integrate it with your existing setup, create a new file named update_inventories.sh in your ansible directory:
```bash
#!/bin/bash

# Paths and Variables
TERRAFORM_OUTPUT_DIR="../terraform"          # Replace with the actual Terraform directory path
ANSIBLE_INVENTORY_DIR="$(pwd)/inventories"   # Absolute path so it still resolves after the cd below

# Navigate to the Terraform directory
cd "$TERRAFORM_OUTPUT_DIR" || { echo "Terraform directory not found"; exit 1; }

# Fetch IPs from Terraform outputs (requires jq to be installed)
DEV_IPS=$(terraform output -json dev_infra_ec2_public_ips | jq -r '.[]')
STG_IPS=$(terraform output -json stg_infra_ec2_public_ips | jq -r '.[]')
PRD_IPS=$(terraform output -json prd_infra_ec2_public_ips | jq -r '.[]')

# Function to update an inventory file
update_inventory_file() {
    local ips="$1"
    local inventory_file="$2"
    local env="$3"

    # Create or clear the inventory file
    > "$inventory_file"

    # Write the inventory header
    echo "[servers]" >> "$inventory_file"

    # Add dynamic hosts based on IPs
    local count=1
    for ip in $ips; do
        echo "server${count} ansible_host=$ip" >> "$inventory_file"
        count=$((count + 1))
    done

    # Add common variables
    echo "" >> "$inventory_file"
    echo "[servers:vars]" >> "$inventory_file"
    echo "ansible_user=ubuntu" >> "$inventory_file"
    echo "ansible_ssh_private_key_file=/home/fozia_user/devops-key" >> "$inventory_file"
    echo "ansible_python_interpreter=/usr/bin/python3" >> "$inventory_file"

    echo "Updated $env inventory: $inventory_file"
}

# Update each inventory file
update_inventory_file "$DEV_IPS" "$ANSIBLE_INVENTORY_DIR/dev" "dev"
update_inventory_file "$STG_IPS" "$ANSIBLE_INVENTORY_DIR/stg" "stg"
update_inventory_file "$PRD_IPS" "$ANSIBLE_INVENTORY_DIR/prod" "prod"

echo "All inventory files updated successfully!"
```
Verify the structure of the directory:
```
ansible
├── inventories
│   ├── dev
│   ├── prod
│   └── stg
├── playbooks
│   ├── install_nginx_playbook.yml
│   └── nginx-role
│       ├── README.md
│       ├── defaults
│       │   └── main.yml
│       ├── files
│       │   └── index.html
│       ├── handlers
│       │   └── main.yml
│       ├── meta
│       │   └── main.yml
│       ├── tasks
│       │   └── main.yml
│       ├── templates
│       ├── tests
│       │   ├── inventory
│       │   └── test.yml
│       └── vars
│           └── main.yml
└── update_inventories.sh
```
Before running the update_inventories.sh script, ensure that it is executable:
```bash
chmod +x update_inventories.sh
```
You can now execute the script to update the inventory files with the IPs fetched from Terraform:
```bash
./update_inventories.sh
```
After running the script, check the inventories directory. The dev, stg, and prod inventory files should now be updated with the IPs of your servers and the necessary variables:
```ini
[servers]
server1 ansible_host=63.35.220.234
server2 ansible_host=3.255.173.173

[servers:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/home/fozia_user/devops-key
ansible_python_interpreter=/usr/bin/python3
```
Now that your inventory files are updated, you can reference them in your Ansible playbooks using the -i option. For the stg inventory (run from the ansible directory):
```bash
ansible-playbook -i inventories/stg playbooks/install_nginx_playbook.yml
```
Repeat this process for the dev and prod environments as well:
```bash
ansible-playbook -i inventories/dev playbooks/install_nginx_playbook.yml
ansible-playbook -i inventories/prod playbooks/install_nginx_playbook.yml
```
Verify that the HTML page is visible on all servers, for every inventory: dev, stg, and prod.
Final directory structure:
```
├── ansible
│   ├── inventories
│   │   ├── dev
│   │   ├── prod
│   │   └── stg
│   ├── playbooks
│   │   ├── install_nginx_playbook.yml
│   │   └── nginx-role
│   │       ├── README.md
│   │       ├── defaults
│   │       │   └── main.yml
│   │       ├── files
│   │       │   └── index.html
│   │       ├── handlers
│   │       │   └── main.yml
│   │       ├── meta
│   │       │   └── main.yml
│   │       ├── tasks
│   │       │   └── main.yml
│   │       ├── templates
│   │       ├── tests
│   │       │   ├── inventory
│   │       │   └── test.yml
│   │       └── vars
│   │           └── main.yml
│   └── update_inventories.sh
└── terraform
    ├── infra
    │   ├── bucket.tf
    │   ├── dynamodb.tf
    │   ├── ec2.tf
    │   ├── output.tf
    │   └── variable.tf
    ├── main.tf
    ├── providers.tf
    ├── terraform.tf
    ├── terraform.tfstate
    └── terraform.tfstate.backup
```
Navigate to the Terraform Directory: Go to the directory where your Terraform configuration files are located.
```bash
cd /path/to/terraform/directory
```
Run terraform destroy: this command destroys all the resources that were created by Terraform. The --auto-approve flag ensures you won't be prompted to confirm the destruction:
```bash
terraform destroy --auto-approve
```
Once the command finishes executing, your infrastructure is completely torn down and all resources are cleaned up. This final step confirms that you have a well-managed infrastructure setup that can be recreated anytime using Terraform and Ansible.
This project has given you hands-on experience in managing infrastructure and configurations for multiple environments using industry-standard tools like Terraform and Ansible. You have successfully automated your infrastructure management, from provisioning to configuration, across different environments.
You can now apply these skills to any real-world scenario, ensuring that infrastructure is managed efficiently, securely, and consistently across any environment.
Thank you for reading this blog. I hope it was informative enough to help you establish and manage multi-environment infrastructure using Terraform on AWS. If you have any questions or feedback, don't hesitate to reach out through the comments or my GitHub repository.
All the best for your DevOps journey! 🚀😊