Creating a Multi-Environment Infrastructure using Terraform and Ansible

Project Overview

This project builds a multi-environment, multi-component infrastructure-as-code setup using Terraform for provisioning and Ansible for configuration management. It manages the development, staging, and production environments and covers the major cloud service areas, including compute, storage, and security. Through automation, the project ensures consistent, scalable deployments across all environments while following best practices for robust infrastructure management.

Step-by-Step Instructions:

  1. Setting Up the Ubuntu OS Environment

    First, you need an Ubuntu Linux environment to install and configure Terraform and Ansible. The environment can be any of the following:

    Local Machine: If you have a computer running Ubuntu, you can set up the tools directly.

    AWS EC2 Instance: Create an EC2 instance with Ubuntu as the operating system. Ensure it has a public IP and a security group configured for SSH access.

    Virtual Machine: Use VirtualBox, VMware, WSL, or a similar tool to run an Ubuntu virtual machine.

  2. AWS CLI

    Install and configure the AWS CLI to interact with AWS services:

     sudo apt update #update the packages of your system
     sudo apt install curl unzip
     curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" #download the AWS CLI installer
     unzip awscliv2.zip #unzip the installer
     sudo ./aws/install #run the installation script
     aws --version # check the version of aws cli
    
  3. Go to the AWS IAM Console

    Select Users and click Add User.

    Assign the necessary permissions. Once the user is created, you'll be provided with an Access Key ID and Secret Access Key.

    Open your terminal, run aws configure, and enter your credentials when prompted.

  4. Now, Install Terraform

     sudo apt-get update #update the package
     sudo apt-get install -y gnupg software-properties-common #install Dependencies
     curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg  #Add Key
     echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list  #Add repository
     sudo apt-get update && sudo apt-get install terraform  #then update the package and install terraform
     terraform --version  #verify the installation
    

  5. Install Ansible

     sudo apt-add-repository ppa:ansible/ansible # Add the ppa
     sudo apt update  #update package
     sudo apt install ansible  #install ansible
     ansible --version  #check the installed version
    

  6. Creating the Terraform and Ansible Directories

    1.   mkdir <your-project-name> && cd <your-project-name>
        mkdir terraform  # create a terraform directory 
        mkdir ansible #create a directory for ansible
        tree #it is a command to verify the structure of your directory
      
    2.   <your-project-name>/
        ├── terraform/
        └── ansible/
      
    3. Navigate to the terraform directory and create the infra module files

       cd terraform
       mkdir infra
       touch infra/ec2.tf infra/bucket.tf infra/dynamodb.tf infra/variable.tf infra/output.tf  #create the Terraform module files
       tree
       infra/
       ├── bucket.tf  
       ├── dynamodb.tf  
       ├── ec2.tf  
       ├── output.tf  
       └── variable.tf
      
    4. Make sure you are in the terraform directory (run cd .. if you are inside infra)

      Create the main.tf File (Using Modules for Multi-Environment Setup)

      In main.tf, we define three modules: dev, stage, and prod, all sourcing the same infra module. Each environment passes its own settings to the module, such as the EC2 instance type, AMI ID, S3 bucket name, and DynamoDB table name, and the module's outputs expose the public IP addresses of the created EC2 instances.
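A minimal sketch of what main.tf could look like (the module names, arguments, AMI placeholder, and instance counts are illustrative assumptions, not the exact project code):

```hcl
# main.tf: one module block per environment, all sourcing ./infra
module "dev_infra" {
  source         = "./infra"
  env            = "dev"
  instance_type  = "t2.micro"
  ami_id         = "ami-0abcdef1234567890"  # illustrative placeholder AMI
  instance_count = 2
}

module "stg_infra" {
  source         = "./infra"
  env            = "stg"
  instance_type  = "t2.micro"
  ami_id         = "ami-0abcdef1234567890"
  instance_count = 2
}

module "prd_infra" {
  source         = "./infra"
  env            = "prd"
  instance_type  = "t2.medium"
  ami_id         = "ami-0abcdef1234567890"
  instance_count = 3
}
```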

    5. Create the providers.tf file (AWS provider configuration)
      The providers.tf file configures the AWS provider and specifies the region. Authentication relies on the default credentials or profiles configured with the AWS CLI.
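A hedged sketch of providers.tf (the region is an assumption; use whichever region you deployed to):

```hcl
provider "aws" {
  # Credentials are picked up from the AWS CLI configuration (aws configure)
  region = "eu-west-1"  # assumption: replace with your region
}
```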

    6. Create the file terraform.tf
      The terraform.tf file pins the required Terraform version and configures a remote backend to store the state file. This setup ensures a robust and collaborative workflow.
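A hedged sketch of terraform.tf, assuming an S3 backend with DynamoDB state locking (which matches the bucket.tf and dynamodb.tf files created earlier); the bucket name, table name, and region are placeholders you must replace:

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # placeholder: your state bucket
    key            = "multi-env/terraform.tfstate"
    region         = "eu-west-1"                  # placeholder: your region
    dynamodb_table = "terraform-lock-table"       # placeholder: your lock table
  }
}
```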

    7. Generate SSH Keys (devops-key and devops-key.pub)
      Note: I have used devops-key as the key name; you can choose any name, just replace the old name everywhere it appears. Create the key pair with the ssh-keygen command: ssh-keygen -t rsa -b 2048 -f devops-key -N ""

    8.   tree # to see the structure
        ├── devops-key        # Private SSH key 
        ├── devops-key.pub    # Public SSH key
        ├── infra
        │   ├── bucket.tf
        │   ├── dynamodb.tf
        │   ├── ec2.tf
        │   ├── output.tf
        │   └── variable.tf
        ├── main.tf           # Defines environment-based modules
        ├── providers.tf      # AWS provider configuration
        ├── terraform.tf      # Backend configuration for state management
      
  7. After that, run terraform init to initialize Terraform with the required providers and modules

  8. terraform plan : review the planned changes before applying them

  9. terraform apply : apply the changes to provision the infrastructure

  10. You can see below that all the instances, buckets, and DynamoDB tables created through Terraform are up and running:

    1. instances:-

    2. Buckets:-

    3. DynamoDB:-

    4. Secure the Private Key

      Before using the private key, set restrictive permissions so that other users cannot read it. Use the following command to limit access: chmod 400 devops-key # Set read-only permissions for the owner

    5. Access the EC2 instances: ssh -i devops-key ubuntu@<your-ec2-ip>

    6. Create the dynamic Inventories in the Ansible directory

      mkdir -p inventories
      touch inventories/dev inventories/prod inventories/stg  #inventory files, not directories

    7. For the dev inventory:

       [servers]
       server1 ansible_host=3.249.218.238
       server2 ansible_host=34.241.195.105
      
       [servers:vars]
       ansible_user=ubuntu
       ansible_ssh_private_key_file=~/.ssh/devops-key
       ansible_python_interpreter=/usr/bin/python3
      
      1. prod inventory:

         [servers]
         server1 ansible_host=3.252.144.3
         server2 ansible_host=63.34.12.124
         server3 ansible_host=34.244.48.139
        
         [servers:vars]
         ansible_user=ubuntu
         ansible_ssh_private_key_file=~/.ssh/devops-key
         ansible_python_interpreter=/usr/bin/python3
        
        1. stg inventory:

           [servers]
           server1 ansible_host=34.244.89.121
           server2 ansible_host=34.242.151.189
          
           [servers:vars]
           ansible_user=ubuntu
           ansible_ssh_private_key_file=~/.ssh/devops-key
           ansible_python_interpreter=/usr/bin/python3
          
        2. Directory Structure

          inventories
          ├── dev
          ├── prod
          └── stg
          
    8. Creating a playbook for installing Nginx on all servers

    9.   cd ../ansible # Navigate to the ansible directory
      
  11. Create the playbooks directory inside the Ansible directory: mkdir playbooks

  12. Navigate to the playbooks Directory : cd playbooks

  13. Create the install_nginx_playbook.yml file with the following content to install Nginx and render a webpage using the nginx-role:

    ---
    - name: Install Nginx and render a webpage to it
      hosts: servers
      become: yes
      roles:
        - nginx-role
    
  14. verify the directory structure:

    ansible
    ├── inventories
    │   ├── dev
    │   ├── prod
    │   └── stg
    ├── playbooks
    │   └── install_nginx_playbook.yml
    
  15. Now initialize a role for Nginx, named nginx-role, from Ansible Galaxy

  16. After doing this, use Ansible Galaxy to initialize the nginx-role. This will create a folder structure for all tasks, files, handlers, templates, and variables related to the Nginx role.

  17. Navigate to the playbooks Directory: cd ansible/playbooks

  18. Now, use the ansible-galaxy command to initialize the nginx-role : ansible-galaxy role init nginx-role

  19. This will create the directory structure within the nginx-role

    nginx-role
    ├── README.md
    ├── defaults
    │   └── main.yml
    ├── files
    │   └── index.html
    ├── handlers
    │   └── main.yml
    ├── meta
    │   └── main.yml
    ├── tasks
    │   └── main.yml
    ├── templates
    ├── tests
    │   ├── inventory
    │   └── test.yml
    └── vars
        └── main.yml
    
  20. Add Custom Tasks and Files to Your nginx-role

  21. Now that your role structure is ready, you can add your custom tasks and files.

  22. Create a tasks/main.yml file under the nginx-role/tasks/ directory. This file contains all the steps to install, configure, and manage the Nginx service. Here's the content for tasks/main.yml:


    ---
    # tasks file for nginx-role
    
    - name: Install nginx
      apt:
        name: nginx
        state: latest
    
    - name: Enable and start nginx
      service:
        name: nginx
        state: started
        enabled: yes
    
    - name: Deploy webpage
      copy:
        src: index.html
        dest: /var/www/html
    
  23. This will ensure that:

    1. Nginx is installed with the latest version.

    2. Nginx service is enabled and starts automatically.

    3. The index.html file is copied to the /var/www/html directory, which is where the default Nginx webpage is served.

  24. You can add an index.html file under the nginx-role/files/ directory. This file can be customized as per your needs.

  25. Note: You can add HTML content with your custom webpage content as needed. The aim here is to serve a simple webpage as part of the Nginx configuration.

  26. Next, add an update_inventories.sh script to your Ansible directory to keep the inventory files in sync with Terraform's outputs.

  27. In your ansible directory, create a new file named update_inventories.sh

    #!/bin/bash
    
    # Paths and Variables
    TERRAFORM_OUTPUT_DIR="../terraform"  # Replace with the actual Terraform directory path
    ANSIBLE_INVENTORY_DIR="./inventories"
    
    # Navigate to the Terraform directory
    cd "$TERRAFORM_OUTPUT_DIR" || { echo "Terraform directory not found"; exit 1; }
    
    # Fetch IPs from Terraform outputs
    DEV_IPS=$(terraform output -json dev_infra_ec2_public_ips | jq -r '.[]')
    STG_IPS=$(terraform output -json stg_infra_ec2_public_ips | jq -r '.[]')
    PRD_IPS=$(terraform output -json prd_infra_ec2_public_ips | jq -r '.[]')
    
    # Return to the Ansible directory so the relative inventory paths resolve
    cd - > /dev/null || { echo "Ansible directory not found"; exit 1; }
    
    # Function to update inventory file
    update_inventory_file() {
        local ips="$1"
        local inventory_file="$2"
        local env="$3"
    
        # Create or clear the inventory file
        > "$inventory_file"
    
        # Write the inventory header
        echo "[servers]" >> "$inventory_file"
    
        # Add dynamic hosts based on IPs
        local count=1
        for ip in $ips; do
            echo "server${count} ansible_host=$ip" >> "$inventory_file"
            count=$((count + 1))
        done
    
        # Add common variables
        echo "" >> "$inventory_file"
        echo "[servers:vars]" >> "$inventory_file"
        echo "ansible_user=ubuntu" >> "$inventory_file"
        echo "ansible_ssh_private_key_file=/home/fozia_user/devops-key" >> "$inventory_file"
        echo "ansible_python_interpreter=/usr/bin/python3" >> "$inventory_file"
    
        echo "Updated $env inventory: $inventory_file"
    }
    
    # Update each inventory file
    update_inventory_file "$DEV_IPS" "$ANSIBLE_INVENTORY_DIR/dev" "dev"
    update_inventory_file "$STG_IPS" "$ANSIBLE_INVENTORY_DIR/stg" "stg"
    update_inventory_file "$PRD_IPS" "$ANSIBLE_INVENTORY_DIR/prod" "prod"
    
    echo "All inventory files updated successfully!"
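The script above assumes root-level Terraform outputs named dev_infra_ec2_public_ips, stg_infra_ec2_public_ips, and prd_infra_ec2_public_ips. A hedged sketch of how such outputs could be wired up (the module names and resource addresses below are illustrative assumptions, not the project's exact code):

```hcl
# infra/output.tf: expose the instances' public IPs from the module (illustrative)
output "ec2_public_ips" {
  value = aws_instance.app[*].public_ip  # assumes a counted aws_instance.app resource
}

# Root-level outputs the script reads with `terraform output -json` (illustrative)
output "dev_infra_ec2_public_ips" {
  value = module.dev_infra.ec2_public_ips
}

output "stg_infra_ec2_public_ips" {
  value = module.stg_infra.ec2_public_ips
}

output "prd_infra_ec2_public_ips" {
  value = module.prd_infra.ec2_public_ips
}
```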
    
  28. verify the structure of the directory

    ansible
    ├── inventories
    │   ├── dev
    │   ├── prod
    │   └── stg
    ├── playbooks
    │   ├── install_nginx_playbook.yml
    │   └── nginx-role
    │       ├── README.md
    │       ├── defaults
    │       │   └── main.yml
    │       ├── files
    │       │   └── index.html
    │       ├── handlers
    │       │   └── main.yml
    │       ├── meta
    │       │   └── main.yml
    │       ├── tasks
    │       │   └── main.yml
    │       ├── templates
    │       ├── tests
    │       │   ├── inventory
    │       │   └── test.yml
    │       └── vars
    │           └── main.yml
    ├── update_inventories.sh
    
  29. Before running the update_inventories.sh script, ensure that it is executable. You can do this by running the following command: chmod +x update_inventories.sh

  30. You can now execute the script to update the inventory files with the IPs fetched from Terraform: ./update_inventories.sh

  31. After running the script, check the inventories directory. The dev, stg, and prod inventory files should now be updated with the IPs of your servers and the necessary variables:

    [servers]
    server1 ansible_host=63.35.220.234
    server2 ansible_host=3.255.173.173
    
    [servers:vars]
    ansible_user=ubuntu
    ansible_ssh_private_key_file=/home/fozia_user/devops-key
    ansible_python_interpreter=/usr/bin/python3
    
  32. Now that your inventory files are updated, you can reference them in your Ansible playbooks by using the -i option:

    1. For the stg inventory :
    ansible-playbook -i inventories/stg playbooks/install_nginx_playbook.yml

  1. Repeat this process for the dev and prod environments as well: ansible-playbook -i inventories/dev playbooks/install_nginx_playbook.yml , ansible-playbook -i inventories/prod playbooks/install_nginx_playbook.yml

  2. Verify on all the servers that the HTML page is visible (for every inventory: dev, stg, prod)

  3. final directory structure:

    ├── ansible
    │   ├── inventories
    │   │   ├── dev
    │   │   ├── prod
    │   │   └── stg
    │   ├── playbooks
    │   │   ├── install_nginx_playbook.yml
    │   │   └── nginx-role
    │   │       ├── README.md
    │   │       ├── defaults
    │   │       │   └── main.yml
    │   │       ├── files
    │   │       │   └── index.html
    │   │       ├── handlers
    │   │       │   └── main.yml
    │   │       ├── meta
    │   │       │   └── main.yml
    │   │       ├── tasks
    │   │       │   └── main.yml
    │   │       ├── templates
    │   │       ├── tests
    │   │       │   ├── inventory
    │   │       │   └── test.yml
    │   │       └── vars
    │   │           └── main.yml
    │   └── update_inventories.sh
    └── terraform
        ├── infra
        │   ├── bucket.tf
        │   ├── dynamodb.tf
        │   ├── ec2.tf
        │   ├── output.tf
        │   └── variable.tf
        ├── main.tf
        ├── providers.tf
        ├── terraform.tf
        ├── terraform.tfstate
        └── terraform.tfstate.backup
    
  4. Navigate to the Terraform Directory: Go to the directory where your Terraform configuration files are located: cd /path/to/terraform/directory

  5. Run Terraform Destroy: Run the command to destroy all the resources that Terraform created. The --auto-approve flag ensures you won't be prompted to confirm the destruction

  6. Run terraform destroy --auto-approve. Once the command finishes executing, your infrastructure will be completely torn down, and you will have successfully cleaned up all resources

  7. This is the final step to ensure that you have a well-managed infrastructure setup that can be recreated anytime using Terraform and Ansible.

    This project has given you hands-on experience in managing infrastructure and configurations for multiple environments using industry-standard tools like Terraform and Ansible. You have successfully automated your infrastructure management, from provisioning to configuration, across different environments.

    You can now apply these skills to any real-world scenario, ensuring that infrastructure is managed efficiently, securely, and consistently across any environment.

Thank you for reading this blog. I hope it was informative enough to help you establish and manage multi-environment infrastructure using Terraform on AWS. If you have any questions or feedback, don't hesitate to reach out through the comments or my GitHub repository.

All the best for your DevOps journey! 🚀😊