Provision & Configure Web Server on AWS EC2 Instance using Ansible

How long does it take you to provision and configure instances when a sudden requirement comes up? Suppose one deployment of your production environment, the one serving the company’s main website, gets corrupted. What do you do? How long does it take you to bring it up again?

Daksh Jain · Sep 13, 2020 · 10 min read
Application Deployment + Configuration Management + Continuous Delivery

Your website is the face of your company. It is what brings business to you. You follow the best DevOps practices and use CI/CD tools to take care of your website. Everything from Git, Jenkins, Docker, containers, Kubernetes, Deployments, Splunk, and monitoring tools is working great. Suddenly a Deployment gets corrupted; the site still runs, all thanks to Kubernetes, BUT,

What happens if you need to start again from provisioning, the very first step? How long does it take to provision the instance again (if on the cloud), get all the required resources, configure the system, install all the required software, start the services, and go through the many other steps to get the website running?
That time should be as short as possible, because every second of downtime can mean a significant loss for your business.

Here, the fastest and simplest way to automate apps and IT infrastructure comes in handy: ANSIBLE.

I have created a setup on AWS Cloud using 1 single Ansible Playbook. The steps followed are:

First Play

  • Create a Vault to store the AWS Secret & Access Key.
  • Create the required variables.
  • Create a Security Group for AWS EC2 instance.
  • Provision EC2 Instances on AWS Cloud.
  • Then put the public IPs of all the EC2 instances into a host group “webserver” in the Ansible inventory.

Second Play

  • This play works on the host group “webserver” and calls the role I have created to configure a web server (httpd) on the Linux instances.
  • The role will install the httpd software on all the Linux instances.
  • It will copy dummy code from my GitHub into a folder on the instances.
  • Then it starts the httpd service.

Note: Here I will be focusing only on the “httpd” software because I am using “ami-0ebc1ac48dfd14136”, an Amazon Linux 2 AMI from the RedHat OS family. For any other OS family, such as Debian, you can use the “apache2” package and configure accordingly; a rough sketch follows.
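
For reference only, a rough sketch of what the equivalent install/start tasks would look like on a Debian-family image (not used anywhere in this article):

# hypothetical Debian-family equivalent of the httpd tasks used later
- name: install apache2
  package:
    name: apache2
    state: present

- name: start apache2
  service:
    name: apache2
    state: started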

Let’s start by building the code:

I have Ansible installed on my local Linux VM — RHEL 8. I am creating the playbook in this VM.

Step 1 — Ansible config file

  • inventory is set to /etc/inventory so that, if specified, Ansible picks up the managed hosts from there.
  • host_key_checking is set to false so that Ansible is not interrupted by the SSH host-key verification prompt when it connects to the freshly launched instances.
  • remote_user is specified as ec2-user because when Ansible SSHes into the instance it will not log in as root.
  • ask_pass is set to false because we are authenticating with a key, not a password.
  • private_key_file is specified with the full path to the .pem file that will be used to SSH into the AWS instances.
  • Then, in the privilege_escalation section, I have specified become_method as sudo. This is needed because we are not logging in with the root account, and sudo will be required to install software or start services. A sample config reflecting these settings is sketched right after this list.
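
A minimal ansible.cfg matching these settings could look roughly like this (the inventory path and the key file path are illustrative; substitute your own):

[defaults]
inventory = /etc/inventory
host_key_checking = false
remote_user = ec2-user
ask_pass = false
# illustrative path; point this at the .pem file of the AWS key pair used later (key1)
private_key_file = /root/key1.pem

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false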

Step 2 — Create an Ansible Vault

After creating an AWS Account you are provided with an Access Key and a Secret Key. Create an Ansible Vault that will store your Keys securely as it is encrypted.

ansible-vault create mycred.yml
Ansible Vault

If someone without the password tries to view the file, they will only see the $ANSIBLE_VAULT header followed by encrypted ciphertext, not the actual keys.

To view the file you can use this command and put the correct password.

ansible-vault view mycred.yml
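
Inside the vault I store the two keys as plain YAML variables. The names below are the ones the playbook references later ({{ access_key }} and {{ secret_key }}); the values shown are placeholders:

# mycred.yml: contents as typed inside ansible-vault (stored encrypted on disk)
access_key: <your AWS access key>
secret_key: <your AWS secret key>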

Step 3 — Main Playbook

Play 1 — Configure Localhost for Provisioning AWS Instance

Variables -

- hosts: localhost
  gather_facts: no
  vars_files:
    - mycred.yml
  vars:
    myport: 81
    region: ap-south-1
    subnet: subnet-c48ee588
    sg: websg
    type: t2.micro
    number: 1
  • hosts: localhost
    Where do we want to run this play? I am running it on localhost because from here I want to provision the EC2 instances and then work on them.
  • gather_facts: no saves time and memory, because we don’t need Ansible to gather any facts about the managed node here.
  • vars_files
    This contains the vault that holds the access & secret key.
    NOTE: If you change the name of the vault file, change it here as well.
  • vars
    Here multiple variables are given default values. These values will be used if the user runs the playbook without overriding them.
    The variable “subnet” is a required variable & its value needs to be provided by the user according to their subnet ID in the AWS console.

Installations -

  tasks:
    - name: installing python
      package:
        name: python36
        state: present

    - name: installing boto3
      pip:
        name: boto3
        state: present

Boto3 is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure, and manage AWS services.
So with these 2 tasks I check that Python & Boto3 are present; if they are not, the tasks install them.

Security Group for EC2 Instance -

    - name: create security group
      ec2_group:
        name: "{{ sg }}"
        description: The webservers security group
        region: "{{ region }}"
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
          - proto: tcp
            from_port: "{{ myport }}"
            to_port: "{{ myport }}"
            cidr_ip: 0.0.0.0/0
        rules_egress:
          - proto: all
            cidr_ip: 0.0.0.0/0

Next, I create the security group to be used by the AWS EC2 instances that will be launched in a later task. It picks up its values from the variables declared above.
In the security group, 2 things have to be set: ingress and egress.

Ingress is the traffic coming into our website. We specify it keeping in mind which ports we want open. I have kept 2 ports open: SSH and HTTP.

  • SSH so that Ansible can connect to the instances to do the configuration.
  • HTTP so that traffic can reach the website. The port number comes from the variable defined above.

Egress has been set to all ports so that outbound traffic originating from the instances can go out to the public Internet.

Provision EC2 Instance -

    - name: launching ec2 instance
      ec2:
        key_name: key1
        instance_type: "{{ type }}"
        image: ami-0ebc1ac48dfd14136
        wait: true
        group: "{{ sg }}"
        count: "{{ number }}"
        vpc_subnet_id: "{{ subnet }}"
        assign_public_ip: yes
        region: "{{ region }}"
        state: present
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        instance_tags:
          Name: webserver
      register: ec2
  • key_name is the name of the AWS key pair used to launch the instance; its .pem file must be the one pointed to by private_key_file in ansible.cfg.
  • instance_type is set using the variables.
  • image is the AMI ID, fixed here to the Amazon Linux 2 AMI mentioned above.
  • wait will wait for the instance to reach its desired state before returning.
  • group takes the name of the security group.
  • count is the number of instances you want to launch.
  • vpc_subnet_id is the ID of the subnet in which to launch the EC2 instances.
  • assign_public_ip allocates a public IP to the instance so that the public world can connect to it.
  • region (e.g. ap-south-1, ap-southeast-1) is where to launch the instance.
  • state specifies the state of the instance: running, stopped, or terminated. The default value is present.
  • aws_access_key & aws_secret_key are taken from the vault.
  • instance_tags gives the instances a common Name tag (webserver) so they can be identified later as the web servers.

The whole thing is registered in a variable “ec2” which will be used later.
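
For orientation, the registered result holds a list of the launched instances, roughly shaped like the sketch below (keys abridged; exact fields depend on the module version). The public_ip and public_dns_name values used in the next tasks come from here:

# rough shape of the registered "ec2" variable (values are placeholders)
ec2:
  instances:
    - id: i-0123456789abcdef0
      public_ip: 13.233.x.x
      public_dns_name: ec2-13-233-x-x.ap-south-1.compute.amazonaws.com
      state: running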

Add to Inventory dynamically -

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: webserver
      loop: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        state: started
      loop: "{{ ec2.instances }}"

add_host is a module that adds IPs to the inventory dynamically, in memory, for use by the rest of the playbook. I have used a loop here because this playbook can launch as many instances as the user wants.
So the loop iterates over ec2.instances, and the hostname keyword uses item.public_ip; item is the predefined loop variable in Ansible.

Since the EC2 instances take some time to boot and accept connections, we have to tell Ansible to wait.
The wait_for module is used with port 22 (SSH) and is looped over all the instances so that Ansible waits until it can connect to each of them. Once connected, it moves on to the next play.

Play 2 — Configure the EC2 Instances to work as Web Servers

- hosts: webserver
  gather_facts: no
  tasks:
    - command: curl http://ipv4.icanhazip.com
      register: x

    - debug:
        var: x.stdout

    - name: Pass variables to role
      include_role:
        name: httpdserver
      vars:
        my_ip: "{{ x.stdout }}"

hosts: webserver
This play works on the dynamically created host group that holds the public IPs of the EC2 instances.

gather_facts: no saves time and memory, because we don’t need Ansible to gather any facts about the managed nodes here.

The first task hits the URL http://ipv4.icanhazip.com
This URL returns the public IP of the machine that calls it, and since the play runs on every host in the webserver group, each instance obtains its own public IP.
The result is then registered in a variable.

include_role is a module that calls a role. I have used it so that I can pass the IP registered in the “x” variable to the files in the role.
For this, the role variable my_ip receives the value of x.stdout, i.e. the instance’s public IP.

Role -

ansible-galaxy list

The output of this command shows the 3 default folders in which you can create roles. These paths are known to Ansible, so it is better to create your roles in one of them.
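
On a default installation these search paths are usually the following (they can differ if roles_path is customized in ansible.cfg):

~/.ansible/roles
/usr/share/ansible/roles
/etc/ansible/roles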

cd /etc/ansible/
mkdir roles
cd roles

The role is created using this command:

ansible-galaxy init httpdserver

This will create a folder named “httpdserver” containing multiple subfolders.

These folders contain pre-created main.yml files in which we write our tasks, handlers, & variables. The typical layout is sketched below.
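
The layout generated by ansible-galaxy init typically looks like this (it can vary slightly between Ansible versions):

httpdserver/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
└── vars/main.yml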

vars -

# vars file for httpdserver
my_port: 81
my_path: /var/www/html/

2 variables have been specified here:
  • my_port: 81
  • my_path: /var/www/html/
These will be used in the tasks and the template.

tasks -

- name: install httpd
  package:
    name: httpd
    state: present
  register: status

- name: install php
  package:
    name: php
    state: present

- name: configure httpd
  template:
    src: my.conf
    dest: /etc/httpd/conf.d/my.conf
  when: status.rc == 0
  notify: restart httpd

First thing is to install httpd software using the package module.

Each service has a configuration file. So we are sending our own configuration file from the Controller Node (localhost, where Ansible runs) to the Managed Nodes, i.e. the EC2 instances in our case.

The configuration file template (my.conf) is kept in the templates folder of the same role.

- name: copy code
  get_url:
    url: https://raw.githubusercontent.com/Dakshjain1/php-cloud/master/index.php
    dest: "{{ my_path }}index.html"

- name: start httpd
  service:
    name: httpd
    state: started
  when: status.rc == 0

The get_url module downloads this code and places it in the destination specified: it will be /var/www/html/index.html.
Finally, the httpd service is started.

template -

Listen {{ my_port }}
<VirtualHost {{ my_ip }}:{{ my_port }}>
    DocumentRoot {{ my_path }}
</VirtualHost>

This is the config file template; it uses Jinja2 syntax and picks up the values of the my_ip, my_port, and my_path variables.
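
For illustration, if my_ip resolves to, say, 13.233.10.20 and the default values of my_port and my_path are used, the rendered file would read roughly:

Listen 81
<VirtualHost 13.233.10.20:81>
    DocumentRoot /var/www/html/
</VirtualHost>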

Now, what if the config file is updated later? The service has already been started, so the start task alone will not pick up the change. For this, I have used the concept of handlers.

handler -

# handlers file for httpdserver
- name: restart httpd
  service:
    name: httpd
    state: restarted

This is the handler that gets notified when the configuration file is updated.

You can find these code files on my GitHub profile.

Now I will run the playbook -

NOTE: subnet is a required variable and must be passed on the command line.

ansible-playbook ec2_v2.yml -e subnet=<your subnet id> -e number=<number of instances to launch> --ask-vault-pass
3 instances provisioned and configured with a single ansible-playbook run

It can clearly be seen that 3 instances have been launched.

All 3 Instances are running on port 81
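
A quick way to check any one of them from your own machine is to hit its public IP on the configured port (the IP below is a placeholder; take a real one from the AWS console or the playbook output):

curl http://<instance-public-ip>:81/index.html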

Now the only remaining issue is that, with 3 instances launched, clients should still hit just a single URL. This is done using a load balancer.

Stay tuned!! In the next article, I will show you how to balance the load so that a single URL serves traffic with all the instances as backend servers.

For any doubt, suggestions, or feedback connect to me on LinkedIn.
