Website Rollout Feature Updates || Zero Downtime

Ever wondered what happens when Facebook rolls out an update like "dark mode", or when Amazon adds some cool new gadgets for you to shop? You might be thinking: the page might not load, the site might be slow, and surely they need time to edit their own website.

According to Income Diary, Amazon makes $1,084 per second. They can't afford to lose business like that, and downtime would make them look weak in front of clients and competitors all around Wall Street. So what do they do? How do they update the site while we keep using it in real time?

So I present to you, HOW THEY DO IT!

[Figure: Flow chart]
  • Git / GitHub -> source code management and version control
  • Jenkins -> automates the build and deployment stages of software delivery
  • Kubernetes -> container-orchestration system for automating application deployment, scaling, and management
  • Docker -> the container engine that Kubernetes will manage

Consider a developer ready to write some amazing scripts. He uses Git to version his code.

After he has written the code:

[Figure: Prototype of a webpage]

Once the code is ready, the developer pushes it to GitHub.

[Figure: Push to the GitHub repository]
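As a minimal sketch (the file name and branch are placeholders, not taken from the original repository), the push step looks like this:

# Stage, commit and push the updated webpage to GitHub
git add index.html
git commit -m "Updated webpage"
git push origin master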

Jenkins, our CI/CD tool, will pull all the files from the GitHub repository into its workspace. This is where Jenkins' role starts.

To help Jenkins pull the files, I will be using GitHub webhooks. A webhook sends a request to Jenkins that effectively says, "The developer has changed some code… come and pull it."

[Figure: GitHub webhook configuration]
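For reference, with the Jenkins GitHub plugin the webhook's Payload URL points at Jenkins' /github-webhook/ endpoint; the hostname here is a placeholder for whatever public address GitHub can reach:

Payload URL:  http://<public-jenkins-address>/github-webhook/
Content type: application/json
Events:       Just the push event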

ngrok is a tool I used to expose my private IP on the internet so that GitHub can reach Jenkins; its detailed usage is explained in a separate article.
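A minimal example, assuming Jenkins is running locally on its default port 8080:

# Expose the local Jenkins port through a public ngrok URL
ngrok http 8080
# The forwarding address ngrok prints (e.g. https://<random-id>.ngrok.io)
# is what goes into the webhook's Payload URL above.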

  • All of the repository's contents are now in Jenkins' workspace.
  • The developer also provides a very basic Dockerfile, to which Jenkins will dynamically add the webpage files (a sketch follows after this list).
  • To help Jenkins build a Docker image from the Dockerfile, I am using the CloudBees Docker Build and Publish plugin, which also pushes the image to DockerHub.
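As a rough sketch of this step (the base image, file names and web root are my own assumptions, not taken from the original repository), the Dockerfile that Jenkins ends up building could look like this once the webpage line has been added:

# Basic Dockerfile from the developer, with the webpage copy step appended by Job 1
FROM httpd:2.4
# Copy whatever HTML files were pushed to GitHub into Apache's web root
COPY *.html /usr/local/apache2/htdocs/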

We are using Kubernetes to manage our containers because if a container dies or gets corrupted, Docker by itself won't bring it back up. For all of that smart management of containers we need Kubernetes.

There are a variety of resources that Kubernetes provides, but the one we are going to use is the Deployment. It will manage:

  • Pod creation
  • Monitoring the health of the pods
  • Replicas for load balancing
  • Rolling out updates (see the strategy sketch just below this list)
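For context, the rolling-update behaviour comes from the Deployment's update strategy. A sketch of the relevant spec fields (the values shown are the Kubernetes defaults, not necessarily what this article's file uses):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most a quarter of the replicas may be down during an update
      maxSurge: 25%         # up to a quarter extra replicas may be created temporarily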

We will need a deployment .yaml file, which the DevOps engineers already have.

[Figure: website_deploy.yaml]
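Since the screenshot is not reproduced here, this is a minimal sketch of what website_deploy.yaml might contain (names, labels, replica count and container port are assumptions; the image dakshjain09/test is the one targeted by the sed command below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: website-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: website
        image: dakshjain09/test:1
        ports:
        - containerPort: 80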

But the problem is that they will always need to change the image tag in this file whenever a new image is built.

So we need to automate this step.

For this I have used a regular expression combined with the sed command in Linux.

This command is put in Job 1 only, so that as soon as the image is built and we have a tag, the tag is updated in the .yaml file. After this, Job 2 takes over.

sudo sed -i "s/image.*/image: dakshjain09\/test:${BUILD_NUMBER}/" /root/kube_config/website_deploy.yaml
  • image.*: matches the word "image" and everything after it on that line (".*" means zero or more characters), so the whole line is replaced.
  • ${BUILD_NUMBER}: the Jenkins build number, which provides the tag for the image.
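As a quick illustration (run by hand, with BUILD_NUMBER set manually instead of by Jenkins), the substitution behaves like this:

# Preview the substitution Job 1 performs
BUILD_NUMBER=7
sed "s/image.*/image: dakshjain09\/test:${BUILD_NUMBER}/" /root/kube_config/website_deploy.yaml | grep "image:"
# image: dakshjain09/test:7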

This complete process can be seen in the image below:

[Figure: Job 1: update Dockerfile -> update .yaml file -> build Docker image -> push to DockerHub]
[Figure: docker.service: a Docker client can connect to this Docker server on port 4321]

In Docker's docker.service system file we have configured the daemon to listen over TCP on 0.0.0.0 (that is, any IP can connect) on port 4321, so remote clients can use the Docker service.
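A sketch of the relevant line in Docker's systemd unit file (the exact path and remaining flags can differ per distribution; 0.0.0.0 and port 4321 are the values described above):

# /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4321 -H unix:///var/run/docker.sock
# After editing: systemctl daemon-reload && systemctl restart docker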

Now we are ready to deploy our website using pods in Kubernetes. We have a deployment .yaml file that is kept up to date by Jenkins Job 1.

I will be handling Job 2 using a dynamic slave node, i.e. the Kubernetes work will be done on a slave node.

When Job 2 is triggered, it asks for a slave node to be created. Job 2 then does its work of deploying the pods as required, and once the job is complete the slave node is terminated.

Using a dynamic slave node is beneficial because it saves resources and manages them efficiently: the slave exists only while the job runs.

The Dockerfile for the kubectl client and SSH:

Dockerfile: SSH and kubectl set up in the container; the kube_docker image is built from it:
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y openssh-server
RUN apt-get install openjdk-8-jre -y
RUN mkdir /var/run/sshd
RUN echo 'root:redhat' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# kubectl setup
RUN apt-get install curl -y
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

SSH is set up so that Jenkins can connect to the slave container it launches through the Docker service.

Now we will configure the cloud in Jenkins:

[Figure: Configuring the cloud for creating the slave node]

In this configuration we have done several things:

  • First, write the Docker Host URL that we configured in the docker.service system file.
  • I have written 0.0.0.0 so that any IP can connect. 4321 is an arbitrary port I have used; before picking a port, confirm in advance that it is actually free.
  • Heads up: testing the connection is always a good step!
  • Then, in the Docker Agent Template, we give the name of the Docker image created on the Docker server from the Dockerfile above. The name of the image is kube_docker.
  • A label is provided for the jobs' use: when a job is created, its label tells Jenkins which slave template should satisfy its requirement.
  • Then we attach the volume so that the kubeconfig and authentication files can be used by the kubectl client program.
  • The connect method is SSH, and we provide the same username and password as in the Dockerfile above, i.e. root and redhat.

This job simply creates the deployment using basic Kubernetes commands, and for a rolling update it replaces the deployment and the pods if they already exist.

[Figure: Job 2: deploys the pods and keeps monitoring them]
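The exact shell steps of Job 2 are only visible in the screenshots, so the following is a sketch under my own assumptions (the deployment and service names mirror the earlier sketches; /root/kube_config/ is the path already used by Job 1's sed command):

# Create the deployment on the first run, roll out the update on later runs
if kubectl get deployment website-deploy > /dev/null 2>&1
then
    kubectl apply -f /root/kube_config/website_deploy.yaml    # triggers a rolling update
    kubectl rollout status deployment/website-deploy          # wait until the new pods are ready
else
    kubectl create -f /root/kube_config/website_deploy.yaml
    kubectl create -f /root/kube_config/website_service.yaml  # fixed-NodePort service (see below)
fi
kubectl get pods -o wide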
[Figure: Deployment files for the deployment and the service]

A service .yaml file is created so that I can expose the deployment through PAT (a NodePort). I have fixed the port number to 31000, so that if at any time a pod fails and the deployment launches a new one, the client faces no issues.

A fixed port is also important because when the webpage is updated and the Deployment rolls out the new version, the pods are removed and recreated one by one.
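A sketch of that service file (the name and selector are assumptions matching the deployment sketch above; the node port 31000 is the one fixed in this setup):

apiVersion: v1
kind: Service
metadata:
  name: website-svc
spec:
  type: NodePort
  selector:
    app: website
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000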

[Figure: The big picture: the complete update process]

This is the whole process in action. In the Command Prompt at the bottom-right corner you can see that the client keeps hitting the website while the rolling update happens: the site gets updated very smoothly, without any downtime, and the clients are shifted to the new webpage.

You can find the code on my GitHub, and for anything else you can ping me.

These are the outputs for reference.

[Figure: Job 1 output]
[Figure: Job 2 output]

Worked in collaboration with Ashish Kumar.

Connect with me on LinkedIn as well.
