Once a developer pushes code to GitHub, it is automatically put into the Jenkins workspace (using GitHub webhooks). From there, the operations team creates various jobs in Jenkins for automatic testing, monitoring, and deployment to the production system.
Now it is well known that silos exist between the two teams, and it becomes difficult for the operations team to write the Jenkins configurations without consulting the developer.
So why not come up with a method where the developer writes these jobs in his own way, i.e. as CODE, pushes it to GitHub, and that code automatically creates the jobs in Jenkins?
Is this possible? Yes!! Using a powerful language: GROOVY.
Groovy is a powerful, multi-faceted language for the Java platform.
Let’s start by setting up an environment.
- The Developer will write the code in the Groovy language and push it to GitHub.
- Using webhooks, Jenkins will automatically pull this code, and the jobs for automatic website deployment will be created.
- Then the Jenkins administrator simply has to run the first job; all the rest work through chaining and can be seen visually in a Build Pipeline view.
Let’s say the developer has written the code and pushed it to GitHub.
The admin’s task is now very easy: just create a job to pull this code.
This is called the Seed Job. The first time this job runs, it fails with an error (typically because the DSL script has not yet been approved under Jenkins’ script security).
When you run the job again, 4 newly generated jobs and 1 Build Pipeline view are created, as can be seen in the output:
But this is a manual approach, and someone might not know how to do this; it also consumes time. So a better approach to follow is -
Now we have 4 jobs and 1 Build Pipeline view, all made from the code that the developer sent!!
The jobs that the Developer made were based on these requirements:
- Job 1 => Pull all the code/webpages from the GitHub repository into the base OS.
- Job 2 => By looking at the code, Jenkins should launch the respective container and start an interpreter: it should launch a Docker image, deploy the code, and start the interpreter.
- Job 3 => Test whether the program/code is working fine, and if not, send a mail to the developer.
- Job 4 => This is a monitoring job. It will deploy and run the webpage in the production environment.
For the 4th job, monitoring is not required, because instead of using Docker on its own I have used Kubernetes, which manages the pods in Job 3 itself.
Generated Job 1
This job simply pulls the code/webpages from GitHub whenever there is any update on GitHub.
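As a sketch: Jenkins’ Git SCM configuration and the webhook trigger handle the checkout itself, so the job’s shell step can be as simple as copying the workspace to a staging directory on the base OS (the path `/tmp/webapp_src` is an assumption for this illustration):

```shell
# Hypothetical shell step for Job 1: Jenkins has already checked out the repo
# into the workspace; copy the webpages to a staging directory on the base OS.
mkdir -p /tmp/webapp_src
echo '<h1>demo page</h1>' > index.html   # stand-in for a checked-out webpage
cp -v index.html /tmp/webapp_src/
```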
Generated Job 2
I will be handling Job 2 using a dynamic slave node, i.e. Kubernetes will do its work on a slave node.
In Docker’s systemd service file we have configured the daemon to listen over TCP on port 4321 on all interfaces (that’s why 0.0.0.0), so any IP can use the Docker service.
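As a sketch, the change amounts to a systemd drop-in like the following (written to a local file here; on the Docker host it would go under `/etc/systemd/system/docker.service.d/`, followed by a daemon reload and restart). Note that exposing 0.0.0.0 without TLS is fine for a lab but insecure for production:

```shell
# Hypothetical systemd drop-in exposing the Docker daemon on TCP port 4321.
cat > docker-override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4321
EOF
# On the Docker host:
#   sudo cp docker-override.conf /etc/systemd/system/docker.service.d/override.conf
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```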
When Job 2 is triggered, it asks for a slave node to be created. Job 2 then does its work of deploying the pods as required. When the job completes, the slave node is terminated.
Using a dynamic slave node is beneficial, as it helps save resources and manage them efficiently.
The Dockerfile for the kubectl client, Python, and SSH:
You can pull this image from my DockerHub.
SSH is set up so that Jenkins can connect to the slave node through the Docker service.
Python is installed because I am running a Python code on the slave.
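The original Dockerfile is shown as an image; a minimal sketch matching that description could look like the following (the base image, the kubectl version and download URL, and the root/redhat credentials mentioned later in the article are all assumptions):

```shell
# Hypothetical Dockerfile for the slave image: sshd + python3 + kubectl client.
cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum install -y openssh-server openssh-clients python3 curl && \
    curl -Lo /usr/bin/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubectl && \
    chmod +x /usr/bin/kubectl && \
    echo 'root:redhat' | chpasswd && \
    ssh-keygen -A
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EOF
```

Building it on the Docker server with `docker build -t kube_docker .` gives the image name used in the Docker Agent Template.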
Now we will configure the cloud:
Here we configure several things:
- First, enter the Docker host URI that we configured in the docker.service system file.
- I have written 0.0.0.0 to make it dynamic, i.e. any IP can connect. 4321 is an arbitrary port I chose; before using a port, confirm in advance that it is free.
- Heads Up: Testing Connection is always a good step!!
- Then, in the Docker Agent Template, we give the name of the Docker image built on the Docker server from the Dockerfile mentioned above. The name of the image is kube_docker.
- A label is provided for the client’s use: when creating a job, the client uses these labels to specify which slave should satisfy the requirement.
- Then we attach the volume so that the config files and authentication credentials can be used by the kubectl client program.
- The connect method is SSH, and we provide the same username and password as in the Dockerfile above, i.e. root and redhat.
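Before saving the cloud configuration, it helps to confirm the TCP socket is actually reachable from the Jenkins machine. A quick sketch (the host IP is a placeholder):

```shell
# Check whether a TCP port is open using bash's /dev/tcp (no extra tools needed).
port_open() {
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}
# Replace 192.168.1.10 with your Docker host's IP:
port_open 192.168.1.10 4321 && echo "Docker socket reachable" \
  || echo "Not reachable: recheck docker.service, the port, and the firewall"
```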
Job 2 is an interesting job. Here we want Jenkins itself to judge which interpreter or language the code uses and launch a respective pod from a Docker image.
This Python file checks for different file extensions and prints every extension it finds in the entire folder.
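The article’s script is in Python; as a rough shell equivalent of the same logic (sample files are created just for the demo):

```shell
# List every distinct file extension found under a folder.
mkdir -p sample_repo
touch sample_repo/index.html sample_repo/style.css sample_repo/app.py
find sample_repo -type f -name '*.*' | sed 's/.*\.//' | sort -u
# prints: css, html, py (one per line)
```

The detected extensions then decide which image (e.g. a Python interpreter vs. a web server) Job 2 should launch.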
Next, kubectl commands create the Deployment and Service from YAML files, first checking whether the pods already exist; if they do, the deployment is replaced with the updated code. Kubernetes Deployments also support rolling updates through the YAML file.
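As a sketch of that create-or-replace logic (the deployment name `webapp` and the YAML file names are assumptions; the real files are in the linked repo):

```shell
# Hypothetical Job 2 deploy step, wrapped in a function so the branch is clear.
deploy_webapp() {
  if kubectl get deployment webapp >/dev/null 2>&1; then
    # Deployment exists: apply the updated spec; changing the pod template
    # makes the Deployment perform a rolling update.
    kubectl apply -f deployment.yaml -f service.yaml
  else
    # First run: create the Deployment and Service from scratch.
    kubectl create -f deployment.yaml -f service.yaml
  fi
}
# In the Jenkins "Execute shell" step: deploy_webapp
```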
You can find the service and Deployment files here: GitHub.
Generated Job 3
This job tests the website and its webpages: how they will look on the front end to the client.
Here I am using a trick with bash shell exit codes: by deliberately returning an error code, the job fails if a container is not running. When the build is unstable, a mail is sent to the developer.
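The trick in isolation looks like this (the container check is simulated with a variable here; in the real job it would come from `docker ps` or `kubectl get pods`):

```shell
# If the check fails, exiting with a non-zero status makes Jenkins mark the
# build failed/unstable, which triggers the configured email to the developer.
running_pods=0   # stand-in for: kubectl get pods | grep -c Running
container_ok() {
  [ "$1" -gt 0 ]   # non-zero exit status when nothing is running
}
if ! container_ok "$running_pods"; then
  echo "No container running -- would 'exit 1' here to fail the build"
fi
```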
Generated Job 4
This job deploys the webpage to the production environment only if the previous job succeeded, i.e. only if it has been passed by the testing team.
This is the code with which we create the Build Pipeline view.