Docker enables faster delivery of software. The main reason we launch an OS at all is to run a program, and Docker helps the user focus on the program rather than on installing an OS. Docker is a much-needed tool in the field of DevOps and has increased the pace at which companies deliver, helping them win at every stage.
Deploying your websites in the public domain requires a lot of planning.
Even one second of downtime means a big loss.
If the client is not able to connect, that means a loss of business.
An insecure database means a data breach, and again a loss of business.
Everything ends up being related to business and money.
So it is very important to plan ahead for the way you are setting up your business in the public domain.
Here is my plan:
Create 2 projects in different regions.
Follow the rule “No Root Account”: create a service account with the appropriate roles and permissions for proper management.
Then enable the required APIs.
Create the SQL server in one VPC in the first project.
Create the Google Kubernetes Engine (GKE) cluster in another VPC in the second project.
In the cluster, create a WordPress Deployment. This is an intelligent Deployment and will be taken care of by the fully managed GKE service.
Then finally, add the IP of the Deployment’s Service (Load Balancer) to the SQL server’s authorized networks so that no other IP can hit the SQL server. …
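That last step, letting only the Service’s Load Balancer IP reach the SQL server, is essentially an IP allowlist (GCP calls these “authorized networks” on Cloud SQL). A minimal sketch of the idea in Python; the Load Balancer and client IPs here are made up:

```python
import ipaddress

# Hypothetical allowlist: only the Load Balancer's IP (as a /32 network)
# is authorized to reach the SQL server.
AUTHORIZED_NETWORKS = [ipaddress.ip_network("35.200.10.7/32")]

def is_allowed(client_ip: str) -> bool:
    """Return True only if client_ip falls inside an authorized network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in AUTHORIZED_NETWORKS)

print(is_allowed("35.200.10.7"))   # the Load Balancer -> True
print(is_allowed("203.0.113.99"))  # any other IP -> False
```

In the real setup the cloud does this check for you; the sketch only shows why a single authorized IP shuts everyone else out.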
But there are 2 issues in this setup, especially when it is over the cloud:
How to launch multiple EC2 instances in different subnets
Your website is the face of your company; it is what brings business to you. You follow the best DevOps practices, using CI/CD tools to take care of your website. Everything, from Git, Jenkins, Docker, containers, Kubernetes, Deployments, Splunk, and the monitoring tools, is working great. Suddenly a Deployment gets corrupted; the site still runs, all thanks to Kubernetes. BUT,
What happens if you need to start from provisioning, the first step? How much time does it take to provision the instance again (if on the cloud), get all the required resources, configure the system, install all the required software, start the services, and complete the many other steps to get the website running?
The time taken should be as short as possible. …
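Those steps — provision, configure, install, start — can be scripted as one ordered pipeline, so the rebuild time is bounded by automation rather than by manual work. A toy sketch, where each step function is a placeholder for real provisioning logic:

```python
import time

# Placeholder steps; in reality each would call a cloud API,
# a configuration-management tool, or a package manager.
def provision_instance(): pass
def configure_system():   pass
def install_software():   pass
def start_services():     pass

STEPS = [provision_instance, configure_system, install_software, start_services]

def rebuild_site() -> float:
    """Run every provisioning step in order and return the total time taken."""
    start = time.monotonic()
    for step in STEPS:
        step()
    return time.monotonic() - start

elapsed = rebuild_site()
print(f"site rebuilt in {elapsed:.2f}s")
```

The point is the shape, not the stubs: once the whole sequence is code, re-running it after a disaster is one command.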
In my last article, I set up a WordPress server on top of one EC2 instance and stored its data in a SQL server that I had also set up on an EC2 instance.
This is an amazing setup, but a problem arises when one of these instances goes down.
So what we need is a smart solution that keeps monitoring these instances and, if one goes down, starts it up again so that the clients never face downtime. …
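That smart solution is, at its core, a control loop: check each instance’s health and restart any that are down. A minimal sketch with stubbed check/restart functions; real code would call your cloud provider’s API instead of mutating a dict:

```python
# Stubbed health states for two hypothetical instances.
state = {"web-1": "running", "web-2": "stopped"}

def is_healthy(instance_id: str) -> bool:
    # Real code: a cloud API call or an HTTP health check.
    return state[instance_id] == "running"

def restart(instance_id: str) -> None:
    # Real code: e.g. a cloud API call to start the instance.
    state[instance_id] = "running"

def monitor_once(instances) -> list:
    """One pass of the loop: restart every unhealthy instance."""
    restarted = []
    for inst in instances:
        if not is_healthy(inst):
            restart(inst)
            restarted.append(inst)
    return restarted

print(monitor_once(["web-1", "web-2"]))  # ['web-2']
```

Run this pass on a schedule and clients stop seeing downtime from a single dead instance; managed services like Auto Scaling Groups or Kubernetes do exactly this loop for you.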
A website is the face, the front end, of your business. The website is made as creative as possible because the clients/users explore the site all the time. But the back end is just as important as the front end. The back end consists of all the client’s important data, such as login credentials, search information, etc. If the database gets compromised, all that data can be mishandled and the reputation of your business goes down.
Amazon EKS (Elastic Kubernetes Service) is a fully managed Kubernetes service. It is known for its security, reliability, and scalability.
EKS is deeply integrated with services such as Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC), providing a seamless experience to monitor, scale, and load-balance your applications.
Terraform helps us integrate multiple technologies into one single piece of code, which we can then execute to build the described infrastructure.
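As an illustration of what that code looks like, here is a minimal Terraform configuration declaring an EKS cluster. The names, role ARN, and subnet IDs are hypothetical; a real setup also needs the IAM role and networking resources defined around it:

```hcl
provider "aws" {
  region = "ap-south-1"
}

# Hypothetical cluster; the role and subnets must already exist.
resource "aws_eks_cluster" "wordpress" {
  name     = "wordpress-cluster"
  role_arn = "arn:aws:iam::123456789012:role/eks-cluster-role"

  vpc_config {
    subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]
  }
}
```

A `terraform init` followed by `terraform apply` would then build this infrastructure from the description.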
Reference to my previous article:
Here I explained in detail how cloud technology works and makes deployment faster and simpler.
When a website is deployed over the public network, many things have to be planned. The website is not deployed over just one webserver. If the website gets an extra amount of traffic that was not pre-calculated, then the site will crash and all the effort and business go to waste. To avoid this, the principles of DevOps are utilised; the most common practices are Auto Scaling and Load Balancers.
Now, as the name suggests, Auto Scaling will scale the instances on which the website is running, as and when required, according to set rules based on the instance’s metrics. …
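A scaling rule of that kind, for example “add an instance when average CPU goes above 80%, remove one when it drops below 20%”, can be sketched as a pure decision function. The thresholds and instance counts here are illustrative, not a real policy:

```python
def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 80.0, scale_in_at: float = 20.0,
                     minimum: int = 1, maximum: int = 5) -> int:
    """Return how many instances we want, given average CPU utilisation (%)."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # scale out, but respect the cap
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # scale in, but keep a floor
    return current                         # steady state: change nothing

print(desired_capacity(2, 91.0))  # heavy load -> 3
print(desired_capacity(2, 12.0))  # idle       -> 1
print(desired_capacity(2, 50.0))  # steady     -> 2
```

An Auto Scaling Group evaluates essentially this function against live metrics and then launches or terminates instances to match the result.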
For the Monitoring and Evaluation Team, the system-resource metrics (CPU usage, RAM, networking, and more) of the machines on which your important webservers are running are DATA. They use this historical data to create plans for future consumption of resources.
This is a very important part of capacity planning, because huge traffic coming to your website is a positive sign, but not having enough resources to handle it is indeed a matter of embarrassment.
So we need to make plans and act accordingly. And to help the M&E Team, we need to keep their data safe. This monitoring data has a lot of value. …
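Capacity planning from that historical data can start very simply: look at the worst observed peak and add headroom. A toy sketch over made-up daily peak CPU figures; the 25% margin is an assumption, not a recommendation:

```python
# Hypothetical daily peak CPU utilisation (%) collected by the M&E team.
daily_peaks = [55, 61, 58, 72, 69, 75, 80]

def recommended_capacity(peaks, headroom: float = 1.25) -> float:
    """Plan for the worst observed peak plus a safety margin (capped at 100%)."""
    return min(100.0, max(peaks) * headroom)

print(recommended_capacity(daily_peaks))  # 80 * 1.25, capped -> 100.0
```

When the recommendation hits the cap, as here, that is the signal to add machines rather than squeeze the existing ones, which is exactly the judgement this data exists to support.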
Once a developer pushes code to GitHub, it is automatically pulled into the Jenkins workspace (using GitHub webhooks). From here, various jobs are created in Jenkins by the operations team for automatic testing, monitoring, and deployment to the production system.
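The webhook side of that flow boils down to: receive a push event, check the branch, trigger the matching job. A sketch with the payload as a plain dict; the `ref` and `repository` fields follow GitHub’s push-event format, while the job-naming scheme is made up:

```python
from typing import Optional

def job_to_trigger(payload: dict) -> Optional[str]:
    """Map a GitHub push event to a Jenkins job name (None = ignore the event)."""
    ref = payload.get("ref", "")
    if ref == "refs/heads/master":            # only build the main line
        repo = payload["repository"]["name"]
        return f"{repo}-build-and-deploy"     # hypothetical job naming scheme
    return None

event = {"ref": "refs/heads/master", "repository": {"name": "mywebsite"}}
print(job_to_trigger(event))  # mywebsite-build-and-deploy
```

Jenkins’ GitHub plugin does this routing for you; the sketch only shows the decision the webhook integration is making.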
Now it is a well-known fact that there are SILOS between the two teams, and it becomes difficult for the operations team to write the Jenkins configuration without consulting the developer.
So why not come up with a method such that the developer writes these jobs in his own way, i.e. …
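One well-known way to do this is a Jenkinsfile committed alongside the code: the developer describes the pipeline inside his own repository, and Jenkins just runs it. A minimal declarative-pipeline sketch, where the stage contents are placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh './run-tests.sh'        // hypothetical test script
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl apply -f k8s/' // hypothetical manifests
            }
        }
    }
}
```

With this pattern, the operations team owns the Jenkins server while the build logic lives with the code, which is one way of breaking down the silos described above.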