Prometheus || Kubernetes || Grafana

Prometheus & Grafana on Kubernetes

Daksh Jain
5 min read · Jul 7, 2020


For a Monitoring and Evaluation (M&E) team, metrics from the systems your important web servers run on (CPU usage, RAM, networking, and more) are data. They use this historical data to plan future resource consumption.

This is a very important part of capacity planning: huge traffic arriving at your website is a positive sign, but if you don't have enough resources to handle it, it is indeed a matter of embarrassment.

So we need to make plans and act on them. And to help the M&E team, we need to keep their data safe — this monitoring data has a lot of value. We need strategies to do that.

One such strategy is to integrate Prometheus and Grafana on Kubernetes: launch each in a pod using a Deployment, and attach a PVC to make the data persistent.

Prometheus is a free tool used for event monitoring and alerting. It records real-time metrics in a time-series database. It supports complex queries and has its own query language called PromQL.
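For a taste of PromQL, here is a small example query (it uses `node_cpu_seconds_total`, a standard node_exporter metric, not one specific to this article):

```promql
# Per-second rate of non-idle CPU time over the last 5 minutes, per core
rate(node_cpu_seconds_total{mode!="idle"}[5m])
```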

This tool combines well with yet another amazing tool: Grafana.

Grafana is a multi-platform open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web, and is extensible through a plug-in system. End users can build complex monitoring dashboards with its interactive query builders, backed by data sources such as Prometheus.

A few things to note about Kubernetes:

  • Pod: A Pod is the smallest deployable unit in Kubernetes. On its own it is not intelligent and doesn't serve the purpose of Kubernetes, i.e. orchestration and management of containers.
  • ReplicaSet: A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time.
  • Deployment: A Deployment provides declarative updates for Pods and ReplicaSets. The ReplicaSet creates Pods in the background. We use a Deployment to roll out a ReplicaSet.
  • Persistent Volume Claim (PVC): PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource.
  • Service: Service is an abstract way to expose an application running on a set of Pods as a network service.
  • ConfigMap: ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. It is basically a dictionary of configuration.
  • kustomization.yaml: Here we list all the manifest files in sequence so that we can apply just one file to deploy them all.

So in this setup I have used a Deployment for the management of pods, which uses a ReplicaSet in the background to maintain the desired state. I used a PVC to make the data persistent, and a Service (NodePort) to expose the pods to the outside world. Finally, since Prometheus needs a configuration file, I used a ConfigMap to pass it in.

Let's get going!

First I created the Dockerfile for Prometheus.

You can use the same Docker Image from DockerHub using this command:

docker pull dakshjain09/prometheus:v1
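The original Dockerfile isn't shown here, but a minimal sketch of such a custom Prometheus image could look like this (the base image and release version are illustrative assumptions, not the author's actual file):

```dockerfile
# Hypothetical sketch of a custom Prometheus image
FROM centos:latest

# Fetch and unpack a Prometheus release (version chosen as an example)
ADD https://github.com/prometheus/prometheus/releases/download/v2.19.2/prometheus-2.19.2.linux-amd64.tar.gz /tmp/
RUN tar -xzf /tmp/prometheus-2.19.2.linux-amd64.tar.gz -C / \
 && mv /prometheus-2.19.2.linux-amd64 /prometheus

WORKDIR /prometheus
# No config baked in: --config.file must be passed at run time
ENTRYPOINT ["./prometheus"]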

Next I made the service file. Here we specify the type NodePort, which exposes the pod to external traffic. The selector is important to specify: it must match the labels given to the pods in the deployment file.
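A sketch of what such a NodePort Service could look like (the names, labels, and port numbers here are illustrative, not the author's actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus-svc        # hypothetical name
spec:
  type: NodePort              # exposes the pods on a port of every node
  selector:
    app: prometheus           # must match the pod labels in the Deployment
  ports:
    - port: 9090              # Prometheus default port
      targetPort: 9090
      nodePort: 30000         # optional; must be in 30000-32767 if set
```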

Next, a PVC is created for persistent storage. Here we have to specify the access mode and the amount of storage.
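A minimal PVC sketch (name and size are illustrative assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce           # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi            # amount of storage requested (example)
```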

Next we create a ConfigMap file to pass Prometheus its configuration. Under the data key we put the contents of the Prometheus configuration file; in targets we list the IPs of the systems whose metrics we want to monitor.
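A sketch of such a ConfigMap (the name and the target IP are illustrative placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-cm         # hypothetical name
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ["192.168.99.101:9100"]  # IPs of the systems to monitor (example)
```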

Next I have created the Deployment file for the Prometheus pods. I have specified args because in my Docker image it is mandatory to pass “--config.file=path_of_config_file”.

In the Volume Mounts:

  1. Specified the PVC
  2. Specified the configMap
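Putting the args and both volume mounts together, the Deployment could be sketched like this (all names are illustrative and assume the hypothetical PVC/ConfigMap names above, not the author's actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus          # must match the pod template labels below
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: dakshjain09/prometheus:v1
          args:
            - --config.file=/etc/prometheus/prometheus.yml  # mandatory for this image
          volumeMounts:
            - name: prom-storage
              mountPath: /prometheus       # 1. PVC: time-series data
            - name: prom-config
              mountPath: /etc/prometheus   # 2. ConfigMap: config file
      volumes:
        - name: prom-storage
          persistentVolumeClaim:
            claimName: prometheus-pvc      # hypothetical PVC name
        - name: prom-config
          configMap:
            name: prometheus-cm            # hypothetical ConfigMap name
```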

Then finally we create a kustomization.yaml file.
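A sketch of such a kustomization.yaml, listing the manifests in order (the file names are illustrative assumptions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:          # applied in this order
  - service.yaml
  - pvc.yaml
  - configmap.yaml
  - deployment.yaml
```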

You can also get all the files from GitHub.

We just have to apply the kustomization.yaml file (`kubectl apply -k .`) and it will deploy everything for us.

Prometheus on Kubernetes — with PVC, Service, ConfigMap
The default Prometheus config file and the Targets page
If we update the ConfigMap and delete the older pod, the new targets appear: the change is permanent

Now all the requirements are met:

  • Configuration file remains permanent.
  • All the storage remains persistent.

Next, let's move on to Grafana!

I started off by creating a Dockerfile; the image is uploaded to DockerHub and can be pulled using the command:

docker pull dakshjain09/grafana-server:v1

I have mentioned the path /var/lib/grafana explicitly so that the data gets stored there and I can mount the PVC on that folder.
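Again, the actual Dockerfile isn't shown; a minimal sketch of such a custom Grafana image could look like this (base image, Grafana version, and flags are illustrative assumptions):

```dockerfile
# Hypothetical sketch of a custom Grafana image
FROM centos:latest

# Install a Grafana OSS release (version chosen as an example)
RUN yum install -y https://dl.grafana.com/oss/release/grafana-7.0.3-1.x86_64.rpm

# Dashboards and internal state live here; the PVC is mounted at this path
WORKDIR /var/lib/grafana
ENTRYPOINT ["grafana-server", "--homepath=/usr/share/grafana"]
```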

Next I have created a Service file for Grafana, again with type NodePort to expose the pod to the external world.

Next I have created the PVC for persistent storage so that the dashboards that are prepared by the M&E Team don’t get removed even if the pod gets corrupted.

Next I have created the Deployment file, which will take care of updating the pods while, in the background, the ReplicaSet does its job of maintaining the desired number of pods.

The volume is mounted to the folder in which Grafana dashboards will be saved, i.e. /var/lib/grafana, which is mentioned in the Dockerfile that I have created.
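The relevant part of the Grafana Deployment's pod spec could be sketched like this (an illustrative excerpt; the volume and PVC names are hypothetical):

```yaml
# Hypothetical excerpt of the Grafana Deployment's pod template spec
      containers:
        - name: grafana
          image: dakshjain09/grafana-server:v1
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/lib/grafana   # where dashboards get saved
      volumes:
        - name: grafana-storage
          persistentVolumeClaim:
            claimName: grafana-pvc          # hypothetical PVC name
```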

Next finally we create the kustomization file for Grafana in which we mention the files in order.

You can also get the code from GitHub.

Now, when this file is applied, the whole setup is ready, and any dashboards that are created get saved in the PVC.

Grafana on Kubernetes, with PVC and Service
By default no dashboards are present, but a PVC is attached
Dashboard created => Pod gets corrupted => Deployment recreates it => Check the page again => The same dashboard appears => No data lost

So, finally, the requirement that the data of Prometheus and Grafana be made persistent is met using Kubernetes manifest files.

That’s all folks!!

For any queries, corrections, or suggestions you can always connect with me on my LinkedIn.

Worked in collaboration with Ashish Kumar.



Written by Daksh Jain

Automation Tech Enthusiast || Terraform Researcher || DevOps || MLOps ||
