Kubernetes : Deploy Prometheus (2020/08/25)
Deploy Prometheus to monitor metrics in a Kubernetes cluster.
This example is based on the environment shown below.
-----------+---------------------------+--------------------------+------------
           |                           |                          |
       eth0|10.0.0.30             eth0|10.0.0.51            eth0|10.0.0.52
+----------+-----------+   +-----------+----------+   +-----------+----------+
|   [ dlp.srv.world ]  |   | [ node01.srv.world ] |   | [ node02.srv.world ] |
|      Master Node     |   |      Worker Node     |   |      Worker Node     |
+----------------------+   +----------------------+   +----------------------+
[1]
Persistent storage is needed for Prometheus.
In this example, an NFS server is installed on the Master Node, the [/home/nfsshare] directory is configured as an NFS share for external persistent storage, and dynamic volume provisioning is configured with the NFS plugin as in the examples [1], [2], [3].
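As a quick sanity check of the dynamic provisioning setup (an optional sketch, not part of the original steps; it assumes the StorageClass is named [nfs-client] as in this example), a test PersistentVolumeClaim like the following should be bound automatically by the NFS provisioner.

# pvc-test.yaml : minimal test claim against the [nfs-client] StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi

root@dlp:~# kubectl apply -f pvc-test.yaml
root@dlp:~# kubectl get pvc test-claim
# the claim should reach [Bound] status if dynamic provisioning works; remove it afterwards
root@dlp:~# kubectl delete -f pvc-test.yaml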
[2]
Install the Prometheus chart with Helm.
# output config and change some settings
root@dlp:~# helm inspect values stable/prometheus > prometheus.yaml
root@dlp:~# vi prometheus.yaml
alertmanager:
.....
.....
  # line 213: uncomment and specify [storageClass] to use
  storageClass: "nfs-client"
.....
.....
server:
.....
.....
  # line 803: uncomment and specify [storageClass] to use
  storageClass: "nfs-client"
.....
.....
pushgateway:
.....
.....
  # line 1134: uncomment and specify [storageClass] to use
  storageClass: "nfs-client"

# create a namespace for Prometheus
root@dlp:~# kubectl create namespace monitoring
namespace/monitoring created
root@dlp:~# helm install prometheus --namespace monitoring -f prometheus.yaml stable/prometheus
NAME: prometheus
LAST DEPLOYED: Tue Aug 24 19:14:04 2020
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.monitoring.svc.cluster.local

Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace monitoring port-forward $POD_NAME 9090

The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.monitoring.svc.cluster.local

Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace monitoring port-forward $POD_NAME 9093
#################################################################################
######   WARNING: Pod Security Policy has been moved to a global property. #####
######            use .Values.podSecurityPolicy.enabled with pod-based     #####
######            annotations                                              #####
######            (e.g. .Values.nodeExporter.podSecurityPolicy.annotations)#####
#################################################################################

The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.monitoring.svc.cluster.local

Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace monitoring port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/

root@dlp:~# kubectl get pods -n monitoring -o wide
NAME                                            READY   STATUS    RESTARTS   AGE     IP            NODE               NOMINATED NODE   READINESS GATES
prometheus-alertmanager-77455866bd-p8t9d        2/2     Running   0          2m12s   10.244.1.23   node01.srv.world   <none>           <none>
prometheus-kube-state-metrics-c65b87574-9rmz6   1/1     Running   0          2m12s   10.244.1.22   node01.srv.world   <none>           <none>
prometheus-node-exporter-rth4v                  1/1     Running   0          2m12s   10.0.0.51     node01.srv.world   <none>           <none>
prometheus-node-exporter-z7tbn                  1/1     Running   0          2m12s   10.0.0.52     node02.srv.world   <none>           <none>
prometheus-pushgateway-c454fc4-sn6bh            1/1     Running   0          2m12s   10.244.2.15   node02.srv.world   <none>           <none>
prometheus-server-dc6d7575c-5fcjh               2/2     Running   0          2m12s   10.244.2.16   node02.srv.world   <none>           <none>

# if access from outside of cluster, set port-forwarding
root@dlp:~# kubectl port-forward -n monitoring service/prometheus-server --address 0.0.0.0 9090:80
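As an optional verification (not part of the original steps), the Prometheus HTTP API can be queried through the port-forward above; the [up] metric should report value 1 for every scraped target.

# assumes the port-forward to [prometheus-server] (9090:80) from above is running
root@dlp:~# curl -s 'http://localhost:9090/api/v1/query?query=up'
# the JSON response lists each scrape target; a value of "1" means the target is up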
[3]
If you also deploy Grafana, it can be done as follows.
# output config and change some settings
root@dlp:~# helm inspect values stable/grafana > grafana.yaml
root@dlp:~# vi grafana.yaml
# line 215: enable [persistence]
# line 216: uncomment and change to your [storageClass]
persistence:
  type: pvc
  enabled: true
  storageClassName: nfs-client

root@dlp:~# helm install grafana --namespace monitoring -f grafana.yaml stable/grafana
NAME: grafana
LAST DEPLOYED: Tue Aug 24 19:33:46 2020
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
   kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
   grafana.monitoring.svc.cluster.local

   Get the Grafana URL to visit by running these commands in the same shell:
     export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}")
     kubectl --namespace monitoring port-forward $POD_NAME 3000

3. Login with the password from step 1 and the username: admin

root@dlp:~# kubectl get pods -n monitoring
NAME                       READY   STATUS    RESTARTS   AGE
grafana-7b6499754f-74786   1/1     Running   0          41s
.....
.....

# if access from outside of cluster, set port-forwarding
root@dlp:~# kubectl port-forward -n monitoring service/grafana --address 0.0.0.0 3000:80
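Optionally, confirm that PersistentVolumeClaims for Prometheus and Grafana were dynamically provisioned through the [nfs-client] StorageClass (commands only; claim and volume names depend on your releases).

root@dlp:~# kubectl get pvc -n monitoring
root@dlp:~# kubectl get pv
# each claim should be [Bound] to a dynamically created volume backed by the NFS share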
[4]
To access the Prometheus UI from a host in the cluster, open the URL below with a web browser.
⇒ http://prometheus-server.monitoring.svc.cluster.local
If you set port-forwarding, access the URL below from a client computer on your local network.
⇒ http://(Master Node Hostname or IP address):(forwarded port)/
That's OK if the Prometheus UI below is displayed.
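As an alternative to port-forwarding (an assumption, not part of the original steps), the [prometheus-server] Service can be switched to NodePort so the UI is reachable on a node port from the local network.

# switch the Service type to NodePort, then check which port was assigned
root@dlp:~# kubectl patch svc prometheus-server -n monitoring -p '{"spec": {"type": "NodePort"}}'
root@dlp:~# kubectl get svc prometheus-server -n monitoring
# access http://(any node IP):(assigned NodePort)/ from the local network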
[5]
To access Grafana from a host in the cluster, open the URL below with a web browser.
⇒ http://grafana.monitoring.svc.cluster.local
If you set port-forwarding, access the URL below from a client computer on your local network.
⇒ http://(Master Node Hostname or IP address):(forwarded port)/
That's OK if the Grafana UI below is displayed.
The [admin] password can be confirmed with the command below.
⇒ kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
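After logging in, Prometheus is normally added as a data source from the Grafana UI. As a hedged alternative sketch (the endpoint and field names are Grafana's standard data source API, not from this article), it can also be registered over the HTTP API through the port-forward.

# assumes the port-forward to the [grafana] Service (3000:80) from above is running
root@dlp:~# GRAFANA_PASS=$(kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode)
root@dlp:~# curl -s -u admin:${GRAFANA_PASS} -X POST http://localhost:3000/api/datasources \
-H 'Content-Type: application/json' \
-d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus-server.monitoring.svc.cluster.local","access":"proxy"}'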