Kubernetes : Deploy Prometheus
2024/06/07
Deploy Prometheus to monitor metrics in a Kubernetes cluster. This example is based on the following environment.

+----------------------+          +----------------------+
|  [ ctrl.srv.world ]  |          |  [ dlp.srv.world ]   |
|     Manager Node     |          |    Control Plane     |
+-----------+----------+          +-----------+----------+
        eth0|10.0.0.25                    eth0|10.0.0.30
            |                                 |
------------+---------------------------------+-----------
            |                                 |
        eth0|10.0.0.51                    eth0|10.0.0.52
+-----------+----------+          +-----------+----------+
| [ node01.srv.world ] |          | [ node02.srv.world ] |
|    Worker Node#1     |          |    Worker Node#2     |
+----------------------+          +----------------------+
[1]
Persistent storage is needed for Prometheus.
In this example, the [nfs.srv.world:/home/nfsshare] directory on an NFS server is configured as external storage, and dynamic volume provisioning is configured with the NFS plugin, as in steps [1] - [4] of here. A quick check that the provisioner works is sketched below.
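Before installing the chart, it's possible to verify that the [nfs-client] StorageClass is available and that dynamic provisioning actually works. This is a minimal sketch assuming the provisioner above is in place; the PVC name [test-claim] is only an example.

# confirm the [nfs-client] StorageClass exists
root@ctrl:~# kubectl get storageclass nfs-client

# create a temporary PVC against it
root@ctrl:~# cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

# [Bound] status means dynamic provisioning works; clean up afterwards
root@ctrl:~# kubectl get pvc test-claim
root@ctrl:~# kubectl delete pvc test-claim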
[2]
Install the Prometheus chart with Helm.
# output config and change some settings
root@ctrl:~# helm inspect values bitnami/kube-prometheus > prometheus.yaml
root@ctrl:~# vi prometheus.yaml
.....
# line 21 : specify [storageClass] to use
storageClass: "nfs-client"
.....
# line 1249 : specify [storageClass] to use
storageClass: "nfs-client"
.....
# line 2327 : specify [storageClass] to use
storageClass: "nfs-client"
.....

# create a namespace for Prometheus
root@ctrl:~# kubectl create namespace monitoring
namespace/monitoring created
root@ctrl:~# helm install prometheus --namespace monitoring -f prometheus.yaml bitnami/kube-prometheus
NAME: prometheus
LAST DEPLOYED: Fri Jun 7 05:04:00 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kube-prometheus
CHART VERSION: 9.4.1
APP VERSION: 0.74.0

** Please be patient while the chart is being deployed **

Watch the Prometheus Operator Deployment status using the command:

    kubectl get deploy -w --namespace monitoring -l app.kubernetes.io/name=kube-prometheus-operator,app.kubernetes.io/instance=prometheus

Watch the Prometheus StatefulSet status using the command:

    kubectl get sts -w --namespace monitoring -l app.kubernetes.io/name=kube-prometheus-prometheus,app.kubernetes.io/instance=prometheus

Prometheus can be accessed via port "9090" on the following DNS name from within your cluster:

    prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local

To access Prometheus from outside the cluster execute the following commands:

    echo "Prometheus URL: http://127.0.0.1:9090/"
    kubectl port-forward --namespace monitoring svc/prometheus-kube-prometheus-prometheus 9090:9090

Watch the Alertmanager StatefulSet status using the command:

    kubectl get sts -w --namespace monitoring -l app.kubernetes.io/name=kube-prometheus-alertmanager,app.kubernetes.io/instance=prometheus

Alertmanager can be accessed via port "9093" on the following DNS name from within your cluster:

    prometheus-kube-prometheus-alertmanager.monitoring.svc.cluster.local

To access Alertmanager from outside the cluster execute the following commands:

    echo "Alertmanager URL: http://127.0.0.1:9093/"
    kubectl port-forward --namespace monitoring svc/prometheus-kube-prometheus-alertmanager 9093:9093

WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - alertmanager.resources
  - blackboxExporter.resources
  - operator.resources
  - prometheus.resources
  - prometheus.thanos.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

root@ctrl:~# kubectl get pods -n monitoring
NAME                                                          READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0        2/2     Running   0          66s
prometheus-kube-prometheus-blackbox-exporter-7f9487fc-45vwl   1/1     Running   0          88s
prometheus-kube-prometheus-operator-6d88786cf8-kmwdx          1/1     Running   0          88s
prometheus-kube-state-metrics-5bfdb58c6d-9jzrk                1/1     Running   0          88s
prometheus-node-exporter-bzswr                                1/1     Running   0          88s
prometheus-node-exporter-tdn7z                                1/1     Running   0          88s
prometheus-prometheus-kube-prometheus-prometheus-0            2/2     Running   0          66s

# to access from outside of the cluster, set port-forwarding
root@ctrl:~# kubectl port-forward -n monitoring service/prometheus-kube-prometheus-prometheus --address 0.0.0.0 9090:9090 &
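Once the pods are [Running], it's possible to confirm that Prometheus is actually scraping targets through its HTTP API. This is a minimal sketch assuming the port-forward above is active on the Manager Node; [jq] is optional and only used here for readability (drop the filter to see the raw JSON).

# the [up] metric is 1 for every target Prometheus can scrape
root@ctrl:~# curl -s 'http://127.0.0.1:9090/api/v1/query?query=up' | jq '.data.result[] | {job: .metric.job, value: .value[1]}'

# list the health of all configured scrape targets
root@ctrl:~# curl -s http://127.0.0.1:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'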
[3]
To deploy Grafana as well, it's possible as follows.
# output config and change some settings
root@ctrl:~# helm inspect values bitnami/grafana > grafana.yaml
root@ctrl:~# vi grafana.yaml
# line 612 : change to your [storageClass]
persistence:
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client"

root@ctrl:~# helm install grafana --namespace monitoring -f grafana.yaml bitnami/grafana
NAME: grafana
LAST DEPLOYED: Fri Jun 7 05:07:29 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: grafana
CHART VERSION: 11.3.3
APP VERSION: 11.0.0

** Please be patient while the chart is being deployed **

1. Get the application URL by running these commands:
    echo "Browse to http://127.0.0.1:8080"
    kubectl port-forward svc/grafana 8080:3000 &

2. Get the admin credentials:

    echo "User: admin"
    echo "Password: $(kubectl get secret grafana-admin --namespace monitoring -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
# Note: Do not include grafana.validateValues.database here. See https://github.com/bitnami/charts/issues/20629

WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - grafana.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

root@ctrl:~# kubectl get pods -n monitoring
NAME                                                          READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0        2/2     Running   0          4m10s
grafana-66b58c8f46-kcgjq                                      1/1     Running   0          64s
prometheus-kube-prometheus-blackbox-exporter-7f9487fc-45vwl   1/1     Running   0          4m32s
prometheus-kube-prometheus-operator-6d88786cf8-kmwdx          1/1     Running   0          4m32s
prometheus-kube-state-metrics-5bfdb58c6d-9jzrk                1/1     Running   0          4m32s
prometheus-node-exporter-bzswr                                1/1     Running   0          4m32s
prometheus-node-exporter-tdn7z                                1/1     Running   0          4m32s
prometheus-prometheus-kube-prometheus-prometheus-0            2/2     Running   0          4m10s

# to access from outside of the cluster, set port-forwarding
root@ctrl:~# kubectl port-forward -n monitoring service/grafana --address 0.0.0.0 3000:3000 &
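After Grafana is up, Prometheus must be registered as a data source before any dashboard can show metrics. That can be done in the web UI, or with Grafana's HTTP API as in the sketch below; this is only an illustration, assuming the port-forward above is active and using the in-cluster Prometheus service name from step [2].

# register the in-cluster Prometheus service as the default Grafana data source
root@ctrl:~# GRAFANA_PASS=$(kubectl get secret grafana-admin --namespace monitoring -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)
root@ctrl:~# curl -s -u "admin:${GRAFANA_PASS}" -X POST http://127.0.0.1:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "Prometheus",
    "type": "prometheus",
    "url": "http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090",
    "access": "proxy",
    "isDefault": true
  }'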
[4]
To access the Prometheus UI from a host in the cluster, open the URL below in a web browser.
⇒ http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090/
If you set port-forwarding, access the URL below from a client computer in your local network.
⇒ http://(Manager Node hostname or IP address):(forwarded port)/
It's OK if the Prometheus UI below is displayed.
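On the [Graph] tab of the UI, it's possible to try queries right away. The PromQL below uses standard metrics exposed by the node-exporter and kube-state-metrics pods deployed in step [2]; exact label values depend on your environment.

# number of ready pods per namespace, from kube-state-metrics
sum by (namespace) (kube_pod_status_ready{condition="true"})
# per-node CPU usage ratio over the last 5 minutes, from node-exporter
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))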
[5]
To access Grafana from a host in the cluster, open the URL below in a web browser.
⇒ http://grafana.monitoring.svc.cluster.local:3000/
If you set port-forwarding, access the URL below from a client computer in your local network.
⇒ http://(Manager Node hostname or IP address):(forwarded port)/
It's OK if the Grafana UI below is displayed.
The [admin] password can be confirmed with the command below.
⇒ echo "Password: $(kubectl get secret grafana-admin --namespace monitoring -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
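As a quick sanity check before logging in, Grafana's health endpoint can be queried through the port-forward. A minimal sketch, assuming the port-forward from step [3] is running on the Manager Node:

# [/api/health] reports the state of Grafana and its database
root@ctrl:~# curl -s http://127.0.0.1:3000/api/health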