Collect metrics from a new application

Requirements

Your app should expose an HTTP path where metrics can be fetched; by convention this path is /metrics.

You can check it using a kubectl port-forward on your pod or service.
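
For example, assuming your app is exposed by a Service named my-app listening on port 8080 (names, namespace, and port are placeholders):

kubectl -n my-namespace port-forward svc/my-app 8080:8080 &
curl http://localhost:8080/metrics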

Scraping metrics

ServiceMonitor vs PodMonitor

Usually a pod comes with an associated Service, so ServiceMonitor should be the default choice. In some cases, however, you may want to scrape metrics from a set of pods that share a common label but are not consistently backed by a single Service; that is what PodMonitor is for.

We will only discuss ServiceMonitor here; PodMonitor configuration is very similar.

Important

Prometheus-operator will scan for ServiceMonitors and PodMonitors that have the label caascad.com/prometheus-monitor set to caascad.

ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ""                                    # ServiceMonitor name
  namespace: ""                               # Namespace target
  labels:
    caascad.com/prometheus-monitor: "caascad"
spec:
  endpoints:
  - interval: 15s
    port: ""                                  # Service port name (use the name, not the port number)
    relabelings:                              # Optionally: add labels to the scraped metrics
    - replacement: ""
      targetLabel: ""
  namespaceSelector:                          # Optionally: select which namespaces the Endpoints objects are discovered from
    matchNames:
    - ""
  selector:                                   # Select which Services are scraped, by label
    matchLabels:
      "": ""                                  # Label key and value of the target Service

Adding other labels to metrics

You can add other key=value pairs in the same way as above, with additional relabelings entries.
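
For example, this sketch adds two static labels to every scraped sample (label names and values are arbitrary placeholders):

  endpoints:
  - interval: 15s
    port: ""
    relabelings:
    - replacement: "my-team"
      targetLabel: "team"
    - replacement: "production"
      targetLabel: "environment"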

Deploy the ServiceMonitor

Just deploy your ServiceMonitor with one of:

  • kubectl apply with a Kubernetes manifest containing the above content (see the example after this list).

  • Helm chart
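
For the kubectl option, assuming the manifest above is saved as servicemonitor.yaml:

kubectl apply -f servicemonitor.yaml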

Sometimes existing upstream charts provide ServiceMonitors to which additional labels can be added.

For example, if the ServiceMonitor template contains a condition like this:

{{- if .Values.prometheus.monitor.additionalLabels }}
{{- toYaml .Values.prometheus.monitor.additionalLabels | nindent 4 }}
{{- end }}

The values file used to deploy the chart would then be:

prometheus:
  monitor:
    enabled: true
    additionalLabels:
      caascad.com/prometheus-monitor: "caascad"

Note

It is often necessary to explicitly enable the ServiceMonitor (here with the enabled flag).
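
With such a values file in place, deploying the chart is a standard Helm installation (release name, chart, and namespace below are placeholders):

helm upgrade --install my-release my-repo/my-chart -n my-namespace -f values.yaml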

How it works

Prometheus-operator continuously discovers ServiceMonitors and PodMonitors, so it will detect yours shortly after creation.

  • It reads the information it needs from the ServiceMonitor or PodMonitor
  • It gathers more information from the Service or Pods selected by your monitor definition
  • It then regenerates the Prometheus configuration

All of this is automatic.
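
You can list everything the operator will pick up by filtering on the label it scans for:

kubectl get servicemonitors,podmonitors --all-namespaces -l caascad.com/prometheus-monitor=caascad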

Disable scraping metrics

Delete the ServiceMonitors/PodMonitors related to the Services/Pods you no longer want to monitor; kubectl delete will do it.
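
For example (ServiceMonitor name and namespace are placeholders):

kubectl -n my-namespace delete servicemonitor my-app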

Prometheus will automatically reload its configuration.

Troubleshooting

Get the Prometheus configuration file

You can fetch the Prometheus configuration file from the Kubernetes CLI:

kubectl get -n caascad-monitoring secret prometheus-caascad-prometheus -o jsonpath="{.data.prometheus\.yaml\.gz}" | base64 -d | gunzip

Once you configure your ServiceMonitor/PodMonitor, prometheus-operator should update this file almost instantly. Prometheus then reloads its configuration file, but this is not as fast as the configuration update and can take a few minutes.
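
To confirm your monitor made it into the generated configuration, you can search the output for your ServiceMonitor name (my-app is a placeholder):

kubectl get -n caascad-monitoring secret prometheus-caascad-prometheus -o jsonpath="{.data.prometheus\.yaml\.gz}" | base64 -d | gunzip | grep my-app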

Verify Prometheus Target

You can use a port-forward on Prometheus:

kubectl -n caascad-monitoring port-forward svc/caascad-prometheus 9090:9090 &

Then browse to http://localhost:9090 and go to Status/Targets to verify that the endpoint target is up, or to see the error message if it is down.
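
Alternatively, with the port-forward running, you can query the target state through the Prometheus HTTP API (requires jq):

curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health, lastError: .lastError}'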

Check the metric in Grafana

Go to https://grafana.ZONE_NAME.caascad.com/

Go to the Explore tab, choose the Thanos-app datasource, and verify that the metric is present.