How to Set up Prometheus Node exporter in Kubernetes

In this guide we’ll learn how to set up and configure Node Exporter in Kubernetes to collect Linux system metrics, such as CPU load and disk I/O, and expose them in the Prometheus format.

To get node-level system metrics from all Kubernetes nodes, you need node-exporter running on every node. It collects Linux system metrics and exposes them via the /metrics endpoint on port 9100.

Similarly, you need to install kube-state-metrics to get metrics about Kubernetes objects themselves.

Kubernetes Manifests

Once you have the prerequisites in place, you can create the Kubernetes resource manifests.

Namespace

In Kubernetes, a namespace provides a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc).

Let us create a namespace for our node-exporter resources. Save the following in namespace.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: node-exporter

Then execute the following command to create a new namespace named node-exporter.

kubectl apply -f namespace.yaml

Create a node-exporter Daemonset

Since we will be collecting metrics from each node in the cluster, we need node-exporter running on every node. A Kubernetes DaemonSet fits this requirement.

A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Save the following content to deployment.yaml.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: node-exporter
    app.kubernetes.io/component: node-exporter
    app.kubernetes.io/name: node-exporter
  name: node-exporter
  namespace: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
      app.kubernetes.io/component: node-exporter
      app.kubernetes.io/name: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
        app.kubernetes.io/component: node-exporter
        app.kubernetes.io/name: node-exporter
    spec:
      containers:
      - args:
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --no-collector.wifi
        - --no-collector.hwmon
        - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/)
        - --collector.netclass.device-exclude=^(veth.*)$
        image: prom/node-exporter:v1.5.0
        name: node-exporter
        ports:
        - containerPort: 9100
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 250m
            memory: 180Mi
          requests:
            cpu: 100m
            memory: 180Mi
        volumeMounts:
        - mountPath: /host/sys
          mountPropagation: HostToContainer
          name: sys
          readOnly: true
        - mountPath: /host/root
          mountPropagation: HostToContainer
          name: root
          readOnly: true
      volumes:
      - hostPath:
          path: /sys
        name: sys
      - hostPath:
          path: /
        name: root

Create the DaemonSet in the node-exporter namespace using the above file.

kubectl create -f deployment.yaml

You can check the created daemonset using the following command.

kubectl get daemonset --namespace=node-exporter
kubectl get pods --namespace=node-exporter

Connecting To Node Exporter

Once the setup is done and node-exporter is exporting metrics, Prometheus needs to reach the /metrics endpoint so it can scrape. In our Kubernetes setup, we can achieve this in one of the following ways:

  1. Using the service endpoint, if Prometheus and node-exporter are running in the same cluster.
  2. Exposing the node-exporter DaemonSet as a service with a NodePort or a load balancer.
  3. Adding an Ingress object if you have an Ingress controller deployed.

Using the service endpoint

Every Kubernetes service can be accessed internally using its service URL, which has the following format:

<service-name>.<namespace>.svc.cluster.local:<service-port>

In our case this URL can be used as the endpoint for node-exporter:

node-exporter.node-exporter.svc.cluster.local:9100
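If Prometheus runs in the same cluster, you could use that endpoint in a scrape job in your Prometheus configuration. The job name below is just an example:

---
# Hypothetical snippet for prometheus.yml: scrape node-exporter
# through its in-cluster service DNS name.
scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets:
          - node-exporter.node-exporter.svc.cluster.local:9100

Note that scraping through the service address hits one pod per scrape; to collect metrics from every node you would normally rely on Kubernetes service discovery instead, as shown later.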

Exposing Node Exporter as a Service [NodePort & LoadBalancer]

To access the node-exporter metrics endpoint over an IP or a DNS name, you need to expose it as a Kubernetes service.

Create a file named service.yaml and copy the following contents. We will expose node-exporter on all Kubernetes node IPs on port 9100.

Note: If you are on AWS, Azure, or Google Cloud, you can use the LoadBalancer type, which will create a load balancer and automatically point it at the Kubernetes service endpoint.

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9100"
    prometheus.io/scrape: "true"
  name: node-exporter
  namespace: node-exporter
spec:
  ports:
  - name: node-exporter
    port: 9100
    protocol: TCP
    targetPort: http
  selector:
    app: node-exporter
    app.kubernetes.io/component: node-exporter
    app.kubernetes.io/name: node-exporter
  type: NodePort

Create the service using the following command.

kubectl create -f service.yaml --namespace=node-exporter

Once created, you can access the node-exporter metrics using any of the Kubernetes node IPs on the node port Kubernetes assigns to the service (check it with kubectl get service --namespace=node-exporter). If you are on the cloud, make sure you have the right firewall rules to access that port from your workstation.
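The prometheus.io/scrape and prometheus.io/port annotations on the service above only take effect if your Prometheus configuration honors them. Here is a minimal sketch of such a scrape job using Kubernetes endpoints discovery; the relabeling rules follow the common community convention for these annotations, not an official standard:

---
# Sketch for prometheus.yml: discover endpoints whose service carries
# prometheus.io/scrape: "true" and scrape them on the annotated port.
scrape_configs:
  - job_name: kubernetes-service-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only endpoints whose service is annotated for scraping.
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Rewrite the target address to use the annotated port.
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__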

Exposing node exporter Using Ingress

If you have an existing ingress controller set up, you can create an Ingress object to route a DNS name to the node-exporter backend service.

Also, you can add SSL for node-exporter in the ingress layer. You can refer to the Kubernetes ingress TLS/SSL Certificate guide for more details.

Here is a sample ingress object.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-exporter
  namespace: node-exporter
  labels:
    app.kubernetes.io/name: node-exporter
    app.kubernetes.io/instance: node-exporter
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-issuer
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: node-exporter.citizix.cloud
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: node-exporter
                port:
                  number: 9100
  tls:
  - hosts:
    - node-exporter.citizix.cloud
    secretName: node-exporter-tls-ingress

Using Kustomize

Kustomize introduces a template-free way to customize application configuration, simplifying the use of off-the-shelf applications. It is built into kubectl as apply -k.

In our case, to avoid running multiple kubectl apply -f <filename> commands, we can use Kustomize to define all the resources and apply them as one. Save the following as kustomization.yaml.

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml

namespace: node-exporter

Then apply using this command:

kubectl apply -k .

Conclusion

In this article, we learned how to set up Node Exporter on Kubernetes and expose its metrics to Prometheus.
