How to run Grafana Loki with Helm and Kustomize in Kubernetes

In this guide we will set up Grafana Loki for log aggregation on Kubernetes using its Helm chart.

Loki is a Prometheus-inspired logging service for cloud native infrastructure. It is a logging backend optimized for users running Prometheus and Kubernetes, with great log search and visualization. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on the filesystem. A small index and highly compressed chunks simplifies the operation and significantly lowers the cost of Loki.

An agent (also called a client) acquires logs, turns the logs into streams, and pushes the streams to Loki through an HTTP API. The Promtail agent is designed for Loki installations, but many other agents, such as Fluentd, the Docker logging driver, Fluent Bit, and Logstash, integrate seamlessly with Loki.
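To make the push model concrete, here is a minimal sketch of sending a single log line to Loki's HTTP push API with curl. It assumes Loki is reachable at localhost:3100 (for example via kubectl port-forward) and a Linux shell, where date +%s%N produces the nanosecond timestamp Loki expects; the curl-test job label is just an illustrative value:

# assumes Loki is port-forwarded to localhost:3100; "curl-test" is an illustrative job label
curl -s -H "Content-Type: application/json" -X POST "http://localhost:3100/loki/api/v1/push" \
  --data-raw '{"streams": [{"stream": {"job": "curl-test"}, "values": [["'"$(date +%s%N)"'", "hello from curl"]]}]}'

If the request is accepted, Loki returns an empty 204 response and the line becomes queryable under {job="curl-test"}.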

Loki indexes streams. Each stream identifies a set of logs associated with a unique set of labels. A quality set of labels is key to the creation of an index that is both compact and allows for efficient query execution.

LogQL is the query language for Loki.
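For example, a simple LogQL query selects one or more streams by label and can filter lines with operators (the labels here are hypothetical):

{app="nginx", namespace="prod"} |= "error"

This returns all log lines from streams labelled app="nginx" in the prod namespace that contain the string "error".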

Prerequisites

Before proceeding with this guide, ensure that you have the following:

  • Helm and kubectl installed and working on your local machine (you can verify them with the quick checks after this list)
  • Kustomize (optional, only needed for the Kustomize variant)
  • A running Kubernetes cluster.
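A few quick commands confirm the tooling and cluster access before you continue:

helm version
kubectl version --client
kubectl get nodes
kustomize version

If kubectl get nodes lists your nodes, the cluster connection is working.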

Deploy a Loki cluster

A typical Loki stack consists of:

  • Loki itself, the log database (this would be the equivalent of Elasticsearch);
  • Grafana, the visualisation web interface (equivalent of Kibana);
  • Promtail, which scrapes log files and sends the logs to Loki (equivalent of Logstash).

First, add the Grafana Helm chart repository using this command:

helm repo add grafana https://grafana.github.io/helm-charts

Then update the repositories:

helm repo update

You can install the complete stack in a dedicated namespace. To deploy with the default configuration into a custom namespace:

helm upgrade --install loki --namespace=loki --create-namespace grafana/loki-stack
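Once the release is deployed, you can confirm that the pods are running in the target namespace:

kubectl get pods --namespace loki
helm list --namespace loki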

If you have custom configuration values, you can set them inline with --set:

helm upgrade --install loki --namespace=loki grafana/loki-stack --set "key1=val1,key2=val2,..."
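For more than a couple of overrides, a values file is usually easier to maintain. The sketch below assumes the loki-stack chart's value names (loki.enabled, promtail.enabled, grafana.enabled); check the chart's default values for your chart version before relying on them:

# values.yaml (sketch) - assumed loki-stack keys, verify against the chart's defaults
loki:
  enabled: true
promtail:
  enabled: true
grafana:
  enabled: false

Then pass the file to Helm:

helm upgrade --install loki --namespace=loki grafana/loki-stack -f values.yaml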

Deploy Grafana to your Kubernetes cluster

Grafana is available in the same repo that we set up. Install it using this command:

helm install grafana grafana/grafana
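Note that this installs Grafana into your current namespace. If you would rather keep it next to Loki, scope the install to the same namespace; in the commands below, <YOUR-NAMESPACE> is whichever namespace Grafana ended up in:

helm install grafana grafana/grafana --namespace loki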

To get the password for the Grafana admin user, run the following command:

kubectl get secret --namespace <YOUR-NAMESPACE> grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

To access the Grafana UI, run the following command to set up port forwarding, then navigate to localhost on port 3000:

kubectl port-forward --namespace <YOUR-NAMESPACE> service/grafana 3000:80

Alternatively, we can use Kustomize to achieve the same result. The advantage is that the deployment is captured as code. Kustomize is a command-line configuration manager for Kubernetes objects. Integrated with kubectl since version 1.14, it allows you to make declarative changes to your configurations without touching a template.

Save the following as kustomization.yaml.

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml

helmCharts:
  - name: loki-stack
    repo: https://grafana.github.io/helm-charts
    version: 2.6.5
    releaseName: loki-stack
    namespace: grafana-loki
  - name: grafana
    repo: https://grafana.github.io/helm-charts
    version: 6.30.2
    releaseName: grafana
    namespace: grafana-loki
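
If you need to override chart values in the Kustomize flow, each helmCharts entry also accepts inline values. This is a sketch that reuses the assumed loki-stack keys from above:

helmCharts:
  - name: loki-stack
    repo: https://grafana.github.io/helm-charts
    version: 2.6.5
    releaseName: loki-stack
    namespace: grafana-loki
    valuesInline:
      promtail:
        enabled: true
      grafana:
        enabled: false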

And save the following as namespace.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: grafana-loki

Now you can apply the changes:

kustomize build . --enable-helm | kubectl apply -f -

You should now see the Loki stack installed in your cluster, with Loki, Promtail, and Grafana pods running in the grafana-loki namespace.
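You can confirm that everything came up by listing the pods in the namespace we created:

kubectl get pods --namespace grafana-loki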

Accessing Grafana

The default installation settings of the Loki stack are pretty complete:

  • the Loki data source is configured in Grafana (if it is not, see the note below about adding it manually);
  • Promtail is configured to scrape the logs of the pods running on your cluster.

This means you can head directly to the Explore menu to check the logs of your pods:

  1. click on the compass icon on the left;
  2. at the top of the screen, select the Loki data source;
  3. the main field at the top lets you type a LogQL query.

Navigate to http://localhost:3000 and log in with admin and the password output above. If for some reason the data source is not working, you can add it using the URL http://<helm-installation-name>.<namespace>.svc.cluster.local:3100 for Loki (with <helm-installation-name> and <namespace> replaced by the installation and namespace, respectively, of your deployment).
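If you prefer to provision the data source as configuration rather than through the UI, the grafana chart accepts data source definitions in its values. This is a sketch; the URL assumes the loki-stack release and grafana-loki namespace from the Kustomize example, so adjust it to your own installation as described above:

# grafana-values.yaml (sketch) - adjust url to your Loki service and namespace
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki-stack.grafana-loki.svc.cluster.local:3100
        isDefault: true

You can pass this file to a Helm-managed Grafana with -f grafana-values.yaml, or place the same block under valuesInline in the grafana helmCharts entry of the kustomization.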

Exploring the logs

Before we explore the logs, we should first understand LogQL. LogQL is Loki’s PromQL-inspired query language. Queries act like a distributed grep over your log sources, using labels to select streams and operators to filter lines. Here is an example LogQL query:

{container="query-frontend",namespace="grafana-loki"} |= "metrics.go" | logfmt | duration > 10s and throughput_mb < 500

So that we have some logs to look at, let us create a sample deployment that echoes a greeting in a loop:

kubectl create deploy logs-example --image=busybox -- sh -c 'for run in $(seq 1 10000); do echo "Hello $run"; sleep 2; done'
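You can first check with kubectl that the pod is actually producing output, so you know what to expect in Grafana:

kubectl logs deploy/logs-example --tail=5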

Let us now go back to the Loki log explorer and run this LogQL query:

{app="logs-example",namespace="default"}

Here app is one of the deployment’s labels and namespace is the namespace the deployment runs in.

You can also use the Split option to view multiple LogQL queries side by side in the same Grafana console.

You should now see logs from all the namespaces and pods in your cluster. You can use various LogQL queries to aggregate the data and also stream them live from the log viewer console.

Here are a few more LogQL examples.

To show the number of requests received per minute (for pods in the default namespace):

  • type: time series
  • query: count_over_time({namespace="default"}[1m])

To show the number of login attempts per minute:

  • type: time series
  • query: count_over_time({namespace="default"}|="POST /login"[1m])

To show the requests that caused a server error (code 5xx):

  • type: logs
  • query: {namespace="default"} |~ " 5.. "

Clean up

If you are no longer interested in Grafana Loki, you can clean up with these commands:

helm delete loki --namespace loki
helm delete grafana --namespace <YOUR-NAMESPACE>
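
If you used the Kustomize variant instead, you can tear everything down from the same manifests, which also removes the grafana-loki namespace defined in namespace.yaml:

kustomize build . --enable-helm | kubectl delete -f -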