How to install and set up Kafdrop – Kafka Web UI

Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information such as brokers, topics, partitions, consumers, and lets you view messages.

Apache Kafka is an open-source distributed event streaming platform. Kafka was originally developed at LinkedIn and was later donated to the Apache Software Foundation. It is designed for high throughput and can handle millions of messages per second.

Kafka is an excellent platform for processing huge volumes of messages very quickly. However, it has one notable drawback: it does not ship with a built-in user interface where users can inspect the state of the cluster.

Kafdrop helps solve this problem. It gives us a simple, lightweight, and easy-to-use user interface where one can not only see the required information but also create and delete Kafka topics.

Features

  • View Kafka brokers - topic and partition assignments, and controller status
  • View topics - partition count, replication status, and custom configuration
  • Browse messages - JSON, plain text, Avro and Protobuf encoding
  • View consumer groups - per-partition parked offsets, combined and per-partition lag
  • Create new topics
  • View ACLs
  • Support for Azure Event Hubs

Requirements

  • Java 11 or newer
  • Kafka (version 0.11.0 or newer) or Azure Event Hubs

Optional, additional integration:

  • Schema Registry

Kafdrop can be installed by executing a JAR file, via Docker, or on Kubernetes. In this guide we will run the Kafdrop JAR directly.
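For reference, the Docker route is a one-liner. This is a minimal sketch based on the obsidiandynamics/kafdrop image; replace <host:port> with a broker address that is reachable from inside the container:

docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port> \
    obsidiandynamics/kafdrop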


Installing Java

Kafdrop needs Java to run, so we first need to install it on our local environment; the version must be Java 11 or newer. We don't need to add any third-party repository because the OpenJDK package is already available in the system's base repositories.

Let us install OpenJDK 11 on RHEL 8 based distributions with the following command. For other Linux distributions, please consult your package manager's documentation.

sudo dnf install java-11-openjdk

Type y and press enter when prompted to accept the installation.
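Once the installation completes, you can confirm that a suitable Java version is on the path (the exact build string will differ on your system):

java -version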

Get the latest Kafdrop

Kafdrop is distributed as a JAR file. Get the latest release from the GitHub releases page: https://github.com/obsidiandynamics/kafdrop/releases.

We are going to download and install Kafdrop in the /opt/kafdrop directory. First, become root and set up the required directory structure:

sudo mkdir -p /opt/kafdrop
cd /opt/kafdrop/

Next, download the latest Kafdrop release and rename the file. In this guide we are downloading version 3.30.0.

curl -LO https://github.com/obsidiandynamics/kafdrop/releases/download/3.30.0/kafdrop-3.30.0.jar

mv kafdrop-3.30.0.jar kafdrop.jar
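As a quick sanity check, confirm the JAR ended up where the rest of this guide expects it:

ls -lh /opt/kafdrop/kafdrop.jar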

Running the Jar

Once the JAR is downloaded, we can run it with the java -jar command:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar /opt/kafdrop/kafdrop.jar \
    --kafka.brokerConnect=localhost:9092

If unspecified, kafka.brokerConnect defaults to localhost:9092.
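kafka.brokerConnect accepts a comma-delimited list of host:port pairs, so pointing Kafdrop at a multi-broker cluster looks something like this (the hostnames below are placeholders):

--kafka.brokerConnect=broker1.example.com:9092,broker2.example.com:9092,broker3.example.com:9092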

Note: As of Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved via the Kafka admin API.

Once it starts, open a browser and navigate to http://server_ip:9000/. The port can be overridden by adding the following config:

--server.port=<port> --management.server.port=<port>
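For example, to serve the UI on port 9090 instead (a hypothetical choice), the full invocation would look like this:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar /opt/kafdrop/kafdrop.jar \
    --kafka.brokerConnect=localhost:9092 \
    --server.port=9090 --management.server.port=9090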

Optionally, configure a schema registry connection with:

--schemaregistry.connect=http://localhost:8081

and if you also require basic auth for your schema registry connection you should add:

--schemaregistry.auth=username:password

Finally, a default message format (e.g. to deserialize Avro messages) can optionally be configured as follows:

--message.format=AVRO

Valid format values are DEFAULT, AVRO, PROTOBUF. This can also be configured at the topic level via dropdown when viewing messages.
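Putting the optional flags together, a run against a local broker, a local Schema Registry, and Avro-encoded topics might look like this (adjust the addresses to your environment):

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar /opt/kafdrop/kafdrop.jar \
    --kafka.brokerConnect=localhost:9092 \
    --schemaregistry.connect=http://localhost:8081 \
    --message.format=AVRO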

Create a Systemd unit for Kafdrop service

When running Kafdrop on a production server, we have to run it in the background. Hence, create a systemd unit for it.

Create a kafdrop systemd service file

sudo vim /etc/systemd/system/kafdrop.service

Add this content to the file

[Unit]
Description=Kafdrop server
Documentation=https://github.com/obsidiandynamics/kafdrop
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
ExecStart=/bin/java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar /opt/kafdrop/kafdrop.jar \
    --kafka.brokerConnect=localhost:9092
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

Finally, save and close the file. To ensure that the service is recognized, reload systemd units:

sudo systemctl daemon-reload

Start and enable Kafdrop systemd service

Now, let's start and enable the service to make sure it will also become active again after a system reboot.

Start kafdrop

sudo systemctl start kafdrop

Confirm the service status to ensure that it is running as expected:

$ sudo systemctl status kafdrop
 kafdrop.service - Kafdrop server
   Loaded: loaded (/etc/systemd/system/kafdrop.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-04-16 10:49:28 UTC; 21s ago
     Docs: https://github.com/obsidiandynamics/kafdrop
 Main PID: 78642 (java)
    Tasks: 22 (limit: 23167)
   Memory: 334.7M
   CGroup: /system.slice/kafdrop.service
           └─78642 /bin/java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED -jar /opt/kafdrop/kafdrop.jar --kafka.brokerConnect=localhost:9092

Apr 16 10:49:35 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:35.046  INFO 78642 [           main] k.c.KafkaConfiguration                   : Checking keystore file kafka.keystore.jks
Apr 16 10:49:35 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:35.046  INFO 78642 [           main] k.c.KafkaConfiguration                   : Checking properties file kafka.properties
Apr 16 10:49:35 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:35.128  INFO 78642 [           main] k.s.BuildInfo                            : Kafdrop version: 3.30.0, build time: 2022-04->
Apr 16 10:49:36 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:36.039  INFO 78642 [           main] o.s.b.a.e.w.EndpointLinksResolver        : Exposing 13 endpoint(s) beneath base path '/a>
Apr 16 10:49:37 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:36.996  INFO 78642 [           main] i.u.Undertow                             : starting server: Undertow - 2.2.16.Final
Apr 16 10:49:37 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:37.011  INFO 78642 [           main] o.x.Xnio                                 : XNIO version 3.8.6.Final
Apr 16 10:49:37 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:37.029  INFO 78642 [           main] o.x.n.NioXnio                            : XNIO NIO Implementation Version 3.8.6.Final
Apr 16 10:49:37 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:37.075  INFO 78642 [           main] o.j.t.Version                            : JBoss Threads version 3.1.0.Final
Apr 16 10:49:37 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:37.139  INFO 78642 [           main] o.s.b.w.e.u.UndertowWebServer            : Undertow started on port(s) 9000 (http)
Apr 16 10:49:37 rockysrv.citizix.com java[78642]: 2022-04-16 10:49:37.727  INFO 78642 [           main] o.s.b.StartupInfoLogger                  : Started Kafdrop in 7.411 seconds (JVM runnin

Finally enable the service on boot:

sudo systemctl enable kafdrop

Once the service is successfully started, you can access the UI.

Open a browser and navigate to http://server_ip:9000/.
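If the page does not load from another machine, first check that Kafdrop answers locally, then make sure port 9000 is open in the firewall. The firewall-cmd commands below assume firewalld, the default on RHEL 8 based systems:

curl -sI http://localhost:9000/ | head -n 1
sudo firewall-cmd --permanent --add-port=9000/tcp
sudo firewall-cmd --reload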

The Cluster Overview screen is the landing page of the web UI.

You get to see the overall layout of the cluster - the individual brokers that make it up, their addresses, and some key broker stats - whether each broker is the controller and the number of partitions it owns. The latter is quite important - as your cluster size and the number of topics (and therefore partitions) grows, you generally want to see an approximately even distribution of partitions across the cluster.

Next is the Topics List, which in most cases is what you’re really here for. Any reasonably-sized microservices-based ecosystem might have hundreds, if not thousands of topics. As you’d expect, the list is searchable. The stats displayed alongside each topic are fairly ho-hum. The one worth noting is the under-replicated column. Essentially, it’s telling us the number of partition replicas that have fallen behind the primary. Zero is a good figure. Anything else is indicative of either a broker or a network issue that requires immediate attention.

Click on a topic in the list to get to the Topic Overview screen.

The screen is subdivided into four sections.

On the top-left, there is a summary of the topic stats - a handy view, not dissimilar to what you would have seen in the cluster overview.

On the top-right, you can view the custom configuration. In the example above, the topic runs a stock-standard config, so there’s nothing to see. Had the configuration been overridden, you’d see a set of custom values like in the example below.

The bottom-left section lists the partitions. The partition indexes are links - clicking through will reveal the first 100 messages in that partition.

The consumers section on the bottom-right lists the consumer group names as well as their aggregate lag (the sum of all individual partition lags).

Clicking on the consumer group on the Topic Overview gets you into the Consumer View. This screen provides a comprehensive breakdown of a single consumer group.

The view is sectioned by topic. For each topic, a separate table lists the underlying partitions. Against each partition, we see the committed offset, which we can compare against the first and last offsets to see how our consumer is tracking. Conveniently, Kafdrop displays the computed lag for each partition, which is aggregated at the footer of each topic table.
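As a quick illustration with made-up numbers: if a partition's last offset is 1500 and the group's committed offset is 1480, the lag shown for that partition is 20, and the total at the footer is simply the sum of those per-partition values.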

The Message View screen is the coveted topic viewer that has in all likelihood brought you here. You can get to the message view in one of two ways:

  1. Click the View Messages button in the Topic Overview screen.
  2. Click the individual partition link in the Topic Overview.

It’s exactly what you’d expect - a chronologically-ordered list of messages (or records, in Kafka parlance) for a chosen partition.

Each entry conveniently displays the offset, the record key (if one is set), the timestamp of publication, and any headers that may have been appended by the producer.

There’s another little trick up Kafdrop’s sleeve. If the message happens to be a valid JSON document, the topic viewer can nicely format it. Click on the green arrow on the left of the message to expand it.

Conclusion

In this guide we learnt how to install and use the Kafdrop Kafka UI.
