Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information about brokers, topics, partitions, and consumers, and lets you view messages.
Apache Kafka is an open-source distributed event streaming platform. Kafka was originally developed at LinkedIn and was later donated to the Apache Software Foundation. It can process over 1 million messages per second.
Kafka is an excellent platform for processing huge volumes of messages very quickly. However, it has one drawback: it does not come with a built-in user interface where users can see information about the cluster.
Kafdrop helps solve this problem. It gives us a simple, lightweight, and easy-to-use user interface where one can not only see that information but also create and delete Kafka topics.
Features
- View Kafka brokers - topic and partition assignments, and controller status
- View topics - partition count, replication status, and custom configuration
- Browse messages - JSON, plain text, Avro and Protobuf encoding
- View consumer groups - per-partition parked offsets, combined and per-partition lag
- Create new topics
- View ACLs
- Support for Azure Event Hubs
Requirements
- Java 11 or newer
- Kafka (version 0.11.0 or newer) or Azure Event Hubs
Optional, additional integration:
- Schema Registry
Kafdrop can be installed by executing a JAR file, via Docker, or on Kubernetes. In this guide we will run the Kafdrop JAR directly.
Installing Java
Kafdrop needs Java 11 or newer to run, so we first need to install it on our local environment. We don't need to add any third-party repository because the Java package is already available in the system's base repositories.
Let us install the latest Java on RHEL 8 based distributions with the following command. For other Linux distributions, please consult your package manager's documentation.
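Here we install OpenJDK 11 from the base AppStream repository; any newer OpenJDK package provided by your distribution will also satisfy the requirement.

```bash
# Install OpenJDK 11 (available in the RHEL 8 / Rocky / AlmaLinux base repos)
sudo dnf install java-11-openjdk

# Verify the installed version
java -version
```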
Type y and press Enter when prompted to accept the installation.
Get the latest Kafdrop
Kafdrop is available as a jar file. Get the latest release from the project's GitHub releases page.
We are going to download and install Kafdrop in the /opt/kafdrop directory. First, become root and set up the required directory structure:
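A minimal way to do this:

```bash
# Switch to the root user
sudo su -

# Create the installation directory and move into it
mkdir -p /opt/kafdrop
cd /opt/kafdrop
```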
Next, download the latest Kafdrop release and rename the file. In this guide we are downloading version 3.30.0.
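Assuming the standard asset naming used on the Kafdrop releases page, the commands look like this (double-check the exact URL for the version you pick):

```bash
# Download the Kafdrop 3.30.0 release JAR from GitHub
wget https://github.com/obsidiandynamics/kafdrop/releases/download/3.30.0/kafdrop-3.30.0.jar

# Rename it so later commands can use a fixed file name
mv kafdrop-3.30.0.jar kafdrop.jar
```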
Running the Jar
Once the jar is downloaded, we can run it with the java -jar command:
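For example, assuming the jar is in /opt/kafdrop and a broker is listening on localhost:9092:

```bash
cd /opt/kafdrop

# kafka.brokerConnect takes a comma-separated list of host:port pairs
java -jar kafdrop.jar --kafka.brokerConnect=localhost:9092
```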
If unspecified, kafka.brokerConnect defaults to localhost:9092.
Note: As of Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved via the Kafka admin API.
Once it starts, open a browser and navigate to http://server_ip:9000/. The port can be overridden by adding the following config:
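```
--server.port=<port> --management.server.port=<port>
```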
Optionally, configure a schema registry connection with:
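For example, with a registry listening on its default port 8081:

```
--schemaregistry.connect=http://localhost:8081
```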
and if you also require basic auth for your schema registry connection, you should add:
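```
--schemaregistry.auth=username:password
```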
Finally, a default message format (e.g. to deserialize Avro messages) can optionally be configured as follows:
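For example, to default to Avro:

```
--message.format=AVRO
```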
Valid format values are DEFAULT, AVRO, and PROTOBUF. This can also be configured at the topic level via a dropdown when viewing messages.
Create a systemd unit for the Kafdrop service
When running Kafdrop on a production server, we have to run it in the background. Hence, we will create a systemd unit for it.
Create a Kafdrop systemd service file:
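Using vi (or any editor you prefer):

```bash
sudo vi /etc/systemd/system/kafdrop.service
```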
Add the following content to the file:
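A minimal unit that matches the install path and broker address used above; adjust ExecStart if your paths or broker list differ.

```ini
[Unit]
Description=Kafdrop Kafka Web UI
Documentation=https://github.com/obsidiandynamics/kafdrop
After=network.target

[Service]
Type=simple
# Run the jar downloaded to /opt/kafdrop; point kafka.brokerConnect at your cluster
ExecStart=/usr/bin/java -jar /opt/kafdrop/kafdrop.jar --kafka.brokerConnect=localhost:9092
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```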
Save and close the file. To ensure that the new service is recognized, reload the systemd units:
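```bash
sudo systemctl daemon-reload
```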
Start and enable Kafdrop systemd service
Now, let's start and enable the service to make sure it also comes back up after a system reboot.
Start Kafdrop:
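```bash
sudo systemctl start kafdrop
```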
Confirm the service status to ensure that it is running as expected:
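```bash
systemctl status kafdrop
```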
Finally, enable the service to start on boot:
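```bash
sudo systemctl enable kafdrop
```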
Once the service is successfully started, you can access the UI.
Navigating the UI
Open a browser and navigate to http://server_ip:9000/.
The Cluster Overview screen is the landing page of the web UI.
You get to see the overall layout of the cluster - the individual brokers that make it up, their addresses, and some key broker stats: whether a broker is the controller and the number of partitions it owns. The latter is quite important - as your cluster size and the number of topics (and therefore partitions) grows, you generally want to see an approximately level distribution of partitions across the cluster.
Next is the Topics List, which in most cases is what you’re really here for. Any reasonably-sized microservices-based ecosystem might have hundreds, if not thousands of topics. As you’d expect, the list is searchable. The stats displayed alongside each topic are fairly ho-hum. The one worth noting is the under-replicated column. Essentially, it’s telling us the number of partition replicas that have fallen behind the primary. Zero is a good figure. Anything else is indicative of either a broker or a network issue that requires immediate attention.
Click on a topic in the list to get to the Topic Overview screen.
The screen is subdivided into four sections.
On the top-left, there is a summary of the topic stats - a handy view, not dissimilar to what you would have seen in the cluster overview.
On the top-right, you can view the custom configuration. If the topic runs a stock-standard config, there's nothing to see here. Had the configuration been overridden, you'd see the set of overridden values instead.
The bottom-left section enumerates the partitions. The partition indexes are links - clicking through will reveal the first 100 messages in that partition.
The consumers section on the bottom-right lists the consumer group names as well as their aggregate lag (the sum of all individual partition lags).
Clicking on the consumer group on the Topic Overview gets you into the Consumer View. This screen provides a comprehensive breakdown of a single consumer group.
The view is sectioned by topic. For each topic, a separate table lists the underlying partitions. Against each partition, we see the committed offset, which we can compare against the first and last offsets to see how our consumer is tracking. Conveniently, Kafdrop displays the computed lag for each partition, which is aggregated at the footer of each topic table.
The Message View screen is the coveted topic viewer that has in all likelihood brought you here. You can get to the message view in one of two ways:
- Click the View Messages button in the Topic Overview screen.
- Click the individual partition link in the Topic Overview.
It’s exactly what you’d expect - a chronologically-ordered list of messages (or records, in Kafka parlance) for a chosen partition.
Each entry conveniently displays the offset, the record key (if one is set), the timestamp of publication, and any headers that may have been appended by the producer.
There’s another little trick up Kafdrop’s sleeve. If the message happens to be a valid JSON document, the topic viewer can nicely format it. Click on the green arrow on the left of the message to expand it.
Conclusion
In this guide we learnt how to install and use the Kafdrop Kafka UI.