Feb 10, 2021
Broker communication in Apache Kafka® 2.7 and beyond
Kafka is moving away from ZooKeeper, slowly but surely. Find out how this impacts broker communication in the latest versions of Apache Kafka.
Due to scalability issues and the duplication involved in running two separate systems, Apache Kafka is engaged in the long process of breaking its dependency on ZooKeeper. In this article, we’ll look at how this will change the way that brokers communicate with each other and with the rest of the cluster.
Kafka and ZooKeeper today
When setting up an Apache Kafka broker, there are three essential broker configurations that you need to define. The broker needs an identity that it can announce to the rest of the system (broker.id). It needs to know where to write topic data (log.dirs). And it needs to connect to ZooKeeper (zookeeper.connect) in order to join its Apache Kafka cluster as a productive member. This tells you just how central ZooKeeper currently is for Kafka.
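To make that concrete, here’s a minimal sketch of what those three settings could look like in a broker’s server.properties file; the values are placeholders, only the configuration keys come from Kafka itself:

```properties
# Identity this broker announces to the rest of the cluster
broker.id=1

# Where this broker writes topic data (placeholder path)
log.dirs=/var/lib/kafka/data

# ZooKeeper ensemble the broker connects to in order to join the cluster
# (placeholder host:port list)
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```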
As a brief recap, an Apache Kafka system consists of producers, consumers, brokers and ZooKeeper.
- Producers send records to the cluster
- Consumers fetch records from the cluster
- Brokers handle the requests from producers and consumers, and keep data replicated within the cluster
ZooKeeper keeps the whole band in sync. It stores the cluster configuration and maintains cluster membership. Brokers send a regular heartbeat signal to ZooKeeper. If no heartbeat signal arrives, the broker’s ZooKeeper session times out. When this information reaches the controller broker, it triggers a new leader election for all partitions with a leader on that broker. If the timed-out broker was the controller, the other brokers race to take over and a new controller arises.
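To put the producer and consumer roles into concrete terms, here is a minimal sketch using the standard Kafka Java clients. The broker address, topic name, consumer group and String (de)serializers are placeholder choices for illustration only:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ClientRolesSketch {
    public static void main(String[] args) {
        // Producer: sends records to the brokers
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder address
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello"));
        }

        // Consumer: fetches records from the brokers
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.printf("%s = %s%n", r.key(), r.value()));
        }
    }
}
```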
So what’s the problem?
There are two rather large issues with the ZooKeeper dependency. Firstly, ZooKeeper limits Kafka’s scalability. As Kafka clusters get larger, so does the metadata that ZooKeeper needs to store, and that metadata must periodically be loaded from ZooKeeper into Kafka, for example when a new controller takes over. The larger the cluster, the longer the loading takes, and long loading times run the risk of a failure in the middle of the operation. This can leave brokers in divergent states, or at worst leave ZooKeeper out of sync with the state held in the controller’s memory.
Another chronic issue is maintaining two systems without mistakes and discrepancies creeping in. The two systems are set up and configured independently, usually by hand, and external utilities can modify one without the other knowing. In both cases, errors are possible, and we’d say even likely.
And finally, let’s face it, ZooKeeper isn’t the easiest system to maintain. At Aiven we know how, but it’s a steep learning curve for anyone adopting Kafka for the first time.
After the breakup
In the post-ZooKeeper world, Apache Kafka will take care of its own configuration management. The configuration will be written to a metadata log that records every metadata change: topics, partitions, ISRs, configurations and so on. The log will be written by a Raft quorum formed by the controller nodes. The Raft quorum elects a leader, called the active controller, whose job it is to handle all RPCs from the brokers. (Note that Kafka adapts the standard Raft model: replication will be pull-based rather than push-based.)
Effectively, instead of one single controller, there will be several controllers available to take over if the active controller goes offline. These standby controllers are simply the other nodes in the quorum.
The brokers fetch the metadata they need from the active controller at regular intervals. These fetches also act as a heartbeat. If a broker hasn’t fetched metadata for a while, the active controller removes the entry for the broker from the cluster metadata.
As a side note, the new architecture will also speed up the creation and deletion of topics. As long as it’s up to ZooKeeper to maintain the list of topics and track which ones have changed, topic creation and deletion form a bottleneck in the system. In the new design, creating or deleting a topic is just another entry appended to the metadata log.
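From a client’s point of view, topic creation already goes through the brokers via the Admin API; what changes is how the controller records the new topic behind the scenes. As a rough sketch, with placeholder topic name, partition count, replication factor and broker address:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (Admin admin = Admin.create(props)) {
            // Illustrative values: 3 partitions, replication factor 2
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 2);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```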
But what does it mean today?
For Aiven for Kafka users, the main difference in a ZooKeeper-free Kafka will be improved performance. Over here we’re busy planning and refactoring, but all the changes are under the hood. All you have to do is lean back, sip your coffee and enjoy the freedom of a fully managed Kafka service.
But maybe you didn’t come here to be told that, but rather to find out more about what is going on with brokers in the latest Kafka release.
Raft module
Apache Kafka 2.7 adds the first pieces of the Raft implementation: a new “raft” module containing the core consensus protocol. It is not yet fully integrated with the controller, but you can already test its performance with a standalone server.
Inter-broker API
Apache Kafka 2.7 adds a new inter-broker API, AlterIsr, which gives the controller the exclusive ability to update the leader and ISR state. As a result, the metadata sent by the controller always reflects the latest state, and the controller can reject inconsistent leader and ISR changes.
Also, after updates it’s no longer necessary to reinitialize the current state. This can reduce shutdown time significantly.
And finally, the reassignment of partitions is only complete when new replicas are added to the ISR. Because the controller no longer has to wait for a change notification, the process is quicker.
SCRAM configuration API
In 2.7 we also see the new broker-side SCRAM configuration API (KIP-554). Its job is to remove the ZooKeeper dependency from the process of managing SCRAM credentials.
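One way to use it is through the Java Admin client, so SCRAM credentials can be managed by talking to the brokers rather than to ZooKeeper. A hedged sketch of what that can look like; the user name, password, iteration count and broker address are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

public class ScramConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (Admin admin = Admin.create(props)) {
            // Upsert a SCRAM-SHA-256 credential for a placeholder user,
            // going through the brokers rather than ZooKeeper
            ScramCredentialInfo info =
                new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192);
            admin.alterUserScramCredentials(
                    Collections.singletonList(
                        new UserScramCredentialUpsertion("alice", info, "changeme")))
                .all().get();
        }
    }
}
```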
Wrapping up
Moving away from ZooKeeper is a huge deal for Apache Kafka, and will take a long time to accomplish. The latest release takes some further steps in that direction, also impacting the way brokers communicate. The changes are not enormous yet, but are indicative of great things to come. We’ll continue to keep abreast of the latest developments!
Tip! Further reading:
An introduction to Apache Kafka by Oskari Saarenmaa
5 Benefits of a Kafka-centric microservice architecture
Not using Aiven services yet? Sign up now for your free trial at https://console.aiven.io/signup!
In the meantime, make sure you follow our changelog and blog RSS feeds or our LinkedIn and Twitter accounts to stay up-to-date with product and feature-related news.