
Apache Kafka 1.0 Now Available

The Apache Software Foundation has announced Apache Kafka 1.0, adding an improved Streams API, enhanced metrics, better tolerance for disk failures, general bug fixes, and more.

Apache Kafka is an open-source distributed streaming platform used by thousands of companies worldwide. Some enhancements in this release include:

  • Various improvements to the Streams API, including a new API to expose the state of active tasks at runtime, an improved builder API, a new cogroup API, and improved debuggability (a short sketch of the builder API follows this list)
  • A wide range of improvements to metrics, such as new health-check metrics and a global topic and partition count
  • Java 9 support, bringing faster TLS and CRC32 implementations. This leads to faster over-the-wire encryption
  • Improved authentication error handling
  • Better tolerance for disk failures; a single disk failure in a JBOD broker no longer brings the entire broker down
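
To give a flavour of the improved builder API, here is a minimal word-count sketch written against the new StreamsBuilder class. The topic names, application id and broker address are illustrative assumptions, not details taken from the release itself.

  // A minimal word-count topology built with the newer StreamsBuilder API.
  // Topic names, application id and broker address below are illustrative only.
  import java.util.Properties;
  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;
  import org.apache.kafka.streams.kstream.KStream;
  import org.apache.kafka.streams.kstream.Produced;

  public class WordCountSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "word-count-sketch");
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
          props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
          props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

          // Describe the processing steps with the builder, then hand the
          // resulting topology to the KafkaStreams runtime.
          StreamsBuilder builder = new StreamsBuilder();
          KStream<String, String> words = builder.stream("words-input");
          words.groupBy((key, word) -> word)   // re-key each record by the word itself
               .count()                        // count occurrences per word
               .toStream()
               .to("words-output", Produced.with(Serdes.String(), Serdes.Long()));

          KafkaStreams streams = new KafkaStreams(builder.build(), props);
          streams.start();
          Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
      }
  }

Separating topology construction from execution also ties in with the debuggability point above: calling builder.build().describe() returns a printable description of the topology.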

Although Kafka is already in widespread use, this is its first major release milestone. Neha Narkhede, co-creator of Kafka, explains why:

For Apache Kafka, the wait for 1.0 was less about stability and more about completeness of the vision that we and the community set to build towards back when we first created Kafka. After all, Kafka has been in production at thousands of companies for several years.

Specifically, Narkhede outlines this vision:

So that is the vision we had in mind and what we set out to build towards – a Streaming Platform; the ability to read, write, move and process streams of data with transactional correctness at company-wide scale.

Narkhede also explains the iterations that Kafka has gone through to achieve this vision. These include:

  1. Introducing a log-like abstraction for continuous streams, where publishing appends to an ordered log and consuming reads continuously from a given offset
  2. Adding replication and fault tolerance to logs
  3. Introducing the Connect and Streams APIs, which make it easy to get data into and out of Kafka and to process it
  4. Exactly-once semantics for stream processing through transactions (a sketch of a transactional send follows this list)
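
The exactly-once guarantee in the last step is built on the transactional producer API. The sketch below shows the basic shape of a transactional send; the topic name, transactional id and broker address are illustrative assumptions.

  // A minimal transactional send: either every record in the transaction becomes
  // visible to read_committed consumers, or none of them do.
  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerConfig;
  import org.apache.kafka.clients.producer.ProducerRecord;
  import org.apache.kafka.common.KafkaException;
  import org.apache.kafka.common.serialization.StringSerializer;

  public class TransactionalSendSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
          props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
          props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
          // Setting a transactional id enables transactions (and idempotence) for this producer.
          props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "example-txn-id");

          try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
              producer.initTransactions();
              try {
                  producer.beginTransaction();
                  producer.send(new ProducerRecord<>("events", "key-1", "value-1"));
                  producer.send(new ProducerRecord<>("events", "key-2", "value-2"));
                  producer.commitTransaction();
              } catch (KafkaException e) {
                  // Abort on recoverable errors; fatal errors such as a fenced
                  // producer require closing the producer instead.
                  producer.abortTransaction();
              }
          }
      }
  }

A Kafka Streams application gets the same guarantee by setting the processing.guarantee configuration to exactly_once.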

Duncan

Duncan is a technology professional with over 20 years' experience working in various IT roles. He has an interest in cyber security, and a wide range of other skills in radio, electronics and telecommunications.
