Releases: akka/alpakka-kafka

0.8.3

28 Nov 22:32
  • #53 Fixes a major off-by-one issue with offsets in the Kafka manual committer
  • Automatically stops the publisher actor after it fails, allowing simpler manual restarting
  • Other minor fixes

0.8.2

20 Oct 07:30

This is a bugfix release, addressing #41 and #43

0.8.1

09 Sep 14:04
  • Adds a cancel() API to PublisherWithCommitSink to allow clean closing of all underlying resources (see the sketch below this list)
  • Properly replace log4j with slf4j
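
For context, here is a minimal sketch of how cancel() can be used to tear down a manually committing consumer. Only cancel() itself is taken from this release note; the ConsumerProperties fields and the consumeWithOffsetSink, publisher and offsetCommitSink names follow the reactive-kafka README of that era and are assumptions that may not match the exact API.

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import com.softwaremill.react.kafka.{ConsumerProperties, ReactiveKafka}
import kafka.serializer.StringDecoder

object ManualCommitExample extends App {
  implicit val system = ActorSystem("reactive-kafka-example")
  implicit val materializer = ActorMaterializer()

  val kafka = new ReactiveKafka()

  // Assumed API of that era: consumeWithOffsetSink returns a PublisherWithCommitSink
  // bundling the message publisher with a sink used to commit offsets manually.
  val consumerWithOffsetSink = kafka.consumeWithOffsetSink(ConsumerProperties(
    brokerList = "localhost:9092",
    zooKeeperHost = "localhost:2181",
    topic = "lowercaseStrings",
    groupId = "groupName",
    decoder = new StringDecoder()
  ))

  Source(consumerWithOffsetSink.publisher)        // Akka Streams 1.0-style Source from a Publisher
    .map { msg => /* process msg here */ msg }    // pass the message on so its offset gets committed
    .to(consumerWithOffsetSink.offsetCommitSink)
    .run()

  // New in 0.8.1: closes the consumer and all of its underlying resources.
  // Shown here only to illustrate the shutdown path; call it when you actually shut down.
  consumerWithOffsetSink.cancel()
}
```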

0.8.0

27 Aug 14:43

0.7.2

25 Aug 08:55
  • Critical fix for the Java API

0.7.1

24 Aug 12:16

Adds Java API

0.7.0

23 Jul 19:20
  • Introduces a new API for creating producers/consumers
    • allows passing a custom dispatcher name
    • allows defining a custom RequestStrategy
    • allows passing Kafka-specific properties
  • Fixes #21 by supporting Akka-based error handling (more in the README; see the sketch below this list)
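
The Akka-based error handling amounts to creating the Kafka publisher/subscriber actors as children of your own actor and letting a standard Akka supervision strategy decide how to react to failures. The sketch below shows only that generic wiring; how the consumer Props are obtained from reactive-kafka (and how the custom dispatcher name or RequestStrategy is attached to them) is deliberately left as a parameter, since those exact calls are assumptions not covered by this release note.

```scala
import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.Restart
import scala.concurrent.duration._

// Sketch of Akka-based error handling: the Kafka consumer actor is created as a
// child, so failures are handled by this actor's supervision strategy instead of
// silently killing the stream. `consumerProps` would come from the library's new
// creation API (possibly with .withDispatcher("my-kafka-dispatcher") applied).
class ConsumerSupervisor(consumerProps: Props) extends Actor {

  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: Exception => Restart // restart the consumer actor on failure
    }

  val consumer: ActorRef = context.actorOf(consumerProps, "kafka-consumer")

  def receive: Receive = Actor.emptyBehavior
}
```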

0.6.0

26 Jun 14:58
  • Fixed error handling (#10)
  • Some minor improvements and additional tests

0.5.0

21 May 13:32
  • Updated dependencies (Akka, reactive-streams)
  • Fixed a bug causing high CPU load and rendering actors non-cancellable
  • Removed some redundant checks (now performed by underlying Akka code)

0.4.0

13 Apr 17:55
  • Support for Encoders/Decoders of different message types.
    This is a breaking change that requires client code to be updated to explicitly declare a proper Decoder/Encoder instead of using raw Strings. If you want to keep String-based messages, use a new StringEncoder() or new StringDecoder() (see the sketch after this list). Thanks @javierarrieta for this contribution! (#7)
  • Fixed the consumer behavior when new messages appear in the stream.
    When connected to a Kafka queue, the flow consumed all the elements present, but after that, new elements were not polled correctly. This fix ensures that an ActorPublisher keeps polling Kafka for new elements as long as there is demand.
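
To illustrate keeping String-based messages after this breaking change, here is a minimal sketch. StringDecoder and StringEncoder come from Kafka 0.8's kafka.serializer package; the ReactiveKafka constructor arguments and the consume/publish signatures are assumptions modeled on the README of that era.

```scala
import akka.actor.ActorSystem
import com.softwaremill.react.kafka.ReactiveKafka
import kafka.serializer.{StringDecoder, StringEncoder}

object StringCodecExample extends App {
  implicit val system = ActorSystem("reactive-kafka")

  val kafka = new ReactiveKafka(host = "localhost:9092", zooKeeperHost = "localhost:2181")

  // From 0.4.0 on, the Decoder/Encoder must be passed explicitly.
  // Passing StringDecoder/StringEncoder keeps the previous String-based behavior.
  val publisher  = kafka.consume("lowercaseStrings", "groupName", new StringDecoder())
  val subscriber = kafka.publish("uppercaseStrings", "groupName", new StringEncoder())
}
```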