- switched default encoding for message value and key of the JS Kafka client to Buffer
- simplified integration tests
- updated dependencies:
  - kafka-node ~2.4.1 → ~2.6.1
  - eslint ~4.18.2 → ~4.19.1
  - mocha ~5.0.4 → ~5.2.0
  - sinon ^4.4.6 → ^6.0.0
  - node-rdkafka ~2.2.3 → ~2.3.3
  - async ~2.6.0 → ~2.6.1
- updated NConsumer and NProducer to debug-log and concatenate errors thrown when requiring the native lib
- node-rdkafka has segfault bugs in 2.3.1 → falling back to 2.2.3
- corrected consumer callback error passing (now also logging a warning not to do it)
- now allows passing a correlation-id (opaque key) when producing with NProducer
- updated dependencies:
  - uuid ~3.1.0 → ~3.2.1
  - bluebird ~3.5.0 → ~3.5.1
  - debug ^3.0.0 → ^3.1.0
  - kafka-node ^2.3.0 → ^2.4.1
  - eslint ^4.11.0 → ^4.18.2
  - express ^4.16.2 → ^4.16.3
  - mocha ~5.0.2 → ~5.0.4
  - sinon ^4.1.2 → ^4.4.6
  - node-rdkafka ^2.2.0 → ^2.3.1
- now starting analytics immediately
- propagating connection promise correctly
- now proxying consumer_commit_cb
- upgraded dependencies: eslint@4.11.0, sinon@4.1.2, node-rdkafka@2.2.0
- upgraded node-librdkafka dependency to 2.1.1
- added pause and resume functions for NConsumer
- added commitMessage method to NConsumer
- added option to switch partition selection to murmurv2
- intelligent healthcheck, check out librdkafka/Health.md
- average batch processing time in getStats() for NConsumer
- clear rejects for operations when the clients are not connected
- added unit tests for Health.js
- refactored readme
- intelligent fetch grace times in batch mode
- small optimisations on NConsumer
- BREAKING CHANGE NConsumer 1:n (batch mode) no longer commits on every x batches; it only commits once a certain amount of messages has been consumed and processed: `requiredAmountOfMessagesForCommit = batchSize * commitEveryNBatch`
- this increases performance and results in fewer commit requests when a topic's lag has been resolved and the amount of "freshly" produced messages is clearly lower than batchSize
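The new commit rule above can be sketched in plain JavaScript. This is an illustration of the documented formula, not sinek's actual implementation; function names are made up for the example.

```javascript
// Commits happen only after batchSize * commitEveryNBatch messages
// have been consumed AND processed, instead of after every n-th batch.
function requiredMessagesForCommit(batchSize, commitEveryNBatch) {
  return batchSize * commitEveryNBatch;
}

function shouldCommit(processedSinceLastCommit, batchSize, commitEveryNBatch) {
  return processedSinceLastCommit >=
    requiredMessagesForCommit(batchSize, commitEveryNBatch);
}

// With batchSize = 500 and commitEveryNBatch = 5, a commit is only
// made once 2500 messages were processed - a trickle of freshly
// produced messages no longer triggers a commit per batch.
console.log(requiredMessagesForCommit(500, 5)); // 2500
console.log(shouldCommit(2400, 500, 5)); // false
console.log(shouldCommit(2500, 500, 5)); // true
```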
- comes with the new analytics class for NProducers and NConsumers
- check out librdkafka/Analytics.md
- new offset info functions for NConsumer (check out librdkafka/README.md)
- new getLagStatus() function for NConsumer that fetches and compares partition offsets
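To make the comparison concrete, here is a small sketch of what a lag status boils down to per partition: the broker's latest offset minus the consumer's committed offset. The function and field names are assumptions for the example, not sinek's internals.

```javascript
// lag per partition = latest broker offset - committed consumer offset
function computeLag(partitions) {
  return partitions.map(({ partition, highOffset, committedOffset }) => ({
    partition,
    lag: Math.max(0, highOffset - committedOffset),
  }));
}

const status = computeLag([
  { partition: 0, highOffset: 1000, committedOffset: 950 },
  { partition: 1, highOffset: 420, committedOffset: 420 },
]);
console.log(status); // [ { partition: 0, lag: 50 }, { partition: 1, lag: 0 } ]
```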
- updated node-rdkafka to 2.1.0, which ships fixes
- added librdkafka/Metadata class
- added new metadata functions to NProducer
- send, buffer and _sendBufferFormat are now async functions
- ^ BREAKING CHANGE sinek now requires at least Node.js 7.6
- added `auto` mode for NProducer (automatically produces to the latest partition count, even if it changes during the runtime of a producer → updates every 5 minutes)
- refactored and optimized NProducer send logic
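The idea behind `auto` mode can be sketched as a partitioner that periodically re-reads the topic's partition count and spreads messages across whatever count is current. This is an illustration only; `createAutoPartitioner` and `fetchPartitionCount` are hypothetical names, and sinek's real refresh interval is the 5 minutes mentioned above.

```javascript
// Sketch: cycle over partitions, refreshing the partition count
// periodically so newly added partitions get picked up at runtime.
function createAutoPartitioner(fetchPartitionCount, refreshMs = 5 * 60 * 1000) {
  let partitionCount = fetchPartitionCount();
  let lastRefresh = Date.now();
  let counter = 0;

  return function nextPartition() {
    if (Date.now() - lastRefresh >= refreshMs) {
      partitionCount = fetchPartitionCount(); // metadata lookup stand-in
      lastRefresh = Date.now();
    }
    return counter++ % partitionCount;
  };
}

const next = createAutoPartitioner(() => 3);
console.log(next(), next(), next(), next()); // 0 1 2 0
```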
- updated librdkafka/README.md
- added new tests for NProducer
- fixed bug in NConsumer consume() consume options, where commitSync field was always true
- added JSDOC for NConsumer and NProducer
- new 1:N consumer mode (making 1:1 mode configurable with params -> see lib/librdkafka/README.md)
- more stats for consumer batch mode
- new consumer batch event
- BREAKING CHANGE consumer.consume(syncEvent) now rejects if you have `enable.auto.commit: true` set
- updated librdkafka/README.md
- Updated dependencies
- Re-created lockfile
- fixed bug in sync commit (now catching timeout errors)
- NConsumer automatically sets memory-related configs (easier start if you missed those config params)
- NConsumer in 1:1 mode will now use commitMessageSync instead of commitMessage (this reduces performance, but ensures we do not stack tons of commit requests in the consumer's queue); sinek 6.5.0 will follow with an option to set the amount of messages (1-10000) that are consumed & committed in one step
- bugfix on NProducer (partitions ranged from 1-30 instead of 0-29)
- added streaming mode to NConsumer: pass `true` to .connect(true) and omit .consume() to enable streaming-mode consuming
- adjusted sasl example
- fixed connection event (ready) for connect/ consumers
- fixed a few small things
- added tconf fields to config
- updated docs
- more and better examples
- updated NProducer API to support node-rdkafka 2.0.0 (which had breaking changes regarding its topic API)
- sinek now ships with an optional dependency to node-rdkafka
- 2 native clients that embed rdkafka behind the usual sinek connector API interface
- NConsumer and NProducer
- sasl support
- additional config params through noptions
- fixed a few option reference passes to allow for better ssl support
- added /kafka-setup that allows for an easy local ssl kafka broker setup
- added /ssl-example to show how ssl connections are configured
- updated readme
- added eslint and updated code style accordingly
- Updated to latest kafka-node 2.2.0
- Fixed bug in logging message value length
- Added 3 new format methods (publish, unpublish, update) to the connect producer
- Added partitionKey (optional) to all bufferFormat operations of publisher and connect producer
- Updated all dependencies
- Clients can now omit Zookeeper and connect directly to a Broker by omitting zkConStr and passing kafkaHost in the config
Producer/Consumer Key Changes #704
- BREAKING CHANGE The `key` is decoded as a `string` by default. Previously it was a `Buffer`. The preferred encoding for the key can be defined by the `keyEncoding` option on any of the consumers and will fall back to `encoding` if omitted
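The documented fallback chain can be sketched as follows. The `"utf8"` default is an assumption for the example (it matches the "decoded as a string by default" note above); only `keyEncoding` and `encoding` are named in the changelog.

```javascript
// key is decoded with keyEncoding if set, otherwise with the general
// encoding option, otherwise as a utf8 string (assumed default).
function resolveKeyEncoding(config) {
  return config.keyEncoding || config.encoding || "utf8";
}

console.log(resolveKeyEncoding({ keyEncoding: "buffer" })); // "buffer"
console.log(resolveKeyEncoding({ encoding: "utf8" }));      // "utf8"
console.log(resolveKeyEncoding({}));                        // "utf8"
```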
- First entry in CHANGELOG.md