Silently dropping documents while writing to Elasticsearch

We noticed that when the write volume is heavy during the night, Alpakka Elasticsearch fails to write to Elasticsearch. No errors are logged when this happens, neither by the supervision strategy nor in the (!result.success) block. We use the following versions:

"com.lightbend.akka" %% "akka-stream-alpakka-elasticsearch" % "2.0.2"
"com.typesafe.akka" %% "akka-stream-kafka" % "2.0.6"
Akka version = "2.5.31"

On the client write side, we have tested with buffer sizes of 20, 100, 1000 and 1. Of all these combinations, only with the buffer set to 1 did we not lose any documents in ES. Is this a known issue? What is the recommended buffer size?
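For context, the buffer size we refer to is the one on ElasticsearchWriteSettings, configured roughly like this (a simplified sketch; the host and the buffer value are placeholders for our actual config):

```scala
import org.apache.http.HttpHost
import org.elasticsearch.client.RestClient
import akka.stream.alpakka.elasticsearch.ElasticsearchWriteSettings

// Alpakka Elasticsearch 2.0.x talks to the cluster through an implicit RestClient
implicit val esClient: RestClient =
  RestClient.builder(new HttpHost("localhost", 9200)).build()

// bufferSize controls how many WriteMessages are grouped into a single _bulk request;
// 1 effectively disables batching (the only value for which we saw no loss)
val writeSettings: ElasticsearchWriteSettings =
  ElasticsearchWriteSettings().withBufferSize(20)
```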
Our stream implements a simple pipeline (sketched below):

  1. Listen to Kafka -> 2. validate -> 3. send to Kafka -> 4. also write to ES -> 5. commit offset back to Kafka
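In code, the ES leg of that pipeline looks roughly like the following sketch. Names such as `inbound`, `events`, `listener`, and the `Doc` type are placeholders for our real ones, the validation is simplified, and the produce-back-to-Kafka step (3) is elided; it follows the passThrough pattern from the Alpakka docs:

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.kafka.{CommitterSettings, ConsumerMessage, ConsumerSettings, Subscriptions}
import akka.stream.ActorMaterializer
import akka.stream.alpakka.elasticsearch.scaladsl.ElasticsearchFlow
import akka.stream.alpakka.elasticsearch.{ElasticsearchWriteSettings, WriteMessage}
import org.apache.http.HttpHost
import org.apache.kafka.common.serialization.StringDeserializer
import org.elasticsearch.client.RestClient
import spray.json.DefaultJsonProtocol._
import spray.json.JsonFormat

implicit val system: ActorSystem = ActorSystem("es-writer")
implicit val mat: ActorMaterializer = ActorMaterializer() // required on Akka 2.5.x
implicit val esClient: RestClient =
  RestClient.builder(new HttpHost("localhost", 9200)).build()

// Stand-in for our real event type; the scaladsl needs a spray-json format in scope
final case class Doc(id: String, payload: String)
implicit val docFormat: JsonFormat[Doc] = jsonFormat2(Doc)

val consumerSettings =
  ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
    .withBootstrapServers("localhost:9092")
    .withGroupId("listener")

Consumer
  .committableSource(consumerSettings, Subscriptions.topics("inbound")) // 1. listen
  .map { msg =>
    val doc = Doc(msg.record.key, msg.record.value)                     // 2. validate (simplified)
    WriteMessage
      .createIndexMessage(doc.id, doc)                                  // 4. write to ES
      .withPassThrough(msg.committableOffset)                           // carry the offset along
  }
  .via(
    ElasticsearchFlow.createWithPassThrough[Doc, ConsumerMessage.CommittableOffset](
      indexName = "events",
      typeName = "_doc",
      settings = ElasticsearchWriteSettings().withBufferSize(20)
    )
  )
  .map { result =>
    if (!result.success)
      throw new RuntimeException(s"ES write failed: ${result.errorReason}")
    result.message.passThrough
  }
  .runWith(Committer.sink(CommitterSettings(system)))                   // 5. commit on success
```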

Are you checking that the write was successful before committing the offset to Kafka, as in the sample in the docs here?

Hi, I did not get a notification and missed this reply from 2 weeks ago! Thanks for responding.
Yes, we do check result.success to confirm the write, and there are no errors logged around the time the messages are lost.
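Concretely, the check stage looks roughly like this (a sketch using the placeholder `Doc` type from my earlier post; `checkResult` is a hypothetical helper and the logging is illustrative):

```scala
import akka.event.LoggingAdapter
import akka.kafka.ConsumerMessage.CommittableOffset
import akka.stream.alpakka.elasticsearch.WriteResult

// Fail the stream loudly instead of committing when a bulk item is rejected;
// errorReason should carry Elasticsearch's "reason" for the failed item
def checkResult(result: WriteResult[Doc, CommittableOffset])(
    implicit log: LoggingAdapter): CommittableOffset = {
  if (!result.success) {
    log.error("Failed to index document {}: {}",
      result.message.source, result.errorReason.getOrElse("unknown"))
    throw new RuntimeException("Elasticsearch write failed")
  }
  result.message.passThrough
}
```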
One peculiar thing: when I debugged the lost events, the listener lost all of them (thousands) within a 30-second window, and just before that there was a surge in inbound Kafka events, which kicked off a rebalance of the Kafka clients. I have read other tickets where the Alpakka Kafka connector fetches events ahead of time and loses the already-buffered ones when a rebalance happens, but that doesn't make sense here: since we do not commit an offset unless the write succeeds, the broker would start afresh from the last committed offset after the rebalance.