First of all: I'm new to Akka Streams and Alpakka…
I'm implementing an application which receives sensor measurement data via Bluetooth from a sensor node and then sends this data via Alpakka MQTT Streaming to an MQTT broker.
Between 60 and 80 sensor values are received per second.
Compared to the Paho-based implementation, however, only about half of the messages get delivered to the broker.
The queue seems to fill up almost immediately, and many messages are dropped.
The underlying transport seems to be quite slow, and I don't know how to improve the performance.
I tried increasing the queue size to enormous values and tried out different OverflowStrategies, but the problem remains.
Even a large buffer can be filled up quickly by a tight loop sending messages to the queue.
I would recommend back-pressuring on the sending side. The SourceQueue.offer method returns a Future that is completed when the element has been accepted into the stream.
I am not sure what triggers message sends in your code, or whether there is a clean way to wait until that offer Future completes, but you should propagate the back-pressure signal as close as possible to the source of the messages.
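As a rough sketch of that idea (assuming classic Akka Streams; handleReading is a hypothetical stand-in for whatever your stream does with each element, e.g. publishing it via the MQTT session):

```scala
import akka.actor.ActorSystem
import akka.stream.{OverflowStrategy, QueueOfferResult}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("sensor-bridge")
import system.dispatcher

// Placeholder for the downstream work (assumption, not from the original post).
def handleReading(bytes: Array[Byte]): Unit = ()

// With OverflowStrategy.backpressure, offer() only completes once there is
// room in the buffer, so awaiting it slows the producer down instead of
// dropping messages.
val queue = Source
  .queue[Array[Byte]](bufferSize = 64, OverflowStrategy.backpressure)
  .to(Sink.foreach(handleReading))
  .run()

// Instead of fire-and-forget, wait for each offer to be accepted before
// offering the next element.
def send(bytes: Array[Byte]): Future[Unit] =
  queue.offer(bytes).map {
    case QueueOfferResult.Enqueued => ()
    case other                     => throw new RuntimeException(s"offer failed: $other")
  }
```

The key point is that each listener should chain its next send onto the Future returned by the previous one, rather than calling offer in a tight loop.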
Basically, I have 8 listeners for Bluetooth sensor readings (temperature, pressure, gyroscope, etc.). These react whenever they receive data and do nothing more than interpret the received byte arrays and then publish them via session ! Command(Publish(... .
These 8 listeners each publish with their own topic, so there are 8 topics the app publishes to.
I also noticed that, after a few seconds, messages on 2-3 of the topics are not delivered at all, while the other topics continue to work fine.
We are using this in production and have no issues with message loss. With the Publish you can provide a promise that will be completed once the PubAck is received. You can then back-pressure on that.
Also try a QoS of 0. If that works for you, then the problem is likely the lack of back-pressure on the publish, as described above.
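A sketch of the promise-based approach, assuming an already-connected Alpakka MqttSession in scope named session, and an ExecutionContext (the publishAll helper and reading names are mine, not from the original posts):

```scala
import akka.Done
import akka.stream.alpakka.mqtt.streaming.{Command, ControlPacketFlags, Publish}
import akka.util.ByteString
import scala.concurrent.{ExecutionContext, Future, Promise}

// Publish one reading at QoS 1 and return a Future that completes when
// the broker's PubAck for this message has been received.
def publishAcked(topic: String, payload: ByteString): Future[Done] = {
  val acked = Promise[Done]()
  session ! Command(
    Publish(ControlPacketFlags.QoSAtLeastOnceDelivery, topic, payload),
    Some(acked), // completed by the session on PubAck
    None
  )
  acked.future
}

// Back-pressure: only publish the next reading after the previous PubAck,
// instead of firing all Commands into the session at once.
def publishAll(readings: List[(String, ByteString)])(implicit ec: ExecutionContext): Future[Done] =
  readings.foldLeft(Future.successful(Done: Done)) { case (prev, (topic, payload)) =>
    prev.flatMap(_ => publishAcked(topic, payload))
  }
```

Each of the 8 listeners could keep its own "previous ack" Future and chain onto it, so one slow topic does not block the others.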