Lagom service initialization problem in Docker Swarm

Hi.

I have googled similar errors but didn't find definite answers about what might be wrong with my Lagom application.
The service is deployed into a one-node Docker Swarm where Kafka, Cassandra, ZooKeeper, and the Lagom service run in separate containers. After all services are deployed into the swarm, the service container's log contains the following:

[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [34] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency SERIAL (1 replica were required but only 0 acknowledged the write)
[error] a.c.s.PersistentShardCoordinator - Persistence failure when replaying events for persistenceId [/sharding/kafkaProducer-rawParameterValuesCoordinator]. Last known sequence number [0]
java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency SERIAL (1 replica were required but only 0 acknowledged the write)
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:462)
at akka.persistence.cassandra.package$$anon$1.$anonfun$run$1(package.scala:18)
at scala.util.Try$.apply(Try.scala:209)
at akka.persistence.cassandra.package$$anon$1.run(package.scala:18)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency SERIAL (1 replica were required but only 0 acknowledged the write)
at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:100)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:134)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:507)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1075)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:998)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency SERIAL (1 replica were required but only 0 acknowledged the write)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:60)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:38)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:289)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:269)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [34] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [34] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [34] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[error] a.c.s.PersistentShardCoordinator - Persistence failure when replaying events for persistenceId [/sharding/kafkaProducer-rawParameterValuesCoordinator]. Last known sequence number [0]
akka.pattern.CircuitBreakerOpenException: Circuit Breaker is open; calls are failing fast
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [34] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [34] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [34] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [35] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [35] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
[error] a.c.s.PersistentShardCoordinator - Persistence failure when replaying events for persistenceId [/sharding/kafkaProducer-rawParameterValuesCoordinator]. Last known sequence number [0]
akka.pattern.CircuitBreakerOpenException: Circuit Breaker is open; calls are failing fast
[warn] a.c.s.ShardRegion - Trying to register to coordinator at [ActorSelection[Anchor(akka://application/), Path(/system/sharding/kafkaProducer-rawParameterValuesCoordinator/singleton/coordinator)]], but no acknowledgement. Total [35] buffered messages. [Coordinator [Member(address = akka.tcp://application@10.0.1.24:2552, status = Up)] is reachable.]
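
For context, the swarm stack is wired roughly like this (a minimal sketch, not my exact stack file; the image tags, published ports, and the Lagom service image name are illustrative, while the service names cassandra, kafka and zookeeper are the ones my config refers to):

version: "3"
services:
  zookeeper:
    image: zookeeper:3.4          # illustrative tag; reachable as "zookeeper" on the overlay network
  kafka:
    image: wurstmeister/kafka     # illustrative image; Kafka/ZooKeeper environment settings omitted for brevity
    ports:
      - "9094:9094"               # broker port published to the host
  cassandra:
    image: cassandra:3.11         # illustrative tag; reachable as "cassandra" on the overlay network
  tm-receiver:
    image: tm-receiver            # the Lagom service; image name is illustrative
    ports:
      - "9000:9000"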

Network connectivity from the service container to the Cassandra container seems to be OK, as it is to Kafka and ZooKeeper. I think I made the necessary changes to application.conf:

play.modules.enabled += com.spaceit.mc.receive.impl.TmReceiverModule

cassandra.default {
  contact-points = ["cassandra"]
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}

cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
tm.cassandra.keyspace = tm

cassandra-journal.keyspace = ${tm.cassandra.keyspace}
cassandra-snapshot-store.keyspace = ${tm.cassandra.keyspace}
lagom.persistence.read-side.cassandra.keyspace = ${tm.cassandra.keyspace}

cassandra-query-journal.eventual-consistency-delay = 10s

lagom.broker.kafka.service-name = ""
lagom.broker.kafka.brokers = "localhost:9094"

lagom {
  discovery {
    zookeeper {
      server-hostname = zookeeper  # hostname or IP address of the ZooKeeper server
      server-port = 2181           # port of the ZooKeeper server
      uri-scheme = "http"          # for example: http or https
      routing-policy = "first"     # valid routing policies: first, random, round-robin
    }
  }
}

lagom.cluster.join-self = on
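
For clarity, after HOCON substitution I expect the journal section to resolve to the following (my expectation of the effective values, not verified output; cassandra-snapshot-store and lagom.persistence.read-side.cassandra should resolve analogously):

cassandra-journal {
  contact-points = ["cassandra"]
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
  keyspace = tm
}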

In what direction should I search to find the cause of this problem?

So far it seems that the problem is related neither to Docker Swarm nor to my application: the online-auction-java example application gives the same exceptions. I built one service implementation from that application and started it outside Docker (Kafka and Cassandra remained in Docker), and the same errors appear. Cassandra itself works fine; I checked it with the cqlsh client. I also changed some Cassandra config parameters, without luck.