Lagom errors on startup

I’m trying to evaluate Lagom, but it keeps throwing errors and warnings at me. I’ve been trying to find the cause for several days without making any progress. I hope this forum is the correct place to ask; it’s my last resort…

At first, the application seems to start fine:

sbt:file-heaven> runAll
[info] Updating lagom-internal-meta-project-kafka...
[info] Done updating.
[info] Starting Kafka
[info] Updating lagom-internal-meta-project-cassandra...
[info] Done updating.
[info] Starting Cassandra
....................................................
[info] Cassandra server running at 127.0.0.1:4000
[info] Updating lagom-internal-meta-project-service-locator...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 	* io.netty:netty-handler:4.1.29.Final is selected over 4.1.13.Final
[warn] 	    +- com.typesafe.netty:netty-reactive-streams:2.0.0    (depends on 4.1.13.Final)
[warn] 	    +- com.lightbend.lagom:lagom-client_2.12:1.4.9        (depends on 4.1.13.Final)
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.12:1.1.0 is selected over {1.0.4, 1.0.6, 1.1.1}
[warn] 	    +- com.lightbend.lagom:lagom-logback_2.12:1.4.9       (depends on 1.1.1)
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 1.0.4)
[warn] 	    +- com.typesafe.play:cachecontrol_2.12:1.1.3          (depends on 1.0.4)
[warn] 	    +- com.typesafe:ssl-config-core_2.12:0.3.6            (depends on 1.0.4)
[warn] 	    +- com.typesafe.play:play_2.12:2.6.20 ()              (depends on 1.0.4)
[warn] 	* com.google.guava:guava:22.0 is selected over {16.0, 19.0, 20.0}
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 22.0)
[warn] 	    +- com.typesafe.play:play_2.12:2.6.20 ()              (depends on 22.0)
[warn] 	    +- org.reflections:reflections:0.9.11                 (depends on 20.0)
[warn] 	    +- com.google.inject:guice:4.1.0                      (depends on 19.0)
[warn] 	    +- com.fasterxml.jackson.datatype:jackson-datatype-guava:2.8.11 (depends on 16.0)
[warn] 	* com.typesafe.akka:akka-stream_2.12:2.5.18 is selected over {2.4.20, 2.5.9, 2.5.17}
[warn] 	    +- com.lightbend.lagom:lagom-logback_2.12:1.4.9       (depends on 2.5.18)
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 2.5.18)
[warn] 	    +- com.typesafe.play:play-streams_2.12:2.6.20 ()      (depends on 2.5.17)
[warn] 	    +- com.typesafe.play:play-ws-standalone_2.12:1.1.10   (depends on 2.5.9)
[warn] 	    +- com.typesafe.akka:akka-http-core_2.12:10.0.14 ()   (depends on 2.4.20)
[warn] 	* com.typesafe:ssl-config-core_2.12:0.3.6 is selected over 0.2.2
[warn] 	    +- com.typesafe.akka:akka-stream_2.12:2.5.18 ()       (depends on 0.3.6)
[warn] 	    +- com.typesafe.play:play-ws-standalone_2.12:1.1.10   (depends on 0.2.2)
[warn] 	* com.typesafe.akka:akka-actor_2.12:2.5.18 is selected over {2.4.20, 2.5.17}
[warn] 	    +- com.lightbend.lagom:lagom-logback_2.12:1.4.9       (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-stream_2.12:2.5.18 ()       (depends on 2.5.18)
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-slf4j_2.12:2.5.18 ()        (depends on 2.5.18)
[warn] 	    +- com.typesafe.play:play_2.12:2.6.20 ()              (depends on 2.5.17)
[warn] 	    +- com.typesafe.akka:akka-parsing_2.12:10.0.14 ()     (depends on 2.4.20)
[warn] Run 'evicted' to see detailed eviction warnings
2018-12-04T22:45:48.322Z [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
2018-12-04T22:45:51.315Z [info] com.lightbend.lagom.discovery.ServiceLocatorServer [] - Service locator can be reached at http://localhost:9008
2018-12-04T22:45:51.316Z [info] com.lightbend.lagom.discovery.ServiceLocatorServer [] - Service gateway can be reached at http://localhost:9000
[info] Service locator is running at http://localhost:9008
[info] Service gateway is running at http://localhost:9000
[info] Updating shapeless-json...
[info] Done updating.
[info] Updating auth-api...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.12:1.0.4 is selected over {1.0.6, 1.1.1}
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 1.0.4)
[warn] 	    +- com.typesafe:ssl-config-core_2.12:0.3.6            (depends on 1.1.1)
[warn] 	    +- com.typesafe.play:play_2.12:2.6.20 ()              (depends on 1.0.6)
[warn] Run 'evicted' to see detailed eviction warnings
[info] Updating auth-impl...
[info] Done updating.
[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn] 	* com.github.jnr:jnr-constants:0.9.9 is selected over 0.9.0
[warn] 	    +- org.lmdbjava:lmdbjava:0.6.1                        (depends on 0.9.9)
[warn] 	    +- com.github.jnr:jnr-posix:3.0.27                    (depends on 0.9.0)
[warn] 	* io.netty:netty:3.10.6.Final is selected over 3.10.5.Final
[warn] 	    +- com.typesafe.akka:akka-remote_2.12:2.5.18 ()       (depends on 3.10.6.Final)
[warn] 	    +- org.apache.zookeeper:zookeeper:3.4.10              (depends on 3.10.5.Final)
[warn] 	* io.netty:netty-handler:4.1.29.Final is selected over {4.0.44.Final, 4.1.13.Final}
[warn] 	    +- com.lightbend.lagom:lagom-persistence-cassandra-core_2.12:1.4.9 (depends on 4.1.29.Final)
[warn] 	    +- com.typesafe.netty:netty-reactive-streams:2.0.0    (depends on 4.1.13.Final)
[warn] 	    +- com.lightbend.lagom:lagom-client_2.12:1.4.9        (depends on 4.1.13.Final)
[warn] 	    +- com.datastax.cassandra:cassandra-driver-core:3.2.0 (depends on 4.0.44.Final)
[warn] 	* org.scala-lang.modules:scala-parser-combinators_2.12:1.1.0 is selected over {1.0.4, 1.0.6, 1.1.1}
[warn] 	    +- com.typesafe.play:cachecontrol_2.12:1.1.3          (depends on 1.1.0)
[warn] 	    +- com.lightbend.lagom:lagom-cluster-core_2.12:1.4.9  (depends on 1.1.1)
[warn] 	    +- com.lightbend.lagom:lagom-logback_2.12:1.4.9       (depends on 1.0.4)
[warn] 	    +- org.apache.kafka:kafka_2.12:0.11.0.1               (depends on 1.0.4)
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 1.0.4)
[warn] 	    +- com.typesafe:ssl-config-core_2.12:0.3.6            (depends on 1.0.4)
[warn] 	    +- com.typesafe.play:play_2.12:2.6.20 ()              (depends on 1.0.4)
[warn] 	* com.google.guava:guava:22.0 is selected over {16.0.1, 19.0, 16.0}
[warn] 	    +- com.lightbend.lagom:lagom-scaladsl-cluster_2.12:1.4.9 (depends on 22.0)
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 22.0)
[warn] 	    +- com.lightbend.lagom:lagom-scaladsl-play-json_2.12:1.4.9 (depends on 22.0)
[warn] 	    +- com.typesafe.play:play_2.12:2.6.20 ()              (depends on 22.0)
[warn] 	    +- com.fasterxml.jackson.datatype:jackson-datatype-guava:2.8.11 (depends on 16.0)
[warn] 	    +- com.datastax.cassandra:cassandra-driver-core:3.2.0 (depends on 19.0)
[warn] 	    +- org.apache.curator:curator-test:2.10.0             (depends on 16.0.1)
[warn] 	    +- org.apache.curator:curator-client:2.10.0           (depends on 16.0.1)
[warn] 	* com.typesafe.akka:akka-stream_2.12:2.5.18 is selected over {2.5.7, 2.4.20, 2.5.9, 2.5.17}
[warn] 	    +- com.lightbend.lagom:lagom-logback_2.12:1.4.9       (depends on 2.5.18)
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-remote_2.12:2.5.18 ()       (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-persistence-query_2.12:2.5.18 () (depends on 2.5.18)
[warn] 	    +- com.typesafe.play:play-streams_2.12:2.6.20 ()      (depends on 2.5.17)
[warn] 	    +- com.typesafe.play:play-ws-standalone_2.12:1.1.10   (depends on 2.5.9)
[warn] 	    +- com.typesafe.akka:akka-http-core_2.12:10.0.14 ()   (depends on 2.4.20)
[warn] 	    +- com.typesafe.akka:akka-stream-kafka_2.12:0.18      (depends on 2.5.7)
[warn] 	* com.typesafe:ssl-config-core_2.12:0.3.6 is selected over 0.2.2
[warn] 	    +- com.typesafe.akka:akka-stream_2.12:2.5.18 ()       (depends on 0.3.6)
[warn] 	    +- com.typesafe.play:play-ws-standalone_2.12:1.1.10   (depends on 0.2.2)
[warn] 	* com.typesafe.akka:akka-actor_2.12:2.5.18 is selected over {2.4.20, 2.5.17}
[warn] 	    +- com.lightbend.lagom:lagom-logback_2.12:1.4.9       (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-stream_2.12:2.5.18 ()       (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-persistence_2.12:2.5.18 ()  (depends on 2.5.18)
[warn] 	    +- com.lightbend.lagom:lagom-api_2.12:1.4.9           (depends on 2.5.18)
[warn] 	    +- com.lightbend.lagom:lagom-scaladsl-play-json_2.12:1.4.9 (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-slf4j_2.12:2.5.18 ()        (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-remote_2.12:2.5.18 ()       (depends on 2.5.18)
[warn] 	    +- com.typesafe.akka:akka-testkit_2.12:2.5.18 ()      (depends on 2.5.18)
[warn] 	    +- com.typesafe.play:play_2.12:2.6.20 ()              (depends on 2.5.17)
[warn] 	    +- com.typesafe.akka:akka-parsing_2.12:10.0.14 ()     (depends on 2.4.20)
[warn] Run 'evicted' to see detailed eviction warnings
23:46:05.189 [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0:0:0:0:0:0:0:0:61221
[info] Compiling 3 Scala sources to /home/user/development/scala/lagom/file-heaven2/shapeless-json/target/scala-2.12/classes ...
[info] Done compiling.
[info] Compiling 1 Scala source to /home/user/development/scala/lagom/file-heaven2/auth-api/target/scala-2.12/classes ...
[info] Done compiling.
[info] Compiling 5 Scala sources to /home/user/development/scala/lagom/file-heaven2/auth-impl/target/scala-2.12/classes ...
[info] Done compiling.
23:46:54.344 [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
23:46:54.370 [info] akka.remote.Remoting [sourceThread=ForkJoinPool-2-worker-1, akkaSource=akka.remote.Remoting, sourceActorSystem=auth-impl-application, akkaTimestamp=22:46:54.369UTC] - Starting remoting
23:46:54.650 [info] akka.remote.Remoting [sourceThread=ForkJoinPool-2-worker-1, akkaTimestamp=22:46:54.648UTC, akkaSource=akka.remote.Remoting, sourceActorSystem=auth-impl-application] - Remoting started; listening on addresses :[akka.tcp://auth-impl-application@127.0.0.1:44025]
23:46:54.653 [info] akka.remote.Remoting [sourceThread=ForkJoinPool-2-worker-1, akkaTimestamp=22:46:54.651UTC, akkaSource=akka.remote.Remoting, sourceActorSystem=auth-impl-application] - Remoting now listens on addresses: [akka.tcp://auth-impl-application@127.0.0.1:44025]
23:46:54.715 [info] akka.cluster.Cluster(akka://auth-impl-application) [sourceThread=ForkJoinPool-2-worker-1, akkaTimestamp=22:46:54.715UTC, akkaSource=akka.cluster.Cluster(akka://auth-impl-application), sourceActorSystem=auth-impl-application] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] - Starting up, Akka version [2.5.18] ...
23:46:54.818 [info] akka.cluster.Cluster(akka://auth-impl-application) [sourceThread=ForkJoinPool-2-worker-1, akkaSource=akka.cluster.Cluster(akka://auth-impl-application), sourceActorSystem=auth-impl-application, akkaTimestamp=22:46:54.817UTC] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] - Registered cluster JMX MBean [akka:type=Cluster,port=44025]
23:46:54.818 [info] akka.cluster.Cluster(akka://auth-impl-application) [sourceThread=ForkJoinPool-2-worker-1, akkaTimestamp=22:46:54.817UTC, akkaSource=akka.cluster.Cluster(akka://auth-impl-application), sourceActorSystem=auth-impl-application] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] - Started up successfully
23:46:54.903 [info] akka.cluster.Cluster(akka://auth-impl-application) [sourceThread=auth-impl-application-akka.actor.default-dispatcher-4, akkaTimestamp=22:46:54.891UTC, akkaSource=akka.cluster.Cluster(akka://auth-impl-application), sourceActorSystem=auth-impl-application] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] - No seed-nodes configured, manual cluster join required
23:46:55.152 [info] akka.cluster.Cluster(akka://auth-impl-application) [sourceThread=auth-impl-application-akka.actor.default-dispatcher-4, akkaSource=akka.cluster.Cluster(akka://auth-impl-application), sourceActorSystem=auth-impl-application, akkaTimestamp=22:46:55.151UTC] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] - Node [akka.tcp://auth-impl-application@127.0.0.1:44025] is JOINING itself (with roles [dc-default]) and forming new cluster
23:46:55.155 [info] akka.cluster.Cluster(akka://auth-impl-application) [sourceThread=auth-impl-application-akka.actor.default-dispatcher-4, akkaTimestamp=22:46:55.154UTC, akkaSource=akka.cluster.Cluster(akka://auth-impl-application), sourceActorSystem=auth-impl-application] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] dc [default] is the new leader
23:46:55.167 [info] akka.cluster.Cluster(akka://auth-impl-application) [sourceThread=auth-impl-application-akka.actor.default-dispatcher-4, akkaTimestamp=22:46:55.166UTC, akkaSource=akka.cluster.Cluster(akka://auth-impl-application), sourceActorSystem=auth-impl-application] - Cluster Node [akka.tcp://auth-impl-application@127.0.0.1:44025] - Leader is moving node [akka.tcp://auth-impl-application@127.0.0.1:44025] to [Up]
23:46:55.819 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-17, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/user/cassandraOffsetStorePrepare-singleton, sourceActorSystem=auth-impl-application, akkaTimestamp=22:46:55.818UTC] - Singleton manager starting singleton actor [akka://auth-impl-application/user/cassandraOffsetStorePrepare-singleton/singleton]
23:46:55.843 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-17, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/user/cassandraOffsetStorePrepare-singleton, sourceActorSystem=auth-impl-application, akkaTimestamp=22:46:55.843UTC] - ClusterSingletonManager state change [Start -> Oldest]
23:46:55.844 [info] com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor [sourceThread=auth-impl-application-akka.actor.default-dispatcher-5, akkaTimestamp=22:46:55.843UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/user/cassandraOffsetStorePrepare-singleton/singleton/cassandraOffsetStorePrepare, sourceActorSystem=auth-impl-application] - Executing cluster start task cassandraOffsetStorePrepare.
23:46:55.968 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-7, akkaTimestamp=22:46:55.967UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/AuthEntityCoordinator, sourceActorSystem=auth-impl-application] - Singleton manager starting singleton actor [akka://auth-impl-application/system/sharding/AuthEntityCoordinator/singleton]
23:46:55.969 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-7, akkaTimestamp=22:46:55.967UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/AuthEntityCoordinator, sourceActorSystem=auth-impl-application] - ClusterSingletonManager state change [Start -> Oldest]
23:46:56.064 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-17, akkaTimestamp=22:46:56.063UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/kafkaProducer-token-topicCoordinator, sourceActorSystem=auth-impl-application] - Singleton manager starting singleton actor [akka://auth-impl-application/system/sharding/kafkaProducer-token-topicCoordinator/singleton]
23:46:56.064 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-17, akkaTimestamp=22:46:56.063UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/kafkaProducer-token-topicCoordinator, sourceActorSystem=auth-impl-application] - ClusterSingletonManager state change [Start -> Oldest]
23:46:56.267 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-4, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/user/readSideGlobalPrepare-UserIndexReadSideProcessor-singleton, sourceActorSystem=auth-impl-application, akkaTimestamp=22:46:56.266UTC] - Singleton manager starting singleton actor [akka://auth-impl-application/user/readSideGlobalPrepare-UserIndexReadSideProcessor-singleton/singleton]
23:46:56.267 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-4, akkaTimestamp=22:46:56.266UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/user/readSideGlobalPrepare-UserIndexReadSideProcessor-singleton, sourceActorSystem=auth-impl-application] - ClusterSingletonManager state change [Start -> Oldest]
23:46:56.278 [info] com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor [sourceThread=auth-impl-application-akka.actor.default-dispatcher-6, akkaTimestamp=22:46:56.278UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/user/readSideGlobalPrepare-UserIndexReadSideProcessor-singleton/singleton/readSideGlobalPrepare-UserIndexReadSideProcessor, sourceActorSystem=auth-impl-application] - Executing cluster start task readSideGlobalPrepare-UserIndexReadSideProcessor.
23:46:56.339 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-18, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/UserIndexReadSideProcessorCoordinator, sourceActorSystem=auth-impl-application, akkaTimestamp=22:46:56.338UTC] - Singleton manager starting singleton actor [akka://auth-impl-application/system/sharding/UserIndexReadSideProcessorCoordinator/singleton]
23:46:56.341 [info] akka.cluster.singleton.ClusterSingletonManager [sourceThread=auth-impl-application-akka.actor.default-dispatcher-18, akkaTimestamp=22:46:56.338UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/UserIndexReadSideProcessorCoordinator, sourceActorSystem=auth-impl-application] - ClusterSingletonManager state change [Start -> Oldest]
23:46:56.776 [info] akka.cluster.singleton.ClusterSingletonProxy [sourceThread=auth-impl-application-akka.actor.default-dispatcher-6, akkaTimestamp=22:46:56.775UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/user/cassandraOffsetStorePrepare-singletonProxy, sourceActorSystem=auth-impl-application] - Singleton identified at [akka://auth-impl-application/user/cassandraOffsetStorePrepare-singleton/singleton]
23:46:57.229 [info] play.api.Play [] - Application started (Dev)
[info] Service auth-impl listening for HTTP on 0:0:0:0:0:0:0:0:61221
[info] (Service started, press enter to stop and go back to the console...)

After ~5–10 seconds of silence, lots of messages of this kind start to appear:

23:47:06.360 [warn] akka.cluster.sharding.ShardRegion [sourceThread=auth-impl-application-akka.actor.default-dispatcher-2, akkaTimestamp=22:47:06.360UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/UserIndexReadSideProcessor, sourceActorSystem=auth-impl-application] - Trying to register to coordinator at [ActorSelection[Anchor(akka://auth-impl-application/), Path(/system/sharding/UserIndexReadSideProcessorCoordinator/singleton/coordinator)]], but no acknowledgement. Total [20] buffered messages. [Coordinator [Member(address = akka.tcp://auth-impl-application@127.0.0.1:44025, status = Up)] is reachable.]
23:47:16.451 [error] akka.cluster.sharding.PersistentShardCoordinator [sourceThread=auth-impl-application-akka.actor.default-dispatcher-7, akkaTimestamp=22:47:16.451UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:44025/system/sharding/AuthEntityCoordinator/singleton/coordinator, sourceActorSystem=auth-impl-application] - Persistence failure when replaying events for persistenceId [/sharding/AuthEntityCoordinator]. Last known sequence number [0]

I also get these errors:

12/04 23:39:48 ERROR[Native-Transport-Requests-3] o.a.c.t.m.ErrorMessage - Unexpected exception during request
java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found 74bcebe0-f815-11e8-b6f5-a9db75e01e2a; expected 6d47bde0-f815-11e8-b6f5-a9db75e01e2a)
	at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:385) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:570) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:377) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:362) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:342) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.cql3.statements.CreateTableStatement.announceMigration(CreateTableStatement.java:89) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:123) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:224) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:255) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:240) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:116) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:348) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_192]
	at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [cassandra-bundle.jar:0.59-SNAPSHOT]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_192]
Caused by: java.util.concurrent.ExecutionException: org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found 74bcebe0-f815-11e8-b6f5-a9db75e01e2a; expected 6d47bde0-f815-11e8-b6f5-a9db75e01e2a)
	at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_192]
	at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_192]
	at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:381) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	... 20 common frames omitted
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found 74bcebe0-f815-11e8-b6f5-a9db75e01e2a; expected 6d47bde0-f815-11e8-b6f5-a9db75e01e2a)
	at org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:941) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:895) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.config.Schema.updateTable(Schema.java:687) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1464) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1420) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1389) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1366) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.service.MigrationManager$1.runMayThrow(MigrationManager.java:588) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_192]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_192]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_192]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_192]
	at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) ~[cassandra-bundle.jar:0.59-SNAPSHOT]
	... 1 common frames omitted

Whenever I interact with my service, I get these warnings:

00:07:16.921 [warn] org.apache.kafka.clients.NetworkClient [] - Error while fetching metadata with correlation id 1 : {token-topic=LEADER_NOT_AVAILABLE}
00:07:17.102 [warn] org.apache.kafka.clients.NetworkClient [] - Error while fetching metadata with correlation id 3 : {token-topic=LEADER_NOT_AVAILABLE}
00:07:17.363 [warn] org.apache.kafka.clients.NetworkClient [] - Error while fetching metadata with correlation id 4 : {token-topic=LEADER_NOT_AVAILABLE}
00:07:17.515 [warn] org.apache.kafka.clients.NetworkClient [] - Error while fetching metadata with correlation id 5 : {token-topic=LEADER_NOT_AVAILABLE}
00:07:17.843 [warn] org.apache.kafka.clients.NetworkClient [] - Error while fetching metadata with correlation id 6 : {token-topic=LEADER_NOT_AVAILABLE}

Here is an exemplary startup log:

My code can be found here.

  • I’ve implemented a simple authentication service where users can register and log in. The registered users are persistent entities.
  • A Cassandra read side builds a table in which each row contains the UUID, name, and email of a user. I use it to look up user UUIDs by name or email.
  • Whenever a user logs in, the service responds with a randomly generated token. This token is used to authenticate against any microservice, so it gets published to a message broker topic and other services can build their own store of valid tokens (sketched just below).
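For context, the token publishing follows Lagom’s standard Scala TopicProducer pattern; stripped to the essentials it looks roughly like this (the names AuthEvent, TokenMessage and toTokenMessage are simplified placeholders, the real code is in the repo linked above):

import com.lightbend.lagom.scaladsl.api.broker.Topic
import com.lightbend.lagom.scaladsl.broker.TopicProducer
import com.lightbend.lagom.scaladsl.persistence.PersistentEntityRegistry

// Inside the service implementation; names are placeholders
override def tokenTopic: Topic[TokenMessage] =
  TopicProducer.singleStreamWithOffset { fromOffset =>
    persistentEntityRegistry
      .eventStream(AuthEvent.Tag, fromOffset)             // journal events, resumed from the stored offset
      .map(ev => (toTokenMessage(ev.event), ev.offset))    // one broker message per persisted event
  }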

The “Column family ID mismatch” error usually happens when there are multiple concurrent updates to the same database schema. If you are running multiple services, ensure that they use separate Cassandra keyspaces (see the example config below). If it’s just one service, make sure you’re not executing schema modifications concurrently.

It’s likely that this is the underlying cause for the other errors.
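For reference, keyspaces are set per service in each service’s application.conf; a minimal example looks something like this (the keyspace name is only illustrative; by default Lagom derives one from the service name):

# Give this service its own keyspaces (example name)
cassandra-journal.keyspace = authservice
cassandra-snapshot-store.keyspace = authservice
lagom.persistence.read-side.cassandra.keyspace = authservice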

I’m only writing to Cassandra in the userindex.scala file. I create the table and 3 indices, write users, and fetch users by name or email. That shouldn’t be the issue.

Is it likely that such a schema modification is performed by some call into the Lagom framework?

The error message doesn’t show which table is affected. Is it possible to get more details?

I’m only writing to Cassandra in the userindex.scala file. I create the table and 3 indices, write users, and fetch users by name or email. That shouldn’t be the issue.

I think that is the issue. The createTable method will be invoked multiple times at startup:

Once from the globalPrepare here:

and again when the UserIndex class is created here:
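For reference, the usual pattern in the Lagom Scala API is to register the schema creation only once, via the read-side builder’s globalPrepare callback, roughly like this sketch (UserEvent, UserCreated and the CQL are placeholders, not your actual code):

import akka.Done
import com.datastax.driver.core.BoundStatement
import com.lightbend.lagom.scaladsl.persistence.cassandra.{ CassandraReadSide, CassandraSession }
import com.lightbend.lagom.scaladsl.persistence.{ AggregateEventTag, EventStreamElement, ReadSideProcessor }
import scala.collection.immutable
import scala.concurrent.{ ExecutionContext, Future }

// Sketch only: UserEvent / UserCreated and the table layout are placeholders
class UserIndexReadSideProcessor(readSide: CassandraReadSide, session: CassandraSession)
                                (implicit ec: ExecutionContext) extends ReadSideProcessor[UserEvent] {

  override def buildHandler(): ReadSideProcessor.ReadSideHandler[UserEvent] =
    readSide.builder[UserEvent]("userIndexOffset")
      .setGlobalPrepare(() => createTable())   // runs exactly once across the cluster, before any events
      .setEventHandler[UserCreated](insertUser)
      .build()

  override def aggregateTags: Set[AggregateEventTag[UserEvent]] = UserEvent.Tag.allTags

  // Schema changes live here and nowhere else, so they are never issued concurrently
  private def createTable(): Future[Done] =
    session.executeCreateTable(
      "CREATE TABLE IF NOT EXISTS userindex (id uuid PRIMARY KEY, name text, email text)")

  private def insertUser(e: EventStreamElement[UserCreated]): Future[immutable.Seq[BoundStatement]] = ???
}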

The error message doesn’t show which table is affected. Is it possible to get more details?

Unfortunately, it’s a little bit difficult. You need to query the internal schema data directly for the reported UUID (see Keyspace, table, and column information).
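As a rough sketch, something like this against the CassandraSession (assuming the Cassandra 3.x system_schema tables, which the embedded dev Cassandra uses) would show which keyspace and table one of the reported IDs belongs to:

import java.util.UUID
import com.lightbend.lagom.scaladsl.persistence.cassandra.CassandraSession
import scala.concurrent.{ ExecutionContext, Future }

// Hypothetical debugging helper: find the table that owns a given schema id
def findTableById(session: CassandraSession, id: UUID)
                 (implicit ec: ExecutionContext): Future[Seq[String]] =
  session
    .selectAll("SELECT keyspace_name, table_name, id FROM system_schema.tables")
    .map(_.collect {
      case row if row.getUUID("id") == id =>
        s"${row.getString("keyspace_name")}.${row.getString("table_name")}"
    })

// e.g. findTableById(session, UUID.fromString("74bcebe0-f815-11e8-b6f5-a9db75e01e2a"))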

I’ve dropped the table creation in the UserIndex class (commit).

The “Column family ID mismatch” error disappeared, but I’m still getting the other errors.

12:32:46.752 [error] akka.actor.OneForOneStrategy [sourceThread=auth-impl-application-akka.actor.default-dispatcher-2, akkaTimestamp=11:32:46.737UTC, akkaSource=akka://auth-impl-application/user/readSideGlobalPrepare-UserIndexReadSideProcessor-singleton/singleton/readSideGlobalPrepare-UserIndexReadSideProcessor, sourceActorSystem=auth-impl-application] - Ask timed out on [Actor[akka://auth-impl-application/user/readSideGlobalPrepare-UserIndexReadSideProcessor-singleton/singleton/readSideGlobalPrepare-UserIndexReadSideProcessor#-1146144955]] after [20000 ms]. Message of type [com.lightbend.lagom.internal.persistence.cluster.ClusterStartupTaskActor$Execute$] was sent by [Actor[akka://auth-impl-application/user/readSideGlobalPrepare-UserIndexReadSideProcessor-singleton/singleton/readSideGlobalPrepare-UserIndexReadSideProcessor#-1146144955]]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
12:32:50.924 [error] akka.cluster.sharding.PersistentShardCoordinator [sourceThread=auth-impl-application-akka.actor.default-dispatcher-2, akkaTimestamp=11:32:50.924UTC, akkaSource=akka.tcp://auth-impl-application@127.0.0.1:33321/system/sharding/UserIndexReadSideProcessorCoordinator/singleton/coordinator, sourceActorSystem=auth-impl-application] - Persistence failure when replaying events for persistenceId [/sharding/UserIndexReadSideProcessorCoordinator]. Last known sequence number [0]

Here’s the complete log: https://gist.github.com/aki-ks/2fb474769fa0949db8117b6b96ac349c

If I use my service now, it fails since the table is no longer created.
I still don’t get why that line caused the error. I expected it to work since the queries contain “IF NOT EXISTS” clauses.

java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table userindex
	at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503)
	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:462)
	at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
	at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2.$anonfun$run$2(package.scala:25)
	at scala.util.Try$.apply(Try.scala:209)
	at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2.run(package.scala:25)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table userindex
	at com.datastax.driver.core.Responses$Error.asException(Responses.java:148)
	at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:220)
	at com.datastax.driver.core.SessionManager$4.apply(SessionManager.java:196)
	at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:211)
	at com.google.common.util.concurrent.AbstractTransformFuture$AsyncTransformFuture.doTransform(AbstractTransformFuture.java:200)
	at com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:130)
	at com.google.common.util.concurrent.MoreExecutors$5$1.run(MoreExecutors.java:952)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 common frames omitted

“IF NOT EXISTS” isn’t enough to avoid this problem. Schema updates in Cassandra are not atomic and are not safe to perform concurrently.

Other than that, I can’t quite spot what the problem is now. There still seems to be a problem with the database. I’d recommend adding some logging to your createTable function (or increasing the log level for the Cassandra driver) so that you can see how far it’s getting.
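As a rough sketch, a small timing wrapper inside the read-side processor would show where the time goes (names and CQL below are illustrative; session is the injected CassandraSession):

import akka.Done
import org.slf4j.LoggerFactory
import scala.concurrent.{ ExecutionContext, Future }

private val log = LoggerFactory.getLogger(getClass)

// Hypothetical helper: log how long each schema statement takes and how it completes
private def timed(label: String)(run: => Future[Done])(implicit ec: ExecutionContext): Future[Done] = {
  val start = System.nanoTime()
  run.andThen { case result =>
    log.info(s"$label finished after ${(System.nanoTime() - start) / 1000000} ms: $result")
  }
}

private def createTable()(implicit ec: ExecutionContext): Future[Done] =
  for {
    _    <- timed("create userindex")(session.executeCreateTable(
              "CREATE TABLE IF NOT EXISTS userindex (id uuid PRIMARY KEY, name text, email text)"))
    done <- timed("index on name")(session.executeCreateTable(
              "CREATE INDEX IF NOT EXISTS userindex_name ON userindex (name)"))
  } yield done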

I’d also recommend switching your cluster sharding state store mode to ddata, which will reduce some of the noise in the logs at the very least. You can copy the recommended configuration from the template project to your application.conf:
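The essential line (the template’s application.conf has the fuller recommended block) is something like:

# Use distributed data instead of persistence for the sharding coordinator state
akka.cluster.sharding.state-store-mode = ddata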

I’ve just debugged the createTable method.

Creating the table takes ~10 s and each of the 3 index creations takes ~7 s.
That adds up to roughly 30 seconds, well beyond the 20-second timeout, which causes lots of timeout exceptions.

So is it possible to configure a higher timeout for the ‘global prepare’?

Yes, it’s configurable with the lagom.persistence.read-side.global-prepare-timeout property:
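For example, in your application.conf (the default is 20s, which matches the 20000 ms ask timeouts in your log; 60s below is just an illustrative value):

lagom.persistence.read-side {
  # Allow slow schema creation in globalPrepare to finish
  global-prepare-timeout = 60s
}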

That seems excessively slow, however. It would be worth investigating why it is taking so long.