I have followed the instructions for getting started with Lagom and have run the reference implementation; however, when I go to the URL http://localhost:9000/api/hello/Alice I get an HTTP 500 error. The only thing I can see in the logs is:
2018-03-15T20:12:22.524Z [error] akka.actor.ActorSystemImpl [sourceThread=application-akka.actor.default-dispatcher-7, akkaSource=akka.actor.ActorSystemImpl(application), sourceActorSystem=application, akkaTimestamp=20:12:22.524UTC] - Internal server error, sending 500 response
akka.stream.StreamTcpException: Tcp command [Connect(0.0.0.0:57797,None,List(),Some(10 seconds),true)] failed because of Connect timeout of Some(10 seconds) expired
This is new to me. Can you tell me which getting started guide you followed? Scala or Java, sbt or Maven? That will help us narrow down the possibilities.
Did you also see any other errors during bootstrapping?
Also, the logs are now continually showing this:
08:42:36.149 [warn] org.apache.kafka.clients.NetworkClient [] - Connection to node -1 could not be established. Broker may not be available.
08:42:36.203 [warn] org.apache.kafka.clients.NetworkClient [] - Connection to node -1 could not be established. Broker may not be available.
This sounds like something specific to the setup of the two machines where the error happened. There are several possible context differences to look at to try to isolate the actual cause:
available memory
JVM version (major and minor)
sbt launcher version (I don’t think this is really an issue)
Make sure you clean your target/ folders completely between retries. ZooKeeper is known to sometimes corrupt its storage if it is shut down abruptly.
@Jim is your machine configured to use a proxy or VPN? You might need to configure your system or your build tool to exclude all of your local IP addresses from proxying.
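For example (just a sketch, assuming a Unix shell and that your proxy settings reach the JVM via system properties), you could exclude local addresses from proxying before starting sbt:
export SBT_OPTS="$SBT_OPTS -Dhttp.nonProxyHosts=localhost|127.0.0.1|0.0.0.0"
sbt runAll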
Not a fix, but to get moving again faster I added these tasks to sbt:
val cleanCas = taskKey[Unit]("Delete embedded-cassandra data.")
val fixKafka = taskKey[Unit]("When kafka won't start, run this to delete zookeeper data.")
Then add the tasks to the root project:
lazy val `hello` = (project in file("."))
  .aggregate(
    // your services here
  )
  .settings(
    fixKafka := IO.delete(baseDirectory.value / "target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/target/zookeeper_data"),
    cleanCas := IO.delete(baseDirectory.value / "target/embedded-cassandra/data")
  )
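With those in place (after reloading the build), you can run fixKafka and/or cleanCas from the sbt shell before the next runAll instead of deleting the directories by hand.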
Java 1.8.0_144 (I also have another config with 1.8.0_161; I could try Java 9 if you think it's a good idea)
Lagom 1.4.0 (I can upgrade to 1.4.1 if it helps)
macOS High Sierra 10.13.3
I built my code as a derivation of the online-auction-lagom example.
In my environment, this problem occurs at least once a day (after a crash of components in my code).
This problem already occurred when I was studying the online-auction-lagom example.
Connection to node -1 could not be established. Broker may not be available.
means the embedded Kafka is not running. When using sbt runAll there is a forked JVM running KafkaLauncher. When you see that error, what is actually happening is that the sbt process where your services are running started a KafkaLauncher which crashed mid-start.
When that happens you can use jps (or ps aux) to confirm that the KafkaLauncher process is gone.
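A quick way to check (assuming a JDK is on the PATH, since jps ships with the JDK):
jps -l
While the embedded broker is healthy, one of the listed JVMs is the KafkaLauncher; after the crash described above it no longer shows up.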
I don't know of any way to reproduce the bug and prevent it, so any information you can gather when it happens would be a great help in narrowing down the causes and eventually providing a fix. Feel free to move the discussion to an issue if you prefer.
Cheers,
PS: When you experience the crash you can have a look at the Kafka logs in target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/target/log4j
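If you're not sure of the exact file, something like this from the project root should locate the relevant logs (assuming the default Lagom dev-mode layout):
find target/lagom-dynamic-projects -name "server.log"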
[info] Starting Kafka
[info] Starting Cassandra
..........
[info] Cassandra server running at 127.0.0.1:4000
Kafka Server closed unexpectedly.
If I inspect the logs in the directory you suggested, I can see this in server.log:
[2018-03-26 14:07:07,953] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2018-03-26 14:07:07,957] INFO Got user-level KeeperException when processing sessionid:0x16262346bea0000 type:create cxid:0x24 zxid:0xbe txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-03-26 14:07:07,957] INFO Got user-level KeeperException when processing sessionid:0x16262346bea0000 type:create cxid:0x25 zxid:0xbf txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-03-26 14:07:07,957] INFO Got user-level KeeperException when processing sessionid:0x16262346bea0000 type:create cxid:0x26 zxid:0xc0 txntype:-1 reqpath:n/a Error Path:/brokers/ids/0 Error:KeeperErrorCode = NodeExists for /brokers/ids/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2018-03-26 14:07:07,960] INFO Result of znode creation is: NODEEXISTS (kafka.utils.ZKCheckedEphemeral)
[2018-03-26 14:07:07,961] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/0. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
Can you see something useful here? (I can upload the entire file).
If not, I will create an issue with all the data I have.