Hello,
When I start the shard in a Docker environment, I get the following error when I invoke a persistent actor:
java.lang.IllegalArgumentException: Plugin class name must be defined in config property [cassandra-journal.class]
I do have the following config, which works well in a standalone SBT environment.
persistence {
  journal.plugin = "cassandra-journal"
  snapshot-store.plugin = "cassandra-snapshot-store"
  cassandra {
    journal {
      class = "akka.persistence.cassandra.journal.CassandraJournal"
      keyspace-autocreate = true
      tables-autocreate = true
    }
  }
}
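For reference, the 0.x plugin normally reads its settings from a top-level cassandra-journal block, with the class key coming from the plugin's reference.conf. A rough sketch of that layout, assuming the same settings as above:
akka.persistence {
  journal.plugin = "cassandra-journal"
  snapshot-store.plugin = "cassandra-snapshot-store"
}
cassandra-journal {
  # class is normally supplied by the plugin's reference.conf:
  # class = "akka.persistence.cassandra.journal.CassandraJournal"
  keyspace-autocreate = true
  tables-autocreate = true
}
cassandra-snapshot-store {
  keyspace-autocreate = true
}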
Kind regards
I have managed to get past the error. However, in Docker it is still not able to resolve the contact point; it seems to be using the default value, even though I am providing the internal address in Docker Compose.
akka {
  loglevel = INFO
  remote {
    artery {
      canonical.hostname = ${clustering.ip}
      canonical.port = ${clustering.port}
      enabled = on
      transport = tcp
    }
  }
  actor {
    provider = "cluster"
    serialization-bindings {
      "com.example.CborSerializable" = jackson-json
    }
  }
  cluster {
    roles = ["sharded", "docker"]
    sharding {
      number-of-shards = 30
      passivate-idle-entity-after = 2 minutes
      role = "sharded"
    }
    seed-nodes = [
      "akka://"${clustering.cluster.name}"@"${clustering.seed-ip}":"${clustering.seed-port}
    ]
    shutdown-after-unsuccessful-join-seed-nodes = 40s
  }
  coordinated-shutdown.exit-jvm = on
  persistence {
    journal.plugin = "akka.persistence.cassandra.journal"
    snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    snapshot-store.local.dir = "target/snapshot"
    cassandra {
      journal {
        class = "akka.persistence.cassandra.journal.CassandraJournal"
        keyspace-autocreate = true
        tables-autocreate = true
        contact-points = [${clustering.cassandra.contactpoint1}]
        port = 9092
      }
    }
  }
  datastax-java-driver {
    basic.contact-points = [${clustering.cassandra.contactpoint1}]
  }
}
clustering {
  ip = "127.0.0.1"
  ip = ${?CLUSTER_IP}
  port = 1600
  defaultPort = 0
  seed-ip = "127.0.0.1"
  seed-ip = ${?CLUSTER_IP}
  seed-ip = ${?SEED_PORT_1600_TCP_ADDR}
  seed-port = 1600
  seed-port = ${?SEED_PORT_1600_TCP_PORT}
  cluster.name = ShoppingCart
  cassandra.contactpoint1 = ${?CASSANDRA_CONTACT_POINT1}
}
cassandra-journal {
  contact-points = [${clustering.cassandra.contactpoint1}]
}
cassandra-snapshot-store {
  contact-points = [${clustering.cassandra.contactpoint1}]
}
datastax-java-driver {
  basic.contact-points = [${clustering.cassandra.contactpoint1}]
}
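For the contact point itself, the same default-then-override pattern used above for ip and seed-ip could be applied, so the key always resolves even when the environment variable is missing; with the 4.x driver the value also needs to include the port. A sketch (the fallback value and the Docker service name are assumptions):
clustering {
  # fallback so the key always resolves; in Docker Compose this would be something like "cassandra:9042"
  cassandra.contactpoint1 = "127.0.0.1:9042"
  cassandra.contactpoint1 = ${?CASSANDRA_CONTACT_POINT1}
}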
There seems to be an incompatibility between the Cassandra plugin versions. The configuration below gives the error java.lang.IllegalArgumentException: Plugin class name must be defined in config property [cassandra-journal.class] with 1.0.0-RC2, but works fine with 0.102. Any pointers would be highly appreciated.
Kind regards
include "cluster-application-base.conf"
akka {
remote {
artery {
canonical.hostname = ${clustering.ip}
canonical.port = ${clustering.port}
}
}
cluster {
roles=["sharded", "docker"]
seed-nodes = [
"akka://"${clustering.cluster.name}"@"${clustering.seed-ip}":"${clustering.seed-port}
]
shutdown-after-unsuccessful-join-seed-nodes = 40s
}
coordinated-shutdown.exit-jvm = on
persistence {
journal.plugin = "cassandra-journal"
snapshot-store.plugin = "cassandra-snapshot-store"
}
}
clustering {
ip = "127.0.0.1"
ip = ${?CLUSTER_IP}
port = 1600
defaultPort = 0
seed-ip = "127.0.0.1"
seed-ip = ${?CLUSTER_IP}
seed-ip = ${?SEED_PORT_1600_TCP_ADDR}
seed-port = 1600
seed-port = ${?SEED_PORT_1600_TCP_PORT}
cluster.name = ArtifactStateCluster
cassandra.contactpoint1 = ${?CASSANDRA_CONTACT_POINT1}
}
cassandra-journal {
contact-points = [${clustering.cassandra.contactpoint1}]
}
cassandra-snapshot-store {
contact-points = [${clustering.cassandra.contactpoint1}]
}
This is cluster-application-base.conf:
akka {
  loglevel = INFO
  actor {
    provider = "cluster"
    serialization-bindings {
      "com.example.CborSerializable" = jackson-json
    }
  }
  remote {
    artery {
      enabled = on
      transport = tcp
    }
  }
  cluster {
    roles = ["sharded"]
    sharding {
      number-of-shards = 30
      passivate-idle-entity-after = 2 minutes
      role = "sharded"
    }
  }
}
clustering {
  cluster.name = ShoppingCart
}
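For comparison, the 1.0.x naming I have been trying would be along these lines (plugin ids and keys as documented for 1.0, so treat this as a sketch rather than a verified config):
akka.persistence {
  journal.plugin = "akka.persistence.cassandra.journal"
  snapshot-store.plugin = "akka.persistence.cassandra.snapshot"
  cassandra {
    journal {
      keyspace-autocreate = true
      tables-autocreate = true
    }
    snapshot {
      keyspace-autocreate = true
      tables-autocreate = true
    }
  }
}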
I seem to be stuck with the latest version of the Cassandra plugin. When I add the journal class, it complains about a missing session-provider. Even after adding session-provider, I get the same error. Unfortunately the documentation seems to be a bit inadequate. After struggling with this for a few days, I have gone back to a much earlier version, with no real view on how to upgrade.
persistence {
  journal.plugin = "akka.persistence.cassandra.journal"
  cassandra {
    session-provider = "akka.stream.alpakka.cassandra.DefaultSessionProvider"
    journal {
      class = "akka.persistence.cassandra.journal.CassandraJournal"
    }
  }
}
Caused by: com.typesafe.config.ConfigException$Missing: merge of cluster-application-docker.conf @ jar:file:/opt/docker/lib/com.example.shopping-cart-0.1.0.jar!/cluster-application-docker.conf: 25,reference.conf @ jar:file:/opt/docker/lib/com.typesafe.akka.akka-persistence_2.13-2.6.4.jar!/reference.conf: 96: No configuration setting found for key 'session-provider'
I added the session-provider inside the journal entry as below, and now I get a ClassNotFoundException for the specified class. 1.0.0-RC2 seems to be out of sync with what is documented here:
https://doc.akka.io/docs/akka-persistence-cassandra/current/configuration.html
persistence {
  journal.plugin = "akka.persistence.cassandra.journal"
  cassandra {
    session-provider = "akka.stream.alpakka.cassandra.DefaultSessionProvider"
    journal {
      session-provider = "akka.stream.alpakka.cassandra.DefaultSessionProvider"
      class = "akka.persistence.cassandra.journal.CassandraJournal"
    }
  }
}
It would be very helpful if someone could point me to whatever I am obviously doing wrong.
A bit more progress: it doesn't seem to be picking up the DataStax driver configuration, and instead defaults to localhost.
include "cluster-application-base.conf"
akka {
remote {
artery {
canonical.hostname = ${clustering.ip}
canonical.port = ${clustering.port}
}
}
cluster {
roles=["sharded", "docker"]
seed-nodes = [
"akka://"${clustering.cluster.name}"@"${clustering.seed-ip}":"${clustering.seed-port}
]
shutdown-after-unsuccessful-join-seed-nodes = 40s
}
coordinated-shutdown.exit-jvm = on
persistence {
journal.plugin = "akka.persistence.cassandra.journal"
cassandra {
datastax-java-driver-config = "datastax-java-driver"
journal {
class = "akka.persistence.cassandra.journal.CassandraJournal"
}
}
}
}
clustering {
ip = "127.0.0.1"
ip = ${?CLUSTER_IP}
port = 1600
defaultPort = 0
seed-ip = "127.0.0.1"
seed-ip = ${?CLUSTER_IP}
seed-ip = ${?SEED_PORT_1600_TCP_ADDR}
seed-port = 1600
seed-port = ${?SEED_PORT_1600_TCP_PORT}
cluster.name = ArtifactStateCluster
cassandra.contactpoint1 = ${?CASSANDRA_CONTACT_POINT1}
}
datastax-java-driver {
basic.contact-points = [${clustering.cassandra.contactpoint1}]
}
Frank (Zinner), April 29, 2020, 4:17pm:
Hi guys, not sure if this helps, but I managed to get my data into Cassandra.
I dug into the source code and my dependencies and found the missing classes in the persistence-cassandra-1.0.0-RC2+3xxx package.
My config so far looks like this:
akka.persistence {
  journal.plugin = "akka.persistence.cassandra.journal"
  snapshot-store.plugin = "akka.persistence.cassandra.snapshot"
  cassandra {
    datastax-java-driver-config = "datastax-java-driver"
    journal {
      class = "akka.persistence.cassandra.journal.CassandraJournal"
      keyspace-autocreate = true
      tables-autocreate = true
    }
    snapshot {
      class = "akka.persistence.cassandra.snapshot.CassandraSnapshotStore"
      keyspace-autocreate = true
      tables-autocreate = true
    }
  }
}
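The datastax-java-driver-config entry above points the plugin at a root-level driver section, so there is a matching block alongside it; a minimal companion sketch (host and port are assumptions for a local setup):
datastax-java-driver {
  # adjust host:port for your environment
  basic.contact-points = ["127.0.0.1:9042"]
}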
Frank,
Are you able to get the data into Cassandra running on a contact point other than localhost:9092?
Regards
Frank (Zinner), April 30, 2020, 7:33am:
Alan,
I didn't try that …
From the DataStax reference page, maybe this could be an option to try out:
https://docs.datastax.com/en/developer/java-driver/4.6/manual/core/configuration/#quick-overview
For array options, provide each element separately by appending an index to the path:
-Ddatastax-java-driver.basic.contact-points.0="127.0.0.1:9042"
-Ddatastax-java-driver.basic.contact-points.1="127.0.0.2:9042"
like in your example:
datastax-java-driver {
  basic.contact-points.0 = ${clustering.cassandra.contactpoint1}
  ...
}
Or try out the -D command-line option first to override the config, to see if that works.
Cheers
Thank you, yes I believe I have done the same and I have got it working.
Kind regards
Frank (Zinner), April 30, 2020, 8:21am:
Hi Alan,
maybe I have found a possible configuration to try out:
datastax-java-driver {
  // basic.contact-points.0 = "127.0.0.1:9042"
  basic.contact-points = ["127.0.0.1:9042"]
  basic.load-balancing-policy {
    class = DefaultLoadBalancingPolicy
    local-datacenter = datacenter1
  }
}
Of course you need to adapt this to your needs. I took the config from the DataStax reference page and added the local-datacenter setting because the driver was complaining that it was missing. After I added local-datacenter with the suggested datacenter, it worked.
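Adapted to the environment-variable setup from earlier in the thread, it would look roughly like this (the fallback value and the datacenter name are assumptions and need to match your Cassandra setup):
clustering {
  # fallback; the value must include the port
  cassandra.contactpoint1 = "127.0.0.1:9042"
  cassandra.contactpoint1 = ${?CASSANDRA_CONTACT_POINT1}
}
datastax-java-driver {
  basic.contact-points = [${clustering.cassandra.contactpoint1}]
  basic.load-balancing-policy {
    class = DefaultLoadBalancingPolicy
    # must match your Cassandra cluster's datacenter name
    local-datacenter = datacenter1
  }
}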
Cheers
Frank