Akka Cluster Configuration and Initialization

I want to set up an Akka Cluster using the Spring Framework. I have the configuration below:

akka {
  loglevel = debug
  actor {
    provider = cluster
    serialization-bindings {
      "sample.cluster.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "gardenint01.mahek.com"
      canonical.port = 2551
    }
  }
  cluster {
    seed-nodes = [
      "akka://ClusterSystem@gardenint01.mahek.com:2551",      
      "akka://ClusterSystem@gardenint02.mahek.com:2551"      
      ]
       roles = ["seed","compute"]
       role { seed.min-nr-of-members = 2 }
  }
}

I am using this configuration in the code below to set up the Akka Cluster system, initialized through SpringExtension:

    @Bean(name="ComputeClusterSystem")
    @Scope("prototype")
    public ActorSystem<Void> akkaGridClusterSystem() {
        final Config config = ConfigFactory.parseResources("compute-cluster.conf")
                .withFallback(ConfigFactory.parseString(configString))
                .withFallback(ConfigFactory.systemProperties())
                .withFallback(ConfigFactory.systemEnvironment())
                .resolve();
        ActorSystem<Void> system = ActorSystem.create(RootBehavior.create(), "EngineComputeClusterSystem", config);
        // initialize the application context in the Akka Spring Extension
        SpringExtension.SpringExtProvider.get(system).initialize(applicationContext);
        return system;
    }

In the code below, I am registering the service with the receptionist, creating the worker router, and creating the worker actors.

    private static class RootBehavior {
        static Behavior<Void> create() {
            return Behaviors.setup(context -> {
                Cluster cluster = Cluster.get(context.getSystem());
                boolean preferLocalRoutees = true;
                ActorRef<EngineComputeWorker.RunRequestCommand> workerRouter =
                        context.spawn(Routers.group(RUN_WORKER_KEY).withRoundRobinRouting(preferLocalRoutees), "WorkerRouter");
                ActorRef<ClusterRunService.Command> service = context.spawn(ClusterRunService.create(workerRouter), "RunService");
                context.getSystem().receptionist().tell(Receptionist.register(RUN_SERVICE_KEY, service.narrow()));
                // on every compute node there is one service instance that delegates to N local workers
                final int numberOfWorkers = context.getSystem().settings().config().getInt("run-service.workers-per-node");
                context.getLog().info("Starting {} workers", numberOfWorkers);
                for (int i = 0; i < numberOfWorkers; i++) {
                    context.getLog().info("Starting compute worker {}", "ComputeWorker" + i);
                    ActorRef<EngineComputeWorker.Command> worker = context.spawn(EngineComputeWorker.create(), "ComputeWorker" + i);
                    context.getLog().info("Started compute worker {}", "ComputeWorker" + i);
                    context.getSystem().receptionist().tell(Receptionist.register(RUN_WORKER_KEY, worker.narrow()));
                }
                return Behaviors.empty();
            });
        }
    }

So the question is: do I need to create the worker actors myself, as I did in the code, or does the Akka configuration automatically create child worker actors dynamically? Also, is the above configuration the correct way to initialize the actor system?

I implemented this using the Akka cluster example code, but the example starts the “compute” and “client” actors manually in the main application with an argument. I am looking to implement this in a web application, so at startup time the actor system should be initialized and the service and worker actors should be created. Am I missing anything?

That is right, you have to start the workers and register them with the receptionist.

Config is looking good. Is the Spring scope correct to create only one ActorSystem (per jvm)?
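If the intent is exactly one ActorSystem per JVM, the bean should probably not be prototype-scoped, since every injection point would then get its own system. A rough sketch, assuming the same compute-cluster.conf and SpringExtension wiring as in your code (the configString fallback is left out for brevity):

    // singleton (the Spring default) rather than prototype, so the bean and the
    // ActorSystem it wraps are created exactly once per application context / JVM
    @Bean(name = "ComputeClusterSystem", destroyMethod = "terminate")
    @Scope("singleton")
    public ActorSystem<Void> akkaGridClusterSystem() {
        final Config config = ConfigFactory.parseResources("compute-cluster.conf")
                .withFallback(ConfigFactory.systemProperties())
                .withFallback(ConfigFactory.systemEnvironment())
                .resolve();
        ActorSystem<Void> system =
                ActorSystem.create(RootBehavior.create(), "EngineComputeClusterSystem", config);
        SpringExtension.SpringExtProvider.get(system).initialize(applicationContext);
        return system;
    }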

I’m not sure I get the question.
From the RootBehavior you can start the workerRouter and workers as in your example. You may also choose to conditionally start the workers (or the workerRouter) depending on the Cluster node role.
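For example, a role check inside RootBehavior could look roughly like this; just a sketch, reusing the worker names from your code and the "compute" role from your configuration:

    static Behavior<Void> create() {
        return Behaviors.setup(context -> {
            Cluster cluster = Cluster.get(context.getSystem());

            // only members that carry the "compute" role start local workers;
            // the group router can still reach them cluster-wide via the receptionist
            if (cluster.selfMember().hasRole("compute")) {
                int numberOfWorkers =
                        context.getSystem().settings().config().getInt("run-service.workers-per-node");
                for (int i = 0; i < numberOfWorkers; i++) {
                    ActorRef<EngineComputeWorker.Command> worker =
                            context.spawn(EngineComputeWorker.create(), "ComputeWorker" + i);
                    context.getSystem().receptionist().tell(Receptionist.register(RUN_WORKER_KEY, worker.narrow()));
                }
            }
            return Behaviors.empty();
        });
    }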

Patriknw:
Config is looking good. Is the Spring scope correct to create only one ActorSystem (per jvm)?

There should be only one actor system across all the JVMs. I have 4 JVMs running on 4 different nodes. When a request comes in (from the web application) through F5 (round robin), every node will have a service actor which takes the request and delegates work to an AggregateWorker on the local JVM (the same JVM as the service actor), and the AggregateWorker sends the work to actors across all JVMs.
How do I configure it to have only one actor system across all nodes?

Patriknw: I’m not sure I get the question.
From the RootBehavior you can start the workerRouter and workers as in your example. You may also choose to conditionally start the workers (or the workerRouter) depending on the Cluster node role.

I have a non-actor class (a REST service) where I don’t have a reference to the Behaviors. How can I create Behaviors from a non-actor class?

Dharmesh, with Akka Cluster you don’t really “have one actor system across all the nodes”. Well, you do … in that you’ve got four actor systems, all with the same name, running independently in four JVMs on four nodes. However, each of these cluster members uses the configured discovery mechanism to find and join the cluster.

Normally you’d be doing this because you’re sharding one of the actors in the cluster (typically a persistent actor). So you might have an actor called Order. Each Order has a unique orderId, and in the entire cluster there is only ever one Order actor instance for each orderId. The receptionist and the cluster mechanism abstract away the need for the AggregateWorker to know which node a particular orderId lives on. It just tells the cluster to route a message for a particular orderId, and the cluster does the hard work of locating the node where that orderId is actually running at the moment.
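For reference, that is what Akka Cluster Sharding gives you. Very roughly, and with a hypothetical Order behavior (Order.Command, Order.AddItem and Order.create are made up for this sketch, not from your code):

    import akka.actor.typed.ActorSystem;
    import akka.cluster.sharding.typed.javadsl.ClusterSharding;
    import akka.cluster.sharding.typed.javadsl.Entity;
    import akka.cluster.sharding.typed.javadsl.EntityRef;
    import akka.cluster.sharding.typed.javadsl.EntityTypeKey;

    // "Order" is a hypothetical sharded behavior with its own Command protocol
    public static void initOrderSharding(ActorSystem<?> system) {
        EntityTypeKey<Order.Command> typeKey = EntityTypeKey.create(Order.Command.class, "Order");

        // register the entity type once per node; the sharding infrastructure
        // decides which node each orderId actually lives on
        ClusterSharding sharding = ClusterSharding.get(system);
        sharding.init(Entity.of(typeKey, entityContext -> Order.create(entityContext.getEntityId())));

        // from any node: look up an entity by id and send to it,
        // without knowing where that orderId currently runs
        EntityRef<Order.Command> order = sharding.entityRefFor(typeKey, "order-42");
        order.tell(new Order.AddItem("item-1"));
    }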

But that doesn’t sound like what you’re doing. Are you just load-balancing the workload across multiple instances? In the past when I’ve done that, we just used HTTP load balancing (or JMS queue behaviour) to distribute the workload to some number of nodes, all running the same actor system with the same set of actors in them, but completely independently of each other. Unless you have a situation where you are processing a few large transactions? In that case I would think about implementing cluster roles so that I could split the work up, but I’ve never used them in anger.