There are two reasons why this is not possible.
The first is a more philosophical / principled reason. In CQRS / DDD, an aggregate (an Entity in Lagom) defines a consistency boundary. We can refer to it as a transactional boundary as well. The main reason behind it is that it offers a simplified way of dealing with mutations and transactions. One model, one aggregate, one transaction. That’s basically the aggregate mantra from DDD.
The second is a technical reason and touches one of the design principles in Akka. An actor is location transparent. That means that when you get an ActorRef and send a message to it, you may be sending the message over the network to an actor living on another node. In the specific case of Lagom, a PersistentEntityRef refers to a sharded persistent actor that can live on any node of your cluster. As a consequence, we can’t have a transaction wrapping two calls to two different persistent actors (or Lagom entities), because they may live on two different nodes.
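To make that concrete, here is a minimal sketch using the Lagom Scala API. The ItemEntity, its command and event types, and the shared-key field are hypothetical names I’m using for illustration (not from your code); the point is only that refFor gives you a PersistentEntityRef and that the ask on it may cross the network. The events are tagged here so that a read-side processor (sketched further below) can consume them.

```scala
import akka.Done
import scala.concurrent.Future
import com.lightbend.lagom.scaladsl.persistence.{
  AggregateEvent, AggregateEventTag, PersistentEntity, PersistentEntityRegistry
}
import com.lightbend.lagom.scaladsl.persistence.PersistentEntity.ReplyType

// Hypothetical command for a hypothetical ItemEntity.
sealed trait ItemCommand
final case class CreateItem(sharedKey: String) extends ItemCommand with ReplyType[Done]

// Hypothetical events, tagged so a read-side processor can consume them later.
sealed trait ItemEvent extends AggregateEvent[ItemEvent] {
  override def aggregateTag: AggregateEventTag[ItemEvent] = ItemEvent.Tag
}
object ItemEvent {
  val Tag = AggregateEventTag[ItemEvent]
}
final case class ItemCreated(entityId: String, sharedKey: String) extends ItemEvent

final class ItemEntity extends PersistentEntity {
  override type Command = ItemCommand
  override type Event   = ItemEvent
  override type State   = Option[String] // the shared key, once the entity exists

  override def initialState: Option[String] = None

  override def behavior: Behavior =
    Actions()
      .onCommand[CreateItem, Done] {
        case (CreateItem(sharedKey), ctx, _) =>
          ctx.thenPersist(ItemCreated(entityId, sharedKey))(_ => ctx.reply(Done))
      }
      .onEvent {
        case (ItemCreated(_, sharedKey), _) => Some(sharedKey)
      }
}

class ItemServiceImpl(registry: PersistentEntityRegistry) {
  def createItem(id: String, sharedKey: String): Future[Done] = {
    // The ref is location transparent: the sharded entity actor behind it may
    // live on any node of the cluster, so this ask may go over the network.
    registry.refFor[ItemEntity](id).ask(CreateItem(sharedKey))
  }
}
```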
Possible alternative solution
There is a common technique in the CQRS world that may help you in that context. It consists of keeping a “command-side support table”: a DB table that is updated before and after handling commands, thus on the command side.
In your specific case, you could have one to manage the list of ids. Each entry in that table has a boolean associated with it to confirm that the id is effectively in use. In one transaction, you add the id to the table, but as unconfirmed. Then you pass the shared key to the entity, and when the command completes you confirm that the shared key is also being used by that new entity. You do that by flipping the boolean to true.
So, basically, you have three transactions (sketched in code after the list):
- add the entity id to the list of entities using the shared key, but unconfirmed.
- create the entity (this can be a transaction on a different node).
- confirm that the entity was created and is using the shared key.
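Here is a sketch of that flow, reusing the hypothetical ItemEntity / CreateItem from the earlier sketch and assuming a made-up SharedKeyRepository that wraps the support table (each of its methods runs its own DB transaction):

```scala
import akka.Done
import scala.concurrent.{ExecutionContext, Future}
import com.lightbend.lagom.scaladsl.persistence.PersistentEntityRegistry

// Hypothetical repository around the command-side support table.
// Each method runs its own DB transaction.
trait SharedKeyRepository {
  def addUnconfirmed(sharedKey: String, entityId: String): Future[Done] // transaction #1
  def confirm(sharedKey: String, entityId: String): Future[Done]        // transaction #3
}

class CreateItemFlow(registry: PersistentEntityRegistry, repo: SharedKeyRepository)(
    implicit ec: ExecutionContext) {

  def createItem(entityId: String, sharedKey: String): Future[Done] =
    for {
      // #1: record the id, unconfirmed
      _ <- repo.addUnconfirmed(sharedKey, entityId)
      // #2: create the entity; it may be handled on another node
      _ <- registry.refFor[ItemEntity](entityId).ask(CreateItem(sharedKey))
      // #3: flip the boolean to true
      done <- repo.confirm(sharedKey, entityId)
    } yield done
}
```

Note that only #1 and #3 are local DB transactions; #2 is a command handled by the entity, wherever it lives.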
There are three transactions involved and at least two points of failure that need special attention.
If transaction #2 fails, you have added an id to the list, but the entity was never created. Because you don’t have the confirmation, you don’t return it when searching. You may need to clean it up via a scheduler or just leave it there. You should not clean it up immediately (see below).
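If you do clean it up, a periodic job with a generous grace period is enough. A rough sketch, assuming Akka 2.6’s scheduleWithFixedDelay, plain JDBC, and Postgres-flavored SQL (the table and column names are mine, not an established schema):

```scala
import java.sql.DriverManager
import scala.concurrent.duration._
import akka.actor.ActorSystem

// Hypothetical periodic cleanup of unconfirmed rows in the support table.
// The grace period must be generous: a row may be unconfirmed simply because
// transaction #3 has not happened yet (see the read-side processor below).
class UnconfirmedCleanup(system: ActorSystem, jdbcUrl: String) {
  import system.dispatcher // execution context for the scheduled task

  system.scheduler.scheduleWithFixedDelay(1.hour, 1.hour) { () =>
    val conn = DriverManager.getConnection(jdbcUrl)
    try {
      conn
        .prepareStatement(
          "DELETE FROM shared_key_usage " +
            "WHERE confirmed = false AND created_at < now() - interval '1 day'")
        .executeUpdate()
    } finally conn.close()
  }
}
```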
If transaction #3 fails, you will need a retry mechanism. The easiest way is to have a read-side processor that listens to the created events and confirms them. In that case, you have eventual consistency.
Note that there is also the case where transaction #2 succeeds but you never receive the confirmation, or the Future times out. You may think it failed, but the entity was created. In that case, the read-side processor will ‘see’ the event and confirm it.
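Here is a sketch of such a read-side processor, again based on the hypothetical ItemEvent / ItemCreated types tagged in the first sketch, using Lagom’s JDBC read-side support and the assumed shared_key_usage table:

```scala
import java.sql.Connection
import com.lightbend.lagom.scaladsl.persistence.{AggregateEventTag, EventStreamElement, ReadSideProcessor}
import com.lightbend.lagom.scaladsl.persistence.jdbc.JdbcReadSide

// Hypothetical read-side processor that confirms the shared key as soon as the
// corresponding ItemCreated event is seen, covering the case where transaction
// #3 failed or the original Future timed out.
class ConfirmSharedKeyProcessor(readSide: JdbcReadSide) extends ReadSideProcessor[ItemEvent] {

  override def buildHandler(): ReadSideProcessor.ReadSideHandler[ItemEvent] =
    readSide
      .builder[ItemEvent]("confirm-shared-key-offset")
      .setEventHandler[ItemCreated](confirm)
      .build()

  private def confirm(conn: Connection, elem: EventStreamElement[ItemCreated]): Unit = {
    // Idempotent update: flipping the flag twice is harmless, so replays of the
    // same event do no damage. Table/column names are assumptions.
    val stmt = conn.prepareStatement(
      "UPDATE shared_key_usage SET confirmed = true WHERE entity_id = ?")
    try {
      stmt.setString(1, elem.entityId)
      stmt.executeUpdate()
    } finally stmt.close()
  }

  override def aggregateTags: Set[AggregateEventTag[ItemEvent]] = Set(ItemEvent.Tag)
}
```

It would be registered once in your application wiring via readSide.register. Because the update is idempotent, duplicate deliveries of the event are harmless.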
This technique mitigates the eventual consistency for the sunny-day scenario. As soon as you get the confirmation that the entity was created, you confirm it in the list and it’s immediately retrievable. Only when transaction #3 fails will you be impacted by eventual consistency.