I am looking into the provided CQRS example.
While the code is pretty self-explanatory, I’m still trying to wrap my head around one issue.
When you add an item: val entityRef = sharding.entityRefFor(Expense.EntityKey, data.cartId)
Now, the behaviour I see is that if the entity for that cartId does not exist, it gets created and returned by Cluster Sharding. However, I would like a little more control over how and/or when an entity is created, because if I send a wrong ID, an entity will be created for it nevertheless.
Is my understanding correct? If not, what am I getting wrong? If it is, how do I control entity creation?
The entity is not created when calling entityRefFor. It is created when the message you send via the EntityRef is received by the entity (actor). Nevertheless, the question remains.
The entity will automatically be passivated if it is not used any more, so the cost is low. You could also stop the entity immediately after it has received the “wrong” command, to reduce the time it stays in memory.
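A minimal sketch of that idea with the typed Persistence API, assuming Akka 2.6. The Cart protocol here (Create, AddItem, the reply types) is invented for illustration; the point is the Effect.stop().thenReply(...) guard when a command reaches an entity that was never explicitly created:

```scala
import akka.actor.typed.ActorRef
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object Cart {
  sealed trait Command
  final case class Create(replyTo: ActorRef[Confirmation]) extends Command
  final case class AddItem(item: String, replyTo: ActorRef[Confirmation]) extends Command

  sealed trait Confirmation
  case object Accepted extends Confirmation
  final case class Rejected(reason: String) extends Confirmation

  sealed trait Event
  case object Created extends Event
  final case class ItemAdded(item: String) extends Event

  // None means "this entity was never explicitly created".
  type State = Option[Vector[String]]

  def apply(cartId: String): EventSourcedBehavior[Command, Event, State] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId("Cart", cartId),
      emptyState = None,
      commandHandler = (state, command) =>
        (state, command) match {
          case (None, Create(replyTo)) =>
            Effect.persist(Created).thenReply(replyTo)(_ => Accepted)
          case (None, AddItem(_, replyTo)) =>
            // Sharding spun this entity up for an unknown id: reply with
            // an error and stop right away instead of waiting for
            // passivation to remove it from memory.
            Effect.stop().thenReply(replyTo)(_ => Rejected(s"unknown cart $cartId"))
          case (Some(_), Create(replyTo)) =>
            Effect.reply(replyTo)(Rejected(s"cart $cartId already exists"))
          case (Some(_), AddItem(item, replyTo)) =>
            Effect.persist(ItemAdded(item)).thenReply(replyTo)(_ => Accepted)
        },
      eventHandler = (state, event) =>
        event match {
          case Created         => Some(Vector.empty)
          case ItemAdded(item) => Some(state.getOrElse(Vector.empty) :+ item)
        }
    )
}
```

Since nothing is persisted on the rejected command, the stopped incarnation leaves no trace in the journal either.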
Thank you very much for your response.
I’m trying to wrap my head around how to handle the workflow.
Let’s say a bad user starts sending lots of bad IDs, just for fun. If my understanding of what you said is correct, there is no way to check in advance whether the entity already exists, so I will end up with a lot of ‘polluted’ data in memory (or passivated).
I was planning to try using ClusterShardingStatus and checking in the response whether the entity exists or not. Is that an option?
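For what it’s worth, the typed Cluster Sharding API does expose a query for the entities a shard region currently hosts, but note the caveat: it only reports entities that are running in memory at that moment, so it cannot tell you whether an entity exists in the journal. A rough sketch, assuming Akka 2.6 and the Expense.EntityKey from the example above:

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.AskPattern._
import akka.cluster.sharding.ShardRegion.CurrentShardRegionState
import akka.cluster.sharding.typed.GetShardRegionState
import akka.cluster.sharding.typed.scaladsl.ClusterSharding
import akka.util.Timeout

import scala.concurrent.Future
import scala.concurrent.duration._

// Collects the ids of all Expense entities currently alive in memory.
// This says nothing about entities that exist but are passivated.
def runningEntityIds(system: ActorSystem[_]): Future[Set[String]] = {
  implicit val timeout: Timeout = 3.seconds
  implicit val sys: ActorSystem[_] = system
  ClusterSharding(system).shardState
    .ask[CurrentShardRegionState](replyTo => GetShardRegionState(Expense.EntityKey, replyTo))
    .map(_.shards.flatMap(_.entityIds).toSet)(system.executionContext)
}
```

Because of that in-memory-only limitation, it is not a reliable existence check; the state-machine approach discussed below is the more robust answer.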
Thank you very much!
Hi, Patrick!
Sorry for the late reply, had some health issues to sort out.
Here is what I’m referring to.
Let’s say I implement a persistent entity actor as a state machine, making sure that at creation time the only available operation is the one that sets up its data. For simplicity, let’s assume the ID is an Int.
Now, I can imagine receiving, from a “bad” user, 2 billion messages with various IDs, all translating to the SetName message. Since I have no means of verifying whether an entity exists or not, I would call entityRefFor for each of them, send it the message, and return an error. Which is OK so far. But I would end up with 2 billion actors in a bad state, which I would have to carry forever, unless I use the little utility provided for manual deletion, which is not a good option IMHO.
Is there a way to avoid this kind of behaviour?
Thank you!
OK, I think (hope) I understood what you mean.
In the cycle init -> apply(processCommand, handleEvent), processCommand should look at the current state and, if it is invalid (i.e. the actor hasn’t been initialized properly), it should Effect.stop.thenReply()?
Also, is it OK to implement an FSM within an Akka persistent actor context?
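If it helps, here is one way the FSM idea could be sketched with EventSourcedBehavior, assuming Akka 2.6. SetName is taken from your example; everything else (Account, Init, the reply types) is invented for illustration. The handler dispatches on the current state first, with one function per state, and stops the freshly created actor when a command arrives in the Uninitialized state:

```scala
import akka.actor.typed.ActorRef
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object Account {
  sealed trait Command
  final case class Init(name: String, replyTo: ActorRef[Reply]) extends Command
  final case class SetName(name: String, replyTo: ActorRef[Reply]) extends Command

  sealed trait Reply
  case object Done extends Reply
  final case class Error(msg: String) extends Reply

  sealed trait Event
  final case class Initialized(name: String) extends Event
  final case class NameChanged(name: String) extends Event

  // The FSM states.
  sealed trait State
  case object Uninitialized extends State
  final case class Active(name: String) extends State

  def apply(id: Int): EventSourcedBehavior[Command, Event, State] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId("Account", id.toString),
      emptyState = Uninitialized,
      commandHandler = (state, cmd) =>
        state match {
          case Uninitialized  => uninitialized(cmd)
          case active: Active => activeHandler(active, cmd)
        },
      eventHandler = (state, event) =>
        (state, event) match {
          case (Uninitialized, Initialized(name)) => Active(name)
          case (a: Active, NameChanged(name))     => a.copy(name = name)
          case (s, _)                             => s
        }
    )

  // Only Init is accepted before the entity has been set up; anything
  // else replies with an error and stops the actor so it does not stay
  // in memory (and nothing gets persisted for it).
  private def uninitialized(cmd: Command): Effect[Event, State] =
    cmd match {
      case Init(name, replyTo) =>
        Effect.persist(Initialized(name)).thenReply(replyTo)(_ => Done)
      case SetName(_, replyTo) =>
        Effect.stop().thenReply(replyTo)(_ => Error("entity does not exist"))
    }

  private def activeHandler(state: Active, cmd: Command): Effect[Event, State] =
    cmd match {
      case SetName(name, replyTo) =>
        Effect.persist(NameChanged(name)).thenReply(replyTo)(_ => Done)
      case Init(_, replyTo) =>
        Effect.reply(replyTo)(Error("already initialized"))
    }
}
```

Structuring the command handler as one function per state is an established pattern for event-sourced behaviors, so an FSM inside a persistent actor is perfectly fine.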