Deploying Hello Lagom on Minikube

Hi there, I'm trying to deploy the Hello Lagom sample application on Minikube. So far I've managed to deploy the hello and hello-stream services, and the hello service is able to join its cluster, but the hello-stream service isn't. Any help pointing out what I did wrong would be greatly appreciated!

application.conf for hello service

play.application.loader = org.example.hello.impl.HelloLoader
play.server.pidfile.path=/dev/null
play.http.secret.key=<secret>

hello.cassandra.keyspace = hello

cassandra-journal.keyspace = ${hello.cassandra.keyspace}
cassandra-snapshot-store.keyspace = ${hello.cassandra.keyspace}
lagom.cluster.exit-jvm-when-system-terminated = on
lagom.persistence.read-side.cassandra.keyspace = ${hello.cassandra.keyspace}

akka {
  actor {
    provider = cluster
    serialization-bindings {
      # commands won't use play-json but Akka's jackson support
      "org.example.hello.impl.HelloCommandSerializable" = jackson-json
    }
  }

  cluster {
    shutdown-after-unsuccessful-join-seed-nodes = 40s
  }

  discovery {
    kubernetes-api {
      # in fact, this is already the default:
      pod-label-selector = "app=hello"
    }
  }

  management {
    cluster.bootstrap {
      contact-point-discovery {
        discovery-method = kubernetes-api
        service-name = "hello"
        required-contact-point-nr = 1
      }
    }
  }
}

application.conf for hello-stream service

play.application.loader = org.example.hellostream.impl.HelloStreamLoader
play.server.pidfile.path=/dev/null
play.http.secret.key=<secret>

lagom.cluster.exit-jvm-when-system-terminated = on

akka {
  actor {
    provider = cluster
    serialization-bindings {
      # commands won't use play-json but Akka's jackson support
      "org.example.hellostream.impl.HelloCommandSerializable" = jackson-json
    }
  }

  cluster {
    shutdown-after-unsuccessful-join-seed-nodes = 40s
  }

  discovery {
    kubernetes-api {
      # in fact, this is already the default:
      pod-label-selector = "app=hello-stream"
    }
  }

  management {
    cluster.bootstrap {
      contact-point-discovery {
        discovery-method = kubernetes-api
        service-name = "hello-stream"
        required-contact-point-nr = 1
      }
    }
  }
}

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hello-impl:1.0-SNAPSHOT
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 9000
            - name: akka-remote
              containerPort: 2552
            - name: akka-mgmt-http
              containerPort: 8558
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-stream
  labels:
    app: hello-stream
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-stream
  template:
    metadata:
      labels:
        app: hello-stream
    spec:
      containers:
        - name: hello-stream
          image: hello-stream-impl:1.0-SNAPSHOT
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 9000
            - name: akka-remote
              containerPort: 2552
            - name: akka-mgmt-http
              containerPort: 8558
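A quick sanity check for setups like this (hypothetical commands, assuming the manifests above are applied to the default namespace as-is) is to confirm that the running pods actually carry the labels that the `pod-label-selector` in each application.conf expects:

```shell
# Verify that each service's pods match the selector used for discovery.
# If either command returns no pods, the labels on the pods differ from
# what akka.discovery.kubernetes-api is searching for.
kubectl get pods -l app=hello --show-labels
kubectl get pods -l app=hello-stream --show-labels
```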

service.yml

apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    app: hello
spec:
  selector:
    app: hello
  ports:
    - name: "http"
      port: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-stream
  labels:
    app: hello-stream
spec:
  selector:
    app: hello-stream
  ports:
    - name: "http"
      port: 9000

rbac.yml

---
#
# Create a role, `pod-reader`, that can list pods and
# bind the default service account in the `default` namespace
# to that role.
#

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
subjects:
  # Note the `name` line below. The first default refers to the namespace. The second refers to the service account name.
  # For instance, `name: system:serviceaccount:myns:default` would refer to the default service account in namespace `myns`
  - kind: User
    name: system:serviceaccount:default:default
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Sample logs for the hello service deployed on Minikube, which is able to join the cluster:

2021-06-15T14:55:34.276Z [info] akka.cluster.Cluster [akkaRemoteAddressUid=1484398402379539947, akkaAddress=akka://application@172.17.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-4, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaRemoteAddress=akka://application@172.17.0.5:25520, akkaTimestamp=14:55:34.276UTC, akkaMemberStatus=Joining] - Cluster Node [akka://application@172.17.0.5:25520] - Node [akka://application@172.17.0.5:25520] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
2021-06-15T14:55:34.284Z [info] akka.cluster.Cluster [akkaAddress=akka://application@172.17.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-4, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:34.284UTC] - Cluster Node [akka://application@172.17.0.5:25520] - is the new leader among reachable nodes (more leaders may exist)
2021-06-15T14:55:34.305Z [info] akka.cluster.Cluster [akkaRemoteAddressUid=1484398402379539947, akkaAddress=akka://application@172.17.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-4, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaRemoteAddress=akka://application@172.17.0.5:25520, akkaTimestamp=14:55:34.305UTC, akkaMemberStatus=Up] - Cluster Node [akka://application@172.17.0.5:25520] - Leader is moving node [akka://application@172.17.0.5:25520] to [Up]
2021-06-15T14:55:34.370Z [info] akka.cluster.singleton.ClusterSingletonManager [akkaAddress=akka://application@172.17.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-9, akkaSource=akka://application@172.17.0.5:25520/system/sharding/HelloAggregateCoordinator, sourceActorSystem=application, akkaTimestamp=14:55:34.370UTC] - Singleton manager starting singleton actor [akka://application/system/sharding/HelloAggregateCoordinator/singleton]
2021-06-15T14:55:34.372Z [info] akka.cluster.singleton.ClusterSingletonManager [akkaAddress=akka://application@172.17.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-9, akkaSource=akka://application@172.17.0.5:25520/system/sharding/HelloAggregateCoordinator, sourceActorSystem=application, akkaTimestamp=14:55:34.371UTC] - ClusterSingletonManager state change [Start -> Oldest]
2021-06-15T14:55:34.417Z [info] akka.cluster.sharding.DDataShardCoordinator [akkaAddress=akka://application@172.17.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-26, akkaSource=akka://application@172.17.0.5:25520/system/sharding/HelloAggregateCoordinator/singleton/coordinator, sourceActorSystem=application, akkaTimestamp=14:55:34.417UTC] - HelloAggregate: ShardCoordinator was moved to the active state with [0] shards

Logs for the hello-stream service deployed on Minikube, which is unable to join the cluster:

2021-06-15T14:55:18.110Z [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
2021-06-15T14:55:19.312Z [info] akka.remote.artery.tcp.ArteryTcpTransport [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=ArteryTcpTransport(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:19.308UTC] - Remoting started with transport [Artery tcp]; listening on address [akka://application@172.17.0.6:25520] with UID [-3321251290877745918]
2021-06-15T14:55:19.380Z [info] akka.cluster.Cluster [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:19.379UTC] - Cluster Node [akka://application@172.17.0.6:25520] - Starting up, Akka version [2.6.14] ...
2021-06-15T14:55:20.083Z [info] akka.cluster.Cluster [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:20.082UTC] - Cluster Node [akka://application@172.17.0.6:25520] - Registered cluster JMX MBean [akka:type=Cluster]
2021-06-15T14:55:20.083Z [info] akka.cluster.Cluster [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:20.082UTC] - Cluster Node [akka://application@172.17.0.6:25520] - Started up successfully
2021-06-15T14:55:20.288Z [info] akka.cluster.Cluster [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=application-akka.actor.internal-dispatcher-3, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:20.286UTC] - Cluster Node [akka://application@172.17.0.6:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
2021-06-15T14:55:20.288Z [info] akka.cluster.Cluster [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=application-akka.actor.internal-dispatcher-3, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:20.288UTC] - Cluster Node [akka://application@172.17.0.6:25520] - No seed nodes found in configuration, relying on Cluster Bootstrap for joining
2021-06-15T14:55:23.354Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:23.354UTC] - Loading readiness checks [(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck), (sharding,akka.cluster.sharding.ClusterShardingHealthCheck)]
2021-06-15T14:55:23.356Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:23.355UTC] - Loading liveness checks []
2021-06-15T14:55:23.553Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:23.553UTC] - Binding Akka Management (HTTP) endpoint to: 172.17.0.6:8558
2021-06-15T14:55:23.800Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:23.799UTC] - Including HTTP management routes for ClusterHttpManagementRouteProvider
2021-06-15T14:55:23.965Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:23.965UTC] - Including HTTP management routes for ClusterBootstrap
2021-06-15T14:55:23.974Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:23.974UTC] - Using self contact point address: http://172.17.0.6:8558
2021-06-15T14:55:24.061Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:24.060UTC] - Including HTTP management routes for HealthCheckRoutes
2021-06-15T14:55:26.006Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@172.17.0.6:25520, akkaHttpAddress=172.17.0.6:8558, sourceThread=application-akka.actor.default-dispatcher-10, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:26.001UTC] - Bound Akka Management (HTTP) endpoint to: 172.17.0.6:8558
2021-06-15T14:55:26.783Z [info] play.api.Play [] - Application started (Prod) (no global state)
2021-06-15T14:55:27.004Z [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0.0.0.0:9000
2021-06-15T14:55:29.486Z [info] akka.management.cluster.bootstrap.contactpoint.HttpClusterBootstrapRoutes [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=application-akka.actor.default-dispatcher-11, akkaSource=HttpClusterBootstrapRoutes(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:29.485UTC] - Bootstrap request from 172.17.0.5:57450: Contact Point returning 0 seed-nodes []
2021-06-15T14:55:30.856Z [info] akka.management.cluster.bootstrap.contactpoint.HttpClusterBootstrapRoutes [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=application-akka.actor.default-dispatcher-10, akkaSource=HttpClusterBootstrapRoutes(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:30.856UTC] - Bootstrap request from 172.17.0.5:57450: Contact Point returning 0 seed-nodes []
2021-06-15T14:55:31.986Z [info] akka.management.cluster.bootstrap.contactpoint.HttpClusterBootstrapRoutes [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=application-akka.actor.default-dispatcher-12, akkaSource=HttpClusterBootstrapRoutes(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:31.985UTC] - Bootstrap request from 172.17.0.5:57450: Contact Point returning 0 seed-nodes []
2021-06-15T14:55:33.145Z [info] akka.management.cluster.bootstrap.contactpoint.HttpClusterBootstrapRoutes [akkaAddress=akka://application@172.17.0.6:25520, sourceThread=application-akka.actor.default-dispatcher-12, akkaSource=HttpClusterBootstrapRoutes(akka://application), sourceActorSystem=application, akkaTimestamp=14:55:33.145UTC] - Bootstrap request from 172.17.0.5:57450: Contact Point returning 0 seed-nodes []
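For debugging a node stuck like this, the Akka Management HTTP endpoint bound on port 8558 can be queried directly. A sketch, assuming the default namespace and the resource names above:

```shell
# Forward the management port of a hello-stream pod and inspect what
# cluster bootstrap has discovered so far.
kubectl port-forward deploy/hello-stream 8558:8558 &
curl http://localhost:8558/bootstrap/seed-nodes   # discovered contact points and seed nodes
curl http://localhost:8558/cluster/members        # current cluster membership
```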

This has been resolved. It turns out the issue had nothing to do with Lagom: commonLabels in the kustomization file was overriding the labels on the Deployments, so the pods no longer matched the pod-label-selector that the kubernetes-api discovery was configured with.
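For anyone hitting the same thing: kustomize's commonLabels transformer stamps its labels onto every resource and every selector, overriding existing values for the same key. An illustrative fragment (not the actual file) that would reproduce the problem:

```yaml
# kustomization.yml (illustrative only)
resources:
  - deployment.yml
  - service.yml
  - rbac.yml

# This rewrites app=hello and app=hello-stream to app=hello-lagom on all
# pods and selectors, so the per-service pod-label-selector used by the
# kubernetes-api discovery no longer matches anything:
commonLabels:
  app: hello-lagom
```

Dropping the `app` key from commonLabels (or using a label key that discovery doesn't select on) avoids the clash.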