As per the docs, here’s how I set up my SDK:
```hocon
akka.javasdk {
  agent {
    model-provider = openai
    openai {
      model-name = "gpt-4o-mini"
      api-key = ${?OPENAI_API_KEY}
    }
  }
  telemetry {
    tracing {
      collector-endpoint = "http://localhost:4317"
      collector-endpoint = ${?COLLECTOR_ENDPOINT}
    }
  }
}
```
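For context, the `${?VAR}` form in HOCON is an optional substitution: if the environment variable is unset, that line is simply dropped, so the hardcoded `collector-endpoint` above stays in effect as the default. A minimal shell sketch of the environment this config reads (the values are placeholders, not real credentials):

```shell
# Placeholder only; the real key comes from your OpenAI account.
export OPENAI_API_KEY="sk-placeholder"
# Optional override; if left unset, the config falls back to the
# hardcoded http://localhost:4317 from application.conf.
export COLLECTOR_ENDPOINT="http://localhost:4317"
echo "collector endpoint: ${COLLECTOR_ENDPOINT}"
```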
This seems to work, in that I can see some spans in Jaeger, as the image shows:
But as you can also see, the actual arguments to the call to OpenAI aren’t shown at all in the traces.
Perhaps I’m missing something?
We are actively working on the tracing right now, so it is possible that this improved with the just-released SDK 3.5.1. However, I don’t think we currently include the raw request sent to the model either. I’m not sure the future plan is to include it in the OTel traces in general, but there are a lot of improvements around inspection of requests coming up.
For the record this is what the jaeger trace of a request using the helloworld-agent looks like on my machine (SDK 3.5.1):
Thanks for sharing. I tried upgrading to 3.5.1 but it still doesn’t give me that level of insight - I’m still getting the same traces as before.
Are you able to share how you configured OTEL and Jaeger? Beyond the application.conf I shared above, I also have these environment variables set:
```shell
export JAVA_TOOL_OPTIONS="-javaagent:./opentelemetry-javaagent.jar"
export OTEL_SERVICE_NAME=shallow-research
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_LOGS_EXPORTER=none
export OTEL_EXPORTER_OTLP_ENABLED=true
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
```
Also worth mentioning my application is a multi-agent workflow, in case it matters.
In addition, to rule out other issues, I cloned the helloworld-agent repo and added the OpenTelemetry javaagent to it. The issue is reproducible that way.
So it seems to have to do with the auto-instrumentation provided by the agent. How did you set yours up?
The intent with the tracing support in the Akka SDK is not to use an OTel agent at all; the OTel instrumentation is provided out of the box without the agent. I guess that when running with the agent, it must be overriding the traces that the SDK and runtime are collecting.
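If the agent was attached via `JAVA_TOOL_OPTIONS` as in the earlier post, a quick way to make sure only the SDK’s built-in instrumentation runs is to clear that variable before starting the service. A sketch (the variable name is taken from the exports above):

```shell
# Detach the otel javaagent so only the SDK's built-in
# instrumentation produces spans.
unset JAVA_TOOL_OPTIONS
# Confirm it is gone before starting the service.
echo "JAVA_TOOL_OPTIONS=${JAVA_TOOL_OPTIONS:-<unset>}"
# prints: JAVA_TOOL_OPTIONS=<unset>
```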
I’ve tried not using the agent. In that case, do you simply run it with mvn compile exec:java? No additional configuration needed? When I remove the Java agent, nothing seems to get sent to Jaeger.
Ok, this was user error. My bad!
Turns out I was using the wrong environment variable to enable tracing: TELEMETRY_ENABLED, which is incorrect (I’m not sure where I read that).
If I remove the Java agent and instead use TRACING_ENABLED, I get the results I was expecting.
Thank you, I’m good to go for now!
Sorry, I should have been more clear about that. Just like you figured out, to get local tracing enabled, the service needs to be started as described in the “tracing” sample README, with TRACING_ENABLED set like so:

```shell
TRACING_ENABLED=true mvn compile exec:java
```
Yup, that is working!
Another question - part of why I went through this setup is to send LLM metrics to Langfuse (https://langfuse.com/)
Turns out their OTel endpoint doesn’t support gRPC, so I have configured the protocol as follows:

```shell
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
```
Yet, when I run the application, I get these errors:
```
Aug 26, 2025 5:47:55 PM io.opentelemetry.sdk.internal.ThrottlingLogger doLog
WARNING: Failed to export spans. Server responded with gRPC status code 2. Error message:
```
From some googling, this seems to indicate that OTel is still trying to use gRPC, but I’m unsure. Have you seen this previously?
I haven’t seen anything like that, no, but I’m pretty sure such environment variables are not picked up anywhere; the runtime is more or less hardcoded to use gRPC for the metrics and tracing.
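Given that the runtime only speaks OTLP over gRPC and Langfuse only accepts OTLP over HTTP, one possible workaround is to put an OpenTelemetry Collector in between: it receives gRPC from the runtime and re-exports over HTTP. The sketch below is unverified; in particular, the Langfuse endpoint path and the Basic-auth header format are assumptions you should check against Langfuse’s OpenTelemetry docs:

```yaml
# collector-config.yaml (sketch, not a verified setup)
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # the runtime's gRPC export lands here

processors:
  batch: {}

exporters:
  otlphttp:
    # Assumed Langfuse OTLP/HTTP endpoint; verify against their docs.
    endpoint: https://cloud.langfuse.com/api/public/otel
    headers:
      # Basic auth built from your Langfuse public/secret key pair.
      Authorization: "Basic <base64(public-key:secret-key)>"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

With something like this running (e.g. via `otelcol-contrib --config collector-config.yaml`), the SDK’s `collector-endpoint` would point at the collector’s gRPC port instead of Jaeger.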
Note that we have not really intended this type of configuration for the tracing; it’s optimized for the platform deployment and, eventually, also the local console. I think it will also move further in that direction (less flexible, but providing more things out of the box) with the work we are doing right now.