# OpenTelemetry
Spectra can export AI request data to external observability platforms via the OpenTelemetry Protocol (OTLP), giving you visibility into AI operations alongside your existing monitoring infrastructure. Each tracked AI request becomes a span in your distributed tracing system, complete with provider, model, token usage, cost, latency, and status metadata.
## What is OpenTelemetry?
OpenTelemetry (often abbreviated as OTel) is an open-source, vendor-neutral observability framework maintained by the Cloud Native Computing Foundation (CNCF). It defines a standard format for telemetry data — traces, metrics, and logs — that is supported by virtually every major observability platform.
In practical terms, a trace represents the full journey of a request through your system. Each step in that journey is a span. Spectra creates a span for every tracked AI request, enriched with structured attributes including the provider name, model identifier, token counts, calculated cost, latency, and HTTP status code.
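To make this concrete, here is a sketch of the kind of attribute set such a span might carry. The key names below are illustrative, not Spectra's exact attribute keys; check your backend's trace view for the real names:

```php
// Illustrative span attributes for one tracked AI request.
// Key names are hypothetical; the actual keys Spectra emits
// may differ, but the same information is attached.
$spanAttributes = [
    'provider'      => 'openai',
    'model'         => 'gpt-4o',
    'input_tokens'  => 412,
    'output_tokens' => 128,
    'cost'          => 0.0031, // calculated cost in USD
    'latency_ms'    => 842,
    'http_status'   => 200,
];
```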
The key benefit of OpenTelemetry is portability. You export your traces in OTLP format once, and you can send them to any compatible backend — Jaeger, Grafana Tempo, Datadog, New Relic, Honeycomb, or dozens of others. If you later switch observability vendors, you change the endpoint configuration, not your application code.
## When to Use OpenTelemetry
The OpenTelemetry integration is valuable when you already use an observability platform and want AI request data in the same place as the rest of your telemetry. Specific scenarios include:
- Correlating AI calls with other services in a distributed system to understand end-to-end request flow.
- Setting up alerts on AI request latency, error rates, or cost thresholds through your existing alerting infrastructure.
- Building unified dashboards that combine AI metrics with application performance metrics.
- Meeting enterprise monitoring requirements that mandate centralized observability with retention policies and access controls.
If you only need AI observability and don't have an existing tracing infrastructure, the built-in Spectra dashboard may be sufficient on its own.
## Setup

Enable the integration in `config/spectra.php` and provide the OTLP endpoint for your collector or backend:
```php
'integrations' => [
    'opentelemetry' => [
        'enabled' => env('SPECTRA_OTEL_ENABLED', false),
        'endpoint' => env('SPECTRA_OTEL_ENDPOINT', 'http://localhost:4318/v1/traces'),
        'headers' => [],
        'service_version' => env('SPECTRA_OTEL_SERVICE_VERSION', '1.0.0'),
        'resource_attributes' => [],
        'timeout' => env('SPECTRA_OTEL_TIMEOUT', 10),
    ],
],
```

Then set the environment variables:
```ini
SPECTRA_OTEL_ENABLED=true
SPECTRA_OTEL_ENDPOINT=http://localhost:4318/v1/traces
```

## Configuration Options
| Option | Default | Description |
|---|---|---|
| `enabled` | `false` | Whether to export traces to the configured OTLP endpoint. |
| `endpoint` | `http://localhost:4318/v1/traces` | The OTLP HTTP endpoint for your collector or observability backend. |
| `headers` | `[]` | Custom HTTP headers sent with each export request. Typically used for authentication tokens or API keys. |
| `service_version` | `1.0.0` | A version string included in trace metadata. Useful for identifying which deployment generated a given trace. |
| `resource_attributes` | `[]` | Key-value pairs added to every exported trace. Used for deployment region, Kubernetes namespace, service tier, or other infrastructure context. |
| `timeout` | `10` | HTTP timeout in seconds for OTLP export requests. Increase this if your backend is remote or slow to respond. |
## Compatible Backends
Spectra exports traces in standard OTLP HTTP format, which major observability platforms accept either natively or through the OpenTelemetry Collector:
| Backend | Type | Endpoint Example |
|---|---|---|
| Jaeger | Open source | http://localhost:4318/v1/traces |
| Zipkin | Open source | http://localhost:9411/api/v2/spans |
| Grafana Tempo | Open source | http://tempo:4318/v1/traces |
| Datadog APM | Cloud | https://trace.agent.datadoghq.com/v1/traces |
| New Relic | Cloud | https://otlp.nr-data.net:4318/v1/traces |
| AWS X-Ray | Cloud | Via OpenTelemetry Collector |
| Google Cloud Trace | Cloud | Via OpenTelemetry Collector |
| Honeycomb | Cloud | https://api.honeycomb.io/v1/traces |
| Lightstep | Cloud | https://ingest.lightstep.com:443/traces/otlp/v0.9 |
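Pointing Spectra directly at one of these backends usually means pairing the endpoint with an authentication header (see Authentication below). As a hypothetical example, a Honeycomb setup might combine the endpoint from the table with the team-key header described later on this page:

```php
// Hypothetical Honeycomb configuration: the endpoint comes from
// the table above, the header from the Authentication section.
'opentelemetry' => [
    'endpoint' => 'https://api.honeycomb.io/v1/traces',
    'headers' => [
        'x-honeycomb-team' => env('HONEYCOMB_API_KEY'),
    ],
],
```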
**TIP**

The easiest way to test the OpenTelemetry integration locally is with Jaeger. Start a Jaeger instance with Docker:

```bash
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

Set `SPECTRA_OTEL_ENDPOINT=http://localhost:4318/v1/traces` and open http://localhost:16686 to view traces in the Jaeger UI.
## Authentication

Most cloud backends require authentication headers. Add them to the `headers` array in the configuration. The exact header depends on your provider:
```php
'opentelemetry' => [
    'headers' => [
        // Bearer token (New Relic, Honeycomb, etc.)
        'Authorization' => 'Bearer ' . env('OTEL_AUTH_TOKEN'),

        // API key (Datadog, etc.)
        'x-api-key' => env('OTEL_API_KEY'),

        // Honeycomb-specific team key
        'x-honeycomb-team' => env('HONEYCOMB_API_KEY'),
    ],
],
```

## Resource Attributes
Resource attributes are key-value pairs added to every exported trace. They describe the environment in which the trace was generated — deployment region, Kubernetes namespace, service tier, and similar infrastructure metadata. Use them to filter and group traces in your observability backend:
```php
'opentelemetry' => [
    'resource_attributes' => [
        'deployment.environment' => 'production',
        'deployment.region' => 'us-east-1',
        'k8s.namespace' => 'ai-services',
        'service.team' => 'platform',
    ],
],
```

## Export Timing
OpenTelemetry export follows the same persistence mode as request storage, controlled by the queue configuration:
| Queue Config | Export Behavior |
|---|---|
| `queue.enabled: true` | Dispatched as an `ExportOtelTraceJob` on your configured queue |
| `queue.after_response: true` | Exported after the HTTP response is sent to the client (no added latency for the user) |
| Both `false` (default) | Exported synchronously after the AI response completes |
**NOTE**

In non-HTTP contexts such as console commands and queue workers, `after_response` has no effect. Traces are always exported synchronously in those contexts unless queue mode is enabled.
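The `queue` options referenced in the table live alongside the rest of Spectra's settings, presumably in the same `config/spectra.php` file. A minimal sketch of that block; the keys match the table above, while the env variable names are illustrative assumptions:

```php
// Sketch of the queue settings that control export timing.
// Keys mirror the table above; the env variable names here
// are assumptions, not confirmed Spectra defaults.
'queue' => [
    'enabled' => env('SPECTRA_QUEUE_ENABLED', false),
    'after_response' => env('SPECTRA_QUEUE_AFTER_RESPONSE', false),
],
```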
## Trace Correlation

Spectra assigns a `trace_id` to each tracked request. This identifier appears in both the Spectra dashboard and the exported OpenTelemetry spans, allowing you to follow a single user action across multiple AI calls and external services.
### Automatic Trace IDs

Every tracked request receives a UUID trace ID automatically. Requests made within the same `Spectra::track()` callback or the same HTTP request share the same trace ID by default, making it easy to correlate related operations.
### Custom Trace IDs
You can provide your own trace ID to integrate with an existing tracing system. This is useful when you want AI request spans to appear under the same trace as your application's HTTP request or background job:
```php
$result = Spectra::track('openai', 'gpt-4o', function ($ctx) use ($messages) {
    return OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => $messages,
    ]);
}, ['trace_id' => $myTraceId]);
```

### Where Trace IDs Appear
| System | Location |
|---|---|
| Spectra Dashboard | Filterable in the request explorer's "Trace ID" field |
| OpenTelemetry | The span's `trace_id` attribute, visible in your backend's trace view |
This enables end-to-end tracing: a user clicking "Generate Summary" triggers an API route, which calls an AI model, which writes to the database, which dispatches a queue job — all linked under a single trace ID visible across both Spectra and your observability platform.
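As a sketch of that pattern, the snippet below pins two related AI calls to one trace ID so they show up as a single trace in both Spectra and your OTLP backend. The `$article` variable and the second model choice are purely illustrative:

```php
use Illuminate\Support\Str;

// One trace ID for the whole "Generate Summary" user action.
// Str::uuid() is just one convenient way to mint an ID; any
// identifier from your existing tracing system works too.
$traceId = (string) Str::uuid();

// Both calls carry the same trace_id, so they are grouped
// together in the Spectra dashboard and in your backend.
$summary = Spectra::track('openai', 'gpt-4o', function ($ctx) use ($article) {
    return OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => [['role' => 'user', 'content' => 'Summarize: ' . $article]],
    ]);
}, ['trace_id' => $traceId]);

$tags = Spectra::track('openai', 'gpt-4o-mini', function ($ctx) use ($article) {
    return OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => [['role' => 'user', 'content' => 'Suggest tags for: ' . $article]],
    ]);
}, ['trace_id' => $traceId]);
```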