
OpenTelemetry

Spectra can export AI request data to external observability platforms via the OpenTelemetry Protocol (OTLP), giving you visibility into AI operations alongside your existing monitoring infrastructure. Each tracked AI request becomes a span in your distributed tracing system, complete with provider, model, token usage, cost, latency, and status metadata.

What is OpenTelemetry?

OpenTelemetry (often abbreviated as OTel) is an open-source, vendor-neutral observability framework maintained by the Cloud Native Computing Foundation (CNCF). It defines a standard format for telemetry data — traces, metrics, and logs — that is supported by virtually every major observability platform.

In practical terms, a trace represents the full journey of a request through your system. Each step in that journey is a span. Spectra creates a span for every tracked AI request, enriched with structured attributes including the provider name, model identifier, token counts, calculated cost, latency, and HTTP status code.
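
As a rough illustration, the attribute set on such a span might look like the following sketch. The gen_ai.* names come from OpenTelemetry's GenAI semantic conventions and are shown for orientation only; they are not necessarily the exact keys Spectra emits, and the cost and latency keys below are hypothetical.

php
// Illustrative attribute set for one AI request span.
// The gen_ai.* names follow the OTel GenAI semantic conventions;
// the spectra.* keys are hypothetical placeholders.
$spanAttributes = [
    'gen_ai.system'              => 'openai',  // provider
    'gen_ai.request.model'       => 'gpt-4o',  // model identifier
    'gen_ai.usage.input_tokens'  => 1200,      // prompt tokens
    'gen_ai.usage.output_tokens' => 350,       // completion tokens
    'spectra.cost_usd'           => 0.0185,    // calculated cost (hypothetical key)
    'spectra.latency_ms'         => 830,       // request latency (hypothetical key)
    'http.response.status_code'  => 200,       // status of the provider call
];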

The key benefit of OpenTelemetry is portability. You export your traces in OTLP format once, and you can send them to any compatible backend — Jaeger, Grafana Tempo, Datadog, New Relic, Honeycomb, or dozens of others. If you later switch observability vendors, you change the endpoint configuration, not your application code.

When to Use OpenTelemetry

The OpenTelemetry integration is valuable when you already use an observability platform and want AI request data in the same place as the rest of your telemetry. Specific scenarios include:

  • Correlating AI calls with other services in a distributed system to understand end-to-end request flow.
  • Setting up alerts on AI request latency, error rates, or cost thresholds through your existing alerting infrastructure.
  • Building unified dashboards that combine AI metrics with application performance metrics.
  • Meeting enterprise monitoring requirements that mandate centralized observability with retention policies and access controls.

If you only need AI observability and don't have an existing tracing infrastructure, the built-in Spectra dashboard may be sufficient on its own.

Setup

Enable the integration in config/spectra.php and provide the OTLP endpoint for your collector or backend:

php
'integrations' => [
    'opentelemetry' => [
        'enabled' => env('SPECTRA_OTEL_ENABLED', false),
        'endpoint' => env('SPECTRA_OTEL_ENDPOINT', 'http://localhost:4318/v1/traces'),
        'headers' => [],
        'service_version' => env('SPECTRA_OTEL_SERVICE_VERSION', '1.0.0'),
        'resource_attributes' => [],
        'timeout' => env('SPECTRA_OTEL_TIMEOUT', 10),
    ],
],

Then set the environment variables:

bash
SPECTRA_OTEL_ENABLED=true
SPECTRA_OTEL_ENDPOINT=http://localhost:4318/v1/traces

Configuration Options

| Option | Default | Description |
| --- | --- | --- |
| enabled | false | Whether to export traces to the configured OTLP endpoint. |
| endpoint | http://localhost:4318/v1/traces | The OTLP HTTP endpoint for your collector or observability backend. |
| headers | [] | Custom HTTP headers sent with each export request. Typically used for authentication tokens or API keys. |
| service_version | 1.0.0 | A version string included in trace metadata. Useful for identifying which deployment generated a given trace. |
| resource_attributes | [] | Key-value pairs added to every exported trace. Used for deployment region, Kubernetes namespace, service tier, or other infrastructure context. |
| timeout | 10 | HTTP timeout in seconds for OTLP export requests. Increase this if your backend is remote or slow to respond. |

Compatible Backends

Spectra exports traces in standard OTLP HTTP format, which is supported by all major observability platforms:

| Backend | Type | Endpoint Example |
| --- | --- | --- |
| Jaeger | Open source | http://localhost:4318/v1/traces |
| Zipkin | Open source | http://localhost:9411/api/v2/spans |
| Grafana Tempo | Open source | http://tempo:4318/v1/traces |
| Datadog APM | Cloud | https://trace.agent.datadoghq.com/v1/traces |
| New Relic | Cloud | https://otlp.nr-data.net:4318/v1/traces |
| AWS X-Ray | Cloud | Via OpenTelemetry Collector |
| Google Cloud Trace | Cloud | Via OpenTelemetry Collector |
| Honeycomb | Cloud | https://api.honeycomb.io/v1/traces |
| Lightstep | Cloud | https://ingest.lightstep.com:443/traces/otlp/v0.9 |

TIP

The easiest way to test the OpenTelemetry integration locally is with Jaeger. Start a Jaeger instance with Docker:

shell
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest

Set SPECTRA_OTEL_ENDPOINT=http://localhost:4318/v1/traces and open http://localhost:16686 to view traces in the Jaeger UI.

Authentication

Most cloud backends require authentication headers. Add them to the headers array in the configuration. The exact header depends on your provider:

php
'opentelemetry' => [
    'headers' => [
        // Bearer token (New Relic, Honeycomb, etc.)
        'Authorization' => 'Bearer ' . env('OTEL_AUTH_TOKEN'),

        // API key (Datadog, etc.)
        'x-api-key' => env('OTEL_API_KEY'),

        // Honeycomb-specific team key
        'x-honeycomb-team' => env('HONEYCOMB_API_KEY'),
    ],
],

Resource Attributes

Resource attributes are key-value pairs added to every exported trace. They describe the environment in which the trace was generated — deployment region, Kubernetes namespace, service tier, and similar infrastructure metadata. Use them to filter and group traces in your observability backend:

php
'opentelemetry' => [
    'resource_attributes' => [
        'deployment.environment' => 'production',
        'deployment.region' => 'us-east-1',
        'k8s.namespace' => 'ai-services',
        'service.team' => 'platform',
    ],
],

Export Timing

OpenTelemetry export follows the same persistence mode as request storage, controlled by the queue configuration:

| Queue Config | Export Behavior |
| --- | --- |
| queue.enabled: true | Dispatched as an ExportOtelTraceJob on your configured queue |
| queue.after_response: true | Exported after the HTTP response is sent to the client (no added latency for the user) |
| Both false (default) | Exported synchronously after the AI response completes |
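
For reference, here is a minimal sketch of the corresponding queue block in config/spectra.php. The key names follow the table above, while the env variable names are assumptions; check your published config file for the exact ones.

php
// Sketch of the queue settings that control when traces are exported.
// Key names match the table above; the env variable names are assumptions.
'queue' => [
    'enabled'        => env('SPECTRA_QUEUE_ENABLED', false),        // dispatch ExportOtelTraceJob asynchronously
    'after_response' => env('SPECTRA_QUEUE_AFTER_RESPONSE', false), // export after the response is sent
],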

NOTE

In non-HTTP contexts such as console commands and queue workers, after_response has no effect. Traces are always exported synchronously in those contexts unless queue mode is enabled.

Trace Correlation

Spectra assigns a trace_id to each tracked request. This identifier appears in both the Spectra dashboard and the exported OpenTelemetry spans, allowing you to follow a single user action across multiple AI calls and external services.

Automatic Trace IDs

Every tracked request receives a UUID trace ID automatically. Requests made within the same Spectra::track() callback or the same HTTP request share the same trace ID by default, making it easy to correlate related operations.
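
For example, two tracked calls made while handling the same HTTP request are correlated without any extra configuration. A minimal sketch, assuming the openai-php/laravel client used elsewhere on this page:

php
// Both calls run inside the same HTTP request, so Spectra assigns them the
// same trace ID automatically; no trace_id option is needed.
$draft = Spectra::track('openai', 'gpt-4o', function ($ctx) use ($messages) {
    return OpenAI::chat()->create(['model' => 'gpt-4o', 'messages' => $messages]);
});

$review = Spectra::track('openai', 'gpt-4o-mini', function ($ctx) use ($messages) {
    return OpenAI::chat()->create(['model' => 'gpt-4o-mini', 'messages' => $messages]);
});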

Custom Trace IDs

You can provide your own trace ID to integrate with an existing tracing system. This is useful when you want AI request spans to appear under the same trace as your application's HTTP request or background job:

php
$result = Spectra::track('openai', 'gpt-4o', function ($ctx) use ($messages) {
    return OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => $messages,
    ]);
}, ['trace_id' => $myTraceId]);
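
If your application already runs the official OpenTelemetry PHP SDK, one option is to reuse its active trace ID so the Spectra span joins the same trace. A minimal sketch, assuming the open-telemetry/api package is installed:

php
use OpenTelemetry\API\Trace\Span;

// Read the trace ID of the currently active OpenTelemetry span so the
// Spectra span for this AI call lands under the same trace.
$myTraceId = Span::getCurrent()->getContext()->getTraceId();

If no span is currently recording, the returned context is invalid (an all-zero trace ID), so consider checking $context->isValid() first and omitting the trace_id option in that case.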

Where Trace IDs Appear

| System | Location |
| --- | --- |
| Spectra Dashboard | Filterable in the request explorer's "Trace ID" field |
| OpenTelemetry | The span's trace_id attribute, visible in your backend's trace view |

This enables end-to-end tracing: a user clicking "Generate Summary" triggers an API route, which calls an AI model, which writes to the database, which dispatches a queue job — all linked under a single trace ID visible across both Spectra and your observability platform.
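
A sketch of the queue-job leg of that chain, using a hypothetical SummarizeJob that carries the controller's trace ID forward:

php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

// Hypothetical job: the controller passes its trace ID into the constructor so
// the AI call made on the queue worker is linked to the original request's trace.
class SummarizeJob implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(
        public string $traceId,
        public array $messages,
    ) {}

    public function handle(): void
    {
        Spectra::track('openai', 'gpt-4o-mini', function ($ctx) {
            return OpenAI::chat()->create([
                'model'    => 'gpt-4o-mini',
                'messages' => $this->messages,
            ]);
        }, ['trace_id' => $this->traceId]);
    }
}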
