# Usage

## Automatic Tracking
Spectra ships with watchers that intercept outgoing HTTP requests automatically. If you use the OpenAI PHP SDK, Laravel AI, Prism PHP, Guzzle, or Laravel's HTTP client to call any supported AI provider, Spectra detects and tracks the request with zero code changes. There is nothing to configure, wrap, or annotate — tracking starts working the moment you install the package.
The HttpWatcher listens to Laravel's HTTP client events. Any request sent via the Http facade to a recognized AI provider host — such as api.openai.com, api.anthropic.com, api.groq.com, or any other registered provider — is intercepted and tracked automatically. The watcher identifies the provider from the request hostname, validates that the endpoint is trackable (for example, /v1/chat/completions or /v1/embeddings), and then processes the response to extract usage metrics.
```php
use Illuminate\Support\Facades\Http;

$response = Http::withToken(config('services.openai.api_key'))
    ->post('https://api.openai.com/v1/chat/completions', [
        'model' => 'gpt-4o',
        'messages' => [
            ['role' => 'user', 'content' => 'Summarize this release note.'],
        ],
    ]);

// Spectra automatically captures: provider, model, tokens, cost, latency, status
```

The OpenAiWatcher intercepts calls made through the openai-php/laravel package. This covers all SDK methods — chat completions, embeddings, image generation, audio transcription, and more. If you use the OpenAI PHP SDK, every call is tracked automatically:
```php
use OpenAI\Laravel\Facades\OpenAI;

$response = OpenAI::chat()->create([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'Summarize this release note.'],
    ],
]);

echo $response->choices[0]->message->content;
```

If your project uses Prism PHP, requests are also tracked automatically, since Prism uses Laravel's HTTP client under the hood:
```php
use EchoLabs\Prism\Prism;

$response = Prism::text()
    ->using('openai', 'gpt-4o')
    ->withPrompt('Summarize this release note.')
    ->generate();

echo $response->text;
```

> **TIP**
> Automatic tracking applies whenever spectra.watcher.enabled is true (the default). It works for requests made via Laravel's Http facade, the OpenAI PHP SDK, and any library that uses either of these under the hood. Direct curl calls or other HTTP libraries are not intercepted — use the Guzzle middleware or manual tracking for those.
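To switch automatic tracking off entirely, set that flag to false in your configuration. A minimal sketch of what the relevant entry in config/spectra.php might look like (the surrounding file layout and the SPECTRA_WATCHER_ENABLED variable name are assumptions; check the published config for the exact shape):

```php
// config/spectra.php (sketch; confirm against the package's published config)
return [
    'watcher' => [
        // Automatic tracking is on by default; set to false to disable all watchers
        'enabled' => env('SPECTRA_WATCHER_ENABLED', true),
    ],
];
```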
## Provider Macros
Provider macros are a convenient way to interact with AI providers via Laravel's HTTP client. Each macro configures the base URL, authentication headers, JSON content type, and Spectra tracking metadata in a single call. You only need to provide the endpoint path and request body.
The macros automatically resolve API keys from your existing configuration. If you already have keys set up for the OpenAI PHP SDK, Prism, or Laravel's services config, the macros discover them without any extra setup.
### OpenAI
```php
// Base URL: https://api.openai.com/v1
// Auth: Bearer token (auto-discovered)
$response = Http::openai('gpt-4o')
    ->post('/chat/completions', [
        'model' => 'gpt-4o',
        'messages' => [
            ['role' => 'user', 'content' => 'Write a haiku about Laravel.'],
        ],
    ]);
```

The OpenAI macro also supports pricing tiers. If you use OpenAI's Batch API or Flex processing, pass the tier as a second argument to ensure accurate cost tracking:
```php
$response = Http::openai('gpt-4o', pricingTier: 'batch')
    ->post('/chat/completions', [...]);
```

### Anthropic
```php
// Base URL: https://api.anthropic.com/v1
// Auth: x-api-key header, anthropic-version: 2023-06-01
$response = Http::anthropic('claude-sonnet-4-20250514')
    ->post('/messages', [
        'model' => 'claude-sonnet-4-20250514',
        'max_tokens' => 1024,
        'messages' => [
            ['role' => 'user', 'content' => 'Explain the difference between REST and GraphQL.'],
        ],
    ]);
```

### Groq
```php
// Base URL: https://api.groq.com/openai/v1
// Auth: Bearer token
$response = Http::groq('llama-3.3-70b-versatile')
    ->post('/chat/completions', [
        'model' => 'llama-3.3-70b-versatile',
        'messages' => [
            ['role' => 'user', 'content' => 'What are the SOLID principles?'],
        ],
    ]);
```

### xAI (Grok)
```php
// Base URL: https://api.x.ai/v1
// Auth: Bearer token
$response = Http::xai('grok-3')
    ->post('/chat/completions', [
        'model' => 'grok-3',
        'messages' => [
            ['role' => 'user', 'content' => 'Explain machine learning to a five-year-old.'],
        ],
    ]);
```

### Google AI (Gemini)
```php
// Base URL: https://generativelanguage.googleapis.com/v1beta
// Auth: API key as query parameter
$response = Http::google('gemini-2.5-flash')
    ->post('/models/gemini-2.5-flash:generateContent', [
        'contents' => [
            ['parts' => [['text' => 'Summarize the theory of relativity.']]],
        ],
    ]);
```

### OpenRouter
```php
// Base URL: https://openrouter.ai/api/v1
// Auth: Bearer token, plus HTTP-Referer and X-Title headers
$response = Http::openrouter('anthropic/claude-3.5-sonnet')
    ->post('/chat/completions', [
        'model' => 'anthropic/claude-3.5-sonnet',
        'messages' => [
            ['role' => 'user', 'content' => 'What is OpenRouter?'],
        ],
    ]);
```

### Ollama
```php
// Base URL: http://localhost:11434 (configurable via services.ollama.url)
// Auth: None (local server)
$response = Http::ollama('llama3.2')
    ->post('/api/chat', [
        'model' => 'llama3.2',
        'messages' => [
            ['role' => 'user', 'content' => 'Hello from local inference!'],
        ],
    ]);
```
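If your Ollama server runs somewhere other than localhost, point services.ollama.url at it. A sketch of the corresponding config/services.php entry (the OLLAMA_URL variable name is an assumption):

```php
// config/services.php (the env variable name here is illustrative)
'ollama' => [
    'url' => env('OLLAMA_URL', 'http://localhost:11434'),
],
```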
### Replicate

```php
// Base URL: https://api.replicate.com/v1
// Auth: Bearer token
$response = Http::replicate('stability-ai/sdxl')
    ->post('/predictions', [
        'version' => 'abc123...',
        'input' => [
            'prompt' => 'A photo of an astronaut riding a horse on Mars',
        ],
    ]);
```

### API Key Auto-Discovery
All macros use Spectra's built-in ApiKeyResolver to locate API keys automatically. The resolver checks several configuration paths in priority order and uses the first non-empty value it finds.
| Provider | Resolution Chain (first match wins) |
|---|---|
| OpenAI | spectra.api_keys.openai → openai.api_key → services.openai.api_key → prism.providers.openai.api_key |
| Anthropic | spectra.api_keys.anthropic → services.anthropic.api_key → services.anthropic.key → prism.providers.anthropic.api_key |
| Google | spectra.api_keys.google → services.google.api_key → services.google.key → prism.providers.gemini.api_key |
| Groq | spectra.api_keys.groq → services.groq.api_key → prism.providers.groq.api_key |
| xAI | spectra.api_keys.xai → services.xai.api_key → prism.providers.xai.api_key |
| Replicate | spectra.api_keys.replicate → services.replicate.api_key → prism.providers.replicate.api_key |
| OpenRouter | spectra.api_keys.openrouter → services.openrouter.api_key → services.openrouter.key → prism.providers.openrouter.api_key |
| ElevenLabs | spectra.api_keys.elevenlabs → services.elevenlabs.api_key → prism.providers.elevenlabs.api_key |
If you want Spectra-specific keys that take precedence over all other sources, set them under the spectra.api_keys namespace in the configuration file or via dedicated environment variables:
```env
# .env
SPECTRA_OPENAI_API_KEY=sk-...
SPECTRA_ANTHROPIC_API_KEY=sk-ant-...
```
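The same keys can also be set directly in the configuration file. A minimal sketch of the api_keys block (the spectra.api_keys.* paths are documented in the table above; the exact shape of the published config is an assumption):

```php
// config/spectra.php (sketch; the spectra.api_keys.* paths come from the table above)
'api_keys' => [
    'openai' => env('SPECTRA_OPENAI_API_KEY'),
    'anthropic' => env('SPECTRA_ANTHROPIC_API_KEY'),
],
```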
## Guzzle Middleware

If you use Guzzle directly — for example, with a custom HTTP client that bypasses Laravel's Http facade — you can add Spectra's GuzzleMiddleware to the Guzzle handler stack for automatic tracking. The middleware wraps the Guzzle handler and intercepts requests and responses in the same way the built-in watchers do.
```php
use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;
use Spectra\Http\GuzzleMiddleware;

$stack = HandlerStack::create();
$stack->push(GuzzleMiddleware::create('openai', 'gpt-4o'));

$client = new Client([
    'handler' => $stack,
    'base_uri' => 'https://api.openai.com/v1/',
    'headers' => [
        'Authorization' => 'Bearer ' . config('services.openai.api_key'),
        'Content-Type' => 'application/json',
    ],
]);

$response = $client->post('chat/completions', [
    'json' => [
        'model' => 'gpt-4o',
        'messages' => [
            ['role' => 'user', 'content' => 'Hello from Guzzle!'],
        ],
    ],
]);
```

If you don't know the provider at client creation time, pass 'auto' and Spectra will detect the provider from the request hostname at runtime:
```php
$stack->push(GuzzleMiddleware::create('auto'));
```

> **WARNING**
> The Guzzle middleware does not support tracking streaming responses. For streaming, use Spectra::stream() instead.
## Manual Tracking
For full control over what gets tracked and how, use Spectra::track(). This wraps any callable in a tracking context and lets you attach tags, user attribution, and conversation metadata. The callback receives a RequestContext instance that you can use to enrich the tracked record.
```php
use Spectra\Facades\Spectra;
use OpenAI\Laravel\Facades\OpenAI;

$result = Spectra::track('openai', 'gpt-4o', function ($ctx) {
    $ctx->addTag('feature', 'release-summary');
    $ctx->addTag('priority', 'high');

    return OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => [
            ['role' => 'user', 'content' => 'Summarize this document.'],
        ],
    ]);
});

// $result is the return value of the callback
echo $result->choices[0]->message->content;
```

The track() method automatically records success or failure based on whether the callback throws an exception. The provider and model arguments are used for cost lookup and dashboard classification. Tags attached via $ctx->addTag() appear in the dashboard for filtering and grouping — see Tags for more details.
### User Attribution
Spectra automatically associates tracked requests with the currently authenticated user when tracking.auto_track_user is enabled in the configuration (the default). The user is stored via a polymorphic trackable relationship, which means you can query usage and costs per user through the dashboard or the API.
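To turn this behavior off, disable the flag in your configuration. A sketch of the entry (the tracking.auto_track_user key is named above; the surrounding file structure is an assumption):

```php
// config/spectra.php (sketch)
'tracking' => [
    // When true (the default), requests are attributed to the authenticated user
    'auto_track_user' => true,
],
```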
You can also manually assign a different trackable model — for example, to attribute requests to a team, project, or organization instead of the logged-in user:
```php
$result = Spectra::track('openai', 'gpt-4o', function ($ctx) use ($team) {
    $ctx->forTrackable($team);

    return OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => $messages,
    ]);
});
```

### Conversation Tracking
For multi-turn conversations, you can attach a conversation identifier and turn number to correlate related requests in the dashboard:
```php
$result = Spectra::track('openai', 'gpt-4o', function ($ctx) use ($conversationId, $turn) {
    $ctx->inConversation($conversationId, $turn);

    return OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => $messageHistory,
    ]);
});
```
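For example, a simple loop might reuse one conversation identifier and increment the turn number per exchange. A sketch (the message-handling details are elided, and the variable names are illustrative):

```php
use Illuminate\Support\Str;
use Spectra\Facades\Spectra;

$conversationId = (string) Str::uuid();

foreach ($incomingMessages as $turn => $message) {
    Spectra::track('openai', 'gpt-4o', function ($ctx) use ($conversationId, $turn) {
        // Correlate every turn of this chat under one conversation id
        $ctx->inConversation($conversationId, $turn + 1);

        // ... perform the AI call for this turn and return its response
    });
}
```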
## Streaming

Spectra supports tracking streaming (SSE) responses through the Spectra::stream() method. The method returns a StreamingTracker that wraps the stream, collects chunks, measures time-to-first-token latency, and persists the complete request when the stream finishes. You must call finish() after the stream is consumed to finalize tracking.
### With the OpenAI SDK
```php
use Spectra\Facades\Spectra;
use OpenAI\Laravel\Facades\OpenAI;

$tracker = Spectra::stream('openai', 'gpt-4o');

$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'Write a short story about a robot.'],
    ],
]);

foreach ($tracker->track($stream) as $text) {
    echo $text; // Stream text to the user in real-time
}

$tracker->finish();
```

### Streamed Image Generation (OpenAI Responses API)
OpenAI's Responses API supports streaming image generation. Spectra tracks partial image chunks and persists the complete image when the stream finishes:
```php
use Spectra\Facades\Spectra;
use OpenAI\Laravel\Facades\OpenAI;

$tracker = Spectra::stream('openai', 'gpt-image-1');

$stream = OpenAI::responses()->createStreamed([
    'model' => 'gpt-image-1',
    'input' => 'A futuristic city skyline at sunset',
]);

foreach ($tracker->track($stream) as $chunk) {
    // Image streams yield partial base64 image data
}

$tracker->finish();
```

### With the Laravel HTTP Client
```php
$tracker = Spectra::stream('anthropic', 'claude-sonnet-4-20250514');

$response = Http::anthropic('claude-sonnet-4-20250514')
    ->withResponseType('stream')
    ->post('/messages', [
        'model' => 'claude-sonnet-4-20250514',
        'max_tokens' => 1024,
        'stream' => true,
        'messages' => [
            ['role' => 'user', 'content' => 'Tell me a joke.'],
        ],
    ]);

foreach ($tracker->track($response->body()) as $text) {
    echo $text;
}

$tracker->finish();
```

The finish() method returns the persisted SpectraRequest record, which includes full token counts, cost, latency, and time-to-first-token metrics.
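Since finish() must run once the stream is consumed, you may want to guard against a consumer that aborts mid-stream. One possible pattern, reusing $stream from the SDK example above (a sketch; whether a partially consumed stream should still be finalized is an application-level decision, and the assumption here is that finish() tolerates it):

```php
$tracker = Spectra::stream('openai', 'gpt-4o');

try {
    foreach ($tracker->track($stream) as $text) {
        echo $text;
    }
} finally {
    // Finalize tracking even if the loop above throws or the client disconnects
    $request = $tracker->finish();
}
```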
## Global Configuration
Spectra provides methods to set defaults that apply to all subsequent requests in the current process. These are useful in middleware, service providers, or anywhere you want consistent context across multiple AI calls without passing options to each individual request.
### Global Tags
Attach tags to every tracked request in the current process. This is convenient for adding environment-level or deployment-level context:
```php
Spectra::addGlobalTags(['environment' => 'production', 'region' => 'us-east-1']);
```

### Global Pricing Tier
Set the pricing tier globally so all requests calculate costs using the correct tier. This is particularly useful when processing batch jobs or operating under a specific pricing agreement:
```php
Spectra::withPricingTier('batch');
```

### Global Trackable
Associate all requests in the current process with a specific user or Eloquent model:
```php
Spectra::forUser($user);

// Or attribute to any Eloquent model
Spectra::forTrackable($team);
```

### Combining and Clearing Globals
Global settings can be chained together and cleared when no longer needed:
```php
// Set multiple globals
Spectra::addGlobalTags(['environment' => 'production'])
    ->withPricingTier('batch')
    ->forUser($user);

// Clear all globals (e.g., at the end of a middleware)
Spectra::clearGlobals();
```
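Putting the pieces together, a request-scoped middleware is a natural home for globals, as the comment above suggests. A sketch (the class name and tag keys are illustrative):

```php
namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Spectra\Facades\Spectra;

class ScopeSpectraGlobals
{
    public function handle(Request $request, Closure $next)
    {
        // Tag every AI call made while handling this request
        Spectra::addGlobalTags(['route' => $request->path()]);

        // Attribute calls to the authenticated user, if any
        if ($user = $request->user()) {
            Spectra::forUser($user);
        }

        try {
            return $next($request);
        } finally {
            // Reset so globals never leak into the next request on a long-lived worker
            Spectra::clearGlobals();
        }
    }
}
```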
## Disabling Tracking

If you need to make a request to an AI provider without Spectra tracking it, use the withoutAITracking() macro. This prevents the request from being intercepted by any of Spectra's watchers:
```php
$response = Http::withoutAITracking()
    ->withToken(config('services.openai.api_key'))
    ->post('https://api.openai.com/v1/chat/completions', [
        'model' => 'gpt-4o',
        'messages' => [['role' => 'user', 'content' => 'Hello!']],
    ]);
```

> **TIP**
> Spectra only tracks requests to known trackable endpoints such as /v1/chat/completions and /v1/embeddings. Non-trackable endpoints like /v1/models or health checks are never tracked, so withoutAITracking() is only necessary when you want to explicitly skip tracking on an endpoint that would otherwise be observed.