
Supported Models

Every tracked request is classified into a model type that determines which metrics are extracted, which pricing formula is applied, and how the request is rendered in the dashboard. Spectra supports six model types, each with its own set of metrics and pricing characteristics. This page describes what Spectra captures for each type and which providers support it.

All model types are tracked using the same mechanisms described in Usage — automatic watchers, provider macros, Guzzle middleware, or manual tracking. The provider's handler automatically detects the model type from the endpoint and response shape, so no additional configuration is needed on your part.
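
For instance, with the Guzzle middleware approach, the tracking middleware is pushed onto the client's handler stack. The sketch below is illustrative only; `$spectraMiddleware` stands in for whatever the Usage page describes for your version of Spectra.

```php
use GuzzleHttp\Client;
use GuzzleHttp\HandlerStack;

// $spectraMiddleware is a placeholder for the middleware documented on the Usage page;
// how you obtain it depends on your Spectra version.
$stack = HandlerStack::create();
$stack->push($spectraMiddleware);

$client = new Client([
    'base_uri' => 'https://api.openai.com',
    'handler'  => $stack,
]);

// Requests sent through $client are tracked, and the model type is detected
// automatically from the endpoint and response shape.
```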

Text Completions

Text generation is the most widely supported model type, covering chat completions, message generation, and content generation across all major providers. Spectra extracts token-level metrics from every text request and uses them for both cost calculation and dashboard analytics.

Supported Providers

| Provider | Endpoints | Example Models |
| --- | --- | --- |
| OpenAI | /v1/chat/completions, /v1/responses | GPT-4o, GPT-4o-mini, o1, o3 |
| Anthropic | /v1/messages | Claude Opus, Sonnet, Haiku |
| Google | /v1beta/models/{model}:generateContent | Gemini 2.5 Flash, Gemini 2.5 Pro |
| Groq | /openai/v1/chat/completions | Llama 3.3, Mixtral, Gemma |
| xAI | /v1/chat/completions | Grok-3, Grok-3-mini |
| Ollama | /api/chat, /api/generate | Any locally hosted model |
| OpenRouter | /api/v1/chat/completions | Any model available on OpenRouter |
| Cohere | /v2/chat | Command R, Command R+ |

Metrics

| Metric | Column | Description |
| --- | --- | --- |
| Prompt tokens | prompt_tokens | Number of input tokens sent to the model |
| Completion tokens | completion_tokens | Number of output tokens generated by the model |
| Cached tokens | Deducted from prompt cost | Tokens served from the provider's prompt cache at a discounted rate |
| Finish reason | finish_reason | Why generation stopped: stop, length, tool_calls, end_turn, etc. |

Cached Token Pricing

Providers such as OpenAI, Anthropic, Google, and xAI offer prompt caching, where previously seen input tokens are served at a reduced rate. Spectra detects cached tokens automatically from the response metadata and applies the discounted price during cost calculation. The formula separates regular and cached prompt tokens to ensure accurate cost attribution:

regular_prompt_tokens = prompt_tokens - cached_tokens
prompt_cost = (regular_prompt_tokens × input_price + cached_tokens × cached_price) / 1,000,000
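
As a worked sketch in plain PHP, using illustrative prices rather than actual provider rates:

```php
// Illustrative prices in USD per million prompt tokens (not actual provider rates).
$inputPrice  = 2.50;  // regular (uncached) prompt tokens
$cachedPrice = 1.25;  // cached prompt tokens

// Values reported in the provider's usage metadata for a single request.
$promptTokens = 12_000;
$cachedTokens = 8_000;

$regularPromptTokens = $promptTokens - $cachedTokens; // 4,000

$promptCost = ($regularPromptTokens * $inputPrice + $cachedTokens * $cachedPrice) / 1_000_000;
// (4,000 × 2.50 + 8,000 × 1.25) / 1,000,000 = 0.02 USD
```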

Embeddings

Embeddings convert text into dense vector representations used for semantic search, clustering, recommendation systems, and similarity comparisons. Spectra tracks embedding requests with token-level metrics. Unlike text completions, embedding models produce no output tokens — the cost is based entirely on input token count.

Supported Providers

| Provider | Endpoint | Example Models |
| --- | --- | --- |
| OpenAI | /v1/embeddings | text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002 |

Metrics

| Metric | Column | Description |
| --- | --- | --- |
| Prompt tokens | prompt_tokens | Number of input tokens in the text being embedded |
| Completion tokens | completion_tokens | Always 0 for embeddings, which produce no output tokens |

Embedding models are priced per input token only. When embedding multiple texts in a single request (batch embeddings), Spectra tracks the total token count across all inputs.
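
A minimal sketch of that calculation, assuming an illustrative per-million-token rate:

```php
// Illustrative rate, not an actual price list: 0.02 USD per million input tokens.
$pricePerMillion = 0.02;

// Batch embedding: Spectra records the total token count across all inputs,
// so the per-text counts below are just for illustration.
$tokensPerInput = [310, 128, 742];
$promptTokens   = array_sum($tokensPerInput); // 1,180

$cost = ($promptTokens * $pricePerMillion) / 1_000_000; // ≈ 0.0000236 USD
```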

Image Generation

Spectra tracks image generation requests with image-specific metrics and optional media persistence. Image generation APIs typically return temporary URLs that expire after a set period. When media persistence is enabled, Spectra automatically downloads and stores the generated images before the URLs expire, ensuring you retain access to the output regardless of provider URL lifetimes.

Supported Providers

| Provider | Endpoints | Example Models |
| --- | --- | --- |
| OpenAI | /v1/images/generations, /v1/images/edits, /v1/images/variations | DALL-E 3, DALL-E 2, gpt-image-1 |
| OpenAI (Responses API) | /v1/responses (streaming and non-streaming) | gpt-image-1 |
| Replicate | /v1/models/{owner}/{model}/predictions | Stable Diffusion, FLUX |

Metrics

| Metric | Column | Description |
| --- | --- | --- |
| Image count | image_count | Number of images generated in the request |

Image models are priced per image. The cost depends on the model, image dimensions, and quality settings specified in the request. The image handler also supports the OpenAI Responses API for streaming image generation — Spectra captures the completed response from the response.completed event and extracts metrics through the same handler pipeline as non-streaming requests.
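
As an illustrative sketch of per-image pricing, with made-up prices and size/quality tiers:

```php
// Hypothetical per-image prices keyed by quality and size; real rates vary by model.
$imagePrices = [
    'standard' => ['1024x1024' => 0.040, '1024x1792' => 0.080],
    'hd'       => ['1024x1024' => 0.080, '1024x1792' => 0.120],
];

$quality    = 'hd';
$size       = '1024x1024';
$imageCount = 2; // stored in the image_count column

$cost = $imageCount * $imagePrices[$quality][$size]; // 0.16 USD with these numbers
```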

Media Persistence

Enable media persistence to automatically download generated images before provider URLs expire:

```php
// config/spectra.php
'storage' => [
    'media' => [
        'enabled' => true,
        'disk' => 'local',     // Any Laravel filesystem disk
        'path' => 'spectra',   // Storage path prefix
    ],
],
```

When enabled, the media_storage_path column on the request record stores the paths to the persisted files as JSON.
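
For example, assuming you have a request record at hand (how you retrieve it is up to your application), the stored paths can be decoded and read back from the configured disk:

```php
use Illuminate\Support\Facades\Storage;

// $record is assumed to be a Spectra request row; media_storage_path holds
// a JSON array of file paths on the configured disk.
$paths = json_decode($record->media_storage_path, true) ?? [];

$disk = config('spectra.storage.media.disk');

foreach ($paths as $path) {
    $contents = Storage::disk($disk)->get($path); // read the persisted image back
}
```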

Video Generation

Spectra tracks video generation with video-specific metrics including count and duration. Video generation is typically asynchronous — you submit a creation request and then poll for the completed result. Spectra's video handler implements the SkipsResponse interface, which prevents storing incomplete polling responses. Only the final, completed response is persisted with full metrics.

Supported Providers

| Provider | Endpoints | Example Models |
| --- | --- | --- |
| OpenAI | /v1/videos, /v1/videos/{id} | Sora |

Metrics

| Metric | Column | Description |
| --- | --- | --- |
| Video count | video_count | Number of videos generated |
| Duration | duration_seconds | Total duration of the generated video in seconds |

Video models are priced per video generated. The video handler also implements the HasExpiration interface, making Spectra aware of the expiration timeline for generated video URLs. Media persistence works the same way as for images — when enabled, videos are downloaded to your configured Laravel filesystem disk before the URLs expire.
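
As a rough sketch of reporting on video usage (the table name below is an assumption; check the package's published migration for the actual name):

```php
use Illuminate\Support\Facades\DB;

// 'spectra_requests' is an assumed table name, used here for illustration only.
$videoUsage = DB::table('spectra_requests')
    ->where('video_count', '>', 0)
    ->selectRaw('SUM(video_count) AS videos, SUM(duration_seconds) AS seconds')
    ->first();

// e.g. $videoUsage->videos generated videos totalling $videoUsage->seconds seconds
```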

Text-to-Speech

Spectra tracks text-to-speech (TTS) requests by measuring the input text size for character-based pricing. TTS endpoints return binary audio data (MP3, Opus, etc.) rather than JSON, so the response payload is not stored in the database — only the request parameters and extracted metrics are recorded.

Supported Providers

| Provider | Endpoints | Example Models |
| --- | --- | --- |
| OpenAI | /v1/audio/speech | tts-1, tts-1-hd |
| ElevenLabs | /v1/text-to-speech/{voice_id} | Eleven Multilingual, Eleven Turbo |

Metrics

| Metric | Column | Description |
| --- | --- | --- |
| Input characters | input_characters | Number of characters in the input text |

TTS models are priced per character or per million characters, depending on the provider.
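
A minimal worked example with an illustrative per-million-character rate:

```php
// Illustrative rate, not an actual price: 15.00 USD per million characters.
$pricePerMillionChars = 15.00;

$inputCharacters = 4_200; // stored in the input_characters column

$cost = ($inputCharacters * $pricePerMillionChars) / 1_000_000; // 0.063 USD
```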

Speech-to-Text

Spectra tracks speech-to-text (STT) transcription and translation requests, capturing both the audio duration and any token metrics returned by the provider. STT requests use multipart form data rather than JSON — the audio file is sent as a file attachment. Spectra handles this automatically.

Supported Providers

| Provider | Endpoints | Example Models |
| --- | --- | --- |
| OpenAI | /v1/audio/transcriptions, /v1/audio/translations | whisper-1 |

Metrics

| Metric | Column | Description |
| --- | --- | --- |
| Duration | duration_seconds | Length of the input audio in seconds |
| Prompt tokens | prompt_tokens | Token count from the transcription (when available) |
| Completion tokens | completion_tokens | Token count from the output (when available) |

STT models are priced per minute of input audio.
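
A minimal worked example with an illustrative per-minute rate:

```php
// Illustrative rate, not an actual price: 0.006 USD per minute of audio.
$pricePerMinute = 0.006;

$durationSeconds = 272.5; // stored in the duration_seconds column

$cost = ($durationSeconds / 60) * $pricePerMinute; // ≈ 0.027 USD
```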
