
OpenAI: Audio API (client.audio.speech, transcriptions, translations) not instrumented #174

@braintrust-bot


Summary

wrap_openai() instruments chat.completions, responses, embeddings, moderations, and beta.chat.completions, but does not instrument the Audio API. Calls to client.audio.speech.create(), client.audio.transcriptions.create(), and client.audio.translations.create() silently fall through to the unwrapped OpenAI client via NamedWrapper.__getattr__, producing no Braintrust spans.
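A minimal, self-contained sketch of the fall-through behavior (the class below is illustrative, not a copy of the real `NamedWrapper` in `oai.py`): any attribute not explicitly wrapped is delegated straight to the underlying client, so `client.audio` comes back as the raw, untraced resource.

```python
# Illustrative sketch of the NamedWrapper.__getattr__ delegation pattern.
# FakeAudio / FakeClient stand in for the real OpenAI client objects.

class NamedWrapper:
    def __init__(self, wrapped):
        self.__wrapped = wrapped

    def __getattr__(self, name):
        # Called only for attributes NOT set on the wrapper itself,
        # e.g. `audio` -- returned from the raw client, untraced.
        return getattr(self.__wrapped, name)


class FakeAudio:
    def speech(self):
        return "raw audio resource, no span created"


class FakeClient:
    def __init__(self):
        self.audio = FakeAudio()


client = NamedWrapper(FakeClient())
# `audio` is not wrapped, so __getattr__ hands back the raw resource.
print(type(client.audio).__name__)  # FakeAudio
```

Because delegation succeeds silently, the caller gets correct API behavior with zero signal that instrumentation was skipped.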

Upstream API surface

The OpenAI Audio API includes three stable endpoints:

| Method | Models | Purpose |
| --- | --- | --- |
| `audio.speech.create()` | `tts-1`, `tts-1-hd`, `gpt-4o-mini-tts` | Text-to-speech generation |
| `audio.transcriptions.create()` | `whisper-1`, `gpt-4o-transcribe`, `gpt-4o-mini-transcribe` | Speech-to-text transcription |
| `audio.translations.create()` | `whisper-1` | Audio translation to English |

These are production-stable APIs (not beta), supported in the OpenAI Python SDK via openai.resources.audio.
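A reproduction sketch of the three untraced call shapes. The client is replaced with a `MagicMock` so the snippet runs without network access or an API key; the argument shapes mirror the OpenAI Python SDK, and the point is simply that each call goes through `__getattr__` delegation and produces no Braintrust span.

```python
# Stand-in for a wrap_openai()-wrapped client; MagicMock auto-creates
# the nested audio.* resources, mimicking the silent delegation path.
from unittest import mock

client = mock.MagicMock(name="wrapped_openai_client")

# Text-to-speech
client.audio.speech.create(model="tts-1", voice="alloy", input="hi")
# Speech-to-text
client.audio.transcriptions.create(model="whisper-1", file=b"...")
# Audio translation to English
client.audio.translations.create(model="whisper-1", file=b"...")

# All three calls succeed -- but with the real wrapper, none is traced.
print(client.audio.speech.create.called)  # True
```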

Why this matters
Because NamedWrapper.__getattr__ delegates silently, audio calls succeed but leave no trace: users see no spans, no errors, and no indication that these endpoints are uninstrumented.

Braintrust docs status

The OpenAI integration docs document wrap_openai() but do not mention Audio API support.

Local files inspected

  • py/src/braintrust/oai.py — OpenAIV1Wrapper (line ~955) and _apply_openai_wrapper (line ~1142) wrap chat, responses, embeddings, moderations, and beta, but not audio
  • py/src/braintrust/oai.py — NamedWrapper.__getattr__ (line ~20) silently delegates unrecognized attributes to the unwrapped client
  • py/src/braintrust/wrappers/test_openai.py — no test cases for client.audio.*
  • py/noxfile.py — no audio-specific test session
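A hedged sketch of what audio instrumentation could look like, mirroring the wrap-and-delegate pattern the other resources use. All names here (`start_span`, `Span`, `AudioWrapper`, the stub client) are illustrative assumptions, not the actual Braintrust tracing API; a real fix would reuse the span helpers already in `oai.py`.

```python
# Hypothetical AudioWrapper sketch: open a span around each create()
# call. Span/start_span are toy stand-ins for Braintrust's tracer.
import time

SPANS = []  # toy span sink so the sketch is self-contained


class Span:
    def __init__(self, name):
        self.name = name
        self.metrics = {}

    def __enter__(self):
        self._start = time.time()
        return self

    def __exit__(self, *exc):
        self.metrics["duration"] = time.time() - self._start


def start_span(name):
    span = Span(name)
    SPANS.append(span)
    return span


class TranscriptionsWrapper:
    def __init__(self, transcriptions):
        self.__transcriptions = transcriptions

    def create(self, *args, **kwargs):
        # Trace the call instead of letting it fall through untraced.
        with start_span("openai.audio.transcriptions.create"):
            return self.__transcriptions.create(*args, **kwargs)


class AudioWrapper:
    def __init__(self, audio):
        self.transcriptions = TranscriptionsWrapper(audio.transcriptions)
        # speech and translations would be wrapped the same way


# Demo against a stub standing in for openai.resources.audio
class StubTranscriptions:
    def create(self, **kwargs):
        return {"text": "hello"}


class StubAudio:
    transcriptions = StubTranscriptions()


audio = AudioWrapper(StubAudio())
result = audio.transcriptions.create(model="whisper-1", file=None)
print(result["text"], len(SPANS))  # hello 1
```

The same shape would extend to `speech.create()` and `translations.create()`; streaming TTS responses would need extra care, as with the existing streaming chat instrumentation.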
