Learn how to enable language detection

Set the language query parameter to multi when calling https://waves-api.smallest.ai/api/v1/lightning/get_text. Lightning STT then auto-detects the spoken language across 30+ languages (identified by ISO 639-1 codes) without any change to your request body, whether you upload raw audio bytes or send a hosted URL.
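A minimal Python sketch of the same call, using only the standard library. The endpoint, query parameters, and headers mirror the curl sample on this page; the API-key handling and the build_request helper name are illustrative, not part of the API.

```python
import urllib.parse
import urllib.request

def build_request(audio_bytes: bytes, api_key: str) -> urllib.request.Request:
    """Build the language-detection request: language=multi enables
    auto-detection, and the body is the raw audio bytes."""
    params = urllib.parse.urlencode(
        {"model": "lightning", "language": "multi", "word_timestamps": "true"}
    )
    url = f"https://waves-api.smallest.ai/api/v1/lightning/get_text?{params}"
    return urllib.request.Request(
        url,
        data=audio_bytes,  # e.g. open("audio.wav", "rb").read()
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "audio/wav",
        },
    )
```

To send the request, pass the result to urllib.request.urlopen and decode the JSON body of the response.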

Output format & fields of interest

When language detection is enabled, the transcription string and the word_timestamps and utterances arrays are returned in the detected language. To persist the detected locale in your app, store the language parameter you supplied (for auditing) and inspect downstream artifacts, such as subtitles or captions, that inherit the localized transcript.

Sample request

curl --request POST \
  --url "https://waves-api.smallest.ai/api/v1/lightning/get_text?model=lightning&language=multi&word_timestamps=true" \
  --header "Authorization: Bearer $SMALLEST_API_KEY" \
  --header "Content-Type: audio/wav" \
  --data-binary "@/path/to/audio.wav"

Sample response

{
  "status": "success",
  "transcription": "Hola mundo.",
  "word_timestamps": [
    { "word": "Hola", "start": 0.0, "end": 0.4 },
    { "word": "mundo.", "start": 0.5, "end": 0.9 }
  ],
  "utterances": [
    { "text": "Hola mundo.", "start": 0.0, "end": 0.9 }
  ],
  "metadata": {
    "filename": "audio.wav",
    "duration": 1.0
  }
}
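A short Python sketch of reading the fields above once the response body is decoded. The dict literal is the sample response from this page; the variable names are illustrative.

```python
import json

# The sample response shown above, as a JSON string.
body = """{
  "status": "success",
  "transcription": "Hola mundo.",
  "word_timestamps": [
    { "word": "Hola", "start": 0.0, "end": 0.4 },
    { "word": "mundo.", "start": 0.5, "end": 0.9 }
  ],
  "utterances": [
    { "text": "Hola mundo.", "start": 0.0, "end": 0.9 }
  ],
  "metadata": { "filename": "audio.wav", "duration": 1.0 }
}"""

response = json.loads(body)
assert response["status"] == "success"

# The transcript and timestamps arrive in the detected language.
transcript = response["transcription"]
words = [(w["word"], w["start"], w["end"]) for w in response["word_timestamps"]]
```

From here, the utterances array (text plus start/end times) is a natural input for generating subtitles or captions in the detected language.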