API Documentation
Getting Started
FloweryAI provides a simple REST API for converting speech to text. Our API endpoints accept audio files and return accurate transcriptions.
Base URL:
https://floweryai.l5.ca
Available Models
Model Name | Description
---|---
whisper-tiny | Lightweight model - fastest option, lower accuracy
whisper-base | Base model - good balance of speed and accuracy
Endpoints
GET
/stt/models
List all available speech-to-text models
Response:
{
  "available_models": [
    {
      "name": "whisper-tiny",
      "description": "Tiny model for Whisper speech-to-text",
      "provider": "floweryai",
      "input_modalities": ["audio"],
      "output_modalities": ["text"]
    },
    {
      "name": "whisper-base",
      "description": "Base model for Whisper speech-to-text",
      "provider": "floweryai",
      "input_modalities": ["audio"],
      "output_modalities": ["text"]
    }
  ]
}
Example:
curl https://floweryai.l5.ca/stt/models
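The same request can be made from a browser or Node.js 18+ client with fetch. A minimal sketch against the endpoint above:

```javascript
// Fetch the list of available speech-to-text models
async function listModels() {
  const response = await fetch('https://floweryai.l5.ca/stt/models');
  // fetch() resolves even on HTTP error codes, so check the status explicitly
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.available_models;
}
```

Usage: `listModels().then(models => models.forEach(m => console.log(m.name)));`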
POST
/stt/transcribe
Transcribe an audio file to text
Request:
Form data with:
audio - The audio file to transcribe (WAV, MP3, M4A, FLAC)
model - (Optional) Model to use. Default: whisper-tiny
timestamps - (Optional) Set to "true" to include timestamps. Default: false
Response:
{
  "transcription": "This is the transcribed text from the audio file."
}
Response with timestamps:
{
  "transcription": "[00:00:00.000 --> 00:00:05.520] This is the transcribed text from the audio file.\n[00:00:05.520 --> 00:00:10.560] With timestamps showing when each segment was spoken."
}
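The segment format above (one `[HH:MM:SS.mmm --> HH:MM:SS.mmm] text` entry per line) is inferred from the sample response; a small helper can split it into structured segments:

```javascript
// Parse a timestamped transcription string into { start, end, text } segments.
// Assumes the one-segment-per-line format shown in the sample response.
function parseSegments(transcription) {
  const pattern = /^\[(\d{2}:\d{2}:\d{2}\.\d{3}) --> (\d{2}:\d{2}:\d{2}\.\d{3})\]\s*(.*)$/;
  return transcription
    .split('\n')
    .map((line) => {
      const match = line.match(pattern);
      return match ? { start: match[1], end: match[2], text: match[3] } : null;
    })
    .filter(Boolean); // drop lines that do not match the segment format
}

const segments = parseSegments(
  '[00:00:00.000 --> 00:00:05.520] This is the transcribed text from the audio file.'
);
console.log(segments[0].start); // → 00:00:00.000
```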
Example:
curl -X POST https://floweryai.l5.ca/stt/transcribe \
-F "audio=@recording.wav" \
-F "model=whisper-tiny" \
-F "timestamps=true"
Code Examples
// Example: Transcribe an audio file
async function transcribeAudio(audioFile, model = 'whisper-tiny', timestamps = false) {
  const formData = new FormData();
  formData.append('audio', audioFile);
  formData.append('model', model);
  formData.append('timestamps', timestamps ? 'true' : 'false');

  const response = await fetch('https://floweryai.l5.ca/stt/transcribe', {
    method: 'POST',
    body: formData
  });
  // fetch() only rejects on network failure, so check the HTTP status explicitly
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return await response.json();
}

// Usage with a file input element
document.getElementById('audioInput').addEventListener('change', async (e) => {
  const file = e.target.files[0];
  if (file) {
    try {
      // Get transcription with timestamps
      const result = await transcribeAudio(file, 'whisper-tiny', true);
      console.log('Transcription with timestamps:', result.transcription);
    } catch (error) {
      console.error('Error:', error);
    }
  }
});