Default Speech Interfaces¶
The Stark framework ships with default speech interfaces built on several third-party platforms. This page explains their structure and usage.
Stark's speech interfaces comprise two primary components:
- Speech Recognizers: Convert spoken words into text.
- Speech Synthesizers: Translate text into audible speech.
Both components employ protocols, ensuring flexibility and extensibility when opting for different implementations.
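Because both components are protocol-based, any object that exposes the right methods can be swapped in. A minimal sketch of the idea in Python — the method names here are assumptions for illustration, not Stark's actual protocol definitions:

```python
# Sketch of protocol-based speech components. Method names are
# hypothetical; Stark's real protocols may differ.
from typing import Protocol, runtime_checkable


@runtime_checkable
class SpeechRecognizer(Protocol):
    async def start_listening(self) -> None:
        """Begin capturing audio and emitting recognized text."""
        ...

    def stop_listening(self) -> None:
        """Stop capturing audio."""
        ...


@runtime_checkable
class SpeechSynthesizer(Protocol):
    async def synthesize(self, text: str) -> None:
        """Turn text into audible (or stored) speech."""
        ...


# Any class with matching methods satisfies the protocol structurally,
# no inheritance required:
class SilentSynthesizer:
    async def synthesize(self, text: str) -> None:
        pass  # a real implementation would produce audio here


assert isinstance(SilentSynthesizer(), SpeechSynthesizer)
```

Structural typing is what makes it possible to mix and match recognizers and synthesizers, or substitute your own, without touching the rest of the pipeline.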
Vosk Speech Recognizer¶
A recognizer implementation built on the Vosk library. It captures audio input and processes it via the Vosk offline speech recognition engine.
Silero Speech Synthesizer¶
A synthesizer implementation built on Silero models. The resulting speech can then be played back audibly.
Google Cloud Speech Synthesizer¶
This synthesizer leverages Google Cloud's Text-to-Speech service. Ensure your credentials are properly configured before use. The synthesized speech can be saved to a file for later playback.
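Independently of Stark, Google Cloud client libraries typically locate credentials through the `GOOGLE_APPLICATION_CREDENTIALS` environment variable; a common setup looks like the following (the key-file path is a placeholder):

```shell
# Point Google Cloud client libraries at a service-account key file.
# The path below is a placeholder; substitute your own key's location.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
```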
To integrate the speech interfaces:
- Select and instantiate your preferred speech recognizer.
- Select and instantiate your preferred speech synthesizer.
- Call the `run()` function, supplying it with the `CommandsManager`, recognizer, and synthesizer instances.
With this configuration, your application will start listening for voice commands and produce synthesized speech according to the logic in the commands manager.
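To make the flow concrete without audio hardware or the real dependencies, here is a self-contained mock of the same wiring. Every class and method name below is an illustrative stand-in, not Stark's actual API:

```python
# Mock of the recognize -> dispatch -> synthesize loop.
# All names are hypothetical stand-ins for Stark's components.
import asyncio


class EchoRecognizer:
    """Pretends to recognize a fixed sequence of utterances."""
    def __init__(self, utterances):
        self.utterances = list(utterances)

    async def listen(self):
        for text in self.utterances:
            yield text  # a real recognizer would yield transcribed audio


class PrintSynthesizer:
    """Pretends to speak by recording what it was asked to say."""
    def __init__(self):
        self.spoken = []

    async def speak(self, text):
        self.spoken.append(text)


class CommandsManagerSketch:
    """Maps trigger phrases to handlers, loosely like a commands manager."""
    def __init__(self):
        self.commands = {}

    def new(self, phrase):
        def decorator(func):
            self.commands[phrase] = func
            return func
        return decorator

    async def dispatch(self, text):
        handler = self.commands.get(text)
        return await handler() if handler else None


async def run_sketch(manager, recognizer, synthesizer):
    # Core loop: recognize -> dispatch -> synthesize.
    async for text in recognizer.listen():
        response = await manager.dispatch(text)
        if response:
            await synthesizer.speak(response)


manager = CommandsManagerSketch()


@manager.new('hello')
async def hello():
    return 'Hi there!'


synth = PrintSynthesizer()
asyncio.run(run_sketch(manager, EchoRecognizer(['hello']), synth))
print(synth.spoken)  # ['Hi there!']
```

The real `run()` follows the same shape: the recognizer feeds text to the commands manager, and responses are routed back out through the synthesizer.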
- Confirm the required dependencies, such as Vosk, Silero, and Google Cloud, are in place (refer to Installation).
- Adequate error management and model verifications are essential for a production environment.
- For more nuanced interactions based on speech recognition outcomes, adjust the delegates.
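As an illustration, a delegate might collect partial results for live UI feedback while acting only on final results. The hook names below are hypothetical, not Stark's actual delegate protocol:

```python
# Hypothetical recognition delegate; the hook names are illustrative.
class LoggingDelegate:
    def __init__(self):
        self.partials = []
        self.finals = []

    def did_receive_partial_result(self, text: str) -> None:
        # e.g. show live feedback while the user is still speaking
        self.partials.append(text)

    def did_receive_final_result(self, text: str) -> None:
        # only final results would be dispatched as commands
        self.finals.append(text)


delegate = LoggingDelegate()
delegate.did_receive_partial_result('turn on')
delegate.did_receive_final_result('turn on the lights')
print(delegate.finals)  # ['turn on the lights']
```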
Stark's default speech interfaces let you build voice-driven applications with minimal setup. Choose the recognizer and synthesizer that best fit your requirements and wire them together through the `run()` function.
Implementing Custom Speech Interfaces¶
For more information, consult Custom Speech Interfaces under the "Advanced" section.