[Question] Example request. LocalAudioTransport + Whisper + llm + tts #197
Comments
I got it running; DM me and I'll send you the script.
@ajram23 Can you paste it here? 🙏 I didn't know I could DM someone on GitHub!
07-interruptible-local.py.txt
@ajram23 Thank you for your example! It's quite similar to what I have implemented. Were you able to interact with the LLM? In my case, I can see the initial message from the LLM, but I seem to have an issue with the communication between the Whisper service and the LLMUserResponseAggregator. Here is my current code:
Here are my pipeline debug messages:
For some reason, the transcriptions from Whisper are not being passed to the LLMUserResponseAggregator. I've added print statements inside the LLMUserResponseAggregator to check the messages, but nothing is logged after the Whisper model transcribes the speech. I'm running this on a Mac M2. Any insights or suggestions on what might be going wrong would be greatly appreciated! Thank you for your help!
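One thing worth checking in a situation like this is that a user response aggregator typically buffers transcription frames and only pushes the accumulated message downstream once it sees an end-of-utterance signal, so print statements inside it can stay silent if that signal never arrives. The following is a simplified, self-contained model of that buffering behavior (the class and frame names here are illustrative stand-ins, not the actual pipecat implementation):

```python
from dataclasses import dataclass

# Toy stand-ins for pipeline frames (hypothetical, for illustration only).
@dataclass
class TranscriptionFrame:
    text: str

@dataclass
class UserStoppedSpeakingFrame:
    pass

class ToyUserResponseAggregator:
    """Buffers transcription text and emits an LLM 'messages' payload
    only after the user has stopped speaking."""

    def __init__(self):
        self._buffer = []

    def process(self, frame):
        if isinstance(frame, TranscriptionFrame):
            # Accumulate text; nothing is emitted downstream yet.
            self._buffer.append(frame.text)
            return None
        if isinstance(frame, UserStoppedSpeakingFrame) and self._buffer:
            # End of utterance: flush the buffer as a user message.
            messages = [{"role": "user", "content": " ".join(self._buffer)}]
            self._buffer = []
            return messages
        return None

agg = ToyUserResponseAggregator()
assert agg.process(TranscriptionFrame("hello")) is None
assert agg.process(TranscriptionFrame("there")) is None
print(agg.process(UserStoppedSpeakingFrame()))
# → [{'role': 'user', 'content': 'hello there'}]
```

If the real aggregator behaves along these lines, the symptom described above (transcriptions visible from Whisper but nothing logged in the aggregator) could mean either that the aggregator sits in the wrong position in the pipeline relative to the STT service, or that the end-of-speech frame is never produced by the transport.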
@gaceladri In my case I was able to; not sure what's going on with yours.
Ok, thank you for the support and feedback!
Hi 👋
I am having trouble running a local example that integrates LocalAudioTransport, WhisperSTTService, ElevenLabsTTSService, and OpenAILLMService.
I have successfully managed to run Whisper locally for transcription and another script that uses Eleven Labs and OpenAI for TTS and LLM services, respectively. However, I am struggling to combine these components to create a fully functional local conversation system.
To illustrate, here are the two examples I have working independently:
Example 1: Passing an LLM message to the TTS provider:
Example 2: Using Whisper locally:
Despite these individual successes, I'm unable to connect the transcriptions with the LLM and have a continuous conversation. Could you provide or add an example of a fully working local setup that demonstrates how to achieve this?
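To make the ask concrete, the shape of the combined pipeline is a single chain where the microphone feeds STT, STT feeds the LLM (via a user response aggregator), and the LLM feeds TTS and then the speaker. The sketch below uses toy classes to show only that ordering; real pipecat services are async and frame-based, so treat this as the intended shape rather than working pipecat code:

```python
# Toy processors standing in for the real services (names are
# illustrative, not the pipecat API).
class FakeSTT:
    def process(self, audio):
        # Would run Whisper on an audio chunk.
        return f"transcript({audio})"

class FakeLLM:
    def process(self, text):
        # Would send the aggregated user message to OpenAI.
        return f"reply to '{text}'"

class FakeTTS:
    def process(self, text):
        # Would synthesize speech with Eleven Labs.
        return f"audio({text})"

def run_pipeline(stages, mic_input):
    """Pass data through each stage in order, mic in, speaker out."""
    data = mic_input
    for stage in stages:
        data = stage.process(data)
    return data

# Order matters: mic input -> STT -> (user aggregator) -> LLM ->
# (assistant aggregator) -> TTS -> speaker output.
pipeline = [FakeSTT(), FakeLLM(), FakeTTS()]
print(run_pipeline(pipeline, "mic-chunk"))
# → audio(reply to 'transcript(mic-chunk)')
```

A full local example would presumably splice the two working scripts together along this chain, with the aggregators between STT and the LLM so that complete utterances, rather than raw transcription fragments, reach the model.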
Thank you!