
[Bug]: Async Streaming Mock is different to sync streaming mock #4168

Closed
almajo opened this issue Jun 13, 2024 · 2 comments · Fixed by #4268
Labels
bug Something isn't working

Comments


almajo commented Jun 13, 2024

What happened?

When using the streaming mock feature with the completion() function, everything works as expected and as described in the documentation.

However, when calling acompletion() with stream=True, every chunk's delta contains the entire mock_response string instead of a single token. Since this is only mocking, it is not a severe bug, but completion and acompletion should presumably remain interchangeable.
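For illustration, the difference between the expected and observed chunking can be modeled with two plain generators (a minimal sketch; `mock_stream_expected` and `mock_stream_observed` are hypothetical helpers, not litellm APIs):

```python
def mock_stream_expected(mock_response: str):
    """Expected (sync-like) behavior: each chunk's delta carries one token."""
    for token in mock_response.split(" "):
        yield {"delta": {"content": token + " "}}


def mock_stream_observed(mock_response: str, n_chunks: int = 7):
    """Observed acompletion behavior: every chunk repeats the full string."""
    for _ in range(n_chunks):
        yield {"delta": {"content": mock_response}}


expected = [c["delta"]["content"] for c in mock_stream_expected("LiteLLM is awesome")]
observed = [c["delta"]["content"] for c in mock_stream_observed("LiteLLM is awesome")]
```

Here `expected` yields `"LiteLLM "`, `"is "`, `"awesome "` across chunks, while every element of `observed` is the full `"LiteLLM is awesome"` string, matching the log output below.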

Relevant log output

import asyncio

import litellm


async def main():
    generator = await litellm.acompletion(
        messages=[{"role": "user", "content": "Why is LiteLLM amazing?"}],
        mock_response="LiteLLM is awesome",
        stream=True,
        model="gpt-3.5-turbo",
    )
    async for t in generator:
        print(t)


asyncio.run(main())

----
ModelResponse(id='chatcmpl-cefbd92a-efbc-47da-913c-217cc9304f24', choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(content='LiteLLM is awesome', role='assistant', function_call=None, tool_calls=None), logprobs=None)], created=1718275107, model='gpt-3.5-turbo', object='chat.completion.chunk', system_fingerprint=None)
ModelResponse(id='chatcmpl-cefbd92a-efbc-47da-913c-217cc9304f24', choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(content='LiteLLM is awesome', role=None, function_call=None, tool_calls=None), logprobs=None)], created=1718275107, model='gpt-3.5-turbo', object='chat.completion.chunk', system_fingerprint=None)
ModelResponse(id='chatcmpl-cefbd92a-efbc-47da-913c-217cc9304f24', choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(content='LiteLLM is awesome', role=None, function_call=None, tool_calls=None), logprobs=None)], created=1718275107, model='gpt-3.5-turbo', object='chat.completion.chunk', system_fingerprint=None)
ModelResponse(id='chatcmpl-cefbd92a-efbc-47da-913c-217cc9304f24', choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(content='LiteLLM is awesome', role=None, function_call=None, tool_calls=None), logprobs=None)], created=1718275107, model='gpt-3.5-turbo', object='chat.completion.chunk', system_fingerprint=None)
ModelResponse(id='chatcmpl-cefbd92a-efbc-47da-913c-217cc9304f24', choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(content='LiteLLM is awesome', role=None, function_call=None, tool_calls=None), logprobs=None)], created=1718275107, model='gpt-3.5-turbo', object='chat.completion.chunk', system_fingerprint=None)
ModelResponse(id='chatcmpl-cefbd92a-efbc-47da-913c-217cc9304f24', choices=[StreamingChoices(finish_reason=None, index=0, delta=Delta(content='LiteLLM is awesome', role=None, function_call=None, tool_calls=None), logprobs=None)], created=1718275107, model='gpt-3.5-turbo', object='chat.completion.chunk', system_fingerprint=None)
ModelResponse(id='chatcmpl-cefbd92a-efbc-47da-913c-217cc9304f24', choices=[StreamingChoices(finish_reason='stop', index=0, delta=Delta(content=None, role=None, function_call=None, tool_calls=None), logprobs=None)], created=1718275107, model='gpt-3.5-turbo', object='chat.completion.chunk', system_fingerprint=None)

Twitter / LinkedIn details

No response

@almajo almajo added the bug Something isn't working label Jun 13, 2024
Contributor

ishaan-jaff commented Jun 19, 2024

Fixed here @almajo: #4268

Contributor

Hi @almajo, any chance we can hop on a call? I'd love to learn how we can improve litellm for you.

I reached out to you on Linkedin if DMs work.
Sharing a link to my cal for your convenience: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
