[Bug]: Assertion Error while using google models via OpenRouter #4355

Open
kushalsharma opened this issue Jun 22, 2024 · 0 comments
Labels: bug (Something isn't working)

Comments

@kushalsharma
What happened?

Code to reproduce the bug:

from litellm import completion
import os

os.environ['OPENROUTER_API_KEY'] = ""  # set your OpenRouter API key here
response = completion(
    model="openrouter/google/gemini-flash-1.5",
    messages=[{"role": "user", "content": "some NSFW content to trigger error"}]
)
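
As a stopgap on the caller side, something like the wrapper below keeps the failure from surfacing as an opaque UnboundLocalError. This is only a sketch: safe_completion is a hypothetical helper, not part of litellm's API.

from litellm import completion

def safe_completion(**kwargs):
    """Hypothetical wrapper: return (response, error) instead of raising.

    litellm currently surfaces this failure as an UnboundLocalError,
    so we catch broadly and hand the message back to the caller.
    """
    try:
        return completion(**kwargs), None
    except Exception as e:
        return None, str(e)

response, error = safe_completion(
    model="openrouter/google/gemini-flash-1.5",
    messages=[{"role": "user", "content": "hello"}],
)
if error:
    print(f"completion failed: {error}")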

Relevant log output

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py in ?(response_object, model_response_object, response_type, stream, start_time, end_time, hidden_params)
   5321             return model_response_object
   5322     except Exception as e:
-> 5323         raise Exception(
   5324             f"Invalid response object {traceback.format_exc()}\n\nreceived_args={received_args}"

AssertionError: 

During handling of the above exception, another exception occurred:

Exception                                 Traceback (most recent call last)
~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/llms/openai.py in ?(self, model_response, timeout, optional_params, model, messages, print_verbose, api_key, api_base, acompletion, logging_obj, litellm_params, logger_fn, headers, custom_prompt_dict, client, organization, custom_llm_provider)
    831                 raise OpenAIError(status_code=e.status_code, message=str(e))
    832             else:
--> 833                 raise OpenAIError(status_code=500, message=traceback.format_exc())

~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py in ?(response_object, model_response_object, response_type, stream, start_time, end_time, hidden_params)
   5321             return model_response_object
   5322     except Exception as e:
-> 5323         raise Exception(
   5324             f"Invalid response object {traceback.format_exc()}\n\nreceived_args={received_args}"

Exception: Invalid response object Traceback (most recent call last):
  File "/Users/kush/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py", line 5198, in convert_to_model_response_object
    assert response_object["choices"] is not None and isinstance(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError


received_args={'response_object': {'id': None, 'choices': None, 'created': None, 'model': None, 'object': None, 'service_tier': None, 'system_fingerprint': None, 'usage': None, 'error': {'message': "Cannot read properties of undefined (reading 'parts')", 'code': 502}}, 'model_response_object': ModelResponse(id='chatcmpl-86f2f5d4-daf7-4be1-8520-fd5d02e56577', choices=[Choices(finish_reason='stop', index=0, message=Message(content='default', role='assistant'))], created=1719056210, model='None/google/gemini-flash-1.5', object='chat.completion', system_fingerprint=None, usage=Usage()), 'response_type': 'completion', 'stream': False, 'start_time': None, 'end_time': None, 'hidden_params': None}

During handling of the above exception, another exception occurred:

OpenAIError                               Traceback (most recent call last)
File ~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/main.py:1791, in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   1790 ## COMPLETION CALL
-> 1791 response = openai_chat_completions.completion(
   1792     model=model,
   1793     messages=messages,
   1794     headers=headers,
   1795     api_key=api_key,
   1796     api_base=api_base,
   1797     model_response=model_response,
   1798     print_verbose=print_verbose,
   1799     optional_params=optional_params,
   1800     litellm_params=litellm_params,
   1801     logger_fn=logger_fn,
   1802     logging_obj=logging,
   1803     acompletion=acompletion,
   1804     timeout=timeout,  # type: ignore
   1805 )
   1806 ## LOGGING

File ~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/llms/openai.py:833, in OpenAIChatCompletion.completion(self, model_response, timeout, optional_params, model, messages, print_verbose, api_key, api_base, acompletion, logging_obj, litellm_params, logger_fn, headers, custom_prompt_dict, client, organization, custom_llm_provider)
    832 else:
--> 833     raise OpenAIError(status_code=500, message=traceback.format_exc())

OpenAIError: Traceback (most recent call last):
  File "/Users/kush/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py", line 5198, in convert_to_model_response_object
    assert response_object["choices"] is not None and isinstance(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/kush/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/llms/openai.py", line 825, in completion
    raise e
  File "/Users/kush/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/llms/openai.py", line 792, in completion
    return convert_to_model_response_object(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/kush/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py", line 5323, in convert_to_model_response_object
    raise Exception(
Exception: Invalid response object Traceback (most recent call last):
  File "/Users/kush/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py", line 5198, in convert_to_model_response_object
    assert response_object["choices"] is not None and isinstance(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError


received_args={'response_object': {'id': None, 'choices': None, 'created': None, 'model': None, 'object': None, 'service_tier': None, 'system_fingerprint': None, 'usage': None, 'error': {'message': "Cannot read properties of undefined (reading 'parts')", 'code': 502}}, 'model_response_object': ModelResponse(id='chatcmpl-86f2f5d4-daf7-4be1-8520-fd5d02e56577', choices=[Choices(finish_reason='stop', index=0, message=Message(content='default', role='assistant'))], created=1719056210, model='None/google/gemini-flash-1.5', object='chat.completion', system_fingerprint=None, usage=Usage()), 'response_type': 'completion', 'stream': False, 'start_time': None, 'end_time': None, 'hidden_params': None}


During handling of the above exception, another exception occurred:

UnboundLocalError                         Traceback (most recent call last)
/var/folders/_0/5yzypzd977xb2k9p8ljjxmm00000gn/T/ipykernel_68268/1428737860.py in ?()
      1 from litellm import completion
      2 import os
      3 
      4 os.environ['OPENROUTER_API_KEY'] = "<REDACTED>"
----> 5 response = completion(
      6     model="openrouter/google/gemini-flash-1.5",
      7     messages=[{"role": "user", "content": "some text to generate NSFW content"}]
      8 )

~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py in ?(*args, **kwargs)
    951                     if (
    952                         liteDebuggerClient and liteDebuggerClient.dashboard_url != None
    953                     ):  # make it easy to get to the debugger logs if you've initialized it
    954                         e.message += f"\n Check the log in your dashboard - {liteDebuggerClient.dashboard_url}"
--> 955             raise e

~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/main.py in ?(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   2576             )
   2577         return response
   2578     except Exception as e:
   2579         ## Map to OpenAI Exception
-> 2580         raise exception_type(
   2581             model=model,
   2582             custom_llm_provider=custom_llm_provider,
   2583             original_exception=e,

~/miniforge3/envs/mm-scripts/lib/python3.12/site-packages/litellm/utils.py in ?(model, original_exception, custom_llm_provider, completion_kwargs, extra_kwargs)
   7436         ):
   7437             threading.Thread(target=get_all_keys, args=(e.llm_provider,)).start()
   7438         # don't let an error with mapping interrupt the user from receiving an error from the llm api calls
   7439         if exception_mapping_worked:
-> 7440             raise e
   7441         else:
   7442             raise APIConnectionError(
   7443                 message="{}\n{}".format(original_exception, traceback.format_exc()),

UnboundLocalError: cannot access local variable 'exception_provider' where it is not associated with a value
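
Reading the received_args above: OpenRouter returned a body whose fields are all None except an error payload (code 502, "Cannot read properties of undefined (reading 'parts')"), presumably because Gemini blocked the response. convert_to_model_response_object then trips its assertion on choices, and while mapping that failure, exception_type hits a second bug (exception_provider referenced before assignment), which is the UnboundLocalError the user finally sees. A guard along the lines below would surface the real provider error first; this is an illustrative sketch, not litellm's actual code.

def convert_response_sketch(response_object: dict) -> dict:
    """Hypothetical guard: inspect the provider's error payload first."""
    error = response_object.get("error")
    if error is not None:
        # Surface the upstream error (here: OpenRouter's 502,
        # "Cannot read properties of undefined (reading 'parts')")
        # instead of tripping the assertion on "choices".
        raise RuntimeError(
            f"provider error {error.get('code')}: {error.get('message')}"
        )
    assert response_object.get("choices") is not None
    return response_object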

