
[Feature]: Support OpenAI's parallel_tool_calls #4235

Closed
minhduc0711 opened this issue Jun 17, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@minhduc0711

The Feature

OpenAI recently added a new parallel_tool_calls parameter.

parallel_tool_calls: boolean, Optional, Defaults to true

Whether to enable parallel function calling during tool use.

It would be nice to add the same parameter to the completion() function.
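A minimal sketch of the requested usage, assuming an OpenAI tool-calling model and litellm's completion() API (the weather tool here is hypothetical, for illustration):

```python
import litellm

# Hypothetical weather tool, defined in the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# parallel_tool_calls=False asks the model to return at most one tool call
# per response instead of several in parallel.
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    parallel_tool_calls=False,
)
print(response.choices[0].message.tool_calls)
```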

Motivation, pitch

This is useful when I want only one tool call returned, so no extra completion tokens are spent on redundant tool calls.


@minhduc0711 added the enhancement (New feature or request) label on Jun 17, 2024
@krrishdholakia
Contributor

doesn't this already work? @minhduc0711

@minhduc0711
Author

My bad, I wasn't aware that LiteLLM already supports provider-specific params.

From https://docs.litellm.ai/docs/completion/input#provider-specific-params:

Providers might offer params not supported by OpenAI (e.g. top_k). You can pass those in 2 ways:

  • via completion(): we'll pass the non-OpenAI param straight to the provider as part of the request body,
    e.g. completion(model="claude-instant-1", top_k=3) (a runnable sketch follows the list).
  • via a provider-specific config variable (e.g. litellm.OpenAIConfig()).
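A runnable version of the docs' first example, assuming an Anthropic API key is configured (top_k is an Anthropic sampling param outside the OpenAI spec):

```python
import litellm

# Non-OpenAI params passed to completion() are forwarded verbatim
# to the provider as part of the request body.
response = litellm.completion(
    model="claude-instant-1",
    messages=[{"role": "user", "content": "Hello!"}],
    top_k=3,  # Anthropic-specific param, passed straight through
)
```

parallel_tool_calls passes through to OpenAI the same way, so the snippet under "The Feature" above should already work as-is.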
