Context window of completion functions not accounted for #1377

Open · pskl opened this issue Oct 13, 2023 · 0 comments · Labels: bug (Something isn't working)

pskl commented Oct 13, 2023

### Describe the bug

Some evals seem to require a specific context window length; for example, the make-me-say eval probably needs 32k. It would be nice if there were a more DX-friendly way to find this out before the run fails with an API error.
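
A minimal sketch of the kind of pre-flight check I have in mind, assuming a hand-maintained context-window table; `MODEL_CONTEXT_WINDOWS` and `fits_in_context` are hypothetical names (not evals or openai APIs), and the real limits would need to come from a maintained mapping:

```python
# Hypothetical sketch: estimate prompt tokens before dispatching the API call.
# MODEL_CONTEXT_WINDOWS and fits_in_context are illustrative, not evals APIs.
import tiktoken

MODEL_CONTEXT_WINDOWS = {
    "gpt-3.5-turbo": 4097,
    "gpt-3.5-turbo-16k": 16385,
    "gpt-4": 8192,
}

def fits_in_context(model: str, messages: list[dict], completion_budget: int = 0) -> bool:
    """Return True if the chat messages (plus completion headroom) fit in the model's window."""
    limit = MODEL_CONTEXT_WINDOWS.get(model)
    if limit is None:
        return True  # unknown model: don't block the run
    enc = tiktoken.encoding_for_model(model)
    # Rough count of content tokens; the chat format adds a few tokens per message on top.
    prompt_tokens = sum(len(enc.encode(m.get("content", ""))) for m in messages)
    return prompt_tokens + completion_budget <= limit
```

With something along these lines, oaieval could fail fast with a clear message (or suggest a longer-context variant) instead of surfacing the raw API error mid-run.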

### To Reproduce

```
oaieval gpt-3.5-turbo,gpt-3.5-turbo,gpt-3.5-turbo make-me-say --debug
```

This fails with:

```
This model's maximum context length is 4097 tokens. However, your messages resulted in 4123 tokens. Please reduce the length of the messages.
```

### Code snippets

No response

### OS

macOS

### Python version

Python v3.9.7

### Library version

openai-evals 1.0.3

pskl added the bug label on Oct 13, 2023