Issues: simonw/llm

Issues list

Pydantic failures (#520, opened Jun 24, 2024 by achille)
Backslash cannot be used in llm chat (#516, opened Jun 18, 2024 by grota)
Parsing JSON in responses (#514, opened Jun 15, 2024 by detrin)
Limit max tokens in response (#513, opened Jun 15, 2024 by detrin)
Autocomplete online CLI (#512, opened Jun 14, 2024 by lzumot)
Plugin for HF serverless inference (#510, opened Jun 12, 2024 by hugobowne)
Log token performance stats [enhancement] (#509, opened Jun 12, 2024 by simonw)
asyncio support [enhancement] (#507, opened Jun 10, 2024 by simonw)
Hang on 429 response from OpenAI (#504, opened May 28, 2024 by tylerbrandt)
Please add NVIDIA cloud API (#503, opened May 28, 2024 by kirkog86)
llm keys set openai (#498, opened May 20, 2024 by S4grg)
All I ever get is "insufficient_quota" (#494, opened May 14, 2024 by ijt)
Asynchronous API support [enhancement] (#487, opened May 8, 2024 by czue)
Enhancement idea: implement a self help (#484, opened May 5, 2024 by stoerr)
[plugin] add IBM watsonx (#481, opened May 3, 2024 by h0rv)