Here is an example of how I tested it:
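A minimal sketch of this kind of comparison, assuming the `openai` Node client and the AI SDK's `generateText`; the prompt is a placeholder, and which model goes with which call is an assumption (note that, as the answer below points out, the two calls end up using different models):

```ts
import OpenAI from "openai";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const prompt = "Write a haiku about latency."; // placeholder prompt

// First request: through the Vercel AI SDK
console.time("ai-sdk");
await generateText({ model: openai("gpt-4o"), prompt });
console.timeEnd("ai-sdk");

// Second request: direct call with the OpenAI client
const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
console.time("openai-direct");
await client.chat.completions.create({
  model: "gpt-4-turbo", // note: a different model than above
  messages: [{ role: "user", content: prompt }],
});
console.timeEnd("openai-direct");
```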
The first request's implementation takes significantly longer to complete, roughly twice as long. Any thoughts on why, and how to fix it? I would like to use the Vercel AI SDK function so I can easily switch LLMs down the road.
Answered by lgrammel (Jun 16, 2024):
You are comparing gpt-4o to gpt-4-turbo. After changing to the same model, there was no difference for me. In fact, the AI SDK was slightly faster (after moving it to the second position to account for warmup).
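For reference, a minimal sketch of a like-for-like benchmark in that spirit (the prompt, model choice, and helper names are assumptions, not code from the thread):

```ts
import OpenAI from "openai";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const prompt = "Write a haiku about latency."; // placeholder prompt
const model = "gpt-4o"; // the same model for both code paths

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function timeDirect(): Promise<number> {
  const start = performance.now();
  await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return performance.now() - start;
}

async function timeAiSdk(): Promise<number> {
  const start = performance.now();
  await generateText({ model: openai(model), prompt });
  return performance.now() - start;
}

// Warm up each path once so neither pays one-time costs, then measure.
await timeDirect();
await timeAiSdk();
console.log(`direct: ${await timeDirect()} ms`);
console.log(`ai-sdk: ${await timeAiSdk()} ms`);
```

Warming up each path once, and trying both orders, keeps one-time costs such as connection setup from being charged to whichever client happens to run first.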