
Response stream not working #8

Open
othmanelhoufi opened this issue Apr 8, 2023 · 5 comments
Labels
bug Something isn't working

Comments

@othmanelhoufi

Hi,

Thanks again for this wonderful tool.
I noticed that you have two files, "OpenAI.ts" and "OpenAIProvider.tsx"; I assume the second keeps track of the conversation. However, the interface freezes until the response has fully completed, and only then is the entire message printed. A better approach would be to print the words one by one as they arrive from the stream.
I tried to edit this myself but couldn't solve it.

@Nashex Nashex added the bug Something isn't working label Apr 8, 2023
@Nashex
Owner

Nashex commented Apr 8, 2023

As of now, the words should be printed one by one as they come through the stream. If you have a fork, are you up to date with main?
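For anyone comparing against their own fork: below is a rough sketch of how token-by-token consumption of an OpenAI-style SSE stream typically looks on the client. The helper names (`parseSSELines`, `streamCompletion`) are illustrative, not this repository's actual code, and the payload shape assumes the standard `choices[0].delta.content` chat-completions format.

```typescript
// Extract the content tokens from a raw SSE chunk such as:
//   data: {"choices":[{"delta":{"content":"Hi"}}]}\n\n
// Hypothetical helper for illustration; not the repo's actual code.
function parseSSELines(chunk: string): string[] {
  const tokens: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    try {
      const content = JSON.parse(payload)?.choices?.[0]?.delta?.content;
      if (typeof content === "string") tokens.push(content);
    } catch {
      // Ignore partial JSON; a real client would buffer incomplete lines
      // across chunk boundaries instead of dropping them.
    }
  }
  return tokens;
}

// Reading the response body incrementally is what makes the UI update
// word by word; awaiting response.text()/json() instead would block
// until the whole completion has arrived.
async function streamCompletion(
  response: Response,
  onToken: (t: string) => void
): Promise<void> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const token of parseSSELines(decoder.decode(value, { stream: true }))) {
      onToken(token);
    }
  }
}
```

If a fork replaced the incremental reader with a single awaited read of the body, it would show exactly the "frozen until done" behavior described above.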

@othmanelhoufi
Author

I am up to date with the main branch. It may be because I changed the request/response function a little so that it works with the Azure OpenAI API, but I don't see how my changes could have affected the stream functionality.

Are you sure it actually works?

@phookycom

I have observed the following behavior: when running the app with "yarn dev", the answer appears word by word as expected. However, when the app is started in production (e.g. "yarn install && yarn build && yarn start"), this no longer works. I haven't yet found out what the problem is.

@phookycom

OK, I have found the problem: it was due to my NGINX configuration. Sorry for the confusion. With the following proxy configuration, the app behaves as expected even in production.

    location / {
        proxy_pass http://127.0.0.1:3012;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        chunked_transfer_encoding on;
    }

Thanks for your work!
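For what it's worth, a quick way to check whether a proxy is buffering the stream is to request the app both directly and through the proxy with curl's `-N` (`--no-buffer`) flag. The URL below is illustrative (the port matches the config above); adjust host, port, and path to your deployment:

```shell
# -N disables curl's own output buffering, so if the server and proxy are
# streaming correctly, output appears incrementally instead of all at once.
curl -N http://127.0.0.1:3012/
```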

@othmanelhoufi
Author

Thanks for your input, but I actually made a fork to adapt the app for the Azure OpenAI API (it's not exactly the same as the OpenAI API). In doing so, I lost the word-by-word streaming, even though the Azure API supports it, which makes me think I made a mistake somewhere.

Can you please give it a quick look?
