
Fix 1266 Allow using streaming AI Service with tools without memory #1280

Merged: 7 commits into langchain4j:main on Jun 21, 2024

Conversation

@Kugaaa (Contributor) commented Jun 13, 2024

Issue

See #1266

Change

  • Add a temporary message list to allow using a streaming AI Service with tools when no chat memory is configured
  • Add unit tests
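
The temporary-list approach described above can be sketched roughly as follows. This is a minimal, self-contained illustration of the idea only; `Message`, `invokeWithTool`, and the roles are made-up names, not the actual LangChain4j types. The point is that when no ChatMemory is configured, a list local to the call can accumulate the messages that the tool-execution rounds need.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "temp memory list" idea: when the AI Service has
// no ChatMemory configured, tool-execution rounds still need the full message
// history, so a list local to the call stands in for the memory.
public class TempMemorySketch {

    // Minimal stand-in for a chat message (role + text).
    record Message(String role, String text) {}

    // Simulates one streamed service invocation that triggers a tool call.
    static List<Message> invokeWithTool(String userText) {
        List<Message> tempMemory = new ArrayList<>(); // lives only for this call
        tempMemory.add(new Message("user", userText));

        // First model round: the model answers with a tool-call request.
        tempMemory.add(new Message("assistant", "tool_call: getWeather"));
        // The tool is executed locally; its result joins the temp list.
        tempMemory.add(new Message("tool", "sunny"));

        // The second model round is sent the WHOLE temp list (user message
        // included); the final streamed answer is then appended.
        tempMemory.add(new Message("assistant", "It is sunny."));
        return tempMemory;
    }

    public static void main(String[] args) {
        List<Message> history = invokeWithTool("What is the weather?");
        // The accumulated history must still start with the user message.
        System.out.println(history.get(0).role());
        System.out.println(history.size());
    }
}
```

Because the list is scoped to the single invocation, nothing persists between calls, which preserves the "without memory" contract of the service.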

General checklist

  • There are no breaking changes
  • I have added unit and integration tests for my change
  • I have manually run all the unit and integration tests in the module I have added/changed, and they are all green
  • I have manually run all the unit and integration tests in the core and main modules, and they are all green

Checklist for adding new model integration

  • I have added my new module in the BOM

Checklist for adding new embedding store integration

  • I have added a {NameOfIntegration}EmbeddingStoreIT that extends from either EmbeddingStoreIT or EmbeddingStoreWithFilteringIT
  • I have added my new module in the BOM

Checklist for changing existing embedding store integration

  • I have manually verified that the {NameOfIntegration}EmbeddingStore works correctly with the data persisted using the latest released version of LangChain4j

@langchain4j langchain4j added the P2 High priority label Jun 17, 2024
@langchain4j (Owner) left a comment

@Kugaaa thank you!

This does not seem to work as expected: I can see that should_use_tool_without_memory is failing for the second model (Mistral).

For OpenAI, it also does not work properly:
In the first request, the user message is sent.
In the second request, only the assistant and tool messages are sent (the user message is missing, but it should be included).
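
The message loss reported here can be pictured with a small sketch. The helper names (`buggySecondRequest`, `fixedSecondRequest`) and string-based messages are illustrative only, not LangChain4j code; the point is that the second request must be rebuilt from the full accumulated history, including the original user message, rather than from only the messages produced after the tool call.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the reported bug: building the follow-up request from only the
// tool-round messages drops the original user message. Names are illustrative.
public class SecondRequestSketch {

    static List<String> buggySecondRequest(String user, String toolCall, String toolResult) {
        // BUG: starts from an empty list after the tool call, so the
        // user message (the 'user' parameter) never reaches the model.
        List<String> request = new ArrayList<>();
        request.add("assistant: " + toolCall);
        request.add("tool: " + toolResult);
        return request;
    }

    static List<String> fixedSecondRequest(String user, String toolCall, String toolResult) {
        // FIX: the accumulated history, user message included, is resent.
        List<String> request = new ArrayList<>();
        request.add("user: " + user);
        request.add("assistant: " + toolCall);
        request.add("tool: " + toolResult);
        return request;
    }

    public static void main(String[] args) {
        System.out.println(buggySecondRequest("q", "call", "result").size());
        System.out.println(fixedSecondRequest("q", "call", "result").size());
    }
}
```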

@Kugaaa (Contributor, Author) commented Jun 18, 2024

> @Kugaaa thank you!
>
> This does not seem to work as expected, I can see that should_use_tool_without_memory is failing for the second model (Mistral).
>
> For OpenAI, it does not work properly as well: In the first request, user message is sent. In the second request, only assistant and tool messages are sent (user message is not sent, but should be).

I'll check again

@langchain4j (Owner) left a comment

@Kugaaa thank you!

@langchain4j langchain4j merged commit 606c0f1 into langchain4j:main Jun 21, 2024
6 checks passed