Add attribute “returnAsFinalAnswer” to the Tool annotation, letting the LLM use the tool function's return value directly as the model output, to reduce the LLM request count #1248

Open
wants to merge 7 commits into base: main

Conversation

jacksonhu
Contributor

@jacksonhu jacksonhu commented Jun 7, 2024

Issue

#124

Change

When we use tools, we send requests to the LLM multiple times, e.g.:
[
    {
        "role": "user",
        "content": "What's the weather like in Shanghai today?"
    },
    {
        "role": "assistant",
        "content": null,
        "function_call": {
            "name": "get_current_weather",
            "arguments": "{ \"location\": \"shanghai\"}"
        }
    },
    {
        "role": "function",
        "name": "get_current_weather",
        "content": "{\"temperature\": \"25\", \"unit\": \"celsius\", \"description\": \"sunny\"}"
    },
    {
        "role": "assistant",
        "content": "It's 25 degrees Celsius and sunny in Shanghai today"
    }
]

I think we should give the user the choice to add a “ReturnAsOutput” annotation on the get_current_weather method, and let DefaultAiServices just return the get_current_weather execution result, like this:
@tool("get weather information by location")
@ReturnAsOutput
String get_current_weather(@p("location")String location) {
return "It's 25 degrees Celsius and sunny in Shanghai today"
}
and in DefaultAiServices.java, add the code below (before the next call to the chat model):

if (method.isAnnotationPresent(ReturnAsOutput.class)) {
    String result = toolExecutor.getToolExecuteResult();
    chatMemory.add(AiMessage.aiMessage(result));
    return result;
}
response = context.chatModel.generate(chatMemory.messages(), context.toolSpecifications);

Owner

@langchain4j langchain4j left a comment


Hi, thank you!

Please see my comments.
Please fill in the PR template and implement tests.


@Target(METHOD)
@Retention(RUNTIME)
public @interface ReturnAsOutput {
Owner


What about adding an attribute to the existing @Tool annotation instead of introducing a new annotation?

@Tool(returnDirectly = true)

We need to think about a better name though
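
As a rough sketch (not code from this PR), the existing annotation could gain a boolean attribute along these lines; the attribute name returnDirectly is only a working suggestion and the existing members are shown as assumed:

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

@Target(METHOD)
@Retention(RUNTIME)
public @interface Tool {

    // existing members (assumed shape of the current @Tool annotation)
    String name() default "";

    String[] value() default {};

    // proposed: when true, the tool's return value is used as the final answer
    // and the follow-up request to the LLM is skipped
    boolean returnDirectly() default false;
}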

Contributor Author


this is a better choice

Owner


this is a better choice

Not sure I understand :) To what exactly does "this" refer?

Contributor Author


this is a better choice

Not sure I understand :) To what exactly does "this" refer?

My bad, I mean that adding an attribute to the existing @Tool annotation instead of introducing a new annotation is the better choice. I will try to optimize my code.

if (context.hasChatMemory()) {
    context.chatMemory(memoryId).add(toolExecutionResultMessage);
} else {
    messages.add(toolExecutionResultMessage);
}
if (method.isAnnotationPresent(ReturnAsOutput.class)) {
    if (context.hasChatMemory()) {
        context.chatMemory(memoryId).add(AiMessage.aiMessage(toolExecutionResult));
Owner


Many LLM providers (such as OpenAI) will fail if an AiMessage with a tool execution request is not followed by a tool result message.

Contributor Author


Thanks for your review. This may not be a problem: making the tool method return directly also adds the toolExecutionResultMessage, see my comment on the code lines.

Owner


My bad, I misunderstood the change. But then why is AiMessage.aiMessage(toolExecutionResult) added to the memory if toolExecutionResultMessage has already been added?

Contributor Author

@jacksonhu jacksonhu Jun 11, 2024


I will give you an example chat history to illustrate this:

[
        {
            "role": "user",
            "content": "how about the weather in Shanghai?"
        },
        {
            "role": "assistant",
            "content": null,
            "function_call": {
                "name": "get_current_weather",
                "arguments": "{ \"location\": \"Shanghai\"}"
            }
        },
        {   ---- here is the toolExecutionResultMessage; without this, some LLM providers (such as OpenAI) will fail
            "role": "function",
            "name": "get_current_weather",
            "content": "It's 25 degrees Celsius and sunny in Shanghai today"
        },
        {   ---- here is the toolExecutionResult, using the Tool function's return value (usually we would need another LLM request to generate the final answer; here the Tool function's return value is used as the final answer, saving one LLM request)
            "role": "assistant",
            "content": "It's 25 degrees Celsius and sunny in Shanghai today"
        }]
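
Expressed as code, here is a minimal sketch of the memory updates that would produce this history when the tool returns directly (assuming toolExecutionRequest, toolExecutionResult and chatMemory are in scope; the exact wiring inside DefaultAiServices may differ):

// both messages are appended before returning, so providers that require a tool
// result message right after the tool-call AiMessage are still satisfied
ToolExecutionResultMessage toolExecutionResultMessage =
        ToolExecutionResultMessage.from(toolExecutionRequest, toolExecutionResult);
chatMemory.add(toolExecutionResultMessage);               // "role": "function"
chatMemory.add(AiMessage.aiMessage(toolExecutionResult)); // "role": "assistant"
return toolExecutionResult;                               // no second LLM request needed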

Owner


Ok I see.

What is the use case for this? Do you plan to use this with chat memory? Is this for a "data extraction" use case?

Contributor Author

@jacksonhu jacksonhu Jun 11, 2024


I updated the example chat history given above (the function content is used as the assistant content).
This case is only for function calls, and function calls must use chat memory.
I think this feature can make function calls more reliable and more fault-tolerant. Sometimes we need the LLM to call a function and output a strict JSON string, but the LLM may not comply. If we can use the tool function's return value as the output, we can expect a definite output (since we control whether the tool function returns an object or strict JSON).
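
To make the reliability point concrete, here is a hypothetical tool under the proposed attribute (the attribute name follows the PR title and is not a final API):

// hypothetical: with returnAsFinalAnswer = true, this exact JSON string would be
// returned to the caller verbatim, so the consumer always receives well-formed output
@Tool(value = "get weather information by location", returnAsFinalAnswer = true)
String get_current_weather(@P("location") String location) {
    return "{\"temperature\": 25, \"unit\": \"celsius\", \"description\": \"sunny\"}";
}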

Owner


This case is only for function calls

If this change is to use function calls to extract structured data, I would say that this is a misuse of the feature.

and function calls must use chat memory.

Memory is not needed any more to use tools.

I think this feature can make function calls more reliable and more fault-tolerant. Sometimes we need the LLM to call a function and output a strict JSON string, but the LLM may not comply. If we can use the tool function's return value as the output, we can expect a definite output (since we control whether the tool function returns an object or strict JSON).

Agreed, we have an issue to address this. Soon one will be able to use function calls to extract structured data. But for that you don't need to define a @Tool method; you can just define a POJO as the return type.
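
For reference, a minimal sketch of the POJO-as-return-type approach mentioned here, assuming the usual AiServices API; the WeatherReport class, WeatherExtractor interface and prompt are illustrative, and chatModel stands for any configured ChatLanguageModel:

import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

// the desired structure is described by a plain POJO, no @Tool method is needed
class WeatherReport {
    String location;
    int temperature;
    String description;
}

interface WeatherExtractor {
    @UserMessage("Extract the weather report from: {{it}}")
    WeatherReport extract(String text);
}

// usage (e.g. inside some method):
WeatherExtractor extractor = AiServices.create(WeatherExtractor.class, chatModel);
WeatherReport report = extractor.extract("It's 25 degrees Celsius and sunny in Shanghai today");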

@@ -267,7 +281,8 @@ private static String getValueOfVariableIt(Parameter[] parameters, Object[] args
if (!parameter.isAnnotationPresent(dev.langchain4j.service.MemoryId.class)
&& !parameter.isAnnotationPresent(dev.langchain4j.service.UserMessage.class)
&& !parameter.isAnnotationPresent(dev.langchain4j.service.UserName.class)
&& (!parameter.isAnnotationPresent(dev.langchain4j.service.V.class) || isAnnotatedWithIt(parameter))) {
&& (!parameter.isAnnotationPresent(dev.langchain4j.service.V.class) || isAnnotatedWithIt(
parameter))) {
Owner


Please roll back all formatting changes.

@@ -181,11 +184,20 @@ public Object invoke(Object proxy, Method method, Object[] args) throws Exceptio
toolExecutionRequest,
toolExecutionResult
);

if (context.hasChatMemory()) {
Contributor Author


@langchain4j making the tool method return directly also adds the toolExecutionResultMessage

@langchain4j langchain4j added the P2 High priority label Jun 11, 2024
@jacksonhu jacksonhu changed the title from "add ReturnAsOutput annotation on tool function,llm can use tool funcion return value as the output of the model reduce directly,reduce llm request count" to "add ReturnAsOutput annotation on tool function,llm can use tool funcion return value as the output of the model directly,to reduce llm request count" on Jun 11, 2024
@jacksonhu jacksonhu changed the title from "add ReturnAsOutput annotation on tool function,llm can use tool funcion return value as the output of the model directly,to reduce llm request count" to "add attribute “returnAsFinalAnswer” to Tool annotation ,let llm can use tool funcion return value as the output of the model directly,to reduce llm request count" on Jun 12, 2024
@juinjonn

@langchain4j Hello! When will this change be released? I have now run into this scenario as well: with AiServices the tool result cannot be returned directly, which costs an extra large LLM request, and having the LLM restate the tool result is not very reliable, because the result returned by the tool is structured and the front end relies on structured data for rendering.

Labels: P2 High priority
Projects: None yet
Development: Successfully merging this pull request may close these issues. None yet
3 participants