In a typical scenario, the "end user" (a human) would want the interface (a single Agent, SoM, Group Chat, etc.) to retain their own context. Any system implementing this would most likely store the end user's query and the interface's response in some DB.
To recreate the conversation later, the only details available would be the user's query and the response. These should be passed to the agent (a user proxy or similar).
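As a rough sketch of the persistence layer described above, each turn's query/answer pair can be stored and replayed when a new chat starts. The table name, schema, and helper names here are illustrative assumptions, not part of any AutoGen API:

```python
import sqlite3

def init_store(conn: sqlite3.Connection) -> None:
    # One row per Q&A turn; schema is a minimal assumption for illustration.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS chat_history ("
        "  user_id TEXT, query TEXT, answer TEXT)"
    )

def save_turn(conn: sqlite3.Connection, user_id: str, query: str, answer: str) -> None:
    conn.execute(
        "INSERT INTO chat_history (user_id, query, answer) VALUES (?, ?, ?)",
        (user_id, query, answer),
    )

def last_chats(conn: sqlite3.Connection, user_id: str, n: int = 5) -> list:
    # Fetch the most recent n turns, returned oldest-first in the
    # {"query": ..., "answer": ...} shape used by the message callable below.
    rows = conn.execute(
        "SELECT query, answer FROM chat_history WHERE user_id = ? "
        "ORDER BY rowid DESC LIMIT ?",
        (user_id, n),
    ).fetchall()
    return [{"query": q, "answer": a} for q, a in reversed(rows)]

conn = sqlite3.connect(":memory:")
init_store(conn)
save_turn(conn, "u1", "What is AutoGen?", "A multi-agent framework.")
save_turn(conn, "u1", "Who maintains it?", "Microsoft.")
print(last_chats(conn, "u1"))
```

The list returned by `last_chats` is exactly what a system would pass as the `last_chats` context when re-initiating the chat.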
I would expect something like the below:
ConversableAgent's initiate_chat should accept a previous history of context and be able to use it.
While something like this can be achieved via initiate_chat's message param, which can be a callable, a native implementation would be helpful. With the callable approach, the agent sometimes ends up losing the context, because it also depends on the agent's system prompt being written so that the agent realises the prior context is there.
```python
assistant = ConversableAgent(
    name="Assistant",
    system_message=(
        "You are a friendly Q&A Bot. "
        "You may use the previous messages to understand the context of the current query. "
        "Return 'TERMINATE' when the task is done."
    ),
    llm_config=self.llm_config,
    human_input_mode="NEVER",
    default_auto_reply="Reply with 'TERMINATE' if the task is done.",
    is_termination_msg=is_termination_msg,
)
```
```python
from typing import Dict, Union

from autogen import ConversableAgent

def my_message(
    sender: ConversableAgent,
    recipient: ConversableAgent,
    context: dict,
) -> Union[str, Dict]:
    # Build the outgoing message from the current query plus prior turns.
    message = context.get("user_query", "")
    last_chats = context.get("last_chats", [])
    prelude = [
        f"Query: {x.get('query')}. Answer: {x.get('answer')}" for x in last_chats
    ]
    curr_msg = f"CURRENT QUERY: {message}"
    final_msg = f"Previous Messages: {' '.join(prelude)}. {curr_msg}"
    return final_msg

user_proxy.initiate_chat(
    assistant,
    message=my_message,
    user_query=query,
    last_chats=last_chats,
    verbose=True,
)
```
In another example from the blog mentioned here, people are replaying the chat in GroupChat and broadcasting it to all the other agents. I believe this need could also be eliminated if only the user's query and the response were passed on when initiating the group chat.
I tried some versions of this locally. Based on my experience, the LLM sometimes unnecessarily pulls past context into the current Q&A even when it's not really needed.
What really worked for us (more deterministically) was to add a tool that fetches the last conversations, so the LLM can decide whether it needs the prior context.
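A rough sketch of that tool-based approach, assuming the conversation history is available in some store (here just an in-memory list). `fetch_last_chats` is a hypothetical helper; with AutoGen you would expose it to the assistant via `register_for_llm` and to the user proxy via `register_for_execution`, so the LLM only calls it when it decides prior context is needed:

```python
import json

# Stand-in for the real conversation store (DB, cache, etc.).
HISTORY = [
    {"query": "What is AutoGen?", "answer": "A multi-agent framework."},
    {"query": "Who maintains it?", "answer": "Microsoft."},
]

def fetch_last_chats(n: int = 3) -> str:
    """Return the last n query/answer pairs as a JSON string for the LLM."""
    return json.dumps(HISTORY[-n:])

# Registration sketch (not executed here; agent objects assumed to exist):
# assistant.register_for_llm(
#     name="fetch_last_chats",
#     description="Fetch the user's last few Q&A turns when prior context is needed.",
# )(fetch_last_chats)
# user_proxy.register_for_execution(name="fetch_last_chats")(fetch_last_chats)

print(fetch_last_chats(1))
```

Because the tool returns history only on demand, the model is far less likely to drag stale context into questions that don't need it.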
Originally posted by @r4881t in #2437 (comment)