The answer from Langchain-Chatchat differs from the answer from direct model inference #4019
To address the discrepancy in answers between Langchain-Chatchat and direct model inference, compare the two setups along the dimensions where they can diverge: the chat/prompt template applied to your question, the generation parameters (temperature, top_p, max tokens) set in the configuration, and any knowledge-base context injected into the prompt.
For detailed guidance, refer to Langchain-Chatchat's documentation and source code, especially the sections on chat processing, configuration examples, and knowledge base integration. This will help you identify and resolve the differences in answers between the two setups.
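One way to isolate the cause is to query the model through the OpenAI-compatible endpoint that Langchain-Chatchat's bundled FastChat serves, with sampling pinned off. The sketch below is a minimal example, not the project's own tooling: it assumes the v0.2.x default port 20000 from `server_config.py` and that the model name matches the one registered with FastChat; adjust both to your deployment.

```python
# Minimal sketch: query the FastChat OpenAI-compatible endpoint with
# temperature=0 so sampling noise is removed. Any remaining difference from
# direct inference then comes from the prompt template or injected context.
import requests

API_URL = "http://127.0.0.1:20000/v1/chat/completions"  # assumed v0.2.x default; verify locally
MODEL = "LLaMA3-8B-Chat"  # must match the model name registered with FastChat

def ask(question: str) -> str:
    resp = requests.post(
        API_URL,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": question}],
            "temperature": 0,   # greedy-like decoding for reproducibility
            "max_tokens": 512,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Briefly introduce yourself."))
```

If this endpoint's answer matches direct inference but the WebUI's answer differs, the divergence is introduced in Chatchat's chat-processing layer (template or retrieval context) rather than in the model serving itself.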
I've noticed this problem too.
问题描述 / Problem Description
The answer produced in Langchain-Chatchat differs from the answer produced by running inference directly on the model.
复现问题的步骤 / Steps to Reproduce
1. Run inference with LLaMA-Factory: `CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3.yaml`
2. Open the LLM chat interface in Langchain-Chatchat.
3. Enter the same question in both; the answers differ (a deterministic baseline sketch follows this list).
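To make the comparison meaningful, it helps to establish a deterministic reference answer outside of both tools. The sketch below is a hedged baseline, assuming the model is a local Hugging Face checkpoint (the path is a placeholder): greedy decoding (`do_sample=False`) yields a reproducible output to compare the LLaMA-Factory and Langchain-Chatchat answers against.

```python
# Hedged baseline: generate deterministically with plain transformers so that
# any difference seen in LLaMA-Factory or Langchain-Chatchat can be attributed
# to their templates/parameters rather than to sampling randomness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/path/to/LLaMA3-8B-Chat"  # placeholder; point at your local checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Deterministic generation: no sampling, bounded length.
output = model.generate(inputs, do_sample=False, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that both UIs must also apply the same chat template as `apply_chat_template` does here; a template mismatch alone is enough to change the answer.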
预期的结果 / Expected Result
Both should return the same answer.
实际结果 / Actual Result
The answers differ.
环境信息 / Environment Information
Project version: v0.2.10
langchain version: 0.0.354; fastchat version: 0.2.35
Deployed with Docker (yes/no): no
Model(s) used (ChatGLM2-6B / Qwen-7B, etc.): LLaMA3-8B-Chat, Qwen1.5-0.5B-Chat
Embedding model used (moka-ai/m3e-base, etc.): bge-reranker-large
Vector store type (faiss / milvus / pg_vector, etc.): faiss
Operating system and version: Linux-4.19.24-7.34.cbp.al7.x86_64-x86_64-with-glibc2.35
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]