
Local model deployment: startup error, "Register to controller" fails with socket.gaierror: Name or service not known #4025

Open
yanli789 opened this issue May 15, 2024 · 3 comments
Labels
bug Something isn't working

Comments


yanli789 commented May 15, 2024

Problem Description
http://127.0.0.1:20000 fails to start; "Register to controller" raises:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f505ddc7e80>: Failed to establish a new connection: [Errno -2] Name or service not known'))
The runtime environment has no internet access and all models were installed offline, so in theory nothing should need to reach an external domain. Could someone point out where my steps or configuration went wrong? Many thanks.
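A quick way to confirm that this is a name-resolution problem in the environment, rather than a project configuration problem, is to try resolving the host directly (a minimal diagnostic sketch, not part of Langchain-Chatchat):

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if the system resolver can resolve the host name."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# In an air-gapped environment this is expected to print False,
# matching the "[Errno -2] Name or service not known" in the traceback.
print(can_resolve("openaipublic.blob.core.windows.net"))
```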

Steps to Reproduce

  1. After installing pg_vector, run python init_database.py --recreate-vs; it reports the documents were added to the vector store successfully.
  2. Configure the pg database address in kb_config.py.
  3. Configure model_config.py and server_config.py.
  4. Run python startup.py -a and the error occurs.

Environment Information

  • langchain-ChatGLM version/commit: Langchain-Chatchat v0.2.10
  • Deployed with Docker (yes/no): no
  • LLM model (ChatGLM2-6B / Qwen-7B, etc.): Qwen-1_8B-Chat
  • Embedding model (moka-ai/m3e-base, etc.): bge-large-zh
  • Vector store type (faiss / milvus / pg_vector, etc.): pg_vector
  • Operating system and version: Linux version 3.10.0-1160.el7.x86_64
  • Python version: Python 3.10.9
  • Other relevant environment information:
  • transformers 4.37.2
  • CUDA 11.7 (cuda_11.7.r11.7)

### model_config.py configuration:

import os

MODEL_ROOT_PATH = "/mnt"
EMBEDDING_MODEL = "bge-large-zh"
EMBEDDING_DEVICE = "cuda"
RERANKER_MODEL = "bge-reranker-large"

USE_RERANKER = False
RERANKER_MAX_LENGTH = 1024

EMBEDDING_KEYWORD_FILE = "keywords.txt"
EMBEDDING_MODEL_OUTPUT_PATH = "output"
LLM_MODELS = ["Qwen-1_8B-Chat"]
Agent_MODEL = None

LLM_DEVICE = "cuda"
HISTORY_LEN = 3
MAX_TOKENS = 2048
TEMPERATURE = 0.7
ONLINE_LLM_MODEL = {
}

MODEL_PATH = {
    "embed_model": {
        "bge-large-zh": "/mnt/bge-large-zh",
    },
    "llm_model": {
        "Qwen-1_8B-Chat": "/mnt/Qwen-1_8B-Chat",
    },
    "reranker": {
        "bge-reranker-large": "/mnt/bge-reranker-large",
    }
}

NLTK_DATA_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "nltk_data")

VLLM_MODEL_DICT = {
    "Qwen-1_8B-Chat": "/mnt/Qwen-1_8B-Chat",
}

SUPPORT_AGENT_MODEL = [
    "Qwen",  # all local Qwen-series models
]

### server_config.py configuration:

OPEN_CROSS_DOMAIN = False

DEFAULT_BIND_HOST = "127.0.0.1"

WEBUI_SERVER = {
    "host": "127.0.0.1",
    "port": 8501,
}

API_SERVER = {
    "host": "127.0.0.1",
    "port": 7861,
}

FSCHAT_OPENAI_API = {
    "host": "127.0.0.1",
    "port": 20000,
}

FSCHAT_MODEL_WORKERS = {
    "default": {
        "host": "127.0.0.1",
        "port": 20002,
        "device": LLM_DEVICE,
    },
    "Qwen-1_8B-Chat": {
        "device": "cuda",
    },
}

FSCHAT_CONTROLLER = {
    "host": "127.0.0.1",
    "port": 20001,
    "dispatch_method": "shortest_queue",
}
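Since the services above bind fixed ports (8501, 7861, 20000-20002), a quick pre-flight check that nothing else already occupies them can rule out one class of startup failures. A hypothetical sketch, not part of the project:

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

for port in (8501, 7861, 20000, 20001, 20002):
    print(port, "in use" if port_in_use("127.0.0.1", port) else "free")
```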

### kb_config.py configuration:

kbs_config = {
    "pg": {
        "connection_uri": "postgresql://postgres:[email protected]:5432/langchain_chatchat",
    },
    "es": {
        "host": "10.7.212.157",
        "port": "23200",
        "index_name": "test_index",
        "user": "",
        "password": ""
    },
    "milvus_kwargs": {
        "search_params": {"metric_type": "L2"},  # add search_params here
        "index_params": {"metric_type": "L2", "index_type": "HNSW"}  # add index_params here
    },
    "chromadb": {}
}
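A malformed connection URI is a frequent source of silent misconfiguration here. The standard library can show what host, port, and database name a URI actually encodes; a minimal sketch using placeholder credentials (substitute your own):

```python
from urllib.parse import urlparse

def pg_uri_parts(uri: str):
    """Split a postgresql:// connection URI into (host, port, dbname)."""
    parsed = urlparse(uri)
    return parsed.hostname, parsed.port, parsed.path.lstrip("/")

# Placeholder credentials -- replace with your own values:
print(pg_uri_parts("postgresql://postgres:secret@10.7.212.157:5432/langchain_chatchat"))
# -> ('10.7.212.157', 5432, 'langchain_chatchat')
```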


@yanli789 yanli789 added the bug Something isn't working label May 15, 2024

dosubot bot commented May 15, 2024

To address the socket.gaierror: [Errno -2] Name or service not known error you're encountering due to the lack of internet access, you'll need to ensure all required resources, such as model weights or external files the application needs, are available locally. Here's a concise plan to resolve the issue:

  1. Host Required Files Locally: Make sure all necessary files are hosted within your local network or on a server that your environment can access.

  2. Modify Application Configuration: Adjust the application's settings to use local paths or URLs for fetching these resources instead of online sources. This might involve changes in environment variables, configuration files, or direct code modifications.

  3. Pre-download Required Resources: Manually download any resources the application attempts to fetch from the internet and place them in the expected directories. This includes model weights or vocabularies needed for initialization.

  4. Update DNS or Hosts File: As a workaround for DNS resolution issues, you can add an entry to your local hosts file for openaipublic.blob.core.windows.net to point to the local IP address where you've hosted the required files.

  5. Use a Proxy Server: If partial internet access is possible, configure your application to route requests through a proxy server by setting up HTTP_PROXY and HTTPS_PROXY environment variables.
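Workarounds 4 and 5 can be sketched as shell commands. The mirror IP and proxy address below are placeholders to be replaced with your own infrastructure:

```shell
# Workaround 4: point the blocked host at an internal mirror via /etc/hosts
# (10.0.0.5 is a placeholder for the server hosting the mirrored files)
echo "10.0.0.5 openaipublic.blob.core.windows.net" | sudo tee -a /etc/hosts

# Workaround 5: route outbound requests through an internal proxy instead
# (proxy.internal:3128 is a placeholder proxy address)
export HTTP_PROXY="http://proxy.internal:3128"
export HTTPS_PROXY="http://proxy.internal:3128"
python startup.py -a
```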

For the specific case of Langchain-Chatchat, ensure you've correctly configured local paths for model files in model_config.py and other configuration files as needed. The application allows specifying local paths for models, which can be leveraged to avoid fetching from the internet.

If you're using models like Qwen-1_8B-Chat or the bge-large-zh embedding model, be aware that offline environments can limit model performance and task-completion capabilities.

Lastly, the application documentation should provide instructions for pre-downloading necessary models for offline use. Follow these guidelines to ensure all required resources are available locally before starting the application.
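The vocab.bpe URL in the traceback is the GPT-2 BPE file that the tiktoken library fetches on first use. If that is the trigger, one offline approach is to pre-seed tiktoken's file cache on a machine with internet access and copy it to the offline host. tiktoken names each cached blob after the SHA-1 hex digest of its URL; this is an implementation detail of tiktoken's loader, so verify it against your installed version:

```python
import hashlib

# The two blobs tiktoken fetches for the GPT-2 encoding:
URLS = [
    "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe",
    "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json",
]

def cache_key(url: str) -> str:
    """Cache filename tiktoken uses for a downloaded blob (SHA-1 of the URL)."""
    return hashlib.sha1(url.encode()).hexdigest()

for url in URLS:
    print(cache_key(url), "<-", url.rsplit("/", 1)[-1])

# On a connected machine: download both files, rename each to the key printed
# above, copy them to a directory on the offline host, and set
# TIKTOKEN_CACHE_DIR to that directory before starting the application.
```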


@yanli789 yanli789 changed the title from "Local model deployment, port 20000 startup error, Register to controller fails with socket.gaierror: [Errno -2] Name or service not known" to "Local model deployment, startup error, Register to controller fails with socket.gaierror: Name or service not known" May 16, 2024
yanli789 (Author) commented

The issue has been resolved.

zoupengfeiagent commented

> The issue has been resolved.

How did you solve it?
