
[BUG] Local startup error #4020

Open
Belee05 opened this issue May 15, 2024 · 4 comments
Labels
bug Something isn't working

Comments


Belee05 commented May 15, 2024

Problem Description

Local startup fails; the error log is as follows:

2024-05-14 18:11:17 | INFO | model_worker | Register to controller
2024-05-14 18:11:17 | ERROR | stderr | INFO: Started server process [86837]
2024-05-14 18:11:17 | ERROR | stderr | INFO: Waiting for application startup.
2024-05-14 18:11:17 | ERROR | stderr | INFO: Application startup complete.
2024-05-14 18:11:17 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-05-14 18:11:20 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker ae64670c ...
2024-05-14 18:11:20 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-05-14 18:11:20 | ERROR | stderr | Traceback (most recent call last):
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/3.11.6/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2024-05-14 18:11:20 | ERROR | stderr | self.run()
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/3.11.6/lib/python3.11/multiprocessing/process.py", line 108, in run
2024-05-14 18:11:20 | ERROR | stderr | self._target(*self._args, **self._kwargs)
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/PycharmProjects/Langchain-Chatchat/startup.py", line 389, in run_model_worker
2024-05-14 18:11:20 | ERROR | stderr | app = create_model_worker_app(log_level=log_level, **kwargs)
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/PycharmProjects/Langchain-Chatchat/startup.py", line 217, in create_model_worker_app
2024-05-14 18:11:20 | ERROR | stderr | worker = ModelWorker(
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 77, in __init__
2024-05-14 18:11:20 | ERROR | stderr | self.model, self.tokenizer = load_model(
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 348, in load_model
2024-05-14 18:11:20 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/fastchat/model/model_adapter.py", line 816, in load_model
2024-05-14 18:11:20 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained(
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 801, in from_pretrained
2024-05-14 18:11:20 | ERROR | stderr | return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
2024-05-14 18:11:20 | ERROR | stderr | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-14 18:11:20 | ERROR | stderr | File "/Users/ken.li/.pyenv/versions/chatglm/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained

Steps to Reproduce

  1. Run python startup.py -a
  2. A flood of ERROR messages appears in the log, and local port 80 cannot be reached
  3. Placing the ChatGLM3-6B and bge-large-zh directories either at the same level as the chatGLM directory or inside it as a subdirectory does not help

Environment Information

  • langchain-ChatGLM version / commit: v0.2.10

  • Deployed with Docker (yes/no): no

  • Model used (ChatGLM2-6B / Qwen-7B, etc.): ChatGLM3-6B

  • Embedding model used (moka-ai/m3e-base, etc.): BAAI/bge-large-zh

  • Operating system and version: macOS 13.6.4

  • Python version: 3.11.6

  • Other relevant environment information:

Additional Information

@Belee05 Belee05 added the bug Something isn't working label May 15, 2024

dosubot bot commented May 15, 2024

To address the startup error you're encountering with python startup.py -a, here are some targeted steps to consider:

  1. Verify Port 80 Availability: Use sudo lsof -i :80 to check if port 80 is already in use. If it is, either stop the conflicting process or choose another port for your application.

  2. Run as Root: Since binding to port 80 on macOS requires root privileges, try executing the script with sudo.

  3. Check Python and Dependencies: Ensure all dependencies are compatible with Python 3.11.6. Consider using a virtual environment to manage these dependencies.

  4. Firewall and Permissions: Make sure the macOS firewall or any security software isn't blocking access to port 80.

  5. Alternative Port: If not strictly required to use port 80, switch to an alternative port like 8080 by adjusting the port configuration in your script.

  6. Review Error Logs: Examine the logs for specific error messages that could shed light on the issue. The exact error message from the traceback is crucial for diagnosing the problem.

  7. Model Placement: Ensure the ChatGLM3-6B and bge-large-zh models are placed in a directory recognized by the MODEL_ROOT_PATH variable, following the directory structure the application expects (see the sketch after this list).

  8. Known Issues: Check whether your issue matches known issues in Langchain-Chatchat v0.2.10, such as Error reported during startup #443.

By following these steps, you should be able to diagnose and potentially resolve the startup error. If the problem persists, providing more specific error messages from the logs can help in further troubleshooting.
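
As a concrete illustration of step 7: in Langchain-Chatchat v0.2.x the model root is set in configs/model_config.py. The sketch below is hedged, not the exact contents of any particular checkout; the path shown is the base directory implied by the log above.

# Minimal sketch of the relevant setting in configs/model_config.py (v0.2.x).
# MODEL_ROOT_PATH must point at the directory that CONTAINS the model
# folders, not at an individual model folder itself.
MODEL_ROOT_PATH = "/Users/ken.li/PycharmProjects/Langchain-Chatchat"

# With that value, the loader resolves model names to, for example:
#   /Users/ken.li/PycharmProjects/Langchain-Chatchat/chatglm3-6b
#   /Users/ken.li/PycharmProjects/Langchain-Chatchat/bge-large-zh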


@ZhuLingda

Have you solved this problem yet?

@Belee05 (Author) commented May 15, 2024

Have you solved this problem yet?

No. I added the model's path to MODEL_ROOT_PATH, and startup shows the model being loaded from that path, but it still fails.
The last line of the error is: RuntimeError: Internal: could not parse ModelProto from /Users/ken.li/PycharmProjects/Langchain-Chatchat/chatglm3-6b/tokenizer.model
In another issue I saw the bot suggest modifying tokenizer_config.json, but its contents already were:

"auto_map": {
    "AutoTokenizer": [
      "tokenization_chatglm.ChatGLMTokenizer",
      null
    ]
  },

I don't know where the problem is.
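
For what it's worth, "could not parse ModelProto" from SentencePiece usually means tokenizer.model is not a valid SentencePiece file, most often because a Git LFS pointer stub or a truncated download ended up in its place. A hedged diagnostic sketch (the path is taken from the error above; requires: pip install sentencepiece):

import os
import sentencepiece as spm

# Path taken from the RuntimeError above.
path = "/Users/ken.li/PycharmProjects/Langchain-Chatchat/chatglm3-6b/tokenizer.model"

# A Git LFS pointer stub is ~130 bytes of text; the real ChatGLM3
# tokenizer.model is on the order of 1 MB.
print("file size:", os.path.getsize(path), "bytes")

with open(path, "rb") as f:
    head = f.read(64)
if head.startswith(b"version https://git-lfs.github.com"):
    print("This is a Git LFS pointer, not the model itself.")
    print("Run git lfs pull inside the model directory to fetch it.")
else:
    sp = spm.SentencePieceProcessor()
    sp.load(path)  # raises the same 'could not parse ModelProto' if corrupt
    print("tokenizer.model parses fine; vocab size:", sp.vocab_size())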

@ZhuLingda

I solved this by changing MODEL_ROOT_PATH in model_config.py to chatglm's base directory. Now a new problem has come up, about an outdated CUDA driver version, which I am working on.
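
To make the fix concrete: a small, hedged sanity check that MODEL_ROOT_PATH resolves the model folders the way the loader expects (the directory names are the ones used in this thread; adjust the base path to your machine):

import os

# Base directory containing the model folders, per the fix above.
MODEL_ROOT_PATH = "/Users/ken.li/PycharmProjects/Langchain-Chatchat"

# The worker resolves model names relative to MODEL_ROOT_PATH, so each
# of these should be an existing directory holding the model files.
for name in ("chatglm3-6b", "bge-large-zh"):
    p = os.path.join(MODEL_ROOT_PATH, name)
    print(f"{name}: {p} -> {'OK' if os.path.isdir(p) else 'MISSING'}")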
