```sh
pip install fire
```
`expand.sh` for llama_pro does not work by default without the `fire` library
Running `expand.sh` fails out of the box because the `fire` package is not installed in a fresh environment. Proof:
```
(llama_factory) administrator@srv01:/home/jupyter/LLaMA-Factory/examples/extras/llama_pro$ bash expand.sh
Traceback (most recent call last):
  File "/home/jupyter/LLaMA-Factory/examples/extras/llama_pro/../../../scripts/llama_pro.py", line 11, in <module>
    import fire
ModuleNotFoundError: No module named 'fire'
(llama_factory) administrator@srv01:/home/jupyter/LLaMA-Factory/examples/extras/llama_pro$ pip install fire
Collecting fire
  Downloading fire-0.6.0.tar.gz (88 kB)
     ━━━━━━━━━━ 88.4/88.4 kB 2.2 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Requirement already satisfied: six in /home/administrator/miniconda3/envs/llama_factory/lib/python3.10/site-packages (from fire) (1.16.0)
Collecting termcolor (from fire)
  Using cached termcolor-2.4.0-py3-none-any.whl.metadata (6.1 kB)
Using cached termcolor-2.4.0-py3-none-any.whl (7.7 kB)
Building wheels for collected packages: fire
  Building wheel for fire (setup.py) ... done
  Created wheel for fire: filename=fire-0.6.0-py2.py3-none-any.whl size=117033 sha256=5fd037731c4eae2b79333b9d97e6a25d8f2ee5e9921b22ea12ab73d65507bb34
  Stored in directory: /home/administrator/.cache/pip/wheels/d6/6d/5d/5b73fa0f46d01a793713f8859201361e9e581ced8c75e5c6a3
Successfully built fire
Installing collected packages: termcolor, fire
Successfully installed fire-0.6.0 termcolor-2.4.0
(llama_factory) administrator@srv01:/home/jupyter/LLaMA-Factory/examples/extras/llama_pro$ bash expand.sh
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|██████████| 3/3 [00:00<00:00, 17.17it/s]
Add layer 0 copied from layer 0
Add layer 1 copied from layer 1
Add layer 2 copied from layer 2
Add layer 3 copied from layer 3
Add layer 4 expanded from layer 3
Add layer 5 copied from layer 4
Add layer 6 copied from layer 5
Add layer 7 copied from layer 6
Add layer 8 copied from layer 7
Add layer 9 expanded from layer 7
Add layer 10 copied from layer 8
Add layer 11 copied from layer 9
Add layer 12 copied from layer 10
Add layer 13 copied from layer 11
Add layer 14 expanded from layer 11
Add layer 15 copied from layer 12
Add layer 16 copied from layer 13
Add layer 17 copied from layer 14
Add layer 18 copied from layer 15
Add layer 19 expanded from layer 15
Add layer 20 copied from layer 16
Add layer 21 copied from layer 17
Add layer 22 copied from layer 18
Add layer 23 copied from layer 19
Add layer 24 expanded from layer 19
Add layer 25 copied from layer 20
Add layer 26 copied from layer 21
Add layer 27 copied from layer 22
Add layer 28 copied from layer 23
Add layer 29 expanded from layer 23
Add layer 30 copied from layer 24
Add layer 31 copied from layer 25
Add layer 32 copied from layer 26
Add layer 33 copied from layer 27
Add layer 34 expanded from layer 27
Add layer 35 copied from layer 28
Add layer 36 copied from layer 29
Add layer 37 copied from layer 30
Add layer 38 copied from layer 31
Add layer 39 expanded from layer 31
Save weights: 100%|██████████| 9/9 [00:20<00:00, 2.23s/it]
Model weights saved in ../../../models/Vikhr-7B-instruct_0.2-pro
Fine-tune this model with:
    --model_name_or_path ../../../models/Vikhr-7B-instruct_0.2-pro \
    --finetuning_type freeze \
    --name_module_trainable all \
    --num_layer_trainable 8 \
    --use_llama_pro
(llama_factory) administrator@srv01:/home/jupyter/LLaMA-Factory/examples/extras/llama_pro$
```
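The root cause is that `scripts/llama_pro.py` imports `fire` unconditionally at line 11, so a missing package surfaces as a raw `ModuleNotFoundError`. One possible improvement (a sketch only, not code from the repository — the `require` helper and its messages are hypothetical) is to check for optional dependencies up front and fail with an actionable hint:

```python
# Sketch: guard an optional dependency so the script exits with a clear
# install hint instead of a raw ModuleNotFoundError. The helper name and
# wording are hypothetical, not taken from llama_pro.py.
import importlib.util


def require(module: str, hint: str) -> None:
    """Exit with an install hint if `module` cannot be found."""
    if importlib.util.find_spec(module) is None:
        raise SystemExit(f"Missing dependency '{module}'. Install it with: {hint}")


if __name__ == "__main__":
    # Would be called before `import fire` in the real script.
    require("fire", "pip install fire")
```

This keeps the error message self-explanatory for users who run `expand.sh` without reading the script first.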
Thanks!