Issues: pytorch/torchtune
- Support for Phi-3-mini-128k-instruct and larger context length models (#1120, opened Jun 25, 2024 by dcsuka)
- Missing non-LoRA key tok_embeddings.weight from base model dict (#1110, opened Jun 22, 2024 by vasicvuk)
- How to provide our own dataset.json or CSV file to the llama2:7B fine-tune using tune run (#1099, opened Jun 19, 2024 by himanshushukla12)
- Support NF4 quantization of linear layers without LoRA applied (#1093, opened Jun 14, 2024 by ebsmothers)
- High memory usage on Llama3-70B full finetune during checkpoint save (#1092, opened Jun 14, 2024 by ebsmothers)
- [Feature Request] Add lr_scheduler for full_finetune (single_device/distributed) (#1060, opened Jun 6, 2024 by andyl98)
- Recommendations for obtaining validation dataset loss after each epoch (#1042, opened Jun 1, 2024 by dcsuka)
- GPTQ quantization not working with fine-tuned LLaMA3 models (#1033, opened May 30, 2024 by sanchitintel)
- Benchmark performance against other implementations such as Llama-factory and Unsloth? (#1023, opened May 27, 2024 by liyucheng09)