LLMTuner

LLMTuner: fine-tune Llama, Whisper, and other large models (Text2Text, Speech2Text, Text2Speech) in a few lines of code, with best practices like LoRA and QLoRA, through a sleek, scikit-learn-inspired interface.

Installation

With pip

This repository is tested on Python 3.7+.

You can install LLMTuner with pip:

pip3 install git+https://github.com/promptslab/LLMTuner.git
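
If the installation succeeded, importing the core classes used in the quick tour below should work:

python3 -c "from llmtuner import Tuner, Dataset, Model, Deployment"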

Quick tour

To fine-tune large models, LLMTuner provides the Tuner API.

from llmtuner import Tuner, Dataset, Model, Deployment

# Initialize the Whisper model with parameter-efficient fine-tuning
model = Model("openai/whisper-small", use_peft=True)

# Create a dataset instance for the audio files
dataset = Dataset('/path/to/audio_folder')

# Set up the tuner with the model and dataset for fine-tuning
tuner = Tuner(model, dataset)

# Fine-tune the model
trained_model = tuner.fit()

# Inference with Fine-tuned model
tuner.inference('sample.wav')

# Launch an interactive UI for the fine-tuned model
tuner.launch_ui('Model Demo UI')

# Set up deployment for the fine-tuned model
deploy = Deployment('aws')  # Options: 'fastapi', 'aws', 'gcp', etc.

# Launch the model deployment
deploy.launch()
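
The use_peft=True flag in the example above enables parameter-efficient fine-tuning. To make the technique concrete, here is a minimal LoRA sketch using the Hugging Face transformers and peft libraries directly; the rank and target modules are illustrative choices, and this is not a description of LLMTuner's internals.

from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the frozen base model and attach small low-rank adapter matrices (LoRA).
# Only the adapters are trained; the original weights stay untouched.
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=32,                        # scaling factor applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # reports the small fraction of trainable weights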

Features 🤖

  • 🏋️‍♂️ Effortless Fine-Tuning: fine-tune state-of-the-art models such as Whisper and Llama with minimal code
  • ⚡️ Parameter-Efficient Techniques: built-in utilities for LoRA and QLoRA (see the QLoRA sketch after this list)
  • ⚡️ Interactive UI: launch web demos for your fine-tuned models with one click
  • 🏎️ Simplified Inference: fast inference without writing separate code
  • 🌐 Deployment Readiness: (coming soon) deploy your models to AWS, GCP, and other targets with minimal effort, ready to share with the world
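
QLoRA combines LoRA adapters with a 4-bit quantized base model so that large models can be fine-tuned in limited GPU memory. The sketch below shows the general recipe with transformers, bitsandbytes, and peft; the model name is a placeholder, and the code illustrates the technique rather than LLMTuner's documented implementation.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization (requires a CUDA GPU and bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",           # placeholder model; swap in the checkpoint you actually tune
    quantization_config=bnb_config,
)

# Prepare the quantized model for training, then attach LoRA adapters on top of it.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, target_modules=["q_proj", "v_proj"]),
)
model.print_trainable_parameters()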

Supported Models

  • Fine-Tune Whisper (Colab notebook available)
  • Fine-Tune Whisper with Quantized LoRA
  • Fine-Tune Llama: coming soon

Community

If you are interested in fine-tuning open-source LLMs, building scalable large models, prompt engineering, and the latest research discussions, please consider joining PromptsLab.
Join us on Discord

Citation

@misc{LLMtuner2023,
  title = {LLMTuner: Fine-Tune Large Models with best practices through a sleek, scikit-learn-inspired interface.},
  author = {Pal, Ankit},
  year = {2023},
  howpublished = {\url{https://github.com/promptslab/LLMtuner}}
}

💁 Contributing

We welcome contributions to our open-source project, including new features, infrastructure improvements, and more comprehensive documentation. Please see the contributing guidelines.