Unify Efficient Fine-Tuning of 100+ LLMs (Python, updated Jun 28, 2024)
An interpretable KBQA system that operates at the natural language level with the help of LLMs
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
✨✨Latest Advances on Multimodal Large Language Models
A one-stop data processing system to make data higher-quality, juicier, and more digestible for LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Code for Suri: Multi-constraint instruction following for long-form text generation
SCAR: Efficient Instruction-Tuning for Large Language Models via Style Consistency-Aware Response Ranking
An Open-source Knowledgeable Large Language Model Framework.
A summary of Prompt & LLM papers, open-source data & models, and AIGC applications
awesome-LLM-controlled-constrained-generation
[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
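The DoRA entry above refers to a specific reparameterization: the pretrained weight is decomposed into a magnitude vector and a direction matrix, and only the direction receives a LoRA-style low-rank update before being renormalized. A minimal NumPy sketch of that recomposition (an illustrative toy, not the official implementation; all names here are made up for the example):

```python
import numpy as np

# Toy dimensions: d_out x d_in weight, rank-r update.
rng = np.random.default_rng(0)
d_out, d_in, r = 6, 4, 2

W0 = rng.normal(size=(d_out, d_in))            # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01          # low-rank factor (trainable)
B = np.zeros((d_out, r))                       # B initialized to zero, as in LoRA
m = np.linalg.norm(W0, axis=0, keepdims=True)  # magnitude per column (trainable)

def dora_weight(W0, B, A, m):
    """Recompose the weight as magnitude * column-normalized direction."""
    V = W0 + B @ A                              # direction with low-rank update
    return m * V / np.linalg.norm(V, axis=0, keepdims=True)

# With B = 0 the update vanishes and the recomposed weight equals W0.
W = dora_weight(W0, B, A, m)
print(np.allclose(W, W0))  # True
```

During fine-tuning only `A`, `B`, and `m` would be trained while `W0` stays frozen, which is what keeps the method parameter-efficient.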
"GraphEdit: Large Language Models for Graph Structure Learning"
A dataset collection and preprocessing framework for NLP extreme multitask learning
This repo contains a list of channels and sources for learning about LLMs
Code repository for "Introducing Airavata: Hindi Instruction-tuned LLM"
Instruction fine-tuning BART for Dialogue Summarization | IT4772E | NLP Project 20232
Generative Representational Instruction Tuning