Stumble upon a fine-tuning that is unfathomable.
Comprehensive Compilation of Customized LLMs for Specific Domains and Industries
MLX Institute | Fine-tuning Llama-2 7B on The Onion to generate new satirical articles given a headline
Develop a Romanian legal-domain Large Language Model (LLM) by fine-tuning a pre-trained model on legal texts. The fine-tuned model is available on Hugging Face.
The MistralAI API wrapper for Delphi exposes Mistral's advanced models, providing chat interactions, string embeddings, and code generation with Codestral.
This repo collects notes and code covering transformers and NLP.
This repository implements a self-updating RAG (Retrieval-Augmented Generation) model. It grounds answers in Wikipedia and can fine-tune itself when information is unavailable, allowing the model to continually learn and adapt while giving dynamic, informative responses.
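To make the retrieval loop concrete, here is a minimal retrieve-then-generate sketch using the `wikipedia` package; the `generate` helper is a hypothetical stand-in for the LLM backend, and the self-fine-tuning step is only marked in a comment.

```python
import wikipedia

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the LLM backend; swap in any model call.
    return f"[model answer grounded in]\n{prompt}"

def answer(question: str) -> str:
    titles = wikipedia.search(question, results=1)
    if not titles:
        # The repo would trigger its self-fine-tuning step here;
        # this sketch just reports the gap.
        return "No grounding found."
    context = wikipedia.summary(titles[0], sentences=3)
    prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

print(answer("Retrieval-augmented generation"))
```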
Pre-Training and Fine-Tuning transformer models using PyTorch and the Hugging Face Transformers library. Whether you're pre-training with custom datasets or fine-tuning for specific classification tasks, these notebooks provide explanations and working code.
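As a flavor of what such notebooks cover, a minimal classification fine-tune with the Hugging Face `Trainer` might look like this; the IMDB dataset and BERT checkpoint are illustrative choices, not necessarily the ones the notebooks use.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=8)

trainer = Trainer(model=model, args=args,
                  # small subsets keep the demo fast
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
                  eval_dataset=tokenized["test"].select(range(500)))
trainer.train()
```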
DICE: Detecting In-distribution Data Contamination with LLM's Internal State
Fine-tune ChatGPT with few-shot learning for personalized resume bullet points.
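For reference, the OpenAI fine-tuning workflow a project like this would build on looks roughly like the sketch below; the system prompt and the single training example are invented placeholders, and real training data would need many such pairs.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each training example pairs a raw experience note with a polished bullet.
examples = [
    {"messages": [
        {"role": "system", "content": "Rewrite experience notes as resume bullet points."},
        {"role": "user", "content": "managed a team, shipped the billing service"},
        {"role": "assistant", "content": "Led a 5-person team to ship a new billing service on schedule."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

uploaded = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-3.5-turbo")
print(job.id)
```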
Fine-Tuning and Evaluating a Falcon 7B Model for generating HTML code from input prompts.
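A minimal inference sketch for this prompt-to-HTML setup, assuming the base `tiiuae/falcon-7b-instruct` checkpoint stands in for the project's fine-tuned variant:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="tiiuae/falcon-7b-instruct",
                     trust_remote_code=True, device_map="auto")

prompt = "Generate HTML for a login form with username and password fields."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```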
Building a GPT-3 powered Amazon support bot that answers customer queries precisely via a model fine-tuned on Amazon QA data.
LegalDigest - NLP Project
This hands-on guide walks you through fine-tuning an open-source LLM on Azure and serving the fine-tuned model there. It is intended for data scientists and ML engineers who have fine-tuning experience but are unfamiliar with Azure ML.
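The core Azure ML step such a walkthrough covers is submitting the training script as a command job; in the sketch below the subscription, environment, and compute names are all placeholders.

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

job = command(
    code="./src",                                   # folder containing the training script
    command="python finetune.py --epochs 1",
    environment="<registered-gpu-environment>@latest",  # e.g. a PyTorch GPU environment
    compute="gpu-cluster",                          # an existing compute target
)

ml_client.jobs.create_or_update(job)
```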
A chatbot built with Flask and OpenAI's GPT-3.5 Turbo model, allowing users to chat with the model and receive responses to their input.
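The Flask + GPT-3.5 Turbo pattern reduces to a single endpoint; the route name and payload shape below are assumptions, not the repo's actual API.

```python
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json.get("message", "")
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return jsonify({"reply": completion.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```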
Exploring the potential of fine-tuning Large Language Models (LLMs) like Llama2 and StableLM for medical entity extraction. This project focuses on adapting these models using PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side effects from pharmaceutical texts.
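Of the techniques listed, LoRA is the easiest to sketch with Hugging Face PEFT; the base model and target modules below are illustrative assumptions, not the project's exact configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights train
```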
A data-centric AI package for ML/AI: get high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
[ACL2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models
Fine-tuning Mistral LLM for Adaptive Machine Translation
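Adaptive MT typically conditions the model on a fuzzy match retrieved from a translation memory alongside the new segment; here is a sketch of that prompt shape (the template wording is an assumption, not the repo's exact format).

```python
def build_prompt(fuzzy_src: str, fuzzy_tgt: str, new_src: str) -> str:
    # The fuzzy-match pair primes the model with in-domain terminology
    # before it translates the new source segment.
    return (
        f"English: {fuzzy_src}\nFrench: {fuzzy_tgt}\n"
        f"English: {new_src}\nFrench:"
    )

print(build_prompt(
    "The patient should take two tablets daily.",
    "Le patient doit prendre deux comprimés par jour.",
    "The patient should take one tablet daily.",
))
```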