For this project, I fine-tuned two separate models for three tasks: document summarization, dialogue summarization, and text classification.
Updated Jun 14, 2024 · Jupyter Notebook
Instruction fine tuning BART for Dialogue Summarization | IT4772E | NLP Project 20232
Fine-tuned FLAN-T5 to translate English to Hawaiian Pidgin
Web app for a therapist chatbot, using a custom fine-tuned local FLAN-T5 model for summarisation and GPT-3.5 for chat.
Project based on PyTorch Lightning and Transformers for training Seq2SeqLM models, with a primary focus on MT5 and FLAN-T5, though not limited to them
Performing prompt engineering on a dialogue summarization task using FLAN-T5 and the DialogSum dataset, exploring how different prompts affect the model's output and comparing zero-shot and few-shot inference.
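The zero-shot vs. few-shot comparison above boils down to how the prompt is assembled before it reaches the model. A minimal sketch, assuming illustrative prompt wording and made-up DialogSum-style dialogues (not the repository's exact code):

```python
# Sketch of zero-shot vs. few-shot prompt construction for dialogue
# summarization, as one might feed to a model like FLAN-T5.
# The template wording and example dialogues are assumptions for illustration.

def zero_shot_prompt(dialogue: str) -> str:
    # Zero-shot: ask for a summary with no worked examples.
    return f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

def few_shot_prompt(examples: list[tuple[str, str]], dialogue: str) -> str:
    # Few-shot: prepend solved (dialogue, summary) pairs before the target dialogue.
    parts = []
    for ex_dialogue, ex_summary in examples:
        parts.append(
            f"Summarize the following conversation.\n\n{ex_dialogue}\n\nSummary: {ex_summary}"
        )
    parts.append(f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:")
    return "\n\n".join(parts)

dialogue = "#Person1#: Are we still on for lunch?\n#Person2#: Yes, see you at noon."
shots = [
    (
        "#Person1#: The meeting moved to 3pm.\n#Person2#: Thanks for the heads-up.",
        "#Person1# tells #Person2# the meeting moved to 3pm.",
    )
]

print(zero_shot_prompt(dialogue))
print(few_shot_prompt(shots, dialogue))
```

Both prompts end with an open `Summary:` slot; the few-shot variant simply gives the model completed examples of the same pattern first.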
Discusses four use cases and case studies: the LLM-based approaches available for each, why they differ from alternatives, and the significance of each approach for how it is executed.
LLM projects
Perform deduplication on the FLAN v2 dataset and fine-tune LLaMA 3 on the deduplicated data
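Exact-match deduplication of an instruction dataset can be sketched with a normalize-then-hash pass; the normalization rules (lowercasing, whitespace collapse) and the flat string record format here are assumptions for illustration, not the repository's pipeline:

```python
# Minimal sketch of exact-match deduplication, as one might apply to an
# instruction dataset such as FLAN v2 before fine-tuning.
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivial variants hash the same.
    return " ".join(text.lower().split())

def dedup(records: list[str]) -> list[str]:
    seen = set()
    kept = []
    for rec in records:
        digest = hashlib.sha256(normalize(rec).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(rec)  # keep the first occurrence, original casing intact
    return kept

data = [
    "Translate to French: hello",
    "translate  to french: Hello",  # normalizes identically to the line above
    "Summarize: the cat sat on the mat",
]
print(dedup(data))
```

Real deduplication pipelines often go further (near-duplicate detection via MinHash or n-gram overlap), but the hash-set skeleton is the same.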
Multiple LLM-based models for NLP tasks, starting with question answering on custom data
This repository lets users train a model on any version of T5.
Prompt-engineered RAGs for Open Domain Complex QA
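The retrieve-then-prompt pattern behind a RAG system can be sketched in a few lines: rank documents against the query, then splice the top hits into the prompt. The toy corpus, word-overlap scoring (a stand-in for a real dense or BM25 retriever), and prompt template are illustrative assumptions, not the repository's implementation:

```python
# Minimal sketch of retrieval-augmented generation (RAG) prompt assembly.
# Scoring and template are toy assumptions for illustration.

corpus = [
    "FLAN-T5 is an instruction-tuned variant of T5.",
    "Honolulu is the capital of Hawaii.",
    "ROUGE measures n-gram overlap between summaries.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str], k: int = 1) -> str:
    # Splice the retrieved context above the question for the LLM to answer.
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the capital of Hawaii?", corpus))
```

The resulting prompt grounds the model's answer in the retrieved context rather than in its parametric memory alone.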
Official Code for Analysis Done in the Paper "Frugal Prompting for Dialog Models"
This project fine-tunes LLMs (FLAN-T5) for a text summarisation task using a PEFT approach: LoRA is used for fine-tuning, and all evaluation metrics are computed with ROUGE scoring.
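The core idea behind LoRA-based PEFT is small enough to sketch directly: the frozen pretrained weight `W` is augmented with a low-rank update `(alpha/r) * B @ A`, and only `A` and `B` are trained. This is a minimal NumPy sketch of that idea following the LoRA paper's conventions, not the Hugging Face `peft` API; all shapes and hyperparameters are illustrative:

```python
# Minimal NumPy sketch of a LoRA-adapted linear layer:
#   y = W x + (alpha / r) * B (A x)
# W is frozen; only the low-rank factors A and B are trainable.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Frozen path plus scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapted layer reproduces the frozen layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

The payoff is the parameter count: here `A` and `B` together hold `r * (d_in + d_out) = 32` trainable values versus 64 in `W`, and the gap widens dramatically at transformer scale, which is what makes LoRA fine-tuning cheap.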
Text-To-Text Textbots to Demonstrate Output Differences Between Models Trained on Filtered/Unfiltered Datasets for HSS4 - The Modern Context: Select Figures and Topics
Developed a generative large language model fine-tuned on Stack Overflow data for question answering.
Demonstration of LLM techniques such as prompt engineering, full fine-tuning, and PEFT (LoRA)
NLU_NLG Winter Semester
Fine-tuned FLAN-T5 using full instruction fine-tuning, LoRA-based PEFT, and RLHF with PPO