My lab work from the “Generative AI with Large Language Models” course offered by DeepLearning.AI and Amazon Web Services on Coursera.
Fine-tuning the Pegasus and FLAN-T5 pre-trained language models on the DialogSum dataset for conversation summarization, to optimize the context window in RAG LLMs.
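As a rough illustration of what this kind of LoRA-based PEFT fine-tune of FLAN-T5 looks like with the Hugging Face `peft` library (the checkpoint, rank, and target modules below are assumptions, not this repo's exact settings):

```python
# Minimal sketch: attach LoRA adapters to FLAN-T5 for seq2seq fine-tuning.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "google/flan-t5-base"  # assumption: base-size checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA injects small trainable rank-decomposition matrices into the attention
# projections; the original weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # rank of the update matrices (assumption)
    lora_alpha=32,              # scaling factor
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```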
This repo contains implementations of fine-tuning the LLaMA model with LoRA weights (PEFT), and also covers the Retrieval-Augmented Generation (RAG) framework.
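The RAG side of such a project boils down to: embed a corpus, retrieve the passage closest to the query, and prepend it to the generator's prompt. A toy sketch, assuming `sentence-transformers` for the embeddings (model name and documents are illustrative):

```python
# Toy RAG retrieval step: nearest-neighbor search over document embeddings.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "LoRA adds trainable low-rank matrices to frozen attention weights.",
    "DialogSum is a benchmark dataset for dialogue summarization.",
]
doc_emb = embedder.encode(docs, convert_to_tensor=True)

query = "How does LoRA work?"
query_emb = embedder.encode(query, convert_to_tensor=True)
best = int(util.cos_sim(query_emb, doc_emb).argmax())  # index of closest doc

# The retrieved passage grounds the LLM's prompt.
prompt = f"Context: {docs[best]}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```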
This repository collects hands-on exercises for core tasks underlying modern Generative AI. In particular, it focuses on three coding exercises with Large Language Models; further details are given in the README.md file.
LLM projects
Dialogue Summary LLM - FLAN-T5: An implementation of the FLAN-T5 LLM to summarize dialogues. Prompt engineering, fine-tuning with PEFT, and fine-tuning with RL (PPO) are explored within this project.
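The RL (PPO) stage is typically driven by the TRL library: generate a response to each prompt, score it with a reward model, and take a PPO step against a frozen reference model. A minimal sketch using TRL's classic `PPOTrainer` API (pre-0.12), with a placeholder reward in place of a real reward model:

```python
# Sketch of one PPO update step with TRL; reward and prompt are placeholders.
import torch
from transformers import AutoTokenizer
from trl import PPOConfig, PPOTrainer, AutoModelForSeq2SeqLMWithValueHead

model_name = "google/flan-t5-base"  # assumption
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_name)
ref_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

ppo_trainer = PPOTrainer(
    PPOConfig(batch_size=1, mini_batch_size=1), model, ref_model, tokenizer
)

query = tokenizer("Summarize: ...dialogue here...", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, max_new_tokens=64)[0]
reward = torch.tensor(1.0)  # placeholder: e.g. a toxicity/quality model's score
stats = ppo_trainer.step([query], [response], [reward])  # one PPO update
```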
Mistral and Mixtral (MoE) from scratch
A fine-tuned LLM great at answering questions about car repairs and maintenance.
A QLoRA+ LLM Ensemble with Schema-Linking for Text-to-SQL Generation
Fine-tune StarCoder2-3B for SQL tasks on limited resources with LoRA. LoRA shrinks the set of trainable parameters, enabling faster training on smaller datasets. StarCoder2 is a family of code generation models (3B, 7B, and 15B) trained on 600+ programming languages from The Stack v2 and some natural-language text such as Wikipedia, arXiv, and GitHub issues.
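For reference, a QLoRA-style setup for this kind of low-resource fine-tune loads the base model in 4-bit and trains only LoRA adapters on top. A sketch with `bitsandbytes` quantization (rank, dropout, and target modules are assumptions):

```python
# Sketch: load StarCoder2-3B quantized to 4-bit and attach LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize base weights to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder2-3b", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")

# Only the small LoRA matrices are trained; the 4-bit base stays frozen.
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```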
A bash scripting assistant that helps you automate tasks. Powered by a Streamlit chat interface, a fine-tuned nl2bash model generates bash code from natural-language descriptions provided by the user.
This project converts natural language to SQL queries.
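Both this project and the nl2bash assistant above follow the same inference pattern: format the schema (or task) and the user's question into a prompt, then decode the generated query. A generic sketch, with a hypothetical fine-tuned checkpoint name:

```python
# Generic NL-to-SQL inference sketch; the checkpoint name is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "your-org/your-nl2sql-model"  # hypothetical fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

schema = "CREATE TABLE employees (id INT, name TEXT, salary INT);"
question = "List the names of employees earning more than 50000."
prompt = f"-- Schema:\n{schema}\n-- Question: {question}\n-- SQL:\n"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
# Strip the prompt tokens and print only the generated SQL.
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:],
                       skip_special_tokens=True))
```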
Stumble upon a fine-tuning that is unfathomable.
Uses PEFT and LoRA to fine-tune large language models for dialogue summarization, reducing the computational resources needed and enabling broader application.
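In practice the payoff shows up at deployment time: only the small adapter is saved, and it is re-attached to the frozen base model for inference. A sketch, assuming a FLAN-T5 base and a hypothetical local adapter path:

```python
# Sketch: load a saved LoRA adapter onto its frozen base model for inference.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "./peft-dialogue-summary-checkpoint")  # hypothetical path
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

dialogue = "Person1: Hi, how are you? Person2: Fine, thanks."
inputs = tokenizer(
    f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:",
    return_tensors="pt",
)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0],
                       skip_special_tokens=True))
```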
Fine-tuning an LLM to generate musical micro-genres
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
This project is an implementation of the paper “Parameter-Efficient Transfer Learning for NLP” (Houlsby et al., Google, ICML 2019).
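The core building block of that paper is a bottleneck adapter inserted after each transformer sub-layer, with a residual connection and near-identity initialization so training starts from the pretrained model's behavior. A minimal PyTorch sketch (dimensions are illustrative):

```python
# Houlsby-style adapter: down-project, nonlinearity, up-project, residual add.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # down-projection
        self.up = nn.Linear(bottleneck, d_model)    # up-projection
        self.act = nn.GELU()
        # Near-identity init: the adapter initially passes inputs through.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual bottleneck

adapter = Adapter(d_model=768)
print(adapter(torch.randn(2, 16, 768)).shape)  # torch.Size([2, 16, 768])
```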
This repo collects materials covering transformers and NLP.