
GenAI-with-LLMs

This repository contains my lab work for the “Generative AI with Large Language Models” course offered by DeepLearning.AI and Amazon Web Services on Coursera.

Lab Resources

Below are the links to the specific lab notebooks. Each notebook is available in two versions: the Initial Version (before running all cells) and the Executed Version (after running all cells).

  1. Lab 1: Summarize Dialogue
    Perform dialogue summarization using generative AI. Experiment with in-context learning (zero-shot, one-shot, and few-shot inference) and tune the associated configuration parameters at inference time to influence the results.
  2. Lab 2: Fine-Tune Generative AI Model
    Perform instruction fine-tuning on an existing LLM from Hugging Face, the Flan-T5 model. Explore both full fine-tuning and PEFT (Parameter-Efficient Fine-Tuning) methods such as LoRA (Low-Rank Adaptation), and evaluate the results using ROUGE metrics.
  3. Lab 3: Fine-Tune Model to Detoxify Summaries
    Further fine-tune the Flan-T5 model using reinforcement learning with a reward model, Meta AI's hate-speech reward model, to generate less toxic summaries. Use Proximal Policy Optimization (PPO) to fine-tune and detoxify the model.
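The zero-/one-/few-shot distinction in Lab 1 comes down to how many worked examples are packed into the prompt before the dialogue to be summarized. A minimal sketch of that prompt construction is below; the exact template wording (`"Dialogue:"` / `"What was going on?"`) is an assumption here, not necessarily the template the lab notebooks use.

```python
def build_prompt(examples, dialogue):
    """Build a summarization prompt for an instruction-tuned model.

    `examples` is a list of (dialogue, summary) pairs: an empty list
    yields a zero-shot prompt, one pair gives one-shot, several give
    few-shot. The template wording is illustrative only.
    """
    prompt = ""
    for ex_dialogue, ex_summary in examples:
        # Each in-context example shows the model a completed task.
        prompt += f"Dialogue:\n\n{ex_dialogue}\n\nWhat was going on?\n{ex_summary}\n\n\n"
    # The final dialogue is left unanswered for the model to complete.
    prompt += f"Dialogue:\n\n{dialogue}\n\nWhat was going on?\n"
    return prompt


shots = [("A: Did you finish the report?\nB: Yes, sent it this morning.",
          "B confirms the report was sent in the morning.")]
print(build_prompt(shots, "A: Lunch?\nB: Sure, noon works."))
```

The same function covers all three inference modes, so experiments only vary the length of the `examples` list.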
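The LoRA method used in Lab 2 freezes the pretrained weight matrix and trains only a low-rank update, `ΔW = B·A`, scaled by `alpha/r`. A minimal NumPy sketch of the idea (not the `peft` library's implementation) shows why it is parameter-efficient:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16  # illustrative sizes, not Flan-T5's

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = 0.01 * rng.normal(size=(r, d_in))   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: no change at start

def lora_forward(x):
    # y = x W^T + (alpha/r) * x (B A)^T, without ever materializing B A
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(4, d_in))
# With B initialized to zero, the adapted model reproduces the frozen one.
assert np.allclose(lora_forward(x), x @ W.T)

full_params = W.size                 # what full fine-tuning would update
lora_params = A.size + B.size        # what LoRA updates
print(f"trainable params: {lora_params} vs full fine-tuning: {full_params}")
```

Only `A` and `B` receive gradients, so the trainable parameter count drops from `d_out * d_in` to `r * (d_in + d_out)`, which is the efficiency the lab measures against full fine-tuning.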
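Lab 2's evaluation relies on ROUGE, which scores a generated summary by n-gram overlap with a human reference. A pure-Python sketch of ROUGE-1 F1 (unigram overlap; real evaluations typically use a library such as Hugging Face `evaluate`, and may add stemming and ROUGE-2/ROUGE-L):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped count of shared unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


print(rouge1_f1("the report was sent this morning",
                "b sent the report in the morning"))
```

Higher overlap with the reference pushes the score toward 1.0, which is how the lab quantifies the gain from fine-tuning over the base model.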

Papers

  1. Attention Is All You Need

  2. BloombergGPT: A Large Language Model for Finance

  3. Scaling Instruction-Finetuned Language Models

  4. ReAct: Synergizing Reasoning and Acting in Language Models
