
CLiC-it 2023 Tutorial

Large Language Models and How to Instruction Tune Them (in a Sustainable Way)

This repository hosts the materials from the CLiC-it 2023 tutorial. The objectives of the tutorial are to:

  • Introduce Transformer-based architectures, covering encoder-decoder, encoder-only, and decoder-only structures.
  • Demonstrate how to fine-tune Large Language Models (LLMs) on diverse datasets in a multi-task framework.
  • Show how Low-Rank Adaptation (LoRA) enables sustainable and efficient tuning on "modest" hardware, e.g., a single GPU with 16GB of memory (see the sketch after this list).
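
As a concrete reference for the LoRA point above, here is a minimal sketch of attaching low-rank adapters to a causal LM with the Hugging Face peft library. The model name and hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the tutorial's exact configuration:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; the tutorial uses a LLaMA-based checkpoint.
model = AutoModelForCausalLM.from_pretrained("yahma/llama-7b-hf")

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the adapter weights are trainable
```

Because the base weights stay frozen and only the small adapter matrices are updated, the memory footprint of training drops dramatically, which is what makes tuning on a single 16GB GPU feasible.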

The repository includes code for fine-tuning a Large Language Model (based on LLaMA) with instructions covering all the tasks from EVALITA 2023. In particular, the tutorial shows how to encode data from different tasks into task-specific prompts and how to fine-tune the LLM using QLoRA. The code can also be run in Google Colab on an NVIDIA T4 GPU with 15GB of memory.
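
To illustrate the prompt-encoding idea, here is a minimal sketch of how one supervised example might be verbalized into an instruction prompt for multi-task tuning. The task identifier and template are illustrative assumptions, not the exact format used in the repository:

```python
def encode_example(task_prefix: str, instruction: str, text: str, answer: str) -> str:
    """Serialize one supervised example into a single prompt string."""
    return (
        f"[{task_prefix}] {instruction}\n"
        f"Input: {text}\n"
        f"Output: {answer}"
    )

# Hypothetical EVALITA task identifier and example, for illustration only.
prompt = encode_example(
    task_prefix="haspeede3",
    instruction="Classify the tweet as hateful or not hateful.",
    text="Esempio di tweet da classificare.",
    answer="not hateful",
)
print(prompt)
```

Prefixing each prompt with a task identifier lets a single model be trained on all tasks at once, since the prefix tells it which output format is expected.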

The code is heavily based on that of the ExtremITA system, which participated in EVALITA 2023.

Code

The overall process is divided into four steps:

  1. Encoding: converting examples from each task into instruction prompts.
  2. Fine-tuning: training the LLM with QLoRA on the encoded prompts (sketched below).
  3. Inference: generating answers for new inputs with the tuned model.
  4. Decoding: mapping the generated text back into task-specific labels.
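
The fine-tuning step combines 4-bit quantization with LoRA adapters. The following is a minimal sketch of that setup using transformers, bitsandbytes, and peft; the model name and hyperparameters are illustrative assumptions, not the repository's exact configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base model to 4 bits so it fits on a single T4 GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,    # fp16 compute (the T4 lacks bf16)
)

model = AutoModelForCausalLM.from_pretrained(
    "yahma/llama-7b-hf",                     # illustrative base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Only the LoRA adapter weights receive gradients during training.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"),
)
model.print_trainable_parameters()
```

From here, a standard transformers Trainer (or trl's SFTTrainer) can be run on the encoded prompts; the quantized base weights stay frozen while the adapters are updated.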

Slides

The repository also includes the tutorial slides (LINK).

For questions or suggestions, open an Issue in this repository or email croce@info.uniroma2.it or hromei@ing.uniroma2.it.