In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning

  • Use decode_icl_llama.py to generate responses with vanilla Llama-2 models via in-context alignment.
  • Use other decode_*.py files to generate responses with baseline models.
  • Use eval_outputs.py for automatic evaluations.
  • Example commands can be found in the headers of the files.
  • For more details, please refer to our paper or email Han at xiaochuang.han@gmail.com :)
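The core idea is to prepend a few (instruction, response) demonstration pairs to the user's query so that a vanilla, non-chat-tuned model continues in an assistant style. The sketch below is illustrative only; the function name and prompt template are assumptions, not the exact format used by decode_icl_llama.py (see the file headers and the paper for the actual commands and templates).

```python
# Illustrative sketch of in-context alignment (hypothetical template,
# not the repo's exact prompt format): demonstration pairs are
# concatenated ahead of the new query, and a vanilla LM is asked to
# continue the pattern.

def build_icl_prompt(demos, query):
    """Join (instruction, response) demos and the new query into one prompt."""
    parts = []
    for instruction, response in demos:
        parts.append(f"Instruction: {instruction}\nResponse: {response}\n")
    # Leave the final response empty for the model to complete.
    parts.append(f"Instruction: {query}\nResponse:")
    return "\n".join(parts)

demos = [
    ("Say hello.", "Hello! How can I help you today?"),
]
prompt = build_icl_prompt(demos, "What is 2 + 2?")
# A vanilla model (e.g. a Llama-2 base checkpoint) would then generate
# a continuation of `prompt` with its usual decoding loop.
```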
