add prompt templates and readme
xianxl committed Sep 23, 2024
1 parent e75a946 commit 8627a36
Showing 3 changed files with 51 additions and 0 deletions.
23 changes: 23 additions & 0 deletions projects/self_taught_evaluator/README.md
@@ -0,0 +1,23 @@
# Self-Taught Evaluators
TODO: insert a figure.

## Inference and Evaluation
Coming soon.

## Synthetic Preference Data
### Generate worse response
1. Given pairs of (instruction, response), construct prompts using the template specified in `data/prompts/worse_response.prompt`.
2. Run generation on the prompts from step 1 with temperature 0.7 and top_p 0.9 (see the sketch after this list).
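
A minimal sketch of these two steps, assuming vLLM as the inference engine and a JSONL file with `instruction` and `response` fields; the engine, input file name, field names, and model are illustrative assumptions, not specified by this repo.

```python
# Sketch only: vLLM, pairs.jsonl, and the model name below are assumptions.
import json

from vllm import LLM, SamplingParams

# Step 1: fill the worse-response template with each (instruction, response) pair.
template = open("data/prompts/worse_response.prompt").read()
pairs = [json.loads(line) for line in open("pairs.jsonl")]  # hypothetical input file
prompts = [template.format(input=p["instruction"], generation=p["response"]) for p in pairs]

# Step 2: sample a "worse" response for each prompt with temperature 0.7 and top_p 0.9.
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=1024)
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # placeholder model choice
outputs = llm.generate(prompts, sampling)
worse_responses = [o.outputs[0].text for o in outputs]
```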
### Generate judgement
Judgements over response pairs can be generated with the evaluation template in `data/prompts/eval_plan.prompt` (included below), which asks the judge to output a final verdict of the form `[[A]]` or `[[B]]`.
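
A hedged sketch of that step: fill the template's `{input}`, `{generation}`, and `{generation2}` placeholders, run one generation, and parse the verdict. The `judge_pair` helper and its `generate` callable are illustrative, not part of this repo.

```python
import re
from typing import Callable, Optional

def judge_pair(question: str, response_a: str, response_b: str,
               generate: Callable[[str], str]) -> Optional[str]:
    """Fill eval_plan.prompt, run one generation, and return "A", "B", or None."""
    template = open("data/prompts/eval_plan.prompt").read()
    prompt = template.format(input=question, generation=response_a, generation2=response_b)
    judgement = generate(prompt)  # any text-in/text-out LLM call, e.g. the vLLM setup above
    # The template instructs the judge to end with a verdict formatted as [[A]] or [[B]].
    match = re.search(r"\[\[(A|B)\]\]", judgement)
    return match.group(1) if match else None
```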

## Model Training
Coming soon.
## Citation
If you use data, model, or code from this work, please cite it with the following BibTeX entry:

@article{wang2024self,
  title={Self-taught evaluators},
  author={Wang, Tianlu and Kulikov, Ilia and Golovneva, Olga and Yu, Ping and Yuan, Weizhe and Dwivedi-Yu, Jane and Pang, Richard Yuanzhe and Fazel-Zarandi, Maryam and Weston, Jason and Li, Xian},
  journal={arXiv preprint arXiv:2408.02666},
  year={2024}
}
12 changes: 12 additions & 0 deletions projects/self_taught_evaluator/data/prompts/eval_plan.prompt
@@ -0,0 +1,12 @@
Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. You should choose the assistant that follows the user's instructions and answers the user's question better. Begin your evaluation by first verifying whether each response contains any obvious or subtle errors. Then propose an appropriate evaluation rubric, e.g. 1-5 criteria that are important for evaluating responses to this specific user question. Continue your evaluation by checking each response carefully along those criteria. Based on the analysis in previous steps, choose which response is better overall. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your evaluation, output your final verdict by strictly following this format: \"[[A]]\" if assistant A is better, \"[[B]]\" if assistant B is better.

[User Question]
{input}

[The Start of Assistant A's Answer]
{generation}
[The End of Assistant A's Answer]

[The Start of Assistant B's Answer]
{generation2}
[The End of Assistant B's Answer]
16 changes: 16 additions & 0 deletions projects/self_taught_evaluator/data/prompts/worse_response.prompt
@@ -0,0 +1,16 @@
Below is a conversation between a user and an AI Assistant.

[User Question]
{input}

[The start of Assistant's Answer]
{generation}
[The end of Assistant's Answer]

Please first generate a modified instruction that is highly relevant but not semantically identical to the user's instruction above. Then write a high-quality answer which is a good response to the modified instruction but not a good response to the original user question. IMPORTANT: Please strictly follow the format below:
[User Question Modified]
<provide a modified instruction here>

[The start of Assistant's answer to the modified instruction]
<provide a high-quality response to the modified instruction>
[The end of Assistant's answer to the modified instruction]
