
support loss function for Self-play Preference Optimization #1612

Merged
merged 6 commits into from
May 2, 2024

Conversation

winglian
Contributor

@winglian winglian commented May 2, 2024

Contributor

@younesbelkada younesbelkada left a comment


Thanks a lot for this great addition!
Can you add a section to the docs to mention this method: https://github.com/huggingface/trl/blob/main/docs/source/dpo_trainer.mdx#loss-functions 🙏

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@winglian
Contributor Author

winglian commented May 2, 2024

@younesbelkada docs updated. thanks!

Contributor

@younesbelkada younesbelkada left a comment


Thanks a lot!

@younesbelkada younesbelkada requested a review from kashif May 2, 2024 12:49
@younesbelkada
Contributor

cc @kashif wdyt? 🙏

@kashif
Collaborator

kashif commented May 2, 2024

Thanks @winglian, can you kindly update the config's docs and arguments?

loss_type: Literal["sigmoid", "hinge", "ipo", "kto_pair", "bco_pair"] = "sigmoid"
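The updated annotation might look like the following sketch, assuming the new loss is registered under the name "sppo" (the alias name `SPPOLossType` is purely illustrative):

```python
from typing import Literal, get_args

# Hypothetical updated field annotation for the DPO config, assuming
# the new loss added by this PR is registered as "sppo":
SPPOLossType = Literal["sigmoid", "hinge", "ipo", "kto_pair", "bco_pair", "sppo"]
loss_type: SPPOLossType = "sigmoid"
```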

docs/source/dpo_trainer.mdx (review thread resolved)
winglian and others added 2 commits May 2, 2024 09:21
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
@kashif
Collaborator

kashif commented May 2, 2024

@winglian do you want to add an option to the test? e.g. ["gpt2", "sppo", True]

@kashif kashif merged commit adf17a5 into huggingface:main May 2, 2024
9 checks passed
@angelahzyuan
Contributor

@winglian Thanks for adding our work! @younesbelkada @kashif Just submitted a new pull request at #1615. It updates the loss function according to Equation (4.8), with $P(y_w > y_l) = 1$ and $P(y_l > y_w) = 0$, and justifies it in the docs as the hard-label version of the algorithm.

[Screenshot: the updated loss equation]

It should work well now for the first iteration. Our reported three-iteration results were based on the soft-label version.
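The hard-label objective described above can be sketched as a squared-error loss that pushes the chosen log-ratio toward +1/(2β) and the rejected log-ratio toward −1/(2β). This is a minimal illustrative sketch, not the exact TRL implementation; the function name and argument names are hypothetical:

```python
def sppo_hard_loss(policy_chosen_logp, ref_chosen_logp,
                   policy_rejected_logp, ref_rejected_logp, beta=0.001):
    """Hard-label SPPO loss sketch: with P(y_w > y_l) = 1 and P(y_l > y_w) = 0,
    the chosen log-ratio is regressed toward +1/(2*beta) and the rejected
    log-ratio toward -1/(2*beta)."""
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    return (chosen_logratio - 0.5 / beta) ** 2 + (rejected_logratio + 0.5 / beta) ** 2
```

Note the loss is zero exactly when the two log-ratios hit their targets, so a small beta demands a large separation between chosen and rejected responses.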

@flozi00

flozi00 commented May 5, 2024

I just gave it a try and it's working better than ORPO for me now.
I installed the main branch this evening with the follow-up patch for the hard loss type.

@AGTSAAA

AGTSAAA commented May 9, 2024

Hi @winglian @flozi00, do you know what value of beta I should set for SPPO?
