
[Request] Fine-tuned BERT version used for discrimination #44

Open
zainsarwar865 opened this issue Aug 29, 2020 · 0 comments


I am conducting research on deep-fake text and am currently trying to replicate the paper's results for the other models used as discriminators. The paper mentions a fine-tuned version of BERT in which you extended the maximum sequence length to 1024 by initializing new position encodings.
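
For context, here is a minimal sketch of how I understand that extension, assuming the HuggingFace `transformers` API and `bert-base-uncased`; the way the new positions are initialized below (tiling the pretrained table) is my own guess, not necessarily what the paper did:

```python
# Sketch: extend BERT's learned position embeddings from 512 to 1024 positions.
# Assumes HuggingFace transformers; the initialization of positions 512..1023
# is a heuristic, not the paper's confirmed method.
import torch
from transformers import BertForSequenceClassification

MAX_LEN = 1024

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
old_emb = model.bert.embeddings.position_embeddings  # nn.Embedding(512, 768)
old_len, dim = old_emb.weight.shape

new_emb = torch.nn.Embedding(MAX_LEN, dim)
with torch.no_grad():
    # Keep the pretrained encodings for positions 0..511 and tile them to
    # initialize positions 512..1023 (assumption; random init also possible).
    new_emb.weight[:old_len] = old_emb.weight
    new_emb.weight[old_len:] = old_emb.weight[: MAX_LEN - old_len]

model.bert.embeddings.position_embeddings = new_emb
model.config.max_position_embeddings = MAX_LEN
# Refresh the cached position_ids buffer so forward() indexes the new table.
model.bert.embeddings.register_buffer(
    "position_ids", torch.arange(MAX_LEN).unsqueeze(0)
)
```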

Could you please upload those models? Also, could you tell me what top-p threshold was used to generate the text fed to BERT for discrimination?

Thanks!
