
Add BetterTransformer support for RemBERT #545

Merged 3 commits into huggingface:main on Dec 5, 2022

Conversation

@hchings (Contributor) commented Dec 5, 2022

What does this PR do?

Add BetterTransformer support for RemBERT (part of #20372)

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Details

  • RemBERT's primary change is decoupling the input and output embeddings. Since RemBERT uses the same attention mechanism as BERT, it can reuse the existing BertLayerBetterTransformer (please correct me if I'm wrong); see the sketch after this list.
  • Minor code cleanup to sort model names alphabetically, as is the convention in the transformers repo.
  • All encoder tests passed on GPU (see screenshot below).
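For illustration, here is a minimal sketch of what the support boils down to, assuming optimum's convention of mapping a transformers layer class name to its BetterTransformer counterpart. The dict name `LAYERS_MAPPING` and the exact import path are simplifications for this sketch, not necessarily the real names in optimum:

```python
# Hypothetical simplification of the registration in optimum's
# bettertransformer module; names here are illustrative.
from optimum.bettertransformer.models.encoder_models import BertLayerBetterTransformer

LAYERS_MAPPING = {
    "BertLayer": BertLayerBetterTransformer,
    # RemBERT decouples input/output embeddings but keeps BERT's attention,
    # so its encoder layer can reuse the same BetterTransformer layer class:
    "RemBertLayer": BertLayerBetterTransformer,
}
```

Once the layer is registered, conversion goes through the usual public API:

```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("google/rembert")
model = BetterTransformer.transform(model)  # swaps encoder layers for fused kernels
```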

[Screenshot: encoder test suite passing on GPU, 2022-12-04]

cc @younesbelkada, @michaelbenayoun, @fxmarty. Thanks.

@HuggingFaceDocBuilderDev commented Dec 5, 2022

The documentation is not available anymore as the PR was closed or merged.

@younesbelkada (Contributor) left a comment

Thanks so much for adding BetterTransformer support for RemBERT 🎉
Very clean PR! Thanks also for taking care of the alphabetical ordering in the testing suite.
Looking forward to your next contributions 💪

@hchings (Contributor, Author) commented Dec 5, 2022

Hi @younesbelkada or @michaelbenayoun, I see that "first-time contributor needs a maintainer to approve running workflows". Could you help approve that? Thanks!

@younesbelkada (Contributor) commented Dec 5, 2022

Thanks for the heads-up! I've just run the workflow, and we'll merge once it's green.

@hchings (Contributor, Author) commented Dec 5, 2022

Hi @younesbelkada, two checks failed.

1/ test_inference_speed() in test_bettertransformer_encoder.py: this test runs only on the bert-base-uncased model and passes on GPU (test screenshot attached above), but it can occasionally fail on CPU. Any BetterTransformer-related PR could easily fail this check. Should we open an issue to look into this?

 AssertionError: 0.05959849110001869 not less than 0.056976150400032566 : The converted model is slower than the original model.

I think it's either a PyTorch issue or something needs to change in BertLayerBetterTransformer, because even on CPU BetterTransformer is supposed to be faster. Please advise, thanks.
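For context, the failing check reduces to a wall-clock comparison along these lines. This is an illustrative sketch, not the actual test code; the helper name `mean_latency`, the run count, and the inputs are made up:

```python
import time

import torch
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer


def mean_latency(model, inputs, n_runs=30):
    """Average wall-clock time of a forward pass, in seconds."""
    model.eval()
    with torch.inference_mode():
        model(**inputs)  # warmup
        start = time.perf_counter()
        for _ in range(n_runs):
            model(**inputs)
    return (time.perf_counter() - start) / n_runs


model = AutoModel.from_pretrained("bert-base-uncased")
inputs = {"input_ids": torch.randint(0, 1000, (1, 64))}

baseline = mean_latency(model, inputs)
converted = mean_latency(BetterTransformer.transform(model), inputs)

# On CPU, wall-clock noise can make this strict comparison fail spuriously.
assert converted < baseline, "The converted model is slower than the original model."
```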

2/ ONNX Runtime: I saw this test has been failing on the main branch. Since it's unrelated to the code changes here, can we ignore it when merging this PR?

@younesbelkada (Contributor) commented Dec 5, 2022

Hi @hchings,
Thanks for your message.
For 2/, this is expected, as some tests are flaky, so don't worry about that.
For 1/, this is a bit strange, since it passes on macOS but not on Ubuntu. This test used to be flaky and fails sometimes. Let me get back to you on this.

@younesbelkada (Contributor) commented Dec 5, 2022

OK, the test was just flaky. Let's merge!

@younesbelkada younesbelkada merged commit eff4c2b into huggingface:main Dec 5, 2022