Fixes to alternating SWA layers in Gemma2 #31775

Merged
merged 4 commits into huggingface:main on Jul 11, 2024

Conversation

turboderp
Contributor

What does this PR do?

  • Reverses the order of global and sliding attention layers in Gemma2. This brings it in line with Google's implementation, in which sliding attention is used on layers 0, 2, 4, ..., whereas the current Transformers implementation uses sliding attention on layers 1, 3, 5, ... (see the first sketch after this list).

  • Changes HybridCache.update to read the sliding_window argument from cache_kwargs, since it wasn't being parsed otherwise. The cache was created with alternating maximum sequence lengths of 4k and 8k, but all layers were being updated as if they were 8k, causing out-of-bounds errors and CUDA exceptions (see the second sketch after this list).
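
A minimal sketch of the layer pattern this aligns with (illustrative only, not the actual Transformers source; the layer count is a hypothetical value, and the 4k/8k window sizes are the ones mentioned above):

```python
# Sliding-window attention on even layer indices (0, 2, 4, ...), global
# attention on odd ones, matching Google's reference implementation.
num_layers = 26          # hypothetical layer count, for illustration only
sliding_window = 4096    # 4k sliding-window layers
global_window = 8192     # 8k global-attention layers

for layer_idx in range(num_layers):
    is_sliding = layer_idx % 2 == 0   # was layer_idx % 2 == 1 before this PR
    window = sliding_window if is_sliding else global_window
    print(f"layer {layer_idx}: {'sliding' if is_sliding else 'global'} (max {window})")
```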
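And a self-contained sketch (not the actual HybridCache code; the helper, shapes, and positions are illustrative) of why the per-layer sliding_window has to reach the cache update: a 4k sliding layer written at a raw 8k-range position runs past the end of its buffer.

```python
import torch

# Illustrative cache update: if a layer has a sliding window, clamp the write
# position into its (shorter) buffer instead of indexing past the end.
def update_layer_cache(key_cache, key_states, cache_position, sliding_window=None):
    if sliding_window is not None and cache_position >= sliding_window:
        # Roll the window left by one slot and write at the last position.
        key_cache[..., :-1, :] = key_cache[..., 1:, :].clone()
        write_pos = sliding_window - 1
    else:
        write_pos = cache_position
    key_cache[..., write_pos, :] = key_states
    return key_cache

# Hypothetical shapes for illustration: (batch, heads, max_len, head_dim).
sliding_cache = torch.zeros(1, 8, 4096, 256)   # sliding-attention layer (4k)
global_cache = torch.zeros(1, 8, 8192, 256)    # global-attention layer (8k)
new_key = torch.randn(1, 8, 256)

# At position 5000, the sliding layer must clamp its write index to 4095,
# while the global layer can still write at position 5000 directly.
update_layer_cache(sliding_cache, new_key, 5000, sliding_window=4096)
update_layer_cache(global_cache, new_key, 5000)
```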

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. @ArthurZucker

@LysandreJik
Member

Thanks for your PR @turboderp, we're taking a look with @ArthurZucker

@fizzAI commented Jul 7, 2024

Any updates on this? It's likely required to get the proper performance out of the Gemma 2 models.

@ArthurZucker
Collaborator

LGTM thanks for fixing!

@ArthurZucker
Collaborator

The slow tests are potentially going to fail, cc @ydshieh, is it alright with you to update them later on? I think a patch will include this!

@ArthurZucker ArthurZucker merged commit a695c18 into huggingface:main Jul 11, 2024
17 of 20 checks passed
@ArthurZucker
Collaborator

Thanks @turboderp

ArthurZucker pushed a commit that referenced this pull request Jul 11, 2024
* HybridCache: Flip order of alternating global-attn/sliding-attn layers

* HybridCache: Read sliding_window argument from cache_kwargs

* Gemma2Model: Flip order of alternating global-attn/sliding-attn layers

* Code formatting
amyeroberts pushed a commit to amyeroberts/transformers that referenced this pull request Jul 19, 2024

MHRDYN7 pushed a commit to MHRDYN7/transformers that referenced this pull request Jul 23, 2024

zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request Jul 24, 2024