[Text Generation] Causal Mask Support #1127

Merged: dbogunowicz merged 7 commits into feature/damian/causal_mask_fb from feature/damian/causal_mask_support on Jul 25, 2023
Conversation
bfineran requested changes on Jul 24, 2023
bfineran pushed a commit that referenced this pull request on Jul 27, 2023
* Update helpers.py
* correct implementation of the mapping from inputs to causal mask
* [Text Generation] Causal Mask Support (#1127)
* initial commit
* clean up the PR
* working implementation
* Ben's review comments
* [Text Generation] Multitoken prefill enablement (#1130)
* initial commit
* clean up the PR
* working implementation
* initial implementation, hacky lets clean it up
* ready for review
* few tiny quality improvements
* simplify the logic for computing num of unmasked bits for creating attention_mask for the multitoken prefill
* replace boolean causal mask for int64 causal mask
* fix breaking tests
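The commit log above mentions replacing a boolean causal mask with an int64 causal mask. As a rough illustration of what such a mask encodes, here is a minimal sketch; the function name, shape, and semantics are assumptions for illustration, not the PR's actual helper in helpers.py:

```python
import numpy as np

def causal_mask(num_tokens: int) -> np.ndarray:
    # Lower-triangular int64 mask: entry (i, j) is 1 when position i is
    # allowed to attend to position j (i.e. j <= i), 0 otherwise.
    # Using int64 rather than bool is an assumption matching the commit note.
    return np.tril(np.ones((num_tokens, num_tokens), dtype=np.int64))

print(causal_mask(4))
```

For 4 tokens this yields a 4x4 lower-triangular matrix of ones, so each token can only attend to itself and earlier positions.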
Allows the user to set the argument prompt_processing_sequence_length of the TextGeneration pipeline to a value different from sequence_length, effectively enabling a multitoken_engine to run in a scenario where we feed it input_ids of any length, while robustly providing KV cache support. In other words, it enables prefilling the cache using subsequences of different lengths.

Manual Testing

Complementary feature (and PR) from SparseML: neuralmagic/sparseml#1676

(In this scenario we run with the default prompt_processing_sequence_length=64, but setting it to 32 naturally gives the same result.)
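To make the prefill idea concrete, the prompt can be split into fixed-length subsequences of prompt_processing_sequence_length for the multitoken engine, with any leftover tokens handled separately (e.g. one token at a time). This is a hypothetical sketch of that chunking; split_prompt is not a function from the PR:

```python
from typing import List, Tuple

def split_prompt(input_ids: List[int], chunk_len: int) -> Tuple[List[List[int]], List[int]]:
    # Split input_ids into full chunks of length chunk_len for multitoken
    # prefill; the remainder (shorter than chunk_len) is returned separately,
    # under the assumption it would be processed by the single-token engine.
    full = len(input_ids) - len(input_ids) % chunk_len
    chunks = [input_ids[i:i + chunk_len] for i in range(0, full, chunk_len)]
    remainder = input_ids[full:]
    return chunks, remainder

chunks, remainder = split_prompt(list(range(10)), 4)
print(chunks, remainder)
```

With a 10-token prompt and chunk_len=4, this produces two full 4-token chunks plus a 2-token remainder, which is why running with prompt_processing_sequence_length=64 or 32 yields the same generations for prompts that fit either way.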