InContextLearning*Dataset Default padding sides hardcoded? #2778

Open
MFajcik opened this issue Dec 13, 2023 · 1 comment
Labels
bug Something isn't working

Comments


MFajcik commented Dec 13, 2023

Hi, I was wondering about your code here:

inp, continuation_span = _make_padded_input(context_enc, continuation_enc, self.max_seq_len,

Why do you assume right padding (for the InContextLearningMultipleChoiceTaskDataset, but also some of the others)?

  1. Shouldn't the padding_side be derived from the tokenizer? (A rough sketch of what I mean is below.)
  2. Assuming right padding breaks some models (Mistral is unusable).
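
For illustration, something like the following is what I have in mind for point 1. This is a hypothetical helper, not the actual _make_padded_input from this repo, and it uses GPT-2's tokenizer only because it is small; the same applies to a Mistral tokenizer:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def make_padded_input(context_enc, continuation_enc, max_seq_len, pad_tok_id,
                      padding_side="right"):
    """Concatenate context + continuation and pad to max_seq_len on the given side."""
    inp = torch.tensor(context_enc + continuation_enc, dtype=torch.long)[-max_seq_len:]
    pad = torch.full((max_seq_len - inp.shape[0],), pad_tok_id, dtype=torch.long)
    return torch.cat([inp, pad]) if padding_side == "right" else torch.cat([pad, inp])

# Derive the side from the tokenizer instead of hardcoding "right".
padded = make_padded_input([1, 2, 3], [4, 5], max_seq_len=8,
                           pad_tok_id=tokenizer.pad_token_id,
                           padding_side=tokenizer.padding_side)
print(padded)
```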

Thanks for the information.

MFajcik added the bug label on Dec 13, 2023

dakinggg (Contributor) commented Jan 4, 2024

Hey @MFajcik, sorry for the delayed response. Tokenizers have a default padding side set, but models should all be compatible with either padding side (unless they explicitly error out). Generally speaking, we use right padding by default (for training, single forward passes, etc.) and left padding for generation (necessary for autoregressive generation and the KV cache to work out). Mistral should work fine (we've run it). You may need to update to the latest transformers version. If you still have an issue after that, please send a full repro. Thanks!
