
More tests #95

Closed
IlyasMoutawwakil opened this issue Dec 5, 2023 · 9 comments
Labels: good first issue (Good for newcomers)

Comments

@IlyasMoutawwakil
Member

IlyasMoutawwakil commented Dec 5, 2023

Most of these features are covered in examples but not in the testing suite.

@IlyasMoutawwakil changed the title from "More exhaustive tests" to "More tests" on Dec 5, 2023
@IlyasMoutawwakil added the good first issue (Good for newcomers) label on Jan 11, 2024
@aliabdelkader
Contributor

I am happy to take this issue if no one is working on it. It is my first issue, so I might need some help.
It looks to me that we just need to add configuration files to make the current tests cover the mentioned use cases. Is that correct, or did I miss something?

@IlyasMoutawwakil
Member Author

@aliabdelkader yes, I left these as first issues for anyone interested in contributing and in having these settings tested as part of the CI. You just need to add a configuration and make sure it's tested in the CI (this might require adding some packages).

aliabdelkader added a commit to aliabdelkader/optimum-benchmark that referenced this issue Feb 24, 2024
@aliabdelkader
Contributor

#take
Ok great, I am working on it.

@lopozz
Contributor

lopozz commented Feb 26, 2024

Hi @aliabdelkader and @IlyasMoutawwakil, I benchmarked some GPTQ models in the past week and I'd be up for writing a test for this quantization scheme if you're not already on it. Let me know.

@IlyasMoutawwakil
Member Author

Thanks! I would love to review the quantization configs. Make sure you include `backend.no_weights: true,false` in your sweep configs when possible, as the no-weights feature is probably one of the most important features of optimum-benchmark, and making sure it stays compatible with quantization schemes seems interesting.
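As a rough sketch of what that sweep might look like: optimum-benchmark configs are Hydra-based, so the `true,false` pair can be expressed as a multirun sweep. The key names and layout below are assumptions for illustration, not the repository's actual config files; adapt them to the real config structure.

```yaml
# Hypothetical Hydra sweep fragment (structure assumed, not taken from
# the optimum-benchmark repo). With Hydra's basic sweeper, a
# comma-separated value list under hydra.sweeper.params runs one job
# per value, so backend.no_weights is benchmarked both ways.
hydra:
  mode: MULTIRUN
  sweeper:
    params:
      backend.no_weights: true,false
```

The same sweep can also be requested on the command line with Hydra's multirun flag, e.g. `-m backend.no_weights=true,false`, without touching the config file.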

@aliabdelkader
Contributor

@lopozz no, I did not work on that scheme yet, so I would not mind if you want to write the test for it.
@IlyasMoutawwakil sure, I will include the `backend.no_weights: true,false` sweep in the quantization test configs.

@lopozz
Contributor

lopozz commented Feb 26, 2024

Hi @IlyasMoutawwakil, I was trying the same configuration file as #125 and I got this error:

AttributeError: 'LlamaRotaryEmbedding' object has no attribute 'cos_cached'

I believe it is caused by the recent refactoring. The last commit I see is 97e0b9e, from 4 days ago, so I believe I am working with the latest version. Should I open a new issue for this?

@IlyasMoutawwakil
Member Author

@lopozz I think it's a transformers issue (see unslothai/unsloth#168 (comment)); try downgrading your transformers version.

@aliabdelkader
Contributor

Hi,

I just want to mention that I won't be able to work on the point related to FAv2, in case someone else wants to pick it up.

IlyasMoutawwakil pushed a commit that referenced this issue Mar 22, 2024
Co-authored-by: Ali Abdelkader <aliabdelkader@users.noreply.github.com>