
When will the new version 1.23 be available #2036

Closed

ponponon opened this issue Sep 29, 2024 · 3 comments

Comments

@ponponon

Feature request

transformers has been updated to 4.45.1, but optimum requires `transformers[sentencepiece]<4.45.0,>=4.29`, which prevents me from using the new transformers version.
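The conflict can be checked with plain version-tuple comparison. Below is a tiny sketch (`satisfies` is a hypothetical helper, not part of optimum or pip) using the constraint bounds from the report:

```python
def satisfies(version: str, lower=(4, 29, 0), upper=(4, 45, 0)) -> bool:
    """Check a transformers version string against optimum's pin
    transformers[sentencepiece]>=4.29,<4.45.0 (hypothetical helper)."""
    parts = tuple(int(x) for x in version.split("."))
    # Pad short versions like "4.29" to three components.
    parts = parts + (0,) * (3 - len(parts))
    return lower <= parts < upper

print(satisfies("4.44.2"))  # True: inside the allowed range
print(satisfies("4.45.1"))  # False: rejected by the <4.45.0 upper bound
```

This is why pip refuses to install transformers 4.45.1 alongside the released optimum: the upper bound excludes it.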

Motivation

Because I need `from transformers import Qwen2VLForConditionalGeneration`, transformers must be updated to 4.45.1.

Otherwise, the following error is reported:

```
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Traceback (most recent call last):
  File "/home/pon/code/me/modelscope_example/Qwen2-VL-7B-Instruct-GPTQ-Int4.py", line 7, in <module>
    model = Qwen2VLForConditionalGeneration.from_pretrained(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3447, in from_pretrained
    hf_quantizer = AutoHfQuantizer.from_config(config.quantization_config, pre_quantized=pre_quantized)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages/transformers/quantizers/auto.py", line 144, in from_config
    return target_cls(quantization_config, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pon/.local/share/virtualenvs/modelscope_example-DACykz4b/lib/python3.11/site-packages/transformers/quantizers/quantizer_gptq.py", line 47, in __init__
    from optimum.gptq import GPTQQuantizer
ModuleNotFoundError: No module named 'optimum'
```

Your contribution

There is no PR to provide, but I think future versions could remove the upper-bound version constraint on transformers and keep only the lower bound.
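The proposed change would look roughly like this in optimum's dependency list (a hypothetical sketch, not the actual `setup.py`; whether an upper bound is safe to drop is a maintainer decision, since new transformers releases can break optimum):

```python
# Hypothetical excerpt of a setup.py dependency list:
# drop the "<4.45.0" upper bound and keep only the lower bound.
INSTALL_REQUIRE = [
    "transformers[sentencepiece]>=4.29",
]
```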

@h3110Fr13nd
Contributor

I think this issue has already been taken care of in #2023 .
The support for transformers v4.45 has been added.

@ponponon This issue should be closed.

@johnnynunez

But when will the next release tag, v1.23.0, be published? @h3110Fr13nd

@echarlaix
Collaborator

Hi @ponponon, v1.23.0, supporting transformers v4.45.*, is now out.
