
Qwen API #32

Open
shp216 opened this issue Jul 9, 2024 · 3 comments
shp216 commented Jul 9, 2024

Hello. Thank you for your great research.
I want to use Mobile-Agent-v2, and I'm wondering whether the Qwen API is essential for using it.
Since I'm an international user, it seems impossible for me to obtain a Qwen API key.
Could you let me know if this means I cannot use this model?

I know that version 1 could be used with just the GPT API, but I'd like to know whether version 2 can be used without the Qwen API.

junyangwang0410 (Collaborator) commented


Hello.

The Qwen API is not necessary, because you can choose to deploy locally instead. The following may help: https://github.com/X-PLUG/MobileAgent/tree/main/Mobile-Agent-v2#choose-the-appropriate-execution-method-for-your-needs

Choose the caption model:
If you choose the "local" method, you need to choose between "qwen-vl-chat" and "qwen-vl-chat-int4". "qwen-vl-chat" requires more GPU memory but offers better performance than "qwen-vl-chat-int4". In this case, "qwen_api" can be left empty.
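The choice above can be sketched as a small helper. This is a hypothetical sketch, not the repo's actual code: the setting names `caption_call_method`, `caption_model`, and `qwen_api` mirror the options described in this thread, and the 24 GB threshold for the full model is an assumption (only the 12 GB figure for the int4 variant comes from this thread).

```python
# Hypothetical configuration sketch for choosing a caption method.
# Setting names mirror the options described in the thread; check the
# repo's run script for the actual identifiers.

def choose_caption_config(gpu_mem_gb: float) -> dict:
    """Pick a caption setup based on available GPU memory (in GB)."""
    if gpu_mem_gb >= 24:        # assumed threshold for the full model
        caption_model = "qwen-vl-chat"
    elif gpu_mem_gb >= 12:      # 12 GB suffices for the int4 variant
        caption_model = "qwen-vl-chat-int4"
    else:
        # Not enough memory for a local 7B model: fall back to the API.
        return {
            "caption_call_method": "api",
            "caption_model": None,
            "qwen_api": "<your-dashscope-key>",  # placeholder
        }
    return {
        "caption_call_method": "local",
        "caption_model": caption_model,
        "qwen_api": "",  # can be left empty in local mode
    }
```

With this helper, a machine with a 12 GB GPU would get the int4 variant, while anything smaller falls back to the "api" method.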

shp216 commented Jul 9, 2024

Could you please let me know the minimum GPU requirements for running the "local" method?

> If your device is not enough to run a 7B LLM, choose the "api" method. We use parallel calls to ensure efficiency.

junyangwang0410 (Collaborator) commented


12 GB if you use qwen-vl-chat-int4.
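To check whether your device clears that 12 GB bar, you can query the GPU's total memory. A minimal sketch, assuming PyTorch is installed; it degrades gracefully to 0 GB when no CUDA device (or no PyTorch) is available:

```python
def total_gpu_mem_gb() -> float:
    """Return total memory of GPU 0 in GB, or 0.0 if unavailable."""
    try:
        import torch
        if torch.cuda.is_available():
            return torch.cuda.get_device_properties(0).total_memory / 1e9
    except ImportError:
        pass
    return 0.0

mem = total_gpu_mem_gb()
verdict = "local int4 feasible" if mem >= 12 else "use the 'api' method"
print(f"Total GPU memory: {mem:.1f} GB ({verdict})")
```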
