Does TinyChart inference work on a CPU? #66
Comments
Hi @matsuobasho,
@zhangliang-04 thanks for the response, looking forward to the update
@matsuobasho I just updated the code to support CPU inference. However, I do not encounter the error you mentioned. You can first pull the new code and try again. If it still does not work, could you provide more information about the error (e.g. the stack trace)?
@zhangliang-04 thank you for making this update. I incorporated the changes and it seems to be running. HOWEVER, the inference is extremely slow. For reference, I am using the same type of prompt as in the tutorials:

```python
text = "Create a brief summarization or extract key insights based on the chart image."
response = inference_model([img_path], text, model, tokenizer, image_processor, context_len, conv_mode="phi", max_new_tokens=1024)
```

I'm running on Windows 10 with 16 GB of RAM and an 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz CPU (3 years old). At this processing speed, TinyChart is not useful.
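CPU-only generation with `max_new_tokens=1024` is expected to be slow on a laptop-class chip. Two generic PyTorch settings worth checking before concluding it is unusable (a hedged sketch, not TinyChart-specific code — the `inference_model` call is only referenced in a comment):

```python
import os
import torch

# Make sure PyTorch is allowed to use every CPU core for intra-op
# parallelism; on some Windows setups the default can be lower.
torch.set_num_threads(os.cpu_count() or 1)
print("threads:", torch.get_num_threads())

# inference_mode() disables autograd bookkeeping, which saves memory
# and some time during pure-inference runs on CPU.
with torch.inference_mode():
    # response = inference_model([img_path], text, model, tokenizer,
    #                            image_processor, context_len,
    #                            conv_mode="phi", max_new_tokens=1024)
    pass
```

Lowering `max_new_tokens` also shortens generation time roughly proportionally, since decoding is token-by-token.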
I'm running the inference notebook and just changed the device to 'cpu'. I get an AssertionError on the last line:

```
AssertionError: Torch not compiled with CUDA enabled
```
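That assertion is raised by PyTorch itself whenever code tries to move a tensor or model to CUDA on a CPU-only build, so setting a single `device` variable is not enough if any call site still hard-codes `"cuda"`. A minimal defensive pattern (a sketch assuming a plain torch setup, not the exact TinyChart loading code) is to resolve the device at runtime:

```python
import torch

# Fall back to CPU when no usable CUDA runtime is present
# (including builds of torch compiled without CUDA support).
device = "cuda" if torch.cuda.is_available() else "cpu"

# float16 is poorly supported on CPU; use float32 there.
dtype = torch.float16 if device == "cuda" else torch.float32

# Any tensor/model placement should then go through these variables.
x = torch.zeros(2, 2, dtype=dtype).to(device)
print(device, x.dtype)
```

Grepping the loading code for hard-coded `.cuda()` or `"cuda"` strings usually locates the remaining offender.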