
Does TinyChart inference work on a CPU? #66

Open
matsuobasho opened this issue May 8, 2024 · 4 comments

Comments

@matsuobasho

I'm running the inference notebook and just changed device to 'cpu':

import torch
from PIL import Image
from tinychart.model.builder import load_pretrained_model
from tinychart.mm_utils import get_model_name_from_path
from tinychart.eval.run_tiny_chart import inference_model
from tinychart.eval.eval_metric import parse_model_output, evaluate_cmds

def show_image(img_path):
    img = Image.open(img_path).convert('RGB')
    img.show()

# Build the model
model_path = "mPLUG/TinyChart-3B-768"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, 
    model_base=None,
    model_name=get_model_name_from_path(model_path),
    device="cpu"
)

img_path = "my_image.png"
show_image(img_path)

text = "Create a brief summarization or extract key insights based on the chart image."
response = inference_model([img_path], text, model, tokenizer, image_processor, context_len, conv_mode="phi", max_new_tokens=1024)

I get an AssertionError on the last line:
AssertionError: Torch not compiled with CUDA enabled
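This assertion fires whenever any code path calls `.cuda()` or `torch.cuda.*` on a CPU-only PyTorch build, regardless of the `device` argument passed in. A minimal sketch of the guard that avoids it (`resolve_device` is a hypothetical helper, not part of TinyChart; `cuda_available` stands in for `torch.cuda.is_available()`):

```python
# Hypothetical helper, not TinyChart code: fall back to CPU when CUDA
# is requested but the PyTorch build has no CUDA support, instead of
# letting a stray .cuda() call raise AssertionError.
def resolve_device(requested: str, cuda_available: bool) -> str:
    if requested.startswith("cuda") and not cuda_available:
        return "cpu"
    return requested

print(resolve_device("cuda", cuda_available=False))  # cpu
print(resolve_device("cpu", cuda_available=False))   # cpu
```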

@zhangliang-04
Collaborator

Hi @matsuobasho,
TinyChart should be able to run on the CPU in principle, but this code may not support it yet. I will check and update the code this week.

@matsuobasho
Author

@zhangliang-04 thanks for the response, looking forward to the update

@zhangliang-04
Collaborator

> I'm running the inference notebook and just changed device to 'cpu' … I get an AssertionError on the last line: AssertionError: Torch not compiled with CUDA enabled

@matsuobasho I just updated the code to support CPU inference. However, I did not encounter the error you mentioned. Please pull the new code first and try again. If it still does not work, could you provide more information about the error (e.g., the full stack trace)?
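For completeness, one standard-library way to capture the full stack trace as a string so it can be pasted into an issue (the `raise` below is only a stand-in for the real failing `inference_model(...)` call):

```python
import traceback

# Wrap the failing call and capture the complete traceback as text.
# The raise here is a placeholder for the actual inference_model(...) call.
try:
    raise AssertionError("Torch not compiled with CUDA enabled")
except AssertionError:
    trace = traceback.format_exc()

print(trace)
```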

@matsuobasho
Author

matsuobasho commented May 14, 2024

@zhangliang-04 thank you for making this update. I incorporated the changes and it seems to be running. However, the inference_model step has now been running for 25 minutes on a 60 KB PNG file.

For reference, I'm using the same kind of prompt as in the tutorials:

text = "Create a brief summarization or extract key insights based on the chart image."
response = inference_model([img_path], text, model, tokenizer, image_processor, context_len, conv_mode="phi", max_new_tokens=1024)

I'm running on Windows 10 with 16 GB of RAM and an 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (a roughly three-year-old CPU).

At this processing speed, TinyChart is not usable for my purposes. Perhaps something is wrong in the default settings?
