Fix AVX512 error when using low or med vram with ipex
Disty0 committed May 26, 2023
1 parent 46d7d2f commit 8022de7
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion modules/sd_models.py
@@ -434,7 +434,7 @@ def load_model(checkpoint_info=None, already_loaded_state_dict=None, timer=None)
     sd_hijack.model_hijack.hijack(sd_model)
     timer.record("hijack")
     sd_model.eval()
-    if shared.cmd_opts.use_ipex:
+    if shared.cmd_opts.use_ipex and not (shared.cmd_opts.lowvram or shared.cmd_opts.medvram):
         sd_model = torch.xpu.optimize(sd_model, dtype=devices.dtype)
     shared.log.info("Applied IPEX Optimize")
     model_data.sd_model = sd_model
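The one-line change above can be sketched in isolation: IPEX optimization is applied only when `--use-ipex` is set and neither memory-saving flag is active. The function name `maybe_ipex_optimize` and the stubbed `optimize` callable are illustrative only; the real call is `torch.xpu.optimize(sd_model, dtype=devices.dtype)` from intel_extension_for_pytorch, which is not assumed to be installed here.

```python
from types import SimpleNamespace

def maybe_ipex_optimize(sd_model, cmd_opts, optimize):
    # Mirror of the patched condition: optimize only when --use-ipex is set
    # and neither --lowvram nor --medvram is active.
    if cmd_opts.use_ipex and not (cmd_opts.lowvram or cmd_opts.medvram):
        return optimize(sd_model)
    return sd_model

# Stub standing in for torch.xpu.optimize, so the guard can be shown
# without Intel hardware or ipex installed.
tag = lambda m: m + "-optimized"

low = SimpleNamespace(use_ipex=True, lowvram=True, medvram=False)
full = SimpleNamespace(use_ipex=True, lowvram=False, medvram=False)

print(maybe_ipex_optimize("model", low, tag))   # model (skipped: lowvram set)
print(maybe_ipex_optimize("model", full, tag))  # model-optimized
```

With `--lowvram` or `--medvram`, the model is sliced/offloaded, so a whole-model XPU optimization pass would fail — hence the skip.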

5 comments on commit 8022de7

@TotalDay
00:16:54-870571 ERROR Error running pip: install --upgrade torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
00:16:54-871927 ERROR Could not load torch: No module named 'torch'

@TotalDay commented on 8022de7 May 26, 2023

@Disty0 (Collaborator, Author) commented on 8022de7 May 26, 2023

Open an issue instead of commenting on unrelated commits.

This is an issue on Intel's servers. It seems like they removed the latest version of ipex.
Switch to Automatic's virtual environment and install them manually:

source venv/bin/activate

And then use pip install --upgrade for each package in your message.
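Concretely, the two steps combine into the following, using the package versions from the failed pip command quoted above (run from the webui root; a sketch of the manual workaround, not a command from the repo):

```shell
source venv/bin/activate
pip install --upgrade torch==1.13.0a0 torchvision==0.14.1a0 \
    intel_extension_for_pytorch==1.13.120+xpu \
    -f https://developer.intel.com/ipex-whl-stable-xpu
```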

@vladmandic (Owner)

@Disty0 how fast is torch.xpu.optimize? should it be enabled by default or behind a flag?

@Disty0 (Collaborator, Author) commented on 8022de7 May 27, 2023

> @Disty0 how fast is torch.xpu.optimize? should it be enabled by default or behind a flag?

It doesn't seem to change much for me, but it added 10-20% extra performance for other users.
torch.xpu.optimize takes less than a second to apply.
We could put it behind another flag, but I enabled it with IPEX by default since we are already using --use-ipex.
