Enable SD XL ONNX export and ONNX Runtime inference #1168
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Looks good, thanks for working on it! Could we also add export/run tests and documentation in this PR?
Edit: not sure what happened, I was not seeing the right diff; I see there are tests already!
Review comment on optimum/pipelines/diffusers/pipeline_stable_diffusion_xl_img2img.py (outdated, resolved)
LGTM, awesome!
(just left taste comments)
LGTM, thanks @echarlaix
* add stable diffusion XL export
* fix style
* fix test model name
* fix style
* remove clip with projection from test
* change model name
* fix style
* remove need create pretrainedconfig
* fix style
* fix dummy input generation
* add saving second tokenizer when exporting a SD XL model
* fix style
* add SD XL pipeline
* fix style
* add test
* add watermarker
* fix style
* add watermark
* add test
* set default height width stable diffusion pipeline
* enable img2img task
* fix style
* enable to only have the second tokenizer and text encoder
* add test
* fix cli export
* adapt test for batch size > 1
Is there any option to run fp16 without CUDA for the O4 optimization level on AMD?
Add Stable Diffusion XL ONNX export and pipelines to enable ONNX Runtime inference for the text-to-image and image-to-image tasks.
Export using the CLI:
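A minimal sketch of the export command; the checkpoint name and the `sd_xl_onnx/` output directory are illustrative examples, not values taken from this PR:

```shell
# Export a Stable Diffusion XL checkpoint to ONNX with the optimum CLI.
# The model id and output directory below are examples.
optimum-cli export onnx \
  --model stabilityai/stable-diffusion-xl-base-1.0 \
  --task stable-diffusion-xl \
  sd_xl_onnx/
```

The exported directory then contains the ONNX files for the pipeline components (text encoders, UNet, VAE) alongside the tokenizers and scheduler configuration.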
For inference: