This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

Add image encoder onnx export #326

Open · wants to merge 7 commits into main

Conversation

Dominic23331

This allows exporting the image encoder to ONNX. I created a new file to export it.
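For context, exporting the encoder comes down to tracing `sam.image_encoder` with `torch.onnx.export`. The sketch below is illustrative rather than the exact code this PR adds; the checkpoint path, output name, and opset version are assumptions.

```python
# Minimal sketch of exporting the SAM image encoder to ONNX.
# Not the PR's script; checkpoint/output paths and opset are examples.
import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
encoder = sam.image_encoder.eval()

# SAM's image encoder expects a preprocessed 1x3x1024x1024 image tensor.
dummy_input = torch.randn(1, 3, 1024, 1024)

torch.onnx.export(
    encoder,
    dummy_input,
    "image_encoder.onnx",
    input_names=["image"],
    output_names=["image_embeddings"],
    opset_version=17,
)
```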

@facebook-github-bot

Hi @Dominic23331!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@facebook-github-bot added the CLA Signed label on May 4, 2023.
@Dominic23331
Author

I found some problems when I tried to export the vit_h model, and I'm going to fix them.

"--use-preprocess",
action="store_true",
help=(
"Replaces the model's predicted mask quality score with the stability "


@Dominic23331 Is something wrong with this help text?

@Dominic23331
Author


I will correct it as soon as possible.
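For reference, the current help string looks copied from the --use-stability-score flag in scripts/export_onnx_model.py. A hypothetical corrected version, assuming the flag bakes the image preprocessing into the exported graph (the author's final wording may differ):

```python
# Hypothetical corrected help text; the flag itself comes from this PR,
# the wording below is only a suggestion.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--use-preprocess",
    action="store_true",
    help=(
        "Embed the image preprocessing (resizing and pixel normalization) "
        "in the exported ONNX graph so it accepts raw images directly."
    ),
)
```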

@zoey9628

It doesn't work for my code. Please help!
The error is:
[screenshot of the error]

@Dominic23331
Author

> It doesn't work for my code. Please help! The error is: [screenshot of the error]

Which model are you exporting? You must use PyTorch 2.0 to export the ONNX model.
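A tiny guard (not part of the PR) can enforce that requirement up front:

```python
# Illustrative only: fail fast if the installed PyTorch is older than the
# 2.0 release recommended for this export.
import torch

if int(torch.__version__.split(".")[0]) < 2:
    raise RuntimeError(
        f"PyTorch {torch.__version__} found; the image encoder export "
        "is reported to need PyTorch >= 2.0."
    )
```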

@zoey9628

> Which model are you exporting? You must use PyTorch 2.0 to export the ONNX model.

Does exporting the encoder model also require PyTorch 2.0? My torch version is 1.10.0.

@Dominic23331
Author

> Does exporting the encoder model also require PyTorch 2.0? My torch version is 1.10.0.

Try exporting it with 2.0.

@zoey9628

> Try exporting it with 2.0.

OK, thanks!!

@DickyQi

DickyQi commented May 18, 2023

I ran into a problem when converting the vit-h encoder model:

Traceback (most recent call last):
  File "scripts/export_image_encoder.py", line 183, in <module>
    run_export(
  File "scripts/export_image_encoder.py", line 167, in run_export
    weights = np.frombuffer(fp.read(), dtype=np.float32)
ValueError: buffer size must be a multiple of element size

Torch version: 2.0.1
Any ideas?

@zoey9628

[screenshot of the exported files]
After converting, a lot of small operator files like these are generated. Is this export result normal? How do I merge them into one?

@Dominic23331
Author

> After converting, a lot of small operator files like these are generated. Is this export result normal? How do I merge them into one?

Are you exporting the vit_h model? Because the vit_h model is larger than 2 GB, PyTorch saves the parameters and the network structure separately. Importing the parameters when you load the model should be enough; see the onnxruntime documentation for details.
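For readers hitting the same thing, here is a sketch of the two usual options, assuming the export produced vit_h_encoder/image_encoder.onnx plus external weight files in the same folder (the paths are examples, not from the PR):

```python
# Assumes onnx and onnxruntime are installed and the exported .onnx file
# sits next to its external weight files.
import onnx
import onnxruntime as ort

model_path = "vit_h_encoder/image_encoder.onnx"  # example path

# Option 1: run it directly; onnxruntime resolves the external weight files
# by their relative paths, so keep them in the same directory.
session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

# Option 2: consolidate the many small weight files into a single file.
model = onnx.load(model_path)  # pulls the external data into memory
onnx.save_model(
    model,
    "vit_h_encoder/image_encoder_merged.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="image_encoder_weights.bin",
)
```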

@Dominic23331
Author

> I ran into a problem when converting the vit-h encoder model: ... ValueError: buffer size must be a multiple of element size. Torch version 2.0.1. Any ideas?

When exporting the vit_h model, use a folder to store it, so this issue will not occur. I will fix this bug as soon as possible.
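In practice, "use a folder" can look like the sketch below, assuming the standard vit_h checkpoint name; when the traced graph exceeds 2 GB, torch writes the extra weight files next to the .onnx file inside that folder:

```python
# Illustrative only; the PR's export_image_encoder.py may differ.
import os
import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
encoder = sam.image_encoder.eval()

output_dir = "vit_h_encoder"  # dedicated folder for the >2 GB export
os.makedirs(output_dir, exist_ok=True)

torch.onnx.export(
    encoder,
    torch.randn(1, 3, 1024, 1024),
    os.path.join(output_dir, "image_encoder.onnx"),
    input_names=["image"],
    output_names=["image_embeddings"],
    opset_version=17,
)
```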

@zoey9628

> Are you exporting the vit_h model? ... see the onnxruntime documentation for details.

Yes, I'm using the vit_h model. Thanks for the reply!!

@neutron-1114

Thanks for sharing, I've deployed it successfully. Great work!

@stihuangyuan

> Yes, I'm using the vit_h model. Thanks for the reply!!

Hi, have you managed to convert both the vit_h encoder and decoder to ONNX and use them successfully?

@chl916185

> When exporting the vit_h model, use a folder to store it, so this issue will not occur. I will fix this bug as soon as possible.

Have you solved this yet?

Labels
CLA Signed (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed)

8 participants