Convert models to TorchScript #46
TorchScript does not currently support these models.
Got it. Is that because some of the ops aren't supported yet? Is there another way to deploy these models to a C++ environment (e.g. ONNX → Caffe2 or TensorRT)? Is this lack of support true for object detection models in general, or is it more specific to the SOTA implementations in Detectron? How much work would it be to get one of these models into a C++-compatible format? Thanks!
We're working on getting TorchScript support. onnx/caffe2 deployment support (discussed in #8) is
Thank you! Do you know if this lack of support is true for object detection models in general, or is it more specific to the SOTA implementations in Detectron? And second, does TorchScript support the operations used in Detectron, or does TorchScript require source changes to make this work?
I'm curious to know the best way to deploy object detection models trained in PyTorch to an optimized format runnable in C++.
@bfortuner Using libtorch does not gain much acceleration in terms of speed. Exporting to ONNX and converting to a TensorRT engine is the best way to deploy these models. Also, onnxruntime tries to support all ops on top of its TensorRT provider, but many of them are not supported and have to run on the CPU.
@ppwwyyxx Thanks for the added clarity. Could you expand at all on what you mean by "take some time to be ready"? Is that something like for the next release, or more in some unknown distant future?
Yeah, I'm wondering if there is a tutorial/paper about recommended approaches to C++ deployment with PyTorch. It seems there are a lot of different ways, but it's not clear what the "best" way is, or what the PyTorch team recommends for the future. I'll post in the PyTorch discussion forum!
FYI torchvision models (including Faster R-CNN and Mask R-CNN) will soon support exporting their models to both ONNX and TorchScript; see pytorch/vision#1461, pytorch/vision#1407 and pytorch/vision#1401 for some representative PRs. I believe the learnings from this conversion work done for torchvision models will be very helpful for planning how to make detectron2 models exportable to TorchScript.
Thanks for the update! I'm curious to know if TorchScript needs to make changes too (are there any hard blockers), or is it mostly on our end to make our code compatible with the current TorchScript API? The PRs above suggest it will still be a burden for our developers to bring their SOTA models into production.
@bfortuner I think it will be a two-sided change: TorchScript support for Python features will continue improving, but users might need to adapt their code a bit to better fit the currently supported subset. As pytorch/vision#1407 already shows, a complicated model such as Mask R-CNN can already be converted to TorchScript without changing the code too much (although the original code took some precautions to avoid using too many Python features). cc @suo, who can give a more accurate picture of TorchScript.
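To illustrate the kind of adaptation being discussed, here is a minimal sketch of writing a module so that `torch.jit.script` can compile it: explicit type annotations and no dynamic Python features. `TinyHead` is a hypothetical stand-in for illustration, not detectron2 code.

```python
import torch
import torch.nn as nn


class TinyHead(nn.Module):
    """A toy head-like module written to be TorchScript-friendly:
    annotated signatures, no dynamic attribute tricks."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(x))


# torch.jit.script compiles forward() into the TorchScript IR
scripted = torch.jit.script(TinyHead(4, 2))
out = scripted(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```

The scripted module behaves like the eager one but can be saved and executed without a Python interpreter.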
Say I want to convert a detectron2 Mask R-CNN model to C++ (ideally using TorchScript/libtorch); what's the current best approach? I tried various things last week but found no good solution. Things I tried (using recent detectron2, PyTorch and torchvision code):
I get similar errors when trying to convert other layers (for example, TorchScript didn't support
Hi all, looks like PyTorch 1.4 and torchvision 0.5 have made progress on this and a couple of related issues. When will we see the updates rolling out to detectron2? Please see my related question here on the forum: https://discuss.pytorch.org/t/pytorch-1-4-torchvision-0-5-vs-detectron/67002
Hi, I am also having some problems with JIT conversion. It raises an error:
Since ONNX conversion fixes the input size, it is not suitable in my case. Any help, please?
An obvious disadvantage of ONNX is that we need to fix the input size, but some detection models can take inputs of flexible size. JIT support is necessary and urgent.
Any good news?
Progress has been made recently (https://github.com/facebookresearch/detectron2/pulls?q=is%3Apr+author%3Achenbohua3+) on this issue, and if everything goes well most models should be scriptable within a few months.
Very thorough attempt! Did you figure out a way to export the model to an ONNX model that can be loaded by other runtimes, or to a TorchScript model?
Subscribing to the thread.
Thanks a lot for all the amazing work being done on this project; it's appreciated a lot! I understand that detectron2 models are currently not scriptable with TorchScript. @ppwwyyxx, could you please elaborate on what exactly is missing to make Mask R-CNN and PointRend scriptable? Is it blocked by pytorch/pytorch#36061?
pytorch/pytorch#36061 is the main blocker.
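The blocker above concerns scripting custom Python classes (such as detectron2's `Instances`). For simple classes with annotated attributes, TorchScript class support already works; a minimal sketch, where `BoxStore` is a hypothetical stand-in and not the detectron2 class:

```python
import torch


# A TorchScript class: attribute types are inferred from the annotated
# __init__, and all methods are compiled.
@torch.jit.script
class BoxStore(object):
    def __init__(self, boxes: torch.Tensor):
        self.boxes = boxes  # (N, 4) tensor of x1, y1, x2, y2 boxes

    def area(self) -> torch.Tensor:
        b = self.boxes
        return (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])


# Scripted classes can still be instantiated and used from eager Python.
store = BoxStore(torch.tensor([[0.0, 0.0, 2.0, 3.0]]))
areas = store.area()  # tensor([6.])
```

The harder part tracked by the issue is richer class behavior (dynamic fields, generics, interplay with serialization), which `Instances` relies on.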
Replacing the lists of modules with nn.ModuleList: @ppwwyyxx do you think it makes sense to wait for proper support for classes in TorchScript, or rather to change the implementation of
I haven't got to that step yet (since we can't break pre-trained models), but I'll go double-check the story around scripted classes in C++. (See detectron2/detectron2/export/torchscript_patch.py, lines 196 to 197 at 4ef254f.)
It turns out that converting the
Hi @tkuenzle, do you gain anything in terms of time per frame with the C++/libtorch version? For a single frame, or maybe by running multiple C++ threads in parallel?
I cannot really comment on time per frame because our focus is on running the model on mobile devices. I don't think sharing code would be that helpful, because it mostly depends on which models you want to script. Thanks to the work of @chenbohua3, most of the heads are scriptable already, and thus the effort to make complete models scriptable is rather small. The main steps you have to take are the following:

```python
def forward(self, input):
    output = self.model(input)
    return [o["instances"].pred_masks for o in output]
```

I hope this helps!
FYI we just added support for scripting & tracing of the most common models (R-CNN and RetinaNet). They can be exported to TorchScript format successfully. There aren't proper APIs & docs yet, but basic usage is now shown in the unit tests (see detectron2/tests/test_export_torchscript.py, lines 23 to 150 at f1d0c05).
Thanks a lot, that's great news @ppwwyyxx! Would you be willing to accept PRs for making some of the other models scriptable?
@ppwwyyxx When I run the test, I get this error
Seems to be related to the issue above. Is there something I'm supposed to do to preprocess the models so they don't have lists and instead have ModuleLists?
If I add

```python
model.backbone.bottom_up.stages = nn.ModuleList(model.backbone.bottom_up.stages)
model.backbone.lateral_convs = nn.ModuleList(model.backbone.lateral_convs)
model.backbone.output_convs = nn.ModuleList(model.backbone.output_convs)
```

it seems to work, but only for a single image. Does batched mode not work yet?
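The `nn.ModuleList` replacement matters because a plain Python list hides submodules from TorchScript (and from `.parameters()`), while `nn.ModuleList` registers them properly. A minimal sketch contrasting the two; `ToyBackbone` is illustrative, not detectron2 code:

```python
import torch
import torch.nn as nn


class ToyBackbone(nn.Module):
    def __init__(self, use_module_list: bool):
        super().__init__()
        stages = [nn.Linear(4, 4) for _ in range(3)]
        # nn.ModuleList registers the submodules; a plain list does not,
        # so TorchScript cannot resolve self.stages when scripting forward.
        self.stages = nn.ModuleList(stages) if use_module_list else stages

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for stage in self.stages:
            x = stage(x)
        return x


scripted = torch.jit.script(ToyBackbone(use_module_list=True))  # compiles

plain_list_failed = False
try:
    torch.jit.script(ToyBackbone(use_module_list=False))
except Exception:
    plain_list_failed = True  # scripting a plain list of modules raises
```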
@danielgordon10 your PyTorch is still not new enough.
@ppwwyyxx What's the minimum PyTorch version? That was yesterday's nightly.
It now requires yesterday's PyTorch commits, which are supposed to be in today's nightly. I'm closing this issue because the scope is too general (also renaming it so it only involves TorchScript) and the majority of the work is done. There are some remaining TODOs about usability that should be addressed as separate issues:
Thanks a lot to the PyTorch JIT team and @chenbohua3 @bddpqq from Alibaba for making this happen!
I was able to successfully convert the model, thank you. But when I use the model in an Android project, I get the following error:

```
2021-02-27 02:25:36.537 21060-21060/org.pytorch.demo.imagesegmentation E/AndroidRuntime: FATAL EXCEPTION: main
Process: org.pytorch.demo.imagesegmentation, PID: 21060
java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.demo.imagesegmentation/org.pytorch.imagesegmentation.MainActivity}: com.facebook.jni.CppException:
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
:
File "/usr/local/lib/python3.7/dist-packages/torchvision/ops/boxes.py", line 42
    """
    _assert_has_ops()
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
           ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 26
    _8 = __torch__.torchvision.extension._assert_has_ops
    _9 = _8()
    _10 = ops.torchvision.nms(boxes, scores, iou_threshold)
          ~~~~~~~~~~~~~~~~~~~ <--- HERE
    return _10
```
@Muratoter please follow the instructions in https://github.com/pytorch/android-demo-app/tree/master/D2Go to get detectron2 models running on Android. Note that you need to add
to your
Hello, I trained a model and converted it to model.ts successfully. Can we use it on Windows 10? I get an error when loading the model, on this line:
Hello all, I have solved the TorchScript integration with accurate results.
@sctrueew
Do you have any examples of how to convert these models into a format runnable in C++?