
CoreML convert error #315

Closed · mathpopo opened this issue Jul 7, 2020 · 18 comments
Labels: bug (Something isn't working), Stale

@mathpopo commented Jul 7, 2020

ONNX export success, saved as ./yolov5s.onnx

Starting CoreML export with coremltools 4.0b1...
WARNING:root:Tuple detected at graph output. This will be flattened in the converted model.
Converting Frontend ==> MIL Ops: 4%|▍ | 60/1415 [00:00<00:04, 303.59 ops/s]
CoreML export failure: PyTorch convert function for op leaky_relu_ not implemented

@github-actions (bot) commented Jul 7, 2020

Hello @mathpopo, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in real time.
  • Edge AI integrated into custom iOS and Android apps for real-time 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model export to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher (Member) commented

@mathpopo we have updated export.py a bit to better support the 3 export channels (ONNX, TorchScript and CoreML). See #251 for a tutorial on how to use export.py. Note, though, that export.py is mainly a guide that provides simple first steps for users like yourself to begin creating your own export pipelines; it does not provide end-to-end export functionality.

We do, however, offer paid end-to-end export services, as well as reference apps for iOS and Android in which to use your exported models. If you have a business idea for YOLOv5 at the edge, we'd be happy to help you get started! You can email glenn.jocher@ultralytics.com for details if interested.
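
For reference, an export.py run from that era looked roughly like the following (the script location and flags changed across versions, so treat them as illustrative rather than exact):

python models/export.py --weights yolov5s.pt --img 640 --batch 1

Assuming each conversion succeeds, this writes the ONNX, TorchScript and CoreML artifacts next to the weights file.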

@dlawrences (Contributor) commented Jul 9, 2020

@mathpopo

Building the latest coremltools (https://github.com/apple/coremltools) from source will fix this issue.

It has been dealt with in apple/coremltools@02ddf84
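
If building from source isn't an option, a stopgap some users have tried is to alias the missing in-place op to the existing out-of-place converter before export. This pokes at a private coremltools registry, so treat it as an unsupported, version-specific sketch:

# Unsupported sketch for coremltools 4.0b1: map the in-place leaky_relu_ op
# onto the existing leaky_relu converter. _TORCH_OPS_REGISTRY is a private
# API and may be named or structured differently in other versions.
from coremltools.converters.mil.frontend.torch.torch_op_registry import _TORCH_OPS_REGISTRY

if 'leaky_relu_' not in _TORCH_OPS_REGISTRY:
    _TORCH_OPS_REGISTRY['leaky_relu_'] = _TORCH_OPS_REGISTRY['leaky_relu']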

@imyoungyang commented

@dlawrences I can't find the documentation for building coremltools from source. Can you give me the link? Otherwise, I'll need to wait for the next coremltools release.

@dlawrences (Contributor) commented

Hi @imyoungyang

There are some details here...

You should do:

conda activate <your-python-environment>
cd <coremltools-root>  # the root of your coremltools clone
mkdir build && cd build
cmake ..
make install
cp ../setup.py ./
python setup.py install

Then you need to copy libcoremlpython.so from where you built the package (in my case, /Users/laurentiudiaconu/Downloads/coremltools-source/coremltools/) to your environment's coremltools installation (in my case, /Users/laurentiudiaconu/opt/miniconda3/envs/pytorch_15/lib/python3.7/site-packages/coremltools-4.0b1-py3.7.egg/coremltools/).
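
As a generic sketch of that copy step (every path below is a placeholder to adapt to your machine):

cp <coremltools-root>/coremltools/libcoremlpython.so \
   <conda-root>/envs/<env>/lib/python3.7/site-packages/coremltools-4.0b1-py3.7.egg/coremltools/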

@dlawrences (Contributor) commented Jul 21, 2020 via email

@joshgreifer commented Jul 21, 2020

Thanks, I deleted my comment, but not before you replied. I copied the Python code from the coremltools directory to my conda env, and now everything is fine; I've successfully exported my torch model to CoreML.

@dlawrences (Contributor) commented

Great @joshgreifer

@hovhanns commented Aug 6, 2020

Hi everyone,
I'm trying to convert the yolov5s.pt model to a CoreML model. The problem is that I need the Detect layer enabled, so I commented out this line:

model.model[-1].export = True # set Detect() layer export=True

But coremltools throws an exception:
CoreML export failure: node 2321 (expand) got 3 input(s), expected 2
With the line left in, it exports successfully but returns strange output, and it is unclear what to do with that output.
Is there any way to keep the Detect layer and convert it to CoreML?

Thanks.

@hovhanns commented Aug 7, 2020

I've used some suggestions from #343 and implemented the Detect layer's inference function as a post-processing step.


def detect_layer_inf(self, x):
    z = []
    for i in range(self.nl):
        bs, _, ny, nx, _ = x[i].shape  # x[i] is (bs, na, ny, nx, no), e.g. (bs, 3, 20, 20, 85)

        # rebuild the grid if the feature-map size changed
        if self.grid[i].shape[2:4] != x[i].shape[2:4]:
            self.grid[i] = self._make_grid(nx, ny).to(x[i].device)

        y = x[i].sigmoid()
        y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
        y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
        z.append(y.view(bs, -1, self.no))

    return torch.cat(z, 1)  # concatenate predictions from all detection scales

pred = model(img, augment=opt.augment)
detect_layer = model.model[-1]
pred = detect_layer_inf(detect_layer, pred)

Hope this will help.

@github-actions (bot) commented Sep 7, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@AlvinZheng commented

I have the same problem when exporting to an mlmodel, but could you tell me where to add this code?

@waheed0332 commented

@hovhanns can you explain your solution a little more?

@hovhanns commented

@AlvinZheng @waheed0332

# This line gets the output from the exported model.
pred = model(img, augment=opt.augment)
# This takes the last layer (the Detect layer).
detect_layer = model.model[-1]
# This calls the function I wrote above and does the final inference.
pred = detect_layer_inf(detect_layer, pred)

All of this is needed because of this line:

model.model[-1].export = True # set Detect() layer export=True

In fact, the model's output changes after exporting, and the post-processing above is necessary to get the right output.
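
Putting the pieces of this thread together, a rough end-to-end sketch might look like the following. The helper locations (models.experimental.attempt_load, utils.general.non_max_suppression) are assumptions based on the YOLOv5 repo layout and may differ in your checkout; detect_layer_inf is the function defined earlier in this thread.

# Hypothetical glue code; helper import paths are assumptions.
import torch
from models.experimental import attempt_load   # assumed YOLOv5 helper
from utils.general import non_max_suppression  # assumed YOLOv5 helper

model = attempt_load('yolov5s.pt', map_location='cpu')
model.model[-1].export = True   # Detect() now returns raw per-scale maps
model.eval()

img = torch.zeros(1, 3, 640, 640)             # dummy input at export resolution
raw = model(img)                              # list of per-scale feature maps
detect_layer = model.model[-1]
pred = detect_layer_inf(detect_layer, raw)    # decode boxes as shown above
pred = non_max_suppression(pred, 0.25, 0.45)  # illustrative thresholds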

@maidmehic commented Dec 1, 2020

Hi @hovhanns, what was your Vision request output after these changes, was it [VNCoreMLFeatureValueObservation] or [VNRecognizedObjectObservation]?

@hovhanns commented Dec 1, 2020

Hi @maidmehic, the output was only bounding boxes and classes (an Objective-C array); we then added some logic to draw the boxes on the captured picture.

@tcollins590 commented

@hovhanns do you have an example of how to run this in Swift?

@hovhanns commented Oct 25, 2021

@tylercollins590
I changed the Detect layer's forward function to return already-processed values, then I pass that array to the non_max_suppression algorithm. Here is the function.

def forward(self, x):
    # x = x.copy()  # for profiling
    z = []  # inference output
    self.training |= self.export
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

        # if not self.training:  # inference
        if self.grid[i].shape[2:4] != x[i].shape[2:4]:
            self.grid[i] = self._make_grid(nx, ny).to(x[i].device)

        y = x[i].sigmoid()
        y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i].to(x[i].device)) * self.stride[i]  # xy
        y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh

        z.append(y)  # .view(bs, -1, self.no))
        break  # note: only the first detection scale is processed here

    return z if self.training else (torch.cat(z, 1), x)

Now just export the model and pass its output to a non_max_suppression function (which can be implemented in Swift).
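
For anyone porting that last step, here is a minimal single-class greedy NMS in Python that can be translated to Swift almost line by line (the threshold is illustrative):

import torch

def simple_nms(boxes, scores, iou_thres=0.45):
    # boxes: (N, 4) tensor in x1, y1, x2, y2 format; scores: (N,) tensor.
    # Returns indices of the boxes to keep, highest score first.
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(int(i))
        if order.numel() == 1:
            break
        rest = order[1:]
        # intersection of the top-scoring box with all remaining boxes
        xx1 = torch.max(boxes[i, 0], boxes[rest, 0])
        yy1 = torch.max(boxes[i, 1], boxes[rest, 1])
        xx2 = torch.min(boxes[i, 2], boxes[rest, 2])
        yy2 = torch.min(boxes[i, 3], boxes[rest, 3])
        inter = (xx2 - xx1).clamp(min=0) * (yy2 - yy1).clamp(min=0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # drop every box that overlaps the kept box too much
        order = rest[iou <= iou_thres]
    return keep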
