
FYI: Successfully converted NanoDet to TensorFlow with eval implementation in JavaScript #187

Open
vladmandic opened this issue Mar 16, 2021 · 19 comments


@vladmandic

vladmandic commented Mar 16, 2021

In case anybody is interested, NanoDet works like a charm in TensorFlow and TensorFlow/JS

Converted models, full JS code as well as conversion notes are at:
https://github.com/vladmandic/nanodet

I've re-implemented score & box decoding to squeeze a bit more performance
and moved NMS as a post-decoding task to avoid calculation of low-probability boxes
(code is fully commented with explanations)
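The "NMS as a post-decoding task" idea can be sketched in plain JavaScript (a minimal illustration under my own assumptions, not the repository's actual code; `iou` and `nms` are hypothetical helpers operating on `[x1, y1, x2, y2, score]` boxes):

```javascript
// Intersection-over-union of two axis-aligned boxes given as [x1, y1, x2, y2].
function iou(a, b) {
  const ix = Math.max(0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
  const iy = Math.max(0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
  const inter = ix * iy;
  const areaA = (a[2] - a[0]) * (a[3] - a[1]);
  const areaB = (b[2] - b[0]) * (b[3] - b[1]);
  return inter / (areaA + areaB - inter);
}

// Greedy NMS over [x1, y1, x2, y2, score] boxes: low-probability boxes
// are discarded up front, so IoU is only computed for real candidates.
function nms(boxes, minScore = 0.20, iouThreshold = 0.40) {
  const sorted = boxes
    .filter((b) => b[4] >= minScore)
    .sort((a, b) => b[4] - a[4]);
  const kept = [];
  for (const box of sorted) {
    if (kept.every((k) => iou(k, box) < iouThreshold)) kept.push(box);
  }
  return kept;
}
```

Filtering by `minScore` before computing any IoU is what makes this cheap as a post-processing step.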

Implementation is now directly applicable to both nanodet-m and nanodet-g and it auto-adjusts
(nanodet-t converted model has some corrupted layers and cannot be used just yet)

Model works in NodeJS using the TensorFlow backend and in the browser using the WebGL backend, but does not work with the WASM backend due to a missing implementation of the SparseToDense op (tensorflow/tfjs#4824)

Thank you for a great TINY model!

@batrlatom

I have tried it, works great! I was trying to export the transformer version and I succeeded, but the inference code is not compatible with that version. Do you think you could take a look at this? I think I am missing something there. For ease, I am sharing an already converted model ... https://recall-models.s3.eu-central-1.amazonaws.com/nanodet_transformer/model.json

@vladmandic

it works with the m and g variations, but not with t (transformer).
there is one trivial change needed:

const baseSize = strideSize * 13;

should be * 10, as the transformer pretrained model has different convolution sizes
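That variant-dependent multiplier could be captured in a small lookup (hypothetical sketch; the thread only states 13 for nanodet-m/g and 10 for nanodet-t, the map name and function are my own):

```javascript
// Hypothetical per-variant multiplier for the decode grid size.
// Only the 13 (m, g) and 10 (t) values come from the thread above.
const baseMultiplier = { m: 13, g: 13, t: 10 };

function baseSize(strideSize, variant) {
  return strideSize * baseMultiplier[variant];
}
```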

but then you hit the critical error:

Message: In[0] and In[1] has different ndims: [400,1,128] vs. [128,384]

looking at the model code, this happens when two tensors passed to the matMul op are not compatible - something went wrong during the conversion

i'd need to go over the entire model workflow to figure out why (likely an incompatible broadcast, but that's just a guess), but in the end it seems the converter just cannot handle the transformer model
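The shape check behind that error message can be illustrated with a simplified, hypothetical validator (real backends also handle batch-dimension broadcasting, which this sketch deliberately skips):

```javascript
// Simplified matMul shape check: same rank, and the inner dimensions
// must agree (A's last dim == B's second-to-last dim). Returns an
// explanation string on mismatch, or null when the shapes line up.
function matMulError(shapeA, shapeB) {
  if (shapeA.length !== shapeB.length) {
    return `different ndims: [${shapeA}] vs. [${shapeB}]`;
  }
  const innerA = shapeA[shapeA.length - 1];
  const innerB = shapeB[shapeB.length - 2];
  if (innerA !== innerB) {
    return `inner dimensions disagree: ${innerA} vs. ${innerB}`;
  }
  return null; // compatible (ignoring batch-dim broadcasting)
}
```

The converted transformer fails the first check: [400,1,128] is rank 3 while [128,384] is rank 2, which suggests a reshape or squeeze was lost during conversion.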

@batrlatom

Ok, thank you for checking it out. If you have time at some point and try to adapt it for the transformer, just update it here so we will also be able to use it. I tried the transformer network in Colab and it seems like a great update over normal nanodet (320x320 resolution). It was able to pick out a person inside a bus, which surprised me. Great work with the conversion!!!

@RangiLyu RangiLyu pinned this issue Mar 19, 2021
@batrlatom

batrlatom commented Mar 19, 2021

@vladmandic btw. I was playing with the code a little more and I am unable to get results similar to the results before the conversion. I am using the same conversion code as you described in the gist and the same code for inference, but the results are different. Would you mind sharing the converted tfjs model for the basic mobiledet_m_416?

@vladmandic

@batrlatom here's a link, i'll keep it up for a few days:
https://1drv.ms/u/s!ArSgEgBb24AcnMJcgIY1iwsA52WSWg?e=8TFmBa

@batrlatom
thanks, I will take a look!

@batrlatom

batrlatom commented Mar 20, 2021

There isn't any problem with the conversion, as your model gives me the same result. The difference between the JS and vanilla implementations lies in the scores: scores calculated by your code are about half the value of the original implementation.

[attached screenshots: original output vs. output from JS]

Nevertheless, the boxes are in a similar spot and everything is fine; only the scores are off. Did you come across a similar problem, or did you not experience any bigger differences?

@vladmandic

vladmandic commented Mar 20, 2021

i've decided to skip the softmax op on the scores output tensor, so it's by choice

if you want, you can add it just before

const scoreIdx = scores.argMax(1).dataSync(); // location of highest scores

also, boxes are slightly tighter in the original, but there is a lot of math involved (it calculates every possible box and then interpolates) and it's not executed in the model, but as post-processing - and javascript is slow with that, so I'm cheating a bit (discarding most boxes early on and focusing on high value ones only) to get "good-enough" results.
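For reference, the softmax that was skipped can be written as a tiny, numerically stable helper over a flat array of logits (a plain-JS sketch of the math, not the repository's tfjs-based code):

```javascript
// Numerically stable softmax: subtracting the max before exponentiating
// avoids overflow without changing the resulting probabilities.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map((v) => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}
```

Applying this per row of the raw class scores before taking the argMax would bring the reported scores back in line with the original implementation, at a small per-frame cost.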

@batrlatom

I get it. Thank you for the help and great work!

@zye1996

zye1996 commented Mar 23, 2021

unable to reproduce the same result as the original; it seems no person class is detected. wondering if you have any idea? Thx

[attached screenshots: original vs. nanodet detection results]

@vladmandic

probably i do have a bug - i'll take a look tomorrow.
most likely it's because different stride sizes require different score normalizations, and i skipped that part.

can you upload the original of the same photo so i can directly compare results in tests?

@zye1996

zye1996 commented Mar 23, 2021

sure, no problem. here is the image:

[attached test image]

@vladmandic

vladmandic commented Mar 23, 2021

I've created a git repository, as it's easier to update than a gist
(plus i can upload the actual model files):
https://github.com/vladmandic/nanodet

The issue was that I was looking only at the highest score within any given stride, while in reality you can have two different valid objects near each other and slightly overlapping (like the person and bicycle in your image).
I've fixed that and now the results are directly comparable.

Also, implementation is now directly applicable to both nanodet-m and nanodet-g and it auto-adjusts
(nanodet-t cannot be converted just yet)

To fine-tune the results, modify modelOptions in nanodet.js,
specifically the values for minScore and iouThreshold
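A hypothetical sketch of those options (only minScore and iouThreshold are named above; the default values here are assumptions, check nanodet.js for the actual definition):

```javascript
// Hypothetical shape of the tuning options; only the two field names
// come from the thread, the values are illustrative placeholders.
const modelOptions = {
  minScore: 0.20,     // discard detections below this confidence
  iouThreshold: 0.40, // suppress overlapping boxes above this IoU
};
```

Raising minScore trades recall for fewer false positives; lowering iouThreshold suppresses overlapping boxes more aggressively.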

@batrlatom

I have tried your new code for nanodet.js with a custom model, but without success (the old code somehow works). Do you think you could take a look at it? For testing purposes, the model is at: https://recall-models.s3.eu-central-1.amazonaws.com/nanodet_hardhat/model.json . There are 3 classes, and you can take an image for prediction, for example, at https://images.unsplash.com/photo-1603516270950-26e4f5004ffd?ixid=MXwxMjA3fDB8MHxzZWFyY2h8Mnx8aGFyZGhhdHxlbnwwfHwwfA%3D%3D&ixlib=rb-1.2.1&w=1000&q=80

@vladmandic

vladmandic commented Mar 26, 2021

@batrlatom

my implementation had the number of expected classes hard-coded to 80,
as that is the number of coco classes, which is what the original model was trained on:

  const scoresT = res.find((a) => (a.shape[1] === (baseSize ** 2) && a.shape[2] === 80))?.squeeze();

and your custom model has only 3 classes, as can be seen from the model.signature.output:

'Identity:0': { name: 'Identity:0', dtype: 'DT_FLOAT', tensorShape: { dim: [ { size: '1' }, { size: '2704' }, { size: '3' } ] } },

so my implementation could not find the correct tensor that represents scores,
and you can see that in the actual output:
Found scores tensor: undefined

I just changed the hard-coded value to labels.length,
so as long as you're importing (or defining as an array) labels that match the model, you should be fine:

const { labels } = require('./coco-labels');

note: just make sure your number of classes is never 32 or 44, as those are the dimensions
of the boxes tensors and there would be nothing else to base the tensor-match operation on :)
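The fix amounts to matching the class dimension against labels.length instead of a hard-coded 80. A plain-JS sketch over shape metadata (a simplified, hypothetical version of the repository's tensor lookup; `findScoresTensor` is my own name):

```javascript
// Given the output tensors' shapes ([batch, cells, channels]) and the
// label list, find the index of the tensor whose last dimension equals
// the number of classes - matching labels.length, not a hard-coded 80.
function findScoresTensor(shapes, labels, baseSize) {
  return shapes.findIndex(
    (s) => s[1] === baseSize ** 2 && s[2] === labels.length,
  );
}
```

With the 3-class hardhat model's shape [1, 2704, 3] and a 3-entry label list, the lookup now succeeds; with the old hard-coded 80 it returned nothing.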

@batrlatom

Works well, thank you!

@Techn08055

> [quotes the original post]

Hey, I tried converting to TensorFlow and tflite, but the model seems to perform poorly. I tried the inference provided here:
https://github.com/PINTO0309/PINTO_model_zoo/blob/main/072_NanoDet/demo/demo_tflite.py

@lijianyuhoopox

looks like the model is not OK.

[attached screenshot]

@adithya1-7

Hey @vladmandic
Can you help me with this issue
#460
