
Leaky_relu -> tf.maximum(x, alpha*x) #165

Open
albertfaromatics opened this issue Jul 7, 2020 · 5 comments
Assignees
Labels
comp:compiler Compiler related issues comp:model Model related issues type:feature Feature requests

Comments

@albertfaromatics

I've been looking for information about quantization support for leaky_relu on the Coral, but I can't find anything about it being available, nor a possible future date.

Looking at the implementation of leaky_relu and the list of supported ops, I've seen that Maximum is supported. Could we implement tf.nn.leaky_relu as tf.maximum(x, alpha * x), with alpha < 1?

Will it work after quantization + compile?
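For what it's worth, the rewrite is a simple algebraic identity that's easy to check numerically. A minimal sketch, using NumPy in place of the TF ops (the helper name is mine, not from any library):

```python
import numpy as np

def leaky_relu_via_maximum(x, alpha=0.1):
    # For x >= 0: x >= alpha * x, so maximum returns x (the identity branch).
    # For x < 0:  alpha * x > x when alpha < 1, so maximum returns alpha * x.
    return np.maximum(x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(leaky_relu_via_maximum(x))  # [-0.2  -0.05  0.    1.    3.  ]
```

The same expression with tf.maximum should lower to a single Maximum op plus a constant multiply, both of which are on the supported-ops list.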

@Namburger Namburger added compiler enhancement New feature or request labels Jul 7, 2020
@Namburger

Namburger commented Jul 7, 2020

@albertfaromatics I'm not sure about leaky-relu, but we do have prelu in our roadmap which is quite similar.

tf.maximum(x, alpha * x), being alpha < 1

maximum is definitely a supported op, so in theory it should work!

@albertfaromatics
Author

@Namburger thanks for your response! We will wait, then. The model with ReLU is working fine, but since we are using YOLO, I think eliminating the dying-neurons problem will improve detection quite a bit, particularly of small objects!

@Namburger

Yes, the problem with small objects is complicated. Right now we're working on a purely application-side demo where we split images into smaller chunks and apply NMS to merge the resulting small-object detections. It has been working well for us, but it's not ideal.

@albertfaromatics
Author

That was our first approach: split the image into 4 sub-images and run multiple inferences.
Our main problem with this is that the Coral gave us a ~25x inference speedup, but when splitting into 4 sub-images the speedup dropped to only ~5x: not enough for our solution.

The implementation seems to train and quantize, but I haven't tested it yet. I will report back once I test it and see whether there's an improvement.

@Namburger I also have another question: I'm trying to do the same with mish:
y = x * tanh(ln(1 + e^x))

Looking at the supported ops on coral:

  • tanh appears
  • neither ln nor e^x appears. Does that mean they are not supported?
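For reference, the ln(1 + e^x) inside the tanh is softplus, so mish decomposes into a multiply, a tanh, an exp, and a log. A minimal NumPy sketch of the identity (the function name is mine, and NumPy stands in for the TF ops):

```python
import numpy as np

def mish(x):
    # mish(x) = x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x).
    # np.log1p(np.exp(x)) computes ln(1 + e^x) with better precision near 0.
    return x * np.tanh(np.log1p(np.exp(x)))

x = np.array([-1.0, 0.0, 1.0])
print(mish(x))  # mish(0) == 0, mish(1) ≈ 0.865
```

Whether this quantizes and compiles still hinges on exp and log being supported, which is exactly the question above.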

Thanks for your incredible work and help!

@Namburger

Namburger commented Jul 9, 2020

@albertfaromatics
:/ So I don't think TFLite's integer quantization supports tf.math.log yet (assuming that's what you want to use in place of ln).

This page will give you a good list of which ops are supported for quantization, and this page shows their roadmap.
Compiler support is usually blocked until we at least have quantization support in TFLite, so maybe this would be a good feature request.
