TFlite output data #1981
👋 Hello @Tommydw, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution. If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available. For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Requirements
Python 3.8 or later with all requirements.txt dependencies installed, including $ pip install -r requirements.txt
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
The data contained in the [1, 25200, 7] array can be found in this file: outputdata.txt
Should I add Non-Max Suppression or something else? Can someone help me, please?
@Tommydw thresholding and NMS are not part of the TFLite model. The TFLite_detection_postprocess module you show is only available for SSD, I believe; this is the first time I've seen it for MobileNet. I've had discussions with Google about helping us apply it to YOLOv5, but the implementation is a bit detailed and we have not made recent progress on it, so currently TFLite models require additional NMS.
@Tommydw where did you get your mobilenet model from with the included TFLite_detection_postprocess? The column labels, by the way, to get you started are:
@glenn-jocher Thank you very much!
I downloaded the SSD MobileNet V2 FPNLite 320x320 from the TF2 Model Zoo and trained it with TensorFlow 2.4, exported it with "export_tflite_graph_tf2.py", and converted it to TFLite with "convert-to-tflite.py" in TF-nightly (2.5.0-dev20210120)
Ah, ok yes it's SSD based then. As the note mentions, the NMS pipeline only works with SSD models today I believe:
@glenn-jocher the code works now (if it is correct), but my TFLite model detects objects very badly, also when I convert the yolov5s.pt in the same and other ways. My current code:
@glenn-jocher I found the problem: I normalized the input data to the range -1.0 to 1.0 instead of 0.0 to 1.0 😅 Works fine now, thanks!
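Since the fix here was the input scaling, a note for anyone else hitting this: check which input range your model expects before feeding pixels in. A minimal NumPy sketch of the two normalizations discussed (function names are just illustrative):

```python
import numpy as np

def normalize_zero_to_one(img_uint8):
    # YOLOv5 float TFLite exports typically expect pixels in [0.0, 1.0]
    return img_uint8.astype(np.float32) / 255.0

def normalize_minus_one_to_one(img_uint8):
    # some other models (e.g. certain SSD MobileNet exports) expect [-1.0, 1.0]
    return img_uint8.astype(np.float32) / 127.5 - 1.0

img = np.array([[0, 128, 255]], dtype=np.uint8)
print(normalize_zero_to_one(img))       # values in [0, 1]
print(normalize_minus_one_to_one(img))  # values in [-1, 1]
```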
I got a quite similar output after converting my custom-trained YOLOv5 to TFLite. Can you please explain more how to get to:
@AsmaJegham I wrote the solution for my problem at stack overflow: https://stackoverflow.com/a/65953473/15050934 |
What exactly did you do? Did you add these functions to the tf.py file and rerun the conversion, or did you take the existing outputs and somehow give them as input to these functions? Sorry, my question might look a bit silly to you.
@Tommydw Your solution really helped me. May I know if you have implemented NMS on the same piece of code?
@Tommydw I have the same problem, and this solution really helped us. If you could give more details, I would be very grateful.
@Tommydw Are you using your tflite model with Android Studio? I need to add metadata to mine in order to import it into Android Studio, so I'm trying to build my own script to add metadata to the model, but if you already did it, maybe you could share it here :)
@Tommydw where did you get the output_details? "output_data = interpreter.get_tensor(output_details[0]['index'])" |
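For reference, output_details comes from the TFLite interpreter itself via get_output_details(). The sketch below mocks the interpreter calls with plain dicts so it runs standalone; the tensor name, index, and shape are illustrative and depend on your export:

```python
# In real code:
# interpreter = tf.lite.Interpreter(model_path="yolov5s-fp16.tflite")  # hypothetical file
# interpreter.allocate_tensors()
# output_details = interpreter.get_output_details()
output_details = [  # mocked: a single output tensor, as in a YOLOv5 export
    {"name": "Identity", "index": 0, "shape": (1, 25200, 7)},
]
output_index = output_details[0]["index"]
# output_data = interpreter.get_tensor(output_index)  # -> (1, 25200, 7) array
print(output_index)
```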
@mmsumapas Hi bro, good morning! |
I just followed the code provided by @Tommydw.
I implemented it in raspberry pi 4 and the code works perfectly.
…On Mon, Jan 3, 2022 at 10:03 PM ebdjesus wrote:
@mmsumapas Hi bro, good morning!
Did it work for you?
Could you share which file you moved, and which line?
Thank you so much
@mmsumapas In my case I'm using it on an Android phone, generating a TFLite file. But thanks for the reply, I will try to follow what @Tommydw did.
@AsmaJegham How did you resolve your issue? I'm facing the same issue and haven't been able to figure it out.
Hi @Tommydw
@Tommydw Thank you man, your solution helped a lot. I was searching for the solution for a long time. But I am facing a problem, hoping you can help: I am getting multiple bounding boxes for the same object. What can be done?
@d3ath-add3r seems like you need to apply NMS to your results.
Is there any way to replicate @Tommydw's code without using tf math methods? Can we substitute, for example,
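On replacing the tf math methods: the post-processing math has direct NumPy equivalents, so TensorFlow isn't strictly required for decoding. For example, an xywh-to-xyxy box conversion (a NumPy stand-in for the tf.split/tf.concat style ops):

```python
import numpy as np

def xywh_to_xyxy(boxes):
    # boxes: (N, 4) array of (x_center, y_center, width, height)
    x, y, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    # corners: (x1, y1, x2, y2)
    return np.stack([x - w / 2, y - h / 2, x + w / 2, y + h / 2], axis=1)

boxes = np.array([[0.5, 0.5, 0.2, 0.4]])
print(xywh_to_xyxy(boxes))  # [[0.4 0.3 0.6 0.7]]
```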
Hi, I want to ask something: I want to run TFLite inference with the C++ interpreter. How can I be sure I get the correct output ordering? Is it the same as in Python: xywh, conf, class? Or are there any examples for C++ on a microcontroller? Thanks for answering.
Hi, I want to ask something: has anyone successfully created a model from custom data with YOLOv5, converted it to TFLite, and used it in Android with the Flutter framework?
@ryecries please refer to the document on exporting models.
@ryecries, to convert a YOLOv5 model to TFLite format, you can follow the steps outlined in the document on exporting models. This document provides detailed instructions on how to export your trained YOLOv5 model to TFLite. Once you have the TFLite model, you can use it in your Flutter project by referring to the Flutter TFLite plugin's documentation on how to load and run the TFLite model within your app. |
@Tommydw @glenn-jocher @AsmaJegham @zldrobit @matinmoezzi @ovshake @mmsumapas Hi guys, please give me some of your time and help me. I have trained a YOLOv5s model and converted it to TFLite, which gives 1 output array when interpreted on an image. Please help me convert YOLOv5 to TFLite so that it gives 4 arrays. @Tommydw, I have seen your solution on Stack Overflow but didn't get how and where to do it in the flow; please let me know the flow, process, and code, and when to convert (in between YOLO and TFLite, or after TFLite). The export of YOLOv5s to TFLite should give 4 arrays, because I will upload the TFLite model to an AI camera.
@RohanEmpire hello! It sounds like you're looking for a way to modify the output of your YOLOv5 TFLite model to match the output format of an SSD MobileNet model. The YOLOv5 model typically outputs a single tensor with the shape To achieve this, you would need to modify the post-processing step of your YOLOv5 TFLite model. This involves interpreting the single output tensor and splitting it into the desired four arrays. This is not a straightforward export option and requires custom code to be written. Here's a high-level overview of the steps you might take:
Please note that this is a non-trivial task and requires a good understanding of both the YOLOv5 output format and TensorFlow/TFLite operations. If you're not familiar with these concepts, it might be helpful to work with someone who has experience in deep learning and TensorFlow to assist you with this task. Remember that the YOLOv5 repository and documentation are great resources, and you can often find help and advice by engaging with the community there. Good luck with your project! |
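The post-processing described above can be sketched in NumPy. This is a simplified illustration (toy class count, assumed confidence threshold, and no NMS), not the exact YOLOv5 pipeline:

```python
import numpy as np

def yolo_to_ssd_arrays(pred, conf_thres=0.25):
    # pred: (1, N, 5 + num_classes) raw YOLOv5-style output;
    # columns are x, y, w, h, objectness, then one score per class
    p = pred[0]
    scores_all = p[:, 4] * p[:, 5:].max(axis=1)   # objectness * best class score
    classes_all = p[:, 5:].argmax(axis=1)
    keep = scores_all > conf_thres                # NOTE: no NMS applied here
    boxes = p[keep, :4]                           # (x, y, w, h), still normalized
    return boxes, classes_all[keep], scores_all[keep], np.array([keep.sum()])

# toy example: 3 candidate boxes, 2 classes
pred = np.array([[
    [0.5, 0.5, 0.2, 0.2, 0.9, 0.8, 0.1],
    [0.3, 0.3, 0.1, 0.1, 0.1, 0.5, 0.5],
    [0.7, 0.7, 0.3, 0.3, 0.6, 0.2, 0.7],
]])
boxes, classes, scores, num_detections = yolo_to_ssd_arrays(pred)
print(num_detections)  # two boxes survive the confidence threshold
```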
@RohanEmpire Maybe you could refer to the repo. confidence, class, and bbox are extracted from the output of the tflite model, as follows: |
Thanks. The camera where I need to upload the TFLite model takes only 4 arrays as input: boxes, confidence, class, num_detections. Initially I used to upload an SSD MobileNet TFLite model to the camera, which gives 4 arrays as output. The results are very bad with SSD MobileNet, whereas YOLO gives me very good results, so I need a ready-made TFLite model that gives 4 arrays, same as SSD, so that I can upload it to the camera (the camera takes only a TFLite model that outputs 4 arrays). Implement in TFLite model: if you want these four arrays to be the direct output of the TFLite model, you would need to implement this post-processing as custom operations within the TFLite graph. This is advanced and typically requires knowledge of TensorFlow's graph operations. Please help me modify the files required to get a ready-made TFLite model that gives 4 arrays (location, confidence, class, num_detections). Please give me your time, and help me figure out which files I need to modify before running the TFLite conversion code, so that I can get the TFLite model the camera requires.
@RohanEmpire Maybe you could refer to the repo. Confidence, class, and bbox are extracted from the output of the TFLite model, as follows: https://github.com/zldrobit/yolov5/blob/e796283b53a6d4198a2b7067bfa976b542395d25/android/app/src/main/java/org/tensorflow/lite/examples/detection/tflite/YoloV5ClassifierDetect.java#L421-L451 @zldrobit really thanks for your time, but I need a ready-made TFLite model which gives me 4 arrays on an image. Thanks.
@glenn-jocher Hey |
Hello @RohanEmpire, The Here's a brief outline of what you need to do:
This post-processing needs to be done after you run inference with the TFLite model and before you interpret the results. If you want to integrate this directly into the TFLite model, it would require custom operations, which is quite complex. For a camera system that requires a TFLite model outputting a fixed number of detections, you would typically handle this in the post-processing step within the camera's software. If the camera's software is not modifiable, you might need to create a custom TFLite model with integrated post-processing, which is an advanced task. If you're not familiar with these operations, I would recommend working with someone who has experience in TensorFlow and TFLite to help you modify your model accordingly. |
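On the fixed-size output requirement specifically: a common post-processing trick is to sort by score and pad or truncate to a constant detection count. A hedged NumPy sketch (max_det=10 mirrors the SSD-style default of 10 detections mentioned earlier in this thread):

```python
import numpy as np

def pad_detections(boxes, classes, scores, max_det=10):
    # Keep the max_det highest-scoring detections, zero-padding if fewer exist,
    # so the four output arrays always have a fixed size.
    order = np.argsort(-scores)[:max_det]
    n = len(order)
    out_boxes = np.zeros((max_det, 4), dtype=np.float32)
    out_classes = np.zeros(max_det, dtype=np.float32)
    out_scores = np.zeros(max_det, dtype=np.float32)
    out_boxes[:n] = boxes[order]
    out_classes[:n] = classes[order]
    out_scores[:n] = scores[order]
    return out_boxes, out_classes, out_scores, np.array([n], dtype=np.float32)

boxes = np.array([[0.1, 0.1, 0.2, 0.2], [0.3, 0.3, 0.4, 0.4]])
classes = np.array([1.0, 0.0])
scores = np.array([0.4, 0.9])
b, c, s, n = pad_detections(boxes, classes, scores)
print(n)  # two real detections; remaining rows are zero padding
```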
@glenn-jocher Hey Thanks |
Hello @RohanEmpire, If the outputs are getting jumbled after applying NMS during the conversion from To resolve this, you should:
If you're still facing issues, you might need to step through your code with a debugger or add logging to better understand where the outputs are getting mixed up. Remember, if the camera system you're working with has specific requirements for the model output, it's crucial that the model's post-processing aligns with these requirements. If the camera software cannot be modified, you may need to adapt your model's output format to match the camera's input expectations. Since this can be quite complex, if you're not experienced with these processes, consider reaching out to the community or collaborating with someone who has expertise in model conversion and TFLite to assist you further. |
@glenn-jocher Thanks Brother |
You're welcome, @RohanEmpire! If you have any more questions or need further assistance, feel free to reach out. Best of luck with your project! 😊👍 |
!cd yolo && python export.py --weights "C:\Users\Desktop\YOLO_25\HELMET_JACKET_TF_25\yolo\runs\train\exp\weights\best.pt" --img 640 --data dataset.yaml --include tflite
Output Array 0:
!cd yolo && python export.py --weights "C:\Users\Desktop\YOLO_25\HELMET_JACKET_TF_25\yolo\runs\train\exp\weights\best.pt" --img 640 --nms --data dataset.yaml --include tflite
Output Array 0:
@glenn-jocher When I include --nms in the export I get good results, but the output order gets changed; without --nms I get the right order, but run into the box-upon-box issue. I need location, class, score, num_detections. Please help.
Hello @RohanEmpire, It seems like the output order is not matching your expectations when NMS is included. The typical output order for object detection models is Here's what you can do:
If you continue to face issues, you may need to debug the export process step by step to identify where the discrepancy is occurring. Since this involves digging into the code, if you're not comfortable with it, consider reaching out to the community or someone with experience in model conversion for assistance. Remember to keep backups of your original working scripts so you can always revert to a known good state if needed. |
@RohanEmpire TFLite >=2.7 does change the order of output tensors. According to tensorflow/tensorflow#33303 (comment), this is a regression of TFLite >=2.7. BTW, what's the version of TFLite you are using? |
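Given that reordering behavior, one defensive pattern is to look output tensors up by name instead of by list position. A sketch with get_output_details() mocked out as plain dicts (the names and indices shown are hypothetical; print yours first):

```python
# In real code: details = interpreter.get_output_details()
# Mocked here with dicts of the shape that call returns.
details = [
    {"name": "StatefulPartitionedCall:1", "index": 431},  # hypothetical
    {"name": "StatefulPartitionedCall:0", "index": 430},  # hypothetical
]
by_name = {d["name"]: d["index"] for d in details}
# interpreter.get_tensor(by_name["StatefulPartitionedCall:0"]) would then
# fetch a specific output regardless of its position in the list.
print(sorted(by_name))
```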
@glenn-jocher Thanks. It's giving jumbled outputs. I want the location, class, scores, num_detections order (the TF 2.5 default order).
Hello @RohanEmpire, It appears that the issue you're encountering with the output order after applying NMS is related to a known behavior in TensorFlow Lite. Since you're using TensorFlow 2.5 and experiencing a change in the output order after NMS, it's likely due to the way the TFLite conversion or the NMS operation is implemented. Here's what you can do to address this:
Remember to test your changes thoroughly to ensure that the output order is consistent across different inputs and scenarios. If you're still having trouble, you may need to seek further assistance from the TensorFlow community or from someone with deep expertise in TensorFlow Lite conversions. |
@glenn-jocher Thanks for your time |
You're welcome, @RohanEmpire! If you have any more questions in the future or need further assistance, don't hesitate to ask. Good luck with your project! 😊🚀 |
I exported the model using --nms. Note: This is in flutter. |
@mp051998 hello, The error message indicates that the TensorFlow Lite interpreter is encountering an operation (in this case, Here's what you can do:
If you're still having trouble after trying these steps, you may need to seek further assistance from the TensorFlow Lite or Flutter communities, as they might have more specific guidance for your use case. |
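For completeness: when a converted model needs TensorFlow ops that TFLite's builtins don't cover, the standard converter setting is to allow Select TF ops, as in this untested sketch (the saved-model path is a placeholder). Note the resulting model then requires Flex delegate support at runtime, which the Flutter TFLite plugins may not provide:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")  # placeholder path
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to full TF (Flex) ops
]
tflite_model = converter.convert()
```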
Hi @glenn-jocher, Thanks for the reply. I was under the impression the nms was important and it had to be handled at export time itself (since I'm a beginner at all this), but yes, I ended up writing a function using the regular output and converted it into the format I required. It's working now I think. Also, I may be wrong but I don't think the FlexDelegate support is there on flutter tflite. |
Hello @mp051998, I'm glad to hear that you've managed to resolve the issue by writing a custom function to format the output as needed. Handling NMS at export time can be convenient, but it's not strictly necessary, and performing NMS post-inference gives you more control over the process. Regarding the FlexDelegate, you are correct that TensorFlow Lite for Flutter might not support all TensorFlow ops out of the box, and the FlexDelegate integration may not be as straightforward as in other environments. It's great that you found a workaround that doesn't require the use of unsupported ops. If you have any more questions or run into further issues, feel free to reach out. Best of luck with your project! 🌟 |
@Tommydw can you provide full python code with nms? |
Hello @mahirk27, Certainly! Here's a simple example of how you can apply Non-Maximum Suppression (NMS) using PyTorch:

import torch
from torchvision.ops import box_iou

def nms(boxes, scores, iou_threshold):
    # Sort scores and corresponding boxes
    idxs = scores.argsort(descending=True)
    keep = []
    while idxs.numel() > 0:
        # Take the box with the highest score
        max_score_idx = idxs[0]
        max_score_box = boxes[max_score_idx]
        keep.append(max_score_idx.item())
        # Compute IoU of the remaining boxes with the max score box
        other_boxes = boxes[idxs[1:]]
        ious = box_iou(max_score_box.unsqueeze(0), other_boxes).squeeze(0)
        # Remove boxes with IoU above the threshold
        idxs = idxs[1:][ious < iou_threshold]
    return keep

# Example usage
boxes = torch.tensor([[10., 10., 50., 50.], [20., 20., 40., 40.], [30., 30., 60., 60.]])
scores = torch.tensor([0.9, 0.8, 0.7])
iou_threshold = 0.1

keep_idxs = nms(boxes, scores, iou_threshold)
print("Boxes to keep:", keep_idxs)

This is a basic implementation. For production, you might want to use optimized libraries like torchvision, which include a built-in NMS function (torchvision.ops.nms). Hope this helps! 😊
❔Question
Hi, I have successfully trained a custom model based on YOLOv5s and converted the model to TFlite. I feel silly asking, but how do you use the output data?
I get as output:
from the converted YOLOv5 model
But I expect an output like:
(this one is from a tensorflow lite mobilenet model (trained to give 10 output data, default for tflite))
It may also be some other form of output, but I honestly have no idea how to get the boxes, classes, scores from a [1,25200,7] array.
Can anyone help me with this??
(on 15-January-2021 I updated pytorch, tensorflow and yolov5 to the latest version)