Multiple polygons per object #11476
Hi @PixelFinder,

Annotating heavily occluded objects can be challenging, as each annotation option has its own drawbacks. As you mentioned, though, option B would be the suitable annotation for heavily occluded objects. In this case, you would create one annotation for each visible part of the object and assign the same instance ID to those annotations.

Regarding YOLOv5, it supports custom COCO-style annotations in which multiple polygons can be assigned the same instance ID. You can create annotations for the visible and occluded parts of each object, assign the same instance ID to all polygons belonging to the same object, and then use these annotations to train YOLOv5 for object detection tasks.

I hope this helps. Let me know if you have any further questions.
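For concreteness, this is how the COCO side of this looks: COCO's `segmentation` field is already a list of polygons, so one annotation (one instance ID) can carry several disjoint visible parts. All coordinates, IDs, and the class below are made up for illustration:

```python
# One COCO annotation describing a single object instance (e.g. the blue can)
# whose visible region is split into two polygons by an occluder.
# COCO's "segmentation" field is a list of polygons, each a flat
# [x1, y1, x2, y2, ...] list in pixel coordinates.
annotation = {
    "id": 1952,                # one instance ID shared by both visible parts
    "image_id": 42,
    "category_id": 0,          # hypothetical "can" class
    "segmentation": [
        [10, 10, 60, 10, 60, 40, 10, 40],   # upper visible part
        [10, 70, 60, 70, 60, 95, 10, 95],   # lower visible part
    ],
    "iscrowd": 0,
    "area": 2750,              # 50*30 + 50*25, summed over both parts
    "bbox": [10, 10, 50, 85],  # x, y, width, height spanning all parts
}
```

A YOLO label file, by contrast, has no such per-instance grouping of polygons, which is exactly the limitation being discussed in this thread.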
Hi @glenn-jocher, I couldn't find specific use cases of multiple polygons per object using YOLOv5 (but I have a strong preference for this repository). Do you know of any examples, or could you provide one showing how to incorporate the custom COCO-style annotations?
Hi @PixelFinder,

You're welcome! I'm glad my response was helpful. Regarding merging custom COCO-style annotations in YOLOv5, you can manually add the instance ID to each row of the TXT label file; each annotation is one line in the file.

As for multiple polygons per object, the best approach is to create a separate row in the TXT label file for each visible polygon of the object and assign the same instance ID to all polygons corresponding to the same object. For example, if an object is partially occluded in an image, you can create a separate annotation row for each visible part and assign the same instance ID to all those rows.

I don't have a specific example of this approach, but you may find useful resources in the COCO dataset annotation format documentation and the COCO JSON annotation format documentation.

I hope this helps. Let me know if you have any further questions!
Thank you @glenn-jocher! Just for my understanding (and to maybe save a lot of headaches at a later stage): is that correct?
You're welcome, @PixelFinder! Regarding your question, yes, that is correct; that is the format for instance segmentation with an instance ID. I hope this clears up any confusion. Let me know if you have any additional questions!
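The exact format line appears to have been lost from the comment above. Reconstructing from the discussion, the idea seems to be a standard YOLOv5-seg row (`class x1 y1 x2 y2 ...`, normalized 0-1 coordinates) with an extra instance-ID column. Note this column is an assumption from this thread: stock YOLOv5 does not parse an instance ID, so a custom dataloader would be needed. A minimal grouping sketch:

```python
from collections import defaultdict

# Hypothetical label rows as discussed above: standard YOLOv5-seg rows are
# "class x1 y1 x2 y2 ..." (normalized coordinates); here we assume an extra
# instance-ID column after the class, which stock YOLOv5 does NOT read.
label_txt = """\
0 1952 0.10 0.10 0.60 0.10 0.60 0.40 0.10 0.40
0 1952 0.10 0.70 0.60 0.70 0.60 0.95 0.10 0.95
1 2001 0.70 0.20 0.90 0.20 0.80 0.50
"""

def group_by_instance(text):
    """Group polygon rows by (class_id, instance_id)."""
    groups = defaultdict(list)
    for line in text.strip().splitlines():
        parts = line.split()
        cls, inst = int(parts[0]), int(parts[1])
        coords = list(map(float, parts[2:]))
        polygon = list(zip(coords[0::2], coords[1::2]))  # (x, y) pairs
        groups[(cls, inst)].append(polygon)
    return dict(groups)

groups = group_by_instance(label_txt)
print(len(groups[(0, 1952)]))  # 2 -- both polygons belong to one instance
```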
Related issue: using the JSON2YOLO script, you can merge multiple polygons in the COCO format into a single polygon in the YOLOv5/v8 format.
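For reference, the merging idea behind JSON2YOLO can be sketched as follows. This is a simplified illustration, not the actual script (the real implementation picks nearby vertices to bridge at; here each extra polygon is simply stitched onto the end of the outline with a zero-width out-and-back bridge):

```python
def merge_segments(segments):
    """Stitch several polygons into one closed outline using zero-width
    bridges: walk from the current outline to the next polygon, traverse
    it, then walk back along the exact same edge (adding no area)."""
    merged = list(segments[0])
    for seg in segments[1:]:
        anchor = merged[-1]      # vertex we bridge out from
        merged += list(seg)      # bridge to seg[0], then traverse the polygon
        merged.append(seg[0])    # close that polygon's loop
        merged.append(anchor)    # bridge back along the same edge
    return merged

def shoelace_area(poly):
    """Polygon area via the shoelace formula (absolute value)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

a = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, area 1
b = [(2, 0), (3, 0), (3, 1), (2, 1)]   # second unit square, area 1
merged = merge_segments([a, b])
print(shoelace_area(merged))  # 2.0 -- the bridges are zero-width, areas add
```

The two overlapping bridge edges are exactly the "narrow lines of zero width" mentioned later in this thread.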
@glenn-jocher After some annotation work, I can't get training started with the instance IDs. I get the following error:

My annotations in the .txt file look like this:

I have two polygons of instance 1952 corresponding to class 0. How do I include the instance IDs in training?
Hey Glenn, I am not entirely sure how we can apply multiple polygons to the same instance for transfer learning in YOLOv8-seg. Would appreciate any insights! Thanks in advance,
To confirm: will training work without problems if the polygons separated for the same instance in yolov5-seg are written as independent rows in YOLO format?
Hi @youngjae-avikus,
I don't quite understand.
@ryouchinsa
Hi @youngjae-avikus,

Then convert the COCO format file using the JSON2YOLO script. The two polygons are merged into a single polygon connected by two narrow lines of zero width.
@ryouchinsa
Hi @youngjae-avikus,
@ryouchinsa Hello everyone, I'd like to clarify a few points regarding the YOLOv5 annotation format and the handling of multiple polygons per object:

Remember, the key to successful training is consistent and accurate annotation. If your dataset requires multiple polygons per instance, ensure that your annotation process and training pipeline are aligned to handle this scenario effectively. For more detailed information on YOLOv5's capabilities and how to prepare your data, please refer to the Ultralytics documentation.
@glenn-jocher It would be very helpful if, for a merged polygon, training could separate it back into multiple polygons, because when converting the merged polygon to a mask image, narrow white lines sometimes appear in the background. Example: merged polygon and the resulting mask image.
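Undoing the zero-width bridges is mechanically possible because a bridged vertex appears twice in the merged outline. A rough sketch of the idea (it assumes exact vertex duplication, which holds for losslessly merged polygons but may not survive coordinate rounding):

```python
def split_merged(poly):
    """Undo zero-width bridges: a bridged vertex appears twice in the
    merged outline, and the innermost repeated pair encloses one part."""
    best = None
    for i in range(len(poly)):
        for j in range(i + 1, len(poly)):
            if poly[i] == poly[j] and (best is None or j - i < best[1] - best[0]):
                best = (i, j)
    if best is None:
        return [poly]                      # no repeats left: a plain polygon
    i, j = best
    inner = poly[i:j]                      # the stitched-in part
    rest = poly[:i] + poly[j + 1:]         # outline with that part cut out
    # drop consecutive duplicates left behind by the return bridge
    rest = [p for k, p in enumerate(rest) if k == 0 or p != rest[k - 1]]
    return split_merged(rest) + [inner]

a = [(0, 0), (1, 0), (1, 1), (0, 1)]
b = [(2, 0), (3, 0), (3, 1), (2, 1)]
merged = a + b + [b[0], a[-1]]         # zero-width bridge out to b and back
print(split_merged(merged) == [a, b])  # True
```

Rasterizing the separated parts individually avoids the thin background lines, since no zero-width connector ever reaches the mask.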
Hi @ryouchinsa,

I appreciate your feedback and the work you're doing with RectLabel. It's great to hear about tools that can handle the intricacies of annotation conversion effectively.

Regarding the separation of merged polygons during training, this is indeed a complex issue. The YOLOv5 segmentation models are designed to work with the standard YOLO format, which does not natively support multiple polygons per instance. However, the community is always evolving, and we welcome contributions that can enhance the capabilities of YOLOv5, including better handling of complex annotation scenarios.

For the issue of narrow white lines appearing in the mask image, this is a known challenge with merged polygons. It's encouraging to hear that RectLabel has an algorithm to separate merged polygons to avoid this problem. As the YOLOv5 project continues to develop, we'll keep an eye on such enhancements and consider how they might be integrated into future versions. In the meantime, users should continue to use the best tools available to them for their specific annotation and training needs.

Thank you for sharing your insights and for contributing to the machine learning community. Your efforts help improve the overall experience for users working on object detection and segmentation tasks.
Question
How would you annotate heavily occluded objects that result in multiple polygons per object? I made the following example.
When two cans are heavily occluded, as in A, how would you annotate the blue can? My intuition suggests option B: two polygons per instance (here, the blue can), with the same instance ID assigned to both polygons. Another option is C, where you also annotate the non-visible part of the blue can. However, that assigns quite a lot of pixels to two instances, and I don't know how the model will react to this in terms of performance. In addition, annotation takes more effort when using the Roboflow smart-polygon function.
But I can't figure out whether option B is allowed in YOLOv5. I know COCO allows multiple polygons per instance ID, but as far as I know YOLO does not. Would you recommend another repo, or can YOLOv5 actually handle annotations like option B?