
Loss masking to deal with sparsely labeled data #5917

Closed
leandervaneekelen opened this issue Dec 8, 2021 · 5 comments
Labels
question Further information is requested

Comments

@leandervaneekelen
Contributor

Search before asking

Question

Hi there! I work on histopathology data, where I'm trying to detect four different cell types (below is a typical example of an image with YOLOv5-formatted ground truth overlaid; mea culpa for the graininess).
image

However, as you obviously know, gathering labeled data is immensely costly, especially in the medical field, where domain-specific knowledge is needed. Therefore, I want to start training using sparsely labeled data (i.e. only some fraction of the cells in each training patch is labeled).

Now, the Tips for Best Training Results page quite clearly states that datasets must have 'label consistency', i.e. no sparse labeling. I was wondering if I can circumvent this requirement by performing some kind of loss masking, where you only calculate the loss over objects that you actually have a label for. The most 'straightforward' option is to simply ignore all predictions that have an IoU below a certain threshold with all labeled objects (see the sketch below), but there are other options, e.g. fully labeling only a quadrant of each patch/image (which would even let you identify false positives within that quadrant).
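To make that first option concrete, here is a minimal sketch of the masking step I have in mind (hypothetical tensor names and shapes; this is not YOLOv5's actual loss code, which builds targets per anchor and grid cell):

import torch
from torchvision.ops import box_iou  # expects boxes as (x1, y1, x2, y2)

def mask_unmatched_predictions(pred_boxes, obj_loss, label_boxes, iou_thresh=0.5):
    # Zero the objectness loss of predictions that overlap no labeled box.
    # Sketch only: pred_boxes (N, 4), obj_loss (N,), label_boxes (M, 4).
    if label_boxes.numel() == 0:
        # Nothing is labeled, so under this scheme nothing contributes to the loss
        return torch.zeros_like(obj_loss)
    iou = box_iou(pred_boxes, label_boxes)           # (N, M) pairwise IoU
    matched = iou.max(dim=1).values >= iou_thresh    # prediction is near some label
    return obj_loss * matched.float()

(I realize this keeps only the positive side of the loss; that concern is exactly what I'd like your input on.)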

I was unable to find any discussion about loss masking for training on sparsely labeled data, or about the label consistency requirement in general. Do you expect this to work, or is there some mechanistic reason why sparse data with loss masking won't work for YOLOv5? I am aware of all the drawbacks, such as being unable to determine false positives, but given that people have made some impressive systems with only sparsely labeled data, I wanted to give it a shot.


@leandervaneekelen leandervaneekelen added the question Further information is requested label Dec 8, 2021
@github-actions
Contributor

github-actions bot commented Dec 8, 2021

👋 Hello @leandervaneekelen, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Google Colab and Kaggle notebooks with free GPU
Google Cloud Deep Learning VM (see GCP Quickstart Guide)
Amazon Deep Learning AMI (see AWS Quickstart Guide)
Docker Image (see Docker Quickstart Guide)

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@leandervaneekelen this entirely depends on your priorities, funding level, risk tolerance, timeline, etc.

If you are happy doing high-risk basic research that will consume time and effort, with no guaranteed return and a high likelihood of failure, then sure, you can experiment with all sorts of architecture, training, loss, and active-learning techniques. Thousands of students around the world do this every day with YOLOv5, I'm sure.

If you want a low-risk result, are deploying a product to market, have a fixed budget and investors to answer to, and need a guaranteed return on your time and money, then just label your data and train like everyone else. If your dataset is aligned with the existing datasets in terms of size and variety, you should expect similar accuracy from your trained model.

@glenn-jocher
Member

glenn-jocher commented Dec 8, 2021

@leandervaneekelen also, to answer your question specifically: you definitely cannot compute the loss only on positive samples, as then the model will never learn what is not a positive sample, and in deployment all neurons will be optimized to always output detections at all times; ergo, your entire image will be filled with FPs.

@leandervaneekelen
Contributor Author

leandervaneekelen commented Dec 9, 2021

@glenn-jocher Hi Glenn, thanks for your thoughts. Luckily, I'm just a lowly PhD student, so at most I risk a disappointed supervisor, not angry shareholders ;)

I think I did a poor job at giving you enough context in my original post; I was a bit too naive in my first example (only having foreground labels, which you rightly pointed out won't work for a detection model). In reality, I have a dataset comprising of rectangular regions of interests (ROIs) that are fully annotated, from which I sample square patches. What I am proposing is loss masking for situations where only part of the patch is fully annotated, like in the scenario below.

image

Here, the solid lines represent the boundaries of the ROIs with all the schematically drawn cells in them (assume we know a bounding box for every cell). The dashed lines represent the boundaries of the patches (patches #1 and #2 have the same size X*Y, while ROIs #1 and #2 can be m*n and p*q). The hatched area is outside the ROIs, and we don't have any labels for it. What I am proposing when I say loss masking is to ignore all predictions in the hatched area when calculating the loss.
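To make the masking step concrete, here is a rough sketch under assumed shapes (a per-grid-cell objectness loss map and a pixel-space ROI; the real YOLOv5 loss tensors are laid out per anchor and per scale, so this is illustrative only):

import torch

def roi_loss_mask(obj_loss_map, roi_xyxy, img_hw):
    # obj_loss_map: (H, W) objectness loss per grid cell for one image (assumed layout)
    # roi_xyxy: (x1, y1, x2, y2) fully annotated region, in pixels
    # img_hw: (img_h, img_w) image size in pixels
    H, W = obj_loss_map.shape
    img_h, img_w = img_hw
    ys = (torch.arange(H) + 0.5) * img_h / H   # grid-cell centers, pixel coords
    xs = (torch.arange(W) + 0.5) * img_w / W
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    x1, y1, x2, y2 = roi_xyxy
    inside = (xx >= x1) & (xx < x2) & (yy >= y1) & (yy < y2)
    # Cells in the hatched (unlabeled) area contribute nothing to the loss;
    # cells inside the ROI still supply both positive and negative samples.
    return obj_loss_map * inside.float()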

If I'm correct, this way you are still training the network on positive and negative samples, right?

@glenn-jocher
Member

@leandervaneekelen yes, if only parts of the image are labelled, you might crop to those parts in the dataloader, or do some loss masking in the loss function as you mentioned.
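For reference, a minimal sketch of the dataloader-cropping route (a hypothetical Dataset class, not part of YOLOv5; it assumes pixel-space ROI coordinates and labels stored as class, x1, y1, x2, y2):

import torch
from torch.utils.data import Dataset

class ROICroppedDataset(Dataset):
    # Hypothetical dataset that crops each image to its fully annotated ROI,
    # so every pixel the model trains on is labeled. Not part of YOLOv5.
    def __init__(self, images, rois, labels):
        self.images = images   # list of (C, H, W) tensors
        self.rois = rois       # list of (x1, y1, x2, y2) integer pixel coords
        self.labels = labels   # list of (M, 5) tensors: class, x1, y1, x2, y2

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        x1, y1, x2, y2 = self.rois[i]
        img = self.images[i][:, y1:y2, x1:x2]   # keep only the annotated region
        lbl = self.labels[i].clone()
        lbl[:, [1, 3]] -= x1                    # shift box x coords into crop frame
        lbl[:, [2, 4]] -= y1                    # shift box y coords into crop frame
        return img, lbl

In practice you would still convert the cropped labels to YOLO's normalized xywh format before training; the point is only that cropping guarantees every visible object is labeled.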
