
Regarding the loss function using ground-truth labels in the linear_loss part #89

Open
SonalKumar95 opened this issue Jan 10, 2024 · 2 comments


@SonalKumar95

Hello there,

Please let me know if I'm wrong. In train_segmentation.py, the overall loss includes two extra terms, linear_loss and cluster_loss. For unsupervised training the cluster_loss seems fine, but the linear_loss uses the ground-truth labels. Is that actually the case, or am I misreading it? Please help me.

Thanks in advance.
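For reference, here is a minimal sketch of how a training step with these three terms is typically wired up. This is not the repo's exact code: net, linear_probe, cluster_probe, and corr_loss_fn are hypothetical placeholders for illustration.

```python
import torch.nn.functional as F

def training_step(net, linear_probe, cluster_probe, corr_loss_fn, img, label):
    # `net` produces a per-pixel feature "code" of shape (B, C, H, W).
    code = net(img)

    # Unsupervised correspondence term: the only loss that trains `net`.
    corr_loss = corr_loss_fn(code)

    # Both probes see a detached copy, so their gradients stop here
    # and never reach `net`.
    detached_code = code.detach()

    # Unsupervised clustering probe.
    cluster_loss, _ = cluster_probe(detached_code)

    # Supervised linear probe: uses ground-truth labels, but only
    # updates the probe's own weights.
    linear_logits = linear_probe(detached_code)
    linear_loss = F.cross_entropy(linear_logits, label, ignore_index=-1)

    return corr_loss + cluster_loss + linear_loss
```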

@mhamilton723
Owner

mhamilton723 commented Jan 10, 2024 via email

@bio-mlhui

I am also confused. detached_code = torch.clone(model_output[1].detach()) produces detached_code, which does not require grad. However, linear_output = self.linear_model(detached_code) produces linear_output, which does require grad. Since linear_output is used to compute linear_loss, which uses the ground-truth mask labels, the final loss will backpropagate its gradient into linear_model. Does this mean the final algorithm is not unsupervised?
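A small self-contained check makes the gradient flow concrete (backbone and linear_probe here are hypothetical stand-ins, not the repo's modules): the supervised linear_loss updates only the probe's own weights, while the detach() call keeps any label signal from reaching the backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the unsupervised model and the linear head.
backbone = nn.Conv2d(3, 8, kernel_size=1)
linear_probe = nn.Conv2d(8, 5, kernel_size=1)

img = torch.randn(1, 3, 4, 4)
label = torch.randint(0, 5, (1, 4, 4))

code = backbone(img)
detached_code = code.detach()            # cuts the autograd graph here

logits = linear_probe(detached_code)     # requires grad again, w.r.t. the probe only
linear_loss = F.cross_entropy(logits, label)
linear_loss.backward()

print(backbone.weight.grad)              # None: no label gradient reaches the backbone
print(linear_probe.weight.grad is None)  # False: only the probe is trained on labels
```

So, as far as the gradient flow goes, the linear probe is indeed trained with supervision, but it acts only as an evaluation head; gradients from linear_loss never touch the unsupervised model.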
