
Does not support negative images in training #36

Open
rishabhfrinks123 opened this issue Mar 23, 2023 · 3 comments

Comments

@rishabhfrinks123

I added some images that have no labels, i.e. completely black masks for those images, and the output for them is a tensor full of NaNs instead of 0s:

```
outputs----- tensor([[[[nan, nan, nan, ..., nan, nan, nan],
          [nan, nan, nan, ..., nan, nan, nan],
          [nan, nan, nan, ..., nan, nan, nan],
          ...,
          [nan, nan, nan, ..., nan, nan, nan],
          [nan, nan, nan, ..., nan, nan, nan],
          [nan, nan, nan, ..., nan, nan, nan]]]], device='cuda:0',
```

which causes an error in loss.backward():

```
Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: device-side assert triggered
```

My idea was to include some negative images in training so that the model learns the busy background more clearly. When we remove these negative images and their corresponding masks, the code works fine.

Please confirm how to resolve this so that I can include those negative images as well.
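One common cause of NaNs with all-black masks (an assumption here, not confirmed from this repository's code) is an IoU/Dice-style loss term that divides by the mask sum, which is zero for an empty mask. A smoothing epsilon in numerator and denominator keeps the ratio finite, so an empty prediction against an empty mask yields a loss of 0 instead of NaN. A minimal sketch:

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice loss per sample; pred holds sigmoid probabilities, target a binary mask.

    The eps term keeps the ratio finite when both pred and target are empty
    (the all-zero 'negative image' case), instead of producing 0/0 = nan.
    """
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - (2.0 * inter + eps) / (union + eps)

# all-zero prediction vs. all-zero mask: loss is 0, not nan
p = torch.zeros(1, 1, 4, 4)
t = torch.zeros(1, 1, 4, 4)
loss = dice_loss(p, t)
```

With the epsilon in place, negative images contribute a well-defined (near-zero) loss and gradients stay finite, so they can stay in the training set.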

@Karel911
Owner

To compute the loss, you should set the values of the label to 0, not NaN.
Since the values were set to NaN, the loss.backward() error occurred.
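To rule out bad labels on the data side, a small sanity check (a hypothetical helper, not part of this repo) can verify that every mask fed to the loss is finite and lies in [0, 1] before training starts:

```python
import torch

def check_mask(mask: torch.Tensor) -> None:
    # Reject masks that would poison the loss: nan/inf values,
    # or values outside the expected binary [0, 1] range.
    if not torch.isfinite(mask).all():
        raise ValueError("mask contains nan or inf")
    if mask.min() < 0 or mask.max() > 1:
        raise ValueError("mask values must lie in [0, 1]")

# an all-zero negative mask is perfectly valid and passes the check
check_mask(torch.zeros(1, 1, 8, 8))
```

Running this over the whole dataset once would quickly show whether any label actually contains NaN.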

@rishabhfrinks123
Author

No, I did not assign any labels to NaN. As soon as the input goes through the initial conv, its output is already NaN, and I can't figure out why. Secondly, if I manually set the NaN outputs to 0, move them back to CUDA, and then compute the loss, that would not be correct for the optimizer, which works on the loss to reduce it. I mean that if I manually set the outputs to 0, the loss would be 0 for negative images from the very first epoch, which may not be good for learning.
Please suggest whether my reasoning is right.
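To locate where the NaN first appears in the forward pass, a forward hook can flag the offending layer as soon as it emits a NaN; `torch.autograd.set_detect_anomaly(True)` does the analogous thing for the backward pass. A sketch using a stand-in two-layer model (the real model would be substituted):

```python
import torch
import torch.nn as nn

def nan_hook(module, inputs, output):
    # Raise at the first layer whose output contains nan,
    # so the offending layer is named instead of the crash
    # surfacing later in loss.backward().
    if isinstance(output, torch.Tensor) and torch.isnan(output).any():
        raise RuntimeError(f"nan detected after {module.__class__.__name__}")

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
for m in model.modules():
    m.register_forward_hook(nan_hook)

out = model(torch.randn(1, 3, 16, 16))  # raises if any layer produces nan
```

If the hook fires right after the initial conv, the NaN originates in that layer's weights or its input, not in the labels, which would confirm the report above.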

@rishabhfrinks123
Author

To be specific: the masks are labelled as all 0s only for the images that do not contain any object.
