
Move queue to gpu when resuming checkpoint - SWAV self supervised model #684

Merged · 12 commits · Aug 13, 2021

Conversation

thiyagu145
Contributor

@thiyagu145 thiyagu145 commented Jul 6, 2021

What does this PR do?

Fixes # (issue)

Before submitting

  • Was this discussed/approved via a GitHub issue? (not needed for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests? [not needed for typos/docs]
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

  • Is this pull request ready for review? (if not, please submit in draft mode)

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there is a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

The current queue loading fails when resuming training with a checkpointed queue.
This PR moves the queue to the GPU when resuming from a checkpoint.
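A minimal sketch of the idea behind the fix (not the PR's exact diff): when the checkpoint is restored, the saved queue tensor is moved onto the model's device before it is used. The `SwAVQueueMixin` class name and the `"queue"` checkpoint key here are illustrative assumptions.

```python
import torch


class SwAVQueueMixin:
    """Hypothetical minimal sketch of restoring a SwAV-style feature queue
    from a checkpoint; the class name and checkpoint key are assumptions."""

    def __init__(self, device=torch.device("cpu")):
        self.device = device
        self.queue = None

    def on_load_checkpoint(self, checkpoint):
        # The queue tensor was saved on whatever device training ran on;
        # move it to the current device so later operations (e.g. matrix
        # products against GPU-resident prototypes) don't mix devices.
        queue = checkpoint.get("queue")
        if queue is not None:
            self.queue = queue.to(self.device)
```

On a CUDA machine `device` would be the LightningModule's device; `.to(device)` is a no-op when the tensor is already there, so the same code path is safe for CPU-only runs.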
@github-actions github-actions bot added the model label Jul 6, 2021
@thiyagu145 thiyagu145 changed the title Move queue to gpu when preloading Move queue to gpu when resuming checkpoint - SWAV self supervised model Jul 6, 2021
@ananyahjha93
Contributor

@thiyagu145 good catch. thanks for the PR.

@codecov

codecov bot commented Jul 15, 2021

Codecov Report

Merging #684 (97ec6e6) into master (4ece8db) will not change coverage.
The diff coverage is 0.00%.

Impacted file tree graph

@@           Coverage Diff           @@
##           master     #684   +/-   ##
=======================================
  Coverage   72.17%   72.17%           
=======================================
  Files         121      121           
  Lines        7551     7551           
=======================================
  Hits         5450     5450           
  Misses       2101     2101           
Flag     Coverage Δ
cpu      72.17% <0.00%> (ø)
pytest   72.17% <0.00%> (ø)

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...l_bolts/models/self_supervised/swav/swav_module.py 45.45% <0.00%> (ø)

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4ece8db...97ec6e6.

@awaelchli awaelchli added the fix label Jul 15, 2021
@awaelchli awaelchli mentioned this pull request Jul 15, 2021
@Borda
Member

Borda commented Jul 28, 2021

@thiyagu145 @ananyahjha93 mind checking the failing tests?

@thiyagu145
Contributor Author

Hi @Borda,
TypeError: transfer_batch_to_device() missing 1 required positional argument: 'dataloader_idx'
I'm not sure whether this error is caused by the use of self.queue.to(device).
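For context, the TypeError above is the generic failure mode of a hook whose signature gained a required positional argument while a call site still uses the old arity. A stand-alone reproduction (no Lightning required; the function here only mimics the hook's shape):

```python
# Stand-alone sketch of the failure mode: the hook's signature grew a
# required positional argument, so an older-style call raises TypeError.
def transfer_batch_to_device(batch, device, dataloader_idx):
    return batch


def call_with_old_signature():
    """Call the hook the old way (two arguments) and capture the error."""
    try:
        transfer_batch_to_device([1, 2, 3], "cpu")  # omits dataloader_idx
    except TypeError as exc:
        return str(exc)
    return None


print(call_with_old_signature())
# prints: transfer_batch_to_device() missing 1 required positional
#         argument: 'dataloader_idx'
```

A mismatch like this usually means the installed library version's hook signature differs from the override or call site; aligning the versions (or updating the signature) resolves it.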

Contributor

@ethanwharris ethanwharris left a comment


LGTM 😃

@Borda Borda merged commit 30065cf into Lightning-Universe:master Aug 13, 2021
Labels
fix, model

5 participants