
add tests for single scalar return from training #2587

Merged: 5 commits into master on Jul 11, 2020
Conversation

williamFalcon
Contributor

add tests to make sure a single scalar can be returned from train loop
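The behavior under test can be sketched in plain Python, without a Lightning dependency. `process_training_step_output` below is a hypothetical stand-in for the trainer's internal handling, not the actual implementation: the point is only that the loop must accept both a dict-style return and a bare scalar loss.

```python
def process_training_step_output(output):
    """Normalize a training_step return value into (loss, extras).

    Illustrative only: accepts either a dict like {"loss": ..., "log": {...}}
    or a single scalar, which this PR's tests exercise.
    """
    if isinstance(output, dict):
        loss = output["loss"]
        extras = {k: v for k, v in output.items() if k != "loss"}
    else:
        # single scalar return: the output itself is the loss
        loss, extras = output, {}
    return loss, extras

# dict-style return still works
loss, extras = process_training_step_output({"loss": 1.5, "log": {"acc": 0.9}})
assert loss == 1.5 and extras == {"log": {"acc": 0.9}}

# bare scalar return is accepted too
loss, extras = process_training_step_output(2.5)
assert loss == 2.5 and extras == {}
```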

@williamFalcon williamFalcon changed the title add tests for single scalar return from training [wip] add tests for single scalar return from training Jul 11, 2020
@mergify mergify bot requested a review from a team July 11, 2020 20:51
@codecov

codecov bot commented Jul 11, 2020

Codecov Report

Merging #2587 into master will increase coverage by 0%.
The diff coverage is 100%.

@@          Coverage Diff           @@
##           master   #2587   +/-   ##
======================================
  Coverage      91%     91%           
======================================
  Files          70      70           
  Lines        5755    5763    +8     
======================================
+ Hits         5243    5252    +9     
+ Misses        512     511    -1     

@pep8speaks

pep8speaks commented Jul 11, 2020

Hello @williamFalcon! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-07-11 21:26:56 UTC

@Borda Borda added the ci Continuous Integration label Jul 11, 2020
@williamFalcon williamFalcon changed the title [wip] add tests for single scalar return from training add tests for single scalar return from training Jul 11, 2020
@williamFalcon williamFalcon merged commit 1d565e1 into master Jul 11, 2020
@Borda Borda deleted the return2 branch July 11, 2020 21:56
self.training_step_end_called = True

# make sure loss has the grad
assert isinstance(output, torch.Tensor)
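The snippet above uses the flag-setting pattern: each overridden hook records that it ran, so the test can later assert which hooks the loop actually invoked. A minimal sketch of that pattern, assuming a hypothetical `DeterministicModelSketch` (not the repo's DeterministicModel) and using a plain float where the real code checks for a `torch.Tensor`:

```python
class DeterministicModelSketch:
    """Illustrative stand-in: hooks set *_called flags for later inspection."""

    def __init__(self):
        self.training_step_called = False
        self.training_step_end_called = False

    def training_step(self, batch):
        self.training_step_called = True
        return float(sum(batch)) * 0.0  # deterministic dummy loss

    def training_step_end(self, output):
        self.training_step_end_called = True
        # mirrors the type check in the PR (torch.Tensor there, float here)
        assert isinstance(output, float)
        return output

model = DeterministicModelSketch()
out = model.training_step([1.0, 2.0])
model.training_step_end(out)

# the test can now verify exactly which hooks the loop called
assert model.training_step_called and model.training_step_end_called
```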
@Borda Borda (Member) commented Jul 11, 2020

This looks like a test, but why is it in DeterministicModel? Or rather: why are these tests not part of the template?

@mergify mergify bot requested a review from a team July 11, 2020 22:01
assert len(trainer.progress_bar_metrics) == 0

# make sure training outputs what is expected
for batch_idx, batch in enumerate(model.train_dataloader()):
Member

Why not set these directly?

batch_idx = 0
batch = model.train_dataloader()[0]
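The two styles the reviewer contrasts can be sketched with a plain list standing in for the dataloader:

```python
batches = [[0.1, 0.2], [0.3, 0.4]]  # stand-in for model.train_dataloader()

# style used in the PR: enumerate and check every batch
for batch_idx, batch in enumerate(batches):
    assert len(batch) == 2

# reviewer's suggestion: check only the first batch directly
batch_idx = 0
batch = batches[0]
assert batch == [0.1, 0.2]
```

One caveat with the literal suggestion: a real `torch.utils.data.DataLoader` is iterable but not indexable, so `train_dataloader()[0]` would raise a `TypeError`; `next(iter(dataloader))` would be needed instead, which may be why the PR enumerates.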

@mergify mergify bot requested a review from a team July 11, 2020 22:04
model.training_step_end = model.training_step_end_scalar
model.val_dataloader = None

trainer = Trainer(fast_dev_run=True, weights_summary=None)
Member

missing tmpdir
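In Lightning tests, pytest's `tmpdir` fixture is typically threaded through as `default_root_dir` so logs and checkpoints land in the test's temporary directory rather than the working directory. A sketch of the plumbing being asked for, with a mock `TrainerSketch` (hypothetical, not Lightning's Trainer) and `tempfile` standing in for the fixture:

```python
import pathlib
import tempfile

class TrainerSketch:
    """Mock Trainer: only records where it would write logs/checkpoints."""

    def __init__(self, default_root_dir=".", fast_dev_run=False, weights_summary=None):
        self.default_root_dir = pathlib.Path(default_root_dir)
        self.fast_dev_run = fast_dev_run

# in a real test, `tmpdir` would come from the pytest fixture
with tempfile.TemporaryDirectory() as tmpdir:
    trainer = TrainerSketch(default_root_dir=tmpdir, fast_dev_run=True, weights_summary=None)
    assert trainer.default_root_dir == pathlib.Path(tmpdir)
```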

@mergify mergify bot requested review from a team July 11, 2020 22:06
assert not model.training_step_end_called
assert not model.training_epoch_end_called

# make sure training outputs what is expected
Member

All these asserts below seem to be the same in all three functions. Rather wrap them into a single assert block; otherwise it takes some time to see what the difference is.
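The factoring the reviewer suggests could look like this sketch, where `assert_scalar_outputs` is a hypothetical shared helper and the per-test code shrinks to producing `outputs` plus one call:

```python
def assert_scalar_outputs(outputs, expected_batches):
    """Hypothetical helper: the asserts repeated across the three tests,
    collected in one place so only the differing setup stays inline."""
    assert len(outputs) == expected_batches
    for out in outputs:
        # a single-scalar training_step should yield a bare float loss
        assert isinstance(out, float)

# each of the three tests then reduces to one call:
assert_scalar_outputs([0.5, 0.25, 0.125], expected_batches=3)
```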

@mergify mergify bot requested a review from a team July 11, 2020 22:09
@Borda Borda mentioned this pull request Jul 11, 2020
Labels: ci Continuous Integration
3 participants