
Return the evaluation result of Trainer.test #1694

Closed
reactivetype opened this issue May 1, 2020 · 6 comments · Fixed by #2029
Labels
feature (Is an improvement or enhancement) · help wanted (Open to be worked on)

Comments

@reactivetype

🚀 Feature

This enhancement request is to let Trainer.test return the dictionary of test metrics.

Motivation

  • Currently, Trainer.test returns nothing. The user has to open TensorBoard to see the test results or write a custom logger. For beginners, this enhancement offers welcome simplicity.

  • In some hyperparameter-search scenarios, the calling program needs the test result of each trial. Thus, it would be handy if Trainer.test returned a value, for example:

def objective(trial):
    ...
    trainer = Trainer(...)
    result = trainer.test(best_model, validation_loader)
    return result['test_acc']

Pitch

If test_epoch_end already defines the return value, we can return the eval_results from run_evaluation and let Trainer.test return that result.
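
In LightningModule terms, the pitch looks roughly like the sketch below (written against the 2020-era test_step / test_epoch_end hooks; the model, metric name, and return format are illustrative assumptions, not the final API):

import torch
import pytorch_lightning as pl

class SketchModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def test_step(self, batch, batch_idx):
        x, y = batch
        acc = (self(x).argmax(dim=-1) == y).float().mean()
        return {"test_acc": acc}

    def test_epoch_end(self, outputs):
        # The aggregated dict built here is what the proposal would have
        # Trainer.test() hand back to the caller.
        avg_acc = torch.stack([o["test_acc"] for o in outputs]).mean()
        return {"test_acc": avg_acc, "log": {"test_acc": avg_acc}}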

Alternatives

Pass a reference to a mutable collection of metric summaries to the LightningModule; in on_test_end, the user can then choose to update that collection.
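
A rough sketch of this alternative (the metrics_sink argument and the wiring are hypothetical, not an existing Lightning API):

import pytorch_lightning as pl

class SinkModel(pl.LightningModule):
    def __init__(self, metrics_sink: dict):
        super().__init__()
        self.metrics_sink = metrics_sink  # mutable dict owned by the caller

    def on_test_end(self):
        # Copy whatever the trainer collected during testing into the
        # caller's dict, so the calling program can read it afterwards.
        self.metrics_sink.update(self.trainer.callback_metrics)

The caller would create an empty dict, hand it to the module, run trainer.test(model, ...), and read the dict afterwards.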

Additional context

Some refactoring may be needed, since Trainer.test currently takes different code paths depending on whether a model is passed and whether ddp is used.

@reactivetype added the feature and help wanted labels on May 1, 2020
@github-actions
Contributor

github-actions bot commented May 1, 2020

Hi! Thanks for your contribution, great first issue!

@awaelchli
Member

awaelchli commented May 1, 2020

> Currently, Trainer.test returns nothing. The user has to open TensorBoard to see the test results or write a custom logger. For beginners, this enhancement offers welcome simplicity.

Hi, I believe it is already possible to get the metrics. The trainer stores them after validation or test; see here. The callback metrics include the progress bar metrics and the log metrics, i.e. everything you return in the epoch-end method. So I suggest this workaround.
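
A minimal sketch of that workaround (the helper function and the "test_acc" key are just for illustration):

def run_test_and_get_metrics(trainer, model, test_dataloader):
    # trainer.callback_metrics is populated during evaluation with the
    # progress bar and log metrics, i.e. what test_epoch_end returned.
    trainer.test(model, test_dataloader)
    return dict(trainer.callback_metrics)

# For the hparams-search use case from the issue:
# result = run_test_and_get_metrics(trainer, best_model, validation_loader)
# best_acc = result["test_acc"]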

I also think that returning the results in test would be a good feature to have.

@williamFalcon
Contributor

.test() already prints out the metrics, no?

But yeah, it would be nice to get:

metrics = trainer.test()

However, this is non-trivial in ddp or on TPUs... but we can try it out. @awaelchli want to give this a shot? This is a hard one lol

@williamFalcon
Contributor

Or we could do the same hack we did with weights on ddp: save the metrics to disk, exit the process, load them back up, and return them to the user.
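
Something along those lines, presumably (the file name and helper functions below are purely illustrative, not the actual implementation):

import torch

RESULTS_PATH = "__temp_test_results.pt"  # hypothetical temp file

def save_results_in_ddp_worker(eval_results: dict) -> None:
    # The rank-0 worker writes its metrics to disk before the process exits.
    torch.save(eval_results, RESULTS_PATH)

def load_results_in_main_process() -> dict:
    # The launching process reads them back and returns them to the user.
    return torch.load(RESULTS_PATH)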

@reactivetype
Author

@williamFalcon thanks for the work in #2029, which helps fix the issue for the ddp case. However, the part that returns the evaluation result still seems to be missing. Maybe you want to re-open the issue?

@Borda
Member

Borda commented Jun 8, 2020

It is also related to #1989.
