
Add support for logging hydra's hparams to TensorBoard. #1417

Closed
wants to merge 9 commits

Conversation


@lkhphuc lkhphuc commented Apr 8, 2020

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

What does this PR do?

As discussed in #807, I can currently incorporate Hydra into PyTorch-Lightning easily. The only thing missing is logging hparams to TensorBoard.

Usage becomes:

  • in conf/config.yaml
data:
  path: my_dataset/
model:
  pretrained: albert-based-uncased
  num_layers: 2
  dropout: 0.5
optim:
  lr: 3e-4
  momentum: 0.99
  scheduler: CosineLR

# Group all pytorch-lightning trainer args under this
trainer:
  gpus: 1
  val_percent_check: 0.1
  # ...
  • in train.py
import hydra
from omegaconf import DictConfig, OmegaConf
from pytorch_lightning import Trainer

@hydra.main(config_path="conf/config.yaml")
def main(cfg: DictConfig):
    module = MyModule(hparams=cfg)
    trainer = Trainer(**OmegaConf.to_container(cfg.trainer, resolve=True))
    trainer.fit(module)

For this PR I think no documentation is needed, but if necessary I can write up a minimal example of using Hydra with Lightning.
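The PR's core idea, logging a nested config's values as TensorBoard hparams, can be sketched in plain Python. TensorBoard's hparams view expects a flat mapping of scalars, so a nested config (such as a resolved DictConfig) needs to be flattened into dotted keys first. The `flatten_hparams` helper below is a hypothetical illustration, not the PR's actual code:

```python
def flatten_hparams(cfg, parent_key="", sep="."):
    """Flatten a nested dict (e.g. a resolved config) into dotted keys,
    since TensorBoard's hparams view expects a flat scalar mapping."""
    items = {}
    for key, value in cfg.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten_hparams(value, full_key, sep=sep))
        else:
            items[full_key] = value
    return items

cfg = {
    "model": {"num_layers": 2, "dropout": 0.5},
    "optim": {"lr": 3e-4, "scheduler": "CosineLR"},
}
flat = flatten_hparams(cfg)
print(flat)
# {'model.num_layers': 2, 'model.dropout': 0.5, 'optim.lr': 0.0003, 'optim.scheduler': 'CosineLR'}
```

The resulting flat dict is the shape a logger such as TensorBoard's hparams plugin can consume directly.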

@mergify mergify bot requested a review from a team April 8, 2020 15:10
@Borda Borda added the feature Is an improvement or enhancement label Apr 8, 2020
@Borda Borda added this to the 0.7.3 milestone Apr 8, 2020
(Resolved review comment on pytorch_lightning/loggers/base.py)
codecov bot commented Apr 8, 2020

Codecov Report

Merging #1417 into master will decrease coverage by <1%.
The diff coverage is 71%.

@@           Coverage Diff           @@
##           master   #1417    +/-   ##
=======================================
- Coverage      92%     92%   -<1%     
=======================================
  Files          63      63            
  Lines        3384    3389     +5     
=======================================
+ Hits         3104    3107     +3     
- Misses        280     282     +2


pep8speaks commented Apr 8, 2020

Hello @lkhphuc! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-04-22 13:49:19 UTC

@Borda Borda requested review from luiscape, MattPainter01 and a team April 8, 2020 16:44
@Borda Borda changed the title Add support for logging hydra's hparams to TensorBoard. [blocked by #1419] Add support for logging hydra's hparams to TensorBoard. Apr 8, 2020
@Borda Borda changed the title [blocked by #1419] Add support for logging hydra's hparams to TensorBoard. Add support for logging hydra's hparams to TensorBoard. Apr 8, 2020

Borda commented Apr 8, 2020

The changelog needs to be rebased on the new release #1419.


lkhphuc commented Apr 8, 2020

Will do when it's merged.


mergify bot commented Apr 10, 2020

This pull request is now in conflict... :(


omry commented Apr 12, 2020

trainer = Trainer(**OmegaConf.to_container(cfg.trainer, resolve=True))

I recommend against converting the OmegaConf object to a dict unless there is absolutely no choice.
There are a few features offered by OmegaConf that are lost when converting to primitive containers.

For example, interpolation and mutation validation (the second will become a much bigger thing in the upcoming OmegaConf 2.0 and Hydra 1.0, with the introduction of structured configs).
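A pure-Python sketch of what "interpolation is lost on conversion" means (this is not the OmegaConf API; `LazyConfig` and its methods are hypothetical stand-ins): a lazy config resolves `${...}` references at access time, so a later mutation of the source key propagates, while a converted plain dict freezes the value at conversion time.

```python
import re

class LazyConfig(dict):
    """Toy stand-in for a DictConfig: resolves ${key} references on access."""
    def __getitem__(self, key):
        value = super().__getitem__(key)
        if isinstance(value, str):
            # Replace each ${other_key} with that key's (resolved) value.
            value = re.sub(r"\$\{(\w+)\}", lambda m: str(self[m.group(1)]), value)
        return value

    def to_container(self):
        """Rough analogue of OmegaConf.to_container(resolve=True):
        eagerly resolves everything into a plain dict."""
        return {k: self[k] for k in self}

cfg = LazyConfig(lr="3e-4", log_name="run_lr_${lr}")
frozen = cfg.to_container()
cfg["lr"] = "1e-3"          # mutate after conversion
print(cfg["log_name"])      # lazy: reflects the new lr
print(frozen["log_name"])   # frozen at conversion time
```

The lazy object prints `run_lr_1e-3`, the converted dict still holds `run_lr_3e-4`: exactly the behavior that disappears once you call `to_container`.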


lkhphuc commented Apr 12, 2020

If there were a way to use Hydra without the ugly ** keyword-argument expansion I would definitely use it, but as far as I have tried, this is the only way that works.

There are a few features offered by OmegaConf that are lost when converting to primitive containers.

As these are only the trainer's flags and are handled by Lightning, I don't see any lost benefit for users at all. Users can still use all the features provided by Hydra when passing the config to their own Module.


omry commented Apr 12, 2020

If there were a way to use Hydra without the ugly ** keyword-argument expansion I would definitely use it, but as far as I have tried, this is the only way that works.

Have you tried this?

trainer = Trainer(**cfg.trainer)

There are a few features offered by OmegaConf that are lost when converting to primitive containers.

As these are only the trainer's flags and are handled by Lightning, I don't see any lost benefit for users at all. Users can still use all the features provided by Hydra when passing the config to their own Module.

This is only partially true: you will get inferior error reporting for both code and configuration errors.
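The `Trainer(**cfg.trainer)` suggestion works because Python's `**` expansion accepts any `Mapping`, not just `dict`. A stdlib-only sketch (the `TrainerArgs` class below is hypothetical, standing in for OmegaConf's `DictConfig`, and `trainer` stands in for the `Trainer` constructor):

```python
from collections.abc import Mapping

class TrainerArgs(Mapping):
    """Toy mapping standing in for a DictConfig node."""
    def __init__(self, **kwargs):
        self._data = dict(kwargs)
    def __getitem__(self, key):
        return self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

def trainer(gpus=0, val_percent_check=1.0):
    # Stand-in for pytorch_lightning.Trainer(...): just records its kwargs.
    return {"gpus": gpus, "val_percent_check": val_percent_check}

args = TrainerArgs(gpus=1, val_percent_check=0.1)
result = trainer(**args)   # ** works on any Mapping; no to_container needed
print(result)              # {'gpus': 1, 'val_percent_check': 0.1}
```

So the conversion step is not required for the expansion itself; the config object can be splatted directly.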

(Review comments on pytorch_lightning/loggers/base.py, resolved)

Borda commented Apr 16, 2020

@lkhphuc how is it going, do you need my help to finish it?
By the way, I think it is linked to #1271.

matplotlib>=3.1.1
hydra-core>=0.11.3

Please pin to 0.11.3 for now.
The next version (1.0.0) will break some compatibility, and it would be good to test it before you allow it.

One of the most important changes is that the supported Python versions are changing from >=2.7 to >=3.6.

I will have a release candidate for Hydra 1.0 in a few weeks; at that point, after testing, you can relax the restriction.


lkhphuc commented Apr 21, 2020

Hi @Borda, I rebased on master and pinned the Hydra version for now as suggested.

This is a very simple PR: instead of passing a Namespace to the Module's hparams, you can pass a DictConfig and everything in Lightning still works fine (this PR fixes the last remaining problem, TensorBoard logging of hyper-parameters).

This is mostly for my own use case, so I'm not sure whether it aligns with Lightning's future roadmap; I just think this patch might help.
If deep Lightning + Hydra integration is planned and this patch might complicate things, feel free to close it.


Borda commented Apr 22, 2020

@williamFalcon what do you say about adding this dependency?

(Resolved review comment on requirements-extra.txt)

mergify bot commented Apr 22, 2020

This pull request is now in conflict... :(

@Borda Borda modified the milestones: 0.7.4, 0.7.5 Apr 24, 2020
@Borda Borda requested review from williamFalcon and removed request for luiscape April 30, 2020 16:11

Borda commented Apr 30, 2020

@williamFalcon can you give your opinion on adding this extra dependency?


omry commented May 1, 2020

I don't think it makes sense.

  1. The OmegaConf dependency is enough for this; there is absolutely no reason to depend on Hydra here.
  2. This can probably also be done in userland, by converting the config object to a primitive before passing it in (OmegaConf.to_container(params, resolve=True)).
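The userland conversion described above can be sketched in stdlib-only Python (`to_primitive` and `Node` are hypothetical stand-ins for `OmegaConf.to_container` and a `DictConfig` tree; real `to_container` additionally handles interpolation when `resolve=True`):

```python
from collections.abc import Mapping

def to_primitive(node):
    """Recursively convert Mapping/list nodes into plain dicts and lists,
    roughly what OmegaConf.to_container does (minus interpolation handling)."""
    if isinstance(node, Mapping):
        return {k: to_primitive(v) for k, v in node.items()}
    if isinstance(node, (list, tuple)):
        return [to_primitive(v) for v in node]
    return node

class Node(dict):
    """Toy dict subclass standing in for a DictConfig tree."""

cfg = Node(trainer=Node(gpus=1, val_percent_check=0.1))
plain = to_primitive(cfg)
print(type(plain["trainer"]))  # <class 'dict'>
```

Keeping this in user code, as suggested, means the library itself never needs a Hydra (or even OmegaConf) dependency.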

@Borda Borda added discussion In a discussion stage won't fix This will not be worked on labels May 1, 2020
@stale stale bot removed the won't fix This will not be worked on label May 1, 2020

omry commented May 1, 2020

As I said earlier, I would be happy to chat with the PyTorch Lightning leads about what a proper integration might look like.


Borda commented May 1, 2020

@williamFalcon ^^

@williamFalcon
hi @omry ping me on lightning slack or pt slack?


omry commented May 4, 2020

hi @omry ping me on lightning slack or pt slack?

Sent you a direct message in the Lightning Slack.

@reactivetype

Adding Hydra as a dependency may add tech debt later. I suggest improving support for OmegaConf as an alternative to argparse's Namespace. I think PL already supports this to some extent.

@williamFalcon

Yeah, no Hydra dep. I need to think about it this week, but we should make it easy to use Hydra.

@Borda Borda added the won't fix This will not be worked on label May 10, 2020
@Borda Borda closed this May 10, 2020