
Trainer.add_argparse_args(parser) breaks the default TensorBoard hparams logging. #1321

Closed
lkhphuc opened this issue Mar 31, 2020 · 2 comments
Labels
bug Something isn't working help wanted Open to be worked on

Comments

@lkhphuc

lkhphuc commented Mar 31, 2020

🐛 Bug

Trainer.add_argparse_args(parser) breaks the default TensorBoard hparams logging.

To Reproduce

Steps to reproduce the behavior:

I pretty much just put together the sample code from the Hyperparameters section of the docs, and it throws the error below.

Code sample

from argparse import ArgumentParser

import pytorch_lightning as pl
import torch
from torch.optim import Adam
from torch.utils.data import DataLoader


class LitMNIST(pl.LightningModule):
    def __init__(self, hparams):
        super().__init__()
        self.hparams = hparams

        self.layer_1 = torch.nn.Linear(28 * 28, hparams.layer_1_dim)

    def forward(self, x):
        return self.layer_1(x)

    def train_dataloader(self):
        return DataLoader(mydata(), batch_size=self.hparams.batch_size)

    def configure_optimizers(self):
        return Adam(self.parameters(), lr=self.hparams.learning_rate)


def main(args):
    model = LitMNIST(args)
    trainer = pl.Trainer()
    trainer.fit(model)


if __name__ == "__main__":
    parser = ArgumentParser()

    # parametrize the network
    parser.add_argument('--layer_1_dim', type=int, default=128)
    parser.add_argument('--learning_rate', type=float, default=1e-3)

    # add all the available Trainer options to the parser
    parser = pl.Trainer.add_argparse_args(parser)

    args = parser.parse_args()
    main(args)

Traceback (most recent call last):
  File "tmp.py", line 56, in <module>
    main(args)
  File "tmp.py", line 40, in main
    trainer.fit(model)
  File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 630, in fit
    self.run_pretrain_routine(model)
  File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 748, in run_pretrain_routine
    self.logger.log_hyperparams(ref_model.hparams)
  File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_lightning/loggers/base.py", line 18, in wrapped_fn
    fn(self, *args, **kwargs)
  File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/pytorch_lightning/loggers/tensorboard.py", line 113, in log_hyperparams
    exp, ssi, sei = hparams(params, {})
  File "/Users/phuc/miniconda3/envs/thinc/lib/python3.7/site-packages/torch/utils/tensorboard/summary.py", line 156, in hparams
    raise ValueError('value should be one of int, float, str, bool, or torch.Tensor')
ValueError: value should be one of int, float, str, bool, or torch.Tensor

The value it fails on is the key callback, whose value is the empty list [].
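For illustration, the failure can be reproduced without Lightning at all: TensorBoard's hparams() only accepts int, float, str, bool, or torch.Tensor values, while add_argparse_args fills the namespace with Trainer options, some of which have non-primitive defaults such as empty lists. A minimal sketch of that mechanism (the --callbacks argument below is hypothetical, standing in for whichever Trainer option carries the list default):

```python
from argparse import ArgumentParser

# Hypothetical stand-in for Trainer.add_argparse_args: one Trainer-style
# option whose default is a list, which TensorBoard's hparams() rejects.
parser = ArgumentParser()
parser.add_argument('--layer_1_dim', type=int, default=128)
parser.add_argument('--callbacks', nargs='*', default=[])

args = parser.parse_args([])

# Mimic the type check in torch/utils/tensorboard/summary.py:
for key, value in vars(args).items():
    if not isinstance(value, (int, float, str, bool)):
        print('would raise ValueError for key', key, 'with value', value)
```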

Expected behavior

Trainer.add_argparse_args(parser) should not break the default TensorBoard hparams logging.
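Until that's the case, a possible workaround (my own sketch, not an official API) is to drop the entries TensorBoard can't serialize from the namespace before passing it to the model:

```python
from argparse import Namespace

def keep_loggable(hparams: Namespace) -> Namespace:
    # Keep only the value types TensorBoard's hparams() accepts
    # (torch.Tensor is also allowed upstream; omitted here to stay stdlib-only).
    allowed = (int, float, str, bool)
    return Namespace(**{k: v for k, v in vars(hparams).items()
                        if isinstance(v, allowed)})

# The problematic empty-list entry is dropped, the primitives survive:
clean = keep_loggable(Namespace(layer_1_dim=128, learning_rate=1e-3, callback=[]))
```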

Environment

  • PyTorch Version (e.g., 1.0): 1.3.1
  • OS (e.g., Linux): Linux
  • How you installed PyTorch (conda, pip, source): pip
  • Python version: 3.7.5
@lkhphuc lkhphuc added bug Something isn't working help wanted Open to be worked on labels Mar 31, 2020
@awaelchli
Member

awaelchli commented Apr 1, 2020

Hi, I'm not sure, but it looks like this was fixed on master recently. See #1130.
Could you install from master and try again?

pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade

@lkhphuc
Author

lkhphuc commented Apr 1, 2020

Yep, that fixes it. Thanks.

@lkhphuc lkhphuc closed this as completed Apr 1, 2020