
Trainer.parse_argparser does not yield sensible default for default_root_dir #1916

Closed
jeremyjordan opened this issue May 21, 2020 · 6 comments · Fixed by #2526
Labels: bug (Something isn't working) · help wanted (Open to be worked on)
@jeremyjordan (Contributor) commented May 21, 2020

🐛 Bug

Calling Trainer.parse_argparser returns True for default_root_dir; however, a string is expected.

To Reproduce

Steps to reproduce the behavior:

>>> from pytorch_lightning import Trainer
>>> from argparse import ArgumentParser, Namespace
>>> parser = ArgumentParser(add_help=False)
>>> parser = Trainer.add_argparse_args(parent_parser=parser)
>>> args = Trainer.parse_argparser(parser)
>>> args
Namespace(accumulate_grad_batches=1, amp_level='O1', auto_lr_find=False, auto_scale_batch_size=False, auto_select_gpus=False, benchmark=False, check_val_every_n_epoch=1, checkpoint_callback=True, default_root_dir=True, deterministic=False, distributed_backend=True, early_stop_callback=False, fast_dev_run=False, gpus=<function Trainer._arg_default at 0x1219efdd0>, gradient_clip_val=0, log_gpu_memory=True, log_save_interval=100, logger=True, max_epochs=1000, max_steps=True, min_epochs=1, min_steps=True, num_nodes=1, num_processes=1, num_sanity_val_steps=2, overfit_pct=0.0, precision=32, print_nan_grads=False, process_position=0, profiler=True, progress_bar_callback=True, progress_bar_refresh_rate=1, reload_dataloaders_every_epoch=False, replace_sampler_ddp=True, resume_from_checkpoint=True, row_log_interval=10, terminate_on_nan=False, test_percent_check=1.0, tpu_cores=True, track_grad_norm=-1, train_percent_check=1.0, truncated_bptt_steps=True, val_check_interval=1.0, val_percent_check=1.0, weights_save_path=True, weights_summary='full')
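
For context, one way to surface every affected argument at once (a sketch added here for illustration, not part of the original report) is to diff the parsed defaults against the defaults declared in Trainer.__init__:

import inspect
from argparse import ArgumentParser

from pytorch_lightning import Trainer

parser = Trainer.add_argparse_args(ArgumentParser(add_help=False))
args = Trainer.parse_argparser(parser)

# Collect the defaults declared in Trainer.__init__ ...
init_defaults = {
    name: param.default
    for name, param in inspect.signature(Trainer.__init__).parameters.items()
    if param.default is not inspect.Parameter.empty
}

# ... and report every argument whose parsed default does not match.
for name, expected in init_defaults.items():
    if hasattr(args, name) and getattr(args, name) != expected:
        print(f"{name}: parsed {getattr(args, name)!r}, expected {expected!r}")

Running this against the Namespace above would flag default_root_dir (and, as noted below, every other parameter whose __init__ default is None).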
jeremyjordan added the "help wanted" label on May 21, 2020
@jeremyjordan (Contributor, author) commented:

@Borda any ideas?

Borda added the "bug" label on May 22, 2020
Borda self-assigned this on May 23, 2020
@jeremyjordan (Contributor, author) commented May 24, 2020

The same unexpected behavior occurs for weights_save_path, which is causing some tests to fail in #1504. I can temporarily resolve this by treating True the same as None (i.e. selecting the default value for the user), but this might not be ideal.
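
For reference, that interim workaround could look roughly like this (my sketch, not the actual #1504 code; it simply coerces the leaked True back to None so the Trainer falls back to its own default):

from argparse import ArgumentParser

from pytorch_lightning import Trainer

parser = Trainer.add_argparse_args(ArgumentParser(add_help=False))
args = Trainer.parse_argparser(parser)

# A leaked True means the flag was never actually supplied on the command
# line, so map it back to None and let the Trainer pick its own default.
for name in ("default_root_dir", "weights_save_path"):
    if getattr(args, name, None) is True:
        setattr(args, name, None)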

@lezwon (Contributor) commented Jun 23, 2020

@jeremyjordan is there an update on this? I was facing this issue while writing CLI tests for #2094. I can continue with those tests once this issue is fixed.

@EspenHa (Contributor) commented Jul 6, 2020

This issue actually affects all parameters that default to None in __init__; for instance, min_steps, max_steps, log_gpu_memory, distributed_backend, weights_save_path, truncated_bptt_steps, and resume_from_checkpoint are all set to True.

I have a branch with a test for this here: https://github.com/EspenHa/pytorch-lightning/tree/add_argparse_test
I also implemented a fix here: https://github.com/EspenHa/pytorch-lightning/tree/fix_argparse_bug
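
For illustration, a regression test along these lines (my paraphrase of what such a test could check, not necessarily the exact code in the branch above) would parse an empty command line and assert the result matches the __init__ defaults:

import inspect
from argparse import ArgumentParser

from pytorch_lightning import Trainer


def test_parse_argparser_uses_init_defaults():
    # Parsing an empty command line should reproduce Trainer.__init__ defaults.
    parser = Trainer.add_argparse_args(ArgumentParser(add_help=False))
    args = parser.parse_args([])
    for name, param in inspect.signature(Trainer.__init__).parameters.items():
        if param.default is inspect.Parameter.empty or not hasattr(args, name):
            continue
        assert getattr(args, name) == param.default, name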

@Borda If you would like, I can submit these as a PR?

@Borda (Member) commented Jul 6, 2020

@EspenHa great, pls send a PR 🚀

@EspenHa (Contributor) commented Jul 6, 2020

@Borda PR submitted here: #2526 🚀
