
Can't cast Trainer automatically generated args to their required types #1139

Closed
alexeykarnachev opened this issue Mar 13, 2020 · 8 comments · Fixed by #1147
Labels
bug (Something isn't working) · feature (Is an improvement or enhancement) · let's do it! (approved to implement)
Milestone

Comments

@alexeykarnachev
Contributor

alexeykarnachev commented Mar 13, 2020

❓ Questions and Help

What is your question?

I'm not sure whether this is a bug, so I'm filing it as a question.
The problem is: if I want to add the Trainer arguments to my custom ArgumentParser object, I call the add_argparse_args Trainer classmethod. But this method doesn't cast the Trainer arguments to their required types.
It forces me to cast the arguments myself, like so:

trainer_args.update({
    'accumulate_grad_batches': int(trainer_args['accumulate_grad_batches']),
    'train_percent_check': float(trainer_args['train_percent_check']),
    'val_percent_check': float(trainer_args['val_percent_check']),
    'val_check_interval': int(trainer_args['val_check_interval']),
    'track_grad_norm': int(trainer_args['track_grad_norm']),
    'max_epochs': int(trainer_args['max_epochs']),
    'precision': int(trainer_args['precision']),
    'gradient_clip_val': float(trainer_args['gradient_clip_val']),
})

And after that, I can pass updated arguments to the Trainer:

trainer = pytorch_lightning.Trainer(**trainer_args)

And I can't find a central place where the Trainer handles the automatically generated arguments (their types).

What have you tried?

I've tried to pass arguments to the Trainer without handling their types. For instance, if I don't cast accumulate_grad_batches to int, the following exception is raised:

TypeError: Gradient accumulation supports only int and dict types

What's your environment?

  • OS: [Linux]
  • Packaging [pip]
  • Version [0.7.1]
@alexeykarnachev added the question (Further information is requested) label Mar 13, 2020
@Borda
Member

Borda commented Mar 13, 2020

@alexeykarnachev could you pls give a complete example? I am missing where the ArgumentParser comes in... here it seems like you want to update trainer_args by itself

trainer_args.update({'accumulate_grad_batches': int(trainer_args['accumulate_grad_batches']), ...})

if you need the default values, pls use default_attributes

@alexeykarnachev
Contributor Author

alexeykarnachev commented Mar 13, 2020

Yes, sorry for the insufficient detail on my side.

Here is a gist:
https://gist.github.com/alexeykarnachev/fd010b797d92873905780c856e9334d5

It's not a runnable script, because I've simplified it a lot, but it demonstrates the problem I've described.

If I execute this script with any Trainer argument, for example:

python script.py --my_custom_arg_1=1 --my_custom_arg_n=2 --accumulate_grad_batches=1

it will fail without the type casting, because accumulate_grad_batches will be interpreted as a string, and the Trainer will fail.
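The core of the problem can be shown without Lightning at all (a minimal sketch; the argument name just mirrors the Trainer's): argparse parses any value from the command line as a string when add_argument is given no explicit type=.

```python
import argparse

parser = argparse.ArgumentParser()
# This mirrors how add_argparse_args registers arguments in 0.7.1:
# no `type=` is passed, so command-line values stay strings.
parser.add_argument('--accumulate_grad_batches', default=1)

args = parser.parse_args(['--accumulate_grad_batches=1'])
print(type(args.accumulate_grad_batches))  # <class 'str'>
```

Note that the int default only applies when the flag is absent; any value actually typed on the command line arrives as a string.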

@Borda
Member

Borda commented Mar 14, 2020

if I understand your use case correctly, you want some parameters to be passed from the CMD and/but some Trainer arguments to be fixed - forced to have a certain value, right?

fixed_args = {
    'logger': tb_logger_callback,
    'checkpoint_callback': model_checkpoint_callback,
    'show_progress_bar': True,
    'progress_bar_refresh_rate': 1,
    'row_log_interval': 1,
}
trainer_args = vars(args)
trainer_args.update(fixed_args)

but in the case of accumulate_grad_batches (and some others) we shall do some casting, e.g. with eval
Would you mind sending a PR to fix Trainer.from_argparse_args(...) and/or Trainer.add_argparse_args(...) so that it also contains the type?
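For illustration, supplying type= to add_argument makes argparse do the cast itself (a minimal sketch of the fix being discussed here, not the actual PR code):

```python
import argparse

# With `type=int`, argparse casts the command-line string before the
# Trainer ever sees it, so no manual casting is needed afterwards.
parser = argparse.ArgumentParser()
parser.add_argument('--accumulate_grad_batches', type=int, default=1)

args = parser.parse_args(['--accumulate_grad_batches=4'])
print(type(args.accumulate_grad_batches))  # <class 'int'>
```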

@Borda added the bug (Something isn't working), feature (Is an improvement or enhancement), and "let's do it!" (approved to implement) labels and removed the "information needed" and question (Further information is requested) labels Mar 14, 2020
@Borda Borda added this to the 0.7.2 milestone Mar 14, 2020
@alexeykarnachev
Contributor Author

Yes, I'll do it.

But do you have any suggestions on how it can be done?
I see it like this:
in the Trainer.add_argparse_args classmethod there is a loop:

        for arg in trainer_default_params:
            parser.add_argument(
                f'--{arg}',
                default=trainer_default_params[arg],
                dest=arg,
                help='autogenerated by pl.Trainer'
            )

It's possible to pass a type argument to parser.add_argument, but this "type" first needs to be obtained somehow.
I suppose it can be done via signature inspection in the Trainer.default_attributes classmethod.
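The signature-inspection idea could look roughly like this (a hypothetical helper sketched against a dummy class, not the actual Lightning implementation): infer each argument's type from its default value and hand it to add_argument.

```python
import argparse
import inspect

def add_argparse_args_typed(parser, cls):
    """Hypothetical helper: register cls.__init__ keyword arguments on
    parser, inferring each argument's type from its default value."""
    signature = inspect.signature(cls.__init__)
    for name, param in signature.parameters.items():
        if name == 'self' or param.default is inspect.Parameter.empty:
            continue
        # Infer the cast from the default value; fall back to str when the
        # default is None. Note: bool would need special handling, since
        # bool('False') is True.
        arg_type = type(param.default) if param.default is not None else str
        parser.add_argument(
            f'--{name}',
            type=arg_type,
            default=param.default,
            dest=name,
            help='autogenerated by pl.Trainer',
        )
    return parser

# Dummy stand-in for the Trainer, just to show the mechanism.
class DummyTrainer:
    def __init__(self, max_epochs=1000, gradient_clip_val=0.0):
        self.max_epochs = max_epochs

parser = add_argparse_args_typed(argparse.ArgumentParser(), DummyTrainer)
args = parser.parse_args(['--max_epochs=5'])
print(type(args.max_epochs))  # <class 'int'>
```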

@alexeykarnachev alexeykarnachev changed the title Can't cast Trainer automatically generated args to their required types WIP Can't cast Trainer automatically generated args to their required types Mar 14, 2020
@alexeykarnachev alexeykarnachev changed the title WIP Can't cast Trainer automatically generated args to their required types Can't cast Trainer automatically generated args to their required types Mar 14, 2020
@alexeykarnachev
Contributor Author

Please consider PR #1147

@williamFalcon
Contributor

don’t we already have support for this?

@williamFalcon
Contributor

@alexeykarnachev
Contributor Author

@williamFalcon this issue is about assigning types to the automatically added arguments. By default, argparse assumes they are strings, and these string-typed args get passed to the Trainer constructor and break it. So this issue is about type casting, not about any new functionality.
