
🐞 Fix inferencer in Gradio #332

Merged
merged 3 commits into development from fix/ashwin/gradio on May 25, 2022
Conversation

ashwinvaidya17 (Collaborator)

Description

Changes

  • Bug fix (non-breaking change which fixes an issue)

Checklist

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes

@ashwinvaidya17 added the Bug (Something isn't working) label on May 24, 2022
    Returns:
        Namespace: Parsed command line arguments.
    """
    parser = ArgumentParser()
    parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
Contributor

Maybe we could keep config to be consistent with other entrypoints?

Collaborator Author

I wasn't sure what to call it. We can now pass either the model config or just the model name. If I switch back to config, should I drop support for calling the inferencer with just the model name?

Contributor

--config would be in line with the new PL CLI, so I would prefer that.

If I switch back to config, should I drop support for calling the inferencer with just the model name?

I'm not sure I follow this part.

Collaborator Author (@ashwinvaidya17, May 24, 2022)

What I mean is that initially, with the config parameter, we had to pass the YAML file to the inferencer. I changed it to model so that we could pass either the YAML file or only the model name, which is why I wasn't sure what to call the parameter. If I change it back to config, then passing just the model name would not match the parameter name. In that case we can drop support for passing only the model name and keep the YAML file as the only option. That might be better in some sense, as it forces people to ensure that their training config matches the config they use for inference. Otherwise, given just the model name, the inferencer might pick up the default config, which might not match the config that was used for training.

Contributor

I agree. Passing only config would ensure the right config file is used. Otherwise, the inferencer would fall back to the default config, which may not be the same as the one used to train the model.
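
To make the agreed direction concrete, here is a minimal sketch of the entrypoint arguments, assembled from the get_args snippet and the argument definitions quoted in this review; the wrapper and return handling are illustrative, not the exact file contents:

from argparse import ArgumentParser, Namespace
from pathlib import Path


def get_args() -> Namespace:
    """Get command line arguments.

    Returns:
        Namespace: Parsed command line arguments.
    """
    parser = ArgumentParser()
    # Requiring --config (rather than accepting a bare model name) forces the
    # inference config to be the same file that was used for training.
    parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
    parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
    parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.")
    return parser.parse_args()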

tools/inference_gradio.py (resolved)
Comment on lines 89 to 112

- inferencer = OpenVINOInferencer(
-     config=config, path=weight_path, meta_data_path=meta_data
- )
+ inferencer = OpenVINOInferencer(config=config, path=weight_path, meta_data_path=meta_data_path)
Contributor

Same as above.
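
For context, a minimal sketch of how the corrected call could be wired into tools/inference_gradio.py; only OpenVINOInferencer's keyword arguments come from the diff above, while the import path and helper name are assumptions:

from pathlib import Path

from anomalib.deploy import OpenVINOInferencer  # import path is an assumption


def build_inferencer(config: Path, weight_path: Path, meta_data_path: Path) -> OpenVINOInferencer:
    """Hypothetical helper: construct the inferencer from the parsed arguments."""
    # The fix in this PR: the metadata path is passed through under its full name.
    return OpenVINOInferencer(config=config, path=weight_path, meta_data_path=meta_data_path)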

@samet-akcay (Contributor) left a comment

Sorry for being pedantic, but one final comment :)

Comment on lines 58 to 60
parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.")
Contributor

There is a bit of inconsistency in the naming; in fact, this is the case for the other entrypoints as well. I think we should stick to one of the following (see the sketch after this list):

- config, weights, meta_data
- config_path, weight_path, meta_data_path
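
As a purely illustrative sketch of the first option, with all three flags renamed consistently (help strings follow the diff quoted above):

from argparse import ArgumentParser
from pathlib import Path

parser = ArgumentParser()
# Option 1: short, consistent flag names.
parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
parser.add_argument("--weights", type=Path, required=True, help="Path to the model weights")
parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.")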

Collaborator Author

That's alright. The whole point of the review is to ensure code quality.

@djdameln (Contributor) left a comment

I'm fine with these changes, but I guess we'll need to refactor gradio inference once we switch to the PL inferencer in #298. This should probably be included in #298 before merging.

@samet-akcay merged commit b044e63 into development on May 25, 2022
@samet-akcay deleted the fix/ashwin/gradio branch on May 25, 2022 at 14:42
Labels
Bug (Something isn't working)

Projects
None yet

Development
Successfully merging this pull request may close these issues: Gradio Error

3 participants