(WIP) Visualizer improvements pt2 #298

Merged

Conversation

@djdameln (Contributor) commented May 6, 2022

Description

(WIP) Second part of the visualizer refactor. This PR mainly focuses on the inferencer. The torch inferencer was simplified to use PyTorch Lightning. The visualizer callback was refactored to also generate and show the visualization results after each predict batch. OpenVINO inference was moved to a separate file in tools/openvino.py

Addresses #317 #294 #326

Known issues:
Currently the visualization results cannot be customized (i.e., choosing which types of images are shown).
Visualization results are only shown, not saved to the file system.

Changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes

@ashwinvaidya17 (Collaborator) left a comment

This is a nice modification. I am looking forward to the final inferencer design.

"""
parser = ArgumentParser()
# --model_config_path will be deprecated in 0.2.8 and removed in 0.2.9
parser.add_argument("--model_config_path", type=str, required=False, help="Path to a model config file")
Collaborator:

Since this is a new file, I feel we should drop deprecated flags.

parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
parser.add_argument("--image_path", type=Path, required=True, help="Path to an image to infer.")
parser.add_argument("--save_path", type=Path, required=False, help="Path to save the output image.")
parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.")
Collaborator:

We need the metadata to load the threshold for OpenVINO models, so required should be set to True.
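A minimal sketch of the suggested change: mark --meta_data as required, since the OpenVINO inferencer needs the metadata (e.g. the anomaly threshold) to post-process predictions. The flag names mirror the snippet above; the parse call with inline arguments is only for illustration.

```python
from argparse import ArgumentParser
from pathlib import Path

# Sketch: --meta_data is now required=True, matching the reviewer's suggestion.
parser = ArgumentParser()
parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
parser.add_argument("--meta_data", type=Path, required=True, help="Path to JSON file containing the metadata.")

# Illustrative invocation; a real CLI would call parse_args() with no arguments.
args = parser.parse_args(["--weight_path", "model.bin", "--meta_data", "meta.json"])
```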

)

args = parser.parse_args()
if args.model_config_path is not None:
Collaborator:

Same here. We don't need deprecation warnings in a new file.

_dataloader_idx (int): Index of the dataloader that yielded the current batch (unused).
"""
for visualizer in self.generate_visualizer(outputs):
visualizer.show()
Collaborator:

Since this is work in progress I am assuming the final design will not always show the results by default.

@djdameln (Contributor, Author):

This requires substantial changes to the visualizer callback class, so I'm planning to address that in part 3 of the refactor. That one will focus on customizability of the visualizations. This means that if we merge this PR, we will temporarily always show the visualizations during inference. If we don't want this, I can re-target this PR to a feature branch to which I'll then also target pt3.
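A hypothetical sketch of the direction discussed for part 3: make showing and saving independent switches instead of unconditionally calling visualizer.show() after each batch. The field names (show_images, save_images) are assumptions, not the merged API; the function returns the actions it would take so the behavior is easy to check.

```python
from dataclasses import dataclass

@dataclass
class VisualizationSettings:
    # Assumed configuration fields; defaults chosen so nothing pops up
    # during unattended inference runs.
    show_images: bool = False
    save_images: bool = True

def dispatch(visualizer, settings: VisualizationSettings, save_path=None):
    """Return the actions that would be taken for one visualization."""
    actions = []
    if settings.show_images:
        actions.append("show")  # would call visualizer.show()
    if settings.save_images and save_path is not None:
        actions.append("save")  # would call visualizer.save(save_path)
    return actions
```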

@@ -96,6 +95,60 @@ def _add_images(
if "local" in module.hparams.project.log_images_to:
visualizer.save(Path(module.hparams.project.path) / "images" / filename.parent.name / filename.name)

def generate_visualizer(self, outputs):
Collaborator:

This is a nice use of yield. I think we should also add a return type to the method. Something like -> Iterator[Visualizer]
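A minimal sketch of the suggested annotation: a generator method typed as returning Iterator[Visualizer]. The Visualizer class here is a stub for illustration only, not the project's real class.

```python
from typing import Any, Iterator, List

class Visualizer:
    """Stub standing in for the real Visualizer class."""
    def __init__(self, output: Any) -> None:
        self.output = output

    def show(self) -> None:
        pass  # placeholder for the real display logic

def generate_visualizer(outputs: List[Any]) -> Iterator[Visualizer]:
    # A generator function's return type is annotated as Iterator[...]
    for output in outputs:
        yield Visualizer(output)
```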

@djdameln mentioned this pull request May 11, 2022
Comment on lines 52 to 55
warnings.warn(
"Torch Inferencer is deprecated and will be removed in a future version. To run Inference with a PyTorch "
"model, please follow the example in `tools/inference.py`"
)
Contributor:

Maybe we could decide and specify the version in which it will be removed. Note that we aim to release a new version every sprint.
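A sketch of that suggestion: name the removal version in the warning so the deprecation window is explicit. The version numbers here are placeholders, not a decided schedule.

```python
import warnings

def warn_torch_inferencer_deprecated() -> None:
    # Placeholder versions: the actual deprecation/removal releases were
    # still under discussion in this thread.
    warnings.warn(
        "TorchInferencer is deprecated since v0.3.0 and will be removed in "
        "v0.4.0. To run inference with a PyTorch model, follow the example "
        "in tools/inference.py.",
        DeprecationWarning,
        stacklevel=2,
    )
```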

@djdameln (Contributor, Author):

Actually, I'm not entirely sure we should deprecate this file at all. There might be some use cases where users want to perform pure PyTorch inference. What do you think @samet-akcay @ashwinvaidya17?

Collaborator:

Do we support pure PyTorch inference? If I remember correctly, we still load the model config file for PyTorch models and load the Lightning module.

@djdameln (Contributor, Author) commented May 23, 2022:

Hmm, yeah, we still use the PL module in this (the old) torch inferencer. In that case, maybe we could refactor the (pure) torch inferencer to load the model from a .pth file and use only the classes in torch_model.py.

@djdameln (Contributor, Author) commented May 23, 2022:

I feel we should support the possibility of pure PyTorch inference, in case some users might need it. The default could still be the new PL inferencer.
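One way the "both backends, Lightning by default" idea could look is dispatching on the weight-file extension. This mapping is a hypothetical sketch, not the merged design; the inferencer names follow the discussion above.

```python
from pathlib import Path

def select_inferencer(weight_path: str) -> str:
    """Pick an inferencer backend from the weight-file extension (sketch)."""
    suffix = Path(weight_path).suffix.lower()
    if suffix == ".ckpt":
        return "LightningInferencer"  # default: PyTorch Lightning checkpoint
    if suffix in (".pt", ".pth"):
        return "TorchInferencer"  # pure PyTorch weights via torch_model.py
    if suffix in (".xml", ".bin", ".onnx"):
        return "OpenVINOInferencer"
    raise ValueError(f"Unsupported model format: {suffix}")
```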

Contributor:

If we decide to keep the torch implementation, this would then need to be removed.

Comment on lines 68 to 85
def add_label(prediction: np.ndarray, scores: float, font: int = cv2.FONT_HERSHEY_PLAIN) -> np.ndarray:
    """If the model outputs a score, add it as a text label to the output image.

    Args:
        prediction (np.ndarray): Resized anomaly map.
        scores (float): Confidence score.
        font (int): OpenCV font face used to render the label text.

    Returns:
        np.ndarray: Image with score text.
    """
    text = f"Confidence Score {scores:.0%}"
    font_size = prediction.shape[1] // 1024 + 1  # Text scale is calculated based on the reference size of 1024
    (width, height), baseline = cv2.getTextSize(text, font, font_size, thickness=font_size // 2)
    label_patch = np.zeros((height + baseline, width + baseline, 3), dtype=np.uint8)
    label_patch[:, :] = (225, 252, 134)
    cv2.putText(label_patch, text, (0, baseline // 2 + height), font, font_size, 0, lineType=cv2.LINE_AA)
    prediction[: baseline + height, : baseline + width] = label_patch
    return prediction
Contributor:

Does this address PR #291?
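The text-scale heuristic in add_label can be checked in isolation: the scale grows stepwise with image width, relative to a 1024px reference, so small images get scale 1 and each additional 1024px of width adds one step.

```python
def font_scale(image_width: int) -> int:
    # Same heuristic as in add_label: one scale step per 1024px of width.
    return image_width // 1024 + 1
```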

Comment on lines 100 to 101
module = import_module("anomalib.deploy.inferencers.openvino")
OpenVINOInferencer = getattr(module, "OpenVINOInferencer") # pylint: disable=invalid-name
Contributor:

Is this still needed? Wouldn't it be possible to just import OpenVINOInferencer at the top of the module?
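One reason a deferred import might be kept (an assumption, not confirmed by this thread): openvino is an optional dependency, so importing OpenVINOInferencer at the top of the module would break installs without it. A sketch of an explicit guard:

```python
from importlib import import_module
from importlib.util import find_spec

def load_openvino_inferencer():
    """Import OpenVINOInferencer lazily, failing with a clear message (sketch)."""
    if find_spec("openvino") is None:
        raise ImportError("OpenVINO is not installed; install it to use OpenVINOInferencer")
    module = import_module("anomalib.deploy.inferencers.openvino")
    return getattr(module, "OpenVINOInferencer")
```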

if args.image_path.is_dir():
# Write the output to save_path in the same structure as the input directory.
for image in args.image_path.glob("**/*"):
if image.is_file() and image.suffix in (".jpg", ".png", ".jpeg"):
Contributor:

Can we use IMG_EXTENSIONS from torchvision?

from torchvision.datasets.folder import IMG_EXTENSIONS

Contributor:

The reason being that there could be other image extensions we could actually support, but currently exclude with (".jpg", ".png", ".jpeg").
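A sketch of the suggested change. The extension tuple matches torchvision's IMG_EXTENSIONS but is hard-coded here so the sketch runs without torchvision; the real code would import it from torchvision.datasets.folder. Matching is case-insensitive so files like "a.JPG" are not skipped.

```python
from pathlib import Path
from typing import Iterator

# Hard-coded copy of torchvision.datasets.folder.IMG_EXTENSIONS for the sketch.
IMG_EXTENSIONS = (".jpg", ".jpeg", ".png", ".ppm", ".bmp", ".pgm", ".tif", ".tiff", ".webp")

def iter_images(root: Path) -> Iterator[Path]:
    """Yield every image file under root, matching extensions case-insensitively."""
    for path in sorted(root.glob("**/*")):
        if path.is_file() and path.suffix.lower() in IMG_EXTENSIONS:
            yield path
```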

@djdameln djdameln changed the base branch from development to feature/visualization-inference-refactor May 25, 2022 14:52
@samet-akcay samet-akcay merged commit ef4190d into feature/visualization-inference-refactor May 25, 2022
@samet-akcay samet-akcay deleted the da/inference-visualization branch May 25, 2022 14:58
shakib-root pushed a commit to shakib-root/anomalib that referenced this pull request Jun 2, 2022
* add methods for adding labels to image

* refactor visualizer

* move add_label to post processing helpers

* fix tests

* support single image in visualizer

* call generate

* revert default task type

* typing

* refactor inferencer and visualizer

* add torch inference entrypoint

* revert torch inferencer for backward compatibility

* add torch inferencer to init files

* pause visualization

* openvino.py -> openvino_inference.py

* address comments

* remove unused argument

* remove deprecation warning
samet-akcay pushed a commit that referenced this pull request Jul 1, 2022
* Visualizer improvements Part II (#298)

* (the individual #298 commit messages, identical to the list above)

* 📜 Add Inference Documentation (#308)

* add inference and export guides

* finish sentence

* fix typo

* add to index, change required field

* 🚜 Visualizer refactor pt3 (#374)

* make saving and showing of visualizations configurable

* make saving and showing of visualizations configurable

* fix visualizer callback tests

* remove unused imports

* fix openvino test

* add comments

* Visualizer refactor pt4 (#379)

* make visualization mode configurable

* change default visualization mode

* value checks

* check mask range, move save and show to visualizer

* fix visualizer tests

* change pred_mask value check

* address circular import

* address mypy issue

* fix visualizer test and mypy

* fix openvino tests

* fix typo

* add license header

* add new inference parameters to guide

* refactor inference entrypoint structure

* fix visualizer in cli

* remove param from kwargs

* fix visualizer args in notebook

* Fix torchvision version (#397)

* 🛠 Fix torchvision version (#397)

Co-authored-by: Ashwin Vaidya <ashwin.vaidya@intel.com>