(WIP) Visualizer improvements pt2 #298
Conversation
This is a nice modification. I am looking forward to the final inferencer design.
tools/openvino.py
Outdated
""" | ||
parser = ArgumentParser() | ||
# --model_config_path will be deprecated in 0.2.8 and removed in 0.2.9 | ||
parser.add_argument("--model_config_path", type=str, required=False, help="Path to a model config file") |
Since this is a new file, I feel we should drop deprecated flags.
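For example, the parser could keep only the current flags (a sketch based on the arguments visible elsewhere in this PR, not the final code):

```python
from argparse import ArgumentParser
from pathlib import Path


def get_args():
    """Build the CLI parser without the deprecated --model_config_path flag."""
    parser = ArgumentParser()
    parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
    parser.add_argument("--image_path", type=Path, required=True, help="Path to an image to infer.")
    parser.add_argument("--save_path", type=Path, required=False, help="Path to save the output image.")
    parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.")
    return parser.parse_args()
```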
tools/openvino.py
Outdated
parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights") | ||
parser.add_argument("--image_path", type=Path, required=True, help="Path to an image to infer.") | ||
parser.add_argument("--save_path", type=Path, required=False, help="Path to save the output image.") | ||
parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.") |
We need the metadata to load the threshold for OpenVINO models, so `required` should be set to `True`.
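Concretely, the argument from the snippet above would become:

```python
parser.add_argument("--meta_data", type=Path, required=True, help="Path to JSON file containing the metadata.")
```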
tools/openvino.py
Outdated
```python
    )

    args = parser.parse_args()
    if args.model_config_path is not None:
```
Same here. We don't need deprecation warnings in a new file.
```python
            _dataloader_idx (int): Index of the dataloader that yielded the current batch (unused).
        """
        for visualizer in self.generate_visualizer(outputs):
            visualizer.show()
```
Since this is work in progress I am assuming the final design will not always show the results by default.
This requires substantial changes to the visualizer callback class, so I'm planning to address that in part 3 of the refactor. That one will focus on customizability of the visualizations. This means that if we merge this PR, we will temporarily always show the visualizations during inference. If we don't want this, I can re-target this PR to a feature branch to which I'll then also target pt3.
```diff
@@ -96,6 +95,60 @@ def _add_images(
         if "local" in module.hparams.project.log_images_to:
             visualizer.save(Path(module.hparams.project.path) / "images" / filename.parent.name / filename.name)

     def generate_visualizer(self, outputs):
```
This is a nice use of yield. I think we should also add a return type to the method. Something like -> Iterator[Visualizer]
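Something like this, perhaps (the import path and body here are just illustrative, not the actual implementation):

```python
from typing import Iterator

from anomalib.utils.visualizer import Visualizer  # hypothetical import path


def generate_visualizer(self, outputs) -> Iterator[Visualizer]:
    """Yield a Visualizer object for each image in the predict batch."""
    for image_idx in range(outputs["image"].size(0)):
        visualizer = Visualizer()
        # ... add the original image, anomaly map and prediction for image_idx ...
        yield visualizer
```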
anomalib/deploy/inferencers/torch.py
Outdated
```python
    warnings.warn(
        "Torch Inferencer is deprecated and will be removed in a future version. To run Inference with a PyTorch "
        "model, please follow the example in `tools/inference.py`"
    )
```
Maybe we could decide on and specify when we will deprecate it. Note that we aim to release a new version every sprint.
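For instance, the warning could pin a concrete removal version (the `v0.2.9` below just mirrors the removal target mentioned for the deprecated CLI flag above; it is illustrative, not decided):

```python
import warnings

warnings.warn(
    "TorchInferencer is deprecated and will be removed in v0.2.9. To run inference with a PyTorch "
    "model, please follow the example in `tools/inference.py`",
    DeprecationWarning,
)
```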
Actually, I'm not entirely sure if we should deprecate this file at all. There might be some use cases where users want to perform pure pytorch inference. What do you think @samet-akcay @ashwinvaidya17?
Do we support pure PyTorch inference? If I remember correctly, we still load the model config file for PyTorch models and load the Lightning module.
Hmm yeah, we still use the PL module in this (the old) torch inferencer. In that case, maybe we could refactor the (pure) torch inferencer to load the model from a `.pth` file and only use the classes in `torch_model.py`.
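Something along these lines, perhaps (a rough sketch; the loading helper and checkpoint layout are assumptions, not the actual anomalib API):

```python
from pathlib import Path

import torch


def load_torch_model(weight_path: Path, model: torch.nn.Module) -> torch.nn.Module:
    """Load plain PyTorch weights from a .pth file, bypassing the Lightning module."""
    state_dict = torch.load(weight_path, map_location="cpu")
    model.load_state_dict(state_dict)
    model.eval()
    return model
```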
I feel we should support the possibility of pure PyTorch inference, in case some users might need it. The default could still be the new PL inferencer.
If we decide to keep the torch implementation, this would then need to be removed.
tools/openvino.py
Outdated
```python
def add_label(prediction: np.ndarray, scores: float, font: int = cv2.FONT_HERSHEY_PLAIN) -> np.ndarray:
    """If the model outputs score, it adds the score to the output image.

    Args:
        prediction (np.ndarray): Resized anomaly map.
        scores (float): Confidence score.

    Returns:
        np.ndarray: Image with score text.
    """
    text = f"Confidence Score {scores:.0%}"
    font_size = prediction.shape[1] // 1024 + 1  # Text scale is calculated based on the reference size of 1024
    (width, height), baseline = cv2.getTextSize(text, font, font_size, thickness=font_size // 2)
    label_patch = np.zeros((height + baseline, width + baseline, 3), dtype=np.uint8)
    label_patch[:, :] = (225, 252, 134)
    cv2.putText(label_patch, text, (0, baseline // 2 + height), font, font_size, 0, lineType=cv2.LINE_AA)
    prediction[: baseline + height, : baseline + width] = label_patch
    return prediction
```
Does this address PR #291?
tools/openvino.py
Outdated
```python
module = import_module("anomalib.deploy.inferencers.openvino")
OpenVINOInferencer = getattr(module, "OpenVINOInferencer")  # pylint: disable=invalid-name
```
Is this still needed? Wouldn't it be possible to just import `OpenVINOInferencer` at the top of the module?
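i.e. replacing the `import_module`/`getattr` indirection with a plain top-level import:

```python
from anomalib.deploy.inferencers.openvino import OpenVINOInferencer
```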
tools/openvino.py
Outdated
```python
if args.image_path.is_dir():
    # Write the output to save_path in the same structure as the input directory.
    for image in args.image_path.glob("**/*"):
        if image.is_file() and image.suffix in (".jpg", ".png", ".jpeg"):
```
Can we use `IMG_EXTENSIONS` from torchvision?

```python
from torchvision.datasets.folder import IMG_EXTENSIONS
```
The reason being that there could be other image extensions we could actually support, but would exclude with `(".jpg", ".png", ".jpeg")`.
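For example, the directory loop above could check against torchvision's list instead (a sketch; `infer` is a hypothetical per-image call):

```python
from torchvision.datasets.folder import IMG_EXTENSIONS

if args.image_path.is_dir():
    # Write the output to save_path in the same structure as the input directory.
    for image in args.image_path.glob("**/*"):
        # IMG_EXTENSIONS holds lowercase suffixes such as ".bmp", ".tiff" and ".webp"
        if image.is_file() and image.suffix.lower() in IMG_EXTENSIONS:
            infer(image)  # hypothetical per-image inference call
```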
- add methods for adding labels to image
- refactor visualizer
- move add_label to post processing helpers
- fix tests
- support single image in visualizer
- call generate
- revert default task type
- typing
- refactor inferencer and visualizer
- add torch inference entrypoint
- revert torch inferencer for backward compatibility
- add torch inferencer to init files
- pause visualization
- openvino.py -> openvino_inference.py
- address comments
- remove unused argument
- remove deprecation warning
- Visualizer improvements Part II (#298)
  - add methods for adding labels to image
  - refactor visualizer
  - move add_label to post processing helpers
  - fix tests
  - support single image in visualizer
  - call generate
  - revert default task type
  - typing
  - refactor inferencer and visualizer
  - add torch inference entrypoint
  - revert torch inferencer for backward compatibility
  - add torch inferencer to init files
  - pause visualization
  - openvino.py -> openvino_inference.py
  - address comments
  - remove unused argument
  - remove deprecation warning
- 📜 Add Inference Documentation (#308)
  - add inference and export guides
  - finish sentence
  - fix typo
  - add to index, change required field
- 🚜 Visualizer refactor pt3 (#374)
  - make saving and showing of visualizations configurable
  - make saving and showing of visualizations configurable
  - fix visualizer callback tests
  - remove unused imports
  - fix openvino test
  - add comments
- Visualizer refactor pt4 (#379)
  - make visualization mode configurable
  - change default visualization mode
  - value checks
  - check mask range, move save and show to visualizer
  - fix visualizer tests
  - change pred_mask value check
  - address circular import
  - address mypy issue
  - fix visualizer test and mypy
  - fix openvino tests
  - fix typo
  - add license header
  - add new inference parameters to guide
  - refactor inference entrypoint structure
  - fix visualizer in cli
  - remove param from kwargs
  - fix visualizer args in notebook
- Fix torchvision version (#397)
- 🛠 Fix torchvision version (#397)

Co-authored-by: Ashwin Vaidya <ashwin.vaidya@intel.com>
Description
(WIP) Second part of the visualizer refactor. This PR mainly focuses on the inferencer. The torch inferencer was simplified to use PyTorch Lightning. The visualizer callback was refactored to also generate and show the visualization results after each predict batch. OpenVINO inference was moved to a separate file, `tools/openvino.py`.

Addresses #317, #294, #326
Known issues:
- Currently the visualization results cannot be customized (which types of images should be shown).
- Visualization results are only shown, not saved to the file system.