
Add metric visualizations #429

Merged: 23 commits, Jul 13, 2022

Conversation

@ORippler (Contributor) commented Jul 11, 2022

Description

This PR:

  • Adds 2 additional metrics:
    • AUPR
    • AUPRO
  • Adds plots for all metrics by refactoring VisualizerCallback into VisualizerCallbackBase and VisualizerCallbackImage, and by introducing a new VisualizerCallbackMetric class (a minimal sketch of the resulting layout follows below)
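
For orientation, a minimal sketch of the callback layout described above (responsibilities only; constructor arguments and method bodies are omitted, and the docstring wording is illustrative rather than taken from the PR):

from pytorch_lightning import Callback


class VisualizerCallbackBase(Callback):
    """Shared plumbing for saving figures to disk and/or forwarding them to the trainer's loggers."""


class VisualizerCallbackImage(VisualizerCallbackBase):
    """Per-sample prediction visualizations, i.e. what the previous VisualizerCallback did."""


class VisualizerCallbackMetric(VisualizerCallbackBase):
    """New: saves one figure per metric that exposes a generate_figure() method."""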

Changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes

* `VisualizerCallback` is refactored into a base `VisualizerCallbackBase` class, which is used to construct `VisualizerCallbackImage` via inheritance.
* Fix a bug in `VisualizerCallback.on_test_end`, where `pl_module.logger` was accessed instead of iterating over `trainer.loggers` (wandb errored silently before, if I recall correctly).
* A skeleton for `VisualizerCallbackMetric` is added.
Comment on lines 104 to 106
for logger in trainer.loggers:
    if isinstance(logger, AnomalibWandbLogger):
        logger.save()
@ORippler (Contributor, Author) commented:

Unsure whether it's "bad practice" to call .save on an AnomalibWandbLogger multiple times, as is done currently. Any insights?

@ashwinvaidya17 (Collaborator) replied Jul 11, 2022:

It has been a while since I have looked into AnomalibWandbLogger, but ideally it should only be called once at the very end. This is explained in the comment above: "This is because logging as a single batch ensures that all images appear as part of the same step." If it is being called multiple times, then my guess is that the isinstance check falls back to the base class, and the logger probably always ends up counting as an instance of AnomalibWandbLogger (of LightningLogger, more specifically). Maybe a good idea would be to use type(), but mypy or pylint might complain.
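
For reference, the difference between the two checks being discussed, shown on hypothetical stand-in classes (not anomalib's actual logger hierarchy):

class BaseLogger:
    ...


class WandbLikeLogger(BaseLogger):
    ...


loggers = [WandbLikeLogger(), BaseLogger()]

# isinstance() matches the class and all of its subclasses ...
print([isinstance(logger, BaseLogger) for logger in loggers])   # [True, True]

# ... whereas an exact type() comparison matches only the concrete class.
print([type(logger) is WandbLikeLogger for logger in loggers])  # [True, False]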

@ORippler (Contributor, Author) replied Jul 12, 2022:

I opted for clearing AnomalibWandbLogger.image_list on save. Thus, all predictions land in the first step and all metrics in the second step (though still under the same namespace).
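
A minimal sketch of that approach, assuming a wandb logger that buffers images in an image_list attribute; the class name and the add_image method are illustrative, not the actual AnomalibWandbLogger implementation:

import wandb
from pytorch_lightning.loggers import WandbLogger


class BufferedWandbLogger(WandbLogger):
    """Buffer images and flush them as a single wandb step on save()."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.image_list = []

    def add_image(self, image, caption=None):
        self.image_list.append(wandb.Image(image, caption=caption))

    def save(self):
        super().save()
        if self.image_list:
            # Logging the whole buffer in one call keeps the images in the same step.
            self.experiment.log({"Predictions": self.image_list})
            # Clearing the buffer means a later save() (e.g. for metric plots) starts a new step.
            self.image_list = []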

@samet-akcay (Contributor) commented:

@ORippler, thanks a lot for your contribution! Just so you know, the code quality checks have failed for a number of reasons:

  1. anomalib/utils/cli/cli.py:21:0: E0611: No name 'VisualizerCallback' in module 'anomalib.utils.callbacks' (no-name-in-module)
  2. notebooks/200_models/201_fastflow.ipynb:cell_1:17:0: E0611: No name 'VisualizerCallback' in module 'anomalib.utils.callbacks' (no-name-in-module)

For more details, you could refer to this.

Will you be able to address this?

@ORippler (Contributor, Author) replied:

> @ORippler, thanks a lot for your contribution! Just so you know, the code quality checks have failed for a number of reasons:
>
> 1. anomalib/utils/cli/cli.py:21:0: E0611: No name 'VisualizerCallback' in module 'anomalib.utils.callbacks' (no-name-in-module)
>
> 2. notebooks/200_models/201_fastflow.ipynb:cell_1:17:0: E0611: No name 'VisualizerCallback' in module 'anomalib.utils.callbacks' (no-name-in-module)
>
> For more details, you could refer to this.
>
> Will you be able to address this?

Sure, once I get all the functionality implemented.

Otherwise, matplotlib starts to complain and objects aren't garbage-collected properly.
`Visualizer` is needed by all Callbacks for writing images to disk.
Every metric that should be visualized needs to implement its own `generate_figure` function, and the resulting plot is then saved by `VisualizerCallbackMetric`.
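
To make that contract concrete, here is a minimal sketch of a metric exposing `generate_figure`; the state layout, the histogram plot, and the (figure, title) return value are illustrative assumptions rather than the PR's actual metrics:

import torch
from matplotlib.figure import Figure
from torchmetrics import Metric
from torchmetrics.utilities.data import dim_zero_cat


class PlottableScoreMetric(Metric):
    """Buffers anomaly scores and can render them as a figure for VisualizerCallbackMetric."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.add_state("preds", default=[], dist_reduce_fx="cat")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        self.preds.append(preds.flatten())

    def compute(self) -> torch.Tensor:
        # Placeholder aggregation; a real metric would compute AUROC/AUPR/AUPRO here.
        return dim_zero_cat(self.preds).mean()

    def generate_figure(self) -> tuple[Figure, str]:
        """Return the figure to log together with a title."""
        fig = Figure()
        ax = fig.add_subplot()
        ax.hist(dim_zero_cat(self.preds).cpu().numpy(), bins=50)
        ax.set_xlabel("anomaly score")
        ax.set_ylabel("count")
        return fig, "score_distribution"
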
* Needed to change the signature of `AnomalyModule._collect_outputs` for pixel-wise metrics, as we need the spatial dimensions for connected-component analysis in AUPRO.
* Add AUPRO, which uses kornia and the fact that per-region overlap == per-region TPR for fast AUPRO computation.
* Updated docstrings for AUPR/AUROC.
Due to bugs in the CLI, the feature could not be tested well.
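
A rough, single-threshold sketch of the per-region-overlap idea referenced above, using kornia.contrib.connected_components to label ground-truth regions (illustrative only; the PR computes the full PRO curve and its area):

import torch
from kornia.contrib import connected_components


def per_region_tpr(anomaly_maps: torch.Tensor, masks: torch.Tensor, threshold: float) -> torch.Tensor:
    """Mean per-region TPR (== per-region overlap) at a single threshold.

    anomaly_maps and masks have shape (B, 1, H, W); masks hold binary ground truth.
    """
    # Label each connected ground-truth region; background pixels stay 0.
    labels = connected_components(masks.float(), num_iterations=100)
    preds = anomaly_maps >= threshold

    tprs = []
    for region_id in labels.unique():
        if region_id == 0:
            continue  # skip background
        region = labels == region_id
        # Fraction of this region's pixels that the prediction covers.
        tprs.append((preds & region).sum() / region.sum())
    if not tprs:
        return torch.tensor(0.0)
    return torch.stack(tprs).mean()
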
@ORippler marked this pull request as ready for review on July 12, 2022, 15:13.
@ORippler (Contributor, Author) commented Jul 12, 2022:

@samet-akcay

I got the core functionality implemented and tested it locally so far. Some points from my side:

  • While I confirmed that metric images are saved to TensorBoard via the following snippet:
import cv2
import numpy as np
from matplotlib import pyplot as plt
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Load the event file(s) written by the TensorBoard logger.
acc = EventAccumulator("PATH")
acc.Reload()

# Decode and display every logged image whose tag belongs to the image- or pixel-level metrics.
for tag in acc.Tags()["images"]:
    if "image_" in tag or "pixel_" in tag:
        events = acc.Images(tag)

        for index, event in enumerate(events):
            s = np.frombuffer(event.encoded_image_string, dtype=np.uint8)
            image = cv2.imdecode(s, cv2.IMREAD_COLOR)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            plt.imshow(image)
            plt.show()

they don't necessarily all appear when I open TensorBoard on the folder.

  • I did not test the changes to the CLI, as I did not manage to call anomalib test after training (project.path is no longer in the namespace and cannot be provided via anomalib test --config --project.path; therefore, a new time-stamped folder is generated instead...).
  • Instead of testing for the presence of generate_figure, one could potentially create an AnomalibMetric class with an abstractmethod and let our own metrics inherit from that class (and subsequently test via isinstance(metric, AnomalibMetric)); see the sketch after this list.
  • The current implementation results in two calls to Metric._compute as opposed to the singular call prior. While this is not a bottleneck (at least not on an RTX 3090), one could nonetheless think of storing results from the first call and simply returning them instead.
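
A sketch of that alternative; the class name mirrors the suggestion above, while the method signature is an assumption:

from abc import ABC, abstractmethod
from matplotlib.figure import Figure
from torchmetrics import Metric


class AnomalibMetric(Metric, ABC):
    """Hypothetical base class: subclasses must implement generate_figure()."""

    @abstractmethod
    def generate_figure(self) -> tuple[Figure, str]:
        """Return the figure to be logged and its title."""


# The callback check could then become:
#     if isinstance(metric, AnomalibMetric):
#         figure, title = metric.generate_figure()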

@review-notebook-app (bot) commented: Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.

@ashwinvaidya17 (Collaborator) left a comment:

Thanks for the efforts!

metrics = getattr(pl_module, metric_type)
for metric in metrics.values():
    # `generate_figure` needs to be defined for every metric that should be plotted automatically
    if hasattr(metric, "generate_figure"):
A collaborator commented:

I agree, maybe having a base metric with generate_figure as an abstract method would be a cleaner approach. But we can address this in a different PR.

anomalib/utils/callbacks/visualizer_base.py (review thread outdated, resolved)
Since unique fpr/tpr curves are generated for each label, we cannot use fpr_index across calls to `roc`. Instead, we now bilinearly resample the generated PRO curves at `fpr <= self.fpr_limit` to fixed sampling points, and then aggregate over the resampled curve.
We follow a scheme similar to the ROC curve, where the PRO curve is composed of per-region TPR plotted against global FPR.
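
A simplified sketch of the resample-then-aggregate scheme described above; plain NumPy linear interpolation stands in for the bilinear resampling, so this is not the PR's implementation:

import numpy as np


def aupro_from_curve(fpr: np.ndarray, pro: np.ndarray, fpr_limit: float = 0.3, num_points: int = 1000) -> float:
    """Resample one (fpr, pro) curve onto a fixed grid up to fpr_limit and integrate it.

    fpr must be sorted in ascending order; pro holds the per-region TPR values at those FPRs.
    """
    grid = np.linspace(0.0, fpr_limit, num_points)
    # Resampling every label's curve onto the same grid makes the curves directly comparable,
    # so they can be aggregated (e.g. averaged) point-wise afterwards.
    pro_resampled = np.interp(grid, fpr, pro)
    # Area under the resampled curve, normalized by the FPR limit so the score stays in [0, 1].
    return float(np.trapz(pro_resampled, grid) / fpr_limit)
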
@ORippler (Contributor, Author) commented:

In addition to bugfixing the PRO calculation, I passed the mypy checks and clarified the wording.

Btw, mypy behaves differently when called with tox vs. via the pre-commit hook (which didn't throw the errors I now fixed), at least in the provided Dockerfile.

@samet-akcay (Contributor) replied:

> In addition to bugfixing the PRO calculation, I passed the mypy checks and clarified the wording.
>
> Btw, mypy behaves differently when called with tox vs. via the pre-commit hook (which didn't throw the errors I now fixed), at least in the provided Dockerfile.

Thanks for the update. We'll have a look at this.

@samet-akcay (Contributor) left a comment:

@ORippler, thanks again for such a contribution! I only have a single comment regarding the naming. Other than that, it looks great!

anomalib/utils/callbacks/__init__.py (review thread outdated, resolved)
We now prepend the last part of the name, i.e. `VisualizerCallbackBase` -> `BaseVisualizerCallback`.
@ORippler (Contributor, Author) commented:

Alright, so the test for ganomaly should pass now as well (there were a couple of bugs in the visualizer for the classification task, as well as in the config for ganomaly, which were unrelated to my PR).

@samet-akcay (Contributor) replied:

> Alright, so the test for ganomaly should pass now as well (there were a couple of bugs in the visualizer for the classification task, as well as in the config for ganomaly, which were unrelated to my PR).

Thanks, yeah, I also noticed the classification bug in the visualizer yesterday. Will fix it in a separate PR.

@samet-akcay (Contributor) left a comment:

Many thanks for your effort and valuable contribution!

@samet-akcay merged commit ba27019 into openvinotoolkit:development on Jul 13, 2022.

Successfully merging this pull request may close these issues:

Add plots and additional metrics
4 participants