✨ Create Anomalib CLI (#378)
* Refactored MVTec datamodule

* 🏷️  Rename BTech datamodule

* 🏷️  Rename Folder datamodule

* Create datamodule jupyter notebook

* Added mvtec into jupyter notebook

* Apply black formatter

* Finished MVTec

* 🏷 Rename the folder

* 🏷  Renamed anomaly-datamodule.ipynb to mvtec.ipynb

* ➕ Created BTech notebook

* 🚚 Move the main description from mvtec to README.md

* Polish btech jupyter notebook

* ➕ Created Folder jupyter notebook

* Address PR comments.

* Format the notebooks

* Added black, isort, flake8 and pylint

* ➕ Add mdformat to dev requirements

* 🛠 Update the notebook markdowns

* Configured pre-commit for notebooks

* ➕ Add DATAMODULE_REGISTRY to the datamodules.

* ➕ Added relative imports to __init__ modules.

* Register callbacks via @CALLBACK_REGISTRY

* ➕ Added missing callback imports

* ➕ Add AnomalibCLI

* ➕ Add AnomalibCLI

* ➕ Add `normalization_method` to `MetricsConfigurationCallback`

* ➕ Add padim config file

* 🛠 Fix pytorch-lightning version to add extra

* Update torchmetrics version requirement

* Add anomalib CLI entrypoint

* Remove --save_images flag from the cli

* Refactor trainer.py and create main() function

* Updated anomalib entrypoint. Removed trainer.py from cli

* Add logger to cli

* ➕ Add patchcore config

* ➕ Add Cflow CLI config

* 🚚  Move padim and patchcore configs to config directory

* Add cflow, dfkde, dfm, draem, fastflow, padim and patchcore

* ➕ Add ganomaly config

* add reverse distillation and stfpm configs

* Update notebooks

* Create the project directory only during training.

* 🏷 Renamed config directory to configs

* Fix the project directory creation logic.

* Add the training CLI command to the README.md

* 🛠  Fix incorrect statement.

* 🧹 Cleanup

* Fix v.4.0 to v.0.4.0

* Added more description to AnomalibCLI docstring.

* Added TODO comments to address later.

* Extracted methods to simplify `before_instantiate_classes` method
samet-akcay committed Jun 24, 2022
1 parent 93bfe84 commit ef62199
Showing 43 changed files with 1,478 additions and 89 deletions.
17 changes: 16 additions & 1 deletion README.md
@@ -69,6 +69,7 @@ pip install -e .
```

## Training
### ⚠️ Anomalib < v.0.4.0

By default [`python tools/train.py`](https://gitlab-icv.inn.intel.com/algo_rnd_team/anomaly/-/blob/development/train.py)
runs [PADIM](https://arxiv.org/abs/2011.08785) model on `leather` category from the [MVTec AD](https://www.mvtec.com/company/research/datasets/mvtec-ad) [(CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) dataset.
@@ -142,8 +143,22 @@ dataset:
use_random_tiling: False
random_tile_count: 16
```
## Inference

### ⚠️ Anomalib > v.0.4.0 Beta - Subject to Change
We introduce a new CLI approach built on [PyTorch Lightning CLI](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_cli.html). To train a model with the new CLI, run:
```bash
anomalib fit --config <path/to/new/config/file>
```

For instance, to train a [PatchCore](https://github.com/openvinotoolkit/anomalib/tree/development/anomalib/models/patchcore) model, run:
```bash
anomalib fit --config ./configs/model/patchcore.yaml
```

The new CLI approach offers much more flexibility; the details are explained in the [documentation](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_cli.html).
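
For a sense of how such an entrypoint is wired up, here is a minimal sketch built directly on Lightning's `LightningCLI`. It illustrates the mechanism only; it is not the actual `AnomalibCLI` implementation added in this commit, and the choice of `Padim` and `MVTec` as defaults is an assumption made for the example.

```python
# Minimal sketch of a LightningCLI-based entrypoint (illustrative only; the real
# AnomalibCLI in this commit adds anomalib-specific defaults and callbacks).
from pytorch_lightning.utilities.cli import LightningCLI

from anomalib.data import MVTec
from anomalib.models import Padim


def main() -> None:
    # Subcommands such as `fit` and the `--config <file>` option come from
    # LightningCLI itself; any constructor argument of the model, datamodule,
    # or Trainer can be overridden from the YAML config or the command line.
    LightningCLI(model_class=Padim, datamodule_class=MVTec)


if __name__ == "__main__":
    main()
```

Registering such a `main()` as a console-script entry point in `setup.py` is what exposes it as the `anomalib` command.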

## Inference
### ⚠️ Anomalib < v.0.4.0
Anomalib contains several tools that can be used to perform inference with a trained model. The script in [`tools/inference`](tools/inference.py) shows how these tools can be used to generate a prediction for an input image.

If the specified weight path points to a PyTorch Lightning checkpoint file (`.ckpt`), inference will run in PyTorch. If the path points to an ONNX graph (`.onnx`) or OpenVINO IR (`.bin` or `.xml`), inference will run in OpenVINO.
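
The weight-extension dispatch described above can be pictured with a short sketch (illustrative only; not the actual `tools/inference` implementation):

```python
# Hedged sketch of the extension-based backend selection described above
# (illustrative; not the actual tools/inference implementation).
from pathlib import Path


def select_inference_backend(weight_path: str) -> str:
    suffix = Path(weight_path).suffix.lower()
    if suffix == ".ckpt":
        return "pytorch"  # PyTorch Lightning checkpoint -> PyTorch inference
    if suffix in {".onnx", ".bin", ".xml"}:
        return "openvino"  # ONNX graph or OpenVINO IR -> OpenVINO inference
    raise ValueError(f"Unsupported weight file: {weight_path}")
```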
2 changes: 2 additions & 0 deletions anomalib/data/btech.py
@@ -33,6 +33,7 @@
import pandas as pd
from pandas.core.frame import DataFrame
from pytorch_lightning.core.datamodule import LightningDataModule
from pytorch_lightning.utilities.cli import DATAMODULE_REGISTRY
from pytorch_lightning.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from torch import Tensor
from torch.utils.data import DataLoader
@@ -257,6 +258,7 @@ def __getitem__(self, index: int) -> Dict[str, Union[str, Tensor]]:
return item


@DATAMODULE_REGISTRY
class BTech(LightningDataModule):
"""BTechDataModule Lightning Data Module."""

2 changes: 2 additions & 0 deletions anomalib/data/folder.py
@@ -27,6 +27,7 @@
import numpy as np
from pandas.core.frame import DataFrame
from pytorch_lightning.core.datamodule import LightningDataModule
from pytorch_lightning.utilities.cli import DATAMODULE_REGISTRY
from pytorch_lightning.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from torch import Tensor
from torch.utils.data import DataLoader, Dataset
@@ -301,6 +302,7 @@ def __getitem__(self, index: int) -> Dict[str, Union[str, Tensor]]:
return item


@DATAMODULE_REGISTRY
class Folder(LightningDataModule):
"""Folder Lightning Data Module."""

2 changes: 2 additions & 0 deletions anomalib/data/mvtec.py
@@ -50,6 +50,7 @@
import pandas as pd
from pandas.core.frame import DataFrame
from pytorch_lightning.core.datamodule import LightningDataModule
from pytorch_lightning.utilities.cli import DATAMODULE_REGISTRY
from pytorch_lightning.utilities.types import EVAL_DATALOADERS, TRAIN_DATALOADERS
from torch import Tensor
from torch.utils.data import DataLoader
@@ -280,6 +281,7 @@ def __getitem__(self, index: int) -> Dict[str, Union[str, Tensor]]:
return item


@DATAMODULE_REGISTRY
class MVTec(LightningDataModule):
"""MVTec AD Lightning Data Module."""

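
The `@DATAMODULE_REGISTRY` decoration added to `BTech`, `Folder`, and `MVTec` above is what lets the new CLI resolve datamodules by class name. A minimal sketch of the effect is shown below; it is an illustration rather than code from this commit, and it assumes the registry is dict-like and keyed by class name (as in the `pytorch_lightning.utilities.cli` registries of this era) and that importing `anomalib.data` triggers the registrations via the re-exports added here.

```python
# Hedged sketch: inspect which datamodules this commit registers with Lightning's
# DATAMODULE_REGISTRY. Assumes the registry is dict-like and keyed by class name.
from pytorch_lightning.utilities.cli import DATAMODULE_REGISTRY

import anomalib.data  # noqa: F401  # importing the package applies the @DATAMODULE_REGISTRY decorators

print(sorted(DATAMODULE_REGISTRY))  # expected to include "BTech", "Folder", and "MVTec"
```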
23 changes: 23 additions & 0 deletions anomalib/models/__init__.py
@@ -22,7 +22,30 @@
from omegaconf import DictConfig, ListConfig
from torch import load

from anomalib.models.cflow import Cflow
from anomalib.models.components import AnomalyModule
from anomalib.models.dfkde import Dfkde
from anomalib.models.dfm import Dfm
from anomalib.models.draem import Draem
from anomalib.models.fastflow import Fastflow
from anomalib.models.ganomaly import Ganomaly
from anomalib.models.padim import Padim
from anomalib.models.patchcore import Patchcore
from anomalib.models.reverse_distillation import ReverseDistillation
from anomalib.models.stfpm import Stfpm

__all__ = [
"Cflow",
"Dfkde",
"Dfm",
"Draem",
"Fastflow",
"Ganomaly",
"Padim",
"Patchcore",
"ReverseDistillation",
"Stfpm",
]

logger = logging.getLogger(__name__)

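
The re-exports above give every model a short, CLI-friendly class name under `anomalib.models`. As a quick sanity sketch (assuming an environment with this commit installed; not part of the commit itself), each exported class derives from `AnomalyModule`, which is what lets a config select any model by class path:

```python
# Hedged sketch: the short-named model classes exported above all derive from
# AnomalyModule, so Lightning CLI configs can refer to any of them uniformly.
from anomalib.models import Cflow, Padim, Patchcore
from anomalib.models.components import AnomalyModule

for model_cls in (Cflow, Padim, Patchcore):
    # AnomalyModule is the shared base class re-exported in this same __init__.
    assert issubclass(model_cls, AnomalyModule), model_cls.__name__
```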
4 changes: 2 additions & 2 deletions anomalib/models/cflow/__init__.py
@@ -14,6 +14,6 @@
# See the License for the specific language governing permissions
# and limitations under the License.

from .lightning_model import CflowLightning
from .lightning_model import Cflow, CflowLightning

__all__ = ["CflowLightning"]
__all__ = ["Cflow", "CflowLightning"]
48 changes: 26 additions & 22 deletions anomalib/models/cflow/lightning_model.py
@@ -49,6 +49,7 @@ def __init__(
coupling_blocks: int = 8,
clamp_alpha: float = 1.9,
permute_soft: bool = False,
lr: float = 0.0001,
):
super().__init__()

@@ -64,6 +65,31 @@ def __init__(
permute_soft=permute_soft,
)
self.automatic_optimization = False
# TODO: LR should be part of optimizer in config.yaml! Since cflow has custom
# optimizer this is to be addressed later.
self.learning_rate = lr

def configure_optimizers(self) -> torch.optim.Optimizer:
"""Configures optimizers for each decoder.
Note:
This method is used for the existing CLI.
When PL CLI is introduced, configure optimizers method will be
deprecated, and optimizers will be configured from either
config.yaml file or from CLI.
Returns:
Optimizer: Adam optimizer for each decoder
"""
decoders_parameters = []
for decoder_idx in range(len(self.model.pool_layers)):
decoders_parameters.extend(list(self.model.decoders[decoder_idx].parameters()))

optimizer = optim.Adam(
params=decoders_parameters,
lr=self.learning_rate,
)
return optimizer

def training_step(self, batch, _): # pylint: disable=arguments-differ
"""Training Step of CFLOW.
@@ -193,25 +219,3 @@ def configure_callbacks(self):
mode=self.hparams.model.early_stopping.mode,
)
return [early_stopping]

def configure_optimizers(self) -> torch.optim.Optimizer:
"""Configures optimizers for each decoder.
Note:
This method is used for the existing CLI.
When PL CLI is introduced, configure optimizers method will be
deprecated, and optimizers will be configured from either
config.yaml file or from CLI.
Returns:
Optimizer: Adam optimizer for each decoder
"""
decoders_parameters = []
for decoder_idx in range(len(self.model.pool_layers)):
decoders_parameters.extend(list(self.model.decoders[decoder_idx].parameters()))

optimizer = optim.Adam(
params=decoders_parameters,
lr=self.hparams.model.lr,
)
return optimizer
4 changes: 2 additions & 2 deletions anomalib/models/dfkde/__init__.py
@@ -14,6 +14,6 @@
# See the License for the specific language governing permissions
# and limitations under the License.

from .lightning_model import DfkdeLightning
from .lightning_model import Dfkde, DfkdeLightning

__all__ = ["DfkdeLightning"]
__all__ = ["Dfkde", "DfkdeLightning"]
4 changes: 2 additions & 2 deletions anomalib/models/dfm/__init__.py
@@ -14,6 +14,6 @@
# See the License for the specific language governing permissions
# and limitations under the License.

from .lightning_model import DfmLightning
from .lightning_model import Dfm, DfmLightning

__all__ = ["DfmLightning"]
__all__ = ["Dfm", "DfmLightning"]
4 changes: 2 additions & 2 deletions anomalib/models/draem/__init__.py
@@ -3,6 +3,6 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from .lightning_model import DraemLightning
from .lightning_model import Draem, DraemLightning

__all__ = ["DraemLightning"]
__all__ = ["Draem", "DraemLightning"]
2 changes: 1 addition & 1 deletion anomalib/models/fastflow/__init__.py
@@ -8,4 +8,4 @@
from .loss import FastflowLoss
from .torch_model import FastflowModel

__all__ = ["FastflowModel", "FastflowLoss", "FastflowLightning", "Fastflow"]
__all__ = ["FastflowModel", "FastflowLoss", "Fastflow", "FastflowLightning"]
4 changes: 2 additions & 2 deletions anomalib/models/ganomaly/__init__.py
@@ -14,6 +14,6 @@
# See the License for the specific language governing permissions
# and limitations under the License.

from .lightning_model import GanomalyLightning
from .lightning_model import Ganomaly, GanomalyLightning

__all__ = ["GanomalyLightning"]
__all__ = ["Ganomaly", "GanomalyLightning"]
60 changes: 36 additions & 24 deletions anomalib/models/ganomaly/lightning_model.py
@@ -61,6 +61,9 @@ def __init__(
wadv: int = 1,
wcon: int = 50,
wenc: int = 1,
lr: float = 0.0002,
beta1: float = 0.5,
beta2: float = 0.999,
):
super().__init__()

@@ -82,11 +85,41 @@ def __init__(
self.generator_loss = GeneratorLoss(wadv, wcon, wenc)
self.discriminator_loss = DiscriminatorLoss()

# TODO: LR should be part of optimizer in config.yaml! Since ganomaly has custom
# optimizer this is to be addressed later.
self.learning_rate = lr
self.beta1 = beta1
self.beta2 = beta2

def _reset_min_max(self):
"""Resets min_max scores."""
self.min_scores = torch.tensor(float("inf"), dtype=torch.float32) # pylint: disable=not-callable
self.max_scores = torch.tensor(float("-inf"), dtype=torch.float32) # pylint: disable=not-callable

def configure_optimizers(self) -> List[optim.Optimizer]:
"""Configures optimizers for each decoder.
Note:
This method is used for the existing CLI.
When PL CLI is introduced, configure optimizers method will be
deprecated, and optimizers will be configured from either
config.yaml file or from CLI.
Returns:
Optimizer: Adam optimizer for each decoder
"""
optimizer_d = optim.Adam(
self.model.discriminator.parameters(),
lr=self.learning_rate,
betas=(self.beta1, self.beta2),
)
optimizer_g = optim.Adam(
self.model.generator.parameters(),
lr=self.learning_rate,
betas=(self.beta1, self.beta2),
)
return [optimizer_d, optimizer_g]

def training_step(self, batch, _, optimizer_idx): # pylint: disable=arguments-differ
"""Training step.
@@ -191,6 +224,9 @@ def __init__(self, hparams: Union[DictConfig, ListConfig]) -> None:
wadv=hparams.model.wadv,
wcon=hparams.model.wcon,
wenc=hparams.model.wenc,
lr=hparams.model.lr,
beta1=hparams.model.beta1,
beta2=hparams.model.beta2,
)
self.hparams: Union[DictConfig, ListConfig] # type: ignore
self.save_hyperparameters(hparams)
@@ -210,27 +246,3 @@ def configure_callbacks(self):
mode=self.hparams.model.early_stopping.mode,
)
return [early_stopping]

def configure_optimizers(self) -> List[optim.Optimizer]:
"""Configures optimizers for each decoder.
Note:
This method is used for the existing CLI.
When PL CLI is introduced, configure optimizers method will be
deprecated, and optimizers will be configured from either
config.yaml file or from CLI.
Returns:
Optimizer: Adam optimizer for each decoder
"""
optimizer_d = optim.Adam(
self.model.discriminator.parameters(),
lr=self.hparams.model.lr,
betas=(self.hparams.model.beta1, self.hparams.model.beta2),
)
optimizer_g = optim.Adam(
self.model.generator.parameters(),
lr=self.hparams.model.lr,
betas=(self.hparams.model.beta1, self.hparams.model.beta2),
)
return [optimizer_d, optimizer_g]
4 changes: 2 additions & 2 deletions anomalib/models/padim/__init__.py
@@ -14,6 +14,6 @@
# See the License for the specific language governing permissions
# and limitations under the License.

from .lightning_model import PadimLightning
from .lightning_model import Padim, PadimLightning

__all__ = ["PadimLightning"]
__all__ = ["Padim", "PadimLightning"]
4 changes: 2 additions & 2 deletions anomalib/models/patchcore/__init__.py
@@ -14,6 +14,6 @@
# See the License for the specific language governing permissions
# and limitations under the License.

from .lightning_model import PatchcoreLightning
from .lightning_model import Patchcore, PatchcoreLightning

__all__ = ["PatchcoreLightning"]
__all__ = ["Patchcore", "PatchcoreLightning"]
4 changes: 2 additions & 2 deletions anomalib/models/reverse_distillation/__init__.py
@@ -3,6 +3,6 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from .lightning_model import ReverseDistillationLightning
from .lightning_model import ReverseDistillation, ReverseDistillationLightning

__all__ = ["ReverseDistillationLightning"]
__all__ = ["ReverseDistillation", "ReverseDistillationLightning"]