Bump up pytorch-lightning version to 1.6.0 or higher #193

Merged Apr 8, 2022 (17 commits)

Commits
- `8ccf8f6` bump up pytorch-lightning version (samet-akcay, Apr 4, 2022)
- `1c5203d` bump up to 1.6.0 or higher (samet-akcay, Apr 4, 2022)
- `0011e16` Merge branch 'development' of github.com:openvinotoolkit/anomalib int… (samet-akcay, Apr 5, 2022)
- `6913bf4` 🛠 Fix typo. (samet-akcay, Apr 5, 2022)
- `a5f981a` Merge branch 'development' of github.com:openvinotoolkit/anomalib int… (samet-akcay, Apr 5, 2022)
- `de21af5` 🛠 Fix mypy issues. (samet-akcay, Apr 5, 2022)
- `cef5fb8` Merge branch 'development' of github.com:openvinotoolkit/anomalib int… (samet-akcay, Apr 6, 2022)
- `403ce71` 🔄 Switch soft permutation to false by default since this radically… (samet-akcay, Apr 6, 2022)
- `f82f16b` Merge branch 'fix/sa/cflow-switch-to-hard-permutation-by-default' of … (samet-akcay, Apr 6, 2022)
- `b8ff5d7` Merge branch 'development' of github.com:openvinotoolkit/anomalib int… (samet-akcay, Apr 6, 2022)
- `6ea7ff4` 🗑 Remove `self.automatic_optimization=False` from dfkde to automatica… (samet-akcay, Apr 8, 2022)
- `9c70375` 🗑 Remove `self.automatic_optimization=False` from dfm to automaticall… (samet-akcay, Apr 8, 2022)
- `70da46b` 🗑 Remove `self.automatic_optimization=False` from padim to automatica… (samet-akcay, Apr 8, 2022)
- `9e865f4` 🗑 Remove `self.automatic_optimization=False` from patchcore to automa… (samet-akcay, Apr 8, 2022)
- `07e4111` 🔄 replace `checkpoint_callback` with `enable_checkpointing` (samet-akcay, Apr 8, 2022)
- `dc0f9d3` ✏️ Edit `check_val_every_n_epoch` from 2 ➡️ 1 to save the weights (samet-akcay, Apr 8, 2022)
- `c6f0662` ✏️ Set `check_val_every_n_epoch` to 1 for `fast_run`. (samet-akcay, Apr 8, 2022)
2 changes: 1 addition & 1 deletion anomalib/models/cflow/config.yaml
@@ -54,9 +54,9 @@ trainer:
   auto_select_gpus: false
   benchmark: false
   check_val_every_n_epoch: 1
-  checkpoint_callback: true
   default_root_dir: null
   deterministic: false
+  enable_checkpointing: true
   fast_dev_run: false
   gpus: 1
   gradient_clip_val: 0
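As background for the `checkpoint_callback` → `enable_checkpointing` change repeated across the configs below: PyTorch Lightning 1.6 deprecates the boolean `checkpoint_callback` Trainer argument in favour of `enable_checkpointing`, so configs that get unpacked into `Trainer(**config.trainer)` need the new key. A minimal sketch of the rename (values here are illustrative):

```python
from pytorch_lightning import Trainer

# PL <= 1.5 accepted Trainer(checkpoint_callback=True); from 1.6 that
# argument is deprecated, and enable_checkpointing is the supported flag.
trainer = Trainer(enable_checkpointing=True, max_epochs=1)
```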
2 changes: 1 addition & 1 deletion anomalib/models/dfkde/config.yaml
@@ -41,9 +41,9 @@ trainer:
   auto_select_gpus: false
   benchmark: false
   check_val_every_n_epoch: 1 # Don't validate before extracting features.
-  checkpoint_callback: true
   default_root_dir: null
   deterministic: false
+  enable_checkpointing: true
   fast_dev_run: false
   gpus: 1
   gradient_clip_val: 0
1 change: 0 additions & 1 deletion anomalib/models/dfkde/model.py
@@ -100,7 +100,6 @@ def __init__(self, hparams: Union[DictConfig, ListConfig]):
             hparams.model.backbone, hparams.model.max_training_points, threshold_steepness, threshold_offset
         )
 
-        self.automatic_optimization = False
         self.embeddings: List[Tensor] = []
 
     @staticmethod
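The `self.automatic_optimization = False` removals follow from the commit messages above: these models fit their anomaly model from extracted features rather than by gradient descent, and under PL 1.6 it is enough to return `None` from `configure_optimizers`. A minimal sketch of the pattern, with hypothetical names rather than anomalib's actual classes:

```python
from typing import List

import pytorch_lightning as pl
from torch import Tensor


class FeatureCollector(pl.LightningModule):
    """Hypothetical module that fits a model from features, not gradients."""

    def __init__(self) -> None:
        super().__init__()
        self.embeddings: List[Tensor] = []

    def configure_optimizers(self) -> None:
        # No optimizer: Lightning 1.6 runs the loop without an optimization
        # step, so automatic_optimization no longer needs to be disabled.
        return None

    def training_step(self, batch, batch_idx) -> None:
        # Only collect features; returning None skips the backward pass.
        # The batch layout (a dict with an "image" key) is an assumption.
        self.embeddings.append(batch["image"].flatten(1))
```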
2 changes: 1 addition & 1 deletion anomalib/models/dfm/config.yaml
@@ -40,9 +40,9 @@ trainer:
   auto_select_gpus: false
   benchmark: false
   check_val_every_n_epoch: 1 # Don't validate before extracting features.
-  checkpoint_callback: true
   default_root_dir: null
   deterministic: false
+  enable_checkpointing: true
   fast_dev_run: false
   gpus: 1
   gradient_clip_val: 0
1 change: 0 additions & 1 deletion anomalib/models/dfm/model.py
@@ -34,7 +34,6 @@ def __init__(self, hparams: Union[DictConfig, ListConfig]):
         self.model: DFMModel = DFMModel(
             backbone=hparams.model.backbone, n_comps=hparams.model.pca_level, score_type=hparams.model.score_type
         )
-        self.automatic_optimization = False
         self.embeddings: List[Tensor] = []
 
     @staticmethod
2 changes: 1 addition & 1 deletion anomalib/models/ganomaly/config.yaml
@@ -61,9 +61,9 @@ trainer:
   auto_select_gpus: false
   benchmark: false
   check_val_every_n_epoch: 2
-  checkpoint_callback: true
   default_root_dir: null
   deterministic: false
+  enable_checkpointing: true
   fast_dev_run: false
   gpus: 1
   gradient_clip_val: 0
2 changes: 1 addition & 1 deletion anomalib/models/padim/config.yaml
@@ -54,9 +54,9 @@ trainer:
   auto_select_gpus: false
   benchmark: false
   check_val_every_n_epoch: 1 # Don't validate before extracting features.
-  checkpoint_callback: true
   default_root_dir: null
   deterministic: false
+  enable_checkpointing: true
   fast_dev_run: false
   gpus: 1
   gradient_clip_val: 0
1 change: 0 additions & 1 deletion anomalib/models/padim/model.py
@@ -294,7 +294,6 @@ def __init__(self, hparams: Union[DictConfig, ListConfig]):
         ).eval()
 
         self.stats: List[Tensor] = []
-        self.automatic_optimization = False
         self.embeddings: List[Tensor] = []
 
     @staticmethod
2 changes: 1 addition & 1 deletion anomalib/models/patchcore/config.yaml
@@ -52,9 +52,9 @@ trainer:
   auto_select_gpus: false
   benchmark: false
   check_val_every_n_epoch: 1 # Don't validate before extracting features.
-  checkpoint_callback: true
   default_root_dir: null
   deterministic: false
+  enable_checkpointing: true
   fast_dev_run: false
   gpus: 1
   gradient_clip_val: 0
1 change: 0 additions & 1 deletion anomalib/models/patchcore/model.py
@@ -275,7 +275,6 @@ def __init__(self, hparams) -> None:
             backbone=hparams.model.backbone,
             apply_tiling=hparams.dataset.tiling.apply,
         )
-        self.automatic_optimization = False
         self.embeddings: List[Tensor] = []
 
     def configure_optimizers(self) -> None:
2 changes: 1 addition & 1 deletion anomalib/models/stfpm/config.yaml
@@ -61,9 +61,9 @@ trainer:
   auto_select_gpus: false
   benchmark: false
   check_val_every_n_epoch: 1
-  checkpoint_callback: true
   default_root_dir: null
   deterministic: false
+  enable_checkpointing: true
   fast_dev_run: false
   gpus: 1
   gradient_clip_val: 0
3 changes: 2 additions & 1 deletion anomalib/utils/callbacks/visualizer_callback.py
@@ -28,6 +28,7 @@
 from anomalib.pre_processing.transforms import Denormalize
 from anomalib.utils import loggers
 from anomalib.utils.loggers import AnomalibWandbLogger
+from anomalib.utils.loggers.base import ImageLoggerBase
 
 
 class VisualizerCallback(Callback):
@@ -68,7 +69,7 @@ def _add_images(
         for log_to in module.hparams.project.log_images_to:
             if log_to in loggers.AVAILABLE_LOGGERS:
                 # check if logger object is same as the requested object
-                if log_to in logger_type and module.logger is not None:
+                if log_to in logger_type and module.logger is not None and isinstance(module.logger, ImageLoggerBase):
Review thread on the line above:

Contributor: Can you explain why this is needed?

samet-akcay (author): It is to silence mypy. Some typings have changed in PL 1.6, causing some mypy issues.

Contributor: Alright, so no functional changes to the visualizer callback then.

                     module.logger.add_image(
                         image=visualizer.figure,
                         name=filename.parent.name + "_" + filename.name,
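To illustrate the mypy point from the thread above: the added `isinstance` check narrows `module.logger` from Lightning's general logger type to one known to expose `add_image`. A self-contained sketch of the narrowing pattern (names are illustrative, not anomalib's real classes):

```python
from typing import Optional


class ImageLoggerBase:
    """Stand-in for a logger base class that defines add_image."""

    def add_image(self, image: object, name: str) -> None:
        print(f"logged {name}")


def add_images(logger: Optional[object], figure: object, name: str) -> None:
    # Without the isinstance check, mypy rejects `.add_image` because the
    # declared type of `logger` does not define it; the check narrows the
    # type for the type checker and is a no-op at runtime for loggers
    # that already subclass ImageLoggerBase.
    if logger is not None and isinstance(logger, ImageLoggerBase):
        logger.add_image(image=figure, name=name)


add_images(ImageLoggerBase(), figure=None, name="anomaly_map")
```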
2 changes: 1 addition & 1 deletion anomalib/utils/loggers/wandb.py
@@ -86,7 +86,7 @@ def __init__(
         anonymous: Optional[bool] = None,
         version: Optional[str] = None,
         project: Optional[str] = None,
-        log_model: Optional[bool] = False,
+        log_model: Union[str, bool] = False,
         experiment=None,
         prefix: Optional[str] = "",
         sync_step: Optional[bool] = None,
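This signature change presumably tracks the upstream PL 1.6 `WandbLogger`, where `log_model` accepts a string as well as a bool: `"all"` uploads every checkpoint as it is saved, while `True` uploads only the final one. A hedged usage sketch:

```python
from pytorch_lightning.loggers import WandbLogger

# log_model=True  -> upload the checkpoint once training ends
# log_model="all" -> upload every checkpoint as it is saved
logger = WandbLogger(project="anomalib", log_model="all")
```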
2 changes: 1 addition & 1 deletion requirements/base.txt
@@ -13,7 +13,7 @@ opencv-python>=4.5.3.56
 opencv-contrib-python==4.5.5.62
 pandas~=1.1.5
 pillow==9.0.0
-pytorch-lightning==1.5.9
+pytorch-lightning>=1.6.0
 torch==1.8.1
 torchvision==0.9.1
 torchtext==0.9.1
1 change: 1 addition & 0 deletions tests/helpers/model.py
@@ -109,6 +109,7 @@ def setup_model_train(
     # Train the model.
     if fast_run:
         config.trainer.max_epochs = 1
+        config.trainer.check_val_every_n_epoch = 1
 
     trainer = Trainer(callbacks=callbacks, **config.trainer)
     trainer.fit(model=model, datamodule=datamodule)
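The added line matters because some configs (e.g. ganomaly above) keep `check_val_every_n_epoch: 2`: with `max_epochs = 1`, validation, and the checkpointing that runs with it, would never trigger in a fast run. Pinning the interval to 1 guarantees one validation pass. A minimal illustration of the resulting Trainer flags:

```python
from pytorch_lightning import Trainer

# With max_epochs=1 and check_val_every_n_epoch=2, the validation loop
# (and ModelCheckpoint, which saves on validation) never runs. Pinning
# the interval to 1 ensures weights are saved even in one-epoch runs.
trainer = Trainer(max_epochs=1, check_val_every_n_epoch=1, enable_checkpointing=True)
```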