
[Bug]: Issue with self.model_size Initialization in EfficientAD Class #2157

Closed · jrusbo opened this issue Jun 26, 2024 · 5 comments · Fixed by #2159
Labels
Bug Something isn't working

Comments


jrusbo commented Jun 26, 2024

Describe the bug

I encountered an issue while using the `EfficientAD` class in the Anomalib library. The `self.model_size` attribute is set to a string value (`"small"` or `"medium"`) during initialization. This leads to a problem when the `prepare_pretrained_model` function is called, specifically at the line where `self.model_size.value` is accessed, since a string object has no `value` attribute.
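For context, a minimal sketch of the failure mode (the class and method names follow this report; the bodies are illustrative, not the library's actual code):

```python
# Minimal sketch of the failure mode; bodies are illustrative stand-ins.
class EfficientAd:
    def __init__(self, model_size: str = "small") -> None:
        self.model_size = model_size  # stored as a plain string

    def prepare_pretrained_model(self) -> None:
        # Later code assumes an enum member and reads .value:
        print(f"pretrained_teacher_{self.model_size.value}.pth")


EfficientAd().prepare_pretrained_model()
# AttributeError: 'str' object has no attribute 'value'
```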

Dataset

Folder

Model

Other (please specify in the field below)

Steps to reproduce the behavior

  1. Create a configuration file where all model parameters are specified.
  2. Initialize the EfficientAD class using the `get_model` function inside `src/anomalib/models/__init__.py`.
  3. Call the fit function of the engine.

OS information

  • OS: Windows 11
  • Python version: 3.11
  • Anomalib version: 1.1.0

Expected behavior

`self.model_size` should be initialized as an instance of `EfficientAdModelSize` rather than a plain string. This will ensure that the `value` attribute is accessible.

I think it should look like this:

```python
self.model_size = EfficientAdModelSize(model_size)
```
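For illustration, a minimal sketch of why the cast restores `.value`, assuming `EfficientAdModelSize` is a str-backed `Enum` (the definition below is a simplified stand-in, not the library source):

```python
from enum import Enum


# Simplified stand-in for anomalib's EfficientAdModelSize.
class EfficientAdModelSize(str, Enum):
    M = "medium"
    S = "small"


model_size = EfficientAdModelSize("small")  # cast the YAML string to an enum member
print(model_size.value)                     # -> "small"; .value is accessible again
```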

### Screenshots

_No response_

### Pip/GitHub

pip

### What version/branch did you use?

_No response_

### Configuration YAML

```yaml
model:
  class_path: anomalib.models.EfficientAd
  init_args:
    imagenet_dir: datasets/imagenette
    teacher_out_channels: 384
    model_size: small # options: [small, medium]
    lr: 0.0001
    weight_decay: 0.00001
    padding: false
    pad_maps: true # relevant for "padding: false", see EfficientAd in lightning_model.py
```

Logs

File "src\anomaly_detection\train_anomalib.py", line 89, in train
    engine.fit(model=model, datamodule=datamodule)
  File "venv2\Lib\site-packages\anomalib\engine\engine.py", line 540, in fit
    self.trainer.fit(model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
  File "venv2\Lib\site-packages\mlflow\utils\autologging_utils\safety.py", line 576, in safe_patch_function
    patch_function(call_original, *args, **kwargs)
  File "venv2\Lib\site-packages\mlflow\utils\autologging_utils\safety.py", line 249, in patch_with_managed_run
    result = patch_function(original, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\mlflow\pytorch\_lightning_autolog.py", line 537, in patched_fit
    result = original(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\mlflow\utils\autologging_utils\safety.py", line 557, in call_original
    return call_original_fn_with_event_logging(_original_fn, og_args, og_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\mlflow\utils\autologging_utils\safety.py", line 492, in call_original_fn_with_event_logging
    original_fn_result = original_fn(*og_args, **og_kwargs)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\mlflow\utils\autologging_utils\safety.py", line 554, in _original_fn
    original_result = original(*_og_args, **_og_kwargs)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "venv2\Lib\site-packages\lightning\pytorch\trainer\call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "venv2\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 987, in _run
    results = self._run_stage()
              ^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\lightning\pytorch\trainer\trainer.py", line 1033, in _run_stage
    self.fit_loop.run()
  File "venv2\Lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 201, in run
    self.on_run_start()
  File "venv2\Lib\site-packages\lightning\pytorch\loops\fit_loop.py", line 328, in on_run_start
    call._call_lightning_module_hook(trainer, "on_train_start")
  File "venv2\Lib\site-packages\lightning\pytorch\trainer\call.py", line 157, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "venv2\Lib\site-packages\anomalib\models\image\efficient_ad\lightning_model.py", line 251, in on_train_start
    self.prepare_pretrained_model()
  File "venv2\Lib\site-packages\anomalib\models\image\efficient_ad\lightning_model.py", line 93, in prepare_pretrained_model
    pretrained_models_dir / "efficientad_pretrained_weights" / f"pretrained_teacher_{self.model_size.value}.pth"
                                                                                     ^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'value'
```

Code of Conduct

  • I agree to follow this project's Code of Conduct
Gornoka (Contributor) commented Jun 27, 2024

If I see it correctly, this change would fix the issue:

```python
model_size_str = self.model_size.value if isinstance(self.model_size, EfficientAdModelSize) else self.model_size
teacher_path = (
    pretrained_models_dir / "efficientad_pretrained_weights" / f"pretrained_teacher_{model_size_str}.pth"
)
```

I think it would be cleaner to cast the parameter into the enum on init; this would also make the type hinting more intuitive. See the sketch below.
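A sketch of that cast-on-init idea (a hypothetical `__init__` excerpt, not necessarily the final signature):

```python
def __init__(self, model_size: EfficientAdModelSize | str = EfficientAdModelSize.S) -> None:
    super().__init__()
    # Accepts both strings and enum members: EfficientAdModelSize("small") -> .S,
    # while an enum member passes through unchanged.
    self.model_size = EfficientAdModelSize(model_size)
```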

CarlosNacher (Contributor) commented:

Hi, set "S" / "M" instead of "small" / "medium" in the config.yaml file.

alexriedel1 (Contributor) commented:

> Hi, set "S" / "M" instead of "small" / "medium" in the config.yaml file.

This won't help, as "S" or "M" are strings too.

CarlosNacher (Contributor) commented:

> > Hi, set "S" / "M" instead of "small" / "medium" in the config.yaml file.
>
> This won't help, as "S" or "M" are strings too.

Totally right, I misunderstood the key question of the issue, sorry.

samet-akcay added the Bug label Jul 10, 2024
Gornoka (Contributor) commented Jul 10, 2024

Thanks for merging!
