
FIX: PaDiM didn't use config.model.pre_trained. #514

Merged: 2 commits merged on Aug 31, 2022

Conversation

jingt2ch
Contributor

Description

PaDiM did not use the 'pre_trained' value from config.yaml. This PR fixes that.
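
For context, a minimal self-contained sketch of what the fix amounts to: the value of config.model.pre_trained has to actually reach the backbone constructor. The snippet below uses timm and OmegaConf directly; anomalib's own wrapper classes and argument names differ, so treat this as an illustration rather than the code changed in this PR.

```python
# Illustration only: anomalib wraps the backbone in its own classes; names here are generic.
import timm
from omegaconf import OmegaConf

config = OmegaConf.create(
    {
        "model": {
            "backbone": "resnet18",
            "pre_trained": False,  # the flag that was previously ignored
            "layers": ["layer1", "layer2", "layer3"],
        }
    }
)

# Before the fix the backbone was always built with pretrained weights;
# after the fix the config value is forwarded, so random weights can be requested.
feature_extractor = timm.create_model(
    config.model.backbone,
    pretrained=config.model.pre_trained,  # <- the previously dropped value
    features_only=True,
    out_indices=(1, 2, 3),  # roughly layer1-layer3 of a ResNet
)
feature_extractor.eval()  # the backbone is never fine-tuned in PaDiM
```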

Changes

  • Bug fix (non-breaking change which fixes an issue)
  • Refactor (non-breaking change which refactors the code base)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes

@samet-akcay
Contributor

@jingt2ch, thanks for the fix. It looks like the rest of the models do not use the pre_trained configuration either. Would you like to modify those as well, or are you good with PaDiM only?

@jingt2ch
Contributor Author

@samet-akcay, thanks for your review. I'm developing a PaDiM-based model, so PaDiM alone is enough for me.

I'm not sure about the other models. If I come to understand them and they need the same fix, I will send another PR.

@samet-akcay samet-akcay merged commit 702acc1 into openvinotoolkit:main Aug 31, 2022
@djdameln
Contributor

djdameln commented Sep 1, 2022

@jingt2ch thanks for fixing this. We're trying to understand your use case to determine whether we should provide this fix for the other models as well. Would you mind explaining the added value of allowing a non-pretrained backbone in feature-extraction-based models such as PaDiM? Since the backbone in these models is not fine-tuned during training, its weights would always remain at their randomly initialized values, and the backbone would not be able to extract meaningful features.

@jingt2ch
Contributor Author

jingt2ch commented Sep 7, 2022

@djdameln Sorry for my late reply.
In my use case, I wanted to evaluate the impact of the backbone weights. When I evaluated my PaDiM-based model, I found that the weights had a surprisingly small effect, so I suspected an implementation mistake in my model. To check, I used anomalib's pre_trained flag to evaluate the original PaDiM.
I don't know enough about the other models, but someone may want to see the effect of the pre_trained flag on them as well.

Incidentally, even in the original PaDiM, the effect of untrained weights was smaller than expected.
[results screenshot attached in the original comment]
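
For anyone who wants to reproduce this kind of comparison, one way to toggle the flag is to rewrite the model config before training. The config path and the tools/train.py invocation below are assumptions based on anomalib's 0.x layout and may need adjusting:

```python
# Hedged sketch: produce a second config with pre_trained disabled so the two
# variants can be trained and compared. Paths below are assumptions, not verbatim.
from omegaconf import OmegaConf

config = OmegaConf.load("anomalib/models/padim/config.yaml")  # assumed location
config.model.pre_trained = False  # random backbone weights instead of ImageNet
OmegaConf.save(config, "padim_random_backbone.yaml")

# Then train both variants, e.g. (CLI assumed from anomalib 0.x):
#   python tools/train.py --config anomalib/models/padim/config.yaml
#   python tools/train.py --config padim_random_backbone.yaml
```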

@djdameln
Contributor

djdameln commented Sep 7, 2022

@jingt2ch Thanks for clarifying. I guess it makes sense from a research perspective to compare the models with a non-pretrained backbone. It could be used as a sort of baseline against which to compare backbones trained on different datasets. In this respect it would make sense to allow this for the other Anomalib models as well.

Incidentally, even in the original PaDiM, the effect of untrained weights was smaller than expected.

I noticed the same thing when I was investigating this, and was surprised by how well the PaDiM model performs with random weights. I guess that without the pretrained weights the backbone provides a random non-linear dimensionality reduction of the input images, which is apparently still sufficiently descriptive to allow discriminating between the classes.
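
To make that intuition concrete, here is a simplified, single-layer PaDiM-style sketch: a frozen, randomly initialized backbone still maps each patch position to a fixed embedding, and fitting a per-position Gaussian on those embeddings yields a Mahalanobis anomaly map. This is not anomalib's implementation; the layer choice, regularization constant, and tensor shapes are illustrative.

```python
# Simplified, single-layer PaDiM-style sketch with a random (pretrained=False) backbone.
import timm
import torch

backbone = timm.create_model("resnet18", pretrained=False, features_only=True, out_indices=(1,))
backbone.eval()  # frozen: a fixed, random non-linear projection of the input

def embed(x):
    with torch.no_grad():
        return backbone(x)[0]  # (N, 64, 56, 56) for 224x224 inputs

# Fit a per-position Gaussian on embeddings of "normal" images.
train = torch.rand(32, 3, 224, 224)          # stand-in for the normal training set
emb = embed(train)                           # (N, C, H, W)
n, c, h, w = emb.shape
emb = emb.permute(0, 2, 3, 1).reshape(n, h * w, c)
mean = emb.mean(dim=0)                       # (H*W, C)
centered = emb - mean
cov = torch.einsum("npc,npd->pcd", centered, centered) / (n - 1)
cov += 0.01 * torch.eye(c)                   # small epsilon*I so the covariance is invertible

# Mahalanobis distance of a test image's embeddings to the per-position Gaussians.
test = embed(torch.rand(1, 3, 224, 224)).permute(0, 2, 3, 1).reshape(h * w, c)
delta = (test - mean).unsqueeze(-1)          # (H*W, C, 1)
dist = (delta.transpose(-1, -2) @ torch.linalg.solve(cov, delta)).sqrt().reshape(h, w)
print(dist.shape)                            # anomaly map at feature-map resolution
```

Even with random weights, the embeddings are deterministic per input, so the per-position Gaussians are well defined and the distances remain informative, which matches the observation above.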
