
Fix efficient ad #2015

Merged
merged 5 commits into from Apr 28, 2024

Conversation

abc-125
Contributor

@abc-125 commented Apr 22, 2024

📝 Description

Several changes to fix the performance:

  • Replaced the teacher weights with the ones from Nelson1425 (thanks to @alexriedel1 for the advice) and renamed the layers to match the anomalib implementation; the new weights can be downloaded here.
  • Removed ImageNet normalization from the transforms for this model, because it is already applied in the forward pass of every submodel. Added a check to detect whether the user adds it anyway.
  • The train_batch_size should be 1, as described in the paper. It is hard to hardcode because it is used in AnomalibDataModule, which has no access to the model's name, so I added a check that raises an error if the value is not 1.
  • Removed the batch_size parameter from efficient_ad.yaml, because it affected only the ImageNet batch size, which should always be 1 according to the paper.
  • Changed max epochs in efficient_ad.yaml to 1000, because if there are few images per class (as in MVTec AD), the model needs more epochs to finish the 70,000 steps required to reproduce the results from the paper.
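The two validation checks described above can be sketched roughly as follows. This is an illustrative sketch only, not the code merged in this PR; the function name `validate_efficient_ad_config` and the transform representation are assumptions.

```python
def validate_efficient_ad_config(train_batch_size: int, transform_names: list[str]) -> None:
    """Illustrative checks mirroring the constraints described in this PR.

    EfficientAd is trained with batch size 1 (per the paper), and ImageNet
    normalization must not appear in the data transforms, because the model
    already normalizes inputs inside each submodel's forward pass.
    """
    if train_batch_size != 1:
        raise ValueError(
            f"train_batch_size for EfficientAd must be 1, got {train_batch_size}."
        )
    if "Normalize" in transform_names:
        raise ValueError(
            "Transforms must not include ImageNet normalization for EfficientAd; "
            "it is already applied in the model's forward pass."
        )
```

A config with `train_batch_size=1` and no `Normalize` transform passes silently; anything else raises a `ValueError` early instead of silently degrading performance.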

✨ Changes

Select what type of change your PR is:

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • 🔨 Refactor (non-breaking change which refactors the code base)
  • 🚀 New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • 📚 Documentation update
  • 🔒 Security update

✅ Checklist

Before you submit your pull request, please make sure you have completed the following steps:

  • 📋 I have summarized my changes in the CHANGELOG and followed the guidelines for my type of change (skip for minor changes, documentation updates, and test enhancements).
  • 📚 I have made the necessary updates to the documentation (if applicable).
  • 🧪 I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).

For more information about code review checklists, see the Code Review Checklist.

@alexriedel1
Contributor

Do you see any performance increases? If so can you provide some numbers?

@abc-125
Contributor Author

abc-125 commented Apr 24, 2024

> Do you see any performance increases? If so can you provide some numbers?

Sure, here are the results for image-level AUROC, seed 42, different classes of MVTec dataset, and model sizes (S or M):

  • After setting train_batch_size to 1 and max_epochs to 1000 to fit the paper settings, capsules S: 87.8
  • After replacing teacher weights, capsules S: 98.8, screw S: 81.2
  • After removing the second ImageNet normalization, screw S: 95.9, cable S: 94.8, transistor S: 99.8, pill M: 98.1

The last row shows results similar to the results from the paper.

@alexriedel1
Contributor

> Sure, here are the results for image-level AUROC, seed 42, different classes of MVTec dataset, and model sizes (S or M): […] The last row shows results similar to the results from the paper.

Looks good to me!
For comparison, the AUROC results from the EfficientAD paper are:
EfficientAD S:
Screw: 0.97315
Cable: 0.92537
Transistor: 0.998

EfficientAD M:
Pill: 0.98608

Anomalib PatchCore WideResNet-50:
Screw: 0.924
Cable: 0.990
Transistor: 1.000
Pill: 0.924

Contributor

@samet-akcay left a comment


Thanks a lot for the effort, @abc-125. The changes look good to me. If @alexriedel1 also agrees, we could merge.

@alexriedel1 can you let me know what you think? Thanks

@alexriedel1
Contributor

> Thanks a lot for the effort, @abc-125. The changes look good to me. If @alexriedel1 also agrees, we could merge.
>
> @alexriedel1 can you let me know what you think? Thanks

Yes go for it!
Be sure to include the new model weights in the releases and then the weight download can be updated accordingly

WEIGHTS_DOWNLOAD_INFO = DownloadInfo(
    name="efficientad_pretrained_weights.zip",
    url="https://github.com/openvinotoolkit/anomalib/releases/download/efficientad_pretrained_weights/efficientad_pretrained_weights.zip",
    hashsum="c09aeaa2b33f244b3261a5efdaeae8f8284a949470a4c5a526c61275fe62684a",
)
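For context, a hashsum like the one above is typically used to verify the downloaded archive against its SHA-256 digest. A minimal sketch of that idea, assuming a helper named `verify_hashsum` (not anomalib's actual API):

```python
import hashlib
from pathlib import Path


def verify_hashsum(path: Path, expected: str) -> bool:
    """Return True if the file's SHA-256 digest matches the expected hashsum.

    Mirrors the role of the `hashsum` field in DownloadInfo: after
    downloading, the archive is hashed and compared against the pinned value
    so that corrupted or tampered weight files are rejected.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected
```

Pinning the digest in the release metadata means updating the weights (as this PR does) also requires updating `hashsum`, so stale caches fail loudly instead of silently loading old weights.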

Collaborator

@ashwinvaidya17 left a comment


Thanks for the fix!

@blaz-r
Contributor

blaz-r commented Apr 26, 2024

Great job 😄. This also partially fixes #1813

@samet-akcay
Contributor

@ashwinvaidya17, the tests are failing. I guess it's related to the weights?

@ashwinvaidya17
Collaborator

I think the tests need to be updated
[screenshot of the failing test omitted]

@samet-akcay
Contributor

Ah yes, of course!

@abc-125
Contributor Author

abc-125 commented Apr 27, 2024

> I think the tests need to be updated

Thank you, @ashwinvaidya17! I have updated the train_batch_size for this model in test_models.py.

@samet-akcay
Contributor

> I think the tests need to be updated
>
> Thank you, @ashwinvaidya17! I have updated the train_batch_size for this model in test_models.py.

Thanks @abc-125, merging now!

@samet-akcay merged commit aee41f2 into openvinotoolkit:main on Apr 28, 2024
4 of 6 checks passed

Successfully merging this pull request may close these issues.

📉 Investigate the low performance of EfficientAD