
Make model initialization configurable and not hard coded - pretrained=True option #381

Closed
baraklior opened this issue Jun 20, 2022 · 1 comment · Fixed by #431

@baraklior

My problem
I work in a production environment with no internet access at runtime, so I load model weights from a file.
When initializing a FeatureExtractor, it in turn loads a model with a hard-coded argument pretrained=True, so it attempts to download the model weights.
Specifically, I'm looking at this line, but it pops up in a few places:
self.feature_extractor = FeatureExtractor(backbone=_backbone(pretrained=True), layers=["avgpool"]).eval()

Proposed solution
Make the pretrained option configurable, via environment variables, an init argument, or a config option, in these classes:
TorchInferencer
DfkdeModel
FeatureExtractor
and any other classes that initialize a model.

The offending file:
https://github.com/openvinotoolkit/anomalib/blob/cab7aa21aba6876173585a6d300c63238b16fb11/anomalib/models/dfkde/torch_model.py
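A minimal sketch of what the requested change could look like: `pre_trained` becomes an init argument that flows down to the backbone constructor instead of being hard-coded. The class bodies here are illustrative stand-ins, not anomalib's actual implementation.

```python
from typing import Callable, List


class FeatureExtractor:
    """Illustrative stand-in for the real extractor; only stores its inputs."""

    def __init__(self, backbone, layers: List[str]):
        self.backbone = backbone
        self.layers = layers

    def eval(self):
        return self


class DfkdeModel:
    def __init__(self, backbone_fn: Callable, pre_trained: bool = True):
        # The flag is forwarded to the backbone constructor rather than
        # being fixed to pretrained=True inside this class, so offline
        # environments can pass pre_trained=False and load weights from file.
        self.feature_extractor = FeatureExtractor(
            backbone=backbone_fn(pretrained=pre_trained), layers=["avgpool"]
        ).eval()
```

With this shape, an offline deployment would construct the model with `pre_trained=False` and then load its own state dict from disk.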

@samet-akcay samet-akcay added Enhancement New feature or request Model labels Jun 24, 2022
@samet-akcay
Contributor

@baraklior, just to understand the request correctly, you want to add a pre_trained flag to the config files to set it on/off depending on the need, right? For example, something like this for DFKDE:

config
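Such a config entry might look roughly like the fragment below; the key names are a hedged guess and are not confirmed against anomalib's actual config schema.

```yaml
# Hypothetical DFKDE model config with the proposed flag
model:
  name: dfkde
  backbone: resnet18
  pre_trained: false  # skip downloading ImageNet weights at runtime
```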
