In a recent release, the concept of using remote inference with validators was introduced. Along with this came the option to skip installing a validator's ML models, with the intention of using remote inference endpoints instead. While there is a flag to control this behaviour, it defaulted to skipping model installation unless told otherwise.

First, this is a bad assumption, since it can silently leave validators in an unusable state unless the user already knows about this feature and has read its docs.

Second, the post-install step is also currently where other hub validators that a particular validator depends on are installed. See #966 as an example of this.
This PR corrects this behaviour by implementing the following:

- The `local_models` option defaults to `None` instead of `False`
- When `local_models` is `None`, the post-install is skipped only if `use_remote_inferencing` is set to `true` in the `.guardrailsrc` and the validator being installed is tagged as having a remote inference endpoint (i.e. `module_manifest.tags.has_guardrails_endpoint`)
- The `--install-local-models` flag always indicates the post-install should be run
- The `--no-install-local-models` flag always indicates the post-install should not be run