[ML][Inference] adding license checks #49056

Conversation

benwtrent (Member):

This adds license checks for the inference processor creation and the inference action.
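
In rough outline, that means two guard sites: one when an inference processor is constructed from a pipeline config, and one per InferModelAction request. A minimal sketch of the processor-creation side (the fromConfig helper and the client/licenseState fields are assumptions for illustration, not the PR's exact code):

    @Override
    public InferenceProcessor create(Map<String, Processor.Factory> registry, String tag,
                                     Map<String, Object> config) {
        // Fail fast at processor-creation time if the current license
        // does not cover machine learning.
        if (licenseState.isMachineLearningAllowed() == false) {
            throw LicenseUtils.newComplianceException(XPackField.MACHINE_LEARNING);
        }
        return InferenceProcessor.fromConfig(client, tag, config); // hypothetical helper
    }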

benwtrent added the >non-issue and :ml (Machine learning) labels on Nov 13, 2019

elasticmachine (Collaborator):

Pinging @elastic/ml-core (:ml)

benwtrent (Member, Author):

run elasticsearch-ci/1


    @Override
    public void licenseStateChanged() {
        this.inferenceAllowed = licenseState.isMachineLearningAllowed();
    }

Contributor:

What is the mechanism that updates licenseState? Am I right that it is a mutable object, and that the license listener notifies the licenseStateChanged method after the internal state of the licenseState object changes?

benwtrent (Member, Author):

Yes, it is a singleton that gets mutated in place. See other examples in org.elasticsearch.xpack.ml.InvalidLicenseEnforcer.
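
As a minimal, self-contained sketch of that pattern (illustrative names only, not the real XPackLicenseState API): the singleton mutates in place and notifies its listeners, each of which re-reads the flag it caches.

    import java.util.ArrayList;
    import java.util.List;

    public class LicenseListenerSketch {
        interface LicenseStateListener { void licenseStateChanged(); }

        static class LicenseState {
            private final List<LicenseStateListener> listeners = new ArrayList<>();
            private volatile boolean mlAllowed = true;

            void addListener(LicenseStateListener listener) { listeners.add(listener); }
            boolean isMachineLearningAllowed() { return mlAllowed; }

            // Simulate a license change: mutate the singleton in place, then notify.
            void apply(boolean mlAllowed) {
                this.mlAllowed = mlAllowed;
                listeners.forEach(LicenseStateListener::licenseStateChanged);
            }
        }

        public static void main(String[] args) {
            LicenseState licenseState = new LicenseState();
            boolean[] inferenceAllowed = { licenseState.isMachineLearningAllowed() };
            // The component caches the flag and refreshes it on every notification.
            licenseState.addListener(
                () -> inferenceAllowed[0] = licenseState.isMachineLearningAllowed());

            licenseState.apply(false); // e.g. a trial license expires
            System.out.println("inference allowed: " + inferenceAllowed[0]); // false
        }
    }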

    @Override
    protected void doExecute(Task task, InferModelAction.Request request, ActionListener<InferModelAction.Response> listener) {
        if (licenseState.isMachineLearningAllowed() == false) {

Contributor:

Do other actions (get, delete) already have this license check?

benwtrent (Member, Author):

No, and I don't think they should; they don't really provide licensed value, IMO. This is similar to how we treat anomaly jobs.
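
For context, the rejection path under discussion could look roughly like this; LicenseUtils.newComplianceException is the x-pack helper for this kind of failure, though its use here is an assumption rather than a quote from the PR:

    @Override
    protected void doExecute(Task task, InferModelAction.Request request,
                             ActionListener<InferModelAction.Response> listener) {
        if (licenseState.isMachineLearningAllowed() == false) {
            // Reject the request up front instead of running inference.
            listener.onFailure(LicenseUtils.newComplianceException(XPackField.MACHINE_LEARNING));
            return;
        }
        // ... load the model and respond with the inference results ...
    }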

Co-Authored-By: Przemysław Witek <przemyslaw.witek@elastic.co>
przemekwitek (Contributor) left a comment:

LGTM

    @@ -342,10 +342,16 @@

    @Override
    public Map<String, Processor.Factory> getProcessors(Processor.Parameters parameters) {
        if (this.enabled == false) {
            return Collections.emptyMap();

Contributor:

This needs some due diligence. If there was an ingest pipeline in the cluster state containing an inference ingest processor and the cluster was restarted with xpack.ml.enabled: false, what would happen?

The tentative plan for restoring full cluster snapshots in Cloud in the future is to disable all X-Pack plugins during the snapshot restore, which will lead to this exact situation.

If there's any doubt about what will happen it might be safer to allow the ingest processors to exist but just have them fail on every document they process (via the failure response from the infer model action) if the license is invalid.

benwtrent (Member, Author):

@droberts195 aren't xpack.ml.enabled and the license checks two different things? Additionally, this is exactly what the enrich project has done (if enrich is disabled, the processors are not provided). I will reach out to core features to see what they think about this.

benwtrent (Member, Author):

@droberts195 pipelines are stored as maps in the cluster state and are not fully marshaled until the pipeline is created on the ingest node, so cluster state restoration and pipeline instantiation are two different steps.

> If there was an ingest pipeline in the cluster state containing an inference ingest processor and the cluster was restarted with xpack.ml.enabled: false what would happen?

The cluster will start up fine; the pipeline will just fail to be instantiated on the node.
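
A self-contained toy of that lazy-instantiation behavior (illustrative names, not the real IngestService code): the cluster state holds only raw config maps, and processor factories are consulted only when a pipeline is built on a node.

    import java.util.Collections;
    import java.util.Map;

    public class LazyPipelineSketch {
        interface Processor { void process(Map<String, Object> doc); }
        interface Factory { Processor create(Map<String, Object> config); }

        public static void main(String[] args) {
            // The stored pipeline config survives a restart regardless of which
            // plugins are installed or enabled.
            Map<String, Object> stored = Map.of("type", "inference", "model_id", "my_model");

            // With xpack.ml.enabled: false the plugin registers no factories
            // (cf. getProcessors returning Collections.emptyMap() above).
            Map<String, Factory> factories = Collections.emptyMap();

            Factory factory = factories.get(stored.get("type"));
            if (factory == null) {
                // The failure surfaces here, at pipeline instantiation on the
                // node, not when the cluster state is restored.
                System.out.println("pipeline creation fails: unknown processor type ["
                    + stored.get("type") + "]");
            }
        }
    }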

Contributor:

OK great. Sounds like it's not a problem then and the code can stay as it is.

benwtrent merged commit cfd5c64 into elastic:feature/ml-inference on Nov 14, 2019
benwtrent deleted the feature/ml-inference-licensing branch on Nov 14, 2019 at 14:13
benwtrent added a commit that referenced this pull request Nov 18, 2019
* [ML][Inference] adds lazy model loader and inference (#47410)

This adds a couple of things:

- A model loader service that is accessible via transport calls. This service loads models and caches them; they stay loaded until no processor references them.
- A Model class and its first subclass, LocalModel, used to cache model information and run inference.
- A transport action and handler for requests to infer against a local model.
Related Feature PRs: 
* [ML][Inference] Adjust inference configuration option API (#47812)

* [ML][Inference] adds logistic_regression output aggregator (#48075)

* [ML][Inference] Adding read/del trained models (#47882)

* [ML][Inference] Adding inference ingest processor (#47859)

* [ML][Inference] fixing classification inference for ensemble (#48463)

* [ML][Inference] Adding model memory estimations (#48323)

* [ML][Inference] adding more options to inference processor (#48545)

* [ML][Inference] handle string values better in feature extraction (#48584)

* [ML][Inference] Adding _stats endpoint for inference (#48492)

* [ML][Inference] add inference processors and trained models to usage (#47869)

* [ML][Inference] add new flag for optionally including model definition (#48718)

* [ML][Inference] adding license checks (#49056)

* [ML][Inference] Adding memory and compute estimates to inference (#48955)
benwtrent added a commit to benwtrent/elasticsearch that referenced this pull request Nov 18, 2019

(Same commit message as above, with elastic#-prefixed PR references.)
benwtrent added a commit that referenced this pull request Nov 18, 2019

* [ML] ML Model Inference Ingest Processor (#49052)

(Same list of related feature PRs as above, plus:)

* fixing version of indexed docs for model inference