[ML][Inference] Adding read/del trained models (#47882)
benwtrent merged 6 commits into elastic:feature/ml-inference
Conversation
Pinging @elastic/ml-core (:ml)
import java.util.Date;

public class InferenceAuditMessage extends AbstractAuditMessage {
Could you add a unit test, similar to AnomalyDetectionAuditMessageTests?
I will, but it seems like overkill to me as this class essentially does nothing.
    super(in);
}

public Response(QueryPage<TrainedModelConfig> analytics) {
Rename analytics to trainedModels or similar?
private final Client client;
private final ClusterService clusterService;

public Factory(Client client, ClusterService clusterService, Settings settings) {
The settings parameter is unused; is it OK to remove it?
@przemekwitek this class is unused, I prefer to just keep it like this in waiting for #47859. If #47859 is merged first, then I can fix the conflicts here. If this PR is merged first, then that PR will have to address conflicts.
public static class Factory implements Processor.Factory, Consumer<ClusterState> {

    private final Client client;
    private final ClusterService clusterService;
It will be; neither is client, really. This class is just sitting here waiting on #47859.
.../ml/src/main/java/org/elasticsearch/xpack/ml/inference/persistence/TrainedModelProvider.java
@Override
protected String[] getIndices() {
    return new String[] {InferenceIndexConstants.INDEX_PATTERN };
Suggested change:
- return new String[] {InferenceIndexConstants.INDEX_PATTERN };
+ return new String[] { InferenceIndexConstants.INDEX_PATTERN };
/**
 * The action is a master node action to ensure it reads an up-to-date cluster
 * state in order to determine whether there is a persistent task for the analytics
Set<String> referencedModels = getReferencedModelKeys(currentIngestMetadata);

if (referencedModels.contains(id)) {
    listener.onFailure(new ElasticsearchStatusException("Cannot delete mode [{}] as it is still referenced by ingest processors",
Suggested change:
- listener.onFailure(new ElasticsearchStatusException("Cannot delete mode [{}] as it is still referenced by ingest processors",
+ listener.onFailure(new ElasticsearchStatusException("Cannot delete model [{}] as it is still referenced by ingest processors",
private Set<String> getReferencedModelKeys(IngestMetadata ingestMetadata) {
    Set<String> allReferencedModelKeys = new HashSet<>();
    if (ingestMetadata != null) {
This is a matter of taste, but I would add this at the beginning of the method:
if (ingestMetadata == null) {
return Collections.emptySet();
}
This way it is clear what happens on null case and the rest of the method focuses on non-null case.
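The suggested early-return refactor can be sketched as follows. Note this is only an illustration: IngestMetadata here is a minimal stand-in for the real Elasticsearch class, and the model_id key used for extraction is hypothetical, not the actual pipeline-walking logic of the PR.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReferencedModelsSketch {

    // Minimal stand-in for org.elasticsearch.ingest.IngestMetadata,
    // mapping pipeline ids to their configuration maps.
    static class IngestMetadata {
        final Map<String, Map<String, Object>> pipelines;
        IngestMetadata(Map<String, Map<String, Object>> pipelines) {
            this.pipelines = pipelines;
        }
    }

    // Early return on null makes the null case explicit, so the rest of
    // the method can focus on non-null metadata.
    static Set<String> getReferencedModelKeys(IngestMetadata ingestMetadata) {
        if (ingestMetadata == null) {
            return Collections.emptySet();
        }
        Set<String> allReferencedModelKeys = new HashSet<>();
        for (Map<String, Object> pipelineConfig : ingestMetadata.pipelines.values()) {
            Object modelId = pipelineConfig.get("model_id"); // hypothetical key, for illustration
            if (modelId instanceof String) {
                allReferencedModelKeys.add((String) modelId);
            }
        }
        return allReferencedModelKeys;
    }

    public static void main(String[] args) {
        System.out.println(getReferencedModelKeys(null).isEmpty()); // true
        IngestMetadata meta = new IngestMetadata(
            Map.of("pipeline-1", Map.of("model_id", "my-model")));
        System.out.println(getReferencedModelKeys(meta)); // [my-model]
    }
}
```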
- match: { count: 0 }
- match: { trained_model_configs: [] }

- do:
.../ml/src/main/java/org/elasticsearch/xpack/ml/inference/persistence/TrainedModelProvider.java
* [ML] ML Model Inference Ingest Processor (#49052)
* [ML][Inference] adds lazy model loader and inference (#47410)

  This adds a couple of things:
  - A model loader service that is accessible via transport calls. This service will load in models and cache them. They will stay loaded until a processor no longer references them.
  - A Model class and its first sub-class LocalModel. Used to cache model information and run inference.
  - Transport action and handler for requests to infer against a local model.

  Related Feature PRs:
  * [ML][Inference] Adjust inference configuration option API (#47812)
  * [ML][Inference] adds logistic_regression output aggregator (#48075)
  * [ML][Inference] Adding read/del trained models (#47882)
  * [ML][Inference] Adding inference ingest processor (#47859)
  * [ML][Inference] fixing classification inference for ensemble (#48463)
  * [ML][Inference] Adding model memory estimations (#48323)
  * [ML][Inference] adding more options to inference processor (#48545)
  * [ML][Inference] handle string values better in feature extraction (#48584)
  * [ML][Inference] Adding _stats endpoint for inference (#48492)
  * [ML][Inference] add inference processors and trained models to usage (#47869)
  * [ML][Inference] add new flag for optionally including model definition (#48718)
  * [ML][Inference] adding license checks (#49056)
  * [ML][Inference] Adding memory and compute estimates to inference (#48955)
* fixing version of indexed docs for model inference
Adds two endpoints, GET and DELETE, for trained models.
We don't allow DELETE for models that are still referenced by an ingest pipeline.
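The DELETE guard described above boils down to a membership check against the set of model ids referenced by ingest pipelines. A minimal sketch, assuming a plain exception in place of the real ElasticsearchStatusException and its listener-based error reporting:

```java
import java.util.Set;

public class DeleteGuardSketch {

    // Stand-in for ElasticsearchStatusException; illustrative only.
    static class ModelReferencedException extends RuntimeException {
        ModelReferencedException(String msg) { super(msg); }
    }

    // Refuse to delete a model that an ingest pipeline still references.
    static void checkModelNotReferenced(String id, Set<String> referencedModels) {
        if (referencedModels.contains(id)) {
            throw new ModelReferencedException(
                "Cannot delete model [" + id + "] as it is still referenced by ingest processors");
        }
    }

    public static void main(String[] args) {
        // An unreferenced model passes the check silently.
        checkModelNotReferenced("unused-model", Set.of("in-use-model"));
        // A referenced model triggers the failure path.
        try {
            checkModelNotReferenced("in-use-model", Set.of("in-use-model"));
        } catch (ModelReferencedException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In the actual transport action, the failure is delivered through listener.onFailure rather than a thrown exception, so callers get the error asynchronously.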