Add validation to inference update API#137241

Open
Anmol202005 wants to merge 1 commit into elastic:main from Anmol202005:valid

Conversation

@Anmol202005

Fixes: #122356
Added validation to inference update API.


cla-checker-service bot commented Oct 28, 2025

💚 CLA has been signed

@elasticsearchmachine elasticsearchmachine added v9.3.0 needs:triage Requires assignment of a team area label external-contributor Pull request authored by a developer outside the Elasticsearch team labels Oct 28, 2025
@benwtrent benwtrent added :ml Machine learning and removed needs:triage Requires assignment of a team area label labels Oct 28, 2025
@elasticsearchmachine elasticsearchmachine added the Team:ML Meta label for the ML team label Oct 28, 2025
@elasticsearchmachine
Collaborator

Pinging @elastic/ml-core (Team:ML)

ModelValidatorBuilder.buildModelValidator(newModel.getTaskType(), service.get())
Member


From some quick testing, I believe this validation will always fail. The reason is that newModel is a generic Model class object. When the validator makes a validation call to the service, it invokes the doInfer function within the corresponding service class. These functions check that the model is of the expected service-specific <ServiceName>Model class (example of this check for JinaAI); for a generic Model that check is always false, so an exception is thrown and validation fails. For validation to pass, the model provided to the validator must be of the service-specific type. There are a few options I can think of for accomplishing this:

  1. You can convert the Model back to an UnparsedModel (through a process like ModelRegistry.modelToUnparsedModel function) and then run it through the parsing logic of the service (through a parsePersistedConfigWithSecrets call). I don't believe this is the best way to accomplish this as it involves unnecessary deserialization/re-serialization.
  2. You can update the logic within combineExistingModelWithNewSettings to take the existingUnparsedModel instead of the existingParsedModel and run the updates on the various settings while they are still unparsed. Then use parsePersistedConfigWithSecrets on the updated UnparsedModel object to get a service-specific model object for validation. I would recommend looking into this option first.
  3. We can move combineExistingModelWithNewSettings into a more service-specific location, either within the individual service-specific model classes or in the service implementations themselves. This will likely require defining it separately for each service implementation, which could be a significant rework, so I would try option 2 before considering this route.
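To make the failure mode and the shape of option 2 concrete, here is a framework-free sketch. Every class and method name below is a simplified stand-in, not the real Elasticsearch API: `Model`/`JinaAIModel` stand in for the generic and service-specific model classes, `passesServiceTypeCheck` for the type check inside doInfer, and `mergeAndParse` for merging the update into the unparsed settings and then running the service's parsing logic (the parsePersistedConfigWithSecrets analogue):

```java
import java.util.HashMap;
import java.util.Map;

public class UpdateValidationSketch {
    // Stand-in for the generic Model base class.
    static class Model {
        final Map<String, Object> settings;
        Model(Map<String, Object> settings) { this.settings = settings; }
    }

    // Stand-in for a service-specific model class, e.g. JinaAIModel.
    static class JinaAIModel extends Model {
        JinaAIModel(Map<String, Object> settings) { super(settings); }
    }

    // Stand-in for the type check performed inside the service's doInfer:
    // a plain Model is never an instance of the service-specific subtype.
    static boolean passesServiceTypeCheck(Model model) {
        return model instanceof JinaAIModel;
    }

    // Option 2 in miniature: apply the update to the *unparsed* settings
    // map first, then let the service's parser build the specific subtype.
    static JinaAIModel mergeAndParse(Map<String, Object> existingUnparsed,
                                     Map<String, Object> updates) {
        Map<String, Object> merged = new HashMap<>(existingUnparsed);
        merged.putAll(updates);
        return new JinaAIModel(merged); // parsePersistedConfigWithSecrets analogue
    }

    public static void main(String[] args) {
        Model generic = new Model(Map.of("dim", 512));
        System.out.println(passesServiceTypeCheck(generic));   // false: validation fails

        JinaAIModel parsed = mergeAndParse(
            new HashMap<>(Map.of("dim", 512)), Map.of("dim", 1024));
        System.out.println(passesServiceTypeCheck(parsed));    // true: validation can pass
        System.out.println(parsed.settings.get("dim"));        // 1024: update applied
    }
}
```

The point of the sketch is only that the subtype must be produced by the service's own parsing path after the merge; the real merge and parse calls live in the inference service code.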

Let me know if you have any questions about this. I've also added a comment about testing in the issue linked in the description as I think we will want some tests to ensure this logic is functioning properly.


Labels

>bug
external-contributor (Pull request authored by a developer outside the Elasticsearch team)
:ml (Machine learning)
Team:ML (Meta label for the ML team)
v9.5.0

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[ML] Inference update API does not validate the updated settings

6 participants