
[ML] Trained model size calculated incorrectly #107831

@davidkyle

Description


Elasticsearch Version

8.13.0

Installed Plugins

No response

Java Version

bundled

OS Version

any

Problem Description

The trained model stats API calculates the required_native_memory_bytes field incorrectly.

Since #98139, the calculation of a model's required memory has taken the number of allocations into account, as each allocation uses extra memory. The memory requirement of each deployment therefore differs depending on its number of allocations. The bug is that the total number of deployed allocations across all deployments is used to calculate the required memory, rather than the deployment's own number of allocations.

required_native_memory_bytes should be calculated per deployment using that deployment's number of allocations.

The bug only affects the Stats API output; it does not affect deploying a model.
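The shape of the miscalculation can be sketched as follows. This is an illustrative model only, not the actual Elasticsearch code: the class name, base overhead, and per-allocation cost are made up for the example; the point is that the stats use the cluster-wide allocation total where the deployment's own count belongs.

```java
// Hypothetical sketch of the bug; constants and names are illustrative,
// not taken from the Elasticsearch source.
final class RequiredMemorySketch {
    static final long BASE_BYTES = 240L * 1024 * 1024;           // illustrative fixed overhead
    static final long PER_ALLOCATION_BYTES = 32L * 1024 * 1024;  // illustrative per-allocation cost

    // Required memory grows with the number of allocations, so it must be
    // computed from the allocations of the deployment being reported on.
    static long requiredBytes(long modelSizeBytes, int allocations) {
        return BASE_BYTES + modelSizeBytes + PER_ALLOCATION_BYTES * allocations;
    }

    public static void main(String[] args) {
        long modelSize = 400L * 1024 * 1024;
        int deploymentA = 4;                   // allocations for deployment A
        int deploymentB = 1;                   // allocations for deployment B
        int total = deploymentA + deploymentB; // cluster-wide total

        // Buggy stats value for deployment B: uses the total across deployments.
        long buggy = requiredBytes(modelSize, total);
        // Correct value for deployment B: uses only its own allocation count.
        long correct = requiredBytes(modelSize, deploymentB);

        // The stats over-report deployment B's requirement whenever any
        // other deployment has allocations.
        System.out.println(buggy > correct);
    }
}
```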

Steps to Reproduce

Deploy any two NLP models in machine learning. Change the number of allocations for the first model and observe the required memory change for the second model.
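The steps above can be sketched as REST requests built with Java's java.net.http client. The model IDs and cluster address are placeholders; any two deployed NLP models will do. The endpoints used are the update trained model deployment API and the get trained model stats API.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of the reproduction against a local cluster; "model_one" and
// "model_two" stand in for any two deployed NLP models.
final class ReproduceStatsBug {
    static final String ES = "http://localhost:9200";

    // POST _ml/trained_models/<model_id>/deployment/_update
    static HttpRequest updateAllocations(String modelId, int allocations) {
        return HttpRequest.newBuilder()
            .uri(URI.create(ES + "/_ml/trained_models/" + modelId + "/deployment/_update"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(
                "{\"number_of_allocations\": " + allocations + "}"))
            .build();
    }

    // GET _ml/trained_models/<model_id>/_stats
    static HttpRequest stats(String modelId) {
        return HttpRequest.newBuilder()
            .uri(URI.create(ES + "/_ml/trained_models/" + modelId + "/_stats"))
            .GET()
            .build();
    }

    public static void main(String[] args) {
        // 1. Change the allocation count of the first model's deployment.
        HttpRequest update = updateAllocations("model_one", 4);
        // 2. Fetch stats for the second model and inspect
        //    required_native_memory_bytes: it changes even though the
        //    second deployment was not modified.
        HttpRequest check = stats("model_two");
        System.out.println(update.method() + " " + update.uri());
        System.out.println(check.method() + " " + check.uri());
    }
}
```

Sending the requests with an HttpClient against a live 8.13.0 cluster shows the second model's reported requirement shifting with the first model's allocations.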

Logs (if relevant)

No response

Metadata

Labels

:ml (Machine learning), >bug, Team:ML (Meta label for the ML team)
