[ML] Frustrating experience creating DFA job when xpack.ml.max_model_memory_limit is set #60486

@droberts195

Description

When xpack.ml.max_model_memory_limit is set, it is possible that a particular data frame analytics job is impossible for the cluster to create and run successfully. The way this manifests itself in the UI, however, is likely to cause immense frustration.

In the following example xpack.ml.max_model_memory_limit was set to 410mb.
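For reference, the cap in this example corresponds to a cluster-wide setting like the following (the 410mb value is taken from this report; the snippet itself is illustrative):

```yaml
# elasticsearch.yml — cluster-wide cap on ML model memory.
# No single ML job may be configured with a model_memory_limit above this value.
xpack.ml.max_model_memory_limit: 410mb
```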

After creating the initial config and clicking "Create" you get an error like this:

[Screenshot 2020-03-18 at 10 57 22]

The obvious reaction will then be to edit the model memory limit to bring it down to the maximum permitted:

[Screenshot 2020-03-18 at 10 57 46]

This works, and you can create the job, but then when you try to start it you get this error:
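At the API level the same trap looks roughly like this (the job name, indices, and analysis type are hypothetical; the two endpoints are the real data frame analytics create and start APIs):

```
# Create succeeds once model_memory_limit is lowered to the cluster cap...
PUT _ml/data_frame/analytics/example
{
  "source": { "index": "source-index" },
  "dest": { "index": "dest-index" },
  "analysis": { "outlier_detection": {} },
  "model_memory_limit": "410mb"
}

# ...but start fails, because the estimated memory requirement for the
# job is higher than the 410mb the job is now allowed to use.
POST _ml/data_frame/analytics/example/_start
```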

[Screenshot 2020-03-18 at 10 58 04]

Given that the backend code is so defensive about stopping you from running a job whose model memory limit is less than the estimated requirement, it would be better if the UI were equally strict and broke the bad news at an earlier stage that you cannot do what you want.
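The earlier check the UI could perform is a simple comparison: if the estimated memory requirement already exceeds the cluster-wide cap, the job can never start, so letting the user lower model_memory_limit to the cap only defers the failure. A minimal sketch of that check (the function names and the mb-only size parsing are illustrative, not the actual Kibana code):

```typescript
// Hypothetical UI-side validation, sketched for this issue.
// Parses sizes of the form "410mb"; real code would handle all byte units.
function parseMb(size: string): number {
  const m = /^(\d+)mb$/i.exec(size.trim());
  if (!m) {
    throw new Error(`unsupported size format: ${size}`);
  }
  return Number(m[1]);
}

interface ValidationResult {
  ok: boolean;
  message?: string;
}

// Fail fast at configuration time rather than at start time: a job whose
// estimated requirement exceeds xpack.ml.max_model_memory_limit cannot
// run on this cluster no matter what model_memory_limit is set to.
function validateModelMemoryLimit(
  estimatedRequirement: string,
  maxModelMemoryLimit: string
): ValidationResult {
  if (parseMb(estimatedRequirement) > parseMb(maxModelMemoryLimit)) {
    return {
      ok: false,
      message:
        `Estimated memory requirement ${estimatedRequirement} exceeds ` +
        `xpack.ml.max_model_memory_limit (${maxModelMemoryLimit}); ` +
        `this job cannot run on this cluster.`,
    };
  }
  return { ok: true };
}
```

With this in place, the wizard could surface the "cannot run on this cluster" message as soon as the estimate is known, instead of after the user has created the job and tried to start it.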
