[ML] Limit categorization memory usage #1167
Conversation
Anomaly detection jobs have a model_memory_limit setting. This is supposed to restrict the amount of memory the job can use; however, in the past the limit only applied to anomaly detection and not to categorization. This change applies memory limiting to categorization as follows:
- When a job is in hard_limit status, no new categories will be created. The input document that could not be categorized is discarded, as it cannot take part in anomaly detection without a category. The failed_category_count statistic is incremented each time this happens.
- When a job is in soft_limit status, we stop recording examples for the category.

Fixes elastic#1130
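The two limit states described above behave differently: hard_limit blocks category creation entirely, while soft_limit only stops example recording. A minimal sketch of that behaviour, using illustrative names rather than the actual ml-cpp classes (`CategorizerSketch`, `MemoryStatus`, and the counters are all hypothetical stand-ins):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical stand-in for model_t::EMemoryStatus.
enum class MemoryStatus { Ok, SoftLimit, HardLimit };

// Illustrative sketch of the limiting behaviour, not the real implementation.
struct CategorizerSketch {
    MemoryStatus memoryStatus{MemoryStatus::Ok};
    std::uint64_t failedCategoryCount{0};
    int nextCategoryId{1};

    // In hard_limit no new category is created: the record is discarded
    // and failed_category_count is incremented.
    int computeCategory(const std::string& /*record*/) {
        if (memoryStatus == MemoryStatus::HardLimit) {
            ++failedCategoryCount;
            return -1; // caller discards the input document
        }
        return nextCategoryId++;
    }

    // As soon as the job leaves the Ok state (i.e. at soft_limit or
    // worse) we stop recording examples for categories.
    bool addExample(int /*categoryId*/, const std::string& /*example*/) {
        return memoryStatus == MemoryStatus::Ok;
    }
};
```

Note that in soft_limit `computeCategory` still assigns new categories; only example recording stops.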
retest
tveasey
left a comment
This basically looks good. I have one question regarding a difference in the way you're testing whether allocations are allowed.
lib/api/CFieldDataCategorizer.cc
```cpp
      m_NumRecordsHandled{0}, m_OutputFieldCategory{m_Overrides[MLCATEGORY_NAME]},
      m_MaxMatchingLength{0}, m_JsonOutputWriter{jsonOutputWriter},
      m_CategorizationFieldName{config.categorizationFieldName()},
      m_CategorizationFilter{}, m_PersistenceManager{persistenceManager} {
```
nit: there doesn't seem to be any need to default initialise m_CategorizationFilter here. Also, how about initialising built-in types such as m_NumRecordsHandled in the class body?
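The suggestion above is the usual C++11 pattern: give built-in members in-class default initialisers so constructors don't have to repeat them, and let members with default constructors (like the filter) be default-constructed implicitly. A small sketch with hypothetical names (`CFieldDataCategorizerSketch` is not the real class):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <utility>

// Illustrative sketch of the reviewer's suggestion, not the real class.
class CFieldDataCategorizerSketch {
public:
    explicit CFieldDataCategorizerSketch(std::string fieldName)
        : m_CategorizationFieldName{std::move(fieldName)} {
        // m_NumRecordsHandled and m_MaxMatchingLength already have in-class
        // defaults; a filter member would be default-constructed without
        // being mentioned in the initialiser list at all.
    }

    std::uint64_t numRecordsHandled() const { return m_NumRecordsHandled; }

private:
    std::uint64_t m_NumRecordsHandled = 0; // in-class initialiser
    std::size_t m_MaxMatchingLength = 0;   // in-class initialiser
    std::string m_CategorizationFieldName;
};
```

This keeps every constructor consistent if more are added later, since the defaults live in one place.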
```cpp
//! A soft categorization failure means downstream components can continue,
//! by considering the input record to be in some "uncategorizable"
//! category.
static const int SOFT_CATEGORIZATION_FAILURE_ERROR;
```
```cpp
bool CDataCategorizer::addExample(int categoryId, const std::string& example) {
    // Don't add examples if we're memory-limited
    if (m_Limits.resourceMonitor().getMemoryStatus() != model_t::E_MemoryStatusOk) {
```
Is there any reason not to use this->areNewCategoriesAllowed() here? I couldn't think of one.
areNewCategoriesAllowed() returns false when the job is in hard_limit. But we stop adding examples as soon as the job goes to soft_limit. It's in the spec for memory limiting in #1130 and the PR description, but I will expand the comment in the code to say this.
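The distinction in the reply above can be made explicit as two predicates over the memory status. This is a hypothetical sketch (the enum and helper names are illustrative, not the ml-cpp API): category creation is only blocked at hard_limit, while example recording stops at soft_limit already.

```cpp
// Hypothetical stand-in for model_t::EMemoryStatus.
enum class MemoryStatus { Ok, SoftLimit, HardLimit };

// Only hard_limit blocks the creation of new categories.
inline bool areNewCategoriesAllowed(MemoryStatus status) {
    return status != MemoryStatus::HardLimit;
}

// Example recording stops as soon as the job leaves the Ok state,
// i.e. already at soft_limit.
inline bool areNewExamplesAllowed(MemoryStatus status) {
    return status == MemoryStatus::Ok;
}
```

At soft_limit the two predicates disagree, which is why `addExample` cannot simply reuse `areNewCategoriesAllowed()`.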
Anomaly detection jobs have a model_memory_limit setting. This is supposed to restrict the amount of memory the job can use; however, in the past the limit only applied to anomaly detection and not to categorization. This change applies memory limiting to categorization as follows:
- When a job is in hard_limit status, no new categories will be created. The input document that could not be categorized is discarded, as it cannot take part in anomaly detection without a category. The failed_category_count statistic is incremented each time this happens.
- When a job is in soft_limit status, we stop recording examples for the category.

Backport of elastic#1167