Increase max compilation rate for Filebeat modules tests #9777

ruflin merged 3 commits into elastic:master
Conversation
@jakelandis A few questions about the above:
The pipeline for the above can be found here: https://github.com/elastic/beats/blob/master/filebeat/module/logstash/slowlog/ingest/pipeline-plain.json It does not contain any scripts, but quite a few grok patterns.
@jakelandis An interesting addition here. The docs state the limit is 15/min, but the error above says 75/5min, which is similar but not the same ...
CI is failing with:

```
Too many dynamic script compilations within, max: [100/1m]
```

So it looks like even 100/1m is not enough 😬😬😬
I've set it to 1000/1m now, and at least locally it works. This is mainly to get the tests back to green. We should figure out the why afterwards.
Module tests started to fail with the following message:

```
Too many dynamic script compilations within, max: [75/5m]; please use indexed, or scripts with parameters instead; this limit can be changed by the [script.max_compilations_rate] setting
```

The reason for this is (I think) that we started to use scripts more heavily in our pipelines, so we hit this limit. For the tests we can easily increase the limit to solve this. In production this could still be an issue when all pipelines are loaded at the same time, and we need to find a solution there. In parallel we should also investigate whether there are some non-parameterised scripts which get recompiled every time.
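For reference, a sketch of how the limit can be raised for a test cluster. `script.max_compilations_rate` is a dynamic cluster setting in Elasticsearch, so it can be set either in `elasticsearch.yml` or at runtime via the cluster settings API (the `1000/1m` value below matches what this PR uses for the tests):

```yaml
# elasticsearch.yml for the test cluster (not a production recommendation):
# allow up to 1000 script compilations per minute instead of the default.
script.max_compilations_rate: 1000/1m
```

Or, equivalently, without restarting the node:

```
PUT _cluster/settings
{
  "transient": {
    "script.max_compilations_rate": "1000/1m"
  }
}
```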
kaiyan-sheng
left a comment
Thanks for updating postgresql-9.6-new-timestamp.log-expected.json!
Partial backport of #9777. This PR increases the dynamic script compilation limit for Filebeat system tests to `1000/1m`.
Module tests started to fail with the following message:
The reason for this is (I think) that we started to use scripts more heavily in our pipelines, so we hit this limit. For the tests we can easily increase the limit to solve this. In production this could still be an issue when all pipelines are loaded at the same time, and we need to find a solution there.
In parallel we should also investigate whether there are some non-parameterised scripts which get recompiled every time.
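To illustrate the recompilation concern: Elasticsearch caches compiled scripts keyed on the script source text, so an ingest pipeline that interpolates a value into the source produces a new compilation for every variant, while passing the value through `params` reuses one compiled script. A sketch of the preferred form (the field names `total` and `factor` are made up for the example):

```json
{
  "script": {
    "lang": "painless",
    "source": "ctx.total = ctx.total * params.factor",
    "params": {
      "factor": 1024
    }
  }
}
```

Any module pipeline that instead bakes a value like `1024` directly into `source`, and generates that source dynamically, would burn through the compilation budget on every change.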