There seems to be a memory leak in the `add_process_metadata` processor, reproducible with the reference configuration provided to run Auditbeat in Kubernetes.
This processor is used in this scenario to obtain the `container.id` from the `process.id`, so that `add_kubernetes_metadata` can enrich events. However, the issue is also reproduced when `add_kubernetes_metadata` is not used.
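For reference, the relevant processor chain looks roughly like this (a sketch based on the Kubernetes reference manifest; exact options in the real configuration may differ):

```yaml
processors:
  # Resolve process metadata (including container.id) from the pid
  # carried in the event's process.pid field.
  - add_process_metadata:
      match_pids: ['process.pid']
  # Use container.id to look up and attach Kubernetes metadata.
  - add_kubernetes_metadata: ~
```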
I tried to reproduce this in a simpler scenario, with Docker only, but the memory usage of this processor didn't seem to grow beyond ~13MB. In the linked discuss thread there seem to be problems even with 1GB memory limits. The difference could be in the maximum number of pids allowed (`sysctl kernel.pid_max`).
`add_process_metadata` keeps a process cache whose entries are never removed. The key is the pid, so the cache size is effectively bounded by the maximum number of pids on the machine; the problem is that `kernel.pid_max` can be quite large.
Some strategy should be applied to remove unneeded or expired entries from this cache.