It would be great if the Kubernetes Leader Election provider was exposed to integrations so there isn't a risk of collecting data twice for certain integrations. https://www.elastic.co/guide/en/fleet/master/kubernetes_leaderelection-provider.html
An example of where this could be applied is the AWS Billing Integration. If you run this integration on a policy applied to a Kubernetes Elastic Agent DaemonSet, the AWS billing API is polled by every Agent in that DaemonSet, which can quickly run up cloud costs, as discussed in #7350. The irony of collecting billing metrics generating bills is not lost.
Instead, it would be great if it were possible, on a per-integration basis, to instruct an integration to only run on the Elastic Agent that is currently the Leader.
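For standalone Elastic Agent, the leader election provider already supports gating an input with a condition, so the per-integration version of this could look something like the sketch below (the `aws/metrics` input shape and the `billing` metricset are illustrative assumptions, not a tested policy):

```yaml
inputs:
  - type: aws/metrics
    data_stream:
      namespace: default
    streams:
      - metricsets:
          - billing
        period: 12h
    # Only the Agent currently holding the leader lease runs this input,
    # so the billing API is polled once per cluster instead of once per node.
    condition: ${kubernetes_leaderelection.leader} == true
```

Exposing the same `condition` mechanism to Fleet-managed integrations would solve this at the collection level rather than after the fact.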
This also could be defined on the processor level if the Kubernetes Leader Election variables were exposed to the integration, demonstrated below.
- drop_event:
    when:
      equals:
        kubernetes_leaderelection.leader: false
This method, however, doesn't solve the problem of each Agent calling the API, as events are only dropped after collection, before they are sent to Elasticsearch.
I know we suggest running a second Agent as a Deployment with a dedicated policy, but that adds more overhead than needed, especially when there is precedent for this functionality within the Kubernetes integration.