In Beats we have a leader election feature for Kubernetes (#19731), which makes it possible to avoid deploying a singleton instance of Metricbeat via a k8s Deployment for cluster-wide metrics collection. The implementation is based on client-go's leader election package: https://pkg.go.dev/k8s.io/client-go/tools/leaderelection
It would be nice to support something similar in Elastic Agent too. What first comes to mind is a separate provider called `kubernetes_leaderelection`, since its implementation is currently unrelated to the `kubernetes` provider (no resource discovery is required).
An example configuration would look like this:
```yaml
providers:
  kubernetes_leaderelection:
    leader_lease: leader-election-elastic-agent
```
`leader_lease`: the name of the lock lease. One can monitor the status of the lease with `kubectl describe lease leader-election-elastic-agent`.
And then users can define inputs that would be enabled only by the leader Pod with:
```yaml
- name: kube-state-metrics
  type: kubernetes/metrics
  use_output: default
  meta:
    package:
      name: kubernetes
      version: 0.2.8
  data_stream:
    namespace: default
  streams:
    - data_stream:
        dataset: kubernetes.state_node
        type: metrics
      metricsets:
        - state_node
      add_metadata: true
      hosts:
        - 'kube-state-metrics:8080'
      period: 10s
      condition: ${kubernetes_leaderelection.leader} == 'true'
```
The input will be enabled on the Pod that acquires the lock, which will set `kubernetes_leaderelection.leader` to `true`.
(The condition could be set at the input level too, I guess, so as to be the same for all `state_*` data streams.)
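For illustration, gating an input on such a condition is essentially variable substitution followed by a comparison. A minimal sketch of the idea, assuming a hypothetical simplified evaluator (`evalCondition` is not the Agent's actual condition engine, whose syntax is richer):

```go
package main

import (
	"fmt"
	"strings"
)

// vars holds the values a provider contributes, e.g. the
// kubernetes_leaderelection provider setting "leader" to "true".
type vars map[string]string

// evalCondition is a hypothetical, simplified evaluator for conditions of
// the form "${provider.key} == 'literal'". It only illustrates how a
// leader-gated input would be switched on or off per Pod.
func evalCondition(cond string, v vars) bool {
	parts := strings.SplitN(cond, "==", 2)
	if len(parts) != 2 {
		return false
	}
	ref := strings.TrimSpace(parts[0])
	ref = strings.TrimPrefix(ref, "${")
	ref = strings.TrimSuffix(ref, "}")
	want := strings.Trim(strings.TrimSpace(parts[1]), "'")
	return v[ref] == want
}

func main() {
	cond := "${kubernetes_leaderelection.leader} == 'true'"

	// On the Pod holding the lease the provider reports leader == "true",
	// so the kube-state-metrics input runs there and nowhere else.
	fmt.Println(evalCondition(cond, vars{"kubernetes_leaderelection.leader": "true"}))  // true
	fmt.Println(evalCondition(cond, vars{"kubernetes_leaderelection.leader": "false"})) // false
}
```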
With this setup the Managed version could be simplified, since we would only have to deal with a DaemonSet running the same configuration across all Pods.
@blakerouse @ph @ruflin @masci @exekias @mukeshelastic let me know what you think.