Right now, deploying Metricbeat on Kubernetes involves two different resources:
- A DaemonSet, used to deploy an agent per node in the cluster; each agent monitors the host and all its running workloads.
- A Deployment, used to deploy an additional agent that monitors at the cluster scope, currently kube-state-metrics and Kubernetes events.
We could simplify this by removing the need for a Deployment in favour of some form of leader election within the DaemonSet. In a nutshell, we need a way to flag one of the agents in the DaemonSet as the leader, making it responsible for both the host and cluster scopes.
While leader election is challenging in general, the Kubernetes API server provides a locking mechanism we can leverage in many ways, so we don't need agents talking to each other directly.
We should discuss our options here; one possibility is doing something like this:
- Use a Secret/ConfigMap to claim leadership: agents try to create it, and if it already exists and hasn't expired (based on some defined TTL), there is an active leader.
- Make the Kubernetes autodiscover provider handle this, and pass a `leader` field as part of autodiscover events. This way cluster-scope metricsets can be configured based on it.
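To illustrate how the second point might surface in configuration (the `kubernetes.leader` field and the template shape are hypothetical, just to sketch the idea): the provider would flag events emitted by the elected agent, and a template condition would gate the cluster-scope metricsets on that flag.

```yaml
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # Hypothetical: the provider sets `leader: true` on events from
        # the agent currently holding the lock.
        - condition:
            equals:
              kubernetes.leader: true
          config:
            - module: kubernetes
              metricsets: ["state_pod", "state_deployment", "event"]
              hosts: ["kube-state-metrics:8080"]
```

With something like this, every Pod in the DaemonSet ships the same configuration, and only the leader ends up running the cluster-scope metricsets.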