Add kubernetes_leaderelection provider #24913
Conversation
Signed-off-by: chrismark <chrismarkou92@gmail.com>
Pinging @elastic/agent (Team:Agent)

Pinging @elastic/integrations (Team:Integrations)
💚 Build Succeeded
@blakerouse could you have a look into this one please when you have the time? I mostly want to get feedback from the Agent team's point of view.
deploy/kubernetes/elastic-agent-standalone/elastic-agent-standalone-role.yaml
...elastic-agent/pkg/composable/providers/kubernetesleaderelection/kubernetes_leaderelection.go
```go
type Config struct {
	KubeConfig string `config:"kube_config"`
	// Name of the leaderelection lease
	LeaderLease string `config:"leader_lease"`
}
```
So by default, running Elastic Agent leaves this off. Is that the behavior we want? As it stands it requires a specific setting under the top-level providers key, and is not something that could be used by Fleet at the moment.
I think we should have this enabled by default, by giving the lease name a default value.
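For illustration, explicitly enabling the provider in a standalone elastic-agent.yml might look like the following (the lease name shown is an example, not a shipped default):

```yaml
providers:
  kubernetes_leaderelection:
    # Optional: path to a kubeconfig file when running outside the cluster.
    #kube_config: /path/to/kubeconfig
    # Name of the lease object used for leader election (illustrative value).
    leader_lease: elastic-agent-cluster-leader
```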
```go
	id = "elastic-agent-leader-" + agentInfo.AgentID()
}

ns, err := kubernetes.InClusterNamespace()
```
Should there also be a configuration value for this? Might want to run it on a different namespace?
The convention here is that we create the lease object in the same namespace as the one Agent is running in. We also add a Role so that Agent only has access to leases in that namespace, see #24913 (comment).
So I think we should keep it as is and not expose this as an option to the user.
...elastic-agent/pkg/composable/providers/kubernetesleaderelection/kubernetes_leaderelection.go
```go
)

func init() {
	composable.Providers.AddDynamicProvider("kubernetes_leaderelection", DynamicProviderBuilder)
}
```
I question whether this should be a dynamic provider. I think it should be a context provider. That way leader election can affect all the other vars, even dynamic ones discovered by the main kubernetes provider.
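A rough sketch of what the context-provider variant might look like (assuming the composable package exposes `AddContextProvider` and a `ContextProviderComm` with a `Set` method, as other context providers do; names are illustrative, not the PR's code):

```go
func init() {
	composable.Providers.AddContextProvider("kubernetes_leaderelection", ContextProviderBuilder)
}

type contextProvider struct {
	config *Config
}

// Run publishes the leadership state as a single mapping; calling Set again
// when leadership changes re-renders every variable that references
// ${kubernetes_leaderelection.leader}, including variables produced by the
// dynamic kubernetes provider.
func (p *contextProvider) Run(comm composable.ContextProviderComm) error {
	return comm.Set(map[string]interface{}{"leader": false})
}
```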
@jsoriano @blakerouse thanks for your review/comments. I think I have covered your comments, so you can consider it ready for another review round.
(cherry picked from commit 2494ae5)
What does this PR do?
Adds support for a leader election provider in Agent, based on #24267.
After this one is merged I plan to tune the standalone manifest in a follow-up PR so as to remove the Deployment section and deploy only as a DaemonSet with leader election on.
Why is it important?
To support cluster-wide metrics collection by only one Agent at a time among a group of multiple Agents (i.e. a DaemonSet).
As a second step we can change the standalone manifest, completely remove the Deployment of Agent, and go on with only the DaemonSet, leveraging the leader election feature.
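For readers unfamiliar with the mechanism: leader election over a Kubernetes Lease can be sketched with client-go's leaderelection package. This is an illustration under assumed names (lease name, namespace, identity), not the PR's code; in the PR the callbacks would flip the provider's `leader` variable instead of printing:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One lease shared by all agents in the DaemonSet; the holder identity
	// must be unique per pod (e.g. "elastic-agent-leader-" + agent ID).
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "elastic-agent-cluster-leader", // illustrative lease name
			Namespace: "kube-system",                  // namespace Agent runs in
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "elastic-agent-leader-example"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// This pod is now the leader: enable cluster-wide collection.
				fmt.Println("started leading")
			},
			OnStoppedLeading: func() {
				// Lost leadership: disable cluster-wide collection.
				fmt.Println("stopped leading")
			},
		},
	})
}
```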
How to test this PR locally
```shell
./elastic-agent -e -d "*" -c /elastic-agent.yml inspect output -o default
```
Related issues