Add deployment name in pod's meta #23610
Conversation
Signed-off-by: chrismark <chrismarkou92@gmail.com>
Pinging @elastic/integrations (Team:Integrations)
💔 Build Failed
Test stats 🧪
| Test | Results |
|---|---|
| Failed | 0 |
| Passed | 44470 |
| Skipped | 4612 |
| Total | 49082 |
jsoriano left a comment
I am ok with following this approach, but we have to take into account that `add_kubernetes_metadata` won't add the deployment name. We have to think about whether this is something we need to support, but we can do it as a separate issue.
```go
if rsName != nil {
	rsName := rsName.(string)
```

Nit. Make it slightly safer in case `rsName` is not a string.

```suggestion
if rsName, ok := rsName.(string); ok {
```
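The comma-ok form suggested above avoids a panic when the underlying value turns out not to be a string. A minimal self-contained illustration (the `rsName` value here is made up for the example):

```go
package main

import "fmt"

func main() {
	// In the real code rsName is read out of generic metadata, so its
	// dynamic type is not guaranteed. This placeholder value is made up.
	var rsName interface{} = "nginx-deployment-6b474476c4"

	// A bare rsName.(string) would panic if the value were not a string;
	// the comma-ok form degrades gracefully instead.
	if name, ok := rsName.(string); ok {
		fmt.Println(name) // prints "nginx-deployment-6b474476c4"
	}
}
```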
```go
_, err := client.AppsV1().ReplicaSets(namespace).Create(context.TODO(), rs, metav1.CreateOptions{})
```

Nit. In tests it is ok to use `context.Background()` 🙂

```suggestion
_, err := client.AppsV1().ReplicaSets(namespace).Create(context.Background(), rs, metav1.CreateOptions{})
```
The CI failure is unrelated to the PR, we're taking a look.
Thanks for looking into this @kuisathaverat, I'm gonna merge this one.
(cherry picked from commit 2d7e7b4)
What does this PR do?
This PR adds `kubernetes.deployment.name` in Pods' metadata. Right now we only add `kubernetes.replicaset.name` when a Pod is handled by a Deployment, since a Deployment controls the Pod through a ReplicaSet and the ReplicaSet is considered the direct owner of the Pod in the k8s API. In this case we need an extra check to retrieve the owner of the ReplicaSet, if possible.

NOTE: Since with this addition Beats will try to get ReplicaSet objects from the k8s API, the following addition to the ClusterRole is required:
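The ClusterRole snippet itself did not survive extraction; a sketch of the rule Beats would need, assuming the standard Kubernetes RBAC layout (verify the exact rule in #23716):

```yaml
# Hypothetical sketch: grant read access to ReplicaSets so Beats can
# resolve the Deployment that owns a Pod's ReplicaSet.
- apiGroups: ["apps"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
```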
See: #23716
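The extra lookup step described above can be sketched as follows. This is a simplified, self-contained illustration: `ownerReference` is a made-up stand-in for `metav1.OwnerReference`, and the real code first fetches the ReplicaSet from the k8s API via client-go before inspecting its owner references.

```go
package main

import "fmt"

// ownerReference is a simplified stand-in for apimachinery's
// metav1.OwnerReference, holding just the fields we inspect here.
type ownerReference struct {
	Kind string
	Name string
}

// deploymentName walks a ReplicaSet's owner references and returns the
// name of the owning Deployment, if there is one.
func deploymentName(owners []ownerReference) (string, bool) {
	for _, o := range owners {
		if o.Kind == "Deployment" {
			return o.Name, true
		}
	}
	return "", false
}

func main() {
	// A ReplicaSet created by a Deployment carries an owner reference to it.
	owners := []ownerReference{{Kind: "Deployment", Name: "nginx-deployment"}}
	if name, ok := deploymentName(owners); ok {
		fmt.Println(name) // prints "nginx-deployment"
	}
}
```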
Why is it important?
To correlate Pods with the Deployments controlling them.
How to test this PR locally

1. `kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml`
2. Verify that the `pod` and `state_pod` metricsets include `kubernetes.deployment.name` when `kubernetes.replicaset.name` exists. As an example see the screenshots below.

Extra: Verify that `add_kubernetes_metadata` adds `kubernetes.deployment.name` when `kubernetes.replicaset.name` exists, using the following configuration with Filebeat on k8s:

Related issues
Screenshots