[Kubernetes Module] Add missing metrics for replicaset and pod owners#36746
Conversation
This pull request does not have a backport label.
To fix this pull request, you need to add the backport labels for the needed branches.
Since there was no change in the asciidoc file, I think you forgot to run
Yes, sorry for that Constanca. I opened the PR as a draft in order to showcase the changes needed. I will push the latest changes and also add the tests today.
This pull request is now in conflict. Could you fix it? 🙏
ChrsMark left a comment
@gizas would that PR add 2nd layer controller/owner's information?
The 1st owner's information is provided already through the Metadata's Enricher at https://github.com/elastic/elastic-agent-autodiscover/blob/2cc3dcb075fe437467387d36096f554c13c6d144/kubernetes/metadata/resource.go#L138-L152.
So if this PR is only to cover the 1st layer owner's metadata then it seems it's already supported and the PR is redundant?
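For illustration, the 1st-level owner metadata that the Enricher already attaches to a Pod's metrics documents would look roughly like this; the exact pod name and document shape here are assumptions, only the `kubernetes.replicaset.name`-style owner fields are taken from the discussion:

```json
{
  "kubernetes": {
    "pod": {
      "name": "coredns-787d4945fb-abcde"
    },
    "replicaset": {
      "name": "coredns-787d4945fb"
    }
  }
}
```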
```json
"owner": {
  "is_controller": "true",
  "kind": "ReplicaSet",
  "name": "coredns-787d4945fb"
```
How would this replace the disabled addition of the kubernetes.deployment field? This only adds the first-level owner, so in the case of Deployment->ReplicaSet->Pod it won't add the Deployment information, right?
Also, this is already supported through the Metadata Enricher -> https://github.com/elastic/elastic-agent-autodiscover/blob/2cc3dcb075fe437467387d36096f554c13c6d144/kubernetes/metadata/resource.go#L138-L152
Correct, it does not add the deployment name, only the 1st-level owner; you are right.
I was thinking that we could justify that this is enough for users to figure out the reference. Of course we need to document this. Otherwise, even if we try in code or with ingest pipelines to trim the strings and produce the ancestors, we might make mistakes because we are not sure of their existence.
Also, if we integrate elastic/elastic-agent-autodiscover#62, does that mean the enrichers will also lose that metadata?
Through the Enricher, every metrics document for Pods, like pod.cpu.usage, will be enriched with the Pod's metadata. In this metadata the Enricher adds the owner name, like kubernetes.cronjob.name, kubernetes.replicaset.name etc. So this is not related to the settings you change in elastic/elastic-agent-autodiscover#62. That's a different feature, which takes place at https://github.com/elastic/elastic-agent-autodiscover/blob/2cc3dcb075fe437467387d36096f554c13c6d144/kubernetes/metadata/pod.go#L96-L122.
So this patch still seems to be redundant.
Could you try to run Metricbeat with the k8s module enabled and verify that Pod metrics are enriched with the owner name (disable the deployment and cronjob metadata as usual to isolate the functionality)? This will help you understand how the feature works and that this patch overlaps already existing information.
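To isolate the functionality as suggested, a minimal sketch of a Metricbeat kubernetes module configuration might look like the following; the exact metricsets, period, and hosts values are assumptions for illustration, while the add_resource_metadata toggles follow the deployment/cronjob settings discussed above:

```yaml
# Hypothetical metricbeat.yml fragment: enable the kubernetes module and
# disable the higher-level deployment/cronjob metadata, so only the
# 1st-level owner enrichment (e.g. kubernetes.replicaset.name) remains.
metricbeat.modules:
  - module: kubernetes
    metricsets: ["pod"]
    period: 10s
    hosts: ["https://${NODE_NAME}:10250"]
    add_resource_metadata:
      deployment: false
      cronjob: false
```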
Co-authored-by: Chris Mark <chrismarkou92@gmail.com>
@ChrsMark you are correct. I have repeated the tests and the 1st-level citizens are there, so there is no need for this. I am going to close it as a duplicate. We will try to figure out how to add the 2nd-level owner in a separate story.
Proposed commit message
Checklist
CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.
How to test this PR locally
This will copy your new metricbeat executable into the kubernetes cluster you want to observe
kubernetes.replicaset.replicas.owner.* and kubernetes.pod.owner.* fields
Related issues
add_resource_metadata.cronjob overloads the memory usage elastic-agent-autodiscover#31
Screenshots
For a Pod whose owner is a node:

For a Pod whose owner is a ReplicaSet:
For a ReplicaSet whose owner is a Deployment:
