Area(s)
area:k8s
Is your change request related to a problem? Please describe.
At the moment, there are no Semantic Conventions defined for k8s metrics.
Describe the solution you'd like
Even if we cannot yet consider the k8s metrics stable, we can start by adding the metrics that are not controversial in order to make some progress here. This issue aims to collect the k8s metrics that already exist in the Collector and to keep track of any related work.
Below is an initial list of metrics coming from the kubeletstats and k8scluster receivers. Note that these are subject to change over time, so we should check back with the Collector to verify their current state.
cc: @open-telemetry/semconv-k8s-approvers
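For context, the metrics below are emitted by two Collector receivers. A minimal configuration enabling both is sketched here; the endpoint, auth settings, and pipeline layout are illustrative assumptions, not part of this issue — consult the receiver READMEs in opentelemetry-collector-contrib for the current options.

```yaml
# Hypothetical minimal Collector configuration for the two receivers
# discussed in this issue. Endpoint and auth values are assumptions.
receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: serviceAccount            # assumes in-cluster service account auth
    endpoint: ${env:K8S_NODE_NAME}:10250 # assumes the node name is exposed via env var
  k8s_cluster:
    collection_interval: 10s

exporters:
  debug:

service:
  pipelines:
    metrics:
      receivers: [kubeletstats, k8s_cluster]
      exporters: [debug]
```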
Describe alternatives you've considered
No response
Additional context
The list below also includes some metrics from namespaces other than k8s.*. I have left them in intentionally so that they can be taken into account as well.
kubeletstats metrics
cpu metrics: #1489 ✅
memory metrics: #1490 ✅
filesystem metrics: #1488 ✅
network metrics: #1487 ✅
uptime metrics: #1486 ✅
volume metrics: #1485 ✅
k8scluster metrics
deployment metrics: #1636 ✅
cronjob metrics: #1660 ✅
k8s.cronjob.active_jobs
daemonset metrics: #1649 ✅
k8s.daemonset.current_scheduled_nodes
k8s.daemonset.desired_scheduled_nodes
k8s.daemonset.misscheduled_nodes
k8s.daemonset.ready_nodes
hpa metrics: #1644 ✅
k8s.hpa.max_replicas
k8s.hpa.min_replicas
k8s.hpa.current_replicas
k8s.hpa.desired_replicas
job metrics: #1660 ✅
k8s.job.active_pods
k8s.job.desired_successful_pods
k8s.job.failed_pods
k8s.job.max_parallel_pods
k8s.job.successful_pods
namespace metrics: #1668 ✅
k8s.namespace.phase
replicaset metrics: #1636 ✅
k8s.replicaset.desired
k8s.replicaset.available
replication_controller metrics #1636 ✅
k8s.replication_controller.desired
k8s.replication_controller.available
statefulset metrics: #1637 ✅
k8s.statefulset.desired_pods
k8s.statefulset.ready_pods
k8s.statefulset.current_pods
k8s.statefulset.updated_pods
container metrics #2074 ✅
k8s.container.cpu_request
k8s.container.cpu_limit
k8s.container.memory_request
k8s.container.memory_limit
k8s.container.storage_request
k8s.container.storage_limit
k8s.container.ephemeralstorage_request
k8s.container.ephemeralstorage_limit
k8s.container.restarts
k8s.container.ready
pod metrics #2075 ✅
k8s.pod.phase
k8s.pod.status_reason
resource_quota metrics #2076 ✅
k8s.resource_quota.hard_limit
k8s.resource_quota.used
node metrics #2077 ✅
k8s.node.condition
related issue: open-telemetry/opentelemetry-collector-contrib#33760
Openshift metrics #2078 ✅
openshift.clusterquota.limit
openshift.clusterquota.used
openshift.appliedclusterquota.limit
openshift.appliedclusterquota.used
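As each group of metrics above is accepted, it gets a definition in the semantic-conventions YAML model. A sketch of what such a definition could look like for one of the HPA metrics listed above; the id, brief, instrument, unit, and stability values are assumptions for illustration, and the accepted convention may use different names:

```yaml
groups:
  - id: metric.k8s.hpa.desired_replicas   # hypothetical id, mirroring the list above
    type: metric
    metric_name: k8s.hpa.desired_replicas
    stability: development                 # assumed; k8s metrics are not yet stable
    brief: "Desired number of replica pods managed by this autoscaler."
    instrument: updowncounter
    unit: "{pod}"
```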
Related issues
TBA