Downward API shows incorrect values for cpu.request/cpu.limit when exposed through volumeMount #64579

@chrissnell

Description

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
When exposing the Downward API through a volume mount, I get incorrect values for cpu.request and cpu.limit.

What you expected to happen:
I expect to see the same values for cpu.limit and cpu.request that I see when I run kubectl describe pod <...>.

How to reproduce it (as minimally and precisely as possible):

Create a pod with the Downward API exposed through a volumeMount:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "100m"
            memory: "15M"
          limits:
            cpu: "500m"
            memory: "100M"
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
      volumes:
        - name: podinfo
          downwardAPI:
            items:
              - path: "cpu_limit"
                resourceFieldRef:
                  containerName: nginx
                  resource: limits.cpu
              - path: "cpu_request"
                resourceFieldRef:
                  containerName: nginx
                  resource: requests.cpu
              - path: "mem_limit"
                resourceFieldRef:
                  containerName: nginx
                  resource: limits.memory
              - path: "mem_request"
                resourceFieldRef:
                  containerName: nginx
                  resource: requests.memory

Run kubectl exec -ti <pod> -- bash to get a shell, then:

# cd /etc/podinfo
# for i in *; do echo "$i -> $(cat $i)"; done
cpu_limit -> 1
cpu_request -> 1
mem_limit -> 500000000
mem_request -> 100000000

Observe that cpu_limit and cpu_request are both 1.
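As an aside, resourceFieldRef also accepts a divisor field, which defaults to "1" (i.e. whole cores for CPU). A possible workaround is to request the value pre-divided into millicores; a sketch of one volume item, assuming the container is named nginx as in the spec above:

```yaml
# Workaround sketch: ask for the CPU quantity in millicores.
# divisor defaults to "1" (whole cores); "1m" yields millicores.
- path: "cpu_limit"
  resourceFieldRef:
    containerName: nginx   # must match a container in the pod spec
    resource: limits.cpu
    divisor: 1m
```

With divisor: 1m, a 500m limit should appear in the file as 500 rather than the rounded-up 1.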

Now, adjust the limit and request:

kubectl set resources replicationcontroller nginx --requests=cpu=900m --limits=cpu=1400m

Wait for the pods to be recreated and exec into one of them again:

# cd /etc/podinfo
# for i in *; do echo "$i -> $(cat $i)"; done
cpu_limit -> 2
cpu_request -> 1
mem_limit -> 500000000
mem_request -> 100000000

Notice how the CPU values appear to be rounded up to whole cores (1400m -> 2, 900m -> 1). They should probably be exposed as millicores (an integer) instead.
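The observed numbers are consistent with ceiling division by the default divisor of one core. A minimal sketch of that arithmetic (an assumption about the mechanism from the outputs above, not the actual Kubernetes code):

```python
import math

def downward_cpu_value(millicores: int, divisor_millicores: int = 1000) -> int:
    """Round a CPU quantity up to a whole multiple of the divisor,
    mirroring the rounding seen in the Downward API files.
    The default divisor is "1" core, i.e. 1000 millicores."""
    return math.ceil(millicores / divisor_millicores)

# 500m limit and 100m request both round up to 1 whole core:
print(downward_cpu_value(500))   # -> 1
print(downward_cpu_value(100))   # -> 1
# After `kubectl set resources`, 1400m rounds up to 2 and 900m to 1:
print(downward_cpu_value(1400))  # -> 2
print(downward_cpu_value(900))   # -> 1
```

This matches every CPU value shown in both shell sessions above.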

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    AWS / kops / t2.xlarge nodes

  • OS (e.g. from /etc/os-release): Container Linux by CoreOS stable (1745.4.0)

  • Kernel (e.g. uname -a): Linux ip-10-25-50-145.ec2.internal 4.14.42-coreos #1 SMP Thu May 24 21:10:56 UTC 2018 x86_64 Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz GenuineIntel GNU/Linux

  • Install tools:

  • Others:

Labels: kind/bug, lifecycle/rotten, sig/scheduling
