
Storage utilisation doesn't seem to add up... #13516

@srcshelton

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Could anyone explain this output?

# podman system prune -f
Deleted Images
Total reclaimed space: 0B

# podman system df
TYPE           TOTAL       ACTIVE      SIZE        RECLAIMABLE
Images         111         19          45.22GB     38.73GB (0%)
Containers     20          0           3.806GB     3.806GB (100%)
Local Volumes  38          0           569.8MB     1.585GB (200%)

# podman volume prune -f
3081eba3e280889614b140a06368543a932110679a1e376c9ed3698ee4a51e3b
c0bf3bfa98b8b62559615fd6f8845d01a384c4ed89d71905223c585f7b09180f

# podman system df
TYPE           TOTAL       ACTIVE      SIZE        RECLAIMABLE
Images         111         19          45.22GB     38.73GB (0%)
Containers     20          0           3.806GB     3.806GB (100%)
Local Volumes  36          0           371.9MB     0B (0%)

… as a side note, I've often seen podman system prune return what appear to be infeasibly high figures (… larger than the containing filesystem, in some cases) when reporting on storage savings - I always assumed this was double-counting shared layers, or similar...

The output above seems to indicate that system prune did not actually "Remove all unused pod, container, image and volume data", but only removed images and stopped containers. We were then somehow left in a state where podman considered there to be three times as much reclaimable volume space as the amount of storage that existing volumes were actually consuming, and reported this as 200%. That figure doesn't appear to mean "200% more on top", since the container reclaimable space of 3.806GB out of 3.806GB is listed as 100%. You'd assume that for volumes with 569.8MB total and 569.8MB reclaimable the percentage should likewise be 100%, so 1.585GB reclaimable should be about 278%… although, judging from the final output, the true figure looks as if it should have been 34.7% (569.8MB - 371.9MB = 197.9MB actually reclaimed)?
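The inconsistency can be sanity-checked with a little arithmetic, using only the figures from the `podman system df` output above:

```python
# Sanity-check the RECLAIMABLE percentages reported by `podman system df`
# above, assuming the percentage is simply RECLAIMABLE / SIZE * 100.

def reclaimable_pct(reclaimable_bytes: float, size_bytes: float) -> float:
    """Percentage of SIZE reported as RECLAIMABLE."""
    return 100.0 * reclaimable_bytes / size_bytes

GB = 1e9
MB = 1e6

# Containers: 3.806GB reclaimable out of 3.806GB -> reported as 100%. Consistent.
print(round(reclaimable_pct(3.806 * GB, 3.806 * GB)))               # 100

# Local Volumes before pruning: 1.585GB reclaimable out of 569.8MB.
# By the same convention this should be ~278%, not the reported 200%.
print(round(reclaimable_pct(1.585 * GB, 569.8 * MB)))               # 278

# What the prune actually reclaimed: 569.8MB - 371.9MB = 197.9MB, ~34.7%.
print(round(reclaimable_pct(569.8 * MB - 371.9 * MB, 569.8 * MB), 1))  # 34.7
```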

Could having two containers which both inherit the same volume from a (terminated) progenitor container cause some form of double-counting error?
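Purely as a hypothetical illustration (this is an assumption about the accounting, not Podman's actual implementation), summing per-container totals when two containers inherit the same volume would inflate the figure like so:

```python
# Hypothetical illustration: two containers inherit the same volume from a
# common (terminated) progenitor.  Summing per-container totals counts the
# shared data once per consumer, inflating the reported figure; the real
# on-disk usage counts it only once.

shared_volume = 500   # MB, stored on disk exactly once
private_a = 40        # MB unique to container A
private_b = 25        # MB unique to container B

naive_total = (shared_volume + private_a) + (shared_volume + private_b)
actual_total = shared_volume + private_a + private_b

print(naive_total)    # 1065 -> double-counts the shared 500MB
print(actual_total)   # 565  -> real on-disk usage
```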

Also, in this case all running containers are paused - is that perhaps causing the reclaimable Container storage to be misrepresented as 100%?

Steps to reproduce the issue:

  1. podman system prune -f;

  2. Observe amount of space reported to have been cleared;

  3. Observe podman system df output.

Describe the results you received:

  • The prune operation did not reclaim all reclaimable space;

  • The sizes and percentages don't appear to be internally consistent, or to map to real-world disk utilisation.

Describe the results you expected:

  • A system prune operation should surely reclaim the maximum amount of storage (handling image/container/volume dependencies as necessary)?

  • Percentages and utilisation figures should match up, and reports of storage space consumed and freed should match filesystem usage data.

Output of podman version:

Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.17.7
Git Commit:   342c8259381b63296e96ad29519bd4b9c7afbf97
Built:        Mon Mar  7 01:21:28 2022
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: app-containers/conmon-2.1.0
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: bdb4f6e56cd193d40b75ffc9725d4b74a18cb33c'
  cpus: 8
  distribution:
    distribution: gentoo
    version: unknown
  eventLogger: file
  hostname: dellr330
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.16.12-gentoo
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 943378432
  memTotal: 68205772800
  networkBackend: cni
  ociRuntime:
    name: crun
    package: app-containers/crun-1.4.3
    path: /usr/bin/crun
    version: |-
      crun version 1.4.3
      commit: 61c9600d1335127eba65632731e2d72bc3f0b9e8
      spec: 1.0.0
      +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: app-containers/slirp4netns-1.1.12
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 21388185600
  swapTotal: 25769787392
  uptime: 50h 23m 28.56s (Approximately 2.08 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: localhost:5000
  search:
  - docker.io
  - docker.pkg.github.com
  - quay.io
  - public.ecr.aws
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 20
    paused: 20
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /space/podman/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp/.private/root
  imageStore:
    number: 109
  runRoot: /var/run/podman
  volumePath: /space/podman/volumes
version:
  APIVersion: 4.0.2
  Built: 1646616088
  BuiltTime: Mon Mar  7 01:21:28 2022
  GitCommit: 342c8259381b63296e96ad29519bd4b9c7afbf97
  GoVersion: go1.17.7
  OsArch: linux/amd64
  Version: 4.0.2

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes
