
2026.2 worker memory usage doubled vs 2025.12 (~500Mi → ~1020Mi) #20537

@AKhozya

Description


After upgrading from 2025.12.4 to 2026.2.0, the ak worker process memory usage approximately doubled.

Environment

  • Previous version: 2025.12.4
  • New version: 2026.2.0
  • Deployment: Kubernetes (K3s), 2 worker replicas
  • Database: PostgreSQL 18.2 via PgBouncer (transaction pooling)
  • Redis: 8.6.0

Observed Memory Usage (Prometheus container_memory_working_set_bytes)

| Version | Pod | Stable Memory |
| --- | --- | --- |
| 2025.12.4 | worker-9pxtk | ~466Mi |
| 2025.12.4 | worker-hsc9g | ~511Mi |
| 2026.2.0 | worker-99nck | ~1033Mi |
| 2026.2.0 | worker-pf2fq | ~1023Mi |
| 2026.2.0 | worker-fgjlf | ~1014Mi |
| 2026.2.0 | worker-xvg9d | ~1014Mi |

The server (ak server) pods did not show a similar increase — they stayed in the 500-800Mi range on both versions.
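For reference, a PromQL query along these lines reproduces the per-pod numbers above (the `pod` label regex and `[1h]` smoothing window are assumptions; adjust them to your scrape labels):

```promql
# Stable working-set memory per worker pod, in Mi, smoothed over 1h
avg_over_time(
  container_memory_working_set_bytes{pod=~".*worker.*", container!=""}[1h]
) / 1024 / 1024
```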

Impact

We had to increase the worker container memory limit from 1200Mi to 1500Mi. The old 1200Mi limit triggered ContainerMemoryNearLimit alerts at 85% (1023Mi/1200Mi).
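For context, the change amounted to a one-line bump in the worker container spec (illustrative fragment, assuming a standard Deployment manifest):

```yaml
# Worker container resources after the bump
resources:
  limits:
    memory: 1500Mi   # previously 1200Mi; 2026.2 workers plateau around ~1020Mi
```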

Notes

  • The jump is too large (~500Mi) to be explained by the Python 3.14 runtime alone
  • Possible contributors: new Django apps loaded (WS-Federation, Fleet connector, SSF, lifecycle management, endpoint devices), new chunked_queryset usage, django_postgres_cache backend
  • This may not be a bug per se, but the increase was surprising, and monitoring caught memory consumption near the limit
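On the chunked-queryset note above: chunked iteration normally *lowers* peak memory rather than raising steady-state usage, which is why it seems an unlikely culprit on its own. A generic Python sketch (not authentik code; `rows`, `load_all`, and `load_chunked` are hypothetical names) illustrates the trade-off with `tracemalloc`:

```python
import tracemalloc

def rows(n):
    # Simulate a result set of n "rows"
    for i in range(n):
        yield {"id": i, "payload": "x" * 100}

def load_all(n):
    # Materialize everything at once: peak memory grows with n
    data = list(rows(n))
    return len(data)

def load_chunked(n, chunk_size=1000):
    # Process fixed-size chunks: peak memory bounded by chunk_size
    count, chunk = 0, []
    for row in rows(n):
        chunk.append(row)
        if len(chunk) >= chunk_size:
            count += len(chunk)
            chunk.clear()
    count += len(chunk)
    return count

def peak_kib(fn, *args):
    # Measure peak allocated memory (KiB) while running fn
    tracemalloc.start()
    fn(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak // 1024

print(peak_kib(load_all, 100_000))
print(peak_kib(load_chunked, 100_000))
```

Both functions see the same rows, but the chunked variant's peak stays roughly constant as `n` grows, so a fixed per-worker baseline increase points more toward the newly loaded Django apps or the cache backend.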

Metadata

Labels: bug (Something isn't working), bug/confirmed (Confirmed bugs)
Status: Done
Assignees: none