Description
After upgrading from 2025.12.4 to 2026.2.0, the ak worker process memory usage approximately doubled.
Environment
- Previous version: 2025.12.4
- New version: 2026.2.0
- Deployment: Kubernetes (K3s), 2 worker replicas
- Database: PostgreSQL 18.2 via PgBouncer (transaction pooling)
- Redis: 8.6.0
Observed Memory Usage (Prometheus `container_memory_working_set_bytes`)
| Version | Pod | Stable Memory |
|---|---|---|
| 2025.12.4 | worker-9pxtk | ~466Mi |
| 2025.12.4 | worker-hsc9g | ~511Mi |
| 2026.2.0 | worker-99nck | ~1033Mi |
| 2026.2.0 | worker-pf2fq | ~1023Mi |
| 2026.2.0 | worker-fgjlf | ~1014Mi |
| 2026.2.0 | worker-xvg9d | ~1014Mi |
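For reproducibility, the per-pod figures above came from a query along these lines (the pod-name pattern and `container` label are assumptions about our deployment, not something authentik ships):

```promql
# Stable per-pod working-set memory for the ak worker pods over the last hour
max_over_time(
  container_memory_working_set_bytes{pod=~"authentik-worker-.*", container="worker"}[1h]
)
```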
The server (ak server) pods did not show a similar increase — they stayed in the 500-800Mi range on both versions.
Impact
We had to increase the worker container memory limit from 1200Mi to 1500Mi. Under the old 1200Mi limit, the new stable usage (~1023Mi) sat at ~85% of the limit and triggered our ContainerMemoryNearLimit alerts.
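For reference, the change we applied to the worker container spec was roughly the following (request value is illustrative of our setup, not a recommendation):

```yaml
# Worker container resources after the bump
resources:
  requests:
    memory: 1024Mi
  limits:
    memory: 1500Mi   # was 1200Mi; ~1023Mi stable usage left no headroom
```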
Notes
- The jump (~500Mi per pod) seems too large to be explained by the Python 3.14 runtime alone
- Possible contributors: new Django apps loaded (WS-Federation, Fleet connector, SSF, lifecycle management, endpoint devices), new `chunked_queryset` usage, the `django_postgres_cache` backend
- This may not be a bug per se, but the increase was surprising, and monitoring caught memory consumption near the limit
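To narrow down which of the suspected imports/apps account for the growth, a rough `tracemalloc` comparison around individual imports can help. This is a generic sketch, not authentik tooling; the stdlib module in the example is a placeholder for the suspected Django app modules:

```python
import importlib
import tracemalloc


def import_cost(module_name: str) -> int:
    """Return the net bytes allocated (as seen by tracemalloc) by importing module_name."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    importlib.import_module(module_name)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before


# Example with a stdlib module; swap in the suspected app modules.
# Note: only the first import of a module is meaningful, later calls hit the cache.
print(f"decimal import cost: {import_cost('decimal')} bytes")
```

This only measures Python-level allocations at import time, so it undercounts C-extension and runtime overhead, but it is usually enough to rank the new apps by rough footprint.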