Originally discussed in #10645 (comment), @joestringer advised opening a separate issue for this behaviour/bug, which appears to be a regression.
## Bug report
After attempting a `1.8.2` -> `1.8.3` upgrade on Ubuntu 20.04 / 5.4.0-1021-aws, the agent pods crash with the logs below. After rolling back to `1.8.2`, the pods are healthy and no longer emit these errors. 🤷
{"error":"Failed to sysctl -w net.ipv4.conf.eth0.rp_filter=2: could not open the sysctl file /proc/sys/net/ipv4/conf/eth0/rp_filter: open /proc/sys/net/ipv4/conf/eth0/rp_filter: no such file or directory","level":"error","msg":"Error while initializing daemon","subsys":"daemon"}
{"error":"Failed to sysctl -w net.ipv4.conf.eth0.rp_filter=2: could not open the sysctl file /proc/sys/net/ipv4/conf/eth0/rp_filter: open /proc/sys/net/ipv4/conf/eth0/rp_filter: no such file or directory","level":"fatal","msg":"Error while creating daemon","subsys":"daemon"}
{"error":"Operation cannot be fulfilled on ciliumnodes.cilium.io \"ip-10-6-11-13.eu-west-1.compute.internal\": the object has been modified; please apply your changes to the latest version and try again","level":"warning","msg":"Unable to update CiliumNode custom resource","subsys":"ipam"}
{"level":"info","msg":"regenerating all endpoints","reason":"one or more identities created or deleted","subsys":"endpoint-manager"}
{"level":"info","msg":"regenerating all endpoints","reason":"one or more identities created or deleted","subsys":"endpoint-manager"}
## General Information
- Cilium version: 1.8.3
- Kernel version: 5.4.0-1021-aws
- Orchestration system version: EKS v1.17.9-eks-4c6976
## How to reproduce the issue
- [TO BE CONFIRMED] Run an EKS cluster with AWS Nitro instances; presumably the issue stems from those nodes lacking an `eth0` interface.
- 1.8.2 works, but upgrading to 1.8.3 causes the Cilium agent to crash after a few seconds. The interface naming can be verified with the sketch after this list.
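To confirm whether `eth0` is actually absent on the node, the interface names can be listed. A minimal sketch; the `ens5` output shown is an assumption about how Ubuntu 20.04 names the primary NIC on Nitro instances, not a confirmed observation from this cluster:

```sh
# Print interface names only; if eth0 is missing here, the agent's
# hard-coded sysctl write against eth0 cannot succeed.
ip -o link show | awk -F': ' '{print $2}'
# lo
# ens5   <- assumed primary NIC name on Nitro/Ubuntu 20.04; no eth0
```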