kube-proxy logs exploding in 4000-node scale tests with services enabled #48052
Closed
Labels
kind/api-change: Categorizes issue or PR as related to adding, removing, or otherwise changing an API.
sig/network: Categorizes an issue or PR as relevant to SIG Network.
sig/scalability: Categorizes an issue or PR as relevant to SIG Scalability.
Description
Ref #47865 (comment)
From our recent scale test runs:
- With services disabled, size of kube-proxy.log = 921B
- With services enabled, size of kube-proxy.log (+ rotated logs) = ~6 GB
The reason is that we log the entire iptables state (which is very large in our tests) whenever the restore step fails (this line: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L1574).
This huge log volume also appears to OOM-kill fluentd.
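One possible mitigation is to cap how much of the failed rule set gets logged. A minimal sketch of that idea, assuming a hypothetical `truncateForLog` helper (this is not the actual kube-proxy code, just an illustration of the approach):

```go
package main

import "fmt"

// truncateForLog returns at most max bytes of s, appending a note about how
// much was omitted. Hypothetical helper, shown only to illustrate capping
// the size of the iptables dump written on restore failure.
func truncateForLog(s string, max int) string {
	if len(s) <= max {
		return s
	}
	return fmt.Sprintf("%s... (%d bytes omitted)", s[:max], len(s)-max)
}

func main() {
	// Simulate a very large iptables-restore payload, as seen in the
	// 4000-node scale tests.
	rules := make([]byte, 1<<20) // 1 MiB of rule text
	for i := range rules {
		rules[i] = '#'
	}
	// On restore failure, log only a bounded prefix instead of the whole dump.
	fmt.Println(truncateForLog(string(rules), 64))
}
```

With a cap like this, each restore failure contributes a bounded log line rather than the full multi-megabyte rule set; the full dump could still be written at a higher verbosity level if needed for debugging.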
cc @kubernetes/sig-network-misc @kubernetes/sig-scalability-misc @bowei @gmarek