cilium: bpf-based hostport implementation #10592
Conversation
Force-pushed ae0add1 to 1da3d05; test-me-please
Force-pushed 1da3d05 to 279ca43; test-me-please, test-docs-please
Force-pushed 279ca43 to 6bbc8c2; test-docs-please, test-me-please
Force-pushed 6bbc8c2 to e52c4fd; test-docs-please
Nit: kubectl run nginx --generator=run-pod/v1 --image=nginx --replicas=1 --port=80 --hostport=8080?
I was referring to the example yaml from earlier in the same doc, so I kept it as is but with the added hostPort.
Do we want to disable the hostPort feature for new users? Asking because for existing users the feature will be enabled after upgrading to v1.8, since the agent defaults the flag to true.
I would keep it enabled; otherwise people might miss it and still deploy the portmap CNI chaining. I'll do a follow-up for the upgrade guide to document the support and the actions needed.
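For users who do want to opt out after the upgrade, this would presumably come down to a Helm values toggle along these lines. The exact key name below is an assumption for illustration, not confirmed by this PR; check the chart changes in this PR for the real toggle:

```yaml
# Hypothetical values.yaml fragment -- the key name is an assumption,
# shown only to illustrate the opt-out discussed above.
global:
  hostPort:
    enabled: false
```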
Force-pushed 59221bc to c4d9f1a; test-docs-please
Force-pushed c4d9f1a to e02b3bc; test-me-please, test-docs-please, test-me-please
Force-pushed e02b3bc to 830c317; test-me-please, test-docs-please
Force-pushed 830c317 to 3baae90; test-me-please, test-docs-please
Force-pushed 3baae90 to 19cac1e; (prior docs test was green); test-me-please; (prior travis ci was green, x86 is green here, arm64 flaky in unrelated spot)
Today, hostPort can only be consumed by deploying Cilium with --set global.cni.chainingMode=portmap, which sets up iptables rules for each pod that specifies a hostPort. We have almost all of the BPF infrastructure in place to support hostPort natively in BPF instead. This work implements the hostPort feature as a service mapping. It is part of the kube-proxy-free implementation since it depends on the NodePort infrastructure.
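Conceptually, the translation is straightforward: a pod's hostPort declaration becomes a service entry with the node IP as frontend and the pod as backend, analogous to how NodePort services are handled. A rough sketch of that mapping (plain Python purely for illustration; none of these names come from the Cilium code base):

```python
# Illustrative sketch only: models a hostPort spec being turned into a
# HostPort service entry. All identifiers here are made up for the example.

def hostport_to_service(node_ip, pod_ip, container_port, host_port):
    """Map a pod's hostPort declaration to a frontend -> backend entry."""
    frontend = (node_ip, host_port)       # where traffic arrives on the node
    backend = (pod_ip, container_port)    # always a pod local to this node
    return {"type": "HostPort", "frontend": frontend, "backends": [backend]}

# Mirrors the nginx example below: hostPort 90 -> containerPort 80.
svc = hostport_to_service("192.168.178.29", "10.29.245.35", 80, 90)
print(svc["frontend"])   # ('192.168.178.29', 90)
print(svc["backends"])   # [('10.29.245.35', 80)]
```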
Example:
# ./daemon/cilium-agent --identity-allocation-mode=crd --enable-ipv6=true --enable-ipv4=true --disable-envoy-version-check=true --tunnel=disabled --k8s-kubeconfig-path=$HOME/.kube/config --kube-proxy-replacement=strict --enable-l7-proxy=false
# cat hostport.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 90
# kubectl apply -f ./hostport.yaml
pod/nginx created
# ./cilium/cilium service list
ID   Frontend             Service Type   Backend
1    10.96.0.1:443        ClusterIP      1 => 192.168.178.29:6443
2    10.96.0.10:53        ClusterIP      1 => 10.29.6.103:53
                                         2 => 10.29.24.228:53
3    10.96.0.10:9153      ClusterIP      1 => 10.29.6.103:9153
                                         2 => 10.29.24.228:9153
4    192.168.178.29:90    HostPort       1 => 10.29.245.35:80
From the local node:
# curl 192.168.178.29:90
<!DOCTYPE html>
<html>
<head>
[...]
From a remote node:
root@tank:~# curl 192.168.178.29:90
<!DOCTYPE html>
<html>
<head>
[...]
The HostPort implementation is a hybrid of ClusterIP and NodePort: making it reachable on the local node only takes a single service map entry (as opposed to additionally exposing it via the loopback address, etc.), and for external traffic the backend is always local to the node and can be mapped from any port.
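As a mental model of that hybrid behavior, the lookup can be sketched as below. This is a toy model under the assumptions stated in the comments, not Cilium's actual BPF datapath code:

```python
# Toy model of the hybrid lookup described above. The real implementation
# lives in BPF; this Python is illustrative only.

# Single HostPort entry, keyed on the node IP (not loopback, not a VIP).
SERVICES = {("192.168.178.29", 90): ("10.29.245.35", 80)}

def lookup(dst_ip, dst_port):
    """Return the translated (pod_ip, port) for a packet, or None.

    The entry is keyed only on the node IP, so loopback destinations do
    not match, and the backend is always local to the node, so no
    cross-node forwarding is needed for external traffic.
    """
    return SERVICES.get((dst_ip, dst_port))

print(lookup("192.168.178.29", 90))  # ('10.29.245.35', 80) - node IP matches
print(lookup("127.0.0.1", 90))       # None - not exposed via loopback
```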
Fixes: #10359
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Document the new HostPort mapping feature under the advanced section and add Helm support for it. I've also added a small setup validation as I think it's quite useful.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Add connectivity test cases from internal and external.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Force-pushed 19cac1e to 7616447; test-me-please
See commit msg.