**Is this a BUG REPORT or FEATURE REQUEST?:**

/kind feature
**What happened**:
For Kubernetes release 1.10 and older, clusters can be run only in IPv4-only mode, IPv6-only mode, or a very limited dual-stack configuration. The current dual-stack support is limited by the following restrictions:
- Pods can have dual-stack addresses (if a CNI plugin that supports dual-stack is used, e.g. bridge or Calico), but Kubernetes is only aware of one address per pod
- Nodes can have dual-stack addresses
- Kube system pods (API server, controller manager, etc.) can only have one address per pod (all pods have IPv4 or all pods have IPv6)
- Only one service CIDR can be configured for the cluster, so service IPs are all IPv4 or all IPv6
- Kube-proxy only supports one family of iptables at a time (iptables or ip6tables, not both)
- Endpoints for services are all IPv4 or all IPv6
- Kube-dns is capable of running dual-stack, but it is currently only made aware of either all-IPv4 or all-IPv6 service addresses
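The single-address limitation above is visible directly in the Pod status: even when the CNI plugin assigns both families, only one address is reported. A minimal sketch (the `podIP` field is from the v1 Pod API; the addresses shown are illustrative):

```yaml
# Excerpt of `kubectl get pod <name> -o yaml` output (illustrative IPs)
status:
  podIP: 10.244.1.5   # the one address Kubernetes tracks,
                      # even if the CNI plugin also assigned e.g. fd00:1234::5
```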
**What you expected to happen**:
Kubernetes should be aware of and support dual-stack addresses for pods, nodes and services. To make this happen, the required changes will include (but not be limited to) the following:
- Kubernetes needs to be made aware of multiple IPs per pod (up to one IPv4 address, and possibly multiple IPv6 addresses)
- Kubernetes needs to be configurable for multiple (at least 2) service CIDRs
- Kube-proxy (and IPVS) needs to support IPv4 and IPv6 services in parallel (e.g. drive iptables and ip6tables in parallel)
- Health/liveness/readiness probes should be configurable for IPv4 or IPv6, possibly with IPv4 probes running in parallel with IPv6 probes
- Cloud provider plugins should be able to report dual-stack external/internal addresses
- Kubectl commands and output displays will need to be modified for dual-stack
- Some Kubeadm support will probably need to be added for dual-stack
- New e2e test cases will probably need to be added to test parallel IPv4/IPv6 connectivity between pods, nodes, and services
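As a rough sketch of what dual-stack cluster configuration might eventually look like (these flag values are purely hypothetical and are NOT supported in 1.10 or older; they only illustrate the "multiple CIDRs" goal):

```yaml
# Hypothetical kubeadm-style extra args for dual-stack (illustrative only)
apiServerExtraArgs:
  service-cluster-ip-range: "10.96.0.0/12,fd00:10:96::/112"  # two service CIDRs
controllerManagerExtraArgs:
  cluster-cidr: "10.244.0.0/16,fd00:10:244::/64"             # dual pod CIDRs
```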
Some questions to be answered when a feature spec is written:
- Should individual service manifests be able to select IPv4 or IPv6 service IPs, or every service gets IPv4 + IPv6 service IPs?
- How to extend dual-stack to future multiple interface per pod support? (Or at least how not to "paint ourselves into a corner" with the dual-stack changes?)
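For the first question, one possible (hypothetical) shape would be a per-Service field selecting the address family. The field name `ipFamily` below is illustrative only and does not exist in the 1.10 API:

```yaml
# Hypothetical Service manifest with a family selector (illustrative only)
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  ipFamily: IPv6        # hypothetical: request an IPv6 cluster IP
  selector:
    app: my-app
  ports:
  - port: 80
```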
**How to reproduce it (as minimally and precisely as possible)**:
Just try to set up a Kubernetes cluster in dual-stack mode.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`): 1.10 or older
- Cloud provider or hardware configuration: Any
- OS (e.g. from /etc/os-release): Any
- Kernel (e.g. `uname -a`): Any
- Install tools:
- Others:
**Corresponding Feature Issue**:
kubernetes/enhancements#563
**Task/To-Do List**:
https://docs.google.com/spreadsheets/d/1KjvjvmZkhLI9M1BXVRVNn_oQpcwCB424x-0lOY2a3G0/edit#gid=0