Conversation
|
Hi @aojea. Thanks for your PR. I'm waiting for a kubernetes-sigs or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
/ok-to-test |
neolit123
left a comment
thanks for the PR, i've added some generic comments.
| controlPlaneEndpoint: "{{ .ControlPlaneEndpoint }}" | ||
| {{- end }} | ||
| # we use a well know port for making the API server discoverable inside docker network. | ||
| # from the host machine such port will be accessible via a random local port instead. |
this comment is about the port.
don't understand the comment, I had to quote the ControlPlaneEndpoint because it failed to create the JSON from the template using IPv6 addresses
i wanted to say that the comment `# we use a well know port for making...` is now above advertiseAddress, while it applies to the port below.
I think that the comment is ok; it's above the api section, not above the port field, to explain that the API endpoint uses a well-known port.
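For context, the template section under discussion looks roughly like this (a sketch, not the exact kind template; the field names and the wording of the comment are illustrative), with the comment sitting above the whole API endpoint block rather than directly above the port:

```yaml
# we use a well-known port to make the API server discoverable inside the docker network;
# from the host machine that port is reachable via a random local port instead.
api:
  advertiseAddress: "{{ .AdvertiseAddress }}"
  bindPort: 6443
```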
|
|
||
| // get the control plane endpoint, in case the cluster has an external load balancer in | ||
| // front of the control-plane nodes | ||
| controlPlaneEndpoint, err := getControlPlaneEndpoint(ec) |
getControlPlaneEndpoint might need modifications based on IPv6.
have you tested an HA setup?
didn't test it, but will do.
I've already modified the function `getControlPlaneEndpoint` to support IPv6, check L135
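The IPv6-aware endpoint construction can be sketched with `net.JoinHostPort`, which brackets IPv6 literals automatically (a minimal illustration under assumed names, not the actual kind code; `joinEndpoint` is hypothetical):

```go
package main

import (
	"fmt"
	"net"
)

// joinEndpoint builds a host:port control plane endpoint.
// net.JoinHostPort wraps IPv6 literals in brackets, which is
// required for a valid endpoint such as [::1]:6443.
func joinEndpoint(host string, port int) string {
	return net.JoinHostPort(host, fmt.Sprintf("%d", port))
}

func main() {
	fmt.Println(joinEndpoint("172.17.0.2", 6443))             // 172.17.0.2:6443
	fmt.Println(joinEndpoint("2001:db8:1::242:ac11:3", 6443)) // [2001:db8:1::242:ac11:3]:6443
}
```

Formatting the endpoint this way keeps a single code path for both IP families instead of branching on the address type.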
@neolit123 results of an HA setup test
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b37bcbcf3b6 kindest/node:v1.13.3 "/usr/local/bin/entr…" 11 minutes ago Up 11 minutes kind-worker2
89fa42fd404e kindest/node:v1.13.3 "/usr/local/bin/entr…" 12 minutes ago Up 12 minutes kind-worker1
0d0caea990ca kindest/node:v1.13.3 "/usr/local/bin/entr…" 12 minutes ago Up 12 minutes 33453/tcp, 0.0.0.0:33453->6443/tcp kind-control-plane3
422bd7c52d53 kindest/node:v1.13.3 "/usr/local/bin/entr…" 13 minutes ago Up 12 minutes 34151/tcp, 0.0.0.0:34151->6443/tcp kind-control-plane2
780f8cde9d6d kindest/node:v1.13.3 "/usr/local/bin/entr…" 13 minutes ago Up 13 minutes 37927/tcp, 0.0.0.0:37927->6443/tcp kind-control-plane1
3f2e867b33b6 kindest/node:v1.13.3 "/usr/local/bin/entr…" 13 minutes ago Up 13 minutes 40189/tcp, 0.0.0.0:40189->6443/tcp kind-lb
# kubectl get endpoints --all-namespaces
NAMESPACE NAME ENDPOINTS AGE
default kubernetes [2001:db8:1::242:ac11:3]:6443,[2001:db8:1::242:ac11:4]:6443,[2001:db8:1::242:ac11:5]:6443 11m
kube-system kube-controller-manager <none> 10m
kube-system kube-dns 10.32.0.2:53,10.32.0.3:53,10.32.0.2:53 + 1 more... 10m
kube-system kube-scheduler <none> 10m
with current parameters:
./kind create cluster --ipv6 --config config-ipv6.yaml
and config-ipv6.yaml
# this config file contains all config fields with comments
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
# 3 control plane nodes and 2 workers
nodes:
# the control plane node config
- role: control-plane
replicas: 3
# patch the generated kubeadm config with some extra settings
kubeadmConfigPatches:
- |
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
metadata:
name: config
networking:
serviceSubnet: "fd00:1234::/112"
# the three workers
- role: worker
# replicas specifies the number of nodes to create with this configuration
replicas: 2
- role: external-load-balancer
@BenTheElder I'd like to add some e2e scenarios, could you help me or guide me?
Should those tests be part of this PR?
@aojea
hm, given weave supposedly does not support ipv6, how are you testing this?
./kind create cluster --ipv6
we need to remove the --ipv6 flag from the CLI, should be config only.
the CLI is only for the basic functionality.
@BenTheElder I'd like to add some e2e scenarios, could you help me or guide me?
Should those tests be part of this PR?
let's get the support in kind first, and then we can think about e2e.
ideally things like ipv6 should be owned by sig-network, but i think we might be able to add this in the sig-cluster-lifecycle dashboard too.
i will try your PR later today myself, hopefully.
@neolit123 it seems that the cluster control plane is ipv6 but the data plane is ipv4; let me do more tests switching to the cni bridge plugin and come back to you
@aojea nice.
podSubnet: "fd00:100::/64"
serviceSubnet: "fd00:1234::/112"
do we have to instruct the users to patch the config with IPv6 CIDRs if they want IPv6 support?
kind create cluster --ipv6
ideally should be a flag in the kind Config.
Where should be the best place to add the ipv6 flag?
What do you think to add it in the new networking section of the v1alpha3 API #340 ?
What do you think to add it in the new networking section of the v1alpha3 API
yes, makes sense.
and how about this?
do we have to instruct the users to patch the config with IPv6 CIDRs if they want IPv6 support?
@neolit123 that's a good question, but I lack context here; I think it may be a topic to discuss in kubeadm.
I could check that kubeadm sets a default serviceSubnet if none is provided; should we do the same for IPv6? If so, we should add a flag to kubeadm then.
I could check that kubeadm sets a default serviceSubnet if none is provided; should we do the same for IPv6? If so, we should add a flag to kubeadm then.
here are kubeadm defaults for these CIDRs:
https://github.com/kubernetes/kubernetes/blob/b66e332d3c19ce78e00dd7c904fa29e8f6784ba0/cmd/kubeadm/app/apis/kubeadm/v1beta1/defaults.go#L31-L34
it's not very likely that we will add a flag or a field in the kubeadm config for this.
IPv6 is explicit on the kubeadm side, and it's up to the administrator to decide which stack to use. so kind, as a kubeadm wrapper, might have to handle this explicitly (as your patch is doing here).
at least that's how i see it for now.
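Handling the family explicitly on the kind side could be sketched as follows (a hypothetical helper; the constant names are assumptions, and the IPv6 CIDRs are the ones used in this thread's tests, not necessarily kind's final defaults):

```go
package main

import "fmt"

// Illustrative default CIDRs per IP family. The IPv4 values match
// the kubeadm/kind conventions; the IPv6 values are the ones
// exercised in this PR's test runs.
const (
	defaultPodSubnetIPv4     = "10.244.0.0/16"
	defaultServiceSubnetIPv4 = "10.96.0.0/12"
	defaultPodSubnetIPv6     = "fd00:100::/64"
	defaultServiceSubnetIPv6 = "fd00:1234::/112"
)

// defaultSubnets returns (podSubnet, serviceSubnet) for a given
// ipFamily config value, defaulting to IPv4 for anything else.
func defaultSubnets(ipFamily string) (string, string) {
	if ipFamily == "ipv6" {
		return defaultPodSubnetIPv6, defaultServiceSubnetIPv6
	}
	return defaultPodSubnetIPv4, defaultServiceSubnetIPv4
}

func main() {
	pod, svc := defaultSubnets("ipv6")
	fmt.Println(pod, svc)
}
```

With defaults like these, users would not need to hand-patch the kubeadm config with IPv6 CIDRs; setting the family once in the kind config would be enough.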
|
@BenTheElder it's failing to verify the docker image. Do I need to recreate the file? How can I do it? |
|
|
/hold until I test the haproxy scenario. I also want to test changing the CNI plugin to verify that IPv6 works ok; please add more comments and suggestions |
|
[intend to start doing some more tests around this ~tomorrow] Thanks again for working on this, apologies for the review latency on my end. I've been looking deeper into the CNI options, FWIW; I intend to do some more testing with calico 👍 |
|
@BenTheElder after playing with different CNI plugins with unsatisfactory results I explored other options. |
911c3bb to
f694fd1
|
jobs are failing because of
it seems related to this #126 🤔 are the CI jobs using a different |
|
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: aojea. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment. |
|
rebased over the kindnet 0.2.0 branch and squashed; we need the new kindest/kindnetd:0.2.0 images in order to test it |
|
/test pull-kind-conformance-parallel-ipv6 |
|
/test pull-kind-conformance-parallel-ipv6 |
426d602 to
d5c22dc
|
/test pull-kind-conformance-parallel-ipv6 |
|
/test pull-kind-conformance-parallel-ipv6 |
|
/retest |
|
/test pull-kind-conformance-parallel-ipv6 |
Adds an environment variable (IP_FAMILY) to the script that runs the e2e tests in CI, to allow choosing the Kubernetes IP family used in the clusters. By default the variable selects ipv4.
This commit allows kind to create IPv6 Kubernetes clusters and makes the code ready for a dual-stack implementation. For simplicity, only one address of each IP family is considered. It adds 2 new options to the v1alpha3 API: ipFamily and serviceSubnet.
|
/test pull-kind-conformance-parallel-ipv6 |
|
/hold
|
|
/test pull-kind-conformance-parallel-ipv6 |
|
@aojea: The following test failed, say /retest to rerun it.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
| KUBECONFIG="$(kind get kubeconfig-path)" | ||
| export KUBECONFIG | ||
|
|
||
| if [[ "${IP_FAMILY:-ipv4}" == "ipv6" ]]; then |
this is not great, but it's not exactly our fault; we can revisit it later, and it's orthogonal to the actual CLI work for ipv6
| # to the cluster after rewriting the kubeconfig to point to localhost | ||
| apiServer: | ||
| certSANs: [localhost, {{.APIServerAddress}}] | ||
| certSANs: [localhost, "{{.APIServerAddress}}"] |
controlPlaneEndpoint above needs quoting
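The quoting problem can be reproduced with Go's `text/template` (a minimal sketch, not the actual kind template; `render` is a hypothetical helper): rendered unquoted, an IPv6 literal puts a scalar starting with `:` inside the YAML flow sequence, which YAML parsers reject, while the quoted form always yields a valid string.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The same certSANs line with and without quoting of the address.
const unquoted = `certSANs: [localhost, {{.APIServerAddress}}]`
const quoted = `certSANs: [localhost, "{{.APIServerAddress}}"]`

// render executes a one-line template with the given address.
func render(tmpl, addr string) string {
	t := template.Must(template.New("cfg").Parse(tmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, struct{ APIServerAddress string }{addr}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// The unquoted output is broken YAML for IPv6 addresses;
	// the quoted output parses for both IP families.
	fmt.Println(render(unquoted, "2001:db8::1"))
	fmt.Println(render(quoted, "2001:db8::1"))
}
```

The same reasoning applies to controlPlaneEndpoint: any templated field that can hold an IPv6 address needs quoting.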
|
I had to add a tiny commit to fix the quoting in the config (#636), but otherwise this is good to go. It's merged now! 🎉 |
This PR allows creating IPv6 Kubernetes clusters with kind and has a future dual-stack implementation in mind, considering, for simplicity, only one address of each protocol. It adds a new option, ipFamily, to the v1alpha3 API that allows choosing the IP family of the cluster. To avoid issues with the different networking options, the podSubnet and the serviceSubnet kubeadm values are predefined. We can create a Kubernetes IPv6 cluster with the following config:
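A config along these lines could look like the following (a sketch assembled from the fields discussed in this thread; the apiVersion/kind header and the serviceSubnet value are assumptions, the CIDR being the one used in the test runs above):

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
networking:
  ipFamily: ipv6
  serviceSubnet: "fd00:1234::/112"
nodes:
- role: control-plane
```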
Test results with IPv4 and IPv6
References:
Fixes #280