Rolling node update #5511
Status: Closed

Labels:
- area/kubectl
- lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
- priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.
- sig/cluster-lifecycle: Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.
Capturing conversation with @bgrant0607 just now.
New feature for kubectl: when you run

`kubectl rollingnodeupdate --exec myprogram --success_label=k3=v3 --fail_label=k4=v4 --selector k1=v1,k2=v2`

it would run `myprogram` against each node matching the selector, labeling each node `k3=v3` on success or `k4=v4` on failure. This cleanly separates the problem of taking a node in and out of service (which kubectl should know about) from what you might do to a node once it is removed. Things you might do once it is removed include making salt/chef/puppet update the node, rebooting the node, deleting the node, or powering it down and waiting for a human to touch it.
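The intended flow can be sketched as a shell loop. This is only an illustration of the proposal, not an implementation: `rollingnodeupdate` does not exist, the `rolling_node_update` function name is made up, and the `KUBECTL`/`EXEC_PROGRAM` variables are introduced here purely so the flow can be dry-run without a cluster.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of what `kubectl rollingnodeupdate` might do per node:
# take the node out of service, run the operator-supplied program, record the
# outcome as a label, then put the node back in service.
rolling_node_update() {
  local kubectl="${KUBECTL:-kubectl}"          # override with KUBECTL=echo to dry-run
  local exec_program="${EXEC_PROGRAM:-myprogram}"  # the --exec program (assumed name)
  local node
  for node in "$@"; do
    # 1. Take the node out of service so no new work lands on it.
    "$kubectl" cordon "$node"
    # 2. Run the update program (salt/chef/puppet run, reboot, etc.).
    if "$exec_program" "$node"; then
      "$kubectl" label "$node" k3=v3           # --success_label
    else
      "$kubectl" label "$node" k4=v4           # --fail_label
    fi
    # 3. Put the node back in service.
    "$kubectl" uncordon "$node"
  done
}
```

For a dry run without a cluster, substitute `echo` for kubectl: `KUBECTL=echo EXEC_PROGRAM=true rolling_node_update node/a` prints the cordon, label, and uncordon steps instead of executing them.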
The clean removal steps are documented in docs/cluster_management.md under "Maintenance on a node."