Initial support for multiple control plane revisions #20798
istio-testing merged 1 commit into istio:master
Conversation
apps/v1 Deployment label selectors are immutable. We'll need a new Deployment for every revision bump. This is arguably a good thing for versioning multiple control planes. How will this work in 1.5, where we really only support in-place upgrades?
This should be a new deployment. I need to modify this PR to make it istiod-{{.revision}}; I forgot that somehow. In 1.5 we could remove this selector, I guess?
Am I right in thinking that the 1.5 deployment (matching app: pilot) would match the 1.6 deployment (which has app: pilot as well as a revision label), and things will get weird?
istiod-{{.revision}} deployment naming to match the istio.io/rev label sounds good.
I think we can remove the selector for 1.5. See my comment below about setting the istio.io/rev directly in the deployment below.
I think you're right. 1.5 pods w/app=pilot would match the 1.6 deployments. If we changed the app label value that shouldn't be a problem though. s/pilot/istiod in 1.6 maybe?
The only reason I kept it as pilot was to avoid issues where we had tooling selecting app=pilot (e.g. istioctl). But if we are already selecting on only that, we may not have much choice.
I think you're right. 1.5 pods w/app=pilot would match the 1.6 deployments.
I had this wrong. The k8s pilot Service would match both. The ReplicaSet versions shouldn't get mixed up. That is pretty much how our bookinfo demo is set up today.
Interesting, it seems like you can't actually run into the deployment label issue, since the ReplicaSet adds a pod-template-hash label.
Let me fix up the names, service selectors, etc
You might also be able to filter in the k8s list/watch. This would reduce memory usage in the client cache and decrease load on the kube-apiserver.
I really wanted to, but there is no OR support in label selectors. So we cannot express "label is empty OR label is revision123". Unless there is some tricky boolean logic I missed.
"label is empty" or "label not set at all"? In the chart above istio.io/revision is only added if the revision value is not empty.
Assuming "label is not set at all", then the "OR" decision is made when the informer is started, i.e. watch all objects vs. only objects with istio.io/rev=.
"Label not set at all". If it's not set, it applies to all revisions. This is the "normal" case; all current configs are likely in this category. If we just watch objects with the istio.io/rev label, then we miss the "global" objects.
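The rule being described (an object with no istio.io/rev label is "global" and applies everywhere; a labeled object applies only to the matching revision) can't be pushed into a single apiserver label selector, so it becomes a client-side predicate. A minimal sketch, purely illustrative and not the actual istiod code; matchesRevision is a made-up name:

```go
package main

import "fmt"

// revisionLabel scopes a config object to one control plane revision.
const revisionLabel = "istio.io/rev"

// matchesRevision implements "label unset OR label equals our revision":
// unlabeled objects are global and visible to every revision, labeled
// objects belong only to the named revision.
func matchesRevision(objLabels map[string]string, revision string) bool {
	rev, ok := objLabels[revisionLabel]
	if !ok {
		return true // no label: global object, applies to all revisions
	}
	return rev == revision
}

func main() {
	fmt.Println(matchesRevision(map[string]string{}, "canary"))                         // true: global
	fmt.Println(matchesRevision(map[string]string{revisionLabel: "canary"}, "canary"))  // true: same revision
	fmt.Println(matchesRevision(map[string]string{revisionLabel: "stable"}, "canary"))  // false: other revision
}
```

Since the "OR" can't be expressed as a watch filter, an informer using this approach has to watch all objects and apply the predicate in the event handlers, which is exactly the memory/apiserver-load trade-off discussed above.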
The value of istio.io/rev should be .Values.global.revision. You could set the envvar directly in the deployment. That would get around the immutable deployment label selector for 1.5 I think? We could add an annotation to the pod for purely informational purposes, but that would only exist for Istio < 1.6.
I think we want it to have a different selector intentionally though, right? Otherwise the two are coupled. I need to look into what happens with overlapping selectors on deployments; I don't quite understand it.
But I also think we want a label. We might actually want it on every single resource, so we can do kubectl apply --prune -l istio.io/rev=foo type things? Otherwise it will be hard to clean up.
A revision label on the deployment to match the associated resources is desirable and something I think we can make work in 1.6. IIUC, that requires proper multi-control plane support (e.g. istiod-)?
My understanding of this PR is to make istiod (1.5) forward compatible with 1.6 and honor the istio.io/revision label on resources. It would be great if we could sync the resource revision labels with the deployment revision label but that didn't seem possible for 1.5 and I don't think is strictly required. As a short term solution you could hardwire the REVISION envvar so that config filtering doesn't depend on the deployment's revision label, e.g.
{{- if .Values.global.revision }}
- name: REVISION
  value: "{{ .Values.global.revision }}"
{{- end }}
costinm left a comment
We can add the revision back to the names in a separate PR.
It needs a revision as well.
/cherrypick release-1.5
@howardjohn: #20798 failed to apply on top of branch "release-1.5". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Initial support for multiple control plane revisions (istio#20798) (cherry picked from commit ea5abed)
This is intended to be a minimal change to support backporting to 1.5.
The benefit of backporting to 1.5 is that it will make the old 1.5
control planes behave more correctly once we ship 1.6.
Design doc: https://docs.google.com/document/d/1d1z5PC8wvh9QiR3NesAaPm6w2HnE4-KIfjB7GB-_Wf8/