Kubelet watches necessary secrets/configmaps instead of periodic polling#64752
Conversation
@wojtek-t: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.
Force-pushed 255ed93 to 75826ea.
/test pull-kubernetes-integration
The name ManagerMode seems too generic. Can we make it more specific for ConfigMap & Secrets?
Also, is there any value to configure secrets and configmaps differently? I'm guessing not...
Changed to ConfigMapAndSecretManagerMode - if you have a better suggestion, I can change it one more time.
Regarding configuring them differently - I consciously merged those two; I think we don't want to diverge them.
Let me know if you disagree with that.
Thanks :). Comments below:
- Is it necessary to expose this as a config? How risky do we think the change is? We already watch Pods, dynamic Kubelet config watches ConfigMaps, so watch is not unprecedented in the Kubelet.
- If so, I think we can potentially improve the naming and structure here. Naming is hard :). See below.
With respect to "manager", I really like the advice in https://blog.codinghorror.com/i-shall-call-it-somethingmanager/
Mode is more general than it has to be, specifically this is a strategy for observing resource updates.
ResourceNotifyStrategy was my first guess of what to call the strategy type, but this is still a bit vague, IMO, because it is not clear who is being notified (is it the resource, or the Kubelet?), or why there is a notification.
ResourceChangedEventStrategy might be a bit better, since ResourceChangedEvent makes it clear what's going on. But Event is overloaded for another Kubernetes concept, so I don't like this either.
ResourceChangeDetectionStrategy manages to avoid overloading Event. This is my best guess so far, IMO. Still feels verbose, maybe there's a way to simplify it.
For a field name, maybe ConfigMapAndSecretChangeDetectionStrategy. It's a bit verbose, so if there's a standard term that clearly captures ConfigMaps and Secrets, and whatever other objects this field will cover in the future, consider using that. But be wary of making it too general.
Other things:
Should we allow the strategies for ConfigMaps and Secrets to be configured independently?
- I think we can simply use the infinitive of the verb: instead of Caching and Watching, just say Cache and Watch. Simple is a bit vague; would it be more accurate to call this strategy Poll?
Also, structure-wise, is there a broader category for resource-consumption-lifecycles in the Kubelet, and should we invent a config substructure for that?
I like: ResourceChangeDetectionStrategy - changed.
Regarding specific configmap and secrets:
- I don't have any idea for a common term for those two, so I left ConfigMapAndSecretChangeDetectionStrategy. I don't want to split them, because in my opinion we want to avoid diverging those two.
About the infinitive verbs - yeah, that sounds fine. Also, I decided on "Fetch" instead of "Poll" - I think it's more accurate.
About the structure - I don't think we will have more of that, so in my opinion introducing a substructure doesn't make sense now.
pkg/kubelet/secret/secret_manager.go
Merge this with the previous imports?
Merge this with the previous import.
Since "watching" is the default, why do you need to set it explicitly everywhere?
Those are some kubeadm test data - it seems we explicitly list the whole config in multiple places.
I didn't dig deeper into those kubeadm files, but the problem seems orthogonal to this PR.
|
@yujuhong: GitHub didn't allow me to request PR reviews from the following users: FYI. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. DetailsIn response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Force-pushed 75826ea to e55dcbb.
Force-pushed ec37421 to a1fc044.
s/Watche/Watch
(and other places)
Had another thought on this - would it be more standard to call it Get, given that this strategy is just sending a GET request to the API server?
Force-pushed a1fc044 to 72a0f4d.
```yaml
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
```
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mtaufen, wojtek-t, yujuhong.
/retest
@wojtek-t how did this affect things on the large-scale tests?
[MILESTONENOTIFIER] Milestone Pull Request Labels Incomplete. @derekwaynecarr @mtaufen @wojtek-t @yujuhong Action required: this pull request requires label changes. If the required changes are not made within 2 days, the pull request will be moved out of the v1.12 milestone.
@liggitt When I was running 5k-node tests, this visibly reduced CPU load on the apiserver (though the drop in API-call latencies wasn't significant). My goal in merging it early in the cycle is to get many more runs over the whole cycle, so that in case of problems we can fix them (or, worst case, disable it). But my two experiments didn't show any.
Automatic merge from submit-queue (batch tested with PRs 65187, 65206, 65223, 64752, 65238). If you want to cherry-pick this change to another branch, please follow the instructions here.
Hi, I am in a situation where some pods need to detect a changed ConfigMap, and on every deploy I have to redeploy some pods to get the new values of the ConfigMap... I know that I can do the rolling restart... but I want to put this