KEP-3633: graduate matchLabelKeys/mismatchLabelKeys to beta #4450
k8s-ci-robot merged 3 commits into kubernetes:master from
Conversation
alculquicondor
left a comment
LGTM overall, just a few nits.
@alculquicondor Fixed, thanks.
> Also, we should make sure this feature brings no significant performance degradation in both Filter and Score.
>
> - `k8s.io/kubernetes/test/integration/scheduler_perf/scheduler_perf_test.go`: https://storage.googleapis.com/k8s-triage/index.html?test=BenchmarkPerfScheduling
After we changed the way this feature is implemented, it no longer has an impact on scheduling latency, since it only modifies the labelSelector and doesn't change anything in the scheduling flow. So we don't implement any new test case in scheduler_perf for this feature.
Just to clarify for the PRR: this is already implemented with no impact to scheduling.
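The thread above describes the feature's mechanism: at Pod creation, kube-apiserver copies the Pod's own label values for the keys listed in `matchLabelKeys` into the affinity term's `labelSelector`, so the scheduler itself is untouched. Below is a minimal, self-contained sketch of that merge step. The types and the `mergeMatchLabelKeys` helper are hypothetical stand-ins for the real apimachinery/apiserver code, kept dependency-free for illustration.

```go
package main

import "fmt"

// Requirement is a simplified stand-in for a label-selector requirement
// (the real type lives in k8s.io/apimachinery).
type Requirement struct {
	Key      string
	Operator string // "In" for matchLabelKeys, "NotIn" for mismatchLabelKeys
	Values   []string
}

// mergeMatchLabelKeys sketches what kube-apiserver does at Pod creation:
// for each key in matchLabelKeys, the Pod's own label value is appended to
// the affinity term's selector as an extra "In" requirement. Keys the Pod
// does not carry are skipped. Hypothetical helper, not the upstream function.
func mergeMatchLabelKeys(podLabels map[string]string, matchLabelKeys []string, selector []Requirement) []Requirement {
	for _, key := range matchLabelKeys {
		if value, ok := podLabels[key]; ok {
			selector = append(selector, Requirement{Key: key, Operator: "In", Values: []string{value}})
		}
	}
	return selector
}

func main() {
	// A typical use: pin co-scheduling to Pods from the same rollout.
	podLabels := map[string]string{"pod-template-hash": "abc123", "app": "web"}
	sel := mergeMatchLabelKeys(podLabels, []string{"pod-template-hash"}, nil)
	fmt.Println(sel) // one "In" requirement on pod-template-hash
}
```

Because the rewrite happens once at admission, the scheduler only ever sees an ordinary (slightly larger) labelSelector, which is why no new scheduler_perf case was added.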
alculquicondor
left a comment
/approve
/lgtm
ping @wojtek-t
wojtek-t
left a comment
Just some minor comments from the PRR point of view - PTAL
> will rollout across nodes.
> -->
>
> It shouldn't impact already running workloads. It's an opt-in feature,
Can't comment on unchanged lines, so commenting here: are there any tests for feature enablement/disablement? Were those added? In particular, the comment from the template is relevant here.
No... I'll add one within this release; fortunately, it's not too late like it was for minDomains 😓
@wojtek-t By the way, do we also have to run a manual test (Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?)? Is there any difference in expectations between the automated enablement/disablement test and the manual test?
The manual test should be performed as well. You can put the intent of the test in the KEP now and send an update later indicating that you executed it.
> - A spike on metric `schedule_attempts_total{result="error|unschedulable"}` when pods using this feature are added.
>
> No need to check the latency of the scheduler because the scheduler doesn't get changed at all for this feature.
> The only possibility is a bug in the Pod creation process in kube-apiserver, and it results in some unintended scheduling.
Is this really true? The computations on the scheduler side might actually be slightly more expensive, no? [e.g. a bit larger selectors to compute against pods...]
@alculquicondor - thoughts?
Maybe... In some scenarios users might be migrating from manually setting labels to using the new automated way.
> The computations on scheduler side might actually be slightly more expensive no?
> a bit larger selectors to compute against pods

That is correct, but is the scenario realistic enough to be worth mentioning here: the labelSelector added by matchLabelKeys makes the scheduler's labelSelector calculation take too long?
If that happened, it would likely not be a problem of this feature, but a problem of the entire PodAffinity, i.e., the labelSelector calculation itself being somehow buggy.
Maybe it's not worth adding, but please also don't mislead the reader that "the latency of the scheduler doesn't get changed" - it is implicitly changed by using its features more.
I added a mention of the scheduler's latency, with a note that it is a lesser concern.
> 12. restart kube-apiserver with this feature enabled. (**Second enablement**)
> 13. No change in Pods created before.
> 14. create one Pod with `matchLabelKeys` in PodAffinity.
> 15. `labelSelector` in PodAffinity should be changed based on `matchLabelKeys` and the label value in the Pod because the feature is enabled.
This scenario is fine - please report back to the KEP after running it [it doesn't block the KEP merge, but please block the actual graduation on it].
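The quoted steps hinge on one invariant: the mutation applies only to Pods admitted while the gate is on, and previously persisted Pods are never rewritten. The sketch below models that invariant with simplified, hypothetical types (the real test would go through the apiserver's admission path and a FeatureGate object).

```go
package main

import "fmt"

// Pod is a simplified stand-in for the real API object; Selector maps a
// label key to the single value an "In" requirement would demand.
type Pod struct {
	Labels         map[string]string
	MatchLabelKeys []string
	Selector       map[string]string
}

// admitPod sketches the admission-time behavior: with the gate off, the
// selector the user wrote is left untouched; with the gate on, each
// matchLabelKeys entry present in the Pod's labels is merged in.
func admitPod(p *Pod, featureEnabled bool) {
	if !featureEnabled {
		return // gate off: no mutation at all
	}
	if p.Selector == nil {
		p.Selector = map[string]string{}
	}
	for _, k := range p.MatchLabelKeys {
		if v, ok := p.Labels[k]; ok {
			p.Selector[k] = v
		}
	}
}

func main() {
	// Pod created while the gate is disabled: selector stays empty, and a
	// later restart with the gate enabled does not re-admit it (step 13).
	old := &Pod{Labels: map[string]string{"hash": "v1"}, MatchLabelKeys: []string{"hash"}}
	admitPod(old, false)
	fmt.Println(len(old.Selector)) // 0

	// Pod created after the second enablement gets the merge (steps 14-15).
	fresh := &Pod{Labels: map[string]string{"hash": "v2"}, MatchLabelKeys: []string{"hash"}}
	admitPod(fresh, true)
	fmt.Println(fresh.Selector["hash"]) // v2
}
```

An automated enablement/disablement test would assert exactly these two outcomes across a gate flip, while the manual upgrade->downgrade->upgrade test exercises the same invariant against real apiserver restarts.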
@sanposhiho - can you please address the remaining comments? The feature freeze is <24h away.
@wojtek-t Sorry for the delay, I didn't have time before.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alculquicondor, sanposhiho, wojtek-t
MatchLabelKeys to Pod Affinity and Pod Anti Affinity #3633