configure proper kops flags for kube-scheduler qps and burst configuration#17752

Merged
k8s-ci-robot merged 1 commit into kubernetes:master from alaypatel07:dra-5k-qps-config-2
Nov 13, 2025
Conversation


@alaypatel07 alaypatel07 commented Nov 10, 2025

fixes: #17751

Signed-off-by: Alay Patel <alayp@nvidia.com>
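The PR exposes the kube-scheduler client QPS and burst settings through the kops cluster spec. A hypothetical cluster-spec fragment to illustrate the idea; the exact field names and value types are assumptions here, not taken from this PR:

```yaml
# Hypothetical kops cluster-spec fragment.
# The kubeScheduler field names (qps, burst) and their types are
# assumed for illustration and may not match the merged change.
spec:
  kubeScheduler:
    qps: "500"    # client-side QPS limit toward the API server (assumed field)
    burst: 1000   # client-side burst limit (assumed field)
```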
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Nov 10, 2025
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 13, 2025
@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hakman

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 13, 2025
@k8s-ci-robot k8s-ci-robot merged commit 06263b1 into kubernetes:master Nov 13, 2025
27 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.35 milestone Nov 13, 2025
@upodroid
Member

We need to revert this PR; it's causing kops to crash:

https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kops-gce-100-ipalias-using-cl2/1988844548314370048

W1113 05:43:17.676897   17159 kopsassets.go:69] Using base url from env var: KOPS_BASE_URL="https://storage.googleapis.com/k8s-staging-kops/kops/releases/1.34.0-beta.2+v1.34.0-beta.1-148-g2aab469fed"
panic: reflect: call of reflect.Value.Set on zero Value [recovered, repanicked]

goroutine 1 [running]:
go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End.deferwrap1()
	go.opentelemetry.io/otel/sdk@v1.38.0/trace/span.go:468 +0x25
go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End(0xc0007da3c0, {0x0, 0x0, 0x0?})
	go.opentelemetry.io/otel/sdk@v1.38.0/trace/span.go:517 +0xbf2
panic({0x73aca60?, 0xc0013da528?})
	runtime/panic.go:783 +0x132
go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End.deferwrap1()
	go.opentelemetry.io/otel/sdk@v1.38.0/trace/span.go:468 +0x25
go.opentelemetry.io/otel/sdk/trace.(*recordingSpan).End(0xc0007da780, {0x0, 0x0, 0xc00087d970?})
	go.opentelemetry.io/otel/sdk@v1.38.0/trace/span.go:517 +0xbf2
panic({0x73aca60?, 0xc0013da528?})
	runtime/panic.go:783 +0x132
reflect.flag.mustBeExportedSlow(0x0?)
	reflect/value.go:232 +0xb6
reflect.flag.mustBeExported(...)
	reflect/value.go:226
reflect.Value.Set({0x82d0580?, 0xc000ed6958?, 0x6fe6420?}, {0x0?, 0x0?, 0x6fe6060?})
	reflect/value.go:2126 +0x9a
k8s.io/kops/pkg/model/components/kubescheduler.MapToUnstructured.func2(0xc0013da4f8, 0xd?, {0x82d0580?, 0xc000ed6958?, 0x0?})
	k8s.io/kops/pkg/model/components/kubescheduler/model.go:184 +0x4f2
k8s.io/kops/util/pkg/reflectutils.reflectRecursive(0xc0013da378, {0x828dca0?, 0xc000ed6900?, 0xc000ed6900?}, 0xc00087c420, 0xc00087c3f6)
	k8s.io/kops/util/pkg/reflectutils/walk.go:156 +0x63b
k8s.io/kops/util/pkg/reflectutils.reflectRecursive(0xc0013da378, {0x76005a0?, 0xc000ed6900?, 0x0?}, 0xc00087c420, 0xc00087c3f6)
	k8s.io/kops/util/pkg/reflectutils/walk.go:210 +0x265
k8s.io/kops/util/pkg/reflectutils.ReflectRecursive(...)
	k8s.io/kops/util/pkg/reflectutils/walk.go:115
k8s.io/kops/pkg/model/components/kubescheduler.MapToUnstructured({0x76005a0, 0xc000ed6900}, 0x92480b0?)
	k8s.io/kops/pkg/model/components/kubescheduler/model.go:198 +0xdb
k8s.io/kops/pkg/model/components/kubescheduler.(*KubeSchedulerBuilder).buildSchedulerConfig(0xc000caa0e0)
	k8s.io/kops/pkg/model/components/kubescheduler/model.go:103 +0x5e5
k8s.io/kops/pkg/model/components/kubescheduler.(*KubeSchedulerBuilder).Build(0xc000caa0e0, 0xc000eab2a0?)
	k8s.io/kops/pkg/model/components/kubescheduler/model.go:54 +0x1c
k8s.io/kops/upup/pkg/fi/cloudup.(*Loader).BuildTasks(0xc00087d058, {0x92d8e38, 0xc0006ff8f0}, 0xc000926870)
	k8s.io/kops/upup/pkg/fi/cloudup/loader.go:47 +0x142
k8s.io/kops/upup/pkg/fi/cloudup.(*ApplyClusterCmd).Run(0xc00087d4d8, {0x92d8e38, 0xc0006ff8f0})
	k8s.io/kops/upup/pkg/fi/cloudup/apply_cluster.go:708 +0x5769
main.RunUpdateCluster({0x92d8e38, 0xc0006ff8f0}, 0xc0008a1770, {0x9258360, 0xc000134018}, 0xc0007e4400)
	k8s.io/kops/cmd/kops/update_cluster.go:384 +0x1250
main.RunCreateCluster({0x92d8e38?, 0xc0006fe7b0?}, 0xc0008a1770, {0x9258360, 0xc000134018}, 0xc00077ef08)
	k8s.io/kops/cmd/kops/create_cluster.go:822 +0x1b2e
main.NewCmdCreateCluster.func1(0xc0007a0608, {0xc000073508?, 0x4?, 0x8381a53?})
	k8s.io/kops/cmd/kops/create_cluster.go:204 +0x195
github.com/spf13/cobra.(*Command).execute(0xc0007a0608, {0xc000072e08, 0x63, 0x63})
	github.com/spf13/cobra@v1.10.1/command.go:1015 +0xb02
github.com/spf13/cobra.(*Command).ExecuteC(0xc8fc300)
	github.com/spf13/cobra@v1.10.1/command.go:1148 +0x465
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.10.1/command.go:1071
github.com/spf13/cobra.(*Command).ExecuteContext(...)
	github.com/spf13/cobra@v1.10.1/command.go:1064
main.Execute({0x92d8d38, 0xc94fac0})
	k8s.io/kops/cmd/kops/root.go:100 +0x356
main.run({0x92d8d38, 0xc94fac0})
	k8s.io/kops/cmd/kops/main.go:55 +0x15a
main.main()
	k8s.io/kops/cmd/kops/main.go:29 +0x25


Development

Successfully merging this pull request may close these issues.

kops scalability tests doesn't have a way to configure scheduler QPS
