
roachtest: fix missing binary for TPC-C in multitenant upgrade test#143055

Merged
craig[bot] merged 1 commit into cockroachdb:master from shubhamdhama:fix-mt-upgrade-tpcc-missing-binary
Mar 20, 2025

Conversation

@shubhamdhama
Contributor

Summary: In multitenant upgrade tests, the TPC-C workload may fail if the required binary is missing on a node. This issue can occur when no tenant is created on nodes with the previous binary version, and the workload attempts to run using that binary.

A sample excerpt from the upgrade plan illustrates the process:

```
├── start cluster at version "v23.2.20" (1)
├── wait for all nodes (:1-4) to acknowledge cluster version '23.2' on system tenant (2)
├── set cluster setting "storage.ingest_split.enabled" to 'false' on system tenant (3)
├── run "maybe create some tenants" (4)
├── upgrade cluster from "v23.2.20" to "v24.1.13"
│   ├── prevent auto-upgrades on system tenant by setting `preserve_downgrade_option` (5)
│   ├── upgrade nodes :1-4 from "v23.2.20" to "v24.1.13"
│   │   ├── restart node 2 with binary version v24.1.13 (6)
│   │   ├── restart node 1 with binary version v24.1.13 (7)
│   │   ├── allow upgrade to happen on system tenant by resetting `preserve_downgrade_option` (8)
│   │   ├── restart node 3 with binary version v24.1.13 (9)
│   │   ├── restart node 4 with binary version v24.1.13 (10)
│   │   └── run "run workload on tenants" (11)
│   ├── run "run workload on tenants" (12)
```
Once all the nodes are upgraded (step 10), we enter the finalizing phase in step 11. Our cluster configuration then looks like this:

```
[mixed-version-test/11_run-run-workload-on-tenants] 2025/03/13 10:47:21 runner.go:423: current cluster configuration:
                      n1           n2           n3           n4
released versions     v24.1.13     v24.1.13     v24.1.13     v24.1.13
binary versions       24.1         24.1         24.1         24.1
cluster versions      24.1         24.1         24.1         24.1
```

This implies that our tenant would also start with the target version as we finalize (see #138233). We then run the TPC-C workload on tenant nodes using the binary of the version we are migrating from, likely for compatibility reasons. However, that binary may be absent if, during step 4, we did not create any tenants with the previous version due to probabilistic selection. The fix is simple: upload the binary used to run TPC-C before starting the workload. The upload first checks whether the binary is already present, so no extra overhead is incurred if it is.

Fixes: #140507
Informs: #142807
Release note: None
Epic: None

@cockroach-teamcity
Member

This change is Reviewable

Contributor

Copilot AI left a comment


Pull Request Overview

This PR fixes an issue in the multitenant upgrade tests where the TPC-C workload could fail due to a missing binary. The changes remove the previously passed binaryPath parameter from runTPCC and instead dynamically upload the required cockroach binary if it isn’t already present.

  • Updated runTPCC signature to rely on the version from h.Context() rather than an externally computed binaryPath.
  • Integrated a binary upload step via clusterupgrade.UploadCockroach to ensure the TPC-C binary is available on tenant nodes.
Comments suppressed due to low confidence (1)

pkg/cmd/roachtest/tests/multitenant_upgrade.go:190

  • [nitpick] Consider enhancing the error message to include additional context (such as the node or tenant identifier) to aid troubleshooting if the upload fails.
binaryPath, err := clusterupgrade.UploadCockroach(ctx, t, l, c, nodes, version)

@shubhamdhama
Contributor Author

TFTR!

bors r=rimadeodhar

@craig
Contributor

craig bot commented Mar 20, 2025

@craig craig bot merged commit 4d166e0 into cockroachdb:master Mar 20, 2025
24 checks passed
@blathers-crl

blathers-crl bot commented Mar 20, 2025

Based on the specified backports for this PR, I applied new labels to the following linked issue(s). Please adjust the labels as needed to match the branches actually affected by the issue(s), including adding any known older branches.


Issue #140507: branch-release-25.1.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

@cthumuluru-crdb
Contributor

blathers backport 24.3

@blathers-crl

blathers-crl bot commented Sep 16, 2025

Based on the specified backports for this PR, I applied new labels to the following linked issue(s). Please adjust the labels as needed to match the branches actually affected by the issue(s), including adding any known older branches.


Issue #140507: branch-release-24.3.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.



Development

Successfully merging this pull request may close these issues.

roachtest: multitenant-upgrade failed
