roachtest: Deflake multitenant upgrade test #138233
craig[bot] merged 1 commit into cockroachdb:master
Conversation
I think h.DefaultService could return h.Tenant in this case:

```go
func (h *Helper) DefaultService() *Service {
	if h.Tenant != nil {
		return h.Tenant
	}
	return h.System
}
```

I still have a half-baked understanding here, but h.Tenant may not be in a finalizing state.
Anyway, while I'm not sure what I said above is correct, can we use h.IsFinalizing()? It's shorter, and it would also cover the above case:

cockroach/pkg/cmd/roachtest/roachtestutil/mixedversion/context.go, lines 242 to 256 in 049c30a
Oh, the deployment mode is mixedversion.SystemOnlyDeployment and we are explicitly controlling the tenants, so h.Tenant may indeed be nil. Still, h.IsFinalizing would be a nice nit.
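The shape of the helper under discussion can be sketched as follows. This is a simplified, self-contained model, not the framework's actual API: the `Service` struct, its `Finalizing` field, and this `IsFinalizing` body are illustrative stand-ins for the real code in context.go. The point it demonstrates is the nil-tenant case the review calls out: under SystemOnlyDeployment the tenant service may be nil, so a finalizing check must not assume h.Tenant exists.

```go
package main

import "fmt"

// Service is a stand-in for the mixedversion framework's per-service state.
type Service struct {
	Name       string
	Finalizing bool
}

// Helper mirrors the shape discussed in the review: a system service that is
// always present, and a tenant service that may be nil (e.g. under
// SystemOnlyDeployment, where the test manages tenants itself).
type Helper struct {
	System *Service
	Tenant *Service
}

// DefaultService prefers the tenant service when one exists, falling back to
// the system service otherwise (the snippet proposed in the review).
func (h *Helper) DefaultService() *Service {
	if h.Tenant != nil {
		return h.Tenant
	}
	return h.System
}

// IsFinalizing reports whether any managed service is finalizing, covering
// the nil-tenant case without the caller having to check it.
func (h *Helper) IsFinalizing() bool {
	if h.Tenant != nil && h.Tenant.Finalizing {
		return true
	}
	return h.System.Finalizing
}

func main() {
	// Tenant is nil: DefaultService falls back to the system service, and
	// IsFinalizing still answers correctly.
	h := &Helper{System: &Service{Name: "system", Finalizing: true}}
	fmt.Println(h.DefaultService().Name)
	fmt.Println(h.IsFinalizing())
}
```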
The multitenant upgrade test enforces different test scenarios while upgrading tenants in a mixed-version state. The test enforces the following cases:

1. Start the storage cluster with binary version x, cluster version x.
2. Create some tenants with binary version x and ensure they can connect to the cluster and run a workload.
3. Using the mixed-version test framework, upgrade the storage cluster to binary version x+1, cluster version x. In this mixed-version state, create the remaining tenants with binary version x and run a workload.
4. Finalize the storage cluster. At this point, the storage cluster has binary version x+1 and cluster version x+1.
5. Upgrade tenants to binary version x+1 and confirm the tenants can connect to the storage cluster and run a workload.

In cockroachdb#131847, the test was rewritten using the new mixed-version test framework. However, this change exposed the test to a scenario that can cause it to fail at step 3 above. The MVT framework also runs the mixed-version test (i.e. with the tenant at the older binary version) while the cluster is in the finalizing stage; this scenario runs with a fixed probability. If we attempt to start the tenants with the previous version (i.e. the version the cluster is being upgraded from) while the cluster is being finalized, the tenants rightfully fail to connect, which the test incorrectly interprets as a failure. As a result, this test would fail occasionally after it was updated to use the new MVT framework. This PR modifies the test to ensure that in the finalizing state, we start the tenants with the right version.

Epic: none
Fixes: cockroachdb#136447
Release note: None
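The fix described above boils down to a version-selection decision when starting a tenant. A minimal sketch, with illustrative names (the function and its parameters are hypothetical, not the test's actual helpers): during the finalizing stage the storage cluster no longer accepts connections from the previous binary, so the tenant must be started with the version being upgraded to.

```go
package main

import "fmt"

// Version is a simplified stand-in for a release version string.
type Version string

// tenantStartVersion picks the binary a tenant should start with. In the
// finalizing stage the storage cluster rejects connections from the previous
// binary, so the tenant must use the upgrade's target version; otherwise the
// mixed-version scenario starts it at the version being upgraded from.
func tenantStartVersion(finalizing bool, fromVersion, toVersion Version) Version {
	if finalizing {
		return toVersion
	}
	return fromVersion
}

func main() {
	from, to := Version("v23.2.20"), Version("v24.1.13")
	fmt.Println(tenantStartVersion(false, from, to)) // mixed-version stage: previous binary
	fmt.Println(tenantStartVersion(true, from, to))  // finalizing stage: target binary
}
```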
a73006c to 2de665c
TFTR! bors r+
Based on the specified backports for this PR, I applied new labels to the following linked issue(s). Please adjust the labels as needed to match the branches actually affected by the issue(s), including adding any known older branches. Issue #136447: branch-release-24.3. 🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.
Summary: In multitenant upgrade tests, the TPC-C workload may fail if the
required binary is missing on a node. This issue can occur when no tenant
is created on nodes with the previous binary version, and the workload
attempts to run using that binary.
A sample excerpt from the upgrade plan illustrates the process:
```
├── start cluster at version "v23.2.20" (1)
├── wait for all nodes (:1-4) to acknowledge cluster version '23.2' on system tenant (2)
├── set cluster setting "storage.ingest_split.enabled" to 'false' on system tenant (3)
├── run "maybe create some tenants" (4)
├── upgrade cluster from "v23.2.20" to "v24.1.13"
│ ├── prevent auto-upgrades on system tenant by setting `preserve_downgrade_option` (5)
│ ├── upgrade nodes :1-4 from "v23.2.20" to "v24.1.13"
│ │ ├── restart node 2 with binary version v24.1.13 (6)
│ │ ├── restart node 1 with binary version v24.1.13 (7)
│ │ ├── allow upgrade to happen on system tenant by resetting `preserve_downgrade_option` (8)
│ │ ├── restart node 3 with binary version v24.1.13 (9)
│ │ ├── restart node 4 with binary version v24.1.13 (10)
│ │ └── run "run workload on tenants" (11)
│ ├── run "run workload on tenants" (12)
```
Once all the nodes are upgraded (step 10), we enter the finalizing phase in
step 11. Our cluster configuration then looks like this:
```
[mixed-version-test/11_run-run-workload-on-tenants] 2025/03/13 10:47:21 runner.go:423: current cluster configuration:
n1 n2 n3 n4
released versions v24.1.13 v24.1.13 v24.1.13 v24.1.13
binary versions 24.1 24.1 24.1 24.1
cluster versions 24.1 24.1 24.1 24.1
```
This implies that our tenant would also start with the target version as we
finalize (see cockroachdb#138233). Then we run the TPC-C workload on tenant
nodes using the version we are migrating from, likely for compatibility
reasons. However, the required binary may be absent if, during step 4, we did
not create any tenants with the previous version due to probabilistic
selection. The fix is simple: upload the binary used to run TPC-C. The
process first checks whether the binary is already present, so there is no
extra overhead when it is.
Fixes: cockroachdb#140507
Informs: cockroachdb#142807
Release note: None
Epic: None
143055: roachtest: fix missing binary for TPC-C in multitenant upgrade test r=rimadeodhar a=shubhamdhama
Co-authored-by: Shubham Dhama <shubham.dhama@cockroachlabs.com>