roachtest: Deflake multitenant upgrade test #138233

Merged: craig[bot] merged 1 commit into cockroachdb:master from rimadeodhar:fix-mt-upgrade-test, Jan 13, 2025

Conversation


@rimadeodhar rimadeodhar commented Jan 3, 2025

The multitenant upgrade test enforces different test scenarios while upgrading tenants in a mixed version state. The test enforces the following cases:

  1. Start storage cluster with binary version: x, cluster version: x
  2. Create some tenants with binary version: x and ensure they can connect to the cluster and run a workload.
  3. Using the mixed version test framework, upgrade the storage cluster with binary version: x+1, cluster version: x. In this mixed version state, create remaining tenants with binary version: x and run a workload.
  4. Finalize the storage cluster. At this point, the storage cluster has binary version: x+1 and cluster version: x+1
  5. Upgrade tenants with binary version: x+1 and confirm tenants can connect to the storage cluster and run a workload.

In #131847, the test was rewritten using the new mixed-version test (MVT) framework. However, this change exposed the test to a scenario that can cause it to fail at step 3 above. The MVT framework also runs the mixed-version scenario (i.e. with the tenant at the older binary version) while the cluster is in the finalizing stage. This scenario runs with a predefined probability. However, if we attempt to start tenants with the previous version (i.e. the version the cluster is being upgraded from) while the cluster is being finalized, the tenants rightfully fail to connect, which the test incorrectly interprets as a failure. As a result, this test failed occasionally after it was updated to use the new MVT framework. This PR modifies the test to ensure that in the finalizing state, we start the tenants with the correct version.

Epic: none
Fixes: #136447
Release note: None
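The decision the fix makes can be sketched in a small, self-contained Go program. All names here (`serviceContext`, `versionContext`, `tenantStartVersion`) are hypothetical stand-ins rather than the real roachtest/mixedversion API; the sketch only illustrates the rule the patch enforces: once the storage cluster is finalizing, new tenants must be started at the target binary version, since older binaries can no longer connect.

```go
package main

import "fmt"

// serviceContext is a hypothetical, simplified model of one service's
// upgrade state in the mixed-version test.
type serviceContext struct {
	Finalizing  bool
	FromVersion string // version the cluster is upgrading from
	ToVersion   string // version the cluster is upgrading to
}

// versionContext loosely mirrors the framework's test context.
type versionContext struct {
	System *serviceContext
	Tenant *serviceContext // nil in system-only deployments
}

// finalizing mirrors the idea behind the framework's Context.Finalizing:
// true if either the system or the virtual cluster is finalizing.
func (c *versionContext) finalizing() bool {
	if c.System.Finalizing {
		return true
	}
	return c.Tenant != nil && c.Tenant.Finalizing
}

// tenantStartVersion sketches the fix: while finalizing, start new
// tenants at the target version; otherwise the previous version is a
// valid mixed-version scenario.
func tenantStartVersion(c *versionContext) string {
	if c.finalizing() {
		return c.System.ToVersion
	}
	return c.System.FromVersion
}

func main() {
	mixed := &versionContext{System: &serviceContext{FromVersion: "v24.1", ToVersion: "v24.2"}}
	fmt.Println(tenantStartVersion(mixed)) // v24.1: old binary is fine mid-upgrade

	finalizing := &versionContext{System: &serviceContext{Finalizing: true, FromVersion: "v24.1", ToVersion: "v24.2"}}
	fmt.Println(tenantStartVersion(finalizing)) // v24.2: must use the target binary
}
```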


@rimadeodhar rimadeodhar added the backport-24.3.x Flags PRs that need to be backported to 24.3 label Jan 3, 2025
Comment on lines 223 to 224 (Contributor):

I think h.DefaultService could return h.Tenant in this case:

```go
func (h *Helper) DefaultService() *Service {
	if h.Tenant != nil {
		return h.Tenant
	}

	return h.System
}
```

I still have only a half-baked understanding here, but h.Tenant may not be in a finalizing state.

Anyway, while I'm not sure what I said above is correct, can we use h.IsFinalizing()? It's shorter and it would also cover the above case:

```go
// Finalizing returns whether the cluster is known to be
// finalizing. Since virtual clusters rely on the system tenant for
// various operations, this function returns `true` if either the
// system or virtual cluster are in the process of finalizing the
// upgrade.
func (c *Context) Finalizing() bool {
	systemFinalizing := c.System.Finalizing
	var tenantFinalizing bool
	if c.Tenant != nil {
		tenantFinalizing = c.Tenant.Finalizing
	}
	return systemFinalizing || tenantFinalizing
}
```


Oh the deployment mode is mixedversion.SystemOnlyDeployment and we are explicitly controlling the tenants, so h.Tenant may indeed be nil. Still, h.IsFinalizing would be a nice nit.

@rimadeodhar (Collaborator, Author)

TFTR!

bors r+


craig bot commented Jan 13, 2025

@craig craig bot merged commit 2e25969 into cockroachdb:master Jan 13, 2025

blathers-crl bot commented Jan 13, 2025

Based on the specified backports for this PR, I applied new labels to the following linked issue(s). Please adjust the labels as needed to match the branches actually affected by the issue(s), including adding any known older branches.


Issue #136447: branch-release-24.3.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

shubhamdhama added a commit to shubhamdhama/cockroach that referenced this pull request Mar 18, 2025
Summary: In multitenant upgrade tests, the TPC-C workload may fail if the
required binary is missing on a node. This issue can occur when no tenant
is created on nodes with the previous binary version, and the workload
attempts to run using that binary.

A sample excerpt from the upgrade plan illustrates the process:
```
├── start cluster at version "v23.2.20" (1)
├── wait for all nodes (:1-4) to acknowledge cluster version '23.2' on system tenant (2)
├── set cluster setting "storage.ingest_split.enabled" to 'false' on system tenant (3)
├── run "maybe create some tenants" (4)
├── upgrade cluster from "v23.2.20" to "v24.1.13"
│   ├── prevent auto-upgrades on system tenant by setting `preserve_downgrade_option` (5)
│   ├── upgrade nodes :1-4 from "v23.2.20" to "v24.1.13"
│   │   ├── restart node 2 with binary version v24.1.13 (6)
│   │   ├── restart node 1 with binary version v24.1.13 (7)
│   │   ├── allow upgrade to happen on system tenant by resetting `preserve_downgrade_option` (8)
│   │   ├── restart node 3 with binary version v24.1.13 (9)
│   │   ├── restart node 4 with binary version v24.1.13 (10)
│   │   └── run "run workload on tenants" (11)
│   ├── run "run workload on tenants" (12)
```

Once all the nodes are upgraded (step 10), we enter the finalizing phase in step 11. Our cluster configuration then looks like this:

```
[mixed-version-test/11_run-run-workload-on-tenants] 2025/03/13 10:47:21 runner.go:423: current cluster configuration:
                      n1           n2           n3           n4
released versions     v24.1.13     v24.1.13     v24.1.13     v24.1.13
binary versions       24.1         24.1         24.1         24.1
cluster versions      24.1         24.1         24.1         24.1
```

This implies that our tenant would also start with the target version as we
finalize (see cockroachdb#138233). Then we run the TPC-C workload on tenant nodes
using the version we are migrating from—likely for compatibility reasons.
However, the required binary may be absent if, during step 4, we did not
create any tenants with the previous version due to probabilistic
selection. The fix is simple: upload the binary used to run TPC-C. The
process first checks whether the binary is already present, so no extra
performance overhead occurs if it is.

Fixes: cockroachdb#140507
Informs: cockroachdb#142807
Release note: None
Epic: None
craig bot pushed a commit that referenced this pull request Mar 20, 2025
143055: roachtest: fix missing binary for TPC-C in multitenant upgrade test r=rimadeodhar a=shubhamdhama


Co-authored-by: Shubham Dhama <shubham.dhama@cockroachlabs.com>
blathers-crl bot pushed a commit that referenced this pull request Mar 20, 2025
blathers-crl bot pushed a commit that referenced this pull request Sep 16, 2025

Labels: backport-24.3.x (Flags PRs that need to be backported to 24.3)

Successfully merging this pull request may close: roachtest: multitenant-upgrade failed

3 participants