fix(cluster): fix deletion when pd addr is not updated #6548

Merged
ti-chi-bot[bot] merged 1 commit into pingcap:feature/v2 from liubog2008:liubo02/fix-deleting-when-pd-down
Nov 18, 2025

Conversation

@liubog2008
Member

If the PD address is not updated, instances cannot be deleted, even when the whole cluster is being deleted.

@ti-chi-bot ti-chi-bot bot requested a review from shonge November 18, 2025 08:22
@github-actions github-actions bot added the v2 for operator v2 label Nov 18, 2025
@ti-chi-bot ti-chi-bot bot added the size/M label Nov 18, 2025
@fgksgf fgksgf requested a review from Copilot November 18, 2025 08:22
@codecov-commenter

Codecov Report

❌ Patch coverage is 0% with 30 lines in your changes missing coverage. Please review.
✅ Project coverage is 40.35%. Comparing base (a4059d0) to head (29816b2).

Additional details and impacted files
@@              Coverage Diff               @@
##           feature/v2    #6548      +/-   ##
==============================================
- Coverage       40.39%   40.35%   -0.05%     
==============================================
  Files             366      366              
  Lines           20362    20383      +21     
==============================================
  Hits             8226     8226              
- Misses          12136    12157      +21     
Flag: unittest, Coverage: 40.35% <0.00%> (-0.05%) ⬇️

Flags with carried forward coverage won't be shown.


@fgksgf
Member

fgksgf commented Nov 18, 2025

/lgtm

@ti-chi-bot ti-chi-bot bot added the lgtm label Nov 18, 2025
@ti-chi-bot
Contributor

ti-chi-bot bot commented Nov 18, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fgksgf

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot
Contributor

ti-chi-bot bot commented Nov 18, 2025

[LGTM Timeline notifier]

Timeline:

  • 2025-11-18 08:29:00.254195332 +0000 UTC m=+303.903389779: ☑️ agreed by fgksgf.

@ti-chi-bot ti-chi-bot bot added the approved label Nov 18, 2025

Copilot AI left a comment


Pull Request Overview

This PR fixes a deletion bug where instances could not be deleted when the PD (Placement Driver) address was not updated, even when the entire cluster was being deleted. The fix reorders conditional checks in the reconciliation logic to prioritize cluster deletion over PD address validation.

  • Moved the PD address validation check to occur after the cluster deletion check across all component controllers
  • Ensures that when a cluster is being deleted, all subresources and finalizers are removed directly without requiring PD address to be registered
  • Applied consistently across 9 different component controllers (TSO, TiProxy, TiKV, TiFlash, TiDB, TiCDC, Scheduling, Scheduler, and ReplicationWorker)
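The reordering described above can be sketched as a minimal decision function. This is an illustrative sketch, not the operator's actual API: the `Cluster` type, field names, and `reconcileAction` helper are hypothetical, chosen only to show why checking deletion before the PD address matters.

```go
package main

import "fmt"

// Cluster is a hypothetical, simplified view of the cluster state
// as seen by a component controller.
type Cluster struct {
	Deleting bool
	PDAddr   string // empty means the PD address has not been registered/updated
}

// reconcileAction mirrors the reordered checks: cluster deletion is
// tested BEFORE PD address validation, so subresources and finalizers
// can be removed even when PD is down or its address was never updated.
func reconcileAction(c Cluster) string {
	if c.Deleting {
		// Cluster is being deleted: remove everything directly,
		// without requiring a reachable PD.
		return "delete"
	}
	if c.PDAddr == "" {
		// Not deleting and PD address not ready yet: wait and retry.
		return "requeue"
	}
	return "sync"
}

func main() {
	fmt.Println(reconcileAction(Cluster{Deleting: true}))         // delete, even with no PD addr
	fmt.Println(reconcileAction(Cluster{PDAddr: ""}))             // requeue
	fmt.Println(reconcileAction(Cluster{PDAddr: "pd-0.pd:2379"})) // sync
}
```

Before the fix, the two checks were in the opposite order, so a cluster whose PD address was never registered would requeue forever instead of letting its instances be deleted.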

Reviewed Changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated no comments.

Summary per file:

pkg/controllers/tso/builder.go: Reordered checks to allow TSO instance deletion when cluster is deleting
pkg/controllers/tiproxy/builder.go: Reordered checks to allow TiProxy instance deletion when cluster is deleting
pkg/controllers/tikv/builder.go: Reordered checks to allow TiKV instance deletion when cluster is deleting
pkg/controllers/tiflash/builder.go: Reordered checks to allow TiFlash instance deletion when cluster is deleting
pkg/controllers/tidb/builder.go: Reordered checks to allow TiDB instance deletion when cluster is deleting
pkg/controllers/ticdc/builder.go: Reordered checks to allow TiCDC instance deletion when cluster is deleting
pkg/controllers/scheduling/builder.go: Reordered checks to allow Scheduling instance deletion when cluster is deleting
pkg/controllers/scheduler/builder.go: Reordered checks to allow Scheduler instance deletion when cluster is deleting
pkg/controllers/replicationworker/builder.go: Reordered checks to allow ReplicationWorker instance deletion when cluster is deleting


Signed-off-by: liubo02 <liubo02@pingcap.com>
@liubog2008 liubog2008 force-pushed the liubo02/fix-deleting-when-pd-down branch from 29816b2 to 749f857 on November 18, 2025 12:37
@liubog2008
Member Author

/cherry-pick release-2.0

@ti-chi-bot
Member

@liubog2008: once the present PR merges, I will cherry-pick it on top of release-2.0 in the new PR and assign it to you.

Details

In response to this:

/cherry-pick release-2.0

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.

@ti-chi-bot ti-chi-bot bot merged commit 060d4d6 into pingcap:feature/v2 Nov 18, 2025
9 checks passed
@ti-chi-bot
Member

@liubog2008: new pull request created to branch release-2.0: #6550.

Details

In response to this:

/cherry-pick release-2.0

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.


5 participants