Always run cancellation handle for interrupted optimizations#6549

Merged
agourlay merged 3 commits into dev from always-handle-cancellation-for-interrupted-optimizations
May 20, 2025

Conversation

@agourlay
Member

@agourlay agourlay commented May 19, 2025

Seems to fix Crasher, will let CI chew on it.

This PR makes sure we handle the ongoing proxy segments whenever building a new optimized segment fails with any error.

@agourlay agourlay changed the title Always run cancelletatiob handle for interrupted optimizations Always run cancelletation handle for interrupted optimizations May 19, 2025
@agourlay agourlay changed the title Always run cancelletation handle for interrupted optimizations Always run cancellelation handle for interrupted optimizations May 19, 2025
@agourlay agourlay changed the title Always run cancellelation handle for interrupted optimizations Always run cancelation handle for interrupted optimizations May 19, 2025
Member

@timvisee timvisee left a comment


There are still some places where we might error without handling cancellation, for example:

ProxyIndexChange::Create(schema, version) => {
optimized_segment.create_field_index(
*version,
field_name,
Some(schema),
&hw_counter,
)?;
}
ProxyIndexChange::Delete(version) => {
optimized_segment.delete_field_index(*version, field_name)?;
}
ProxyIndexChange::DeleteIfIncompatible(version, schema) => {
optimized_segment
.delete_field_index_if_incompatible(*version, field_name, schema)?;
}
}

I think it'd be best to handle those too, possibly by moving everything that works with proxies into a sub-function.
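The sub-function approach suggested above can be sketched as follows. This is a minimal illustration of the pattern, not Qdrant's actual API: all names (`build_optimized_segment`, `optimize`, `OptimizeError`) are hypothetical, and the point is only that every `?` early return inside the sub-function funnels into one cleanup site in the caller.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Debug, PartialEq)]
enum OptimizeError {
    Cancelled,
    Other(String),
}

type OptimizeResult<T> = Result<T, OptimizeError>;

// All fallible work with the proxy segments lives in this one
// sub-function, so every `?` early return propagates to the single
// cleanup site in `optimize` below.
fn build_optimized_segment(stopped: &AtomicBool) -> OptimizeResult<u64> {
    if stopped.load(Ordering::Relaxed) {
        return Err(OptimizeError::Cancelled);
    }
    // ... apply ProxyIndexChange entries, flush, etc., each with `?` ...
    Ok(7) // hypothetical id of the new optimized segment
}

fn optimize(stopped: &AtomicBool) -> OptimizeResult<u64> {
    build_optimized_segment(stopped).map_err(|err| {
        // Single place where proxy segments would be unwrapped and
        // restored on ANY error, cancellation included.
        err
    })
}
```

With this shape, no individual `?` inside the sub-function can skip the proxy-segment handling, because the caller sees every error exactly once.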

@coderabbitai

This comment was marked as resolved.

use io::storage_version::StorageVersion;
use itertools::Itertools;
use parking_lot::{Mutex, RwLockUpgradableReadGuard};
use segment::common::operation_error::{OperationResult, check_process_stopped};
Member Author


I have unified the code on the local self.check_cancellation so we don't have two styles.
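For context, a cancellation check of this kind typically boils down to testing a shared atomic stop flag and returning a dedicated error once it is set. A minimal sketch, with hypothetical names rather than Qdrant's actual `check_cancellation` signature:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Debug, PartialEq)]
struct Cancelled;

// Hypothetical helper in the spirit of a `check_cancellation` method:
// error out as soon as the shared stop flag has been raised, so the
// caller can bail with `?` at well-defined points.
fn check_cancellation(stopped: &AtomicBool) -> Result<(), Cancelled> {
    if stopped.load(Ordering::Relaxed) {
        return Err(Cancelled);
    }
    Ok(())
}
```

Calling one helper like this everywhere (instead of mixing styles) keeps the cancellation semantics identical at every check site.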

@timvisee
Member

timvisee commented May 20, 2025

To tackle this, I suggest: #6557

That way we handle cancellation on all errors in a single place.

@timvisee timvisee changed the title Always run cancelation handle for interrupted optimizations Always run cancellation handle for interrupted optimizations May 20, 2025
* Move segment optimization into separate function, cancel in single place

* Don't use owned lock

* Unify cancellation check
@agourlay agourlay merged commit 7a39c33 into dev May 20, 2025
17 checks passed
@agourlay agourlay deleted the always-handle-cancellation-for-interrupted-optimizations branch May 20, 2025 10:59
generall pushed a commit that referenced this pull request May 22, 2025
* Always run cancelletatiob handle for interrupted optimizations

* Stick to one way of checking cancelation

* Move segment optimization in sub function, cancel in single place (#6557)

* Move segment optimization into separate function, cancel in single place

* Don't use owned lock

* Unify cancellation check

---------

Co-authored-by: Tim Visée <tim+github@visee.me>
n0x29a pushed a commit that referenced this pull request May 23, 2025
* Always run cancelletatiob handle for interrupted optimizations

* Stick to one way of checking cancelation

* Move segment optimization in sub function, cancel in single place (#6557)

* Move segment optimization into separate function, cancel in single place

* Don't use owned lock

* Unify cancellation check

---------

Co-authored-by: Tim Visée <tim+github@visee.me>
@coderabbitai coderabbitai bot mentioned this pull request Oct 6, 2025
@coderabbitai coderabbitai bot mentioned this pull request Jan 9, 2026