Implement `reverse` performance optimization #4775

RyanL1997 merged 26 commits into opensearch-project:main
QQ: I recall the major comment on the original PR was about early optimization in the analyzer layer. Is this new PR trying to address that concern? Ref: #4056 (comment)

Hi Chen, I think that's a valid concern. However, after trying it out, I think it has significant complexity compared to the current approach.
noCharger left a comment:
Can you add benchmark results comparing before vs. after?
Signed-off-by: Kai Huang <ahkcs@amazon.com>
@songkant-aws Updated the PR with a new approach to resolve the issue mentioned; details are in the PR description.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java`:
- Around line 749-775: The current shuttle in insertReversedSortInTree inserts
the reversed LogicalSort immediately after the first Sort with collation, which
can place it below a limit barrier and change semantics; update the shuttle to
treat a Sort with getFetch() != null or getOffset() != null as a barrier and,
when such a barrier is encountered during traversal, wrap that barrier (the
visited barrier node) with the reversed sort instead of inserting below the
downstream Sort. Concretely, inside the anonymous RelHomogeneousShuttle in
insertReversedSortInTree, detect org.apache.calcite.rel.core.Sort where
getFetch()!=null || getOffset()!=null and, upon visiting it, if the downstream
collation Sort was found during traversal, return
org.apache.calcite.rel.logical.LogicalSort.create(visitedBarrier,
reversedCollation, null, null) so the reversed sort sits above the limit; keep
using sortFound to avoid multiple inserts and ensure you still recurse via
super.visit(other) to obtain visited children before wrapping.
- Around line 781-806: The current in-place replacement uses
MetadataQuery.collations() (which can return input-propagated collation) and
then replaces a Sort node including its fetch, which breaks fetch-only Sort
semantics; change the logic in CalciteRelNodeVisitor so that when currentNode is
an instance of org.apache.calcite.rel.core.Sort you first inspect
existingSort.getCollation(): if existingSort.getCollation() is non-empty,
compute reversedCollation from that and do the in-place replacement via
PlanUtils.replaceTop as before; but if existingSort.getCollation() is null or
empty (fetch-only Sort), do NOT replaceTop with a new LogicalSort containing
fetch — instead compute the reversed collation from metadata (if needed) but
call context.relBuilder.sort(reversedCollation) to add a separate Sort after the
limit so the "limit then reverse" semantics are preserved.
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java`:
- Around line 21-111: Update the CalciteNoPushdownIT class javadoc to reflect
the actual suite contents (only CalciteExplainIT is enabled) and add an inline
code comment in the class (near the `@Suite.SuiteClasses` annotation) explaining
why the remaining tests are commented out or intentionally excluded (e.g.,
resource/time constraints, flaky tests, or strategic coverage choice),
referencing the class name CalciteNoPushdownIT and the enabled test symbol
CalciteExplainIT.class so maintainers can understand the reduced scope and
rationale.
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`:
- Around line 288-313: The top comment in the testStreamstatsByWithReverse
method is incorrect: replace the line that says "Test that reverse is ignored
after streamstats with partitioning (by clause)" with a brief description that
reverse is effective when backtracking is enabled (i.e., streamstats with
partitioning supports reversing the __stream_seq__ order), so the test's
expectations (reversed rows) match the comment; update only the comment in
testStreamstatsByWithReverse to reflect that behavior.
In `@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLReverseTest.java`:
- Around line 214-235: Add a new unit test method named testSortHeadReverse that
constructs the PPL "source=EMP | sort SAL | head 5 | reverse", calls
getRelNode(ppl), and asserts the logical plan via verifyLogical(root,
expectedLogical) where expectedLogical represents a top-level DESC reverse sort
above an ASC sort+fetch (e.g., LogicalSort(sort0=[$5], dir0=[DESC-nulls-last])
above LogicalSort(sort0=[$5], dir0=[ASC-nulls-first], fetch=[5]) above
LogicalTableScan(...)); this ensures the reverse is applied after the limit and
prevents the backtracking bug.
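Both the barrier handling above and the proposed `testSortHeadReverse` hinge on the same point: reversing above a limit keeps the limited rows and flips their order, while reversing below the limit selects different rows entirely. A standalone sketch in plain Java lists (not the Calcite planner; the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LimitReverseSketch {
    // "sort | head n | reverse": limit first, then flip the kept rows.
    static List<Integer> limitThenReverse(List<Integer> sorted, int n) {
        List<Integer> kept = new ArrayList<>(sorted.subList(0, n));
        Collections.reverse(kept);
        return kept;
    }

    // Reversing below the limit instead selects different rows entirely.
    static List<Integer> reverseThenLimit(List<Integer> sorted, int n) {
        List<Integer> all = new ArrayList<>(sorted);
        Collections.reverse(all);
        return new ArrayList<>(all.subList(0, n));
    }

    public static void main(String[] args) {
        List<Integer> sorted = List.of(1, 2, 3, 4, 5); // output of an ASC sort
        System.out.println(limitThenReverse(sorted, 3)); // [3, 2, 1]
        System.out.println(reverseThenLimit(sorted, 3)); // [5, 4, 3]
    }
}
```

This is why the review asks for a Sort with `fetch`/`offset` to act as a barrier: the reversed sort must land above it.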
🧹 Nitpick comments (1)
integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java (1)
181-192: Clarify the "natural order" assumption in the comment. Line 190 states "Without sort or `@timestamp`, reverse is ignored, so data comes in natural order" and line 191 claims this is "ascending by account_number". OpenSearch's natural order is not guaranteed to be by account_number. Consider updating the comment to clarify this is the expected order for the specific test data loading, or use verifyDataRows (unordered) if the specific order isn't critical to the test's purpose.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLTimechartTest.java`:
- Around line 365-419: Add null/boundary/error tests in CalcitePPLTimechartTest
to cover reverse+timechart edge cases: add a test method (e.g.,
testTimechartWithReverse_NullTimefield) that runs "source=events | timechart
timefield=@timestamp span=1m count() | reverse" on data where `@timestamp` can be
NULL and verify the logical plan and SQL via getRelNode, verifyLogical and
verifyPPLToSparkSQL include proper IS NULL handling or expected ordering; add a
boundary/empty-input test (e.g., testTimechartWithReverse_EmptySource) that runs
the pipeline on an empty source and asserts the plan still generates a valid
aggregate/sort; and add an invalid-input test (e.g.,
testTimechartWithReverse_InvalidTimefield) that calls getRelNode with a
nonexistent timefield and asserts the parser/optimizer throws the expected
exception using assertThrows (or the project's error-assertion helper). Ensure
each new test references the same helper methods (getRelNode, verifyLogical,
verifyPPLToSparkSQL) and asserts the specific failure message or null-handling
behavior.
```java
// ==================== Timechart with Reverse tests ====================
// These tests verify that reverse works correctly with timechart.
// Timechart always adds a sort at the end of its plan, so reverse will
// find the collation via metadata query (tier 1) and flip the sort direction.

@Test
public void testTimechartWithReverse() {
  // Timechart adds ORDER BY @timestamp ASC at the end
  // Reverse should flip it to DESC
  String ppl = "source=events | timechart count() | reverse";
  RelNode root = getRelNode(ppl);
  // Reverse replaces the timechart's ASC sort in-place with DESC
  String expectedLogical =
      "LogicalSort(sort0=[$0], dir0=[DESC])\n"
          + "  LogicalProject(@timestamp=[$0], count()=[$1])\n"
          + "    LogicalAggregate(group=[{0}], count()=[COUNT()])\n"
          + "      LogicalProject(@timestamp0=[SPAN($0, 1, 'm')])\n"
          + "        LogicalFilter(condition=[IS NOT NULL($0)])\n"
          + "          LogicalTableScan(table=[[scott, events]])\n";
  verifyLogical(root, expectedLogical);

  String expectedSparkSql =
      "SELECT SPAN(`@timestamp`, 1, 'm') `@timestamp`, COUNT(*) `count()`\n"
          + "FROM `scott`.`events`\n"
          + "WHERE `@timestamp` IS NOT NULL\n"
          + "GROUP BY SPAN(`@timestamp`, 1, 'm')\n"
          + "ORDER BY 1 DESC NULLS FIRST";
  verifyPPLToSparkSQL(root, expectedSparkSql);
}

@Test
public void testTimechartWithCustomTimefieldAndReverse() {
  // Timechart with custom timefield should also work with reverse
  // The sort is on created_at (the custom field), not @timestamp
  String ppl = "source=events | timechart timefield=created_at span=1month count() | reverse";
  RelNode root = getRelNode(ppl);

  // Reverse replaces the timechart's ASC sort in-place with DESC
  String expectedLogical =
      "LogicalSort(sort0=[$0], dir0=[DESC])\n"
          + "  LogicalProject(created_at=[$0], count()=[$1])\n"
          + "    LogicalAggregate(group=[{0}], count()=[COUNT()])\n"
          + "      LogicalProject(created_at0=[SPAN($1, 1, 'M')])\n"
          + "        LogicalFilter(condition=[IS NOT NULL($1)])\n"
          + "          LogicalTableScan(table=[[scott, events]])\n";
  verifyLogical(root, expectedLogical);

  String expectedSparkSql =
      "SELECT SPAN(`created_at`, 1, 'M') `created_at`, COUNT(*) `count()`\n"
          + "FROM `scott`.`events`\n"
          + "WHERE `created_at` IS NOT NULL\n"
          + "GROUP BY SPAN(`created_at`, 1, 'M')\n"
          + "ORDER BY 1 DESC NULLS FIRST";
  verifyPPLToSparkSQL(root, expectedSparkSql);
}
```
Add null/boundary/error coverage for the new reverse+timechart paths.
These new tests cover the happy path, but the test guidelines require null-input, boundary, and invalid-input coverage for new functionality. Please add at least one case (e.g., NULL timefield values and an invalid timefield) to satisfy those requirements. As per coding guidelines: "NULL input tests must be included for all new functions", "Include boundary condition tests (min/max values, empty inputs) for all new functions", and "Include error condition tests (invalid inputs, exceptions) for all new functions".
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`:
- Around line 181-192: In testReverseIgnoredWithoutSortOrTimestamp, remove the
brittle order assertion by replacing the call to verifyDataRowsInOrder(result,
rows(1), rows(6), rows(13)) with an order-independent check: fetch the result
rows from executeQuery (same TEST_INDEX_BANK query) and assert that the set of
returned account_number values equals the expected set {1,6,13} (or compare the
result to a baseline executeQuery without the reverse pipe), using the test
helper that asserts unordered equality (or implement a small helper to compare
collections ignoring order) so the test no longer depends on natural execution
order; keep verifySchema as-is.
🧹 Nitpick comments (2)
ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLReverseTest.java (1)

252-257: Clarify comment: SAL is available during sorting, removed only in final output. The comment states "The sort field (SAL) is removed from schema by fields command", which could be misleading. Looking at the logical plan, the `LogicalProject(ENAME=[$1])` sits on top of both sorts, meaning SAL is fully available when sorting occurs. The projection only removes SAL from the final output, not from the sort's scope. Consider rewording to clarify the behavior:

```diff
 @Test
 public void testSortFieldsReverse() {
-  // Test backtracking: sort on SAL, then project only ENAME, then reverse
-  // The sort field (SAL) is removed from schema by fields command
-  // But reverse should still work by backtracking to find the sort
+  // Test backtracking: sort on SAL, then project only ENAME, then reverse
+  // The sort happens before projection, so SAL is available for sorting
+  // Reverse backtracks through the project to find and reverse the sort
   String ppl = "source=EMP | sort SAL | fields ENAME | reverse";
```

core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java (1)

783-845: Method exceeds 50-line guideline; consider extracting helper methods. The implementation is correct and properly handles all edge cases, including the metadata propagation issue:

- Lines 800-803 check `existingSort.getCollation()` (the node's own collation) rather than relying solely on metadata.
- The `fetch == null && offset == null` check ensures fetch-only Sorts go to the else branch, correctly adding the reversed sort on top.

However, at ~62 lines this method exceeds the 50-line guideline for this file. Consider extracting the main branches into focused helpers:

♻️ Suggested structure

```java
private void handleReverseSortInPlace(Sort existingSort, CalcitePlanContext context) {
  // Lines 804-811: Pure collation sort replacement
}

private void handleReverseViaBacktracking(RelNode currentNode, CalcitePlanContext context) {
  // Lines 820-842: Backtracking + @timestamp fallback
}
```
```java
@Test
public void testReverseIgnoredWithoutSortOrTimestamp() throws IOException {
  // Test that reverse is ignored when there's no explicit sort and no @timestamp field
  // BANK index doesn't have @timestamp, so reverse should be ignored
  JSONObject result =
      executeQuery(
          String.format("source=%s | fields account_number | reverse | head 3", TEST_INDEX_BANK));
  verifySchema(result, schema("account_number", "bigint"));
  // Without sort or @timestamp, reverse is ignored, so data comes in natural order
  // The first 3 documents in natural order (ascending by account_number)
  verifyDataRowsInOrder(result, rows(1), rows(6), rows(13));
}
```
Avoid asserting natural order for the no-sort case.

Lines 189-191 assume a deterministic "natural order" without an explicit sort, which can be unstable and make the test flaky. Prefer an order-independent assertion (or compare to a baseline query without reverse) so the test doesn't rely on execution order.

🔧 One option: remove head and assert unordered rows

```diff
-        String.format("source=%s | fields account_number | reverse | head 3", TEST_INDEX_BANK));
+        String.format("source=%s | fields account_number | reverse", TEST_INDEX_BANK));
 ...
-    // Without sort or @timestamp, reverse is ignored, so data comes in natural order
-    // The first 3 documents in natural order (ascending by account_number)
-    verifyDataRowsInOrder(result, rows(1), rows(6), rows(13));
+    // Without sort or @timestamp, reverse is ignored; assert content without order.
+    verifyDataRows(result, rows(1), rows(6), rows(13), rows(18), rows(20), rows(25), rows(32));
```

As per coding guidelines, "Tests must not rely on execution order; ensure test independence".
```java
if (node instanceof org.apache.calcite.rel.core.Sort) {
  org.apache.calcite.rel.core.Sort sort = (org.apache.calcite.rel.core.Sort) node;
```

Nit: Use the imported class: `node instanceof Sort`.
```java
private RelNode insertReversedSortInTree(
    RelNode root, RelCollation reversedCollation, CalcitePlanContext context) {
  return root.accept(
      new org.apache.calcite.rel.RelHomogeneousShuttle() {
```

Nit: Same as the above comment; use the imported class in this method.
```java
if (currentNode instanceof org.apache.calcite.rel.core.Sort) {
  org.apache.calcite.rel.core.Sort existingSort =
      (org.apache.calcite.rel.core.Sort) currentNode;
```

Nit: Same here; use the imported class in this method.
```java
if (existingSort.getCollation() != null
    && !existingSort.getCollation().getFieldCollations().isEmpty()
    && existingSort.fetch == null
    && existingSort.offset == null) {
  // Pure collation sort (no fetch/offset) - replace in-place to avoid consecutive
  // sorts. Calcite's physical optimizer merges consecutive LogicalSort nodes and may
  // discard the reversed direction. Replacing in-place avoids this issue.
  RelCollation reversedFromSort = PlanUtils.reverseCollation(existingSort.getCollation());
  RelNode replacedSort =
      org.apache.calcite.rel.logical.LogicalSort.create(
          existingSort.getInput(), reversedFromSort, null, null);
  PlanUtils.replaceTop(context.relBuilder, replacedSort);
```
Is this observation correct? The logic seems odd, because it implies an explicit query like `source=t | sort -a | sort a` would be equivalent to `source=t | sort -a`. I don't think the sorts are merged. The worst case is a double sort when there is no pushdown; in the pushdown case, the sort will be pushed twice and the top sort will finally override the scan's traitset.

I checked the PR out in my local env. When I remove this piece of code, the explainIT still gives me the expected collation in the physical plan.
The in-place replacement is intentional and necessary. The issue is that explainIT only shows the logical plan before Calcite's physical optimizer runs. During actual execution, SortRemoveRule merges consecutive LogicalSort nodes and keeps the lower sort's direction, effectively discarding the reversed one. So for a query like sort a | reverse, the plan becomes Sort(ASC) → Sort(DESC), and the optimizer collapses it back to Sort(ASC).
I confirmed this by running the integration tests (CalciteReverseCommandIT) — they fail without the in-place replacement. The workaround replaces the existing Sort node directly instead of stacking a new one on top, so there are no consecutive sorts for the optimizer to merge.
detailed description regarding this issue: #5125
I think the conclusion is not true. Explained plans in explainIT include both original logical plan and optimized physical plan. Calcite's VolcanoPlanner optimizes both logical plans and physical plans, then the final optimized physical plan will generate a compiled Java code to execute the query. Rules never execute the query.
The SortRemoveRule replaces an explicit sort operator with a trait requirement on the child RelSubset. It doesn't do the simple merge. It assigns ordering trait to different RelSubsets. VolcanoPlanner will compute the cheapest plan from different RelSubsets. A simplified process is like 1st round SortRemoveRule changes Sort(1 Desc) - Sort(1 Asc) - RelSubset#0(NONE) => Sort(1 Desc) - RelSubset#1(1 Asc), 2nd round SortRemoveRule changes Sort(1 Desc) - RelSubset#2(1 Asc) => RelSubset#3(1 Desc). The final plan must satisfy the (1 Desc) ordering requirements.
The actual root cause is a shallow copy bug in AggPushDownAction. In the case of the testReverseAfterAggregationWithSort failure you mentioned in the issue, the actual optimized physical plan is:
// query: source=%s | stats count() as c by gender | sort gender | reverse
calcite:
logical: |
LogicalSystemLimit(sort0=[$1], dir0=[DESC-nulls-last], fetch=[10000], type=[QUERY_SIZE_LIMIT])
LogicalSort(sort0=[$1], dir0=[DESC-nulls-last])
LogicalSort(sort0=[$1], dir0=[ASC-nulls-first])
LogicalProject(c=[$1], gender=[$0])
LogicalAggregate(group=[{0}], c=[COUNT()])
LogicalProject(gender=[$4])
CalciteLogicalIndexScan(table=[[OpenSearch, opensearch-sql_test_index_bank]])
physical: |
CalciteEnumerableIndexScan(table=[[OpenSearch, opensearch-sql_test_index_bank]], PushDownContext=[[AGGREGATION->rel#50:LogicalAggregate.NONE.[](input=RelSubset#49,group={0},c=COUNT()), PROJECT->[c, gender], SORT->[1 DESC LAST], LIMIT->10000], OpenSearchRequestBuilder(sourceBuilder={"from":0,"size":0,"timeout":"1m","aggregations":{"composite_buckets":{"composite":{"size":10000,"sources":[{"gender":{"terms":{"field":"gender.keyword","missing_bucket":true,"missing_order":"first","order":"asc"}}}]}}}}, requestedTotalSize=10000, pageSize=null, startFrom=0)])
You can see PushDownContext has correct SORT action, aka SORT->[1 DESC LAST], which means SortRemoveRule didn't pick the ASC-nulls-first as winner. The problem is the agg request builder doesn't have correct ordering on buckets. You can easily verify SortRemoveRule doesn't do anything wrong by manually removing our own rules like SortIndexScanRule. The testReverseAfterAggregationWithSort will pass without our SortIndexScanRule.
Regardless of the actual bug, I think your current approach still has some value. The collapse of consecutive sorts will reduce the size of the VolcanoPlanner's optimization space. You can keep the current design.
```java
// Found a collation Sort - insert reversed sort on top of it
if (sort.getCollation() != null
    && !sort.getCollation().getFieldCollations().isEmpty()) {
  sortFound = true;
  RelNode visitedSort = super.visit(other);
  return org.apache.calcite.rel.logical.LogicalSort.create(
      visitedSort, reversedCollation, null, null);
```
If the merge-sort theory below is correct (I think it needs careful verification), this part should replace the visitedSort with the reversedSort. The current behavior appends visitedSort as the child of reversedSort, which creates consecutive sorts.
Good catch! I've updated this to replace the sort in-place: instead of super.visit(other) (which returns the Sort node and stacks on top), it now uses sort.getInput().accept(this) to visit only the children, then creates a new Sort with the reversed collation using those children as input. This is the same in-place replacement pattern used in visitReverse above.
songkant-aws left a comment: LGTM so far. Please let others review and approve.
This PR is stalled because it has been open for 2 weeks with no activity.

@LantaoJin Please also review this PR when you're free, thanks!

This PR is stalled because it has been open for 2 weeks with no activity.
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

PR Code Suggestions ✨

Explore these optional code suggestions:
* Init CLAUDE.md (#5259)
* Add label to exempt specific PRs from stalled labeling (#5263)
* Implement `reverse` performance optimization (#4775)
* Add songkant-aws as maintainer (#5244)
* Move some maintainers from active to Emeritus (#5260): move inactive current maintainers to Emeritus, remove the affiliation column for emeritus maintainers, fix formatting in MAINTAINERS.md
* Add query cancellation support via _tasks/_cancel API for PPL queries (#5254): refactor PPL query cancellation to a cooperative model and apply other PR suggestions
* Add Calcite native SQL planning in UnifiedQueryPlanner (#5257):
  * feat(api): Add a Calcite native SQL planning path (SqlParser → SqlValidator → SqlToRelConverter → RelNode), bypassing the ANTLR parser used by PPL; UnifiedQueryPlanner uses PlanningStrategy to dispatch CalciteNativeStrategy vs. CustomVisitorStrategy; UnifiedQueryContext uses SqlParser.Config with Casing.UNCHANGED to preserve lowercase OpenSearch index names
  * test(api): Add SQL planner tests covering SELECT, WHERE, GROUP BY, JOIN, ORDER BY, subquery, case sensitivity, namespaces, and error handling; refactor UnifiedQueryTestBase with a queryType() hook for subclass override
  * perf(benchmarks): Add SQL queries to UnifiedQueryBenchmark with language (PPL/SQL) and queryPattern param dimensions; remove the separate UnifiedSqlQueryBenchmark
  * docs(api): Update README to reflect SQL support in UnifiedQueryPlanner
  * fix(api): Normalize trailing whitespace in assertPlan comparison (RelOptUtil.toString() appends a trailing newline after the last plan node; also normalize \r\n for Windows CI compatibility)
* [Feature] Support graphLookup with literal value as its start (#5253): support graphLookup as a top-level PPL command (#5243) with literal start values instead of requiring piped input from source=, e.g. graphLookup table start="value" edge=from-->to as output; add explain plan tests in CalciteExplainIT, v2-unsupported tests in NewAddedCommandsIT, and CalcitePPLGraphLookupIT to the CalciteNoPushdownIT suite; skip graphLookup tests when pushdown is disabled (required by the implementation)
* Apply docs website feedback to ppl functions (#5207): fix the json_append example and links
* feat(api): Add profiling support to unified query API (#5268): measures time spent in each query phase (analyze, optimize, execute, format); opt-in via UnifiedQueryContext.builder().profiling(true) with thread-local QueryProfiling/ProfileContext lifecycle, a UnifiedQueryContext.measure() API, and auto-profiling in UnifiedQueryPlanner (analyze) and the compiler (optimize)
* Add UnifiedQueryParser with language-specific implementations (#5274): extract parsing logic from UnifiedQueryPlanner into a UnifiedQueryParser interface with PPLQueryParser (returns UnresolvedPlan) and CalciteSqlQueryParser (returns SqlNode); UnifiedQueryContext owns the parser instance, created eagerly by the builder; each implementation receives only its required dependencies
* Fix flaky TPC-H Q1 test due to bugs in `MatcherUtils.closeTo()` (#5283): change to ULP-aware comparison to handle floating-point precision differences

Signed-off-by: Heng Qian <qianheng@amazon.com>
Signed-off-by: Lantao Jin <ltjin@amazon.com>
Signed-off-by: Simeon Widdis <sawiddis@gmail.com>
Signed-off-by: Sunil Ramchandra Pawar <pawar_sr@apple.com>
Signed-off-by: Chen Dai <daichen@amazon.com>
Signed-off-by: Ritvi Bhatt <ribhatt@amazon.com>
Signed-off-by: ritvibhatt <53196324+ritvibhatt@users.noreply.github.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Co-authored-by: qianheng <qianheng@amazon.com>
Co-authored-by: Simeon Widdis <sawiddis@gmail.com>
Co-authored-by: Jialiang Liang <jiallian@amazon.com>
Co-authored-by: Lantao Jin <ltjin@amazon.com>
Co-authored-by: Sunil Ramchandra Pawar <pawar_sr@apple.com>
Co-authored-by: Chen Dai <daichen@amazon.com>
Co-authored-by: ritvibhatt <53196324+ritvibhatt@users.noreply.github.com>
Description
Originally from #4056 by @selsong
This PR implements a significant performance optimization for the `reverse` command by eliminating the expensive ROW_NUMBER() window function and implementing three-tier logic based on the query context.

Resolves #3924
Resolves #5125
Motivation
The previous implementation used a ROW_NUMBER() window function, which:
Solution: Three-Tier Reverse Logic
The `reverse` command now follows context-aware behavior:

Key Implementation Details

When `reverse` is applied directly on top of a sort, the sort node is replaced in-place using `PlanUtils.replaceTop()` rather than stacking a new sort. This avoids a Calcite optimizer issue where consecutive `LogicalSort` nodes get merged, discarding the reversed direction (#5125: reverse optimization fails when creating consecutive sorts due to Calcite physical optimizer merging).

When `reverse` is separated from the sort by non-blocking operators (`where`, `eval`, `fields`), the implementation walks the RelNode tree to find the upstream sort and inserts a reversed sort via `RelHomogeneousShuttle`.

Examples
1. Reverse with Explicit Sort (Primary Use Case)
Query:
Behavior: Flips all sort directions:
`+balance, -firstname` → `-balance, +firstname`

Logical Plan:
Physical Plan: (efficiently pushes reversed sort to OpenSearch)
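The direction flip in example 1 is easy to model in isolation. A minimal sketch with a stand-in for Calcite's field collations (the `FieldSort` record and `reverse` helper are illustrative, not the real `PlanUtils.reverseCollation` signature); note the flip is an involution, which is why the double reverse in example 4 below cancels out:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ReverseCollationSketch {
    // Stand-in for one sort key: field index plus direction.
    record FieldSort(int field, boolean ascending) {
        @Override public String toString() { return (ascending ? "+" : "-") + field; }
    }

    // Flip ASC <-> DESC on every key while preserving key order,
    // mirroring the idea behind PlanUtils.reverseCollation on a RelCollation.
    static List<FieldSort> reverse(List<FieldSort> collation) {
        return collation.stream()
                .map(f -> new FieldSort(f.field(), !f.ascending()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // +balance (field 0), -firstname (field 1)
        List<FieldSort> c = List.of(new FieldSort(0, true), new FieldSort(1, false));
        System.out.println(reverse(c));           // [-0, +1]
        System.out.println(reverse(reverse(c)));  // [+0, -1]  (double reverse cancels)
    }
}
```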
2. Reverse with @timestamp (Time-Series Optimization)
Query:
Behavior: When no explicit sort exists but the index has an @timestamp field, reverse automatically sorts by @timestamp DESC to show most recent events first.
Use Case: Common pattern in log analysis - users want recent logs first
Logical Plan:
3. Reverse Ignored (No-Op Case)
Query:
Behavior: When there's no explicit sort AND no @timestamp field, reverse is ignored. Results appear in natural index order.
Rationale: Avoid expensive operations when reverse has no meaningful semantic interpretation.
Logical Plan:
Note: No sort node is added - reverse is completely ignored.
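Examples 1-3 above map onto the three tiers. As a rough sketch of that decision (the enum and method names are illustrative, not the actual CalciteRelNodeVisitor API):

```java
public class ReverseTierSketch {
    enum ReverseAction { FLIP_EXISTING_SORT, SORT_TIMESTAMP_DESC, NO_OP }

    // Tier 1: a reachable sort exists -> flip its directions.
    // Tier 2: no sort, but the index has an @timestamp field -> sort @timestamp DESC.
    // Tier 3: neither -> reverse is ignored entirely (no-op).
    static ReverseAction decide(boolean hasReachableSort, boolean hasTimestampField) {
        if (hasReachableSort) return ReverseAction.FLIP_EXISTING_SORT;
        if (hasTimestampField) return ReverseAction.SORT_TIMESTAMP_DESC;
        return ReverseAction.NO_OP;
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true));    // FLIP_EXISTING_SORT
        System.out.println(decide(false, true));   // SORT_TIMESTAMP_DESC
        System.out.println(decide(false, false));  // NO_OP
    }
}
```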
4. Double Reverse (Cancellation)
Query:
Behavior: Two reverses cancel each other out, returning to original sort order.
Logical Plan:
Final sort order matches original query:
`+balance, -firstname`

5. Backtracking Through Non-Blocking Operators
Query:
Behavior: Reverse walks past the `where` (filter) node to find and reverse the upstream sort.

Logical Plan:
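The backtracking in this example can be pictured with a toy plan tree (the node types here are illustrative; the real implementation walks Calcite RelNodes with a `RelHomogeneousShuttle`):

```java
public class BacktrackSketch {
    // Toy plan nodes: a Sort, pass-through operators (where/eval/fields)
    // that reverse may safely walk through, and a Scan leaf.
    sealed interface Node permits Sort, PassThrough, Scan {}
    record Sort(Node input, boolean ascending) implements Node {}
    record PassThrough(String name, Node input) implements Node {}
    record Scan(String table) implements Node {}

    // Walk down through non-blocking operators and flip the first Sort found.
    // Returns the tree unchanged if no sort is reachable (tiers 2/3 would apply).
    static Node reverseFirstSort(Node node) {
        if (node instanceof Sort s) return new Sort(s.input(), !s.ascending());
        if (node instanceof PassThrough p)
            return new PassThrough(p.name(), reverseFirstSort(p.input()));
        return node;
    }

    public static void main(String[] args) {
        // source=bank | sort balance | where ... | reverse
        Node plan = new PassThrough("where", new Sort(new Scan("bank"), true));
        System.out.println(reverseFirstSort(plan));
    }
}
```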
6. Reverse After Aggregation (Blocking Operator)
Query:
Behavior: Aggregation is a blocking operator. The sort after aggregation is visible to reverse and gets flipped.
7. Timechart with Reverse
Query:
Behavior: Timechart produces a time-bucketed aggregation with an implicit sort. Reverse flips the time order from ASC to DESC.
Related Issues
Resolves #3924
Resolves #5125
Check List
Commits are signed per the DCO using `--signoff`.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.