
ESQL: Add BlockHash#lookup #107762

Merged
nik9000 merged 9 commits into elastic:main from nik9000:hash_lookup_real_2
Apr 24, 2024

Conversation

@nik9000
Member

@nik9000 nik9000 commented Apr 23, 2024

Adds a lookup method to BlockHash which finds keys that are already in the hash, without modifying it, and returns the "ordinal" that the BlockHash assigned when that key was first passed to add.

For multi-column keys this can change the number of values pretty drastically. You get a combinatorial explosion of values. So if you have three columns with 2 values each the most values you can get is 2*2*2=8. If you have five columns with ten values each you can have 100,000 values in a single position! That's too many.

Let's do an example! This one has a two row block containing three columns. One row has two values in each column so it could produce at most 8 values. In this case one of the values is missing from the hash, so it only produces 7.

Page:

|   a  |   b  |   c  |
| ----:| ----:| ----:|
|    1 |    4 |    6 |
| 1, 2 | 3, 4 | 5, 6 |

BlockHash contents:

| a | b | c |
| -:| -:| -:|
| 1 | 3 | 5 |
| 1 | 3 | 6 |
| 1 | 4 | 5 |
| 1 | 4 | 6 |
| 2 | 3 | 5 |
| 2 | 3 | 6 |
| 2 | 4 | 6 |

Results:

|          ord        |
| -------------------:|
|                   3 |
| 0, 1, 2, 3, 4, 5, 6 |
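The lookup behavior in this example can be sketched in plain Java. This is a toy stand-in for illustration only, not the real BlockHash API: a map from multi-column keys to ordinals, with the cross product of each column's values enumerated per position.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LookupSketch {
    // Toy stand-in for BlockHash: maps a multi-column key to the ordinal
    // assigned when the key was first added. (2, 4, 5) is deliberately absent.
    static final Map<List<Integer>, Integer> HASH = new LinkedHashMap<>();
    static {
        int ord = 0;
        for (List<Integer> key : List.of(
            List.of(1, 3, 5), List.of(1, 3, 6), List.of(1, 4, 5), List.of(1, 4, 6),
            List.of(2, 3, 5), List.of(2, 3, 6), List.of(2, 4, 6)
        )) {
            HASH.put(key, ord++);
        }
    }

    // Enumerate the cross product of each column's values at one position and
    // collect the ordinals of the combinations that are present in the hash.
    static List<Integer> lookupPosition(List<List<Integer>> columns) {
        int combinations = 1;
        for (List<Integer> column : columns) {
            combinations *= column.size();
        }
        List<Integer> ords = new ArrayList<>();
        for (int i = 0; i < combinations; i++) {
            List<Integer> key = new ArrayList<>();
            int rest = i;
            for (List<Integer> column : columns) {
                key.add(column.get(rest % column.size()));
                rest /= column.size();
            }
            Integer ord = HASH.get(key);
            if (ord != null) {
                ords.add(ord);
            }
        }
        Collections.sort(ords);
        return ords;
    }

    public static void main(String[] args) {
        // Row 1: single-valued columns, so just one combination.
        System.out.println(lookupPosition(List.of(List.of(1), List.of(4), List.of(6)))); // [3]
        // Row 2: 2*2*2 = 8 combinations, one of which (2, 4, 5) is absent.
        System.out.println(lookupPosition(List.of(List.of(1, 2), List.of(3, 4), List.of(5, 6)))); // [0, 1, 2, 3, 4, 5, 6]
    }
}
```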

The add method has a fairly fool-proof mechanism to work around this: it calls its consumers with a callback that can split positions into multiple calls, in batches of around 16,000 positions at a time. And aggs use the callback. So you can aggregate over five columns with ten values each. It's slow, but the callbacks let us get through it.
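The batching idea can be sketched like this. It's an illustrative toy under stated assumptions: 16,000 is the batch size mentioned above, and the real add hands its consumer actual blocks rather than bare position numbers.

```java
import java.util.function.Consumer;

public class AddBatcher {
    static final int BATCH = 16_000; // illustrative batch size from the description

    // Toy sketch of how add can invoke its consumer in batches of positions
    // rather than all at once: the fully expanded values for a huge cross
    // product never need to exist in memory at the same time.
    static void addInBatches(int totalPositions, Consumer<int[]> callback) {
        for (int start = 0; start < totalPositions; start += BATCH) {
            int end = Math.min(start + BATCH, totalPositions);
            int[] batch = new int[end - start];
            for (int i = start; i < end; i++) {
                batch[i - start] = i;
            }
            callback.accept(batch);
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        addInBatches(100_000, batch -> calls[0]++);
        System.out.println(calls[0]); // 100,000 positions in batches of 16,000 -> 7 calls
    }
}
```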

Unlike add, lookup can't use a callback. We're going to need it to return an Iterator of IntBlocks containing ordinals, because that's just how we're going to use it. That'd be OK, but we can't split a single position across multiple Blocks. That's just not how Block works.

So, instead, we fail the query if we produce more than 100,000 entries in a single position. That alone would be a single 400kb array, which is quite big. We'd like to stop collecting and emit a warning instead, but that's a problem for another change.

Anyway! If we're not bumping into massive rows we emit IntBlocks targeting a particular size in memory. Likely we'll also want to plug in a target number of rows as well, but for now this'll do.
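That size-targeting can be sketched minimally, under the assumptions that each ordinal costs 4 bytes and that a position is never split across blocks. The names slice and targetBytes are made up for illustration; this is not the real lookup implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockSlicer {
    // Toy sketch: group per-position ordinal arrays into "blocks" that target
    // roughly targetBytes of memory (Integer.BYTES per ordinal), without ever
    // splitting one position across two blocks. A single oversized position
    // still goes into a block of its own rather than being split.
    static List<List<int[]>> slice(List<int[]> positions, int targetBytes) {
        List<List<int[]>> blocks = new ArrayList<>();
        List<int[]> current = new ArrayList<>();
        int currentBytes = 0;
        for (int[] ords : positions) {
            int bytes = ords.length * Integer.BYTES;
            if (currentBytes > 0 && currentBytes + bytes > targetBytes) {
                blocks.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(ords);
            currentBytes += bytes;
        }
        if (!current.isEmpty()) {
            blocks.add(current);
        }
        return blocks;
    }

    public static void main(String[] args) {
        List<int[]> positions = List.of(
            new int[] {3},
            new int[] {0, 1, 2, 3, 4, 5, 6}, // 28 bytes: bigger than the target on its own
            new int[] {1, 2}
        );
        System.out.println(slice(positions, 16).size()); // 3 blocks
    }
}
```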

nik9000 added 2 commits April 22, 2024 18:12
@elasticsearchmachine elasticsearchmachine added the Team:Analytics (Meta label for analytical engine team (ESQL/Aggs/Geo)) label Apr 23, 2024
@elasticsearchmachine
Collaborator

Pinging @elastic/es-analytical-engine (Team:Analytics)


@Override
public ReleasableIterator<IntBlock> lookup(Page page, ByteSizeValue targetBlockSize) {
    throw new UnsupportedOperationException("TODO");
}
Member Author


None of these are plugged in yet, but I figured this PR was big enough.

@Override
public ReleasableIterator<IntBlock> lookup(Page page, ByteSizeValue targetBlockSize) {
    throw new UnsupportedOperationException("TODO");
}
Member Author


I'm not sure when we'll be able to plug this one in so I just left it.

@nik9000 nik9000 requested a review from dnhatn April 23, 2024 13:37

@Override
public boolean hasNext() {
    return next != null;
}
Member Author


I think it might make sense to flip these from "build early" to "build late" - but I've not figured out quite how to do that yet. We have the option either way.

* all blocks returned by the iterator will equal {@link Page#getPositionCount} but
* will "target" a size of {@code targetBlockSize}.
* <p>
* Calling this will either {@link Page#releaseBlocks() release} the blocks immediately
Member


I find it a bit confusing that the lookup API releases the input Page itself, although the Javadoc explains this clearly. Should the caller manage the lifecycle of the input page instead? However, I'm totally fine if we decide to stick with this approach.

Member Author


Yeah! It's tricky for the caller to manage the lifecycle of the page if you want to free it when the blocks are done.

Maybe I can read the blocks from the page and bump their ref count. Then the caller can throw away the page immediately and the iterator frees the blocks when it's done with them....
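The ref-counting idea can be sketched generically. This is a toy class for illustration, not Elasticsearch's actual Block ref-counting API: the iterator takes its own reference to each block, so the caller can release the page right away.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountSketch {
    // Minimal ref-counted resource, standing in for a Block. Starts with one
    // reference held by its creator (here, the page).
    static class Resource {
        final AtomicInteger refs = new AtomicInteger(1);
        boolean closed;

        void incRef() {
            refs.incrementAndGet();
        }

        void decRef() {
            if (refs.decrementAndGet() == 0) {
                closed = true; // actually free the memory here
            }
        }
    }

    public static void main(String[] args) {
        Resource block = new Resource();
        block.incRef();   // the iterator bumps the ref count on the block
        block.decRef();   // the caller throws away the page immediately
        System.out.println(block.closed); // false: the iterator still holds a ref
        block.decRef();   // the iterator finishes with the block
        System.out.println(block.closed); // true: last reference released
    }
}
```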

Member

@dnhatn dnhatn left a comment


LGTM. Thanks Nik.

@nik9000 nik9000 added the auto-merge-without-approval (Automatically merge pull request when CI checks pass; NB doesn't wait for reviews!) label Apr 23, 2024
* since it creates a new Exception every time a new array is created.
*/
private static final boolean TRACK_ALLOCATIONS = false;
private static final boolean TRACK_ALLOCATIONS = true;
Member


leftover?

Member Author


Damn, yeah.

@nik9000
Member Author

nik9000 commented Apr 24, 2024

The docs builds are incorrectly broken and this PR doesn't touch docs, so I'm going to merge anyway.

@nik9000 nik9000 merged commit 0f68c67 into elastic:main Apr 24, 2024
@nik9000 nik9000 deleted the hash_lookup_real_2 branch April 24, 2024 12:31
elasticsearchmachine pushed a commit that referenced this pull request Jun 7, 2024
This adds support for `LOOKUP`, a command that implements a sort of
inline `ENRICH`, using data that is passed in the request:

```
$ curl -uelastic:password -HContent-Type:application/json -XPOST \
    'localhost:9200/_query?error_trace&pretty&format=txt' \
-d'{
    "query": "ROW a=1::LONG | LOOKUP t ON a",
    "tables": {
        "t": {
            "a:long":     [    1,     4,     2],
            "v1:integer": [   10,    11,    12],
            "v2:keyword": ["cat", "dog", "wow"]
        }
    },
    "version": "2024.04.01"
}'
      v1       |      v2       |       a       
---------------+---------------+---------------
10             |cat            |1
```

This required these PRs:

* #107624
* #107634
* #107701
* #107762
* #107923
* #107894
* #107982
* #108012
* #108020
* #108169
* #108191
* #108334
* #108482
* #108696
* #109040
* #109045

Closes #107306
craigtaverner pushed a commit to craigtaverner/elasticsearch that referenced this pull request Jun 11, 2024
craigtaverner added a commit to craigtaverner/elasticsearch that referenced this pull request Jun 11, 2024
The second prototype replaced MultiTypeField.Unresolved with MultiTypeField, but this clashed with existing behaviour around mapping unused MultiTypeFields to `unsupported` and `null`, so this new attempt simply adds new fields, resulting in more than one field with the same name.
We still need to store this new field in EsRelation, so that the physical planner can insert it into FieldExtractExec, so this is quite similar to the second prototype.

The following query works in this third prototype:

```
multiIndexIpString
FROM sample_data* METADATA _index
| EVAL client_ip = TO_IP(client_ip)
| KEEP _index, @timestamp, client_ip, event_duration, message
| SORT _index ASC, @timestamp DESC
```

As with the previous prototype, we no longer need an aggregation to force the conversion function onto the data node, as the 'real' conversion is now done at field extraction time using the converter function previously saved in the EsRelation and replanned into the EsQueryExec.

Support row-stride-reader for LoadFromMany

Add missing ESQL version after rebase on main

Fixed missing block release

Simplify UnresolvedUnionTypes

Support other commands, notably WHERE

Update docs/changelog/107545.yaml

Fix changelog

Removed unused code

Slight code reduction in analyser of union types

Removed unused interface method

Fix bug in copying blocks (array overrun)

Convert MultiTypeEsField.UnresolvedField back to InvalidMappedField

This is to ensure older behaviour still works.

Simplify InvalidMappedField support

Rather than complex code to recreate InvalidMappedField from MultiTypeEsField.UnresolvedField, we rely on the fact that InvalidMappedField is the parent class anyway, so we can resolve this during plan serialization/deserialization. Much simpler.

Simplify InvalidMappedField support further

Combining InvalidMappedField and MultiTypeEsField.UnresolvedField into one class simplifies plan serialization even further.

InvalidMappedField is used slightly differently in QL

We need to separate the aggregatable used in the original really-invalid mapped field from the aggregatable used if the field can indeed be used as a union-type in ES|QL.

Updated version limitation after 8.14 branch

Try debug CI failures in multi-node clusters

Support type conversion in rowstride reader on single leaf

Disable union_types from CsvTests

Keep track of per-shard converters for LoadFromMany

Simplify block loader convert function

Code cleanup

Added unit test for ValuesSourceReaderOperator including field type conversions at block loading

Added test for @timestamp and fixed related bug

It turns out that most, but not all, DataType values have the same esType as typeName, and @timestamp is one that does not, using `date` for esType and `datetime` for typeName. Our EsqlIndexResolver was recording multi-type fields with `esType`, while later the actual type conversion was using an evaluator that relied on DataTypes.typeFromName(typeName).
So we fixed the EsqlIndexResolver to rather use typeName.

Added more tests, with three indices combined and two type conversions

Disable lucene-pushdown on union-type fields

Since the union-type rewriter replaced conversion functions with new FieldAttributes, these were passing the check for being possible to push-down, which was incorrect. Now we prevent that.

Set union-type aggregatable flag to false always

This simplifies the push-down check.

Fixed tests after rebase on main

Add unit tests for union-types (same field, different type)

Remove generic warnings

Test code cleanup and clarifying comments

Remove -IT_tests_only in favor of CsvTests assumeFalse

Improved comment

Code review updates

Code review updates

Remove changes to ql/EsRelation

And it turned out the latest version of union type no longer needed these changes anyway, and was using the new EsRelation in the ESQL module without these changes.

Port InvalidMappedField to ESQL

Note, this extends the QL version of InvalidMappedField, so is not a complete port. This is necessary because of the intertwining of QL IndexResolver and EsqlIndexResolver. Once those classes are disentangled, we can completely break InvalidMappedField from QL and make it a forbidden type.

Fix capabilities line after rebase on main

Revert QL FieldAttribute and extend with ESQL FieldAttribute

So as to remove any edits to QL code, we extend FieldAttribute in the ESQL code with the changes required, since the change is simply to include the `field` in the hashCode and equals methods.

Revert "Revert QL FieldAttribute and extend with ESQL FieldAttribute"

This reverts commit 168c6c75436e26b83e083cd3de8e18062e116bc9.

Switch UNION_TYPES from EsqlFeatures to EsqlCapabilities

Make hashcode and equals aligned

And removed unused method from earlier union-types work where we kept the NodeId during re-writing (which we no longer do).

Replace required_feature with required_capability after rebase

Switch union_types capability back to feature, because capabilities do not work in mixed clusters

Revert "Switch union_types capability back to feature, because capabilities do not work in mixed clusters"

This reverts commit 56d58bedf756dbad703c07bf4cdb991d4341c1ae.

Added test for multiple columns from same fields

Both IP and Date are tested

Fix bug with incorrectly resolving invalid types

And added more tests

Fixed bug with multiple fields of same name

This fix simply removes the original field already at the EsRelation level, which covers all test cases but has the side effect of having the final field no-longer be unsupported/null when the alias does not overwrite the field with the same name.
This is not exactly the correct semantic intent.
The original field name should be unsupported/null unless the user explicitly overwrote the name with `field=TO_TYPE(field)`, which effectively deletes the old field anyway.

Fixed bug with multiple conversions of the same field

This also fixes the issue with the previous fix that incorrectly reported the converted type for the original field.

More tests with multiple fields and KEEP/DROP combinations

Replace skip with capabilities in YML tests

Fixed missing ql->esql import change after merging main

Merged two InvalidMappedField classes

After the QL code was ported to esql.core, we can now make the edits directly in InvalidMappedField instead of having one extend the other.

Move FieldAttribute edits from QL to ESQL

ESQL: Prepare analyzer for LOOKUP (elastic#109045)

This extracts two fairly uncontroversial changes that were in the main
LOOKUP PR into a smaller change that's easier to review.

ESQL: Move serialization for EsField (elastic#109222)

This moves the serialization logic for `EsField` into the `EsField`
subclasses to better align with the way rest of Elasticsearch works. It
also switches them from ESQL's home grown `writeNamed` thing to
`NamedWriteable`. These are wire compatible with one another.

ESQL: Move serialization of `Attribute` (elastic#109267)

This moves the serialization of `Attribute` classes used in ESQL into
the classes themselves to better line up with the rest of Elasticsearch.

ES|QL: add MV_APPEND function (elastic#107001)

Adding `MV_APPEND(value1, value2)` function, that appends two values
creating a single multi-value. If one or both the inputs are
multi-values, the result is the concatenation of all the values, eg.

```
MV_APPEND([a, b], [c, d]) -> [a, b, c, d]
```

~I think for this specific case it makes sense to consider `null` values
as empty arrays, so that~ ~MV_APPEND(value, null) -> value~ ~It is
pretty uncommon for ESQL (all the other functions, apart from
`COALESCE`, short-circuit to `null` when one of the values is null), so
let's discuss this behavior.~

[EDIT] considering the feedback from Andrei, I changed this logic and
made it consistent with the other functions: now if one of the
parameters is null, the function returns null

[ES|QL] Convert string to datetime when the other side of an arithmetic operator is date_period or time_duration (elastic#108455)

* convert string to datetime when the other side of binary operator is temporal amount

ESQL: Move `NamedExpression` serialization (elastic#109380)

This moves the serialization for the remaining `NamedExpression`
subclass into the class itself, and switches all direct serialization of
`NamedExpression`s to `readNamedWriteable` and friends. All other
`NamedExpression` subclasses extend from `Attribute` who's serialization
was moved ealier. They are already registered under the "category class"
for `Attribute`. This also registers them as `NamedExpression`s.

ESQL: Implement LOOKUP, an "inline" enrich (elastic#107987)



Fixed compile error after merging in main

Fixed strange merge issues from main

Remove version from ES|QL test queries after merging main

Fixed union-types on nested fields

Switch to Luigi's solution, and expand nested tests

Cleanup after rebase
craigtaverner added a commit that referenced this pull request Jun 19, 2024
* Union Types Support


* Added more tests from code review

Note that one test, `multiIndexIpStringStatsInline` is muted due to failing with the error:

    UnresolvedException: Invalid call to dataType on an unresolved object ?client_ip

* Make CsvTests consistent with integration tests for capabilities

The integration tests do not fail if the capability does not even exist on cluster nodes; instead the tests are ignored. The same behaviour should happen with CsvTests for consistency.

* Return assumeThat to assertThat, but change order

This way we don't have to add more features to the test framework in this PR, but we would probably want a mute feature (like a `skip` line).

* Move serialization of MultiTypeEsField to NamedWriteable approach

Since the sub-fields are AbstractConvertFunction expressions, and Expression is not yet fully supported as a category class for NamedWriteable, we need a few slight tweaks to this, notably registering this explicitly in the EsqlPlugin, as well as calling PlanStreamInput.readExpression() instead of StreamInput.readNamedWriteable(Expression.class). These can be removed later once Expression is fully supported as a category class.

* Remove attempt to mute two failed tests

We used required_capability to mute the tests, but this caused issues with CsvTests which also uses this as a spelling mistake checker for typing the capability name wrong, so we tried to use muted-tests.yml, but that only mutes tests in specific run configurations (ie. we need to mute each and every IT class separately).

So now we just remove the tests entirely. We left a comment in the muted-tests.yml file for future reference about how to mute csv-spec tests.

* Fix rather massive issue with performance of testConcurrentSerialization

Recreating the config on every test was very expensive.

* Code review by Nik

---------

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>

Labels

:Analytics/ES|QL (AKA ESQL), auto-merge-without-approval (Automatically merge pull request when CI checks pass; NB doesn't wait for reviews!), >non-issue, Team:Analytics (Meta label for analytical engine team (ESQL/Aggs/Geo)), v8.15.0

3 participants