docs/en/engines/table-engines/integrations/iceberg.md (18 additions, 0 deletions)
# Iceberg Table Engine

:::warning
We recommend using the [Iceberg Table Function](/docs/en/sql-reference/table-functions/iceberg.md) for working with Iceberg data in ClickHouse. The Iceberg Table Function currently provides sufficient functionality, offering a partial read-only interface for Iceberg tables.

The Iceberg Table Engine is available but may have limitations. ClickHouse wasn't originally designed to support tables with externally changing schemas, which can affect the functionality of the Iceberg Table Engine. As a result, some features that work with regular tables may be unavailable or may not function correctly, especially when using the old analyzer.

For optimal compatibility, we suggest using the Iceberg Table Function while we continue to improve support for the Iceberg Table Engine.
:::
This engine provides a read-only integration with existing Apache [Iceberg](https://iceberg.apache.org/) tables in Amazon S3, Azure, HDFS, and locally stored tables.

The table engine `Iceberg` is currently an alias for `IcebergS3`.
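As an illustrative sketch, an `IcebergS3` engine table might be created as follows (the bucket URL and credentials are placeholders, not real values):

```sql
CREATE TABLE iceberg_table
    ENGINE = IcebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test');
```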
**Schema Evolution**

ClickHouse can currently read Iceberg tables whose schema has changed over time. We support reading tables where columns have been added or removed and where their order has changed. You can also change a column from one where a value is required to one where NULL is allowed. Additionally, the permitted type casts for simple types are supported, namely:

* int -> long
* float -> double
* decimal(P, S) -> decimal(P', S) where P' > P.

Currently, it is not possible to change nested structures or the types of elements within arrays and maps.

To read a table whose schema has changed after its creation with dynamic schema inference, set `allow_dynamic_metadata_for_data_lakes = true` when creating the table.
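The dynamic schema inference setting described above might be applied at creation time like this (placeholder URL and credentials):

```sql
CREATE TABLE iceberg_table
    ENGINE = IcebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test')
    SETTINGS allow_dynamic_metadata_for_data_lakes = true;
```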
### Data cache {#data-cache}
The `Iceberg` table engine and table function support data caching in the same way as the `S3`, `AzureBlobStorage`, and `HDFS` storages. See [here](../../../engines/table-engines/integrations/s3.md#data-cache).
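As a sketch, per-query caching can be enabled with the same settings the `S3` engine uses (`cache_for_s3` assumes a filesystem cache of that name is defined in the server configuration):

```sql
SELECT *
FROM iceberg_table
SETTINGS filesystem_cache_name = 'cache_for_s3', enable_filesystem_cache = 1;
```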
docs/en/engines/table-engines/special/memory.md (0 additions, 2 deletions)
Upper and lower bounds can be specified to limit Memory engine table size, effectively allowing it to act as a circular buffer.

- Requires `max_rows_to_keep`
- `max_rows_to_keep` — Maximum rows to keep within the memory table, where the oldest rows are deleted on each insertion (i.e. a circular buffer). The row count can exceed the stated limit if the oldest batch of rows to remove falls under the `min_rows_to_keep` limit when adding a large block.
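A minimal sketch of using these bounds (the table and column names are illustrative):

```sql
-- Acts as a circular buffer: once max_rows_to_keep is exceeded,
-- the oldest batches of rows are trimmed on each insertion.
CREATE TABLE memory_buffer (event String, ts DateTime)
ENGINE = Memory
SETTINGS min_rows_to_keep = 100, max_rows_to_keep = 1000;
```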
:::note
If `expr` is a number, it is interpreted as the number of seconds since the beginning of the Unix Epoch (as a Unix timestamp).

If `expr` is a [String](../data-types/string.md), it may be interpreted as a Unix timestamp or as a string representation of a date / date with time.

Thus, parsing of short numbers' string representations (up to 4 digits) is explicitly disabled due to ambiguity, e.g. a string `'1999'` may be both a year (an incomplete string representation of Date / DateTime) or a Unix timestamp. Longer numeric strings are allowed.
:::
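For illustration, assuming this note describes `toDateTime`-style conversion, the ambiguity rule plays out like this:

```sql
-- A long numeric string is unambiguous and is interpreted as a Unix timestamp:
SELECT toDateTime('1669022400');
-- A short numeric string such as '1999' is rejected as ambiguous
-- (year vs. Unix timestamp); use a full representation instead:
SELECT toDateTime('1999-01-01 00:00:00');
```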
## reinterpretAsUInt256
Performs byte reinterpretation by treating the input value as a value of type UInt256. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
5542
5542
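As a hedged sketch of the behavior, narrow inputs are widened with zero bytes under the little-endian reinterpretation, so a negative input loses its sign:

```sql
-- reinterpretAsUInt256(x)
SELECT reinterpretAsUInt256(toInt8(-1));
-- The single byte 0xFF is zero-extended to 256 bits, yielding 255.
```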
## reinterpretAsInt16
Performs byte reinterpretation by treating the input value as a value of type Int16. Unlike [`CAST`](#cast), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless.
**Syntax**
5618
5618
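Analogously, a hedged sketch for `reinterpretAsInt16` (assuming the same little-endian, zero-padded widening):

```sql
SELECT reinterpretAsInt16(toInt8(-1));
-- The byte 0xFF is zero-extended to two bytes (0x00FF), yielding 255 rather than -1.
```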
## toUnixTimestamp64Second

Converts a `DateTime64` to a `Int64` value with fixed second precision. The input value is scaled up or down appropriately depending on its precision.

:::note
The output value is a timestamp in UTC, not in the timezone of `DateTime64`.
:::

**Syntax**

```sql
toUnixTimestamp64Second(value)
```

**Arguments**

- `value` — DateTime64 value with any precision. [DateTime64](../data-types/datetime64.md).

**Returned value**

- `value` converted to the `Int64` data type. [Int64](../data-types/int-uint.md).

**Example**

Query:

```sql
WITH toDateTime64('2009-02-13 23:31:31.011', 3, 'UTC') AS dt64
SELECT toUnixTimestamp64Second(dt64);
```

Result:

```response
┌─toUnixTimestamp64Second(dt64)─┐
│                    1234567891 │
└───────────────────────────────┘
```
## toUnixTimestamp64Micro
Converts a `DateTime64` to a `Int64` value with fixed microsecond precision. The input value is scaled up or down appropriately depending on its precision.
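Mirroring the second-precision example, a millisecond-precision input is scaled up to microseconds:

```sql
WITH toDateTime64('2009-02-13 23:31:31.011', 3, 'UTC') AS dt64
SELECT toUnixTimestamp64Micro(dt64);
-- 1234567891.011 s scales to 1234567891011000 µs
```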
**Schema Evolution**

ClickHouse can currently read Iceberg tables whose schema has changed over time. We support reading tables where columns have been added or removed and where their order has changed. You can also change a column from one where a value is required to one where NULL is allowed. Additionally, the permitted type casts for simple types are supported, namely:

* int -> long
* float -> double
* decimal(P, S) -> decimal(P', S) where P' > P.

Currently, it is not possible to change nested structures or the types of elements within arrays and maps.

**Aliases**

The table function `iceberg` is currently an alias for `icebergS3`.
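For illustration, the alias means both spellings read the same table (placeholder URL and credentials):

```sql
SELECT * FROM icebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test');
-- equivalent:
SELECT * FROM iceberg('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test');
```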