Commit 7b9ee84

Merge branch 'master' into settings6

2 parents 1042989 + 7ae91ee

95 files changed: 1982 additions & 308 deletions

docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md

Lines changed: 12 additions & 12 deletions

@@ -37,7 +37,7 @@ For a description of request parameters, see [request description](../../../sql-
 
 **Query clauses**
 
-When creating an `AggregatingMergeTree` table the same [clauses](../../../engines/table-engines/mergetree-family/mergetree.md) are required, as when creating a `MergeTree` table.
+When creating an `AggregatingMergeTree` table, the same [clauses](../../../engines/table-engines/mergetree-family/mergetree.md) are required as when creating a `MergeTree` table.
 
 <details markdown="1">
 
@@ -62,19 +62,19 @@ All of the parameters have the same meaning as in `MergeTree`.
 ## SELECT and INSERT {#select-and-insert}
 
 To insert data, use [INSERT SELECT](../../../sql-reference/statements/insert-into.md) query with aggregate -State- functions.
-When selecting data from `AggregatingMergeTree` table, use `GROUP BY` clause and the same aggregate functions as when inserting data, but using `-Merge` suffix.
+When selecting data from `AggregatingMergeTree` table, use `GROUP BY` clause and the same aggregate functions as when inserting data, but using the `-Merge` suffix.
 
-In the results of `SELECT` query, the values of `AggregateFunction` type have implementation-specific binary representation for all of the ClickHouse output formats. If dump data into, for example, `TabSeparated` format with `SELECT` query then this dump can be loaded back using `INSERT` query.
+In the results of `SELECT` query, the values of `AggregateFunction` type have implementation-specific binary representation for all of the ClickHouse output formats. For example, if you dump data into `TabSeparated` format with a `SELECT` query, then this dump can be loaded back using an `INSERT` query.
 
 ## Example of an Aggregated Materialized View {#example-of-an-aggregated-materialized-view}
 
-The following examples assumes that you have a database named `test` so make sure you create that if it doesn't already exist:
+The following example assumes that you have a database named `test`, so create it if it doesn't already exist:
 
 ```sql
 CREATE DATABASE test;
 ```
 
-We will create the table `test.visits` that contain the raw data:
+Now create the table `test.visits` that contains the raw data:
 
 ``` sql
 CREATE TABLE test.visits
@@ -86,9 +86,9 @@ CREATE TABLE test.visits
 ) ENGINE = MergeTree ORDER BY (StartDate, CounterID);
 ```
 
-Next, we need to create an `AggregatingMergeTree` table that will store `AggregationFunction`s that keep track of the total number of visits and the number of unique users.
+Next, you need an `AggregatingMergeTree` table that will store `AggregationFunction`s that keep track of the total number of visits and the number of unique users.
 
-`AggregatingMergeTree` materialized view that watches the `test.visits` table, and use the `AggregateFunction` type:
+Create an `AggregatingMergeTree` materialized view that watches the `test.visits` table, and uses the `AggregateFunction` type:
 
 ``` sql
 CREATE TABLE test.agg_visits (
@@ -100,7 +100,7 @@ CREATE TABLE test.agg_visits (
 ENGINE = AggregatingMergeTree() ORDER BY (StartDate, CounterID);
 ```
 
-And then let's create a materialized view that populates `test.agg_visits` from `test.visits` :
+Create a materialized view that populates `test.agg_visits` from `test.visits`:
 
 ```sql
 CREATE MATERIALIZED VIEW test.visits_mv TO test.agg_visits
@@ -113,7 +113,7 @@ FROM test.visits
 GROUP BY StartDate, CounterID;
 ```
 
-Inserting data into the `test.visits` table.
+Insert data into the `test.visits` table:
 
 ``` sql
 INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
@@ -122,7 +122,7 @@ INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
 
 The data is inserted in both `test.visits` and `test.agg_visits`.
 
-To get the aggregated data, we need to execute a query such as `SELECT ... GROUP BY ...` from the materialized view `test.mv_visits`:
+To get the aggregated data, execute a query such as `SELECT ... GROUP BY ...` from the materialized view `test.visits_mv`:
 
 ```sql
 SELECT
@@ -140,14 +140,14 @@ ORDER BY StartDate;
 └─────────────────────────┴────────┴───────┘
 ```
 
-And how about if we add another couple of records to `test.visits`, but this time we'll use a different timestamp for one of the records:
+Add another couple of records to `test.visits`, but this time try using a different timestamp for one of the records:
 
 ```sql
 INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
 VALUES (1669446031000, 2, 5, 10), (1667446031000, 3, 7, 5);
 ```
 
-If we then run the `SELECT` query again, we'll see the following output:
+Run the `SELECT` query again, which will return the following output:
 
 ```text
 ┌───────────────StartDate─┬─Visits─┬─Users─┐
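
To make the `-State` / `-Merge` pairing in the diff above concrete, here is a minimal manual sketch of the round trip. It is a hypothetical one-off insert, assuming the truncated `CREATE TABLE test.agg_visits` above defines `AggregateFunction` columns named `Visits` and `Users`:

```sql
-- -State functions write intermediate aggregation states on insert...
INSERT INTO test.agg_visits (StartDate, CounterID, Visits, Users)
SELECT StartDate, CounterID, sumState(Sign), uniqState(UserID)
FROM test.visits
GROUP BY StartDate, CounterID;

-- ...and the matching -Merge functions combine those states at read time.
SELECT StartDate, sumMerge(Visits) AS Visits, uniqMerge(Users) AS Users
FROM test.agg_visits
GROUP BY StartDate
ORDER BY StartDate;
```

In the documented setup the materialized view `test.visits_mv` performs the `-State` insert automatically; the manual form is shown only to make the pairing explicit.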

docs/en/operations/server-configuration-parameters/settings.md

Lines changed: 34 additions & 1 deletion

@@ -2217,6 +2217,39 @@ If the table does not exist, ClickHouse will create it. If the structure of the
 </query_log>
 ```
 
+## query_metric_log {#query_metric_log}
+
+It is disabled by default.
+
+**Enabling**
+
+To manually turn on metrics history collection [`system.query_metric_log`](../../operations/system-tables/query_metric_log.md), create `/etc/clickhouse-server/config.d/query_metric_log.xml` with the following content:
+
+``` xml
+<clickhouse>
+    <query_metric_log>
+        <database>system</database>
+        <table>query_metric_log</table>
+        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <collect_interval_milliseconds>1000</collect_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
+    </query_metric_log>
+</clickhouse>
+```
+
+**Disabling**
+
+To disable the `query_metric_log` setting, create the file `/etc/clickhouse-server/config.d/disable_query_metric_log.xml` with the following content:
+
+``` xml
+<clickhouse>
+    <query_metric_log remove="1" />
+</clickhouse>
+```
+
 ## query_cache {#server_configuration_parameters_query-cache}
 
 [Query cache](../query-cache.md) configuration.

@@ -3109,7 +3142,7 @@ By default, tunneling (i.e, `HTTP CONNECT`) is used to make `HTTPS` requests ove
 
 ### no_proxy
 By default, all requests will go through the proxy. In order to disable it for specific hosts, the `no_proxy` variable must be set.
-It can be set inside the `<proxy>` clause for list and remote resolvers and as an environment variable for environment resolver.
+It can be set inside the `<proxy>` clause for list and remote resolvers and as an environment variable for the environment resolver.
 It supports IP addresses, domains, subdomains and `'*'` wildcard for full bypass. Leading dots are stripped just like curl does.
 
 Example:
docs/en/operations/system-tables/query_metric_log.md

Lines changed: 49 additions & 0 deletions

@@ -0,0 +1,49 @@
+---
+slug: /en/operations/system-tables/query_metric_log
+---
+# query_metric_log
+
+Contains history of memory and metric values from table `system.events` for individual queries, periodically flushed to disk.
+
+Once a query starts, data is collected at periodic intervals of `query_metric_log_interval` milliseconds (which is set to 1000 by default). The data is also collected when the query finishes if the query takes longer than `query_metric_log_interval`.
+
+Columns:
+- `query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the query.
+- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
+- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date.
+- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time.
+- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Event time with microseconds resolution.
+
+**Example**
+
+``` sql
+SELECT * FROM system.query_metric_log LIMIT 1 FORMAT Vertical;
+```
+
+``` text
+Row 1:
+──────
+query_id:                       97c8ba04-b6d4-4bd7-b13e-6201c5c6e49d
+hostname:                       clickhouse.eu-central1.internal
+event_date:                     2020-09-05
+event_time:                     2020-09-05 16:22:33
+event_time_microseconds:        2020-09-05 16:22:33.196807
+memory_usage:                   313434219
+peak_memory_usage:              598951986
+ProfileEvent_Query:             0
+ProfileEvent_SelectQuery:       0
+ProfileEvent_InsertQuery:       0
+ProfileEvent_FailedQuery:       0
+ProfileEvent_FailedSelectQuery: 0
+...
+```
+
+**See also**
+
+- [query_metric_log setting](../../operations/server-configuration-parameters/settings.md#query_metric_log) — Enabling and disabling the setting.
+- [query_metric_log_interval](../../operations/settings/settings.md#query_metric_log_interval)
+- [system.asynchronous_metrics](../../operations/system-tables/asynchronous_metrics.md) — Contains periodically calculated metrics.
+- [system.events](../../operations/system-tables/events.md#system_tables-events) — Contains a number of events that occurred.
+- [system.metrics](../../operations/system-tables/metrics.md) — Contains instantly calculated metrics.
+- [Monitoring](../../operations/monitoring.md) — Base concepts of ClickHouse monitoring.
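
The collection interval described above can also be tuned per query through the `query_metric_log_interval` setting listed in the see-also section. A minimal sketch (the interval value and the workload are arbitrary illustrations):

```sql
-- Collect metrics for this query every 400 ms instead of the default 1000 ms.
SELECT count()
FROM system.numbers_mt
LIMIT 100000000
SETTINGS query_metric_log_interval = 400;
```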

docs/en/sql-reference/functions/type-conversion-functions.md

Lines changed: 12 additions & 0 deletions

@@ -6867,6 +6867,18 @@ Same as for [parseDateTimeInJodaSyntax](#parsedatetimeinjodasyntax) except that
 
 Same as for [parseDateTimeInJodaSyntax](#parsedatetimeinjodasyntax) except that it returns `NULL` when it encounters a date format that cannot be processed.
 
+## parseDateTime64InJodaSyntax
+
+Similar to [parseDateTimeInJodaSyntax](#parsedatetimeinjodasyntax), but it returns a value of type [DateTime64](../data-types/datetime64.md).
+
+## parseDateTime64InJodaSyntaxOrZero
+
+Same as for [parseDateTime64InJodaSyntax](#parsedatetime64injodasyntax) except that it returns a zero date when it encounters a date format that cannot be processed.
+
+## parseDateTime64InJodaSyntaxOrNull
+
+Same as for [parseDateTime64InJodaSyntax](#parsedatetime64injodasyntax) except that it returns `NULL` when it encounters a date format that cannot be processed.
+
 ## parseDateTimeBestEffort
 ## parseDateTime32BestEffort
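
A brief usage sketch of the three variants added in this hunk (the input literals are illustrative, and exact output formatting depends on server settings, so no results are shown):

```sql
-- Parses a string into DateTime64 using a Joda-syntax format string.
SELECT parseDateTime64InJodaSyntax('2024-10-02 14:30:05.123', 'yyyy-MM-dd HH:mm:ss.SSS');

-- For unparsable input, OrZero returns a zero date and OrNull returns NULL.
SELECT parseDateTime64InJodaSyntaxOrZero('not-a-date', 'yyyy-MM-dd');
SELECT parseDateTime64InJodaSyntaxOrNull('not-a-date', 'yyyy-MM-dd');
```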

programs/server/config.xml

Lines changed: 13 additions & 0 deletions

@@ -1195,6 +1195,19 @@
         <flush_on_crash>false</flush_on_crash>
     </error_log>
 
+    <!-- Query metric log contains history of memory and metric values from table system.events for individual queries,
+         periodically flushed to disk every "collect_interval_milliseconds" interval. -->
+    <query_metric_log>
+        <database>system</database>
+        <table>query_metric_log</table>
+        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <collect_interval_milliseconds>1000</collect_interval_milliseconds>
+        <flush_on_crash>false</flush_on_crash>
+    </query_metric_log>
+
     <!--
         Asynchronous metric log contains values of metrics from
         system.asynchronous_metrics.

programs/server/config.yaml.example

Lines changed: 7 additions & 0 deletions

@@ -743,6 +743,13 @@ error_log:
   flush_interval_milliseconds: 7500
   collect_interval_milliseconds: 1000
 
+# Query metric log contains history of memory and metric values from table system.events for individual queries, periodically flushed to disk.
+query_metric_log:
+  database: system
+  table: query_metric_log
+  flush_interval_milliseconds: 7500
+  collect_interval_milliseconds: 1000
+
 # Asynchronous metric log contains values of metrics from
 # system.asynchronous_metrics.
 asynchronous_metric_log:
