
Add new TimeSeries table engine to handle Prometheus protocols #64183

Merged
nikitamikhaylov merged 25 commits into ClickHouse:master from vitlibar:ts-engine on Aug 8, 2024

Conversation

@vitlibar
Member

@vitlibar vitlibar commented May 21, 2024

Changelog category:

  • New Feature

Changelog entry:

Add new TimeSeries table engine:

  • by default:
CREATE TABLE tbl ENGINE=TimeSeries
  • or with specifying engines of its internal tables:
CREATE TABLE tbl ENGINE=TimeSeries DATA ENGINE=MergeTree TAGS ENGINE=ReplacingMergeTree METRICS ENGINE=ReplacingMergeTree

This table can then be used to support Prometheus protocols (remote write, remote read, query) after enabling them in the server configuration:

<clickhouse>
    <prometheus>
        <port>8053</port>
        <handlers>
            <my_rule_1>
                <url>/write</url>
                <handler>
                    <type>remote_write</type>
                    <database>default</database>
                    <table>tbl</table>
                </handler>
            </my_rule_1>
            <my_rule_2>
                <url>/read</url>
                <handler>
                    <type>remote_read</type>
                    <database>default</database>
                    <table>tbl</table>
                </handler>
            </my_rule_2>
            <my_rule_3>
                <url>/metrics</url>
                <handler>
                    <type>expose_metrics</type>
                    <metrics>true</metrics>
                    <asynchronous_metrics>true</asynchronous_metrics>
                    <events>true</events>
                    <errors>true</errors>
                </handler>
            </my_rule_3>
        </handlers>
    </prometheus>
</clickhouse>

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/

CI Settings (Only check the boxes if you know what you are doing):

  • Allow: All Required Checks
  • Allow: Docs
  • Allow: Stateless tests
  • Allow: Stateful tests
  • Allow: Integration Tests
  • Allow: Performance tests
  • Allow: All Builds
  • Allow: batch 1, 2 for multi-batch jobs
  • Allow: batch 3, 4, 5, 6 for multi-batch jobs

  • Exclude: Style check
  • Exclude: Fast test
  • Exclude: All with ASAN
  • Exclude: All with TSAN, MSAN, UBSAN, Coverage
  • Exclude: All with aarch64, release, debug

  • Do not test
  • Woolen Wolfdog
  • Upload binaries for special builds
  • Disable merge-commit
  • Disable CI cache

@vitlibar vitlibar changed the title from "Add new TimeSeries table engine to handle Prometheus protocols" to "[WIP] Add new TimeSeries table engine to handle Prometheus protocols" May 21, 2024
Member


Do you plan to tune it later?

Member Author

@vitlibar vitlibar Jun 4, 2024


Those tables are customizable in two ways:

  1. The columns of an inner target table can be specified explicitly as columns of a TimeSeries table, and the table engine can be changed too:
CREATE TABLE ts (id UInt128 CODEC(ZSTD(3))) ENGINE=TimeSeries() DATA ENGINE=ReplicatedMergeTree('zkpath', 'replica') ORDER BY (id, timestamp) PARTITION BY toYYYYMM(timestamp)
  2. You can create the target table as you wish and then make a TimeSeries table use it:
CREATE TABLE my_data(id UInt128, timestamp DateTime64(3), value Float64) ENGINE=MergeTree ORDER BY id, timestamp;
CREATE TABLE ts ENGINE=TimeSeries() DATA=my_data;

Contributor

@UnamedRus UnamedRus May 21, 2024


If it's expected that metrics will often be filtered by tags, why not store them in EAV format?

id UInt128,
metric_name LowCardinality(String),
tag_name LowCardinality(String),
tag_value String

ORDER BY (metric_name, tag_name, id)

Member Author

@vitlibar vitlibar May 21, 2024


id is a hash of the metric_name combined with a sorted list of {tag_name, tag_value} pairs
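As a rough illustration of that scheme, here is a hypothetical Python sketch (ClickHouse uses its own hash functions such as sipHash64 or murmurHash3_128; `blake2b` here is only a stand-in, and `series_id` is an illustrative name, not the actual implementation):

```python
import hashlib

def series_id(metric_name: str, tags: dict) -> int:
    """Derive a 128-bit series ID from the metric name combined with the
    sorted list of (tag_name, tag_value) pairs, so the same label set
    always maps to the same ID regardless of tag order."""
    payload = metric_name + "".join(
        f"\0{name}\0{value}" for name, value in sorted(tags.items())
    )
    digest = hashlib.blake2b(payload.encode(), digest_size=16).digest()
    return int.from_bytes(digest, "big")

# Tag order does not matter, only the (sorted) set of pairs:
a = series_id("http_requests_total", {"job": "api", "instance": "host1"})
b = series_id("http_requests_total", {"instance": "host1", "job": "api"})
assert a == b
```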

Member Author


metric_name LowCardinality(String),
tag_name LowCardinality(String),
tag_value String

Why not tag_value LowCardinality(String) then?

Contributor


I understand where it's coming from.

But two things concern me:

  1. Heavily unbalanced columns: id ~16 bytes, metric_name ~10-20 bytes, tags potentially hundreds of bytes
  2. How filtering by tags will work.

Contributor


Why not tag_value LowCardinality(String) then?

We assume that tag values will be much more unique, because two metrics cannot exist with the exact same combination of tags.

Member Author

@vitlibar vitlibar Jun 4, 2024


I thought about the structure of the tags table some more. For supporting the Prometheus read protocol it's actually more convenient to have all the tags related to a specific ID described in one row. So

id UInt128,
metric_name LowCardinality(String),
tag_name LowCardinality(String),
tag_value String

is not really suitable.

The following structure

id UInt128,
metric_name String,
tags Map(String, String)

should be OK, and I've also added support for putting some tags into separate columns of the tags table. For example, the following statement

CREATE TABLE ts ENGINE=TimeSeries() SETTINGS tags_to_columns={'job': 'job', 'instance':'instance'}

will create the tags table with columns

id UInt128,
metric_name String,
job String,
instance String,
tags Map(String, String)

The types of such columns can be adjusted, for example

CREATE TABLE ts (metric_name LowCardinality(String), job LowCardinality(Nullable(String))) ENGINE=TimeSeries() SETTINGS tags_to_columns={'job': 'job', 'instance':'instance'}

will create the tags table with columns

id UInt128,
metric_name LowCardinality(String),
job LowCardinality(Nullable(String)),
instance String,
tags Map(String, String)
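To illustrate how a Prometheus label set could be split across these columns, here is a hypothetical Python sketch (the function `split_labels` and its exact behavior are assumptions for illustration, not the actual implementation):

```python
def split_labels(labels: dict, tags_to_columns: dict) -> dict:
    """Split a Prometheus label set into a row for the tags table:
    __name__ becomes metric_name, labels listed in tags_to_columns get
    their own columns, and everything else lands in the tags map."""
    row = {"metric_name": labels["__name__"], "tags": {}}
    for name, value in labels.items():
        if name == "__name__":
            continue
        if name in tags_to_columns:
            row[tags_to_columns[name]] = value
        else:
            row["tags"][name] = value
    return row

row = split_labels(
    {"__name__": "up", "job": "api", "instance": "host1", "env": "prod"},
    {"job": "job", "instance": "instance"},
)
# → {'metric_name': 'up', 'tags': {'env': 'prod'}, 'job': 'api', 'instance': 'host1'}
```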

Contributor

@UnamedRus UnamedRus Jun 4, 2024


#47045

What if we make buckets for the map column (i.e. have X map columns and distribute tags across them using a hash function)?

A bucketed map type hasn't been implemented yet.

Member Author

@vitlibar vitlibar Jun 4, 2024


#47045
bucketed map type wasn't implemented yet

Bucketed maps will be supported, of course.

What if we make buckets for the map column (i.e. have X map columns and distribute tags across them using a hash function)?

That's exactly what a bucketed map does. It's better to wait until #47045 is ready.

Contributor

@UnamedRus UnamedRus Jun 5, 2024


It's better to wait until #47045 is ready.

It looks kinda dead, but ok :)

Anyway, explicit materialization of certain columns is OK.

How will the query builder work with them?
Will it be a specific implementation for the Prometheus protocol, or something general, like "finish constraint optimization in production" #33544?

Contributor


UInt128 looks really generous for a potentially trillion-row dataset.

16 bytes for ID
8 bytes for Timestamp
8 bytes for Value
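The back-of-the-envelope arithmetic behind those numbers, as a quick Python sketch (uncompressed sizes only; real on-disk size depends on codecs and compression):

```python
# Bytes per column value in the data table, per the sizes above.
ROW = {"id": 16, "timestamp": 8, "value": 8}
bytes_per_row = sum(ROW.values())          # 32 bytes per row, uncompressed
rows = 10**12                              # a "trillion rows" dataset
total_tb = bytes_per_row * rows / 10**12   # terabytes before compression
print(bytes_per_row, total_tb)  # → 32 32.0
```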

Member Author


I'm not sure; maybe UInt64 is enough for the ID.

Contributor

@UnamedRus UnamedRus May 21, 2024


AFAIK, due to the nature of running a Prometheus collector, those IDs (actually the exact set of tags; IDs are a byproduct of that set of tags) will rotate pretty frequently. It also means that we are not actually interested in data for a particular ID; we are interested in data for a particular tag combination and metric name. Commonly a few IDs will match them (an old and a new pod, for example).

So, I have a few ideas in mind: use something like Snowflake IDs.

Or something stranger: for each metric name, reserve a range of 10000 incremental IDs and fill this range with data for that particular metric name and different tags, just in an attempt to get better data locality.

Member Author

@vitlibar vitlibar May 22, 2024


Or maybe UInt128 is not too big because this table is always stored ordered by ID first - so the id column will contain sequences of the same values, and that should be compressed pretty well.

i have few ideas in mind: use something like snowflake for id's

What do you mean?

Or something more strange, for each "metric name" reserve range of 10000 incremental id's and fill this range with data for that particular metric name and different tags. Just in attempt to get better data locality.

Incremental IDs aren't suitable because if we used them, an INSERT into a TimeSeries table would require an internal SELECT to find out which ID is next. It would also cause issues with concurrency and when the internal tables are replicated. So it's better to use some kind of hash for the ID.

Perhaps the ID can be split into two parts: the first 32 or 64 bits would hold a hash of the metric name, and the remaining 32 or 64 bits a hash of all the tags' names and values. Then the data for the same metric but different tags will be stored next to each other.
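A hypothetical Python sketch of that split-ID idea (`blake2b` stands in for whatever 64-bit hash would actually be used, e.g. sipHash64; `locality_id` is an illustrative name):

```python
import hashlib

def _h64(data: str) -> int:
    # Stand-in 64-bit hash; ClickHouse would use something like sipHash64.
    return int.from_bytes(hashlib.blake2b(data.encode(), digest_size=8).digest(), "big")

def locality_id(metric_name: str, tags: dict) -> int:
    """128-bit ID: metric-name hash in the high 64 bits, tag-set hash in
    the low 64 bits, so all series of one metric sort next to each other."""
    tag_part = _h64("".join(f"\0{k}\0{v}" for k, v in sorted(tags.items())))
    return (_h64(metric_name) << 64) | tag_part

# Two series of the same metric share the high 64 bits of their IDs:
x = locality_id("cpu_usage", {"host": "a"})
y = locality_id("cpu_usage", {"host": "b"})
assert x >> 64 == y >> 64 and x != y
```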

Contributor

@UnamedRus UnamedRus May 27, 2024


Or maybe UInt128 is not too big because this table is always stored ordered by ID first - so the id column will contain sequences of the same values, and that should be compressed pretty well.

It's still not great: more data needs to be compressed/decompressed and also compared during queries (there are equality conditions on that column; also assume that comparison of an 8-byte field is implemented natively).

What do you mean?

Just generate a Snowflake ID for each "new" tag-value combination.

table would require an internal SELECT to understand which ID is next.

Well, technically it can be solved by a kind of in-memory dictionary which maps hashes of tag-values to IDs.

And it would also cause issues with concurrency and when internal tables are replicated.

It's not like we care about consistency of those IDs. We only care that we can map a tag combination to a particular ID (or list of IDs) on this node. Users will filter by tags, not by the IDs themselves.

Perhaps ID can be split into two parts: the first 32 or 64 bits would be used for a hash of the metric name, and the remaining 32 or 64 bits would be used for a hash of all the tags' names and values.

It's an ~OK approach, but it would be really great to avoid using 128-bit IDs; and if we go with lower granularity, splitting 64 bits to fit the data will be more complicated due to a higher chance of collisions.

Member Author

@vitlibar vitlibar Jun 4, 2024


I decided to make it customizable. The TimeSeries engine now supports settings and also allows describing columns to change their default types, for example:

CREATE TABLE ts(id UInt64) ENGINE=TimeSeries() SETTINGS id_algorithm='SipHash_MetricNameLow32_And_TagsHigh32'

Contributor

@UnamedRus UnamedRus Jun 5, 2024


Maybe the distribution can be shifted a bit, like 24 bits for MetricName and 40 for Tags.

Another thought: it may make sense to have some tags before the metric name in the ID generator (like ENV or tenant).
What do you think about allowing id_algorithm to be defined as a UDF name?

CREATE TABLE ts(id UInt64) ENGINE=TimeSeries() SETTINGS id_algorithm='generatorId'


CREATE FUNCTION generatorId AS (metric_name, tags) -> sipHash64(metric_name, tags);

CREATE FUNCTION generatorId AS (metric_name, tags) -> bitShiftLeft(56, sipHash64(tags['env'])) + bitShiftLeft(40, sipHash64(metric_name)::UInt16) + bitShiftRight(24, sipHash64(tags));  -- it's not correct, but I hope the idea is clear.

Member Author

@vitlibar vitlibar Jun 21, 2024


I've implemented something like that, but using the DEFAULT expression for the id column instead of the generatorId function:

CREATE TABLE prometheus
(
        id FixedString(16) DEFAULT murmurHash3_128(metric_name, all_tags),
        metric_name LowCardinality(String),
        all_tags Map(String, String)
) ENGINE=TimeSeries

So the DEFAULT expression for id can do bitShiftLeft or even access a dictionary.

@robot-ch-test-poll4 robot-ch-test-poll4 added the pr-feature (Pull request with new product feature) and submodule changed (At least one submodule changed in this PR) labels Jun 4, 2024
@robot-ch-test-poll4
Contributor

robot-ch-test-poll4 commented Jun 4, 2024

This is an automated comment for commit f37e0c7 with a description of existing statuses. It is updated for the latest CI run.

❌ Click here to open a full report in a separate page

Check name | Description | Status
Builds | There's no description for the check yet, please add it to tests/ci/ci_config.py:CHECK_DESCRIPTIONS | ❌ failure

Successful checks

Check name | Description | Status
AST fuzzer | Runs randomly generated queries to catch program errors. The build type is optionally given in parenthesis. If it fails, ask a maintainer for help | ✅ success
ClickBench | Runs [ClickBench](https://github.com/ClickHouse/ClickBench/) with instant-attach table | ✅ success
Compatibility check | Checks that clickhouse binary runs on distributions with old libc versions. If it fails, ask a maintainer for help | ✅ success
Docker keeper image | The check to build and optionally push the mentioned image to docker hub | ✅ success
Docker server image | The check to build and optionally push the mentioned image to docker hub | ✅ success
Docs check | Builds and tests the documentation | ✅ success
Fast test | Normally this is the first check that is run for a PR. It builds ClickHouse and runs most of the stateless functional tests, omitting some. If it fails, further checks are not started until it is fixed. Look at the report to see which tests fail, then reproduce the failure locally as described here | ✅ success
Flaky tests | Checks if newly added or modified tests are flaky by running them repeatedly, in parallel, with more randomization. Functional tests are run 100 times with address sanitizer, and additional randomization of thread scheduling. Integration tests are run up to 10 times. If at least once a new test has failed, or was too long, this check will be red. We don't allow flaky tests, read the doc | ✅ success
Install packages | Checks that the built packages are installable in a clear environment | ✅ success
Integration tests | The integration tests report. In parenthesis the package type is given, and in square brackets are the optional part/total tests | ✅ success
Performance Comparison | Measure changes in query performance. The performance test report is described in detail here. In square brackets are the optional part/total tests | ✅ success
Stateful tests | Runs stateful functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Stateless tests | Runs stateless functional tests for ClickHouse binaries built in various configurations -- release, debug, with sanitizers, etc | ✅ success
Stress test | Runs stateless functional tests concurrently from several clients to detect concurrency-related errors | ✅ success
Style check | Runs a set of checks to keep the code style clean. If some of the tests failed, see the related log from the report | ✅ success
Unit tests | Runs the unit tests for different release types | ✅ success
Upgrade check | Runs stress tests on the server version from the last release and then tries to upgrade it to the version from the PR. It checks if the new server can successfully start up without any errors, crashes or sanitizer asserts | ✅ success

@vitlibar vitlibar force-pushed the ts-engine branch 4 times, most recently from 7b6f75c to 970fd1f June 4, 2024 15:15
@agelwarg

Do you intend to expose additional interfaces for receiving/transforming data into the same engine? For example, the Influx line protocol (https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/).

@qoega
Member

qoega commented Jun 12, 2024

The Influx line protocol is compatible with current formats; here is an example: #62924

@agelwarg

agelwarg commented Jun 12, 2024

The Influx line protocol is compatible with current formats; here is an example: #62924

I was referring to whether or not there is a plan to expose another HTTP endpoint, similar to what is being done for Prometheus remote write.

@robot-ch-test-poll2 robot-ch-test-poll2 added the pr-synced-to-cloud The PR is synced to the cloud repo label Jun 13, 2024
@syepes

syepes commented Jun 15, 2024

This is great news

@vitlibar vitlibar force-pushed the ts-engine branch 5 times, most recently from 18681dd to c65db8e June 21, 2024 18:11
@vitlibar vitlibar changed the title from "[WIP] Add new TimeSeries table engine to handle Prometheus protocols" to "Add new TimeSeries table engine to handle Prometheus protocols" Jun 21, 2024
@vitlibar vitlibar marked this pull request as ready for review June 21, 2024 18:11
@vitlibar
Member Author

Do you intend to expose additional interfaces for receiving/transforming data into the same engine? For example, the Influx line protocol (https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/).

I think that can be implemented later as a new format:

INSERT INTO time_series_table FORMAT InfluxLineProtocol

@vitlibar vitlibar force-pushed the ts-engine branch 3 times, most recently from c629a5b to aaf39d2 June 23, 2024 19:20
@nikitamikhaylov
Member

nikitamikhaylov commented Aug 6, 2024

Very nice finding.

❯ grep -Fa '<Fatal>' clickhouse-server.log
2024.08.06 21:39:51.201548 [ 11 ] {ed2b8177-22db-4418-9381-0fc5f4d5d148} <Fatal> : Logical error: 'Unexpected return type from materialize. Expected LowCardinality(String). Got String. Action:
2024.08.06 21:39:51.243027 [ 11 ] {ed2b8177-22db-4418-9381-0fc5f4d5d148} <Fatal> : Stack trace (when copying this message, always include the lines below):
2024.08.06 21:39:51.243745 [ 671 ] {} <Fatal> BaseDaemon: ########## Short fault info ############
2024.08.06 21:39:51.243828 [ 671 ] {} <Fatal> BaseDaemon: (version 24.8.1.1649, build id: 5ABCCFC459A41E8E77C3D0C4D8C55923739A61B0, git hash: 406be745959c62100d5a97425df63234b14a756c) (from thread 11) Received signal 6
2024.08.06 21:39:51.243874 [ 671 ] {} <Fatal> BaseDaemon: Signal description: Aborted
2024.08.06 21:39:51.243906 [ 671 ] {} <Fatal> BaseDaemon:
2024.08.06 21:39:51.243988 [ 671 ] {} <Fatal> BaseDaemon: Stack trace: 0x000055b636a5860d 0x000055b636fe3027 0x00007f361ecbf520 0x00007f361ed139fd 0x00007f361ecbf476 0x00007f361eca57f3 0x000055b6369e910b 0x000055b6369eabd1 0x000055b626766445 0x000055b62bd1a735 0x000055b643fdca58 0x000055b64907be21 0x000055b64907a43a 0x000055b649079f51 0x000055b6490790a6 0x000055b648a5476e 0x000055b648a3b471 0x000055b648a3a6d2 0x000055b648842523 0x000055b648844b11 0x000055b648832672 0x000055b648831b10 0x000055b64882da4c 0x000055b6488d8234 0x000055b64fbaa1ef 0x000055b64fbaadd7 0x000055b64faad6eb 0x000055b64faa7848 0x000055b62671b059 0x00007f361ed11ac3 0x00007f361eda3850
2024.08.06 21:39:51.244028 [ 671 ] {} <Fatal> BaseDaemon: ########################################
2024.08.06 21:39:51.244081 [ 671 ] {} <Fatal> BaseDaemon: (version 24.8.1.1649, build id: 5ABCCFC459A41E8E77C3D0C4D8C55923739A61B0, git hash: 406be745959c62100d5a97425df63234b14a756c) (from thread 11) (query_id: ed2b8177-22db-4418-9381-0fc5f4d5d148) (query: ) Received signal Aborted (6)
2024.08.06 21:39:51.244128 [ 671 ] {} <Fatal> BaseDaemon:
2024.08.06 21:39:51.244159 [ 671 ] {} <Fatal> BaseDaemon: Stack trace: 0x000055b636a5860d 0x000055b636fe3027 0x00007f361ecbf520 0x00007f361ed139fd 0x00007f361ecbf476 0x00007f361eca57f3 0x000055b6369e910b 0x000055b6369eabd1 0x000055b626766445 0x000055b62bd1a735 0x000055b643fdca58 0x000055b64907be21 0x000055b64907a43a 0x000055b649079f51 0x000055b6490790a6 0x000055b648a5476e 0x000055b648a3b471 0x000055b648a3a6d2 0x000055b648842523 0x000055b648844b11 0x000055b648832672 0x000055b648831b10 0x000055b64882da4c 0x000055b6488d8234 0x000055b64fbaa1ef 0x000055b64fbaadd7 0x000055b64faad6eb 0x000055b64faa7848 0x000055b62671b059 0x00007f361ed11ac3 0x00007f361eda3850
2024.08.06 21:39:51.290906 [ 671 ] {} <Fatal> BaseDaemon: 0.0. inlined from ./build_docker/./src/Common/StackTrace.cpp:349: StackTrace::tryCapture()
2024.08.06 21:39:51.290964 [ 671 ] {} <Fatal> BaseDaemon: 0. ./build_docker/./src/Common/StackTrace.cpp:318: StackTrace::StackTrace(ucontext_t const&) @ 0x000000001ac2060d
2024.08.06 21:39:51.365592 [ 671 ] {} <Fatal> BaseDaemon: 1. ./build_docker/./src/Common/SignalHandlers.cpp:0: signalHandler(int, siginfo_t*, void*) @ 0x000000001b1ab027
2024.08.06 21:39:51.365640 [ 671 ] {} <Fatal> BaseDaemon: 2. ? @ 0x00007f361ecbf520
2024.08.06 21:39:51.365669 [ 671 ] {} <Fatal> BaseDaemon: 3. ? @ 0x00007f361ed139fd
2024.08.06 21:39:51.365695 [ 671 ] {} <Fatal> BaseDaemon: 4. ? @ 0x00007f361ecbf476
2024.08.06 21:39:51.365726 [ 671 ] {} <Fatal> BaseDaemon: 5. ? @ 0x00007f361eca57f3
2024.08.06 21:39:51.452670 [ 671 ] {} <Fatal> BaseDaemon: 6.0. inlined from ./contrib/llvm-project/libcxx/include/atomic:958: int std::__cxx_atomic_load[abi:v15007]<int>(std::__cxx_atomic_base_impl<int> const*, std::memory_order)
2024.08.06 21:39:51.452746 [ 671 ] {} <Fatal> BaseDaemon: 6.1. inlined from ./contrib/llvm-project/libcxx/include/atomic:1560: std::__atomic_base<int, false>::load[abi:v15007](std::memory_order) const
2024.08.06 21:39:51.452792 [ 671 ] {} <Fatal> BaseDaemon: 6.2. inlined from ./contrib/llvm-project/libcxx/include/atomic:1564: std::__atomic_base<int, false>::operator int[abi:v15007]() const
2024.08.06 21:39:51.452836 [ 671 ] {} <Fatal> BaseDaemon: 6.3. inlined from ./base/poco/Foundation/include/Poco/Logger.h:2354: Poco::Logger::is(int) const
2024.08.06 21:39:51.452874 [ 671 ] {} <Fatal> BaseDaemon: 6. ./build_docker/./src/Common/Exception.cpp:47: DB::abortOnFailedAssertion(String const&, void* const*, unsigned long, unsigned long) @ 0x000000001abb110b
2024.08.06 21:39:51.537217 [ 671 ] {} <Fatal> BaseDaemon: 7.0. inlined from ./build_docker/./src/Common/Exception.cpp:208: DB::Exception::getStackFramePointers() const
2024.08.06 21:39:51.537277 [ 671 ] {} <Fatal> BaseDaemon: 7. ./build_docker/./src/Common/Exception.cpp:115: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001abb2bd1
2024.08.06 21:39:51.626748 [ 671 ] {} <Fatal> BaseDaemon: 8. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000a92e445
2024.08.06 21:39:51.665179 [ 671 ] {} <Fatal> BaseDaemon: 9. DB::Exception::Exception<String, String, String, String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&, String&&, String&&, String&&) @ 0x000000000fee2735
2024.08.06 21:39:51.811199 [ 671 ] {} <Fatal> BaseDaemon: 10.0. inlined from ./build_docker/./src/Interpreters/ExpressionActions.cpp:639: DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool, bool)
2024.08.06 21:39:51.811276 [ 671 ] {} <Fatal> BaseDaemon: 10. ./build_docker/./src/Interpreters/ExpressionActions.cpp:770: DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool, bool) const @ 0x00000000281a4a58
2024.08.06 21:39:51.839688 [ 671 ] {} <Fatal> BaseDaemon: 11. ./build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:0: DB::ConvertingTransform::onConsume(DB::Chunk) @ 0x000000002d243e21
2024.08.06 21:39:51.887995 [ 671 ] {} <Fatal> BaseDaemon: 12.0. inlined from ./contrib/llvm-project/libcxx/include/vector:432: __destroy_vector
2024.08.06 21:39:51.888058 [ 671 ] {} <Fatal> BaseDaemon: 12.1. inlined from ./contrib/llvm-project/libcxx/include/vector:449: ~vector
2024.08.06 21:39:51.888103 [ 671 ] {} <Fatal> BaseDaemon: 12.2. inlined from ./src/Common/CollectionOfDerived.h:28: ~CollectionOfDerivedItems
2024.08.06 21:39:51.888147 [ 671 ] {} <Fatal> BaseDaemon: 12.3. inlined from ./src/Processors/Chunk.h:52: ~Chunk
2024.08.06 21:39:51.888181 [ 671 ] {} <Fatal> BaseDaemon: 12.4. inlined from ./build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150: operator()
2024.08.06 21:39:51.888214 [ 671 ] {} <Fatal> BaseDaemon: 12.5. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: ?
2024.08.06 21:39:51.888244 [ 671 ] {} <Fatal> BaseDaemon: 12.6. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:479: ?
2024.08.06 21:39:51.888273 [ 671 ] {} <Fatal> BaseDaemon: 12.7. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:235: ?
2024.08.06 21:39:51.888301 [ 671 ] {} <Fatal> BaseDaemon: 12. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000002d24243a
2024.08.06 21:39:51.938859 [ 671 ] {} <Fatal> BaseDaemon: 13. ./build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:102: DB::runStep(std::function<void ()>, DB::ThreadStatus*, std::atomic<unsigned long>*) @ 0x000000002d241f51
2024.08.06 21:39:51.969139 [ 671 ] {} <Fatal> BaseDaemon: 14.0. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:818: ?
2024.08.06 21:39:51.969193 [ 671 ] {} <Fatal> BaseDaemon: 14.1. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:1184: ?
2024.08.06 21:39:51.969235 [ 671 ] {} <Fatal> BaseDaemon: 14. ./build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150: DB::ExceptionKeepingTransform::work() @ 0x000000002d2410a6
2024.08.06 21:39:51.986534 [ 671 ] {} <Fatal> BaseDaemon: 15.0. inlined from ./contrib/llvm-project/libcxx/include/list:588: std::__list_imp<DB::ExecutingGraph::Edge, std::allocator<DB::ExecutingGraph::Edge>>::__sz[abi:v15007]() const
2024.08.06 21:39:51.986594 [ 671 ] {} <Fatal> BaseDaemon: 15.1. inlined from ./contrib/llvm-project/libcxx/include/list:616: std::__list_imp<DB::ExecutingGraph::Edge, std::allocator<DB::ExecutingGraph::Edge>>::empty[abi:v15007]() const
2024.08.06 21:39:51.986638 [ 671 ] {} <Fatal> BaseDaemon: 15.2. inlined from ./contrib/llvm-project/libcxx/include/list:918: std::list<DB::ExecutingGraph::Edge, std::allocator<DB::ExecutingGraph::Edge>>::empty[abi:v15007]() const
2024.08.06 21:39:51.986687 [ 671 ] {} <Fatal> BaseDaemon: 15.3. inlined from ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:50: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*)
2024.08.06 21:39:51.986726 [ 671 ] {} <Fatal> BaseDaemon: 15. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:96: DB::ExecutionThreadContext::executeTask() @ 0x000000002cc1c76e
2024.08.06 21:39:52.040865 [ 671 ] {} <Fatal> BaseDaemon: 16. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000002cc03471
2024.08.06 21:39:52.096004 [ 671 ] {} <Fatal> BaseDaemon: 17. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:150: DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x000000002cc026d2
2024.08.06 21:39:52.248222 [ 671 ] {} <Fatal> BaseDaemon: 18. ./build_docker/./src/Storages/TimeSeries/PrometheusRemoteWriteProtocol.cpp:546: DB::(anonymous namespace)::insertToTargetTables(DB::(anonymous namespace)::BlocksToInsert&&, DB::StorageTimeSeries&, std::shared_ptr<DB::Context const>, Poco::Logger*) @ 0x000000002ca0a523
2024.08.06 21:39:52.421962 [ 671 ] {} <Fatal> BaseDaemon: 19. ./build_docker/./src/Storages/TimeSeries/PrometheusRemoteWriteProtocol.cpp:0: DB::PrometheusRemoteWriteProtocol::writeMetricsMetadata(google::protobuf::RepeatedPtrField<prometheus::MetricMetadata> const&) @ 0x000000002ca0cb11
2024.08.06 21:39:52.473941 [ 671 ] {} <Fatal> BaseDaemon: 20. ./build_docker/./src/Server/PrometheusRequestHandler.cpp:232: DB::PrometheusRequestHandler::RemoteWriteImpl::handlingRequestWithContext(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x000000002c9fa672
2024.08.06 21:39:52.514405 [ 671 ] {} <Fatal> BaseDaemon: 21.0. inlined from ./contrib/llvm-project/libcxx/include/optional:260: ~__optional_destruct_base
2024.08.06 21:39:52.514475 [ 671 ] {} <Fatal> BaseDaemon: 21. ./build_docker/./src/Server/PrometheusRequestHandler.cpp:125: DB::PrometheusRequestHandler::ImplWithContext::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x000000002c9f9b10
2024.08.06 21:39:52.557280 [ 671 ] {} <Fatal> BaseDaemon: 22. ./build_docker/./src/Server/PrometheusRequestHandler.cpp:361: DB::PrometheusRequestHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&, StrongTypedef<unsigned long, ProfileEvents::EventTag> const&) @ 0x000000002c9f5a4c
2024.08.06 21:39:52.566851 [ 671 ] {} <Fatal> BaseDaemon: 23. ./build_docker/./src/Server/HTTP/HTTPServerConnection.cpp:0: DB::HTTPServerConnection::run() @ 0x000000002caa0234
2024.08.06 21:39:52.572206 [ 671 ] {} <Fatal> BaseDaemon: 24. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x0000000033d721ef
2024.08.06 21:39:52.582879 [ 671 ] {} <Fatal> BaseDaemon: 25.0. inlined from ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: std::default_delete<Poco::Net::TCPServerConnection>::operator()[abi:v15007](Poco::Net::TCPServerConnection*) const
2024.08.06 21:39:52.582951 [ 671 ] {} <Fatal> BaseDaemon: 25.1. inlined from ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:305: std::unique_ptr<Poco::Net::TCPServerConnection, std::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15007](Poco::Net::TCPServerConnection*)
2024.08.06 21:39:52.582998 [ 671 ] {} <Fatal> BaseDaemon: 25.2. inlined from ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
2024.08.06 21:39:52.583040 [ 671 ] {} <Fatal> BaseDaemon: 25. ./build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x0000000033d72dd7
2024.08.06 21:39:52.595480 [ 671 ] {} <Fatal> BaseDaemon: 26. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:219: Poco::PooledThread::run() @ 0x0000000033c756eb
2024.08.06 21:39:52.606763 [ 671 ] {} <Fatal> BaseDaemon: 27.0. inlined from ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::get()
2024.08.06 21:39:52.606824 [ 671 ] {} <Fatal> BaseDaemon: 27.1. inlined from ./base/poco/Foundation/include/Poco/SharedPtr.h:139: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::assign(Poco::Runnable*)
2024.08.06 21:39:52.606871 [ 671 ] {} <Fatal> BaseDaemon: 27.2. inlined from ./base/poco/Foundation/include/Poco/SharedPtr.h:180: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::operator=(Poco::Runnable*)
2024.08.06 21:39:52.606907 [ 671 ] {} <Fatal> BaseDaemon: 27. ./base/poco/Foundation/src/Thread_POSIX.cpp:350: Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000033c6f848
2024.08.06 21:39:52.647539 [ 671 ] {} <Fatal> BaseDaemon: 28. asan_thread_start(void*) @ 0x000000000a8e3059
2024.08.06 21:39:52.647591 [ 671 ] {} <Fatal> BaseDaemon: 29. ? @ 0x00007f361ed11ac3
2024.08.06 21:39:52.647644 [ 671 ] {} <Fatal> BaseDaemon: 30. ? @ 0x00007f361eda3850
2024.08.06 21:39:52.647679 [ 671 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2024.08.06 21:39:53.989199 [ 671 ] {} <Fatal> BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build.
2024.08.06 21:39:53.989377 [ 671 ] {} <Fatal> BaseDaemon: Changed settings: allow_experimental_time_series_table = true
2024.08.06 21:39:57.243961 [ 673 ] {} <Fatal> BaseDaemon: ########## Short fault info ############
2024.08.06 21:39:57.244033 [ 673 ] {} <Fatal> BaseDaemon: (version 24.8.1.1649, build id: 5ABCCFC459A41E8E77C3D0C4D8C55923739A61B0, git hash: 406be745959c62100d5a97425df63234b14a756c) (from thread 11) Received signal 11
2024.08.06 21:39:57.244076 [ 673 ] {} <Fatal> BaseDaemon: Signal description: Segmentation fault
2024.08.06 21:39:57.244117 [ 673 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2024.08.06 21:39:57.244175 [ 673 ] {} <Fatal> BaseDaemon: Stack trace: 0x000055b636a5860d 0x000055b636fe3027 0x00007f361ecbf520 0x00007f361eca5899 0x000055b6369e910b 0x000055b6369eabd1 0x000055b626766445 0x000055b62bd1a735 0x000055b643fdca58 0x000055b64907be21 0x000055b64907a43a 0x000055b649079f51 0x000055b6490790a6 0x000055b648a5476e 0x000055b648a3b471 0x000055b648a3a6d2 0x000055b648842523 0x000055b648844b11 0x000055b648832672 0x000055b648831b10 0x000055b64882da4c 0x000055b6488d8234 0x000055b64fbaa1ef 0x000055b64fbaadd7 0x000055b64faad6eb 0x000055b64faa7848 0x000055b62671b059 0x00007f361ed11ac3 0x00007f361eda3850
2024.08.06 21:39:57.244212 [ 673 ] {} <Fatal> BaseDaemon: ########################################
2024.08.06 21:39:57.244267 [ 673 ] {} <Fatal> BaseDaemon: (version 24.8.1.1649, build id: 5ABCCFC459A41E8E77C3D0C4D8C55923739A61B0, git hash: 406be745959c62100d5a97425df63234b14a756c) (from thread 11) (query_id: ed2b8177-22db-4418-9381-0fc5f4d5d148) (query: ) Received signal Segmentation fault (11)
2024.08.06 21:39:57.244317 [ 673 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2024.08.06 21:39:57.244347 [ 673 ] {} <Fatal> BaseDaemon: Stack trace: 0x000055b636a5860d 0x000055b636fe3027 0x00007f361ecbf520 0x00007f361eca5899 0x000055b6369e910b 0x000055b6369eabd1 0x000055b626766445 0x000055b62bd1a735 0x000055b643fdca58 0x000055b64907be21 0x000055b64907a43a 0x000055b649079f51 0x000055b6490790a6 0x000055b648a5476e 0x000055b648a3b471 0x000055b648a3a6d2 0x000055b648842523 0x000055b648844b11 0x000055b648832672 0x000055b648831b10 0x000055b64882da4c 0x000055b6488d8234 0x000055b64fbaa1ef 0x000055b64fbaadd7 0x000055b64faad6eb 0x000055b64faa7848 0x000055b62671b059 0x00007f361ed11ac3 0x00007f361eda3850
2024.08.06 21:39:57.290501 [ 673 ] {} <Fatal> BaseDaemon: 0.0. inlined from ./build_docker/./src/Common/StackTrace.cpp:349: StackTrace::tryCapture()
2024.08.06 21:39:57.290570 [ 673 ] {} <Fatal> BaseDaemon: 0. ./build_docker/./src/Common/StackTrace.cpp:318: StackTrace::StackTrace(ucontext_t const&) @ 0x000000001ac2060d
2024.08.06 21:39:57.364633 [ 673 ] {} <Fatal> BaseDaemon: 1. ./build_docker/./src/Common/SignalHandlers.cpp:0: signalHandler(int, siginfo_t*, void*) @ 0x000000001b1ab027
2024.08.06 21:39:57.364695 [ 673 ] {} <Fatal> BaseDaemon: 2. ? @ 0x00007f361ecbf520
2024.08.06 21:39:57.364725 [ 673 ] {} <Fatal> BaseDaemon: 3. ? @ 0x00007f361eca5899
2024.08.06 21:39:57.454236 [ 673 ] {} <Fatal> BaseDaemon: 4.0. inlined from ./contrib/llvm-project/libcxx/include/atomic:958: int std::__cxx_atomic_load[abi:v15007]<int>(std::__cxx_atomic_base_impl<int> const*, std::memory_order)
2024.08.06 21:39:57.454307 [ 673 ] {} <Fatal> BaseDaemon: 4.1. inlined from ./contrib/llvm-project/libcxx/include/atomic:1560: std::__atomic_base<int, false>::load[abi:v15007](std::memory_order) const
2024.08.06 21:39:57.454352 [ 673 ] {} <Fatal> BaseDaemon: 4.2. inlined from ./contrib/llvm-project/libcxx/include/atomic:1564: std::__atomic_base<int, false>::operator int[abi:v15007]() const
2024.08.06 21:39:57.454398 [ 673 ] {} <Fatal> BaseDaemon: 4.3. inlined from ./base/poco/Foundation/include/Poco/Logger.h:2354: Poco::Logger::is(int) const
2024.08.06 21:39:57.454452 [ 673 ] {} <Fatal> BaseDaemon: 4. ./build_docker/./src/Common/Exception.cpp:47: DB::abortOnFailedAssertion(String const&, void* const*, unsigned long, unsigned long) @ 0x000000001abb110b
2024.08.06 21:39:57.540818 [ 673 ] {} <Fatal> BaseDaemon: 5.0. inlined from ./build_docker/./src/Common/Exception.cpp:208: DB::Exception::getStackFramePointers() const
2024.08.06 21:39:57.540892 [ 673 ] {} <Fatal> BaseDaemon: 5. ./build_docker/./src/Common/Exception.cpp:115: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001abb2bd1
2024.08.06 21:39:57.581860 [ 673 ] {} <Fatal> BaseDaemon: 6. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x000000000a92e445
2024.08.06 21:39:57.621378 [ 673 ] {} <Fatal> BaseDaemon: 7. DB::Exception::Exception<String, String, String, String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&, String&&, String&&, String&&) @ 0x000000000fee2735
2024.08.06 21:39:57.770696 [ 673 ] {} <Fatal> BaseDaemon: 8.0. inlined from ./build_docker/./src/Interpreters/ExpressionActions.cpp:639: DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool, bool)
2024.08.06 21:39:57.770770 [ 673 ] {} <Fatal> BaseDaemon: 8. ./build_docker/./src/Interpreters/ExpressionActions.cpp:770: DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool, bool) const @ 0x00000000281a4a58
2024.08.06 21:39:57.800234 [ 673 ] {} <Fatal> BaseDaemon: 9. ./build_docker/./src/Processors/Transforms/ExpressionTransform.cpp:0: DB::ConvertingTransform::onConsume(DB::Chunk) @ 0x000000002d243e21
2024.08.06 21:39:57.850632 [ 673 ] {} <Fatal> BaseDaemon: 10.0. inlined from ./contrib/llvm-project/libcxx/include/vector:432: __destroy_vector
2024.08.06 21:39:57.850696 [ 673 ] {} <Fatal> BaseDaemon: 10.1. inlined from ./contrib/llvm-project/libcxx/include/vector:449: ~vector
2024.08.06 21:39:57.850736 [ 673 ] {} <Fatal> BaseDaemon: 10.2. inlined from ./src/Common/CollectionOfDerived.h:28: ~CollectionOfDerivedItems
2024.08.06 21:39:57.850778 [ 673 ] {} <Fatal> BaseDaemon: 10.3. inlined from ./src/Processors/Chunk.h:52: ~Chunk
2024.08.06 21:39:57.850817 [ 673 ] {} <Fatal> BaseDaemon: 10.4. inlined from ./build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150: operator()
2024.08.06 21:39:57.850847 [ 673 ] {} <Fatal> BaseDaemon: 10.5. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:394: ?
2024.08.06 21:39:57.850876 [ 673 ] {} <Fatal> BaseDaemon: 10.6. inlined from ./contrib/llvm-project/libcxx/include/__functional/invoke.h:479: ?
2024.08.06 21:39:57.850914 [ 673 ] {} <Fatal> BaseDaemon: 10.7. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:235: ?
2024.08.06 21:39:57.850945 [ 673 ] {} <Fatal> BaseDaemon: 10. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000002d24243a
2024.08.06 21:39:57.910655 [ 673 ] {} <Fatal> BaseDaemon: 11. ./build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:102: DB::runStep(std::function<void ()>, DB::ThreadStatus*, std::atomic<unsigned long>*) @ 0x000000002d241f51
2024.08.06 21:39:57.954387 [ 673 ] {} <Fatal> BaseDaemon: 12.0. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:818: ?
2024.08.06 21:39:57.954465 [ 673 ] {} <Fatal> BaseDaemon: 12.1. inlined from ./contrib/llvm-project/libcxx/include/__functional/function.h:1184: ?
2024.08.06 21:39:57.954522 [ 673 ] {} <Fatal> BaseDaemon: 12. ./build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150: DB::ExceptionKeepingTransform::work() @ 0x000000002d2410a6
2024.08.06 21:39:57.976795 [ 673 ] {} <Fatal> BaseDaemon: 13.0. inlined from ./contrib/llvm-project/libcxx/include/list:588: std::__list_imp<DB::ExecutingGraph::Edge, std::allocator<DB::ExecutingGraph::Edge>>::__sz[abi:v15007]() const
2024.08.06 21:39:57.976876 [ 673 ] {} <Fatal> BaseDaemon: 13.1. inlined from ./contrib/llvm-project/libcxx/include/list:616: std::__list_imp<DB::ExecutingGraph::Edge, std::allocator<DB::ExecutingGraph::Edge>>::empty[abi:v15007]() const
2024.08.06 21:39:57.976923 [ 673 ] {} <Fatal> BaseDaemon: 13.2. inlined from ./contrib/llvm-project/libcxx/include/list:918: std::list<DB::ExecutingGraph::Edge, std::allocator<DB::ExecutingGraph::Edge>>::empty[abi:v15007]() const
2024.08.06 21:39:57.976974 [ 673 ] {} <Fatal> BaseDaemon: 13.3. inlined from ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:50: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*)
2024.08.06 21:39:57.977016 [ 673 ] {} <Fatal> BaseDaemon: 13. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:96: DB::ExecutionThreadContext::executeTask() @ 0x000000002cc1c76e
2024.08.06 21:39:58.031414 [ 673 ] {} <Fatal> BaseDaemon: 14. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x000000002cc03471
2024.08.06 21:39:58.088216 [ 673 ] {} <Fatal> BaseDaemon: 15. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:150: DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x000000002cc026d2
2024.08.06 21:39:58.236847 [ 673 ] {} <Fatal> BaseDaemon: 16. ./build_docker/./src/Storages/TimeSeries/PrometheusRemoteWriteProtocol.cpp:546: DB::(anonymous namespace)::insertToTargetTables(DB::(anonymous namespace)::BlocksToInsert&&, DB::StorageTimeSeries&, std::shared_ptr<DB::Context const>, Poco::Logger*) @ 0x000000002ca0a523
2024.08.06 21:39:58.356508 [ 673 ] {} <Fatal> BaseDaemon: 17. ./build_docker/./src/Storages/TimeSeries/PrometheusRemoteWriteProtocol.cpp:0: DB::PrometheusRemoteWriteProtocol::writeMetricsMetadata(google::protobuf::RepeatedPtrField<prometheus::MetricMetadata> const&) @ 0x000000002ca0cb11
2024.08.06 21:39:58.405153 [ 673 ] {} <Fatal> BaseDaemon: 18. ./build_docker/./src/Server/PrometheusRequestHandler.cpp:232: DB::PrometheusRequestHandler::RemoteWriteImpl::handlingRequestWithContext(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x000000002c9fa672
2024.08.06 21:39:58.442487 [ 673 ] {} <Fatal> BaseDaemon: 19.0. inlined from ./contrib/llvm-project/libcxx/include/optional:260: ~__optional_destruct_base
2024.08.06 21:39:58.442550 [ 673 ] {} <Fatal> BaseDaemon: 19. ./build_docker/./src/Server/PrometheusRequestHandler.cpp:125: DB::PrometheusRequestHandler::ImplWithContext::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x000000002c9f9b10
2024.08.06 21:39:58.483051 [ 673 ] {} <Fatal> BaseDaemon: 20. ./build_docker/./src/Server/PrometheusRequestHandler.cpp:361: DB::PrometheusRequestHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&, StrongTypedef<unsigned long, ProfileEvents::EventTag> const&) @ 0x000000002c9f5a4c
2024.08.06 21:39:58.491988 [ 673 ] {} <Fatal> BaseDaemon: 21. ./build_docker/./src/Server/HTTP/HTTPServerConnection.cpp:0: DB::HTTPServerConnection::run() @ 0x000000002caa0234
2024.08.06 21:39:58.497085 [ 673 ] {} <Fatal> BaseDaemon: 22. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x0000000033d721ef
2024.08.06 21:39:58.506908 [ 673 ] {} <Fatal> BaseDaemon: 23.0. inlined from ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: std::default_delete<Poco::Net::TCPServerConnection>::operator()[abi:v15007](Poco::Net::TCPServerConnection*) const
2024.08.06 21:39:58.506985 [ 673 ] {} <Fatal> BaseDaemon: 23.1. inlined from ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:305: std::unique_ptr<Poco::Net::TCPServerConnection, std::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15007](Poco::Net::TCPServerConnection*)
2024.08.06 21:39:58.507023 [ 673 ] {} <Fatal> BaseDaemon: 23.2. inlined from ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
2024.08.06 21:39:58.507063 [ 673 ] {} <Fatal> BaseDaemon: 23. ./build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x0000000033d72dd7
2024.08.06 21:39:58.518742 [ 673 ] {} <Fatal> BaseDaemon: 24. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:219: Poco::PooledThread::run() @ 0x0000000033c756eb
2024.08.06 21:39:58.529182 [ 673 ] {} <Fatal> BaseDaemon: 25.0. inlined from ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::get()
2024.08.06 21:39:58.529255 [ 673 ] {} <Fatal> BaseDaemon: 25.1. inlined from ./base/poco/Foundation/include/Poco/SharedPtr.h:139: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::assign(Poco::Runnable*)
2024.08.06 21:39:58.529314 [ 673 ] {} <Fatal> BaseDaemon: 25.2. inlined from ./base/poco/Foundation/include/Poco/SharedPtr.h:180: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::operator=(Poco::Runnable*)
2024.08.06 21:39:58.529354 [ 673 ] {} <Fatal> BaseDaemon: 25. ./base/poco/Foundation/src/Thread_POSIX.cpp:350: Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000033c6f848
2024.08.06 21:39:58.569930 [ 673 ] {} <Fatal> BaseDaemon: 26. asan_thread_start(void*) @ 0x000000000a8e3059
2024.08.06 21:39:58.569987 [ 673 ] {} <Fatal> BaseDaemon: 27. ? @ 0x00007f361ed11ac3
2024.08.06 21:39:58.570026 [ 673 ] {} <Fatal> BaseDaemon: 28. ? @ 0x00007f361eda3850
2024.08.06 21:39:58.570053 [ 673 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2024.08.06 21:39:59.898490 [ 673 ] {} <Fatal> BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build.
2024.08.06 21:39:59.898664 [ 673 ] {} <Fatal> BaseDaemon: Changed settings: allow_experimental_time_series_table = true

and the query

❯ grep -Fa '{ed2b8177-22db-4418-9381-0fc5f4d5d148}' clickhouse-server.log
2024.08.06 21:39:51.199570 [ 11 ] {ed2b8177-22db-4418-9381-0fc5f4d5d148} <Trace> PrometheusRemoteWriteProtocol: default.prometheus (2f28b648-8f59-4288-820a-2aa5a10e678e): Writing 217 metrics metadata
2024.08.06 21:39:51.199808 [ 11 ] {ed2b8177-22db-4418-9381-0fc5f4d5d148} <Information> PrometheusRemoteWriteProtocol: default.prometheus (2f28b648-8f59-4288-820a-2aa5a10e678e): Inserting 217 rows to the metrics table
2024.08.06 21:39:51.201062 [ 11 ] {ed2b8177-22db-4418-9381-0fc5f4d5d148} <Test> InterpreterInsertQuery: Pipeline could use up to 0 thread
2024.08.06 21:39:51.201548 [ 11 ] {ed2b8177-22db-4418-9381-0fc5f4d5d148} <Fatal> : Logical error: 'Unexpected return type from materialize. Expected LowCardinality(String). Got String. Action:
2024.08.06 21:39:51.243027 [ 11 ] {ed2b8177-22db-4418-9381-0fc5f4d5d148} <Fatal> : Stack trace (when copying this message, always include the lines below):

@antonio2368 Any ideas?
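For context, a minimal setup that reaches this code path, pieced together from the PR description and the "Changed settings" line in the crash report (the exact schema that triggers the LowCardinality(String) mismatch is not confirmed):

```sql
-- Matches the "Changed settings" line in the crash report.
SET allow_experimental_time_series_table = 1;

-- A TimeSeries table with default inner engines, as in the PR description.
CREATE TABLE default.prometheus ENGINE = TimeSeries;

-- The crash occurs while the remote-write handler inserts metrics metadata
-- into the inner metrics table, i.e. during a Prometheus POST to the /write
-- endpoint configured under <prometheus><handlers> in the server config.
```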

@nikitamikhaylov (Member) commented Aug 8, 2024

The FreeBSD build was broken in master; the revert is here: #68014
PR Check — expect adding docs for pr-feature: there is a problem with the documentation, so I've extracted it fully into another PR: #67940

Going to override the checks and merge this thing.

@nikitamikhaylov added this pull request to the merge queue Aug 8, 2024
Merged via the queue into ClickHouse:master with commit 4c289aa Aug 8, 2024
@robot-clickhouse-ci-1 added the pr-synced-to-cloud (The PR is synced to the cloud repo) label Aug 8, 2024
@jessechencx

Can we now use PromQL to query ClickHouse data? Could you please provide an example? And are there any performance concerns? @vitlibar

@joschi (Contributor) commented Sep 11, 2024

Can we now use PromQL to query ClickHouse data?

No, not yet. Please subscribe to #57545 to stay in the loop.

@jessechencx

Referring to https://clickhouse.com/docs/en/engines/table-engines/special/time_series: it should be possible to read data from ClickHouse using Prometheus, which would be equivalent to using PromQL to read from ClickHouse. Is that right? @joschi

@joschi (Contributor) commented Sep 11, 2024

@jessechencx No.

@jessechencx

Why not? I would like to use ClickHouse as the backend storage for Prometheus, query the data in ClickHouse through Prometheus, and use TimeSeries as the table engine. @joschi

@alont (Contributor) commented Sep 11, 2024

> why can not? I would like to use ClickHouse as the backend storage for Prometheus and then query the data in ClickHouse through Prometheus. and use time_series as table engine. @joschi

@jessechencx, reading from ClickHouse using Prometheus is done using the Prometheus remote read API, not the PromQL query API. You can use it to ship data between Prometheus and ClickHouse, but not to query that data with PromQL.
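For reference, wiring Prometheus to these endpoints looks roughly like this on the Prometheus side — a sketch assuming the /write and /read URLs and port 8053 from the PR description, with `clickhouse-host` as a placeholder hostname:

```yaml
# prometheus.yml (fragment)
remote_write:
  - url: "http://clickhouse-host:8053/write"   # remote_write handler from the server config

remote_read:
  - url: "http://clickhouse-host:8053/read"    # remote_read handler from the server config
    read_recent: true
```

With this, Prometheus ships samples to ClickHouse and can pull them back via the remote read protocol, but PromQL evaluation still happens inside Prometheus, not in ClickHouse.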


Labels: pr-feature (Pull request with new product feature), pr-synced-to-cloud (The PR is synced to the cloud repo)