
changefeedccl: Increase message size limits for kafka sink.#76265

Merged
craig[bot] merged 1 commit into cockroachdb:master from miretskiy:kafka
Feb 9, 2022

Conversation

@miretskiy
Contributor

The Sarama library, used by the Kafka sink, limits the maximum message
size locally. When those limits are exceeded, Sarama returns a
confusing error message that seems to imply the remote Kafka server
rejected the message, even though the rejection happened locally:
`kafka server: Message was too large, server rejected it to avoid allocation error.`

This PR addresses the problem by increasing the Sarama limits to 2GB
(max int32).

An alternative approach would have been to extend `kafka_sink_config` to
let users specify the maximum message size. However, this alternative is
less desirable. For one, a user-supplied configuration can run afoul of
other limits imposed by the Sarama library (e.g. `MaxRequestSize`), so
more configuration options would have to be added. In addition, this
would expose very low-level implementation details of the Sarama
library -- something that we probably should not do.

Fixes #76258

Release Notes (enterprise change): Kafka sink supports larger messages,
up to 2GB in size.

@miretskiy miretskiy requested a review from HonoreDB February 8, 2022 21:44
@miretskiy miretskiy requested a review from a team as a code owner February 8, 2022 21:44
@cockroach-teamcity
Member

This change is Reviewable

@miretskiy
Contributor Author

bors r+

@craig
Contributor

craig bot commented Feb 9, 2022

Build succeeded:

@blathers-crl

blathers-crl bot commented Feb 9, 2022

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


error creating merge commit from e109b4b to blathers/backport-release-21.1-76265: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

You may need to manually resolve merge conflicts with the backport tool.

Backport to branch 21.1.x failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.



Development

Successfully merging this pull request may close these issues.

changefeed: Increase maximum message sizes.

3 participants