fix: Improve Postgres performance #13318
Closed
erezrokah wants to merge 1 commit into cloudquery:main from
Conversation
```diff
  defaultBatchSize      = 10000
- defaultBatchSizeBytes = 1000000
  defaultBatchTimeout   = 10 * time.Second
+ defaultBatchSizeBytes = 100000000
```
Contributor
This should also be in the docs, and it should be a major version bump.
Member
Author
Going to split this PR so we can release the removal of list tables as a non-breaking change.
This was referenced Aug 25, 2023
Member
Author
kodiakhq bot pushed a commit that referenced this pull request on Aug 25, 2023
#### Summary

Extracted from #13318. Witnessed the bottleneck using `pprof`. With default batch settings I'm getting `2m29s` sync time instead of ~`11m`. With `batch_size_bytes: 100000000` and `batch_timeout: 60s` I'm getting `1m40s`. (Fixed in #13324)

Used spec:

```yaml
kind: source
spec:
  name: aws
  path: "cloudquery/aws"
  version: "v19.0.0"
  tables:
    - "*"
  skip_tables:
    - "aws_cloudtrail_*"
    - "aws_iam_*"
    - "aws_servicequotas_*"
  destinations:
    - postgresql
---
kind: destination
spec:
  name: "postgresql"
  registry: "grpc"
  path: localhost:8888
  # path: cloudquery/postgresql
  # version "v5.0.5"
  spec:
    connection_string: "postgresql://postgres:pass@localhost:5432/postgres?sslmode=disable"
    # batch_size_bytes: 100000000
    # batch_timeout: "60s"
```
kodiakhq bot pushed a commit that referenced this pull request on Aug 28, 2023
#### Summary

Extracted from #13318.

BEGIN_COMMIT_OVERRIDE

feat: Increase default batch size bytes to `100000000` (100 MB) and default batch timeout to `60` seconds.

BREAKING-CHANGE: Increase default batch size bytes to `100000000` (100 MB) and default batch timeout to `60` seconds. We discovered that higher default batch size bytes and timeout settings provide better out-of-the-box performance for the PostgreSQL destination. We're marking it as a breaking change because it might increase memory consumption in some environments.

END_COMMIT_OVERRIDE
Summary
Still need to figure out how to replace the logic I deleted, but listing the information schema on each batch insert creates a bottleneck (saw it in `pprof`), probably due to read/write locks, or because our queries to list tables and columns are slow. The changes in this PR result in about a 10x improvement.
Spec used (I used an old source version since I tested against an old Postgres version to get the expected numbers):
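One way to remove a per-batch `information_schema` lookup like the one described above is to resolve each table's columns once and serve subsequent batches from a cache. The sketch below is hypothetical (the `schemaCache` type and `fetch` hook are invented for illustration and stand in for the expensive catalog query); it is not the plugin's actual fix.

```go
package main

import (
	"fmt"
	"sync"
)

// schemaCache memoizes column lookups per table so the expensive
// information_schema query (represented by fetch) runs at most once
// per table. Hypothetical sketch, not the plugin's real code.
type schemaCache struct {
	mu    sync.Mutex
	cols  map[string][]string
	fetch func(table string) []string
}

func (c *schemaCache) columns(table string) []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if cached, ok := c.cols[table]; ok {
		return cached
	}
	cols := c.fetch(table)
	c.cols[table] = cols
	return cols
}

func main() {
	calls := 0
	cache := &schemaCache{
		cols: map[string][]string{},
		fetch: func(table string) []string {
			calls++ // would hit information_schema in a real implementation
			return []string{"id", "name"}
		},
	}
	cache.columns("aws_ec2_instances")
	cache.columns("aws_ec2_instances") // served from cache, no second query
	fmt.Println(calls)                 // prints 1
}
```

Caching trades staleness (a schema change made outside the writer is not seen until the cache is invalidated) for removing the repeated catalog round-trip and the lock contention it causes.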
Before:

After (with Postgres running from localhost with this fix):
