
[zstd][#60] Add decompression size sanity check #115

Merged
Viq111 merged 5 commits into 1.x from viq111/decompress-sanity-check on Apr 14, 2022

Conversation

@Viq111 Viq111 (Collaborator) commented Apr 6, 2022

Fix #60 (see the issue for the full description).

Previously, we were trusting the size reported in the zstd header to allocate memory in the Decompress API. This meant a carefully crafted payload could make the zstd wrapper allocate an arbitrary amount of memory:

ᐅ git checkout eaf4b06330d96d58d043f862d1c25f4bc8db7f37
ᐅ go test -v
[...]
--- PASS: TestLegacy (0.00s)
    --- PASS: TestLegacy/0 (0.00s)
    --- PASS: TestLegacy/1 (0.00s)
=== RUN   TestBadPayloadZipBomb
^C
signal: interrupt
FAIL	github.com/DataDog/zstd	82.757s
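To illustrate why the header cannot be trusted: the zstd frame header stores the advertised decompressed size ("frame content size") as plain little-endian bytes, so a handful of input bytes can claim terabytes of output. A minimal sketch in Go, assuming the simplified RFC 8878 header layout (`craftHeader` and `claimedSize` are hypothetical helpers for illustration; this is not a complete, decodable frame):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const zstdMagic = 0xFD2FB528 // zstd frame magic number

// craftHeader builds a 13-byte frame prefix that advertises `claim` bytes
// of decompressed output. Simplified sketch of the RFC 8878 layout:
// magic (4 bytes), frame header descriptor (1 byte), frame content size
// (8 bytes). Not a complete, decodable frame.
func craftHeader(claim uint64) []byte {
	header := make([]byte, 13)
	binary.LittleEndian.PutUint32(header[0:4], zstdMagic)
	header[4] = 0xE0 // descriptor: single-segment, 8-byte content-size field
	binary.LittleEndian.PutUint64(header[5:13], claim)
	return header
}

// claimedSize reads the advertised frame content size back out.
func claimedSize(header []byte) uint64 {
	return binary.LittleEndian.Uint64(header[5:13])
}

func main() {
	h := craftHeader(1 << 40) // claim 1 TiB from a 13-byte input
	fmt.Printf("%d-byte input claims %d bytes of output\n", len(h), claimedSize(h))
}
```

A decoder that allocates whatever this field says is a zip bomb waiting to happen, which is exactly what the hanging `TestBadPayloadZipBomb` run above demonstrates.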


After this fix, we no longer allocate an unbounded amount of memory up front.
The main part of the change is at: 30c4b29#diff-fc46d6feefba07384476b9d2f5a4ae18d6f7322dcd684c388ebe1b2b8d19d394R55-R74

We now basically allocate up to: min(ZSTD_getFrameContentSize(), max(1MB, 10*input_size))
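That cap can be sketched in plain Go (`decompressSizeHint` is a hypothetical helper name; the real logic lives in the diff linked above):

```go
package main

import "fmt"

// decompressSizeHint returns a sanity-checked allocation size for the
// destination buffer: min(frameContentSize, max(1MB, 10*srcSize)).
// frameContentSize is the (untrusted) size reported by the zstd frame
// header; srcSize is the size of the compressed input.
func decompressSizeHint(frameContentSize, srcSize uint64) uint64 {
	const megabyte = 1 << 20
	upperBound := 10 * srcSize // assume at most ~10x compression up front
	if upperBound < megabyte {
		upperBound = megabyte // but always allow at least 1MB
	}
	if frameContentSize < upperBound {
		return frameContentSize // honest small hint: use it directly
	}
	return upperBound // suspiciously large hint: cap it
}

func main() {
	// A malicious 100-byte payload claiming 1 TiB is capped at 1MB.
	fmt.Println(decompressSizeHint(1<<40, 100))
	// An honest small payload uses the header hint directly.
	fmt.Println(decompressSizeHint(4096, 1024))
}
```

If the real decompressed data turns out larger than this cap, the wrapper falls back to streaming decompression rather than failing.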

Viq111 added 4 commits April 6, 2022 17:32
Fix #60
Before, we were blindly trusting the data returned by ZSTD_getDecompressedSize. This meant that with a specifically crafted payload, we would try to allocate a large amount of memory, resulting in a potential DoS.
Now set a sane limit and fall back to streaming.
Since zstd 1.3.0+, the zstd header has a size hint, so you should be able to decompress directly with that hint on the first try, or fall back directly to streaming. No need to try 3 times.
@Viq111 Viq111 marked this pull request as ready for review April 6, 2022 21:40
@Viq111 Viq111 (Collaborator, Author) commented Apr 14, 2022

No real change in the benchmarks.

@Viq111 Viq111 merged commit ed849f7 into 1.x Apr 14, 2022
Viq111 added a commit that referenced this pull request Jun 2, 2022
Fix #118
With the introduction of #115 to prevent zip bombs, the check was too strict: it compared against too large a size boundary. This fixes it and adds a test.
Viq111 added a commit that referenced this pull request Aug 24, 2023
Contrary to Decompress, bulk.Decompress does not retry with streaming decompression when one-shot decompression fails.

#115 introduced a cap preventing allocation of overly large buffers when the input zstd header was malicious. This had the side effect that for highly compressed payloads (> 10x), the dst buffer was capped too small and decompression would fail.

This fixes the issue and adds a test.


Development

Successfully merging this pull request may close these issues.

Decoder doesn't check output size before allocating

2 participants