Enforce worker limits in Compression Streams API #2502
Conversation
This is modeled after the limiting approach recently added in dh.c++ and should avoid CPU utilization issues when compressing a large amount of data at once.
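The core idea is to check the enforcer's limits between chunks of work rather than only at the start of a write, so one large compression call cannot burn through the CPU budget unchecked. The sketch below is illustrative only: `LimitEnforcer`, `EventOutcome`, and `compressAll` are simplified stand-ins for workerd's actual types (in workerd the enforcer is reached via `IoContext::current().getLimitEnforcer()`), and the chunk counter is a crude proxy for real CPU accounting.

```cpp
#include <cassert>
#include <cstdio>
#include <optional>
#include <vector>

// Simplified stand-in for workerd's limit machinery; names are illustrative.
enum class EventOutcome { EXCEEDED_CPU, EXCEEDED_MEMORY };

class LimitEnforcer {
public:
  explicit LimitEnforcer(size_t maxChunks) : maxChunks_(maxChunks) {}
  // Mirrors the shape of getLimitsExceeded(): empty while within budget,
  // an outcome once the budget is spent.
  std::optional<EventOutcome> getLimitsExceeded() const {
    if (chunksProcessed_ >= maxChunks_) return EventOutcome::EXCEEDED_CPU;
    return std::nullopt;
  }
  void recordChunk() { ++chunksProcessed_; }
private:
  size_t maxChunks_;
  size_t chunksProcessed_ = 0;
};

// Process input in fixed-size chunks, re-checking limits before each chunk
// so the enforcer gets a chance to abort a long-running write.
bool compressAll(const std::vector<char>& data, LimitEnforcer& enforcer) {
  constexpr size_t kChunk = 4096;
  for (size_t off = 0; off < data.size(); off += kChunk) {
    if (auto outcome = enforcer.getLimitsExceeded()) {
      std::fprintf(stderr, "Compression Stream write failed: limit exceeded\n");
      return false;
    }
    // ... compress data[off .. off + kChunk) here ...
    enforcer.recordChunk();
  }
  return true;
}
```

The key design point is that the limit check sits inside the per-chunk loop, which is what turns an unbounded single write into a series of interruptible steps.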
anonrig
left a comment
Is this easily testable? I feel like this should have a test that validates the behavior.
Testing this in workerd is likely difficult since we do not actually implement the CPU limiting functionality (it's all a no-op in workerd). We'll likely need to add a test to the internal repo separately.
Labeling as "needs-internal-pr" so that we can get a test added internally to verify this.
```cpp
KJ_IF_SOME(outcome, IoContext::current().getLimitEnforcer().getLimitsExceeded()) {
  if (outcome == EventOutcome::EXCEEDED_CPU) {
    JSG_FAIL_REQUIRE(Error, "Compression Stream write failed: Exceeded CPU limit");
  } else if (outcome == EventOutcome::EXCEEDED_MEMORY) {
```
So one key bit here is that I don't believe we are yet reporting the size of these buffers to v8, so v8 itself won't have any insight into the memory consumed by the decompression -- that is, unless these chunks are immediately being moved into a v8::ArrayBuffer. So I'm not sure the memory check here will be effective. Have you been able to verify that this appropriately accounts for the memory used during decompression?
This change was designed with CPU time limiting in mind; it does not take any additional steps to accurately measure memory consumption, but once that is implemented properly it should be able to limit that too. I can add a comment indicating this.
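For context on the memory point above: v8 only knows about memory held outside its heap if the embedder reports it (v8 exposes this via `Isolate::AdjustAmountOfExternalAllocatedMemory`). The sketch below is a self-contained stand-in for that bookkeeping, not workerd's actual implementation: `ExternalMemoryAccounter` and its methods are hypothetical names, shown only to illustrate why a memory limit check needs accurate reported numbers to act on.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for external-memory accounting. In a real embedder,
// adjust() would forward the delta to v8 (e.g. via
// Isolate::AdjustAmountOfExternalAllocatedMemory) so the engine and any
// limit enforcer see memory held in native buffers.
class ExternalMemoryAccounter {
public:
  explicit ExternalMemoryAccounter(int64_t limitBytes) : limit_(limitBytes) {}
  // Report an allocation (positive delta) or a free (negative delta).
  void adjust(int64_t deltaBytes) { reported_ += deltaBytes; }
  // A memory limit check is only as good as what has been reported.
  bool exceeded() const { return reported_ > limit_; }
  int64_t reported() const { return reported_; }
private:
  int64_t limit_;
  int64_t reported_ = 0;
};
```

Usage would be to call `adjust(+size)` as each decompressed chunk is produced and `adjust(-size)` when the buffer is released; without those calls, a memory limit check sees nothing and never fires, which is the concern raised in the comment above.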
Downstream PR and test have been added.
eefa115 to 3a91020