perf(ext/web): optimize node:buffer base64 encode/decode #32647
Merged
bartlomieju merged 7 commits into denoland:main on Mar 14, 2026
Conversation
Three optimizations for base64 operations in node:buffer:

1. Add `op_base64_decode_into` - decodes base64 directly into a target buffer at an offset, eliminating the intermediate Uint8Array allocation and blitBuffer copy that `base64Write` previously required. Uses stack allocation for inputs ≤8KB to avoid heap allocation overhead.
2. Add `op_base64_encode_from_buffer` - encodes a sub-range of a buffer to base64, avoiding the JS-side TypedArrayPrototypeSlice copy in `base64Slice`.
3. Change `op_base64_decode` to use `#[string(onebyte)] Cow<[u8]>` instead of `#[string] String`, avoiding UTF-8 conversion overhead since base64 is always ASCII.

Benchmarks (50K iterations, 1K warmup, 4KB payload):

- decode: 1849 → 2483 MB/s (+34%)
- write: 2332 → 3547 MB/s (+52%)
- encode: unchanged (already competitive)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
For properly-padded base64 (the common case), decode directly from input to target buffer with zero intermediate copies. Falls back to forgiving decode for whitespace/missing padding. Also switch op_base64_decode to forgiving_decode_to_vec for cleaner single-pass decoding.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…t paths

- op_base64_encode_from_buffer now uses v8::String::new_from_one_byte directly (base64 is always ASCII), with stack allocation for ≤6KB input
- Add base64 fast paths in Buffer.prototype.toString and write to skip getEncodingOps dispatch overhead (avoids toLowerCase on every call)
- Hybrid approach: small buffers (<=4KB) use #[string] return path, large buffers use new_from_one_byte for better throughput

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
base64_simd::STANDARD.decode panics (assert!) when the destination buffer is smaller than the decoded length, rather than returning Err. This caused crashes when Buffer.write() was called with a long base64 string on a small buffer. Also fixes a clippy lint (manual div_ceil).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Optimizes `node:buffer` base64 encode/decode/write to close the performance gap with Node.js.

Decode & Write

- `op_base64_decode_into`: fast-call op that decodes base64 directly into a target buffer, used by `Buffer.prototype.write(str, 'base64')`; avoids allocating an intermediate Uint8Array
- `base64_simd::STANDARD.decode` fast path for properly-padded base64 (zero intermediate copies), with fallback to `forgiving_decode` for whitespace/missing padding
- `op_base64_decode` now uses `forgiving_decode_to_vec` for cleaner single-pass decoding

Encode

- `op_base64_encode_from_buffer`: op that encodes a sub-range of a buffer to base64 using `v8::String::new_from_one_byte` directly (base64 is always ASCII, avoids UTF-8 processing overhead)
- Hybrid approach: small buffers use the `#[string]` return path, large buffers use `new_from_one_byte` for better throughput

JS dispatch

- Fast paths in `Buffer.prototype.toString` and `Buffer.prototype.write` skip `getEncodingOps` dispatch overhead (avoids `toLowerCase()` on every call)

Benchmark results (50K iterations, vs Node.js)
Faster than Node on 7/18 benchmarks, within 0.85-0.96x on most of the rest.
Towards #24323
Test plan
tools/format.js and tools/lint.js --js pass

🤖 Generated with Claude Code