perf: removed unnecessary offset and byteToHex calc #675

Closed
Cadienvan wants to merge 1 commit into uuidjs:main from Cadienvan:main

Conversation

@Cadienvan

Before:

uuid.stringify() x 1,298,884 ops/sec ±1.39% (91 runs sampled)
uuid.parse() x 1,620,813 ops/sec ±0.23% (97 runs sampled)

uuid.v1() x 885,718 ops/sec ±0.78% (96 runs sampled)
uuid.v1() fill existing array x 2,211,049 ops/sec ±0.48% (90 runs sampled)
uuid.v4() x 9,312,249 ops/sec ±2.75% (86 runs sampled)
uuid.v4() fill existing array x 2,894,864 ops/sec ±2.10% (90 runs sampled)
uuid.v3() x 151,383 ops/sec ±1.88% (75 runs sampled)
uuid.v5() x 168,435 ops/sec ±2.53% (64 runs sampled)

After:

uuid.stringify() x 1,557,441 ops/sec ±5.29% (87 runs sampled)
uuid.parse() x 1,631,789 ops/sec ±0.22% (97 runs sampled)

uuid.v1() x 1,118,442 ops/sec ±0.32% (96 runs sampled)
uuid.v1() fill existing array x 2,258,223 ops/sec ±0.44% (93 runs sampled)
uuid.v4() x 9,465,136 ops/sec ±2.26% (88 runs sampled)
uuid.v4() fill existing array x 2,986,081 ops/sec ±0.18% (96 runs sampled)
uuid.v3() x 155,015 ops/sec ±2.15% (75 runs sampled)
uuid.v5() x 171,307 ops/sec ±2.27% (77 runs sampled)

Results vary, but there are fewer calculations to do, and an internal implementation avoids applying a needless offset.
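For context, here is a sketch of the kind of change being discussed (an approximation, not the actual PR diff): uuid's `stringify()` builds the string from a precomputed `byteToHex` lookup table and accepts an optional `offset` parameter; an internal variant can skip the `offset + n` additions in the common case where the library stringifies from index 0. The name `stringifyInternal` is hypothetical.

```javascript
// Precomputed lookup table: a two-character hex string for every byte value.
const byteToHex = [];
for (let i = 0; i < 256; ++i) {
  byteToHex.push((i + 0x100).toString(16).slice(1));
}

// Hypothetical internal variant: no `offset` parameter, so every index is a
// plain constant rather than `offset + n` computed for each of the 16 bytes.
function stringifyInternal(arr) {
  return (
    byteToHex[arr[0]] + byteToHex[arr[1]] + byteToHex[arr[2]] + byteToHex[arr[3]] + '-' +
    byteToHex[arr[4]] + byteToHex[arr[5]] + '-' +
    byteToHex[arr[6]] + byteToHex[arr[7]] + '-' +
    byteToHex[arr[8]] + byteToHex[arr[9]] + '-' +
    byteToHex[arr[10]] + byteToHex[arr[11]] + byteToHex[arr[12]] +
    byteToHex[arr[13]] + byteToHex[arr[14]] + byteToHex[arr[15]]
  ).toLowerCase();
}

// The public API keeps its signature and delegates in the zero-offset case.
function stringify(arr, offset = 0) {
  return offset === 0
    ? stringifyInternal(arr)
    : stringifyInternal(arr.slice(offset, offset + 16));
}
```

The trade-off broofa raises below is visible here: the fast path duplicates the formatting logic, so the saved additions come at the cost of a second copy of `stringify()` to maintain.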

@broofa
Member

broofa commented Jan 8, 2023

Thanks for the contribution; however, I'm going to pass on this PR at this time. The perf improvement doesn't justify the increased bundle size or the maintenance burden of having multiple, separate implementations of stringify().

@broofa closed this Jan 8, 2023
@Cadienvan
Author

What about just going for the byteToHex change?

@broofa
Member

broofa commented Jan 9, 2023

I'm afraid the value just isn't there for this.

We're at a point with this module where performance is simply not a high priority. The vast majority of use cases for UUIDs involve operations that are several orders of magnitude slower than UUID generation. E.g. v1() perf is on the order of 1,000,000 ops / second. For v4() it's ~10,000,000 ops / second. Contrast that to databases or memory caches or network requests, where things typically max out at 1,000-10,000 ops / second. UUID perf ends up "in the noise".

If you have a real-world use case where the performance difference here actually matters (e.g. "This will reduce our AWS costs by $XXX/month"), I'd be very interested in hearing about it. But barring that, there's not really any benefit here to offset the increased code size and complexity (trivial as that may seem.)

(Edit to add: FWIW, we have taken PRs in the past on the basis of perf improvement - specifically #513 and #597 - but those yielded > 4X increases to perf for v4(), which is the main API people use. And, too, the code changes were relatively small.)
