
Adding benchmarking tool to compare performance between PRs and releases#868

Merged
jdmarshall merged 4 commits into node-config:master from jdmarshall:benchmarks
Feb 10, 2026

Conversation

@jdmarshall
Collaborator

@jdmarshall jdmarshall commented Jan 20, 2026

This is a tool I wrote while working on prom-client, which had been using a similar tool that had gone defunct.

What I found is that it gets confusing to look at multiple PRs and tell whether any one of them is to blame for a regression, or whether an improvement that has already landed in trunk from another PR is being double counted. So I extended the functionality to allow git URLs as version numbers. This wraps a library called bench-node, which is authored by a member of the Node.js team.

The set of versions being checked here is excessive, but I wanted to verify that none of the code I had written or merged since becoming a maintainer had introduced any problems, and I'm not seeing any, at least in the paths currently being exercised here. We can delete a few of these versions as the benchmark count grows, but we're currently sitting at 20 seconds for these tests.
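The git-URLs-as-version-numbers idea can be sketched with a small classifier (a minimal illustration only; this is not faceoff's actual API, and the function name and heuristics are assumptions):

```javascript
// Distinguish a semver-ish release spec from a git URL so each entry in the
// comparison list can be resolved appropriately (illustrative heuristic only).
function classifyVersionSpec(spec) {
  if (/^(git(\+\w+)?|https?|ssh):\/\//.test(spec) || spec.endsWith('.git')) {
    return 'git';      // clone and build this ref
  }
  if (/^v?\d+\.\d+\.\d+/.test(spec)) {
    return 'release';  // install this published version from npm
  }
  return 'unknown';
}

console.log(classifyVersionSpec('3.3.12'));  // 'release'
console.log(classifyVersionSpec('https://github.com/jdmarshall/node-config.git'));  // 'git'
```

With something like this in place, the same comparison list can mix published releases with in-flight branches, which is what avoids the double-counting problem described above.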

Summary by CodeRabbit

  • Chores
    • Added benchmarking infrastructure and sample benchmark configurations.
    • Added new npm scripts: "benchmarks" and "ci" (ci runs tests plus benchmarks).
    • CI workflow updated to run the new "ci" script during test checks.
    • Benchmarks now excluded from published package and associated dev dependency added.


@coderabbitai

coderabbitai bot commented Jan 20, 2026

📝 Walkthrough


Added a benchmark suite and configs; CI now runs npm run ci; benchmarks added to .npmignore; package.json gains devDependency faceoff and new ci and benchmarks scripts.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **CI & Packaging**<br>`.github/workflows/tests.yml`, `.npmignore`, `package.json` | CI "Run Tests" step now invokes `npm run ci`; `.npmignore` now excludes `benchmarks`; `package.json` adds devDependency `faceoff` and scripts `ci` and `benchmarks`. |
| **Benchmark runner**<br>`benchmarks/index.js` | New benchmark harness using `faceoff` that loads sample configs, defines three suites (access functions, Util functions, config.util), runs benchmarks, logs results, and exits non-zero on failure. |
| **Benchmark configs (JSON / JSON5 / JSONC / meta)**<br>`benchmarks/config/benchmark.json`, `benchmarks/config/default.json`, `benchmarks/config/default.json5`, `benchmarks/config/default.jsonc` | New static config files (JSON/JSON5/JSONC) providing sample data (db port, Customers, arrays, strings with quotes, comments) for benchmark loading. |
| **Benchmark configs (JS / MJS / regexp)**<br>`benchmarks/config/default.js`, `benchmarks/config/default.mjs`, `benchmarks/config/local-regexp.js` | New JS/MJS modules exporting configuration objects (getters, buffers, dates, regexps); `.mjs` provides a default ESM export for loader tests. |

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant CLI as "CLI: npm run benchmarks"
    participant Runner as "benchmarks/index.js"
    participant Loader as "Config Loader/Util"
    participant Faceoff as "faceoff"
    participant Process as "process/console"

    CLI->>Runner: start Node with flags
    Runner->>Loader: set NODE_ENV & load SAMPLE_DATA (benchmarks/config/*)
    Runner->>Faceoff: register benchmark suites (access, util, config.util)
    Faceoff->>Runner: execute suites and collect metrics
    Runner->>Process: log results
    alt failures present
        Runner->>Process: exit non-zero
    else
        Runner->>Process: exit 0
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇
I nibble configs, JSON and more,
I race with Faceoff across the floor.
Keys and regexps, dates in a line,
I hop, I time, I measure — fine!
A rabbit's cheer for benchmarks divine.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main objective: adding a benchmarking tool to compare performance between PRs and releases, which matches the changes introducing the benchmark config files, runner, and npm scripts. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@benchmarks/index.js`:
- Around line 24-25: The object key 'branch ' contains an unintended trailing
space; update the key to 'branch' (remove the trailing space) in the mapping
where it's defined (the entry currently written as 'branch ') so downstream
lookups and benchmark output use the correct key name; check any code that
references 'branch ' and update those references to 'branch' if present.
- Around line 6-9: Move the environment setup so NODE_ENV and NODE_CONFIG_DIR
are assigned before invoking loadConfig; specifically, set process.env.NODE_ENV
= 'benchmark' and process.env.NODE_CONFIG_DIR = configDir prior to the call to
loadConfig() that initializes SAMPLE_DATA, ensuring the SAMPLE_DATA =
loadConfig() call reads the correct environment-dependent config.
🧹 Nitpick comments (4)
.github/workflows/tests.yml (1)

22-23: Consider upgrading actions/setup-node to v4.

actions/checkout@v4 is used on line 19, but actions/setup-node@v2 is outdated. For consistency and to benefit from bug fixes and performance improvements, consider upgrading to v4.

Suggested fix
      - name: Setup Node.js
-        uses: actions/setup-node@v2
+        uses: actions/setup-node@v4
        with:
          node-version: '22'
package.json (1)

60-62: Consider renaming build script to clarify its purpose.

The build script name typically implies compilation/transpilation, but here it runs tests and benchmarks. This could be confusing for contributors. Consider a more descriptive name like ci or check.

That said, this is a minor naming concern and the current approach is functional.
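For reference, a hedged sketch of what the renamed scripts section might look like (the exact script bodies here are assumptions, not the PR's actual contents):

```json
{
  "scripts": {
    "benchmarks": "node benchmarks/index.js",
    "ci": "npm test && npm run benchmarks"
  }
}
```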

benchmarks/config/default.mjs (1)

1-9: Minor: Inconsistent property quoting style.

EnvOverride uses unquoted keys while AnotherModule uses quoted keys. Consider using a consistent style throughout for readability.

Suggested fix
 export default {
     EnvOverride: {
         parm_number_1: "from_default_mjs",
         parm2: 22
     },
-    "AnotherModule": {
-        "parm10": "value10"
+    AnotherModule: {
+        parm10: "value10"
     }
 };
benchmarks/index.js (1)

29-38: Unused `val` variables may cause linter warnings.

The `val` variables are assigned but never read, which could trigger ESLint `no-unused-vars` warnings. While this is intentional for benchmarking purposes, consider using a `void` expression or disabling the lint rule locally.

Suggested fix to suppress warnings
   suite.add('get', (config, keys) => {
     for (const key of keys) {
-      let val = config[key];
+      void config[key];
     }
   });
 
   suite.add('get()', (config, keys) => {
     for (const key of keys) {
-      let val = config.get(key);
+      void config.get(key);
     }
   });

@jdmarshall jdmarshall merged commit 64206b3 into node-config:master Feb 10, 2026
2 checks passed
