Skip to content

feat: render token-usage.jsonl in the MCP gateway step summary#24029

Merged
pelikhan merged 1 commit intomainfrom
copilot/update-gateway-step-summary-renderer
Apr 2, 2026
Merged

feat: render token-usage.jsonl in the MCP gateway step summary#24029
pelikhan merged 1 commit intomainfrom
copilot/update-gateway-step-summary-renderer

Conversation

Copy link
Copy Markdown
Contributor

Copilot AI commented Apr 2, 2026

The MCP gateway step summary (parse_mcp_gateway_log.cjs) had no visibility into token consumption from the firewall proxy. The token-usage.jsonl file produced at /tmp/gh-aw/sandbox/firewall/logs/api-proxy-logs/token-usage.jsonl was only rendered by the separate parse_token_usage.sh shell step — this brings equivalent rendering into the JS gateway summary renderer so token data appears alongside gateway activity in a single step.

Changes

  • parseTokenUsageJsonl(jsonlContent) — parses JSONL, aggregates input/output/cache tokens and duration per model
  • formatDurationMs(ms) — formats milliseconds as 500ms / 2.5s / 1m30s
  • generateTokenUsageSummary(summary) — renders a ### 📊 Token Usage markdown table with per-model rows sorted by total tokens, a Total row, and cache efficiency percentage when non-zero
  • writeStepSummaryWithTokenUsage(core) — consolidates the "append token usage + core.summary.write()" pattern across all 4 main() exit paths; uses core.debug() for the "no file found" case to avoid log noise in normal runs

The rendered output matches the format already produced by parse_token_usage.sh:

### 📊 Token Usage

| Model | Input | Output | Cache Read | Cache Write | Requests | Duration |
|-------|------:|-------:|-----------:|------------:|---------:|---------:|
| claude-sonnet-4-6 | 3 | 414 | 40,984 | 26,035 | 2 | 10.4s |
| **Total** | **6** | **864** | **55,028** | **26,035** | **3** | **11.4s** |

_Cache efficiency: 90.0%_

15 new unit tests added covering all three pure functions.

Add token usage rendering to parse_mcp_gateway_log.cjs so the MCP
gateway step summary also shows per-model token usage from the firewall
proxy log at /tmp/gh-aw/sandbox/firewall/logs/api-proxy-logs/token-usage.jsonl.

- parseTokenUsageJsonl(): parse JSONL and aggregate by model
- formatDurationMs(): human-readable duration formatting
- generateTokenUsageSummary(): markdown table with totals and cache efficiency
- writeStepSummaryWithTokenUsage(): appends token usage then writes summary
- 15 new tests covering all three pure functions

Agent-Logs-Url: https://github.com/github/gh-aw/sessions/14beb7e2-e78b-458f-89c2-db4319a6ded2

Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
@pelikhan pelikhan marked this pull request as ready for review April 2, 2026 05:36
Copilot AI review requested due to automatic review settings April 2, 2026 05:36
Copy link
Copy Markdown
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

Adds token-usage visibility to the MCP gateway GitHub Actions step summary so token consumption from the firewall proxy is rendered alongside gateway activity.

Changes:

  • Added JSONL parsing + aggregation for firewall token-usage.jsonl and a Markdown renderer for a per-model + total token usage table.
  • Introduced duration formatting helper and integrated token-usage rendering into all main() exit paths via a consolidated summary writer.
  • Added unit tests for the new pure helper functions.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
actions/setup/js/parse_mcp_gateway_log.cjs Parse/render token-usage.jsonl and append token usage to the step summary across all exit paths.
actions/setup/js/parse_mcp_gateway_log.test.cjs Adds unit tests for token-usage parsing, duration formatting, and Markdown summary generation.
Comments suppressed due to low confidence (1)

actions/setup/js/parse_mcp_gateway_log.cjs:112

  • generateTokenUsageSummary() starts with "### 📊 Token Usage" without a leading newline. When appended after other sections that don't end with a newline (e.g., generateGatewayLogSummary() can end with "</details>"), the heading can run into the previous line. Consider prefixing the section with a newline (matching parse_token_usage.sh, which prints \n### ...) or ensuring the caller always adds a separator newline before appending.
  const lines = [];
  lines.push("### 📊 Token Usage\n");
  lines.push("| Model | Input | Output | Cache Read | Cache Write | Requests | Duration |");
  lines.push("|-------|------:|-------:|-----------:|------------:|---------:|---------:|");

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines +30 to +31
const minutes = Math.floor(seconds / 60);
const secs = Math.round(seconds % 60);
Copy link

Copilot AI Apr 2, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

formatDurationMs() can emit invalid minute/second strings because it rounds seconds % 60 (e.g., 119500ms -> minutes=1, secs=60 => 1m60s). Consider computing total whole seconds first (or carrying when secs === 60) so the seconds component is always 0–59. Also note this minute-format output differs from actions/setup/sh/parse_token_usage.sh, which renders durations as %.1fs even for >60s; if the goal is to match that output, keep durations in seconds with one decimal instead of switching to XmYs.

This issue also appears on line 109 of the same file.

Suggested change
const minutes = Math.floor(seconds / 60);
const secs = Math.round(seconds % 60);
const totalSeconds = Math.round(seconds);
const minutes = Math.floor(totalSeconds / 60);
const secs = totalSeconds % 60;

Copilot uses AI. Check for mistakes.
expect(formatDurationMs(60000)).toBe("1m0s");
expect(formatDurationMs(90000)).toBe("1m30s");
expect(formatDurationMs(120000)).toBe("2m0s");
});
Copy link

Copilot AI Apr 2, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The formatDurationMs tests don't cover the rounding edge case where the current implementation can produce 1m60s (e.g., ~119.5s). Adding a boundary test (and asserting normalization to 2m0s or similar) would prevent regressions once the formatting logic is fixed.

Suggested change
});
});
test("normalizes rounding at minute boundary", () => {
// ~119.5s should round up to exactly 2 minutes, not "1m60s"
expect(formatDurationMs(119500)).toBe("2m0s");
});

Copilot uses AI. Check for mistakes.
@pelikhan pelikhan merged commit 3493467 into main Apr 2, 2026
103 of 152 checks passed
@pelikhan pelikhan deleted the copilot/update-gateway-step-summary-renderer branch April 2, 2026 05:40
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants