Conversation
… use it. Create `shared/token-logs-24h.md` that encapsulates the cache-check + fallback download pattern for 24h Copilot and Claude token logs. The shared pre-step checks the Token Logs Fetch workflow's cache-memory artifact for today's data before downloading, ensuring logs are fetched at most once per day. Update `copilot-token-optimizer`, `claude-token-optimizer`, `copilot-token-usage-analyzer`, and `claude-token-usage-analyzer` to import the shared step and remove ~40 lines of duplicated cache-check boilerplate each.

Agent-Logs-Url: https://github.com/github/gh-aw/sessions/bd594fad-df12-41fd-8dc4-a14bfe12624d
Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
Pull request overview
Introduces a shared “token logs (24h)” pre-step workflow to reuse cached Copilot/Claude token logs across multiple analysis/optimization workflows, reducing duplicated cache-restore boilerplate.
Changes:
- Added `.github/workflows/shared/token-logs-24h.md` to restore cached logs (or fetch fresh) into a standard `/tmp/gh-aw/token-logs/` location.
- Updated the 4 token analyzer/optimizer workflows to consume logs from the shared standard path and import the shared step.
- Regenerated the corresponding `*.lock.yml` manifests to include the new shared import and injected restore step.
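For reference, each importing workflow presumably pulls in the shared step via its frontmatter; a hypothetical excerpt (the exact `imports:` syntax follows gh-aw's convention and is not shown in this diff):

```yaml
---
# copilot-token-usage-analyzer.md (illustrative frontmatter excerpt, not from the PR)
imports:
  - shared/token-logs-24h.md
---
```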
Summary per file
| File | Description |
|---|---|
| .github/workflows/shared/token-logs-24h.md | New shared pre-step to restore/fetch 24h Copilot/Claude logs into a common temp directory. |
| .github/workflows/copilot-token-usage-analyzer.md | Replaces inline log-fetch/cache logic with a copy from the shared pre-step output and imports the shared step. |
| .github/workflows/copilot-token-usage-analyzer.lock.yml | Regenerated manifest to include the shared import and injected restore step. |
| .github/workflows/copilot-token-optimizer.md | Replaces inline log-fetch/cache logic with a copy from the shared pre-step output and imports the shared step. |
| .github/workflows/copilot-token-optimizer.lock.yml | Regenerated manifest to include the shared import and injected restore step. |
| .github/workflows/claude-token-usage-analyzer.md | Replaces inline log-fetch/cache logic with a copy from the shared pre-step output and imports the shared step. |
| .github/workflows/claude-token-usage-analyzer.lock.yml | Regenerated manifest to include the shared import and injected restore step. |
| .github/workflows/claude-token-optimizer.md | Replaces inline log-fetch/cache logic with a copy from the shared pre-step output and imports the shared step. |
| .github/workflows/claude-token-optimizer.lock.yml | Regenerated manifest to include the shared import and injected restore step. |
Copilot's findings
Comments suppressed due to low confidence (4)
.github/workflows/copilot-token-usage-analyzer.lock.yml:309
- This new “Restore 24h token logs from cache” step runs before “Install gh-aw CLI” but invokes `gh aw logs` on cache-miss. In a fresh runner where the `github/gh-aw` extension isn’t installed yet, the fallback will fail and can result in empty log files being used downstream. Fix by ensuring `gh-aw` is installed before this step runs (or make the restore step install/upgrade `github/gh-aw` itself before calling `gh aw`).
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Restore 24h token logs from cache
run: "set -euo pipefail\nTOKEN_LOGS_DIR=\"/tmp/gh-aw/token-logs\"\nmkdir -p \"$TOKEN_LOGS_DIR\"\nTODAY=$(date -u +%Y-%m-%d)\n\n# Look for today's pre-fetched data from the Token Logs Fetch workflow\nFETCH_RUN_ID=$(gh run list \\\n --workflow \"token-logs-fetch.lock.yml\" \\\n --status success \\\n --limit 1 \\\n --json databaseId \\\n --jq '.[0].databaseId' 2>/dev/null || echo \"\")\n\nUSED_CACHE=false\nif [ -n \"$FETCH_RUN_ID\" ]; then\n CACHE_TMP=\"/tmp/gh-aw/token-logs-fetch-cache\"\n mkdir -p \"$CACHE_TMP\"\n gh run download \"$FETCH_RUN_ID\" \\\n --repo \"$GITHUB_REPOSITORY\" \\\n --name \"cache-memory\" \\\n --dir \"$CACHE_TMP\" \\\n 2>/dev/null || true\n CACHE_DATE=$(cat \"$CACHE_TMP/token-logs/fetch-date.txt\" 2>/dev/null || echo \"\")\n if [ \"$CACHE_DATE\" = \"$TODAY\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/copilot-runs.json\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/claude-runs.json\" ]; then\n echo \"✅ Using pre-fetched logs from Token Logs Fetch run $FETCH_RUN_ID (date: $CACHE_DATE)\"\n cp \"$CACHE_TMP/token-logs/copilot-runs.json\" \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n cp \"$CACHE_TMP/token-logs/claude-runs.json\" \"$TOKEN_LOGS_DIR/claude-runs.json\"\n USED_CACHE=true\n else\n echo \"ℹ️ No valid cached logs found (cache date: ${CACHE_DATE:-none}, today: $TODAY)\"\n fi\nfi\n\nif [ \"$USED_CACHE\" != \"true\" ]; then\n echo \"📥 Downloading Copilot and Claude workflow runs from last 24 hours...\"\n\n gh aw logs \\\n --engine copilot \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-copilot-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-copilot-raw.json\n jq '.runs // []' /tmp/token-logs-copilot-raw.json > \"$TOKEN_LOGS_DIR/copilot-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n\n gh aw logs \\\n --engine claude \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-claude-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-claude-raw.json\n jq '.runs // []' /tmp/token-logs-claude-raw.json > \"$TOKEN_LOGS_DIR/claude-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/claude-runs.json\"\nfi\n\necho \"✅ Copilot runs: $(jq 'length' \"$TOKEN_LOGS_DIR/copilot-runs.json\")\"\necho \"✅ Claude runs: $(jq 'length' \"$TOKEN_LOGS_DIR/claude-runs.json\")\""
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Install gh-aw CLI
run: |
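To make the restore step self-contained, a guard along these lines could precede the fallback. This is a sketch only: the `ensure_gh_aw` helper name and the exact extension-management commands are assumptions, mirroring what the review says `token-logs-fetch.md` already does.

```shell
# Sketch of a self-contained guard (assumed commands): make sure the
# gh-aw extension exists before the fallback path calls `gh aw logs`.
ensure_gh_aw() {
  if gh extension list 2>/dev/null | grep -q 'github/gh-aw'; then
    echo "gh-aw already installed"
  else
    # Hypothetical install path; fails loudly instead of producing empty logs.
    gh extension install github/gh-aw
  fi
}
```

Failing fast here would avoid the silent "empty JSON" failure mode the finding describes.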
.github/workflows/copilot-token-optimizer.lock.yml:326
- This new “Restore 24h token logs from cache” step runs before “Install gh-aw CLI” but invokes `gh aw logs` on cache-miss. If the `github/gh-aw` extension isn’t installed yet, the fallback can fail and lead to empty log files being consumed. Ensure `gh-aw` is installed before this step runs, or have this step install/upgrade `github/gh-aw` before calling `gh aw`.
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Restore 24h token logs from cache
run: "set -euo pipefail\nTOKEN_LOGS_DIR=\"/tmp/gh-aw/token-logs\"\nmkdir -p \"$TOKEN_LOGS_DIR\"\nTODAY=$(date -u +%Y-%m-%d)\n\n# Look for today's pre-fetched data from the Token Logs Fetch workflow\nFETCH_RUN_ID=$(gh run list \\\n --workflow \"token-logs-fetch.lock.yml\" \\\n --status success \\\n --limit 1 \\\n --json databaseId \\\n --jq '.[0].databaseId' 2>/dev/null || echo \"\")\n\nUSED_CACHE=false\nif [ -n \"$FETCH_RUN_ID\" ]; then\n CACHE_TMP=\"/tmp/gh-aw/token-logs-fetch-cache\"\n mkdir -p \"$CACHE_TMP\"\n gh run download \"$FETCH_RUN_ID\" \\\n --repo \"$GITHUB_REPOSITORY\" \\\n --name \"cache-memory\" \\\n --dir \"$CACHE_TMP\" \\\n 2>/dev/null || true\n CACHE_DATE=$(cat \"$CACHE_TMP/token-logs/fetch-date.txt\" 2>/dev/null || echo \"\")\n if [ \"$CACHE_DATE\" = \"$TODAY\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/copilot-runs.json\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/claude-runs.json\" ]; then\n echo \"✅ Using pre-fetched logs from Token Logs Fetch run $FETCH_RUN_ID (date: $CACHE_DATE)\"\n cp \"$CACHE_TMP/token-logs/copilot-runs.json\" \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n cp \"$CACHE_TMP/token-logs/claude-runs.json\" \"$TOKEN_LOGS_DIR/claude-runs.json\"\n USED_CACHE=true\n else\n echo \"ℹ️ No valid cached logs found (cache date: ${CACHE_DATE:-none}, today: $TODAY)\"\n fi\nfi\n\nif [ \"$USED_CACHE\" != \"true\" ]; then\n echo \"📥 Downloading Copilot and Claude workflow runs from last 24 hours...\"\n\n gh aw logs \\\n --engine copilot \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-copilot-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-copilot-raw.json\n jq '.runs // []' /tmp/token-logs-copilot-raw.json > \"$TOKEN_LOGS_DIR/copilot-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n\n gh aw logs \\\n --engine claude \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-claude-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-claude-raw.json\n jq '.runs // []' /tmp/token-logs-claude-raw.json > \"$TOKEN_LOGS_DIR/claude-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/claude-runs.json\"\nfi\n\necho \"✅ Copilot runs: $(jq 'length' \"$TOKEN_LOGS_DIR/copilot-runs.json\")\"\necho \"✅ Claude runs: $(jq 'length' \"$TOKEN_LOGS_DIR/claude-runs.json\")\""
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Install gh-aw CLI
run: |
.github/workflows/claude-token-usage-analyzer.lock.yml:309
- This new “Restore 24h token logs from cache” step runs before “Install gh-aw CLI” but invokes `gh aw logs` on cache-miss. On a runner without the `github/gh-aw` extension installed yet, the fallback can fail and yield empty logs for the analyzer. Ensure `gh-aw` is installed before this step executes, or have this step install/upgrade `github/gh-aw` before calling `gh aw`.
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Restore 24h token logs from cache
run: "set -euo pipefail\nTOKEN_LOGS_DIR=\"/tmp/gh-aw/token-logs\"\nmkdir -p \"$TOKEN_LOGS_DIR\"\nTODAY=$(date -u +%Y-%m-%d)\n\n# Look for today's pre-fetched data from the Token Logs Fetch workflow\nFETCH_RUN_ID=$(gh run list \\\n --workflow \"token-logs-fetch.lock.yml\" \\\n --status success \\\n --limit 1 \\\n --json databaseId \\\n --jq '.[0].databaseId' 2>/dev/null || echo \"\")\n\nUSED_CACHE=false\nif [ -n \"$FETCH_RUN_ID\" ]; then\n CACHE_TMP=\"/tmp/gh-aw/token-logs-fetch-cache\"\n mkdir -p \"$CACHE_TMP\"\n gh run download \"$FETCH_RUN_ID\" \\\n --repo \"$GITHUB_REPOSITORY\" \\\n --name \"cache-memory\" \\\n --dir \"$CACHE_TMP\" \\\n 2>/dev/null || true\n CACHE_DATE=$(cat \"$CACHE_TMP/token-logs/fetch-date.txt\" 2>/dev/null || echo \"\")\n if [ \"$CACHE_DATE\" = \"$TODAY\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/copilot-runs.json\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/claude-runs.json\" ]; then\n echo \"✅ Using pre-fetched logs from Token Logs Fetch run $FETCH_RUN_ID (date: $CACHE_DATE)\"\n cp \"$CACHE_TMP/token-logs/copilot-runs.json\" \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n cp \"$CACHE_TMP/token-logs/claude-runs.json\" \"$TOKEN_LOGS_DIR/claude-runs.json\"\n USED_CACHE=true\n else\n echo \"ℹ️ No valid cached logs found (cache date: ${CACHE_DATE:-none}, today: $TODAY)\"\n fi\nfi\n\nif [ \"$USED_CACHE\" != \"true\" ]; then\n echo \"📥 Downloading Copilot and Claude workflow runs from last 24 hours...\"\n\n gh aw logs \\\n --engine copilot \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-copilot-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-copilot-raw.json\n jq '.runs // []' /tmp/token-logs-copilot-raw.json > \"$TOKEN_LOGS_DIR/copilot-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n\n gh aw logs \\\n --engine claude \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-claude-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-claude-raw.json\n jq '.runs // []' /tmp/token-logs-claude-raw.json > \"$TOKEN_LOGS_DIR/claude-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/claude-runs.json\"\nfi\n\necho \"✅ Copilot runs: $(jq 'length' \"$TOKEN_LOGS_DIR/copilot-runs.json\")\"\necho \"✅ Claude runs: $(jq 'length' \"$TOKEN_LOGS_DIR/claude-runs.json\")\""
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Install gh-aw CLI
run: |
.github/workflows/claude-token-optimizer.lock.yml:326
- This new “Restore 24h token logs from cache” step runs before “Install gh-aw CLI” but invokes `gh aw logs` on cache-miss. If the `github/gh-aw` extension isn’t installed yet, the fallback can fail and downstream steps may proceed with empty run logs. Ensure `gh-aw` is installed before this step executes, or have this step install/upgrade `github/gh-aw` before calling `gh aw`.
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Restore 24h token logs from cache
run: "set -euo pipefail\nTOKEN_LOGS_DIR=\"/tmp/gh-aw/token-logs\"\nmkdir -p \"$TOKEN_LOGS_DIR\"\nTODAY=$(date -u +%Y-%m-%d)\n\n# Look for today's pre-fetched data from the Token Logs Fetch workflow\nFETCH_RUN_ID=$(gh run list \\\n --workflow \"token-logs-fetch.lock.yml\" \\\n --status success \\\n --limit 1 \\\n --json databaseId \\\n --jq '.[0].databaseId' 2>/dev/null || echo \"\")\n\nUSED_CACHE=false\nif [ -n \"$FETCH_RUN_ID\" ]; then\n CACHE_TMP=\"/tmp/gh-aw/token-logs-fetch-cache\"\n mkdir -p \"$CACHE_TMP\"\n gh run download \"$FETCH_RUN_ID\" \\\n --repo \"$GITHUB_REPOSITORY\" \\\n --name \"cache-memory\" \\\n --dir \"$CACHE_TMP\" \\\n 2>/dev/null || true\n CACHE_DATE=$(cat \"$CACHE_TMP/token-logs/fetch-date.txt\" 2>/dev/null || echo \"\")\n if [ \"$CACHE_DATE\" = \"$TODAY\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/copilot-runs.json\" ] && \\\n [ -s \"$CACHE_TMP/token-logs/claude-runs.json\" ]; then\n echo \"✅ Using pre-fetched logs from Token Logs Fetch run $FETCH_RUN_ID (date: $CACHE_DATE)\"\n cp \"$CACHE_TMP/token-logs/copilot-runs.json\" \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n cp \"$CACHE_TMP/token-logs/claude-runs.json\" \"$TOKEN_LOGS_DIR/claude-runs.json\"\n USED_CACHE=true\n else\n echo \"ℹ️ No valid cached logs found (cache date: ${CACHE_DATE:-none}, today: $TODAY)\"\n fi\nfi\n\nif [ \"$USED_CACHE\" != \"true\" ]; then\n echo \"📥 Downloading Copilot and Claude workflow runs from last 24 hours...\"\n\n gh aw logs \\\n --engine copilot \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-copilot-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-copilot-raw.json\n jq '.runs // []' /tmp/token-logs-copilot-raw.json > \"$TOKEN_LOGS_DIR/copilot-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/copilot-runs.json\"\n\n gh aw logs \\\n --engine claude \\\n --start-date -1d \\\n --json \\\n -c 300 \\\n > /tmp/token-logs-claude-raw.json 2>/dev/null || echo '{\"runs\":[]}' > /tmp/token-logs-claude-raw.json\n jq '.runs // []' /tmp/token-logs-claude-raw.json > \"$TOKEN_LOGS_DIR/claude-runs.json\" 2>/dev/null || echo \"[]\" > \"$TOKEN_LOGS_DIR/claude-runs.json\"\nfi\n\necho \"✅ Copilot runs: $(jq 'length' \"$TOKEN_LOGS_DIR/copilot-runs.json\")\"\necho \"✅ Claude runs: $(jq 'length' \"$TOKEN_LOGS_DIR/claude-runs.json\")\""
- env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
name: Install gh-aw CLI
run: |
- Files reviewed: 9/9 changed files
- Comments generated: 2
gh aw logs \
  --engine copilot \
  --start-date -1d \
  --json \
  -c 300 \
  > /tmp/token-logs-copilot-raw.json 2>/dev/null || echo '{"runs":[]}' > /tmp/token-logs-copilot-raw.json
jq '.runs // []' /tmp/token-logs-copilot-raw.json > "$TOKEN_LOGS_DIR/copilot-runs.json" 2>/dev/null || echo "[]" > "$TOKEN_LOGS_DIR/copilot-runs.json"

gh aw logs \
  --engine claude \
  --start-date -1d \
  --json \
  -c 300 \
  > /tmp/token-logs-claude-raw.json 2>/dev/null || echo '{"runs":[]}' > /tmp/token-logs-claude-raw.json
jq '.runs // []' /tmp/token-logs-claude-raw.json > "$TOKEN_LOGS_DIR/claude-runs.json" 2>/dev/null || echo "[]" > "$TOKEN_LOGS_DIR/claude-runs.json"
The fallback path calls gh aw logs, but this shared step doesn’t ensure the gh-aw extension is installed. Because imported steps are executed before the importing workflow’s own steps (see generated *.lock.yml where this runs before “Install gh-aw CLI”), a cache-miss will run gh aw logs without the extension and silently produce empty JSON, leading to incorrect “no runs” reports. Make this step self-contained by installing/upgrading github/gh-aw (like token-logs-fetch.md does) before invoking gh aw, and consider failing if gh aw logs still can’t run rather than defaulting to [].
Suggested change — replace:

gh aw logs \
  --engine copilot \
  --start-date -1d \
  --json \
  -c 300 \
  > /tmp/token-logs-copilot-raw.json 2>/dev/null || echo '{"runs":[]}' > /tmp/token-logs-copilot-raw.json
jq '.runs // []' /tmp/token-logs-copilot-raw.json > "$TOKEN_LOGS_DIR/copilot-runs.json" 2>/dev/null || echo "[]" > "$TOKEN_LOGS_DIR/copilot-runs.json"
gh aw logs \
  --engine claude \
  --start-date -1d \
  --json \
  -c 300 \
  > /tmp/token-logs-claude-raw.json 2>/dev/null || echo '{"runs":[]}' > /tmp/token-logs-claude-raw.json
jq '.runs // []' /tmp/token-logs-claude-raw.json > "$TOKEN_LOGS_DIR/claude-runs.json" 2>/dev/null || echo "[]" > "$TOKEN_LOGS_DIR/claude-runs.json"

with:

echo "🔧 Ensuring github/gh-aw extension is installed..."
gh extension install github/gh-aw >/dev/null 2>&1 || gh extension upgrade github/gh-aw >/dev/null 2>&1
if ! gh aw --help >/dev/null 2>&1; then
  echo "❌ gh-aw extension is not available; cannot fetch token logs" >&2
  exit 1
fi
gh aw logs \
  --engine copilot \
  --start-date -1d \
  --json \
  -c 300 \
  > /tmp/token-logs-copilot-raw.json
jq '.runs // []' /tmp/token-logs-copilot-raw.json > "$TOKEN_LOGS_DIR/copilot-runs.json"
gh aw logs \
  --engine claude \
  --start-date -1d \
  --json \
  -c 300 \
  > /tmp/token-logs-claude-raw.json
jq '.runs // []' /tmp/token-logs-claude-raw.json > "$TOKEN_LOGS_DIR/claude-runs.json"
FETCH_RUN_ID=$(gh run list \
  --workflow "token-logs-fetch.lock.yml" \
  --status success \
  --limit 1 \
  --json databaseId \
  --jq '.[0].databaseId' 2>/dev/null || echo "")

USED_CACHE=false
if [ -n "$FETCH_RUN_ID" ]; then
  CACHE_TMP="/tmp/gh-aw/token-logs-fetch-cache"
  mkdir -p "$CACHE_TMP"
  gh run download "$FETCH_RUN_ID" \
    --repo "$GITHUB_REPOSITORY" \
    --name "cache-memory" \
    --dir "$CACHE_TMP" \
    2>/dev/null || true
`gh run list --jq '.[0].databaseId'` returns the literal string `null` when there are no matching runs; since that is a non-empty string, the `[ -n "$FETCH_RUN_ID" ]` check treats it as valid and attempts `gh run download "null"` (whose errors are currently suppressed). Consider using `--jq '.[0].databaseId // empty'` or explicitly checking for `null` to avoid the extra failing download attempt and make cache detection more robust.
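The explicit null check can be sketched as follows; `check_fetch_run` is an illustrative helper name, not part of the PR, and the hardcoded values simulate what `gh run list --jq '.[0].databaseId'` would return.

```shell
# Treat both "" and the literal string "null" as "no prior fetch run".
check_fetch_run() {
  if [ -n "$1" ] && [ "$1" != "null" ]; then
    echo "valid"
  else
    echo "invalid"
  fi
}
```

With `--jq '.[0].databaseId // empty'` the no-runs case yields an empty string directly, so the existing `[ -n "$FETCH_RUN_ID" ]` test would suffice on its own.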
Summary
Creates a shared agentic workflow (`shared/token-logs-24h.md`) that encapsulates the "check cache → download if missing" pattern for 24h Copilot and Claude token logs, and updates the four token analysis/optimizer workflows to use it.

Problem
Four workflows (`copilot-token-optimizer`, `claude-token-optimizer`, `copilot-token-usage-analyzer`, `claude-token-usage-analyzer`) each had ~40 lines of identical boilerplate to:
- check the `token-logs-fetch.lock.yml` workflow's `cache-memory` artifact for today's data
- fall back to calling `gh aw logs` directly if not

This duplicated code made maintenance harder, and the fallback paths diverged slightly between workflows.
Solution
New file: `.github/workflows/shared/token-logs-24h.md`

A shared pre-step that:
- checks the `cache-memory` artifact for today's data (UTC date match)
- falls back to `gh aw logs` if no valid cache is found
- writes results to `/tmp/gh-aw/token-logs/copilot-runs.json` and `/tmp/gh-aw/token-logs/claude-runs.json`

Updated workflows (each removes ~40 lines of boilerplate):
- `copilot-token-optimizer.md` → imports `shared/token-logs-24h.md`, reads from standard path
- `claude-token-optimizer.md` → imports `shared/token-logs-24h.md`, reads from standard path
- `copilot-token-usage-analyzer.md` → imports `shared/token-logs-24h.md`, reads from standard path
- `claude-token-usage-analyzer.md` → imports `shared/token-logs-24h.md`, reads from standard path

Cache Architecture