Conversation
🏷️ Automatic Labeling Summary

This PR has been automatically labeled based on the files changed and PR metadata. Applied labels: size-xs
🔍 Lighthouse Performance Audit
📥 Download full Lighthouse report Budget Compliance: Performance budgets enforced via |
…_PROMPT_PATTERNS.md
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/e765a832-29de-41ac-af70-9cc20a41254b
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
…ts calls in all news workflows
…ds and templates for every document
Pull request overview
Updates the agentic news workflow sources to ensure MCP access is routed through the AWF gateway and to strengthen the “AI must perform template-based analysis” protocol so workflows don’t proceed with fabricated/stub analysis when data download fails.
Changes:
- Adds `source scripts/mcp-setup.sh` before `pre-article-analysis.ts` (and lookback invocations) across multiple `news-*.md` workflows and `SHARED_PROMPT_PATTERNS.md`.
- Adds pipeline logging (`tee /tmp/pipeline-output.log`) and “zero data downloaded” verification messaging.
- Restructures `news-realtime-monitor.md` to clarify scripts as download-only and mandate methods/templates-driven AI analysis; also updates commit instructions to include `analysis/data/`.
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 24 comments.
Summary per file:
| File | Description |
|---|---|
| .github/workflows/news-weekly-review.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-week-ahead.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-realtime-monitor.md | Reframes Step 1.5 as download-only; adds mandatory methods/templates protocol; adds download verification and commits analysis/data/. |
| .github/workflows/news-propositions.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-motions.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-monthly-review.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-month-ahead.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-interpellations.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-evening-analysis.md | Sources MCP gateway setup before populate/analysis; adds logging and verification; sources setup in lookback. |
| .github/workflows/news-committee-reports.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-article-generator.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/SHARED_PROMPT_PATTERNS.md | Adds “AI-driven analysis using methods & templates” and debugging protocols; sources MCP setup in lookback snippet. |
```bash
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 100 2>&1 | tee /tmp/pipeline-output.log
PIPE_EXIT=${PIPESTATUS[0]}
```
The updated pipeline command uses source ... && npx ... | tee ... and then reads PIPESTATUS[0]. If source scripts/mcp-setup.sh fails, the pipeline won’t run and PIPESTATUS may remain from a previous pipeline, so PIPE_EXIT can be incorrect. Safer: run source on its own with an explicit failure path, or wrap the whole compound in { ...; } 2>&1 | tee ... so PIPESTATUS[0] always reflects this invocation.
Suggested change:

```diff
-source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 100 2>&1 | tee /tmp/pipeline-output.log
-PIPE_EXIT=${PIPESTATUS[0]}
+if ! source scripts/mcp-setup.sh; then
+  echo "❌ Failed to source scripts/mcp-setup.sh — MCP environment is not configured."
+  PIPE_EXIT=1
+else
+  { npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 100; } 2>&1 | tee /tmp/pipeline-output.log
+  PIPE_EXIT=${PIPESTATUS[0]}
+fi
```
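The brace-group variant can be sanity-checked in isolation. This minimal sketch uses hypothetical stand-in commands (not the real workflow scripts) and bash-specific `PIPESTATUS`; it shows that `PIPESTATUS[0]` reports the group's exit status, not `tee`'s:

```shell
#!/usr/bin/env bash
# Demo of the { ...; } 2>&1 | tee pattern with stand-in commands.
# The brace group runs as one pipeline component (in a subshell), so the
# `exit 7` terminates only that component and PIPESTATUS[0] captures it.
{ echo "setup ok"; exit 7; } 2>&1 | tee /tmp/demo-pipeline.log
PIPE_EXIT=${PIPESTATUS[0]}
echo "PIPE_EXIT=$PIPE_EXIT"
```

Because the group is always the first pipeline component, `PIPESTATUS[0]` here can never be left over from an earlier pipeline.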
```bash
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
fi
```
DATA_JSON_COUNT is computed over all of analysis/data/, so it can mask a failed download in this run if any JSON exists from previous runs. For a reliable gate, read Documents Analyzed from analysis/daily/$ARTICLE_DATE/data-download-manifest.md or count analysis/daily/$ARTICLE_DATE/documents/*.json.
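A manifest-based gate could look like the sketch below. The field name `Documents Analyzed` comes from this comment, but the exact line format (`Documents Analyzed: N`) is an assumption about the manifest layout; the sketch fabricates a manifest under a temp directory so it runs standalone:

```shell
#!/usr/bin/env bash
# Hedged sketch: gate on the per-date manifest instead of a global count.
# "Documents Analyzed: N" is an ASSUMED manifest line format.
ARTICLE_DATE="2026-02-01"                                  # placeholder date
BASE="$(mktemp -d)"
MANIFEST="$BASE/analysis/daily/$ARTICLE_DATE/data-download-manifest.md"

# Fabricated manifest so the sketch is self-contained:
mkdir -p "$(dirname "$MANIFEST")"
printf 'Documents Analyzed: 0\n' > "$MANIFEST"

# Extract the count; default to 0 if the line is missing entirely.
MANIFEST_DOCS=$(grep -oE 'Documents Analyzed: *[0-9]+' "$MANIFEST" | grep -oE '[0-9]+$')
MANIFEST_DOCS=${MANIFEST_DOCS:-0}
echo "Documents analyzed for $ARTICLE_DATE: $MANIFEST_DOCS"
if [ "$MANIFEST_DOCS" -eq 0 ]; then
  echo "ZERO documents for this run — do not proceed with fabricated analysis"
fi
```

Keying off the per-date manifest means stale JSON from earlier runs can never satisfy the gate.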
```bash
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
```
Same “zero data” signal problem as the other doc-type workflows: DATA_JSON_COUNT is global and can prevent the warning when MANIFEST_DOCS==0 for this run. Prefer using the scoped manifest (analysis/daily/$ARTICLE_DATE/committeeReports/data-download-manifest.md) and/or scoped documents directory for the check.
Suggested change:

```diff
-DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
-echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
+DATA_JSON_COUNT=$(find "analysis/daily/$ARTICLE_DATE/committeeReports/" -name "*.json" -type f 2>/dev/null | wc -l)
+echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files (scoped to $ARTICLE_DATE/committeeReports): $DATA_JSON_COUNT"
```
```bash
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
```
DATA_JSON_COUNT=$(find analysis/data/ ... | wc -l) counts all JSON under analysis/data/, including leftovers from previous runs, so it doesn’t reliably indicate whether this pipeline run downloaded anything for $ARTICLE_DATE. Consider validating via the per-date manifest (analysis/daily/$ARTICLE_DATE/data-download-manifest.md → Documents Analyzed) and/or checking analysis/daily/$ARTICLE_DATE/documents/*.json instead.
Suggested change:

```diff
-DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
-echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
-if [ "$DATA_JSON_COUNT" -eq 0 ]; then
-  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
+DATA_JSON_COUNT=$(find "analysis/daily/$ARTICLE_DATE/documents" -name "*.json" -type f 2>/dev/null | wc -l)
+echo "📊 JSON data files for $ARTICLE_DATE: $DATA_JSON_COUNT (must be > 0)"
+if [ "$DATA_JSON_COUNT" -eq 0 ]; then
+  echo "🚨 CRITICAL: Pipeline downloaded ZERO data for $ARTICLE_DATE. Agent MUST diagnose and fix — do NOT fabricate analysis."
```
```bash
npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 || echo "⚠️ Daily analysis failed (non-blocking) — article generation will proceed without enrichment"
echo "✅ Analysis artifacts written to analysis/daily/$ARTICLE_DATE/"
# CRITICAL: Source mcp-setup.sh FIRST to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 2>&1 | tee /tmp/pipeline-output.log
```
source scripts/mcp-setup.sh && npx ... | tee ... followed by PIPESTATUS[0] can yield a wrong PIPE_EXIT if source fails (pipeline never runs, PIPESTATUS may be stale). Prefer: source as a separate step with explicit failure handling, or { source ...; npx ...; } 2>&1 | tee ....
Suggested change:

```diff
-source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 2>&1 | tee /tmp/pipeline-output.log
+source scripts/mcp-setup.sh || { echo "🚨 Failed to source scripts/mcp-setup.sh — cannot run analysis pipeline."; exit 1; }
+npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 2>&1 | tee /tmp/pipeline-output.log
```
```bash
echo "📥 Downloading data for $ARTICLE_DATE..."
# CRITICAL: Source mcp-setup.sh to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the AWF gateway
# Scripts download data only — analysis is done by AI afterwards
source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=${MCP_SERVER_URL:-NOT SET}" && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50 2>&1 | tee /tmp/pipeline-output.log
```
The main download command uses source ... && echo ... && npx ... | tee ... and then reads PIPESTATUS[0]. If source scripts/mcp-setup.sh fails, the pipeline won’t run and PIPESTATUS may be stale, so PIPE_EXIT can be incorrect. Suggest: source as a separate step with explicit failure handling, or wrap everything in { ...; } 2>&1 | tee ... so PIPESTATUS[0] always corresponds to this run.
Suggested change:

```diff
-source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=${MCP_SERVER_URL:-NOT SET}" && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50 2>&1 | tee /tmp/pipeline-output.log
+source scripts/mcp-setup.sh
+SOURCE_EXIT=$?
+if [ "$SOURCE_EXIT" -ne 0 ]; then
+  echo "❌ Failed to source scripts/mcp-setup.sh (exit code $SOURCE_EXIT) — MCP configuration is required"
+  exit "$SOURCE_EXIT"
+fi
+{ echo "MCP_SERVER_URL=${MCP_SERVER_URL:-NOT SET}"; npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50; } 2>&1 | tee /tmp/pipeline-output.log
```
```bash
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
```
Same issue as motions: the “zero data” condition requires both MANIFEST_DOCS==0 and a global analysis/data JSON count of 0. Any cached JSON from other runs will hide a real “0 docs for this date/type” case. Prefer gating purely on the scoped manifest (or the scoped analysis/daily/$ARTICLE_DATE/propositions/documents/*.json count).
Suggested change:

```diff
-DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
-echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
-if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
-  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
+DATA_JSON_COUNT=$(find "analysis/daily/$ARTICLE_DATE/propositions/documents/" -name "*.json" -type f 2>/dev/null | wc -l)
+echo "📊 Documents in manifest: $MANIFEST_DOCS, scoped JSON data files: $DATA_JSON_COUNT"
+if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
+  echo "🚨 CRITICAL: Pipeline downloaded ZERO data for $ARTICLE_DATE/propositions. Agent MUST diagnose and fix — do NOT fabricate analysis."
```
```bash
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
```
Same “zero data” gate issue: checking a global analysis/data JSON count can mask MANIFEST_DOCS==0 for this date/type if any previous JSON exists. If you want a reliable signal, key off the scoped manifest and/or analysis/daily/$ARTICLE_DATE/interpellations/documents/*.json.
Suggested change:

```diff
-DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
-echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
-if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
-  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
+DOC_JSON_DIR="analysis/daily/$ARTICLE_DATE/interpellations/documents"
+if [ -d "$DOC_JSON_DIR" ]; then
+  DATA_JSON_COUNT=$(find "$DOC_JSON_DIR" -name "*.json" -type f 2>/dev/null | wc -l)
+else
+  DATA_JSON_COUNT=0
+fi
+echo "📊 Documents in manifest: $MANIFEST_DOCS, scoped JSON data files: $DATA_JSON_COUNT (date=$ARTICLE_DATE, type=interpellations)"
+if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
+  echo "🚨 CRITICAL: Pipeline downloaded ZERO data for $ARTICLE_DATE/interpellations. Agent MUST diagnose and fix — do NOT fabricate analysis."
```
```bash
echo "ℹ️ No existing manifest data for $LOOKBACK_DATE — running pre-article analysis"
npx tsx scripts/pre-article-analysis.ts --date "$LOOKBACK_DATE" --limit 50 --doc-type motions 2>/dev/null || true
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$LOOKBACK_DATE" --limit 50 --doc-type motions 2>/dev/null || true
# Re-check manifest after running analysis
```
The lookback fallback still runs the pipeline with 2>/dev/null || true. Given the new “agent MUST diagnose and fix” protocol, suppressing output here makes it much harder to troubleshoot repeated lookback failures. Consider teeing output to a log and surfacing the exit code (at least when the manifest remains at 0).
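One hedged way to implement that: tee the lookback output to a log and surface the pipeline's own exit code. In this sketch `false` stands in for the real `npx tsx scripts/pre-article-analysis.ts` call so it runs anywhere; `PIPESTATUS` is bash-specific:

```shell
#!/usr/bin/env bash
# Lookback fallback that keeps its output instead of `2>/dev/null || true`.
# `false` is a stand-in for the real pre-article-analysis.ts invocation.
LOOKBACK_DATE="2026-02-01"                    # placeholder date
LOG="/tmp/lookback-output.log"
false 2>&1 | tee "$LOG"
LOOKBACK_EXIT=${PIPESTATUS[0]}
if [ "$LOOKBACK_EXIT" -ne 0 ]; then
  echo "⚠️ Lookback pipeline exited $LOOKBACK_EXIT (non-blocking) — log kept at $LOG"
fi
```

The overall pipeline status is `tee`'s (success), so the step stays non-blocking while the real exit code and log remain available for diagnosis.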
```bash
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT"
```
The “Check results” snippet uses a global JSON count under analysis/data/. Because analysis/data/ is a cache across runs, this can be >0 even when the current run downloaded 0 docs for $ARTICLE_DATE. Recommend validating via the per-date manifest (Documents Analyzed) and/or analysis/daily/$ARTICLE_DATE[/<docType>]/documents/*.json to avoid false positives.
Suggested change:

```diff
-DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
-echo "📊 JSON data files: $DATA_JSON_COUNT"
+DATA_JSON_COUNT=$(find "analysis/daily/${ARTICLE_DATE}" -path "*/documents/*.json" -type f 2>/dev/null | wc -l)
+echo "📊 JSON data files for ${ARTICLE_DATE}: $DATA_JSON_COUNT"
```
All news agentic workflows download 0 documents because `pre-article-analysis.ts` runs without `source scripts/mcp-setup.sh` — the MCP client falls back to the public URL, which is blocked by the AWF sandbox firewall. The agent then fabricates analysis files without following any methodology guides or templates.

MCP gateway setup (root cause)
Added `source scripts/mcp-setup.sh` before every `pre-article-analysis.ts` invocation across all 11 `news-*.md` workflows, the lookback fallback calls, and `SHARED_PROMPT_PATTERNS.md`.

Before:

After:
Data verification
Every workflow now checks actual download results and emits a clear signal when zero data was downloaded:
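The verification block itself did not survive extraction; reconstructed from the hunks quoted in the review comments, it is essentially the following. It is self-contained: with no `analysis/data/` directory present, the count is simply 0 and the warning fires.

```shell
#!/usr/bin/env bash
# Verification snippet as added to each workflow (matches the review hunks).
# find's errors are suppressed, so a missing directory yields a count of 0.
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
fi
```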
Enforce AI-driven analysis using methods & templates (`SHARED_PROMPT_PATTERNS.md`)

New "AI-Driven Analysis Using Methods & Templates" protocol — the key design principle:
New "Script Debugging & Fixing Protocol" — agent must diagnose/fix broken scripts instead of silently skipping.
`news-realtime-monitor.md` restructure

Commits `analysis/data/` so downloaded JSON is committed alongside analysis.