
Fix agentic workflow data pipeline: add MCP gateway setup + enforce AI-driven analysis using templates#1449

Merged
pethers merged 5 commits into main from copilot/fix-data-download-issues
Mar 29, 2026

Conversation


Copilot AI commented Mar 29, 2026

All news agentic workflows download 0 documents because pre-article-analysis.ts runs without `source scripts/mcp-setup.sh` first; the MCP client falls back to the public URL, which is blocked by the AWF sandbox firewall. The agent then fabricates analysis files without following any methodology guides or templates.

MCP gateway setup (root cause)

Added `source scripts/mcp-setup.sh` before every pre-article-analysis.ts invocation across all 11 news-*.md workflows, the lookback fallback calls, and SHARED_PROMPT_PATTERNS.md.

Before:

npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50 || echo "⚠️ Analysis failed (non-blocking)"

After:

source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=${MCP_SERVER_URL:-NOT SET}"
npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50 2>&1 | tee /tmp/pipeline-output.log
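Since the command now ends in a pipe into tee, the analysis script's own exit status has to be read from bash's PIPESTATUS array rather than $?, which would report tee's status. A minimal bash sketch of the pattern (the stand-in subshell replaces the real script):

```shell
# Sketch (bash-specific): capture the exit status of the left-hand command
# in a pipeline into tee; $? alone would report tee's (almost always 0) status.
{ echo "downloading documents..."; exit 7; } 2>&1 | tee /tmp/pipeline-output.log
PIPE_EXIT=${PIPESTATUS[0]}   # 7: the subshell's exit code, not tee's 0
echo "pipeline exit code: $PIPE_EXIT"
```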

Data verification

Every workflow now checks actual download results and emits a clear signal when zero data was downloaded:

DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix."
fi
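Note that this count scans all of analysis/data/, so JSON left over from earlier runs can hide a failed download for the current date. A scoped variant could look like the following sketch, where the analysis/daily/$ARTICLE_DATE/documents path is an assumption about the repo layout:

```shell
# Sketch: count only the JSON downloaded for this run's date.
# The analysis/daily/$ARTICLE_DATE/documents layout is an assumed path.
ARTICLE_DATE="${ARTICLE_DATE:-2026-03-29}"
SCOPED_DIR="analysis/daily/$ARTICLE_DATE/documents"
mkdir -p "$SCOPED_DIR"   # demo only: guarantee the directory exists
SCOPED_COUNT=$(find "$SCOPED_DIR" -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files for $ARTICLE_DATE: $SCOPED_COUNT"
if [ "$SCOPED_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: zero JSON files downloaded for $ARTICLE_DATE"
fi
```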

Enforce AI-driven analysis using methods & templates (SHARED_PROMPT_PATTERNS.md)

New "AI-Driven Analysis Using Methods & Templates" protocol — the key design principle:

  • Scripts = download data only
  • AI = does ALL analysis, reading all 6 methodology guides and all 8 templates and creating an analysis for every downloaded document

New "Script Debugging & Fixing Protocol" — agent must diagnose/fix broken scripts instead of silently skipping.

news-realtime-monitor.md restructure

  • Step 1.5 renamed to "Download Data Using Scripts" (clarifies scripts only download)
  • Added "MANDATORY: AI Must Analyse ALL Data Using Methods & Templates" section with explicit 6-step protocol, template-to-file mapping table, and compliance checklist
  • Commit step now includes analysis/data/ so downloaded JSON is committed alongside analysis


@github-actions github-actions bot added the size-xs Extra small change (< 10 lines) label Mar 29, 2026
@github-actions

🏷️ Automatic Labeling Summary

This PR has been automatically labeled based on the files changed and PR metadata.

Applied Labels: size-xs

Label Categories

  • 🗳️ Content: news, dashboard, visualization, intelligence
  • 💻 Technology: html-css, javascript, workflow, security
  • 📊 Data: cia-data, riksdag-data, data-pipeline, schema
  • 🌍 I18n: i18n, translation, rtl
  • 🔒 ISMS: isms, iso-27001, nist-csf, cis-controls
  • 🏗️ Infrastructure: ci-cd, deployment, performance, monitoring
  • 🔄 Quality: testing, accessibility, documentation, refactor
  • 🤖 AI: agent, skill, agentic-workflow

For more information, see .github/labeler.yml.

@github-actions

🔍 Lighthouse Performance Audit

| Category | Score | Status |
| --- | --- | --- |
| Performance | 85/100 | 🟡 |
| Accessibility | 95/100 | 🟢 |
| Best Practices | 90/100 | 🟢 |
| SEO | 95/100 | 🟢 |

📥 Download full Lighthouse report

Budget Compliance: Performance budgets enforced via budget.json

…_PROMPT_PATTERNS.md

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/e765a832-29de-41ac-af70-9cc20a41254b

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
@github-actions github-actions bot added documentation Documentation updates workflow GitHub Actions workflows ci-cd CI/CD pipeline changes news News articles and content generation agentic-workflow Agentic workflow changes size-m Medium change (50-250 lines) labels Mar 29, 2026

…ts calls in all news workflows

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/e765a832-29de-41ac-af70-9cc20a41254b

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
@github-actions github-actions bot added the size-l Large change (250-1000 lines) label Mar 29, 2026

…ds and templates for every document

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/e765a832-29de-41ac-af70-9cc20a41254b

Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>


Copilot AI changed the title from "[WIP] Fix data download and analysis template issues" to "Fix agentic workflow data pipeline: add MCP gateway setup + enforce AI-driven analysis using templates" Mar 29, 2026
Copilot AI requested a review from pethers March 29, 2026 23:45
@pethers pethers marked this pull request as ready for review March 29, 2026 23:49
Copilot AI review requested due to automatic review settings March 29, 2026 23:49
@pethers pethers merged commit 604fdd9 into main Mar 29, 2026
14 checks passed

Copilot AI left a comment


Pull request overview

Updates the agentic news workflow sources to ensure MCP access is routed through the AWF gateway and to strengthen the “AI must perform template-based analysis” protocol so workflows don’t proceed with fabricated/stub analysis when data download fails.

Changes:

  • Adds source scripts/mcp-setup.sh before pre-article-analysis.ts (and lookback invocations) across multiple news-*.md workflows and SHARED_PROMPT_PATTERNS.md.
  • Adds pipeline logging (tee /tmp/pipeline-output.log) and “zero data downloaded” verification messaging.
  • Restructures news-realtime-monitor.md to clarify scripts-as-download-only and mandates methods/templates-driven AI analysis; also updates commit instructions to include analysis/data/.

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 24 comments.

Show a summary per file
| File | Description |
| --- | --- |
| .github/workflows/news-weekly-review.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-week-ahead.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-realtime-monitor.md | Reframes Step 1.5 as download-only; adds mandatory methods/templates protocol; adds download verification and commits analysis/data/. |
| .github/workflows/news-propositions.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-motions.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-monthly-review.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-month-ahead.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/news-interpellations.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-evening-analysis.md | Sources MCP gateway setup before populate/analysis; adds logging and verification; sources setup in lookback. |
| .github/workflows/news-committee-reports.md | Sources MCP gateway setup; adds logging and manifest-based verification; sources setup in lookback. |
| .github/workflows/news-article-generator.md | Sources MCP gateway setup before analysis run; adds log capture + download verification. |
| .github/workflows/SHARED_PROMPT_PATTERNS.md | Adds "AI-driven analysis using methods & templates" and debugging protocols; sources MCP setup in lookback snippet. |

Comment on lines +373 to +374
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 100 2>&1 | tee /tmp/pipeline-output.log
PIPE_EXIT=${PIPESTATUS[0]}

Copilot AI Mar 29, 2026


The updated pipeline command uses source ... && npx ... | tee ... and then reads PIPESTATUS[0]. If source scripts/mcp-setup.sh fails, the pipeline won’t run and PIPESTATUS may remain from a previous pipeline, so PIPE_EXIT can be incorrect. Safer: run source on its own with an explicit failure path, or wrap the whole compound in { ...; } 2>&1 | tee ... so PIPESTATUS[0] always reflects this invocation.

Suggested change
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 100 2>&1 | tee /tmp/pipeline-output.log
PIPE_EXIT=${PIPESTATUS[0]}
if ! source scripts/mcp-setup.sh; then
  echo "❌ Failed to source scripts/mcp-setup.sh — MCP environment is not configured."
  PIPE_EXIT=1
else
  { npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 100; } 2>&1 | tee /tmp/pipeline-output.log
  PIPE_EXIT=${PIPESTATUS[0]}
fi

Comment on lines +377 to +381
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
fi

Copilot AI Mar 29, 2026


DATA_JSON_COUNT is computed over all of analysis/data/, so it can mask a failed download in this run if any JSON exists from previous runs. For a reliable gate, read Documents Analyzed from analysis/daily/$ARTICLE_DATE/data-download-manifest.md or count analysis/daily/$ARTICLE_DATE/documents/*.json.

Comment on lines +414 to +415
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"

Copilot AI Mar 29, 2026


Same “zero data” signal problem as the other doc-type workflows: DATA_JSON_COUNT is global and can prevent the warning when MANIFEST_DOCS==0 for this run. Prefer using the scoped manifest (analysis/daily/$ARTICLE_DATE/committeeReports/data-download-manifest.md) and/or scoped documents directory for the check.

Suggested change
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
DATA_JSON_COUNT=$(find "analysis/daily/$ARTICLE_DATE/committeeReports/" -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files (scoped to $ARTICLE_DATE/committeeReports): $DATA_JSON_COUNT"

Comment on lines +381 to +384
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."

Copilot AI Mar 29, 2026


DATA_JSON_COUNT=$(find analysis/data/ ... | wc -l) counts all JSON under analysis/data/, including leftovers from previous runs, so it doesn't reliably indicate whether this pipeline run downloaded anything for $ARTICLE_DATE. Consider validating via the per-date manifest (the "Documents Analyzed" count in analysis/daily/$ARTICLE_DATE/data-download-manifest.md) and/or checking analysis/daily/$ARTICLE_DATE/documents/*.json instead.

Suggested change
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT (must be > 0)"
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
DATA_JSON_COUNT=$(find "analysis/daily/$ARTICLE_DATE/documents" -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files for $ARTICLE_DATE: $DATA_JSON_COUNT (must be > 0)"
if [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data for $ARTICLE_DATE. Agent MUST diagnose and fix — do NOT fabricate analysis."

npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 || echo "⚠️ Daily analysis failed (non-blocking) — article generation will proceed without enrichment"
echo "✅ Analysis artifacts written to analysis/daily/$ARTICLE_DATE/"
# CRITICAL: Source mcp-setup.sh FIRST to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the gateway
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 2>&1 | tee /tmp/pipeline-output.log

Copilot AI Mar 29, 2026


source scripts/mcp-setup.sh && npx ... | tee ... followed by PIPESTATUS[0] can yield a wrong PIPE_EXIT if source fails (pipeline never runs, PIPESTATUS may be stale). Prefer: source as a separate step with explicit failure handling, or { source ...; npx ...; } 2>&1 | tee ....

Suggested change
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 2>&1 | tee /tmp/pipeline-output.log
source scripts/mcp-setup.sh || { echo "🚨 Failed to source scripts/mcp-setup.sh — cannot run analysis pipeline."; exit 1; }
npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 200 2>&1 | tee /tmp/pipeline-output.log

echo "📥 Downloading data for $ARTICLE_DATE..."
# CRITICAL: Source mcp-setup.sh to set MCP_SERVER_URL and MCP_AUTH_TOKEN for the AWF gateway
# Scripts download data only — analysis is done by AI afterwards
source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=${MCP_SERVER_URL:-NOT SET}" && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50 2>&1 | tee /tmp/pipeline-output.log

Copilot AI Mar 29, 2026


The main download command uses source ... && echo ... && npx ... | tee ... and then reads PIPESTATUS[0]. If source scripts/mcp-setup.sh fails, the pipeline won’t run and PIPESTATUS may be stale, so PIPE_EXIT can be incorrect. Suggest: source as a separate step with explicit failure handling, or wrap everything in { ...; } 2>&1 | tee ... so PIPESTATUS[0] always corresponds to this run.

Suggested change
source scripts/mcp-setup.sh && echo "MCP_SERVER_URL=${MCP_SERVER_URL:-NOT SET}" && npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50 2>&1 | tee /tmp/pipeline-output.log
source scripts/mcp-setup.sh
SOURCE_EXIT=$?
if [ "$SOURCE_EXIT" -ne 0 ]; then
  echo "❌ Failed to source scripts/mcp-setup.sh (exit code $SOURCE_EXIT) — MCP configuration is required"
  exit "$SOURCE_EXIT"
fi
{ echo "MCP_SERVER_URL=${MCP_SERVER_URL:-NOT SET}"; npx tsx scripts/pre-article-analysis.ts --date "$ARTICLE_DATE" --limit 50; } 2>&1 | tee /tmp/pipeline-output.log

Comment on lines +414 to +417
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."

Copilot AI Mar 29, 2026


Same issue as motions: the “zero data” condition requires both MANIFEST_DOCS==0 and a global analysis/data JSON count of 0. Any cached JSON from other runs will hide a real “0 docs for this date/type” case. Prefer gating purely on the scoped manifest (or the scoped analysis/daily/$ARTICLE_DATE/propositions/documents/*.json count).

Suggested change
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
DATA_JSON_COUNT=$(find "analysis/daily/$ARTICLE_DATE/propositions/documents/" -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, scoped JSON data files: $DATA_JSON_COUNT"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data for $ARTICLE_DATE/propositions. Agent MUST diagnose and fix — do NOT fabricate analysis."

Comment on lines +416 to +419
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."

Copilot AI Mar 29, 2026


Same “zero data” gate issue: checking a global analysis/data JSON count can mask MANIFEST_DOCS==0 for this date/type if any previous JSON exists. If you want a reliable signal, key off the scoped manifest and/or analysis/daily/$ARTICLE_DATE/interpellations/documents/*.json.

Suggested change
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 Documents in manifest: $MANIFEST_DOCS, JSON data files: $DATA_JSON_COUNT"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data. Agent MUST diagnose and fix — do NOT fabricate analysis."
DOC_JSON_DIR="analysis/daily/$ARTICLE_DATE/interpellations/documents"
if [ -d "$DOC_JSON_DIR" ]; then
  DATA_JSON_COUNT=$(find "$DOC_JSON_DIR" -name "*.json" -type f 2>/dev/null | wc -l)
else
  DATA_JSON_COUNT=0
fi
echo "📊 Documents in manifest: $MANIFEST_DOCS, scoped JSON data files: $DATA_JSON_COUNT (date=$ARTICLE_DATE, type=interpellations)"
if [ "$MANIFEST_DOCS" -eq 0 ] && [ "$DATA_JSON_COUNT" -eq 0 ]; then
  echo "🚨 CRITICAL: Pipeline downloaded ZERO data for $ARTICLE_DATE/interpellations. Agent MUST diagnose and fix — do NOT fabricate analysis."

Comment on lines 461 to 463
echo "ℹ️ No existing manifest data for $LOOKBACK_DATE — running pre-article analysis"
npx tsx scripts/pre-article-analysis.ts --date "$LOOKBACK_DATE" --limit 50 --doc-type motions 2>/dev/null || true
source scripts/mcp-setup.sh && npx tsx scripts/pre-article-analysis.ts --date "$LOOKBACK_DATE" --limit 50 --doc-type motions 2>/dev/null || true
# Re-check manifest after running analysis

Copilot AI Mar 29, 2026


The lookback fallback still runs the pipeline with 2>/dev/null || true. Given the new “agent MUST diagnose and fix” protocol, suppressing output here makes it much harder to troubleshoot repeated lookback failures. Consider teeing output to a log and surfacing the exit code (at least when the manifest remains at 0).
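A hedged sketch of what surfacing the failure could look like (the log path, variable names, and the stand-in command are assumptions, not the repo's actual code):

```shell
# Sketch: tee lookback output to a log and surface the exit code
# instead of discarding it with `2>/dev/null || true`.
LOOKBACK_DATE="2026-03-28"
LOG="/tmp/lookback-$LOOKBACK_DATE.log"
# Stand-in for the real pre-article-analysis.ts invocation:
{ echo "fetching motions for $LOOKBACK_DATE"; false; } 2>&1 | tee "$LOG"
LOOKBACK_EXIT=${PIPESTATUS[0]}
if [ "$LOOKBACK_EXIT" -ne 0 ]; then
  echo "⚠️ Lookback pipeline exited $LOOKBACK_EXIT (non-blocking), see $LOG"
fi
```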

Comment on lines +85 to +86
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT"

Copilot AI Mar 29, 2026


The “Check results” snippet uses a global JSON count under analysis/data/. Because analysis/data/ is a cache across runs, this can be >0 even when the current run downloaded 0 docs for $ARTICLE_DATE. Recommend validating via the per-date manifest (Documents Analyzed) and/or analysis/daily/$ARTICLE_DATE[/<docType>]/documents/*.json to avoid false positives.

Suggested change
DATA_JSON_COUNT=$(find analysis/data/ -name "*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files: $DATA_JSON_COUNT"
DATA_JSON_COUNT=$(find "analysis/daily/${ARTICLE_DATE}" -path "*/documents/*.json" -type f 2>/dev/null | wc -l)
echo "📊 JSON data files for ${ARTICLE_DATE}: $DATA_JSON_COUNT"

@pethers pethers deleted the copilot/fix-data-download-issues branch March 31, 2026 11:06