Mandate analysis in all agentic workflows — no workflow run wasted (#1482)
Conversation
🏷️ Automatic Labeling Summary: This PR has been automatically labeled based on the files changed and PR metadata. Applied labels: size-xs
🔍 Lighthouse Performance Audit
📥 Download full Lighthouse report. Budget Compliance: performance budgets enforced.
…alysis step to news-translate

- SHARED_PROMPT_PATTERNS.md: Add new first-principle section requiring ALL workflows to follow ai-driven-analysis-guide.md and produce/improve analysis artifacts on every run
- news-translate.md: Add Step 3b for mandatory analysis review/improvement, update time budget, update commit step to include analysis files, update PR template and final instructions

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/9a8e573e-8ba5-40a5-90e3-79c5d10850c4
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
…instructions

- Make commit message conditional on whether analysis files were actually staged
- Break long final instruction paragraph into numbered list for readability

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/9a8e573e-8ba5-40a5-90e3-79c5d10850c4
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Pull request overview
Establishes an explicit, cross-workflow mandate that every agentic run must review/create/improve analysis artifacts (following analysis/methodologies/ai-driven-analysis-guide.md and the templates), and updates the translation workflow to include a dedicated analysis-improvement step plus staging/commit logic for analysis/daily/.
Changes:
- Added a “UNIVERSAL RULE: No Workflow Run Wasted” section and a reusable analysis-gap detection/checklist block to SHARED_PROMPT_PATTERNS.md.
- Added “Step 3b: Mandatory Analysis Improvement” to news-translate.md, updated the time budget, and staged analysis/daily/ alongside translations with a PR file-count guard and conditional commit message.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| .github/workflows/SHARED_PROMPT_PATTERNS.md | Adds the universal analysis mandate, a bash-based analysis gap scan, and an improvement checklist to be copied into workflows. |
| .github/workflows/news-translate.md | Inserts mandatory analysis-improvement step into translation workflow, adjusts time allocations, and stages/commits analysis artifacts with safe-outputs file limit considerations. |
```markdown
- [ ] No `[REQUIRED]` placeholders remain (fill with evidence-based content)
- [ ] SWOT entries cite specific dok_id, vote counts, party names (not generic text)
- [ ] Risk matrix has numeric L×I scores (not placeholder values)
- [ ] Stakeholder analysis covers all 6 groups with evidence (not generic perspectives)
```
The checklist says the stakeholder analysis should cover “all 6 groups”, but the stakeholder impact template defines 8 stakeholder groups (Citizens, Government, Opposition, Business, Civil Society, International, Judiciary, Media). Update the checklist to match the template to avoid agents producing incomplete stakeholder sections.
Suggested change:
```diff
- - [ ] Stakeholder analysis covers all 6 groups with evidence (not generic perspectives)
+ - [ ] Stakeholder analysis covers all 8 groups (Citizens, Government, Opposition, Business, Civil Society, International, Judiciary, Media) with evidence (not generic perspectives)
```
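To illustrate, a gap scan along these lines could verify the eight groups mechanically. The sample headings and temp-file setup below are invented for the sketch and do not reflect the repo's actual template:

```shell
# Sketch: check that an analysis file mentions all 8 stakeholder groups.
# The sample file content is fabricated for the demo.
TMP="$(mktemp)"
printf '## Citizens\n## Government\n## Opposition\n## Business\n' > "$TMP"
printf '## Civil Society\n## International\n## Judiciary\n## Media\n' >> "$TMP"

MISSING=""
for GROUP in Citizens Government Opposition Business "Civil Society" International Judiciary Media; do
  # plain substring match; a real check might anchor on heading syntax
  grep -q "$GROUP" "$TMP" || MISSING="$MISSING $GROUP"
done

if [ -n "$MISSING" ]; then
  RESULT="missing:$MISSING"
else
  RESULT="all 8 stakeholder groups present"
fi
echo "$RESULT"   # prints: all 8 stakeholder groups present
rm -f "$TMP"
```

Keeping the group names in one list makes the checklist and the scan share a single source of truth, so a template change only has to be made once.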
```bash
  echo "⚠️ Files with [REQUIRED] placeholders: $REQUIRED_PLACEHOLDERS"
  echo "⚠️ Files missing Mermaid diagrams: $MISSING_MERMAID"
  echo "📍 Analysis target directory: $ANALYSIS_TARGET"
else
  echo "📋 No existing analysis found for nearby dates — skip improvement (translation-only run)"
fi
```
The protocol says analysis improvement is NON-NEGOTIABLE, but the else-branch explicitly allows “skip improvement (translation-only run)” when no existing analysis is found. This contradicts the universal mandate in this PR; when no nearby analysis exists, the workflow should instruct the agent to create new analysis artifacts (per the guide/templates) rather than skipping.
.github/workflows/news-translate.md (outdated)
```bash
# Check for existing analysis needing improvement
ARTICLE_DATE="${{ github.event.inputs.article_date }}"
[ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date +%Y-%m-%d)"
```
Date fallback uses date +%Y-%m-%d (runner local time) in the new analysis-improvement and staging steps. Other parts of this workflow (and other workflows) use UTC dates for analysis/daily/ paths; using local time can stage/write the wrong day around midnight. Prefer date -u +%Y-%m-%d for these fallbacks to keep analysis paths consistent.
Suggested change:
```diff
- [ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date +%Y-%m-%d)"
+ [ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date -u +%Y-%m-%d)"
```
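The fallback idiom under discussion can be sketched in isolation. Here `INPUT_DATE` stands in for the `github.event.inputs.article_date` expression, which is an assumption of the sketch rather than real workflow wiring:

```shell
# Empty INPUT_DATE simulates a scheduled run with no manual input.
INPUT_DATE=""

ARTICLE_DATE="$INPUT_DATE"
# -u pins the fallback to UTC so analysis/daily/ paths agree across steps
[ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date -u +%Y-%m-%d)"

echo "analysis/daily/${ARTICLE_DATE}/"
```

Near midnight, `date +%Y-%m-%d` and `date -u +%Y-%m-%d` can disagree by a day on a runner whose timezone is not UTC, which is exactly the mismatch the review comment flags.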
```bash
  echo "⚠️ Staged $STAGED_COUNT files exceeds 100-file PR limit. Removing analysis to fit."
  git reset HEAD -- analysis/ 2>/dev/null || true
  STAGED_COUNT=$(git diff --cached --name-only | wc -l)
fi
```
The 100-file PR limit guard only drops analysis/ once when STAGED_COUNT > 90. If translations/metadata alone still exceed the safe-outputs limit, the workflow will proceed and PR creation may fail. Consider following the existing repo pattern (multiple pruning steps and/or a hard failure) to guarantee STAGED_COUNT <= 100 before committing.
Suggested change:
```diff
  fi
+
+ # Hard guard: never proceed with more than 100 staged files
+ if [ "$STAGED_COUNT" -gt 100 ]; then
+   echo "❌ Still have $STAGED_COUNT staged files after dropping analysis/, exceeding the 100-file safe-outputs limit."
+   echo "   Aborting commit and PR creation to avoid workflow failure. Please reduce the number of changed files and rerun."
+   git status --short || true
+   exit 1
+ fi
```
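The resulting two-tier guard (soft prune above 90, hard abort above 100) reduces to a small decision function. The git plumbing is omitted here and the staged count is passed in directly, so this is a sketch of the control flow only:

```shell
# Returns the action the workflow should take for a given staged-file count.
check_staged_limit() {
  COUNT="$1"
  if [ "$COUNT" -gt 100 ]; then
    echo "abort"      # hard guard: never commit more than 100 files
    return 1
  elif [ "$COUNT" -gt 90 ]; then
    echo "prune"      # soft guard: drop lower-priority analysis first
  else
    echo "ok"
  fi
}

check_staged_limit 42    # prints: ok
check_staged_limit 95    # prints: prune
check_staged_limit 120 || true   # prints: abort (and returns non-zero)
```

Separating the soft threshold from the hard one means a prune can still recover the run, while the hard guard fails fast before PR creation would fail anyway.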
.github/workflows/news-translate.md (outdated)
```diff
- "title": "🌐 Article Translations - {date}",
- "body": "## Article Translations\n\nLanguages: {list}\nArticles translated: {count}\nSource: news-translate workflow",
+ "title": "🌐 Article Translations + 📊 Analysis - {date}",
+ "body": "## Article Translations\n\nLanguages: {list}\nArticles translated: {count}\nAnalysis files improved: {analysis_count}\nSource: news-translate workflow",
```
The PR creation template adds an Analysis files improved: {analysis_count} placeholder, but there’s no preceding instruction to compute/provide {analysis_count} (unlike the commit step which computes ANALYSIS_STAGED). This risks creating PRs with an unresolved placeholder or incorrect counts. Consider instructing the agent to populate it from the staged analysis count (or remove the placeholder).
Suggested change:
```diff
- "body": "## Article Translations\n\nLanguages: {list}\nArticles translated: {count}\nAnalysis files improved: {analysis_count}\nSource: news-translate workflow",
+ "body": "## Article Translations\n\nLanguages: {list}\nArticles translated: {count}\nAnalysis files improved: see commit message for file count\nSource: news-translate workflow",
```
.github/workflows/news-translate.md (outdated)
```bash
# Stage improved analysis artifacts (mandatory — no workflow run wasted)
ARTICLE_DATE="${{ github.event.inputs.article_date }}"
[ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date +%Y-%m-%d)"
```
Same UTC issue here: the fallback for ARTICLE_DATE uses local time (date +%Y-%m-%d) before staging analysis/daily/${ARTICLE_DATE}/. To avoid staging the wrong date near midnight, use date -u +%Y-%m-%d consistently (matching earlier preflight steps).
Suggested change:
```diff
- [ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date +%Y-%m-%d)"
+ [ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date -u +%Y-%m-%d)"
```
@copilot apply changes based on the comments in this thread |
…alysis, hard file limit guard, PR template
- Fix stakeholder groups from 6 to 8 (Citizens, Government, Opposition,
Business, Civil Society, International, Judiciary, Media)
- Use UTC dates (date -u) in analysis and staging steps
- Replace "skip improvement" with create-new-analysis instruction
- Add hard >100 file guard with abort before commit
- Replace unresolved {analysis_count} placeholder with commit ref
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/eab3bad3-1ed7-4338-b106-31ef28872549
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
All 6 review comments addressed in
```markdown
   - Correct any factual errors or outdated information
   - Complete any `[REQUIRED]` placeholders
5. **If no existing analysis exists**: Create new analysis following the full protocol (Steps 1–6 in the AI-Driven Analysis section below)
6. **Commit analysis artifacts** to the `analysis/` folder — analysis MUST always be committed alongside any other workflow output
```
The universal requirement says analysis artifacts "MUST always be committed alongside any other workflow output", but later in this same document the staging patterns explicitly prune analysis/weekly and analysis/data to stay under the safe-outputs 100-file limit. This creates a conflicting directive for workflow authors. Consider updating this requirement to explicitly describe the safe-outputs limit exception and the intended prioritization order (e.g., keep a minimal/high-impact analysis subset and prune lower-priority artifacts first when nearing the limit).
Suggested change:
```diff
- 6. **Commit analysis artifacts** to the `analysis/` folder — analysis MUST always be committed alongside any other workflow output
+ 6. **Commit analysis artifacts** to the `analysis/` folder — analysis MUST always be committed alongside any other workflow output, subject to the GitHub Actions `safe-outputs` 100-file limit. When approaching this limit, prioritize committing a minimal, high-impact subset of analysis (e.g., daily summaries and key findings) and prune lower-priority or bulk artifacts first (e.g., `analysis/weekly/`, `analysis/data/`).
```
.github/workflows/news-translate.md (outdated)
```bash
echo "⚠️ Staged $STAGED_COUNT files is approaching the 100-file PR limit (preemptive guard at >90). Removing analysis to stay within the limit."
git reset HEAD -- analysis/ 2>/dev/null || true
```
This preemptive safe-outputs guard removes all staged analysis (git reset ... analysis/) when >90 files are staged, which can directly contradict the workflow’s new “mandatory analysis improvement” requirement and can result in a PR with translations but no analysis changes. Instead of dropping analysis entirely, consider prioritizing and staging a minimal analysis subset (e.g., only the specific improved file(s) under the chosen ANALYSIS_TARGET), and/or reducing translation scope (fewer languages/articles) to keep at least one analysis improvement committed per run.
Suggested change:
```diff
- echo "⚠️ Staged $STAGED_COUNT files is approaching the 100-file PR limit (preemptive guard at >90). Removing analysis to stay within the limit."
+ echo "⚠️ Staged $STAGED_COUNT files is approaching the 100-file PR limit (preemptive guard at >90). Reducing analysis scope to a minimal priority subset to stay within the limit."
+ # First unstage all analysis artifacts
  git reset HEAD -- analysis/ 2>/dev/null || true
+ # Re-stage minimal high-priority analysis for the current article date (to keep mandatory analysis improvements)
+ git add "analysis/daily/${ARTICLE_DATE}/" 2>/dev/null || true
```
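The prune-then-re-stage behaviour can be exercised in a throwaway repository. The directory names mirror the workflow's layout, but the files, dates, and identities below are empty stand-ins invented for the demo:

```shell
# Build a scratch repo with translations plus daily and weekly analysis.
REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init   # so `git reset HEAD` has a target

ARTICLE_DATE="2024-01-01"
mkdir -p news "analysis/daily/${ARTICLE_DATE}" analysis/weekly
touch news/article.md "analysis/daily/${ARTICLE_DATE}/summary.md" analysis/weekly/trends.md
git add .

# Prune: unstage all analysis, then re-stage only the high-priority daily subset.
git reset -q HEAD -- analysis/ 2>/dev/null || true
git add "analysis/daily/${ARTICLE_DATE}/" 2>/dev/null || true

STAGED="$(git diff --cached --name-only)"
echo "$STAGED"   # daily analysis and news remain; analysis/weekly is dropped
```

The key property is that the mandatory daily improvement survives the prune while bulk artifacts like `analysis/weekly/` are the first to go.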
@copilot apply changes based on the comments in this thread |
… rule, keep minimal analysis subset when pruning

- Update universal commit requirement in SHARED_PROMPT_PATTERNS.md to explicitly describe the safe-outputs 100-file limit exception and prioritization order
- Change >90 guard to unstage all analysis then re-stage only current date's daily analysis, keeping mandatory analysis improvements instead of dropping everything

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/119e9827-2837-4df7-a336-7d0b2f8df4ae
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Both review comments addressed in
.github/workflows/news-translate.md (outdated)
```bash
echo "================================"
```

**⏰ TIME GUARD**: Check elapsed time before starting analysis improvement. If more than 45 minutes have passed, limit improvements to the single most impactful file (the one with the most `[REQUIRED]` placeholders).
The analysis-improvement time guard uses a 45-minute threshold here, but the rest of the workflow’s time-guard guidance consistently uses 40 minutes (e.g., earlier elapsed-time guards and the final "Time management" note). Align these thresholds to avoid conflicting instructions that could cause the agent to start analysis too late and risk timeouts.
Suggested change:
```diff
- **⏰ TIME GUARD**: Check elapsed time before starting analysis improvement. If more than 45 minutes have passed, limit improvements to the single most impactful file (the one with the most `[REQUIRED]` placeholders).
+ **⏰ TIME GUARD**: Check elapsed time before starting analysis improvement. If more than 40 minutes have passed, limit improvements to the single most impactful file (the one with the most `[REQUIRED]` placeholders).
```
.github/workflows/news-translate.md (outdated)
```bash
# Re-stage minimal high-priority analysis for the current article date (to keep mandatory analysis improvements)
git add "analysis/daily/${ARTICLE_DATE}/" 2>/dev/null || true
```
In the >90 staged-file pruning block, the script unstages all analysis/ and then re-stages only analysis/daily/${ARTICLE_DATE}/. If Step 3b improved analysis in a nearby date directory (or any analysis outside the exact ARTICLE_DATE folder), those improvements will be silently dropped from the commit when pruning triggers. Consider re-staging the same set of analysis directories that were added earlier (ARTICLE_DATE plus the DAYS_BACK loop), or derive the specific changed analysis paths from git diff --name-only before the reset and re-add those to preserve actual improvements while still staying under the limit.
Suggested change:
```diff
- # Re-stage minimal high-priority analysis for the current article date (to keep mandatory analysis improvements)
- git add "analysis/daily/${ARTICLE_DATE}/" 2>/dev/null || true
+ # Re-stage minimal high-priority analysis for the current article date and nearby dates
+ # (mirror the earlier ARTICLE_DATE + DAYS_BACK staging so genuine improvements aren't silently dropped)
+ git add "analysis/daily/${ARTICLE_DATE}/" 2>/dev/null || true
+ for DAYS_BACK in 1 2 3; do
+   CHECK_DATE="$(date -u -d "$ARTICLE_DATE - $DAYS_BACK days" +%Y-%m-%d 2>/dev/null || date -u -j -f "%Y-%m-%d" "$ARTICLE_DATE" -v-"$DAYS_BACK"d +%Y-%m-%d 2>/dev/null || true)"
+   [ -z "$CHECK_DATE" ] && continue
+   git add "analysis/daily/${CHECK_DATE}/" 2>/dev/null || true
+ done
```
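The GNU/BSD fallback chain in that loop can be wrapped in a small helper. Only the GNU branch executes on GitHub's Linux runners, so the BSD `-j -f ... -v-Nd` branch is carried along here untested, as an assumption mirrored from the suggestion above:

```shell
# Print DATE minus N days in UTC, trying GNU date syntax then BSD syntax.
days_before() {
  BASE="$1"
  N="$2"
  date -u -d "$BASE - $N days" +%Y-%m-%d 2>/dev/null \
    || date -u -j -f "%Y-%m-%d" "$BASE" -v-"${N}"d +%Y-%m-%d 2>/dev/null
}

days_before 2024-03-01 1   # prints: 2024-02-29 (2024 is a leap year)
days_before 2024-01-01 3   # prints: 2023-12-29
```

Centralizing the fallback in one function avoids repeating the fragile two-syntax expression at every call site, which is how the staging loop and the analysis-check loop drifted apart in the first place.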
@copilot apply changes based on the comments in this thread |
…tes in pruning block

- Change analysis time guard from 45 → 40 minutes to match other workflow time guards
- Expand >90 pruning re-stage to include ARTICLE_DATE + DAYS_BACK(1-3) with GNU/BSD date fallback, mirroring the earlier staging loop so nearby-date improvements aren't silently dropped

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/6d1cfdb4-0dcc-4470-a384-82df2eda2c4f
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Both review comments addressed in
.github/workflows/news-translate.md (outdated)
```bash
# Hard guard: never proceed with more than 100 staged files
if [ "$STAGED_COUNT" -gt 100 ]; then
  echo "❌ Still have $STAGED_COUNT staged files after dropping analysis/, exceeding the 100-file safe-outputs limit."
```
The hard-guard error message says the staged count exceeds the limit “after dropping analysis/”, but at this point the script has already re-staged a minimal subset of analysis files. This wording is misleading for debugging (it may still be >100 due to translations, or due to the re-staged analysis). Consider rephrasing to reflect the actual state (e.g., “after pruning analysis to the minimal subset”) and optionally include a quick breakdown of staged counts by path (news/ vs analysis/).
Suggested change:
```diff
- echo "❌ Still have $STAGED_COUNT staged files after dropping analysis/, exceeding the 100-file safe-outputs limit."
+ echo "❌ Still have $STAGED_COUNT staged files after pruning analysis to a minimal subset, exceeding the 100-file safe-outputs limit."
+ NEWS_STAGED_HARDGUARD=$(git diff --cached --name-only -- 'news/' 2>/dev/null | wc -l)
+ ANALYSIS_STAGED_HARDGUARD=$(git diff --cached --name-only -- 'analysis/' 2>/dev/null | wc -l)
+ echo "   Staged breakdown: $NEWS_STAGED_HARDGUARD news/ files, $ANALYSIS_STAGED_HARDGUARD analysis/ files."
```
```bash
# Check for existing analysis needing improvement
ARTICLE_DATE="${{ github.event.inputs.article_date }}"
[ -z "$ARTICLE_DATE" ] && ARTICLE_DATE="$(date -u +%Y-%m-%d)"

echo "=== Mandatory Analysis Improvement Check ==="
# Check current date first, then nearby dates
ANALYSIS_TARGET=""
for CHECK_OFFSET in 0 1 2 3; do
  CHECK_DATE=$(date -u -d "$ARTICLE_DATE - $CHECK_OFFSET days" +%Y-%m-%d 2>/dev/null || date -u -v-${CHECK_OFFSET}d -j -f "%Y-%m-%d" "$ARTICLE_DATE" +%Y-%m-%d 2>/dev/null)
  [ -z "$CHECK_DATE" ] && continue
```
This new analysis-check block correctly uses UTC (date -u), but the same file still has at least one earlier fallback that uses local time (date +%Y-%m-%d around the translation parity-check section). If the runner timezone ever differs, the workflow can target different dates in different steps. Prefer using date -u consistently for all ARTICLE_DATE fallbacks throughout the workflow to avoid cross-step mismatches.
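A quick audit along these lines can surface any remaining local-time fallbacks in a workflow file. The two sample lines are fabricated for the demo, and the grep pattern is a heuristic rather than a full shell parser:

```shell
# Write a miniature "workflow file" containing one local-time and one UTC call.
TMP="$(mktemp)"
cat > "$TMP" <<'EOF'
ARTICLE_DATE="$(date +%Y-%m-%d)"
CHECK_DATE="$(date -u +%Y-%m-%d)"
EOF

# Flag `date +...` calls; lines already using `date -u` don't match the pattern.
LOCAL_CALLS="$(grep -n 'date +%' "$TMP" || true)"
echo "$LOCAL_CALLS"
rm -f "$TMP"
```

Running such a check once over the whole file catches cross-step mismatches in one pass instead of fixing each fallback as a separate review comment.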
@copilot apply changes based on the comments in this thread |
… throughout

1. Rephrased hard-guard error from "after dropping analysis/" to "after pruning analysis to a minimal subset" with news/analysis staged count breakdown
2. Fixed all 6 remaining local-time `date +%Y-%m-%d` fallbacks to use UTC `date -u +%Y-%m-%d` consistently throughout the workflow

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/6ab54be1-9829-4b49-bf2c-7f24e81cb43e
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Both review comments addressed in
Summary of fixes to news-translate.md:

- UTC dates (`date -u`) consistently in analysis/staging steps
- `{analysis_count}` placeholder replaced with commit message reference
- `grep -q` Mermaid detection (suppressed output → always 0) in both files
- `git add` with loop tolerating missing patterns under `bash -e`
- `date` fallback for CHECK_DATE in analysis staging loop
- UTC dates (`date -u`) in commit messages consistently
- `ANALYSIS_DIR` variable (logic uses `ANALYSIS_TARGET`)
- `date +%Y-%m-%d` fallbacks to UTC (`date -u`) throughout workflow