Conversation
🏷️ **Automatic Labeling Summary**: This PR has been automatically labeled based on the files changed and PR metadata. Applied labels: `size-xs`.
🔍 Lighthouse Performance Audit
📥 Download full Lighthouse report · Budget Compliance: Performance budgets enforced via …
…te patterns

- SHARED_PROMPT_PATTERNS.md: Expand quality gate to check documents/ subdirectory, detect stub patterns (empty SWOT, generic boilerplate), verify coverage
- news-realtime-monitor.md: Add inline quality gate bash block with per-file stub detection, add explicit per-file analysis protocol with steps A-D
- news-evening-analysis.md: Add inline quality gate bash block with same checks
- Add anti-pattern warning in SHARED_PROMPT_PATTERNS Step 4 listing specific rejected output patterns from script-generated stubs

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/576202f1-0944-4fac-91c7-d3b2974d9a0c
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
- Replace `$(grep -c ... || echo 0)` with `$(grep -c ... || true)` to prevent
double "0\n0" output when no matches found
- Add `${VAR:-0}` safety for integer comparisons with potentially empty values
- Use more specific SWOT stub regex: `_No (strengths|weaknesses|...) identified_`
- Verified: quality gate now correctly catches all 10 PR #1463 stubs (score 6/6)
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/576202f1-0944-4fac-91c7-d3b2974d9a0c
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
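The `grep -c` bug this commit fixes is easy to reproduce in isolation. A minimal sketch (the temp file path and its contents are illustrative, not from the repo):

```shell
# grep -c prints "0" AND exits non-zero when there are no matches, so
# `|| echo 0` appends a SECOND "0" line; `|| true` leaves the single "0".
printf 'no diagrams here\n' > /tmp/gate-demo.md

BROKEN=$(grep -c 'mermaid' /tmp/gate-demo.md || echo 0)  # "0" + newline + "0"
FIXED=$(grep -c 'mermaid' /tmp/gate-demo.md || true)     # "0"

echo "broken length: ${#BROKEN}"  # 3 chars: 0, newline, 0
echo "fixed length: ${#FIXED}"    # 1 char
# [ "$BROKEN" -eq 0 ] would abort with "integer expression expected";
# `|| true` plus the ${VAR:-0} guard keeps the comparison safe:
[ "${FIXED:-0}" -eq 0 ] && echo "integer comparison OK"
```

The silent part of the failure: inside `[ ... -eq ... ]`, the two-line value makes `test` error out rather than evaluate false, so a gate written this way never flags anything.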
Pull request overview
This PR strengthens the agentic analysis “quality gate” so it checks all analysis markdown under analysis/daily/$DATE/ (including documents/) and blocks known script-stub/boilerplate per-file analyses—addressing the failure mode seen in PR #1463.
Changes:
- Expand quality gate scope from `"$ANALYSIS_DIR"/*.md` to `find "$ANALYSIS_DIR" -name "*.md"` and add per-file stub scoring + per-file coverage checks in `SHARED_PROMPT_PATTERNS.md`.
- Inline a blocking quality gate + more explicit per-file analysis protocol into `news-realtime-monitor.md`.
- Add an inline (blocking) quality gate block to `news-evening-analysis.md`.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `.github/workflows/news-realtime-monitor.md` | Adds a blocking per-file analysis protocol and inlined quality gate before proceeding to later workflow steps. |
| `.github/workflows/news-evening-analysis.md` | Adds an inline, blocking quality gate step to prevent proceeding with stubbed per-file analyses. |
| `.github/workflows/SHARED_PROMPT_PATTERNS.md` | Updates canonical quality gate to scan `documents/`, detect stub patterns, and validate per-file analysis coverage. |
> 1. `view analysis/templates/per-file-political-intelligence.md` — read FULLY, note the required structure
> 2. `view analysis/methodologies/ai-driven-analysis-guide.md` — read the "BAD vs GOOD" examples
> 3. `view analysis/methodologies/political-swot-framework.md` — understand evidence tables
The new Step A only instructs reading 3 files, but the repo’s standardized AI-driven analysis protocol requires reading all 6 methodology guides and all 8 analysis templates before writing any analysis. To avoid agents following the shorter list and producing non-compliant output, align this Step A with the SHARED_PROMPT_PATTERNS protocol (or explicitly reference it without reducing the required-reading set).
**Suggested change**: replace

> 1. `view analysis/templates/per-file-political-intelligence.md` — read FULLY, note the required structure
> 2. `view analysis/methodologies/ai-driven-analysis-guide.md` — read the "BAD vs GOOD" examples
> 3. `view analysis/methodologies/political-swot-framework.md` — understand evidence tables

with

> 1. Follow the organization-wide **SHARED_PROMPT_PATTERNS Step A** exactly: read **all 6 methodology guides** and **all 8 analysis templates** defined there (in `analysis/methodologies/` and `analysis/templates/`) **before writing any analysis**. Do NOT subset or skip any required document.
> 2. After completing SHARED_PROMPT_PATTERNS Step A, (re)read these **news-monitor-specific assets**:
>    - `view analysis/templates/per-file-political-intelligence.md` — read FULLY, note the required structure
>    - `view analysis/methodologies/ai-driven-analysis-guide.md` — read the "BAD vs GOOD" examples
>    - `view analysis/methodologies/political-swot-framework.md` — understand evidence tables
> 2. For EACH JSON file:
>    a. Read it with `view` — extract dok_id, titel, datum, parti, organ
>    b. Apply ALL 6 analytical lenses (classification, SWOT, risk, STRIDE, stakeholders, forward indicators)
>    c. Write or rewrite `{dok_id}-analysis.md` following the per-file template EXACTLY
Per-file analysis filename convention is inconsistent across the workflow instructions/templates: this section uses {dok_id}-analysis.md, but other parts of the workflow and the per-file template still refer to {id}.analysis.md. Standardize on the actual generated/validated convention (scripts + quality gate expect *-analysis.md) and update the remaining references so agents don’t create files that the coverage/stub checks miss.
**Suggested change**: replace

> c. Write or rewrite `{dok_id}-analysis.md` following the per-file template EXACTLY

with

> c. Write or rewrite the per-file analysis markdown so that its filename matches the `*-analysis.md` convention (for example `{dok_id}-analysis.md`) and follows the per-file template EXACTLY
> 3. **Write `{dok_id}-analysis.md`** alongside the data file, following `per-file-political-intelligence.md` template EXACTLY
> 4. **Include ≥1 Mermaid diagram** with REAL data from the document (not placeholder)
> 5. **Quality gate**: ≥3 evidence citations with dok_id, confidence labels on all claims, zero `[REQUIRED]` placeholders
This update switches per-file naming to {dok_id}-analysis.md, but SHARED_PROMPT_PATTERNS still contains other guidance using {id}.analysis.md later in the file and the per-file template currently documents .analysis.md naming. Please make the naming convention consistent across SHARED_PROMPT_PATTERNS and the per-file template so the new coverage/stub checks (which look for *-analysis.md) aren’t undermined by conflicting instructions.
#### B5. MANDATORY Quality Gate — Run Before Proceeding

> 🚨 **BLOCKING**: Do NOT proceed to article generation or commit until this quality gate passes. If it fails, go back and fix analysis files.

````bash
if [ -z "${ARTICLE_DATE:-}" ]; then
  if [ -n "${{ github.event.inputs.article_date }}" ]; then
    ARTICLE_DATE="${{ github.event.inputs.article_date }}"
  else
    ARTICLE_DATE=$(date -u +%Y-%m-%d)
  fi
fi
ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE"
QUALITY_PASS=true
FAIL_COUNT=0

echo "=== 🔍 Analysis Quality Gate Check ==="

DAILY_MD_FILES=$(find "$ANALYSIS_DIR" -maxdepth 1 -name "*.md" -type f 2>/dev/null)
PERFILE_MD_FILES=$(find "$ANALYSIS_DIR/documents" -name "*-analysis.md" -type f 2>/dev/null)
ALL_MD_FILES=$(find "$ANALYSIS_DIR" -name "*.md" -type f 2>/dev/null)
echo "📊 Daily: $(echo "$DAILY_MD_FILES" | grep -c '.' 2>/dev/null || true) | Per-file: $(echo "$PERFILE_MD_FILES" | grep -c '.' 2>/dev/null || true)"

# Check 1: Daily synthesis Mermaid diagrams
for f in $DAILY_MD_FILES; do
  [ ! -f "$f" ] && continue
  if [ "$(grep -c '```mermaid' "$f" 2>/dev/null || true)" -eq 0 ]; then
    echo "❌ FAIL: $(basename "$f") has NO Mermaid diagrams"
    QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
  fi
done

# Check 2: No [REQUIRED] placeholders
for f in $ALL_MD_FILES; do
  [ ! -f "$f" ] && continue
  if [ "$(grep -c '\[REQUIRED\]' "$f" 2>/dev/null || true)" -gt 0 ]; then
    echo "❌ FAIL: $(basename "$f") has unfilled [REQUIRED] placeholders"
    QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
  fi
done

# Check 3: Per-file analyses must NOT be stubs
STUB_COUNT=0
for f in $PERFILE_MD_FILES; do
  [ ! -f "$f" ] && continue
  STUB_SCORE=0
  [ "$(grep -cE '_No (strengths|weaknesses|opportunities|threats) identified_' "$f" 2>/dev/null || true)" -ge 2 ] && STUB_SCORE=$((STUB_SCORE + 2))
  [ "$(grep -c 'this document requires assessment of\|this document warrants scrutiny for\|this document may affect business' "$f" 2>/dev/null || true)" -ge 2 ] && STUB_SCORE=$((STUB_SCORE + 2))
  [ "$(grep -c '```mermaid' "$f" 2>/dev/null || true)" -eq 0 ] && STUB_SCORE=$((STUB_SCORE + 1))
  [ "$(grep -c '^|' "$f" 2>/dev/null || true)" -lt 2 ] && STUB_SCORE=$((STUB_SCORE + 1))
  if [ "${STUB_SCORE:-0}" -ge 3 ]; then
    echo "❌ FAIL: $(basename "$f") is a stub (score=$STUB_SCORE) — MUST be replaced with real analysis"
    STUB_COUNT=$((STUB_COUNT + 1)); QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
  fi
done
````
PR description says the evening workflow got the “same inline quality gate”, but this block is a simplified subset (e.g., it doesn’t check Mermaid style directives/color-coding, SWOT evidence tables with dok_id, or per-file coverage vs JSON count). Either expand this gate to match the SHARED_PROMPT_PATTERNS checks (including coverage + style directives) or adjust the PR description so expectations match what’s actually enforced.
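For reference, a per-file coverage check consistent with what this comment describes could look like the sketch below (hypothetical: the directory contents are fabricated for the demo, and the actual SHARED_PROMPT_PATTERNS block may differ in detail):

```shell
# Demo setup (fabricated): two data JSONs, only one per-file analysis.
ANALYSIS_DIR=$(mktemp -d)/analysis/daily/2026-02-03
mkdir -p "$ANALYSIS_DIR/documents"
touch "$ANALYSIS_DIR/documents/H901-doc.json" "$ANALYSIS_DIR/documents/H902-doc.json"
touch "$ANALYSIS_DIR/documents/H901-doc-analysis.md"

# Coverage: every data JSON in documents/ needs a matching *-analysis.md.
# `grep -c '.'` counts non-empty lines; `|| true` avoids the double-"0" bug.
JSON_COUNT=$(find "$ANALYSIS_DIR/documents" -name "*.json" -type f | grep -c '.' || true)
MD_COUNT=$(find "$ANALYSIS_DIR/documents" -name "*-analysis.md" -type f | grep -c '.' || true)
if [ "${MD_COUNT:-0}" -lt "${JSON_COUNT:-0}" ]; then
  echo "FAIL: per-file coverage $MD_COUNT/$JSON_COUNT"  # prints "FAIL: per-file coverage 1/2"
fi
```

Counting `find` output lines with `grep -c '.'` rather than `wc -l` keeps the result `0` (not a blank) when the directory is empty, which is what the integer comparisons need.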
@copilot apply changes based on the comments in this thread
…lysis.md naming, expand evening quality gate
1. Step A now references ALL 6 methodology guides + ALL 8 templates via
SHARED_PROMPT_PATTERNS Steps 2-3 (not a reduced 3-file list)
2. Standardized filename convention to *-analysis.md (dash) across all 3 files
- SHARED_PROMPT_PATTERNS: {id}.analysis.md → {dok_id}-analysis.md
- news-realtime-monitor: {dok_id}.analysis.md → {dok_id}-analysis.md
- news-evening-analysis: {id}.analysis.md → {dok_id}-analysis.md
3. Expanded evening-analysis quality gate from 3 checks to 7 checks matching
SHARED_PROMPT_PATTERNS (added: style directives, SWOT evidence tables,
structured tables, coverage vs JSON count)
Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b81b556e-59b9-49c8-b0d2-1744b795d910
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
Applied all 4 review comments in commit a75d934.
PR #1463 shipped 10 per-file analyses that were unreplaced script-generated stubs (empty SWOT quadrants, generic boilerplate text, zero Mermaid diagrams). Root cause: the quality gate only glob-checked `$ANALYSIS_DIR/*.md` — missing everything in `documents/`. A bash bug (`grep -c` + `|| echo 0` producing `"0\n0"`) also broke integer comparisons silently.

Quality gate expansion (`SHARED_PROMPT_PATTERNS.md`)

- `find "$ANALYSIS_DIR" -name "*.md"` replaces the `"$ANALYSIS_DIR"/*.md` glob — now covers the `documents/` subdirectory
- Stub detection: `_No strengths identified_`, generic perspective text, missing Mermaid diagrams, missing tables. Score ≥ 3 = blocking failure.
- Coverage: every `.json` in `documents/` must have a corresponding `-analysis.md`
- Anti-pattern warning listing rejected `pre-article-analysis.ts` stubs
Inline quality gate in workflows

- `news-realtime-monitor.md`: Replaced the vague "see SHARED_PROMPT_PATTERNS" reference with an explicit 4-step protocol (A: read templates → B: create per-file analyses → C: rewrite synthesis → D: run quality gate bash block). Step A now references ALL 6 methodology guides + ALL 8 analysis templates via SHARED_PROMPT_PATTERNS Steps 2–3 (not a reduced subset). Gate is blocking before Step 2.
- `news-evening-analysis.md`: Added full inline quality gate as step B5 with all 7 checks matching SHARED_PROMPT_PATTERNS (Mermaid diagrams, style directives, `[REQUIRED]` placeholders, SWOT evidence tables, structured tables, stub detection, and per-file coverage vs JSON count)

Filename convention standardization (all 3 files)
Standardized per-file analysis naming to `*-analysis.md` (dash convention) throughout — replaced all `{id}.analysis.md` (dot convention) references in SHARED_PROMPT_PATTERNS, news-realtime-monitor, and news-evening-analysis to match what the scripts generate and the quality gate checks validate.

Bash bug fix (all 3 files)

Replaced `$(grep -c ... || echo 0)` with `$(grep -c ... || true)` to prevent double `0\n0` output when no matches are found, and added `${VAR:-0}` safety guards for integer comparisons with potentially empty values.
Validation
Quality gate now correctly catches all 10 PR #1463 stubs (score 6/6 each). All 55 workflow tests and 68 analysis framework tests pass.
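The reported 6/6 stub score can be sanity-checked by running the gate's scoring heuristics against a synthetic stub. A sketch (the file content below is fabricated to mimic the PR #1463 stubs, not taken from them):

```shell
# Build a synthetic stub and score it with the gate's four heuristics.
f=$(mktemp)
cat > "$f" <<'EOF'
# H001 Analysis
_No strengths identified_
_No weaknesses identified_
From a security perspective, this document requires assessment of potential impacts.
From a business perspective, this document may affect business operations.
EOF

STUB_SCORE=0
# +2: two or more empty SWOT quadrants
[ "$(grep -cE '_No (strengths|weaknesses|opportunities|threats) identified_' "$f" || true)" -ge 2 ] && STUB_SCORE=$((STUB_SCORE + 2))
# +2: two or more generic boilerplate phrases
[ "$(grep -c 'this document requires assessment of\|this document warrants scrutiny for\|this document may affect business' "$f" || true)" -ge 2 ] && STUB_SCORE=$((STUB_SCORE + 2))
# +1: no Mermaid diagram
[ "$(grep -c '```mermaid' "$f" || true)" -eq 0 ] && STUB_SCORE=$((STUB_SCORE + 1))
# +1: fewer than 2 table rows
[ "$(grep -c '^|' "$f" || true)" -lt 2 ] && STUB_SCORE=$((STUB_SCORE + 1))

echo "stub score: $STUB_SCORE"  # prints "stub score: 6" (maximal stub)
```

Since 2+2+1+1 is the maximum attainable score, "score 6/6" means each of the 10 stubs tripped every heuristic, well past the ≥ 3 blocking threshold.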