
Commit a75d934

Copilot and pethers committed
Address review comments: expand Step A reading list, standardize -analysis.md naming, expand evening quality gate
1. Step A now references ALL 6 methodology guides + ALL 8 templates via SHARED_PROMPT_PATTERNS Steps 2-3 (not a reduced 3-file list)
2. Standardized the filename convention to *-analysis.md (dash) across all 3 files:
   - SHARED_PROMPT_PATTERNS: {id}.analysis.md → {dok_id}-analysis.md
   - news-realtime-monitor: {dok_id}.analysis.md → {dok_id}-analysis.md
   - news-evening-analysis: {id}.analysis.md → {dok_id}-analysis.md
3. Expanded the evening-analysis quality gate from 3 checks to 7, matching SHARED_PROMPT_PATTERNS (added: style directives, SWOT evidence tables, structured tables, coverage vs JSON count)

Agent-Logs-Url: https://github.com/Hack23/riksdagsmonitor/sessions/b81b556e-59b9-49c8-b0d2-1744b795d910
Co-authored-by: pethers <1726836+pethers@users.noreply.github.com>
1 parent f516944 commit a75d934

3 files changed

Lines changed: 70 additions & 16 deletions


.github/workflows/SHARED_PROMPT_PATTERNS.md

Lines changed: 1 addition & 1 deletion
@@ -630,7 +630,7 @@ Read these methodology documents to guide your analysis:
    - STRIDE threat analysis (where applicable)
    - Stakeholder impact matrix (6 lenses)
    - Forward indicators (specific watch items)
-3. **Write analysis:** Save as `{id}.analysis.md` alongside the data file
+3. **Write analysis:** Save as `{dok_id}-analysis.md` alongside the data file
 4. **Include Mermaid diagrams** — at least 1 per file, color-coded:
    ```
    style X fill:#dc3545,color:#fff /* Red — critical */
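The dot-to-dash rename above applies to newly written files; any analyses already generated under the legacy `{id}.analysis.md` convention would need a one-off migration. A minimal sketch of such a migration, run here against a sandboxed temp directory; the date and dok_id in the demo are hypothetical:

```shell
# Hypothetical one-off migration from the legacy {id}.analysis.md naming to the
# standardized {dok_id}-analysis.md convention. Sandboxed demo under a temp dir;
# a real run would target the repository's analysis/daily/ tree instead.
set -eu
DEMO_DIR="$(mktemp -d)"
mkdir -p "$DEMO_DIR/analysis/daily/2024-01-15/documents"
: > "$DEMO_DIR/analysis/daily/2024-01-15/documents/H901FiU1.analysis.md"

find "$DEMO_DIR/analysis/daily" -name '*.analysis.md' -type f | while IFS= read -r old; do
  new="${old%.analysis.md}-analysis.md"          # swap the ".analysis.md" suffix for "-analysis.md"
  if [ -e "$new" ]; then
    echo "skip: $new already exists"
  else
    mv "$old" "$new"
    echo "renamed: $(basename "$old") -> $(basename "$new")"
  fi
done
```

In the actual repository a `git mv` would be preferable to plain `mv` so history follows the rename.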

.github/workflows/news-evening-analysis.md

Lines changed: 61 additions & 9 deletions
@@ -699,7 +699,7 @@ For each pending file from the catalog (ordered by significance — propositions
 7. **Stakeholders** — 6-lens impact matrix (government, opposition, citizen, economic, international, media)
 8. **Forward indicators** — Specific watch items with concrete timelines and triggers
 9. **Mermaid diagrams** — At least 1 diagram with REAL data from the file (not placeholder text)
-10. **Write** `{id}.analysis.md` alongside the data file
+10. **Write** `{dok_id}-analysis.md` alongside the data file
 
 **Quality standard:** Each analysis file must match [SWOT.md](../../SWOT.md) / [THREAT_MODEL.md](../../THREAT_MODEL.md) quality — Hack23 header badges, color-coded Mermaid diagrams, evidence tables with confidence labels, and actionable intelligence.
 
@@ -771,6 +771,7 @@ fi
 ANALYSIS_DIR="analysis/daily/$ARTICLE_DATE"
 QUALITY_PASS=true
 FAIL_COUNT=0
+WARN_COUNT=0
 
 echo "=== 🔍 Analysis Quality Gate Check ==="
 
@@ -782,28 +783,62 @@ echo "📊 Daily: $(echo "$DAILY_MD_FILES" | grep -c '.' 2>/dev/null || true) |
 # Check 1: Daily synthesis Mermaid diagrams
 for f in $DAILY_MD_FILES; do
   [ ! -f "$f" ] && continue
-  if [ "$(grep -c '```mermaid' "$f" 2>/dev/null || true)" -eq 0 ]; then
+  MERMAID_COUNT=$(grep -c '```mermaid' "$f" 2>/dev/null || true)
+  if [ "${MERMAID_COUNT:-0}" -eq 0 ]; then
     echo "❌ FAIL: $(basename "$f") has NO Mermaid diagrams"
     QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
   fi
 done
 
-# Check 2: No [REQUIRED] placeholders
+# Check 2: Color-coded style directives in Mermaid diagrams
+for f in $DAILY_MD_FILES; do
+  [ ! -f "$f" ] && continue
+  if grep -q '```mermaid' "$f" 2>/dev/null; then
+    STYLE_COUNT=$(grep -c 'style.*fill:#' "$f" 2>/dev/null || true)
+    if [ "${STYLE_COUNT:-0}" -eq 0 ]; then
+      echo "❌ FAIL: $(basename "$f") Mermaid has NO color-coded style directives"
+      QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
+    fi
+  fi
+done
+
+# Check 3: No [REQUIRED] placeholders
 for f in $ALL_MD_FILES; do
   [ ! -f "$f" ] && continue
-  if [ "$(grep -c '\[REQUIRED\]' "$f" 2>/dev/null || true)" -gt 0 ]; then
-    echo "❌ FAIL: $(basename "$f") has unfilled [REQUIRED] placeholders"
+  REQ_COUNT=$(grep -c '\[REQUIRED\]' "$f" 2>/dev/null || true)
+  if [ "${REQ_COUNT:-0}" -gt 0 ]; then
+    echo "❌ FAIL: $(basename "$f") has $REQ_COUNT unfilled [REQUIRED] placeholders"
     QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
   fi
 done
 
-# Check 3: Per-file analyses must NOT be stubs
+# Check 4: SWOT evidence tables with dok_id
+SWOT_FILE="$ANALYSIS_DIR/swot-analysis.md"
+if [ -f "$SWOT_FILE" ]; then
+  TABLE_COUNT=$(grep -c '|.*dok_id\||.*Evidence' "$SWOT_FILE" 2>/dev/null || true)
+  if [ "${TABLE_COUNT:-0}" -eq 0 ]; then
+    echo "❌ FAIL: swot-analysis.md has NO evidence tables with dok_id"
+    QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
+  fi
+fi
+
+# Check 5: Structured tables in daily synthesis (not just plain prose)
+for f in $DAILY_MD_FILES; do
+  [ ! -f "$f" ] && continue
+  TABLE_COUNT=$(grep -c '^|' "$f" 2>/dev/null || true)
+  if [ "${TABLE_COUNT:-0}" -lt 3 ]; then
+    echo "⚠️ WARNING: $(basename "$f") has only $TABLE_COUNT table rows — templates require structured tables"
+    WARN_COUNT=$((WARN_COUNT + 1))
+  fi
+done
+
+# Check 6: Per-file analyses must NOT be stubs
 STUB_COUNT=0
 for f in $PERFILE_MD_FILES; do
   [ ! -f "$f" ] && continue
   STUB_SCORE=0
   [ "$(grep -cE '_No (strengths|weaknesses|opportunities|threats) identified_' "$f" 2>/dev/null || true)" -ge 2 ] && STUB_SCORE=$((STUB_SCORE + 2))
-  [ "$(grep -c 'this document requires assessment of\|this document warrants scrutiny for\|this document may affect business' "$f" 2>/dev/null || true)" -ge 2 ] && STUB_SCORE=$((STUB_SCORE + 2))
+  [ "$(grep -c 'this document requires assessment of\|this document warrants scrutiny for\|this document may affect business\|this document has low newsworthiness\|this document must be assessed for' "$f" 2>/dev/null || true)" -ge 2 ] && STUB_SCORE=$((STUB_SCORE + 2))
   [ "$(grep -c '```mermaid' "$f" 2>/dev/null || true)" -eq 0 ] && STUB_SCORE=$((STUB_SCORE + 1))
   [ "$(grep -c '^|' "$f" 2>/dev/null || true)" -lt 2 ] && STUB_SCORE=$((STUB_SCORE + 1))
   if [ "${STUB_SCORE:-0}" -ge 3 ]; then
@@ -812,12 +847,29 @@ for f in $PERFILE_MD_FILES; do
   fi
 done
 
+# Check 7: Coverage — every JSON must have an analysis
+if [ -d "$ANALYSIS_DIR/documents" ]; then
+  JSON_COUNT=$(find "$ANALYSIS_DIR/documents" -name "*.json" -type f 2>/dev/null | wc -l)
+  ANALYSIS_MD_COUNT=$(find "$ANALYSIS_DIR/documents" -name "*-analysis.md" -type f 2>/dev/null | wc -l)
+  if [ "${JSON_COUNT:-0}" -gt 0 ] && [ "${ANALYSIS_MD_COUNT:-0}" -lt "${JSON_COUNT:-0}" ]; then
+    echo "❌ FAIL: Only $ANALYSIS_MD_COUNT analysis files for $JSON_COUNT data files"
+    QUALITY_PASS=false; FAIL_COUNT=$((FAIL_COUNT + 1))
+  elif [ "${JSON_COUNT:-0}" -gt 0 ]; then
+    echo "✅ PASS: $ANALYSIS_MD_COUNT analysis files for $JSON_COUNT data files"
+  fi
+fi
+
 echo ""
+echo "=== Quality Gate Summary ==="
+echo "Failures: $FAIL_COUNT | Warnings: $WARN_COUNT"
 if [ "$QUALITY_PASS" = "true" ]; then
   echo "✅ Quality gate PASSED — proceed to article generation"
 else
   echo "❌ Quality gate FAILED ($FAIL_COUNT failures) — fix analysis files before proceeding"
   [ "${STUB_COUNT:-0}" -gt 0 ] && echo "🚨 $STUB_COUNT per-file analyses are stubs — read analysis/templates/per-file-political-intelligence.md and rewrite"
+  echo "📌 For per-file analyses: read analysis/templates/per-file-political-intelligence.md"
+  echo "📌 For daily synthesis: read the corresponding template in analysis/templates/"
+  echo "📌 Reference good examples: SWOT.md, THREAT_MODEL.md"
 fi
 ```
 
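The quality-gate checks above repeatedly pair `$(grep -c ... || true)` with a `${VAR:-0}` default. The reason: `grep -c` prints `0` but exits non-zero when a file has no matches, and under `set -e` that exit status would abort the whole gate even though the zero count is exactly the value the check wants. A small self-contained sketch of the idiom; the temp file is illustrative:

```shell
set -eu
TMP="$(mktemp)"
printf 'plain prose, no mermaid fence here\n' > "$TMP"

# grep -c prints 0 AND exits with status 1 when nothing matches; `|| true`
# discards the status while the command substitution still captures the "0".
MERMAID_COUNT=$(grep -c '```mermaid' "$TMP" 2>/dev/null || true)

# ${MERMAID_COUNT:-0} additionally guards the arithmetic test if the
# substitution ever came back empty (e.g. grep failed outright).
echo "count=${MERMAID_COUNT:-0}"

rm -f "$TMP"
```

Without the `|| true`, the first template file lacking a Mermaid fence would silently terminate the gate instead of recording a failure.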
@@ -827,7 +879,7 @@ fi
 
 **Before deciding whether to generate articles or call noop, you MUST:**
 
-1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/` and per-file `.analysis.md` files — read `synthesis-summary.md` and significance scores to understand what was found
+1. **Review the analysis artifacts** in `analysis/daily/YYYY-MM-DD/` and per-file `-analysis.md` files — read `synthesis-summary.md` and significance scores to understand what was found
 2. **Summarize the analysis findings** — note how many documents were downloaded, their significance scores, key themes, and risk levels
 3. **ALWAYS commit analysis artifacts** regardless of whether articles will be generated:
 
@@ -846,7 +898,7 @@ else
 fi
 ```
 
-> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the analysis produced ANY output files (per-file `.analysis.md` or daily synthesis), you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Evening Analysis - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if NO analysis output was generated.
+> **🚨 CRITICAL RULE: Never call `safeoutputs___noop` if analysis artifacts exist.** If the analysis produced ANY output files (per-file `-analysis.md` or daily synthesis), you MUST commit them via `safeoutputs___create_pull_request` — even if no articles are generated. Use an analysis-only PR with title: `📊 Analysis Only - Evening Analysis - {date}` and label `analysis-only`. Only use `safeoutputs___noop` if NO analysis output was generated.
 
 ## Step 2: Gather Parliamentary Data

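The CRITICAL RULE above reduces to a single branch: if any analysis artifact exists, open the analysis-only PR; otherwise noop. A hedged sketch of that gate in shell, with the `safeoutputs___*` agent tool calls stood in by `echo` placeholders; the temp directory and filename are illustrative:

```shell
# Sketch of the "never noop when artifacts exist" gate from the rule above.
# The safeoutputs___* invocations are agent tool calls, not shell commands,
# so they are represented here by echo placeholders.
set -eu
ANALYSIS_DIR="$(mktemp -d)"                 # demo stand-in for analysis/daily/$DATE
: > "$ANALYSIS_DIR/H901FiU1-analysis.md"    # pretend one per-file analysis was produced

ARTIFACT_COUNT=$(find "$ANALYSIS_DIR" -name '*-analysis.md' -type f | wc -l)
if [ "$ARTIFACT_COUNT" -gt 0 ]; then
  # would be: safeoutputs___create_pull_request with label analysis-only
  echo "create_pull_request: 📊 Analysis Only - Evening Analysis - $(date -u +%Y-%m-%d)"
else
  # would be: safeoutputs___noop
  echo "noop: no analysis output was generated"
fi
```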
.github/workflows/news-realtime-monitor.md

Lines changed: 8 additions & 6 deletions
@@ -295,7 +295,7 @@ fi
 - `analysis/templates/stakeholder-impact.md`
 - `analysis/templates/significance-scoring.md`
 
-3. **For EVERY downloaded document/data file**: apply ALL 6 analytical lenses and create `{dok_id}.analysis.md` following the per-file template. Cite specific data (dok_id, vote counts, party names). Include ≥1 color-coded Mermaid diagram with `style` directives.
+3. **For EVERY downloaded document/data file**: apply ALL 6 analytical lenses and create `{dok_id}-analysis.md` following the per-file template. Cite specific data (dok_id, vote counts, party names). Include ≥1 color-coded Mermaid diagram with `style` directives.
 
 4. **Create/rewrite ALL 7 daily synthesis files** in `analysis/daily/$ARTICLE_DATE/` — each MUST follow its template EXACTLY (metadata header, Mermaid diagrams with color-coded style directives, structured evidence tables, confidence labels, no `[REQUIRED]` placeholders).
 
@@ -392,16 +392,18 @@ echo "📊 Total pending per-file analysis files (all dates): $PENDING"
 After data is downloaded, you MUST complete ALL of these steps before proceeding to event detection:
 
 **Step A — Read templates and methodologies** (FIRST, before writing anything):
-1. `view analysis/templates/per-file-political-intelligence.md` — read FULLY, note the required structure
-2. `view analysis/methodologies/ai-driven-analysis-guide.md` — read the "BAD vs GOOD" examples
-3. `view analysis/methodologies/political-swot-framework.md` — understand evidence tables
+1. Follow the organization-wide **SHARED_PROMPT_PATTERNS Step 2 + Step 3** exactly: read **all 6 methodology guides** and **all 8 analysis templates** defined there (in `analysis/methodologies/` and `analysis/templates/`) **before writing any analysis**. Do NOT subset or skip any required document.
+2. After completing SHARED_PROMPT_PATTERNS Steps 2–3, (re)read these **news-monitor-specific assets**:
+   - `view analysis/templates/per-file-political-intelligence.md` — read FULLY, note the required structure
+   - `view analysis/methodologies/ai-driven-analysis-guide.md` — read the "BAD vs GOOD" examples
+   - `view analysis/methodologies/political-swot-framework.md` — understand evidence tables
 
 **Step B — Create real per-file analyses** (for EVERY document):
 1. List all downloaded documents: `find analysis/daily/${ARTICLE_DATE:-$(date -u +%Y-%m-%d)}/documents/ -name "*.json" -type f`
 2. For EACH JSON file:
    a. Read it with `view` — extract dok_id, titel, datum, parti, organ
    b. Apply ALL 6 analytical lenses (classification, SWOT, risk, STRIDE, stakeholders, forward indicators)
-   c. Write or rewrite `{dok_id}-analysis.md` following the per-file template EXACTLY
+   c. Write or rewrite the per-file analysis markdown so that its filename matches the `*-analysis.md` convention (for example `{dok_id}-analysis.md`) and follows the per-file template EXACTLY
    d. Include ≥1 color-coded Mermaid diagram with `style` directives and REAL data
    e. Include structured evidence tables with dok_id, confidence, impact columns
    f. SWOT quadrants must have REAL entries — NOT "_No strengths identified_"
@@ -529,7 +531,7 @@ fi
 **Before deciding whether to generate articles or call noop, you MUST:**
 
 1. **Verify data was downloaded** — `find analysis/data/ -name "*.json" -type f | wc -l` must be > 0
-2. **Verify analysis was created** — every downloaded document has a `.analysis.md` file
+2. **Verify analysis was created** — every downloaded document has a `-analysis.md` file
 3. **Verify daily synthesis files follow templates** — no `[REQUIRED]` placeholders, Mermaid diagrams with real data
 4. **ALWAYS commit data AND analysis together**:
 
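Verification step 2 ("every downloaded document has a `-analysis.md` file") can be checked per name rather than by comparing counts, which also reports exactly which dok_id is missing. A hedged sketch; the directory layout and dok_ids in the demo are hypothetical, and a real run would point `DOCS_DIR` at `analysis/daily/$ARTICLE_DATE/documents`:

```shell
# Stricter form of the coverage check: pair each JSON with its expected
# analysis file by name instead of comparing two counts. Sandboxed demo.
set -eu
DOCS_DIR="$(mktemp -d)"
: > "$DOCS_DIR/H901FiU1.json"
: > "$DOCS_DIR/H901FiU1-analysis.md"
: > "$DOCS_DIR/H901Ub42.json"          # deliberately left without an analysis

MISSING=0
for j in "$DOCS_DIR"/*.json; do
  dok_id="$(basename "$j" .json)"      # derive the expected analysis filename
  if [ ! -f "$DOCS_DIR/$dok_id-analysis.md" ]; then
    echo "❌ missing analysis for $dok_id"
    MISSING=$((MISSING + 1))
  fi
done
echo "missing=$MISSING"
```

This names the offending document directly, whereas the count comparison can pass when an extra stray analysis file masks a missing one.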
