
feat(templates): Create manufacturing reference learning scenario #618

@WilliamBerryiii

Description


Overview

Create a reference learning scenario based on a manufacturing context that the dt-learning-tutor uses to ground exercises and examples across all 9 DT methods. The scenario follows a factory floor improvement project — relatable, concrete, and complex enough to demonstrate every method meaningfully. Curriculum practice exercises (#617) reference this scenario for continuity, so learners build understanding progressively rather than encountering disconnected examples.

Target File

.github/instructions/dt-curriculum-scenario-manufacturing.instructions.md

Frontmatter

```markdown
---
description: 'Manufacturing reference scenario for DT learning — factory floor improvement project used across all 9 curriculum modules'
applyTo: '**/.copilot-tracking/dt/**/curriculum-*'
---
```
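Assuming the VS Code-style glob semantics commonly used for `applyTo` patterns (where `**` matches zero or more directory segments), hypothetical paths like the following would fall under this pattern; the directory names below are invented examples, not required structure:

```text
.copilot-tracking/dt/curriculum-overview.md
workspace/.copilot-tracking/dt/session-1/curriculum-module-3.md
```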

Required Content

Scenario Overview

A manufacturing plant is experiencing quality issues on a production line. The scenario provides:

  • Context: Mid-size manufacturer, mixed automation and manual processes, multiple shifts
  • Problem signal: Rising defect rates, operator frustration, customer complaints
  • Stakeholders: Line operators, shift supervisors, quality engineers, plant manager, customers
  • Complexity: Technical (equipment), human (training, fatigue), organizational (shift handoffs), and external (customer expectations) dimensions

The scenario is deliberately multi-dimensional so every DT method has meaningful material to work with.

Per-Method Scenario Content

Each method gets scenario-specific material for exercises:

| Method | Scenario Content |
| --- | --- |
| 1 — Scoping | Define the project scope — which production line, which defect types, which stakeholders to involve. Practice: Write a scoping statement for the quality improvement project. |
| 2 — Research | Plan stakeholder interviews — operators, supervisors, quality engineers. Practice: Draft 5 interview questions for line operators about their experience with defects. |
| 3 — Synthesis | Analyze fictional interview data — recurring themes about shift handoffs and equipment calibration. Practice: Create an affinity map from provided data points. |
| 4 — Brainstorming | Generate solutions for the top insights. Practice: Use SCAMPER on the shift handoff problem to generate 10+ ideas. |
| 5 — Concepts | Develop the most promising ideas into concepts. Practice: Articulate one concept with Desirability / Feasibility / Viability assessment. |
| 6 — Prototypes | Plan a lo-fi prototype for the selected concept. Practice: Describe a paper prototype or role-play scenario for testing the concept. |
| 7 — Testing | Design a test plan for the prototype. Practice: Write 3 test scenarios with success criteria for the factory floor prototype. |
| 8 — Iteration | Review fictional test results and plan refinements. Practice: Analyze provided test feedback and propose 3 specific changes. |
| 9 — Handoff | Prepare implementation documentation. Practice: Draft a handoff summary with scope, solution, validation results, and next steps. |

Fictional Data Sets

Include small fictional data sets the tutor can present during exercises:

  • Interview excerpt snippets (3-5 per stakeholder type, 2-3 sentences each)
  • Affinity map data points (15-20 short observations for clustering)
  • Test result summaries (5-8 test scenarios with mixed pass/fail outcomes)

Keep data sets compact — enough for meaningful exercises without overwhelming the instruction file.
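As an illustration only (the quotes, names, and heading below are invented placeholders, not required content), a compact data-set entry inside the instruction file might look like:

```markdown
### Interview Excerpts: Line Operators (fictional)

- "By the end of second shift the calibration has drifted and nobody logs it.
  We just adjust by feel." (Operator, Line 2)
- "Handoff notes live on a whiteboard. If it gets wiped, the next shift
  starts blind." (Operator, Line 1)
```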

Scenario Continuity

The scenario flows logically through all 9 methods. Each module's exercise builds on previous outputs:

  • Scoping defines what Research investigates
  • Research produces data that Synthesis analyzes
  • Synthesis insights feed Brainstorming
  • And so on through Handoff

The tutor references previous exercise outputs when advancing to the next module, reinforcing the cumulative nature of the DT methodology.

Token Budget

Target: ~2,000-2,500 tokens (loaded alongside curriculum modules as reference context)

How to Build This File

This is an .instructions.md file — use the prompt-builder agent (not task-implementor) for the authoring phase.

Workflow: /task-research → /task-plan → /prompt-build → /task-review

Between each phase, use /clear to reset context, then attach the output artifact from the previous phase as input for the next.

Phase 1: Research

Research manufacturing improvement scenarios suitable for DT application.

Source Material: DT methodology and scenario design — #file:.github/instructions/prompt-builder.instructions.md for authoring standards, the DT4HVE source materials for scenario patterns, and the curriculum files (#617) for per-method exercise requirements that the scenario must support.

Steps:

  1. Type /clear to start a fresh conversation.
  2. Attach #file:.github/instructions/prompt-builder.instructions.md and any available curriculum file drafts.
  3. Copy the prompt below into chat and send.
```
/task-research topic="DT manufacturing reference scenario"

Research manufacturing improvement scenarios suitable for a Design Thinking
learning reference.

Extract:
- Manufacturing process improvement patterns (quality, efficiency, safety)
- Stakeholder types in manufacturing contexts (operators, supervisors, engineers)
- Problem dimensions (technical, human, organizational, external)
- Per-method exercise requirements from the curriculum specification
- Fictional data set patterns (interview excerpts, observations, test results)
- Prompt-builder compliance requirements for .instructions.md files

Output: DT manufacturing scenario research
```

Phase 2: Plan

Plan the scenario with per-method content and fictional data sets.

Steps:

  1. Type /clear to reset the conversation.
  2. Attach the research document from Phase 1.
  3. Copy the prompt below into chat and send.
```
/task-plan

Plan the manufacturing reference learning scenario.

Use the attached research document as input. The plan should cover:
- Factory context (plant type, processes, shifts, stakeholders)
- Problem signal (defect types, operator frustration, customer complaints)
- Per-method scenario content for all 9 methods with exercises
- Fictional data sets (interview excerpts, affinity data, test results)
- Scenario continuity thread connecting all 9 methods progressively
- Token budget (~2,000-2,500)

Output: .copilot-tracking/plans/{date}-dt-manufacturing-scenario-plan.md
```

Phase 3: Build

Author the scenario instruction file using prompt-builder.

Steps:

  1. Type /clear to reset the conversation.
  2. Attach the plan document from Phase 2.
  3. Copy the prompt below into chat and send.
```
/prompt-build

Build the manufacturing reference scenario following the attached plan.

Create .github/instructions/dt-curriculum-scenario-manufacturing.instructions.md:
- Frontmatter: description, applyTo targeting all curriculum artifact paths
- Scenario overview (context, problem, stakeholders, complexity dimensions)
- Per-method content with exercises for all 9 methods
- Fictional data sets (interview excerpts, affinity data points, test results)
- Scenario flows logically through all 9 methods with continuity
- Data is realistic but obviously fictional
- Exercises completable in 5-10 minutes each

Output: .github/instructions/dt-curriculum-scenario-manufacturing.instructions.md
```

Phase 4: Review

Validate scenario continuity and exercise quality.

Steps:

  1. Type /clear to reset the conversation.
  2. Attach the plan document from Phase 2.
  3. Copy the prompt below into chat and send.
```
/task-review

Review the manufacturing reference scenario against the attached plan.

Validate:
- Frontmatter has description and applyTo targeting curriculum paths
- Scenario overview establishes context, problem, stakeholders, complexity
- Per-method content provides meaningful exercises for all 9 methods
- Fictional data sets included (interviews, affinity data, test results)
- Scenario flows logically through all 9 methods with continuity
- Exercises are completable in 5-10 minutes each
- Token count within ~2,000-2,500 target
- Prompt-builder compliance verified

Output: .copilot-tracking/reviews/{date}-dt-manufacturing-scenario-review.md
```

After Review

  • Pass — Open a PR with the scenario file.
  • Iterate — Return to Phase 3 with the review document to fix identified issues.
  • Escalate — Return to Phase 1 to investigate scenario design gaps.

Authoring Standards

Follow .github/instructions/prompt-builder.instructions.md:

  • applyTo targets all curriculum artifact paths (loaded alongside curriculum modules)
  • Consistent per-method structure
  • Fictional data is clearly presented as example data, not real

Success Criteria

  • File created at .github/instructions/dt-curriculum-scenario-manufacturing.instructions.md
  • Frontmatter includes description and applyTo targeting curriculum paths
  • Scenario overview establishes context, problem, stakeholders, and complexity dimensions
  • Per-method content provides meaningful exercises for all 9 methods
  • Fictional data sets included (interviews, affinity data, test results)
  • Scenario flows logically through all 9 methods with continuity
  • Exercises are completable in 5-10 minutes each
  • Token count within ~2,000-2,500 target
  • Passes task-reviewer validation against prompt-builder standards
  • Each prompt, instructions, or agent file registered in collections/design-thinking.collection.yml with path and kind fields
  • Each prompt, instructions, or agent file registered in collections/hve-core-all.collection.yml with path and kind fields
  • npm run plugin:generate succeeds after collection manifest updates
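For reference, a collection entry could take roughly the following shape. The exact schema, path prefix, and `kind` value here are assumptions — verify them against the existing entries in collections/design-thinking.collection.yml before committing:

```yaml
# Hypothetical entry shape; confirm field names against existing manifest entries
- path: .github/instructions/dt-curriculum-scenario-manufacturing.instructions.md
  kind: instructions
```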

Metadata

Labels: feature (New feature triggering minor version bump), instructions (Copilot instruction files (.instructions.md))

Status: Done