Overview
Create dt-method-05-concepts.instructions.md — the method-tier instruction file for Method 5: User Concepts. Users take the most promising ideas from brainstorming (Method 4) and develop them into articulated concepts with enough structure to evaluate. This method introduces the three-lens validation (Desirability / Feasibility / Viability) and M365 Copilot image prompt generation for concept visualization.
Target File
.github/instructions/dt-method-05-concepts.instructions.md
Frontmatter
```yaml
---
description: 'Design Thinking Method 5: User Concepts — concept development, three-lens validation, and visual concept communication'
applyTo: '**/.copilot-tracking/dt/**/method-05*'
---
```
Required Content
Method Purpose
Method 5 transforms raw ideas into structured concepts. Users articulate what the idea is, who it serves, and why it matters. Each concept is evaluated through three lenses — Desirability (do users want this?), Feasibility (can we build this?), Viability (should we build this?) — to identify concepts worth prototyping.
Three Sub-Methods
| Sub-Method | Phase | Coach Behavior |
|---|---|---|
| 5a — Concept Planning | Planning | Help user select ideas to develop into concepts. Coach asks: "Which ideas from brainstorming had the strongest connection to your research insights?" |
| 5b — Concept Articulation | Execution | Guide concept development — who, what, why, how. Coach helps structure thinking: "Can you describe this concept as if explaining it to someone who wasn't in the brainstorming session?" |
| 5c — Concept Evaluation | Documentation | Apply three-lens validation. Coach facilitates honest assessment: "We've confirmed desirability — what would need to be true for this to be feasible?" |
Two Specialized Hats
| Hat | Role | When Activated |
|---|---|---|
| Concept Architect | Helps structure and articulate raw ideas into coherent concepts with clear value propositions | During 5a and 5b |
| Three-Lens Evaluator | Facilitates D/F/V assessment without killing concepts prematurely — identifies strengths and gaps per lens | During 5c |
Three-Lens Validation (D/F/V)
Each concept is evaluated through:
- Desirability: Does this address a real user need? Is there evidence from Method 2 research? Would users choose this over alternatives?
- Feasibility: Can this be built with available technology, skills, and time? What are the technical risks? What unknowns remain?
- Viability: Does this align with organizational goals? Is there a sustainable model? What's the cost-to-value ratio?
The coach helps users hold all three lenses simultaneously without letting any single lens dominate. A concept strong in desirability but weak in feasibility isn't discarded — it's flagged for feasibility research in later methods.
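To make the three lenses concrete, a qualitative per-concept summary might look like the sketch below. The concept name and lens notes are invented for illustration; only the lens definitions come from this issue:

```markdown
| Concept | Desirability | Feasibility | Viability |
|---|---|---|---|
| Guided fault triage | Strong — grounded in Method 2 interview insights | Open question — offline use is unproven; flag for research | Aligns with support-cost goals; sustainable model unclear |
```

Note the feasibility gap is flagged for later research rather than used to discard the concept.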
M365 Copilot Image Prompt Generation
The coach produces descriptive image prompts that users copy-paste into M365 Copilot for concept visualization:
- Prompts describe the concept scenario visually — who is using it, what they see, what the environment looks like
- Image prompts are framed for communication, not for wireframing (lo-fi constraint still applies)
- Example structure: "Create an image showing [persona] in [context] using [concept] to [accomplish goal]. The style should be [sketch/illustration] to convey that this is an early concept, not a final design."
- The coach does not generate images directly — it produces prompts the user takes to M365 Copilot
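A filled-in example following the structure above — the persona, scenario, and concept are invented for illustration, not taken from the source material:

```text
Create an image showing a field service technician in a warehouse aisle
using a handheld triage assistant to log an equipment fault in under a
minute. The style should be a loose pencil sketch to convey that this is
an early concept, not a final design.
```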
Method 5 Artifacts
Outputs stored at .copilot-tracking/dt/{project-slug}/method-05-concepts/:
- `concept-cards/` — One file per concept: description, value proposition, target user, D/F/V assessment
- `dfv-matrix.md` — Comparative D/F/V evaluation across concepts
- `image-prompts.md` — M365 Copilot image prompts for concept visualization
- `selection-summary.md` — Concepts selected for prototyping with rationale
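For illustration, a concept card might be structured like the sketch below. The headings and concept content are hypothetical; only the four fields (description, value proposition, target user, D/F/V assessment) come from this issue:

```markdown
# Concept: Guided Fault Triage

**Description:** A conversational assistant that walks a technician
through diagnosing an equipment fault step by step.

**Value proposition:** Cuts fault-logging time from minutes to seconds
and captures consistent diagnostic data.

**Target user:** Field service technicians working away from a desk.

**D/F/V assessment:** Desirable per Method 2 interviews; feasibility of
offline use is an open question; viability tied to support-cost goals.
```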
Lo-Fi Quality Enforcement
Method 5 increases fidelity slightly from Method 4 but remains conceptual:
- Concept cards are structured narratives, not specification documents
- D/F/V assessments are conversational evaluations, not scored rubrics
- Image prompts describe scenarios, not interface layouts
- Coach redirects over-specification: "That's getting into implementation — let's keep it at the concept level for now"
Coaching Approach
The coach in Method 5:
- Bridges ideation to structure: "You have a great raw idea — let's figure out what makes it tick"
- Challenges vague concepts: "Who specifically would use this, and what would change for them?"
- Balances optimism with realism through the three lenses without being a gatekeeper
- Encourages visual communication: "Want an image prompt you can use in Copilot to show what this concept looks like in practice?"
Token Budget
Target: ~1,500-2,000 tokens (method tier)
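The issue doesn't specify how to count tokens; a rough heuristic often used for English prose is about four characters per token. A minimal sketch of a budget check under that assumption (the 4-characters-per-token ratio is a heuristic, not part of this issue):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per English token.
    return len(text) // 4

def within_budget(text: str, low: int = 1500, high: int = 2000) -> bool:
    # True when the estimated token count falls inside the method-tier target.
    return low <= estimate_tokens(text) <= high

# A ~7,000-character draft estimates to 1,750 tokens — inside the window.
print(within_budget("x" * 7000))  # True
```

Treat the result as a sanity check only; the authoritative budget review happens in Phase 4.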
How to Build This File
This is an .instructions.md file — use the prompt-builder agent (not task-implementor) for the authoring phase.
Workflow: /task-research → /task-plan → /prompt-build → /task-review
Between each phase, run /clear to reset context.
Phase 1: Research
Source Material:
design-thinking-for-hve-capabilities/guidance/05-user-concepts.md
This file lives in the DT4HVE repository. If you don't have local access, ask the user to provide it, or use `read_file` if the repo is cloned nearby.
Steps:
- Read the source material above.
- Read `.github/instructions/prompt-builder.instructions.md` for authoring standards.
- Read any existing `dt-method-*` instruction files for structural precedent.
- Gather content on concept articulation patterns, three-lens D/F/V evaluation framework, concept card generation, and M365 Copilot image prompt integration.
Starter prompt:
/task-research
Research for IS028: dt-method-05-concepts.instructions.md
Read the DT4HVE source material at design-thinking-for-hve-capabilities/guidance/05-user-concepts.md. Extract:
- Concept articulation patterns — how raw ideas become structured concepts
- Three-lens evaluation framework (Desirability / Feasibility / Viability)
- Concept card generation — structure, content, and quality expectations
- M365 Copilot image prompt integration — how the coach produces visualization prompts
- Lo-fi quality enforcement for concept-level artifacts
Also read .github/instructions/prompt-builder.instructions.md for authoring standards and any existing dt-method-*.instructions.md files for structural precedent.
Output: research summary to carry into Phase 2
Phase 2: Plan
Steps:
- Review the research output from Phase 1.
- Plan the instruction file structure — method purpose, three sub-methods, two specialized hats, D/F/V framework, concept cards, image prompt guidance.
- Define section ordering, token allocation, and applyTo targeting.
Starter prompt:
/task-plan
Plan for IS028: dt-method-05-concepts.instructions.md
Using the Phase 1 research output, plan the instruction file:
- Method purpose: transforming raw ideas into structured, evaluable concepts
- Three sub-methods: 5a Concept Planning, 5b Concept Articulation, 5c Concept Evaluation
- Two hats: Concept Architect, Three-Lens Evaluator
- D/F/V framework: balanced evaluation that is facilitative, not judgmental
- Concept cards: structured narratives with value propositions and target users
- Image prompt guidance: M365 Copilot prompts as a natural coaching behavior
- Section ordering and token budget allocation (~1,500-2,000 tokens)
- applyTo: '**/.copilot-tracking/dt/**/method-05*'
Output: plan at .copilot-tracking/plans/{date}-is028-dt-method-05-plan.md
Phase 3: Build
Steps:
- Review the plan from Phase 2.
- Author the instruction file using `/prompt-build`.
- Integrate image prompt generation as a natural coaching behavior.
- Ensure D/F/V evaluation is facilitative, not judgmental.
Starter prompt:
/prompt-build file=.github/instructions/dt-method-05-concepts.instructions.md
Build IS028 using the plan at .copilot-tracking/plans/{date}-is028-dt-method-05-plan.md.
This is a method-tier instruction file for Method 5: User Concepts. Key authoring notes:
- applyTo targets Method 5 artifact paths: '**/.copilot-tracking/dt/**/method-05*'
- Integrate image prompt generation as a natural coaching behavior, not a separate tool
- Ensure D/F/V evaluation is facilitative, not judgmental — the coach helps users hold all three lenses
- Three sub-methods with consistent table structure
- Two specialized hats with activation triggers
- Concept cards are structured narratives, not specification documents
- Lo-fi enforcement: concepts are articulated but not over-specified
- M365 Copilot image prompts are guidance for the user, not AI self-generation
Phase 4: Review
Steps:
- Review the built file against prompt-builder standards and the issue requirements.
- Validate D/F/V framework completeness, concept card quality, image prompt integration, and coaching facilitation tone.
Starter prompt:
/task-review
Review IS028: .github/instructions/dt-method-05-concepts.instructions.md
Validate against:
- prompt-builder.instructions.md authoring standards
- D/F/V framework completeness — all three lenses with balanced evaluation guidance
- Concept card quality — structured narratives with clear value propositions
- Image prompt integration — M365 Copilot prompts woven into coaching, not bolted on
- Coaching facilitation tone — evaluative but not judgmental
- Lo-fi enforcement for concept artifacts
- Token budget: ~1,500-2,000 tokens
- Frontmatter applyTo correctness
- Three sub-methods and two hats with consistent table structure
After Review
- Pass: Mark IS028 complete.
- Iterate: Address review findings, rebuild, re-review.
- Escalate: If blocked by missing DT4HVE source material or architectural questions, raise to the user.
Authoring Standards
Follow `.github/instructions/prompt-builder.instructions.md`:
- `applyTo` targets Method 5 artifact paths
- Three sub-methods with consistent table structure
- Two specialized hats with activation triggers
- M365 Copilot image prompts are guidance for the user, not AI self-generation
Success Criteria
- File created at `.github/instructions/dt-method-05-concepts.instructions.md`
- Frontmatter `applyTo` targets Method 5 artifact paths
applyTotargets Method 5 artifact paths - Three sub-methods defined (planning, articulation, evaluation)
- Two specialized hats with clear activation triggers
- Three-lens validation (D/F/V) framework with coaching guidance for balanced evaluation
- M365 Copilot image prompt generation integrated as a coaching behavior
- Lo-fi quality constraints enforced — concepts are structured narratives, not specs
- Token count within ~1,500-2,000 target
- Passes task-reviewer validation against prompt-builder standards
- Each prompt, instructions, or agent file registered in `collections/design-thinking.collection.yml` with `path` and `kind` fields
- Each prompt, instructions, or agent file registered in `collections/hve-core-all.collection.yml` with `path` and `kind` fields
- `npm run plugin:generate` succeeds after collection manifest updates
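The exact manifest schema isn't shown in this issue; a hypothetical entry assuming only the `path` and `kind` fields named above (the surrounding `items:` key is an assumption):

```yaml
items:
  - path: .github/instructions/dt-method-05-concepts.instructions.md
    kind: instructions
```

The same entry would be added to both `design-thinking.collection.yml` and `hve-core-all.collection.yml` before running `npm run plugin:generate`.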