Initial commit

Zhongwei Li
2025-11-30 08:30:43 +08:00
commit 9840842eeb
9 changed files with 1511 additions and 0 deletions

{
"name": "spec-driven-development",
"description": "Spec-driven development workflow commands",
"version": "1.1.0",
"author": {
"name": "Kasper Junge",
"url": "https://github.com/kasperjunge"
},
"commands": [
"./commands"
]
}

README.md
# spec-driven-development
Spec-driven development workflow commands

commands/clarify_design.md
# Clarify Design
Clarifies technical design and architecture based on requirements.
## Initial Setup
Respond with:
```
I'm ready to help clarify the technical design.
I'll read your requirements from spec/requirements.md to understand what needs to be built.
Please describe your tech stack and design preferences.
```
Wait for user input.
## Workflow
1. **Read** `spec/requirements.md` FULLY
2. **Receive user's tech preferences**
3. **Ask clarifying questions** (3-5 at a time) about:
- Specific technologies and versions
- Architecture patterns and structure
- Data storage and management
- UI/interface approach
- Integration points and dependencies
- Testing approach and preferences
4. **Continue asking** until no more questions remain
5. **Generate design document** at `spec/design.md`
## Guidelines
- Number each clarifying question
- Be specific: name actual technologies with versions, not placeholders
- Align with requirements: every design decision should trace back to requirements
- If user doesn't mention UI approach and requirements need one, ask or suggest
- If user has no testing preferences, create appropriate strategy based on tech stack
- Include concrete file/folder structures
- Match complexity to project scope - don't over-engineer
- Design for both happy path and error cases
## Chat Output Format
After completing design:
```
I've created a design document at spec/design.md.
The design includes:
- Complete tech stack with rationale
- Component breakdown with responsibilities
- Data model and UI structure
- Key user flows and interactions
- File structure and development approach
Please review and let me know if you'd like any adjustments.
```
## File Output Format
Create `spec/design.md`:
```markdown
# Design: [Project Name]
## Design Overview
[2-3 sentences describing the technical approach and key architectural decisions]
## Tech Stack
### Languages & Frameworks
- **Language**: [e.g., JavaScript/TypeScript]
- **Framework**: [e.g., React 18]
- **Build Tool**: [e.g., Vite]
- **Styling**: [e.g., Tailwind CSS]
### Data & State
- **Data Storage**: [e.g., localStorage, PostgreSQL, MongoDB]
- **State Management**: [e.g., React Context, Redux, Zustand]
- **Data Format**: [e.g., JSON, structured data model]
### Dependencies
- [Library 1]: [Purpose]
- [Library 2]: [Purpose]
**Rationale**: [Why this tech stack? How does it fit the requirements?]
## System Architecture
### High-Level Architecture
```
[Diagram or description of system layers]
Example:
┌─────────────────────────────┐
│ Presentation Layer │ ← UI Components
├─────────────────────────────┤
│ Business Logic Layer │ ← State Management
├─────────────────────────────┤
│ Data Layer │ ← Storage/API
└─────────────────────────────┘
```
### Component Breakdown
#### Component: [Name]
**Purpose**: [What this component does]
**Location**: `src/components/ComponentName.tsx`
**Responsibilities**:
- [Responsibility 1]
- [Responsibility 2]
**Props/Interface**:
```typescript
interface ComponentProps {
prop1: string;
prop2: number;
onAction: () => void;
}
```
**State**:
- [State item 1]: [Purpose]
- [State item 2]: [Purpose]
## Data Model
### Entity: [EntityName]
```typescript
interface EntityName {
id: string;
field1: string;
field2: number;
field3: Date;
}
```
**Purpose**: [What this represents]
**Relationships**: [How it relates to other entities]
## User Interface Design
### Screen: [ScreenName]
**Purpose**: [What user accomplishes here]
**Layout**:
```
┌─────────────────────────┐
│ Header │
├─────────────────────────┤
│ Main Content │
│ - Element 1 │
│ - Element 2 │
├─────────────────────────┤
│ Footer/Actions │
└─────────────────────────┘
```
**Key Elements**:
- [Element 1]: [Purpose and behavior]
- [Element 2]: [Purpose and behavior]
**User Interactions**:
- [Action 1] → [Result]
- [Action 2] → [Result]
## Key Interactions & Flows
### Flow: [FlowName]
**Scenario**: [User story this implements]
1. User [action]
2. System [response]
3. System [next step]
4. User sees [result]
**Error Handling**:
- If [error condition] → [behavior]
- If [error condition] → [behavior]
## File Structure
```
project-root/
├── src/
│ ├── components/
│ │ ├── ComponentA.tsx
│ │ └── ComponentB.tsx
│ ├── hooks/
│ │ └── useCustomHook.ts
│ ├── utils/
│ │ └── helpers.ts
│ ├── types/
│ │ └── index.ts
│ ├── App.tsx
│ └── main.tsx
├── spec/
│ ├── requirements.md
│ └── design.md
├── package.json
└── README.md
```
## Design Decisions & Tradeoffs
### Decision: [DecisionName]
**Choice**: [What we decided]
**Alternatives Considered**: [What else we could do]
**Rationale**: [Why we chose this]
**Tradeoffs**: [What we gain/lose]
## Non-Functional Considerations
### Performance
- [Performance approach, e.g., "Lazy load components"]
- [Optimization strategy, e.g., "Memoize expensive calculations"]
### Scalability
- [How design handles growth]
- [What needs to change if requirements scale]
### Accessibility
- [Accessibility approach, e.g., "ARIA labels on all interactive elements"]
- [Keyboard navigation strategy]
### Error Handling
- [Error handling strategy]
- [User feedback approach]
## Testing Strategy
**Philosophy**: Every task must be verified before moving to the next. Testing is incremental and continuous.
### Testing Tools & Framework
- **Testing Framework**: [e.g., Jest, pytest, Vitest, etc.]
- **Testing Library**: [e.g., React Testing Library, unittest, etc.]
- **Test Runner Command**: `[e.g., npm test, pytest, etc.]`
- **Coverage Tool** (if applicable): [e.g., Jest coverage, coverage.py]
### Verification Approach for Each Task Type
#### Code/Logic Tasks
- **Verification Method**: Automated tests
- **When to Test**: [After each task / TDD / as you go]
- **What to Test**:
- Function behavior with valid inputs
- Edge cases and error handling
- Integration with other components
#### UI/Component Tasks
- **Verification Method**: [Automated tests + manual verification / manual only]
- **Automated Tests**: Component rendering, props, interactions
- **Manual Verification**: Visual appearance, responsiveness, UX flow
#### Configuration/Setup Tasks
- **Verification Method**: [e.g., "Run build command", "Run dev server"]
- **Success Criteria**: No errors, expected output
### Test Writing Approach
[Describe when tests should be written:]
- **Test-Driven Development (TDD)**: Write tests before implementation
- **Test-After**: Implement feature, then write tests
- **Incremental**: Write tests as you implement
### Critical Testing Rules
1. Every task must have clear verification steps
2. Tests must pass before moving to next task
3. If tests fail repeatedly (3+ times), stop and reassess
4. Document any tasks that cannot be automated (manual verification)
### Unit Tests
- **What**: Test individual components/functions in isolation
- **Where**: `src/__tests__/` or alongside source files
- **Key areas**: [List critical functions to test]
- **Run Command**: `[command to run unit tests]`
### Integration Tests
- **What**: Test component interactions and data flow
- **Key flows**: [List important user flows to test]
- **Run Command**: `[command to run integration tests]`
### Manual Testing Scenarios
- **What**: UI/UX verification that requires human judgment
- **Key scenarios**: [List scenarios to manually verify]
- **When**: After completing each phase with UI changes
## Development Approach
### Phase Breakdown
[High-level phases - detailed breakdown happens in plan.md]
1. **Phase 1**: [e.g., "Setup & Core Structure"]
2. **Phase 2**: [e.g., "Data Layer"]
3. **Phase 3**: [e.g., "UI Components"]
4. **Phase 4**: [e.g., "Integration & Polish"]
### Development Standards
- [Coding conventions]
- [Naming conventions]
- [Documentation requirements]
## Open Questions
[Any remaining technical uncertainties - resolve before planning]
- [Question 1]
- [Question 2]
## References
- Requirements: `spec/requirements.md`
- [Any relevant documentation or resources]
```

# Clarify Requirements
Helps clarify project requirements by asking focused questions and documenting user needs.
## Initial Setup
Respond with:
```
I'm ready to help clarify your project requirements.
Please describe what you want to build.
```
Wait for user input.
## Workflow
1. **Receive user's project description**
2. **Ask clarifying questions** (3-5 at a time) about:
- User needs and goals
- Key scenarios and use cases
- Edge cases and error handling
- Scope boundaries (what's in/out)
- Success criteria
3. **Continue asking** until no more questions remain
4. **Generate requirements document** at `spec/requirements.md`
## Guidelines
- Number each clarifying question
- Stay user-focused: describe WHAT users need, not HOW to build it
- No tech stack, architecture, or implementation details
- Be specific: acceptance criteria should be testable
- Avoid vague terms like "user-friendly" or "fast"
- Include edge cases and error scenarios
- Use "Out of Scope" section aggressively
- Every feature should map to a clear user need
## Chat Output Format
After completing requirements:
```
I've created a requirements document at spec/requirements.md.
The document includes:
- [X] user stories with acceptance criteria
- [X] functional requirements
- Non-functional requirements
- Clear scope boundaries
Please review and let me know if you'd like any adjustments.
```
## File Output Format
Create `spec/requirements.md`:
```markdown
# Requirements: [Project Name]
## Project Overview
[2-3 sentences describing what this project is and why it exists]
## Target Users
[Who will use this? What's their context and needs?]
## User Stories
### Story 1: [User Goal]
**As a** [type of user]
**I want to** [action]
**So that** [benefit/value]
**Acceptance Criteria:**
- [ ] [Specific, testable criterion 1]
- [ ] [Specific, testable criterion 2]
- [ ] [Specific, testable criterion 3]
**Edge Cases:**
- [What happens when X?]
- [What happens when Y?]
### Story 2: [Another User Goal]
[Same structure as Story 1]
[Continue for all major user stories]
## Functional Requirements
### FR1: [Requirement Name]
**Description**: [What the system must do]
**Priority**: High/Medium/Low
**Acceptance**: [How to verify this works]
### FR2: [Another Requirement]
[Same structure]
## Non-Functional Requirements
### Performance
- [e.g., "Page load time under 2 seconds"]
- [e.g., "Support up to 1000 items"]
### Usability
- [e.g., "Interface should be intuitive for first-time users"]
- [e.g., "Key actions accessible within 2 clicks"]
### Accessibility
- [Any accessibility requirements if applicable]
## Out of Scope
[Explicitly list what we're NOT building to prevent scope creep]
- [Feature/functionality 1]
- [Feature/functionality 2]
## Success Criteria
[How do we know this project succeeded?]
- [ ] [Measurable success criterion 1]
- [ ] [Measurable success criterion 2]
## Open Questions
[Any remaining uncertainties - resolve these before design phase]
- [Question 1]
- [Question 2]
```

commands/create_design.md
# Create Design
Quick design generation with AI-driven creation and review cycle.
## Initial Setup
Respond with:
```
I'm ready to help create your technical design.
I'll read your requirements from spec/requirements.md first.
Please tell me your tech stack preferences. You can be specific (e.g., 'Python, FastAPI, SQLite, Jinja2') or general (e.g., 'modern web stack') or just say 'you decide'.
```
Wait for user input.
## Workflow
1. **Read** `spec/requirements.md` FULLY
2. **Receive user's tech preferences**
3. **Fill in gaps** with reasonable decisions based on:
- What user specified
- Requirements from requirements.md
- Modern best practices
- Project complexity and scope
4. **Generate complete design document** at `spec/design.md`
5. **Be explicit about decisions made**: "Based on your input, I made these decisions: [list with rationale]"
6. **Present design** with clear summary of tech choices
7. **Ask**: "Does this look good, or would you like me to make adjustments?"
8. **If adjustments needed**: make changes and repeat steps 6-7
9. **Guide toward completion** (AI should signal when things look complete)
## Guidelines
- Listen to user's tech preferences (specific or vague)
- Fill gaps intelligently - don't ask questions, make decisions
- Be explicit about what decisions were made and WHY
- Match complexity to project scope - don't over-engineer
- If no UI framework mentioned but requirements need UI, choose appropriate one
- If no testing preference mentioned, include appropriate testing strategy
- Align with requirements: every design decision should trace back to requirements
- Include concrete file/folder structures
- Design for both happy path and error cases
- AI should guide conversation toward wrap-up after 2-3 adjustment rounds
## Chat Output Format
After generating design.md:
```
I've created a design document at spec/design.md.
Based on your input, I made these decisions:
- [Decision 1 with rationale - e.g., "React 18 with TypeScript for type safety and modern features"]
- [Decision 2 with rationale - e.g., "localStorage for data persistence since requirements don't need server sync"]
- [Decision 3 with rationale - e.g., "Vitest + React Testing Library for component testing (matches Vite build tool)"]
The design includes:
- Complete tech stack with rationale
- Component breakdown with responsibilities
- Data model and [UI structure if applicable]
- Key user flows and interactions
- File structure and testing strategy
Does this look good, or would you like me to make adjustments?
```
After adjustments (if needed):
```
I've updated spec/design.md with your feedback.
Changes made:
- [Change 1]
- [Change 2]
Does this look good now, or would you like further adjustments?
```
## File Output Format
Create `spec/design.md`:
```markdown
# Design: [Project Name]
## Design Overview
[2-3 sentences describing the technical approach and key architectural decisions]
## Tech Stack
### Languages & Frameworks
- **Language**: [e.g., JavaScript/TypeScript]
- **Framework**: [e.g., React 18]
- **Build Tool**: [e.g., Vite]
- **Styling**: [e.g., Tailwind CSS]
### Data & State
- **Data Storage**: [e.g., localStorage, PostgreSQL, MongoDB]
- **State Management**: [e.g., React Context, Redux, Zustand]
- **Data Format**: [e.g., JSON, structured data model]
### Dependencies
- [Library 1]: [Purpose]
- [Library 2]: [Purpose]
**Rationale**: [Why this tech stack? How does it fit the requirements?]
## System Architecture
### High-Level Architecture
```
[Diagram or description of system layers]
Example:
┌─────────────────────────────┐
│ Presentation Layer │ ← UI Components
├─────────────────────────────┤
│ Business Logic Layer │ ← State Management
├─────────────────────────────┤
│ Data Layer │ ← Storage/API
└─────────────────────────────┘
```
### Component Breakdown
#### Component: [Name]
**Purpose**: [What this component does]
**Location**: `src/components/ComponentName.tsx`
**Responsibilities**:
- [Responsibility 1]
- [Responsibility 2]
**Props/Interface**:
```typescript
interface ComponentProps {
prop1: string;
prop2: number;
onAction: () => void;
}
```
**State**:
- [State item 1]: [Purpose]
- [State item 2]: [Purpose]
## Data Model
### Entity: [EntityName]
```typescript
interface EntityName {
id: string;
field1: string;
field2: number;
field3: Date;
}
```
**Purpose**: [What this represents]
**Relationships**: [How it relates to other entities]
## User Interface Design
### Screen: [ScreenName]
**Purpose**: [What user accomplishes here]
**Layout**:
```
┌─────────────────────────┐
│ Header │
├─────────────────────────┤
│ Main Content │
│ - Element 1 │
│ - Element 2 │
├─────────────────────────┤
│ Footer/Actions │
└─────────────────────────┘
```
**Key Elements**:
- [Element 1]: [Purpose and behavior]
- [Element 2]: [Purpose and behavior]
**User Interactions**:
- [Action 1] → [Result]
- [Action 2] → [Result]
## Key Interactions & Flows
### Flow: [FlowName]
**Scenario**: [User story this implements]
1. User [action]
2. System [response]
3. System [next step]
4. User sees [result]
**Error Handling**:
- If [error condition] → [behavior]
- If [error condition] → [behavior]
## File Structure
```
project-root/
├── src/
│ ├── components/
│ │ ├── ComponentA.tsx
│ │ └── ComponentB.tsx
│ ├── hooks/
│ │ └── useCustomHook.ts
│ ├── utils/
│ │ └── helpers.ts
│ ├── types/
│ │ └── index.ts
│ ├── App.tsx
│ └── main.tsx
├── spec/
│ ├── requirements.md
│ └── design.md
├── package.json
└── README.md
```
## Design Decisions & Tradeoffs
### Decision: [DecisionName]
**Choice**: [What we decided]
**Alternatives Considered**: [What else we could do]
**Rationale**: [Why we chose this]
**Tradeoffs**: [What we gain/lose]
## Non-Functional Considerations
### Performance
- [Performance approach, e.g., "Lazy load components"]
- [Optimization strategy, e.g., "Memoize expensive calculations"]
### Scalability
- [How design handles growth]
- [What needs to change if requirements scale]
### Accessibility
- [Accessibility approach, e.g., "ARIA labels on all interactive elements"]
- [Keyboard navigation strategy]
### Error Handling
- [Error handling strategy]
- [User feedback approach]
## Testing Strategy
**Philosophy**: Every task must be verified before moving to the next. Testing is incremental and continuous.
### Testing Tools & Framework
- **Testing Framework**: [e.g., Jest, pytest, Vitest, etc.]
- **Testing Library**: [e.g., React Testing Library, unittest, etc.]
- **Test Runner Command**: `[e.g., npm test, pytest, etc.]`
- **Coverage Tool** (if applicable): [e.g., Jest coverage, coverage.py]
### Verification Approach for Each Task Type
#### Code/Logic Tasks
- **Verification Method**: Automated tests
- **When to Test**: [After each task / TDD / as you go]
- **What to Test**:
- Function behavior with valid inputs
- Edge cases and error handling
- Integration with other components
#### UI/Component Tasks
- **Verification Method**: [Automated tests + manual verification / manual only]
- **Automated Tests**: Component rendering, props, interactions
- **Manual Verification**: Visual appearance, responsiveness, UX flow
#### Configuration/Setup Tasks
- **Verification Method**: [e.g., "Run build command", "Run dev server"]
- **Success Criteria**: No errors, expected output
### Test Writing Approach
[Describe when tests should be written:]
- **Test-Driven Development (TDD)**: Write tests before implementation
- **Test-After**: Implement feature, then write tests
- **Incremental**: Write tests as you implement
### Critical Testing Rules
1. Every task must have clear verification steps
2. Tests must pass before moving to next task
3. If tests fail repeatedly (3+ times), stop and reassess
4. Document any tasks that cannot be automated (manual verification)
### Unit Tests
- **What**: Test individual components/functions in isolation
- **Where**: `src/__tests__/` or alongside source files
- **Key areas**: [List critical functions to test]
- **Run Command**: `[command to run unit tests]`
### Integration Tests
- **What**: Test component interactions and data flow
- **Key flows**: [List important user flows to test]
- **Run Command**: `[command to run integration tests]`
### Manual Testing Scenarios
- **What**: UI/UX verification that requires human judgment
- **Key scenarios**: [List scenarios to manually verify]
- **When**: After completing each phase with UI changes
## Development Approach
### Phase Breakdown
[High-level phases - detailed breakdown happens in plan.md]
1. **Phase 1**: [e.g., "Setup & Core Structure"]
2. **Phase 2**: [e.g., "Data Layer"]
3. **Phase 3**: [e.g., "UI Components"]
4. **Phase 4**: [e.g., "Integration & Polish"]
### Development Standards
- [Coding conventions]
- [Naming conventions]
- [Documentation requirements]
## Open Questions
[Any remaining technical uncertainties - resolve before planning]
- [Question 1]
- [Question 2]
## References
- Requirements: `spec/requirements.md`
- [Any relevant documentation or resources]
```

commands/create_plan.md
# Create Implementation Plan
Creates detailed implementation plan based on requirements and design specifications.
## Initial Setup
Respond with:
```
I'm ready to create an implementation plan.
I'll read your requirements from spec/requirements.md and design from spec/design.md to create a detailed plan.
```
Then proceed to read documents and generate plan.
## Workflow
1. **Read** `spec/requirements.md` FULLY
2. **Read** `spec/design.md` FULLY
3. **Generate implementation plan** at `spec/plan.md` with:
- 3-5 phases, each with specific tasks
- Every task has: file paths, code structure, test requirements, verification steps, success criteria
- Map tasks to user stories from requirements
- Clear scope boundaries
## Planning Strategy
### UI-First Planning Approach
**When to use:** Use UI-first planning unless there's no UI involved (e.g., pure API, CLI tool, library, data pipeline).
**Why UI-First matters:**
The traditional backend-first approach (database → API → UI) has a critical flaw: you don't see or interact with anything until late in the process. This causes:
- **Late discovery of design misalignments** - Costly to fix after backend is built
- **Loss of momentum** - Abstract data models feel like slow progress
- **Inability to verify direction** - Can't know if it's right until you see it
UI-first solves this by:
- **Immediate visual verification** - See the UI running with mock data early, giving confidence about direction
- **Catch misunderstandings early** - Find issues when UI is easy to adjust, not after complex data layer is built
- **Better data model design** - Building UI first reveals what data shapes actually make sense
- **Maintains energy** - Working interactive UI (even with fake data) keeps momentum high
- **Easier iteration** - Refine UX freely with mocks, then build backend to support validated design
**Key insight:** UI with mock data is cheap to change. Backend with real data/APIs is expensive to change. So validate the expensive part by building the cheap part first.
**Example phase structure** (adapt to project needs, this is inspiration not a strict template):
- **Phase 1: Project Setup & UI Shell** - Initialize project, setup dev tools, create basic app shell, verify dev server runs and shows basic UI
- **Phase 2: Complete UI with Mock Data** - Implement ALL UI components with hardcoded/mock data, make it fully interactive and navigable
- **Phase 3: Data Layer & Backend** - Build database/storage, API endpoints, business logic, backend tests
- **Phase 4: Integration** - Replace mocks with real data, connect UI to backend, handle loading/error states
- **Phase 5: Polish & Testing** - Edge cases, error handling, final acceptance criteria verification
### Backend-First Planning Approach
**When to use:** Projects with no user interface (REST APIs, CLI tools, libraries, background services, data pipelines).
**Planning approach:** Structure phases around core functionality, data model, API contracts, and verification strategies appropriate for non-UI projects.
## Guidelines
- **Detect if project has a UI** based on design.md
- **If UI exists:** Use UI-first planning approach (see above for reasoning)
- **If no UI:** Use appropriate structure for that project type (backend-first)
- The 5-phase UI-first structure is an **example for inspiration** - adapt to project needs
- **Key principle for UI projects:** See working UI with mocks early → build backend → integrate
- Be specific: every task needs concrete file paths and code structures
- Every task MUST have:
- Test Requirements section
- Verification Steps section
- Success criteria (what tests to run, what to verify)
- Break work into 3-5 discrete phases
- Each phase should be independently verifiable
- Order matters - explain why phases come in sequence
- Map tasks to specific user stories from requirements.md
- Include "What We're NOT Doing" section to prevent scope creep
- Plan should be detailed enough to execute mechanically
## Chat Output Format
After completing plan:
```
I've created a detailed implementation plan at spec/plan.md.
The plan includes:
- [X] phases with specific tasks
- File paths and code structures from design.md
- Acceptance criteria from requirements.md
- Clear verification checklist
Please review and let me know if you'd like any adjustments.
```
## File Output Format
Create `spec/plan.md`:
```markdown
# Implementation Plan: [Project Name]
## Overview
[1-2 sentence summary of what we're building based on requirements and design]
## Current State
- Empty project / Starting from scratch
- Target tech stack: [From design.md]
## Desired End State
[Clear description of what "done" looks like, referencing requirements]
**Success Criteria:**
- [ ] All user stories from requirements.md are implemented
- [ ] All acceptance criteria are met
- [ ] Application matches design specifications
- [ ] Code is tested and runs without errors
## What We're NOT Doing
[Explicitly list out-of-scope items from requirements.md]
## Implementation Approach
[High-level strategy: which phases, why this order, key dependencies]
---
## Phase 1: Project Setup & Foundation
### Overview
Set up the development environment and project structure according to design specifications.
### Tasks
#### 1.1 Initialize Project
**Action**:
- Create project using [tool, e.g., "Vite", "Create React App"]
- Install dependencies listed in design.md
- Set up file structure from design.md
**Files Created**:
- `package.json` / `requirements.txt` / [dependency file]
- `src/` directory structure
- Configuration files (tsconfig.json, vite.config.js, etc.)
**Success Criteria**:
- [ ] Project builds without errors: `[build command]`
- [ ] Dev server starts: `[dev command]`
- [ ] All dependencies installed correctly
#### 1.2 Setup Development Tools
**Action**:
- Configure linting and formatting
- Set up testing framework
- Add scripts to package.json
**Success Criteria**:
- [ ] Linter runs: `[lint command]`
- [ ] Formatter works: `[format command]`
- [ ] Test runner works: `[test command]`
---
## Phase 2: Data Layer
### Overview
Implement data models, storage, and state management as defined in design.md.
### Tasks
#### 2.1 Define Data Models
**File**: `src/types/index.ts` (or equivalent)
**Action**:
- Implement [Entity1] interface from design.md
- Implement [Entity2] interface from design.md
- Add type exports
**Code Structure**:
```typescript
interface Entity1 {
id: string;
field1: string;
// ... from design
}
```
**Test Requirements**:
- [ ] Write tests to verify type definitions are correct
- [ ] Test type safety with valid and invalid data
- [ ] Tests pass: `npm run typecheck` or equivalent
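For illustration, one way to make a type definition testable at runtime is a small user-defined type guard (a sketch; `Entity1` and its fields are placeholders from the template, and only a couple of fields are shown):
```typescript
interface Entity1 {
  id: string;
  field1: string;
}

// Runtime guard mirroring the compile-time interface, so tests can
// exercise it with both valid and invalid data.
function isEntity1(value: unknown): value is Entity1 {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "string" && typeof v.field1 === "string";
}
```
The guard lets the "test type safety with valid and invalid data" requirement run as a normal unit test rather than relying on the type checker alone.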
**Verification Steps**:
1. Run type checker: `[typecheck command]`
2. Verify no type errors in output
3. Expected: Clean compilation, all types defined
**Success Criteria**:
- [ ] All code changes completed
- [ ] Types compile without errors
- [ ] All entities from design.md are defined
- [ ] Type tests written and passing
#### 2.2 Implement Storage Layer
**File**: `src/utils/storage.ts` (or equivalent)
**Action**:
- Create [storage approach from design.md]
- Implement CRUD operations for each entity
- Add error handling
**Functions to Implement**:
- `create[Entity](data: Entity): Promise<Entity>`
- `read[Entity](id: string): Promise<Entity>`
- `update[Entity](id: string, data: Partial<Entity>): Promise<Entity>`
- `delete[Entity](id: string): Promise<void>`
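As a sketch of these signatures, a minimal in-memory store (the entity shape is a placeholder; a real implementation would swap the `Map` for the storage approach chosen in design.md, e.g. localStorage or a database client):
```typescript
interface HasId {
  id: string;
}

// Generic in-memory CRUD store matching the async signatures above.
class Store<T extends HasId> {
  private items = new Map<string, T>();

  async create(data: T): Promise<T> {
    if (this.items.has(data.id)) throw new Error(`duplicate id: ${data.id}`);
    this.items.set(data.id, data);
    return data;
  }

  async read(id: string): Promise<T> {
    const item = this.items.get(id);
    if (!item) throw new Error(`not found: ${id}`);
    return item;
  }

  async update(id: string, patch: Partial<T>): Promise<T> {
    const current = await this.read(id); // throws if missing
    const next = { ...current, ...patch };
    this.items.set(id, next);
    return next;
  }

  async delete(id: string): Promise<void> {
    if (!this.items.delete(id)) throw new Error(`not found: ${id}`);
  }
}
```
Keeping the interface async even for in-memory storage means the call sites don't change when the backend is swapped in later.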
**Test Requirements**:
- [ ] Write unit tests for each CRUD operation
- [ ] Test error handling (invalid data, not found, etc.)
- [ ] Test data persistence and retrieval
- [ ] Tests pass: `[test command]`
**Verification Steps**:
1. Run tests: `[test command]`
2. Verify all CRUD operations tested
3. Expected: All tests passing, 100% coverage on storage functions
**Success Criteria**:
- [ ] All code changes completed
- [ ] All CRUD operations implemented
- [ ] Unit tests written for all operations
- [ ] All tests passing
- [ ] Error cases handled and tested
---
## Phase 3: Core Components
### Overview
Build the UI components defined in design.md, focusing on core functionality first.
### Tasks
#### 3.1 Create [ComponentName] Component
**File**: `src/components/[ComponentName].tsx` (from design.md)
**Action**:
- Implement component as specified in design.md
- Add props/interface from design
- Implement event handlers
- Add basic styling
**Component Structure** (from design.md):
```typescript
interface ComponentProps {
// Props from design.md
}
function ComponentName({ prop1, prop2 }: ComponentProps) {
// Implementation
}
```
**Implements User Story**: [Reference specific user story from requirements.md]
**Test Requirements**:
- [ ] Write component tests (rendering, props, interactions)
- [ ] Test event handlers trigger correctly
- [ ] Test edge cases (empty data, errors, etc.)
- [ ] Tests pass: `[test command]`
**Verification Steps**:
1. Run tests: `[test command]`
2. Manual check: Open app, verify component appears correctly
3. Expected: Tests passing, component renders and functions as designed
**Success Criteria**:
- [ ] All code changes completed
- [ ] Component implemented per design
- [ ] Component tests written and passing
- [ ] Manual verification confirms visual appearance
- [ ] Event handlers tested and working
---
## Phase 4: User Flows & Integration
### Overview
Connect components together to implement complete user flows from requirements.md.
### Tasks
#### 4.1 Implement [Flow Name] Flow
**User Story**: [Reference specific user story from requirements.md]
**Components Involved**: [List from design.md]
**Action**:
- Connect [Component A] to [Component B]
- Implement data flow as per design.md
- Add error handling for edge cases from requirements.md
**Flow Steps** (from design.md):
1. User [action]
2. System [response]
3. User sees [result]
**Acceptance Criteria** (from requirements.md):
- [ ] [Criterion 1 from requirements]
- [ ] [Criterion 2 from requirements]
- [ ] [Criterion 3 from requirements]
**Edge Cases**:
- [ ] [Edge case 1 from requirements] is handled correctly
- [ ] [Edge case 2 from requirements] is handled correctly
---
## Phase 5: Polish & Testing
### Overview
Add finishing touches, handle edge cases, and verify all acceptance criteria.
### Tasks
#### 5.1 UI/UX Polish
**Action**:
- Refine styling to match design vision
- Add loading states
- Add empty states
- Improve error messages
**Success Criteria**:
- [ ] UI matches design specifications
- [ ] All interactive elements are accessible
- [ ] Loading states are clear
- [ ] Empty states are helpful
#### 5.2 Error Handling
**Action**:
- Implement error boundaries (if applicable)
- Add user-friendly error messages
- Handle all edge cases from requirements.md
**Success Criteria**:
- [ ] No uncaught errors
- [ ] Users see helpful error messages
- [ ] Edge cases handled gracefully
#### 5.3 Final Testing
**Action**:
- Write unit tests for critical functions
- Manually test all user flows
- Test all acceptance criteria from requirements.md
**Success Criteria**:
- [ ] All automated tests pass: `[test command]`
- [ ] All user stories verified manually
- [ ] All acceptance criteria met
---
## Verification Checklist
### Requirements Completion
[For each user story from requirements.md]
- [ ] User Story 1: [Name]
- [ ] Acceptance criterion 1
- [ ] Acceptance criterion 2
- [ ] User Story 2: [Name]
- [ ] Acceptance criterion 1
- [ ] Acceptance criterion 2
### Design Implementation
- [ ] All components from design.md implemented
- [ ] Data model matches design.md
- [ ] File structure matches design.md
- [ ] Tech stack matches design.md
### Quality Checks
- [ ] Code builds without errors: `[build command]`
- [ ] Linter passes: `[lint command]`
- [ ] Tests pass: `[test command]`
- [ ] Application runs: `[dev/start command]`
### User Acceptance
- [ ] Manually test all user flows
- [ ] Verify all edge cases
- [ ] Check performance (if requirements specify)
- [ ] Verify accessibility (if requirements specify)
---
## Development Notes
### Working Through the Plan
1. Don't skip phases - each builds on the previous
2. Verify success criteria before moving to the next task
3. Update this document if you discover issues in the specs
### If You Get Stuck
- Review requirements.md for the "why"
- Review design.md for the "how"
- Ask questions if specs are unclear
- Update specs if you find gaps
## References
- Requirements: `spec/requirements.md`
- Design: `spec/design.md`
```


@@ -0,0 +1,133 @@
# Create Requirements
Quick requirements generation with an AI-driven creation and review cycle.
## Initial Setup
Respond with:
```
I'm ready to help create your project requirements.
Please provide a brief description of what you want to build (1-3 paragraphs is fine).
```
Wait for user input.
## Workflow
1. **Receive user's brief project description**
2. **Generate complete requirements document** immediately at `spec/requirements.md`
3. **Present the requirements** to user with summary
4. **Ask**: "Does this look good, or would you like me to make adjustments?"
5. **If adjustments needed**: make changes and repeat steps 3-4
6. **Guide toward completion** (AI should signal when things look complete)
## Guidelines
- Make reasonable assumptions based on best practices
- Focus on WHAT users need, not HOW to build it
- Be specific with acceptance criteria (testable)
- Include common edge cases proactively
- Include an aggressive "Out of Scope" section
- The AI should guide the conversation toward wrap-up after 2-3 adjustment rounds
- Stay user-focused: every feature should map to a clear user need
- Avoid vague terms like "user-friendly" or "fast"
## Chat Output Format
After generating requirements.md:
```
I've created a requirements document at spec/requirements.md based on your description.
Key decisions I made:
- [Decision 1 - e.g., "Focused on single-user experience (multi-user out of scope)"]
- [Decision 2 - e.g., "Prioritized core task management over advanced features"]
- [Decision 3 - e.g., "Included data export as key requirement based on user control needs"]
The document includes:
- [X] user stories with acceptance criteria
- [X] functional requirements
- Non-functional requirements
- Clear scope boundaries
Does this look good, or would you like me to make adjustments?
```
After adjustments (if needed):
```
I've updated spec/requirements.md with your feedback.
Changes made:
- [Change 1]
- [Change 2]
Does this look good now, or would you like further adjustments?
```
## File Output Format
Create `spec/requirements.md`:
```markdown
# Requirements: [Project Name]
## Project Overview
[2-3 sentences describing what this project is and why it exists]
## Target Users
[Who will use this? What's their context and needs?]
## User Stories
### Story 1: [User Goal]
**As a** [type of user]
**I want to** [action]
**So that** [benefit/value]
**Acceptance Criteria:**
- [ ] [Specific, testable criterion 1]
- [ ] [Specific, testable criterion 2]
- [ ] [Specific, testable criterion 3]
**Edge Cases:**
- [What happens when X?]
- [What happens when Y?]
### Story 2: [Another User Goal]
[Same structure as Story 1]
[Continue for all major user stories]
## Functional Requirements
### FR1: [Requirement Name]
**Description**: [What the system must do]
**Priority**: High/Medium/Low
**Acceptance**: [How to verify this works]
### FR2: [Another Requirement]
[Same structure]
## Non-Functional Requirements
### Performance
- [e.g., "Page load time under 2 seconds"]
- [e.g., "Support up to 1000 items"]
### Usability
- [e.g., "Interface should be intuitive for first-time users"]
- [e.g., "Key actions accessible within 2 clicks"]
### Accessibility
- [Any accessibility requirements if applicable]
## Out of Scope
[Explicitly list what we're NOT building to prevent scope creep]
- [Feature/functionality 1]
- [Feature/functionality 2]
## Success Criteria
[How do we know this project succeeded?]
- [ ] [Measurable success criterion 1]
- [ ] [Measurable success criterion 2]
## Open Questions
[Any remaining uncertainties - resolve these before design phase]
- [Question 1]
- [Question 2]
```

157
commands/implement_plan.md Normal file

@@ -0,0 +1,157 @@
# Implement Plan
Implements the project based on the approved implementation plan with rigorous testing.
## Initial Setup
Respond with:
```
I'm ready to implement the plan.
I'll read your plan from spec/plan.md, requirements from spec/requirements.md, and design from spec/design.md to guide the implementation.
```
Then proceed to read documents and start implementing.
## Workflow
1. **Read all spec documents** (`plan.md`, `requirements.md`, `design.md`) FULLY
2. **Check for existing checkmarks** `[x]` in plan.md to find where to start
3. **Create todo list** (if supported) to track progress
4. **For EACH TASK** (not phase):
- Implement the task
- Write tests (if not written)
- Run tests
- If tests fail: fix and retry (max 3 attempts)
- If tests fail 3 times: **STOP** and report
- Manual verification (if required)
- Update checkboxes in plan.md
- Report progress
- Proceed to next task
5. **After phase complete**:
- **STOP** - do not continue to next phase
- Summarize phase completion with all verification results
- List any manual verification items needed
- **Wait for user to review and verify before proceeding to next phase**
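The per-task loop and phase gate above can be sketched as control flow. This is a minimal illustration of the workflow, not part of the command itself; `Task`, `implement_task`, and `run_tests` are hypothetical stand-ins:

```python
from dataclasses import dataclass

MAX_ATTEMPTS = 3  # tests may be retried at most 3 times per task


@dataclass
class Task:
    name: str
    done: bool = False  # corresponds to a [x] checkbox in spec/plan.md


def run_phase(tasks, implement_task, run_tests):
    """Implement each task in order; stop the whole phase on repeated test failure."""
    for task in tasks:
        if task.done:  # trust completed work, resume at the first unchecked item
            continue
        implement_task(task)
        for attempt in range(1, MAX_ATTEMPTS + 1):
            if run_tests(task):
                break
            if attempt == MAX_ATTEMPTS:
                return f"STOP: tests for {task.name!r} failed {MAX_ATTEMPTS} times"
        task.done = True  # checkbox semantics: code complete + tests passing
    return "PHASE COMPLETE: wait for user review before the next phase"
```

The early `return` on the third failure mirrors the "STOP and report" rule, and the final string mirrors the mandatory pause at every phase boundary.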
## Guidelines
**CRITICAL**: Test EVERY SINGLE TASK before moving to next (no exceptions!)
**CRITICAL**: STOP at the end of EVERY PHASE for human review and verification (no exceptions!)
- Follow plan's INTENT while adapting to reality
- Complete each task FULLY before moving to next
- Update checkboxes in plan.md as you complete items
- Each checkbox represents: code complete + tests passing
- Run tests at natural stopping points
- If tests fail 3 times: STOP, report, wait for user
- If codebase doesn't match expectations: STOP, present options
- Trust completed work, pick up from first unchecked item
- Stay aligned with requirements.md and design.md
- **Never proceed to the next phase without explicit user approval**
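Because each checkbox represents code complete plus tests passing, progress can be read straight out of `spec/plan.md`. A minimal sketch — the function name and regex are illustrative, not part of the command:

```python
import re


def checkbox_progress(plan_text: str) -> tuple[int, int]:
    """Return (checked, total) Markdown task checkboxes found in a plan document."""
    boxes = re.findall(r"^\s*-\s\[([ xX])\]", plan_text, flags=re.MULTILINE)
    checked = sum(1 for mark in boxes if mark in "xX")
    return checked, len(boxes)


plan = """\
- [x] All code changes completed
- [x] Types compile without errors
- [ ] Type tests written and passing
"""
print(checkbox_progress(plan))  # -> (2, 3)
```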
## Chat Output Format
### After Each Task:
```
Task [X.Y] Complete: [Task Name]
Changes Made:
- [Specific change 1 with file:line]
- [Specific change 2 with file:line]
Tests Written:
- [Test 1 description]
- [Test 2 description]
Test Results:
✓ All tests passing: `[command]`
✓ [X] tests passed
Manual Verification (if applicable):
- [What needs manual checking]
Proceeding to Task [X.Y+1]...
```
### After Each Phase:
```
Phase [N] Complete: [Phase Name]
Tasks Completed:
- Task [N.1]: [Name] ✓ Code ✓ Tests
- Task [N.2]: [Name] ✓ Code ✓ Tests
- Task [N.3]: [Name] ✓ Code ✓ Tests
All Tests Passing:
✓ Unit tests: [X] passed
✓ Integration tests: [X] passed
✓ All verification steps completed
Manual Verification Needed (if any):
- [ ] [Manual check 1]
- [ ] [Manual check 2]
Phase Summary:
[Brief summary of what this phase accomplished]
Ready to proceed to Phase [N+1]?
```
### If Tests Fail 3 Times:
```
TESTING FAILURE - Need Guidance
Task: [Task number and name]
Issue: Tests have failed 3 times
Test Output:
[Last test failure output]
Attempts Made:
1. [What you tried first]
2. [What you tried second]
3. [What you tried third]
Analysis: [Why you think it's failing]
Options:
1. [Suggested fix approach]
2. Skip tests for this task and continue (not recommended)
3. Revise the plan for this task
How should I proceed?
```
### When Complete:
```
Implementation Complete!
Summary:
- [Number] phases completed
- All tasks from spec/plan.md implemented
- Verification results: [summary]
The project is ready for final review and testing.
Please test the application and verify it meets all requirements from spec/requirements.md.
```
## File Output Format
No new file is created. The command updates `spec/plan.md` by checking off items with `[x]` as they are completed:
```markdown
### Tasks
#### 2.1 Define Data Models
**File**: `src/types/index.ts`
...
**Test Requirements**:
- [x] Write tests to verify type definitions are correct
- [x] Test type safety with valid and invalid data
- [x] Tests pass: `npm run typecheck`
**Success Criteria**:
- [x] All code changes completed
- [x] Types compile without errors
- [x] All entities from design.md are defined
- [x] Type tests written and passing
```

65
plugin.lock.json Normal file

@@ -0,0 +1,65 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:kasperjunge/30-minute-vibe-coding-challenge:plugins/spec-driven-development",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "291d985d721cb8d1393927b191d9f1ae607b06f0",
"treeHash": "e063468430f937f649005ca9c2416d27b9399436aca43d3d9b8b2f71616d499d",
"generatedAt": "2025-11-28T10:19:25.193266Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "spec-driven-development",
"description": "Spec-driven development workflow commands",
"version": "1.1.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "18a0840f1ca345b0de7931de5d679774f09b9a3a6cf42d3360e5b53eb4f96586"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "8f951ee51eccb0511472daf4d127450ef71d60da5f81fd22d279863f110b067c"
},
{
"path": "commands/create_design.md",
"sha256": "7f9c7218a91a6c879a96ce580154ed0cd0a14dbc57dfa6a0e015f33f215659f5"
},
{
"path": "commands/clarify_design.md",
"sha256": "58500be6b3f20fa499b8cdbb650930a7629e7ee8ec28e3e6bdd21dbda5487a07"
},
{
"path": "commands/clarify_requirements.md",
"sha256": "3280563c8b2e7253ce477cbc27ee856ac456b4997a12a76bdb052f8b844d956a"
},
{
"path": "commands/implement_plan.md",
"sha256": "13d1dc96c3e5fe0d3ebf280f866d0d8e896e82d020100f390f4e8bece9a28373"
},
{
"path": "commands/create_requirements.md",
"sha256": "d3aab582dcc3f6e97eb5d1c8f5a5ee8dfc2ada858978b61f3f0a6d4c4d7d54dd"
},
{
"path": "commands/create_plan.md",
"sha256": "95750635aac4c932d0a5b1fd6fec03eb1d1ffcc666dc24022e31f00f05c30dd5"
}
],
"dirSha256": "e063468430f937f649005ca9c2416d27b9399436aca43d3d9b8b2f71616d499d"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}
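The `content.files` entries above pin each file to a SHA-256 digest, so an installed copy can be checked against the lock file. A minimal verification sketch, assuming the lock file sits alongside the plugin files; the `dirSha256`/`treeHash` algorithm is internal to the publishing tool and not reproduced here:

```python
import hashlib
import json
from pathlib import Path


def verify_plugin(root: str, lock_path: str) -> list[str]:
    """Return paths whose on-disk SHA-256 differs from the lock file (empty = intact)."""
    lock = json.loads(Path(lock_path).read_text())
    mismatches = []
    for entry in lock["content"]["files"]:
        file_path = Path(root) / entry["path"]
        digest = hashlib.sha256(file_path.read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            mismatches.append(entry["path"])
    return mismatches
```

For example, `verify_plugin(".", "plugin.lock.json")` returns `[]` when every tracked file matches its recorded digest.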