| allowed-tools | description |
|---|---|
| Bash(cat:*), Bash(test:*), Write | Create implementation task list |
## Context

- Current spec: !`cat spec/.current 2>/dev/null || echo "No active spec"`
- Project context: @CLAUDE.md
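In a Claude Code command file, the `!` prefix runs the backticked shell command and splices its output into the prompt, so the task list is always built against the currently active spec. A minimal sketch of the directory layout this implies, with a hypothetical spec name `refund-reasons`:

```bash
# Assumed layout (the spec name "refund-reasons" is illustrative only):
#
# spec/
# ├── .current              # plain-text file naming the active spec
# └── refund-reasons/
#     ├── spec.md           # input: the spec this command breaks down
#     └── tasks.md          # output: the task list this command writes
cat spec/.current   # prints "refund-reasons", or nothing if no spec is active
```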
## Your Task

### Phase 1: Validation
- Verify that `spec/<current-spec>/spec.md` exists
- If not found, inform the user to run `/spkl:spec` first
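A shell sketch of this check, using only the `cat` and `test` invocations permitted by `allowed-tools` (the variable name and messages are illustrative):

```bash
# Phase 1 equivalent: confirm an active spec exists before planning tasks.
CURRENT=$(cat spec/.current 2>/dev/null)
if [ -z "$CURRENT" ] || ! test -f "spec/$CURRENT/spec.md"; then
  echo "No active spec found. Run /spkl:spec first."
else
  echo "Breaking down spec/$CURRENT/spec.md..."
fi
```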
### Phase 2: Task Breakdown
Translate the spec into concrete, implementable tasks for human developers (not AI execution instructions):
- From Requirements section: Create concrete implementation tasks
- From Dependencies section: Understand context, but don't create "read X" tasks
- From Constraints section: Use to inform task creation, but don't duplicate them as tasks
- From Done criteria: Use to validate task completeness
Structure:
- Organize tasks into logical categories (e.g., "Core Models", "API Layer", "UI Components")
- Number categories sequentially (1, 2, 3, etc.)
- Number tasks within each category using decimal notation (1.1, 1.2, 1.3, etc.)
- Use markdown checkbox notation for each task: `- [ ] 1.1 Task description`
- Keep granularity appropriate (not too high-level, not overly detailed)
- Focus on what needs to be done, not how to do it
Update behavior:
- If `spec/<current-spec>/tasks.md` exists, overwrite it with a fresh task list (developers manually check off completed tasks)
- Save tasks to `spec/<current-spec>/tasks.md`
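For illustration, the shell analogue of this overwrite behavior (the command itself writes the file with the Write tool rather than a redirect):

```bash
# '>' truncates before writing, so an existing tasks.md is replaced wholesale;
# checkboxes ticked in earlier runs are intentionally discarded.
CURRENT=$(cat spec/.current)
printf '# %s: Implementation Tasks\n' "$CURRENT" > "spec/$CURRENT/tasks.md"
```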
## Critical Constraints
- Do not include constraints, dependencies, or success criteria in the task list - those remain in the spec to guide implementation
- Test scenarios/validation may be included as tasks if helpful, but consolidate them into broad validation areas rather than individual test cases
Consolidation strategy for validation tasks:
- Group by system area (e.g., "API versioning", "UI validation", "persistence")
- Group by user flow (e.g., "refund flows", "authentication flows")
- Typically aim for 3-5 validation tasks total, not 7+
- Example: Instead of separate tasks for "test full refund" and "test partial refund", use "Verify refund flows work correctly (full and partial)"
## Guidelines
- Tasks describe what to build, not implementation steps
- Avoid creating meta-tasks like "Read file X" or "Understand Y" (Dependencies section already lists these)
- Each task should be independently understandable and completable
- Tasks should map clearly back to Requirements in the spec
Example format:
# <spec name>: Implementation Tasks
**1. Core Models**
- [ ] 1.1 Create RefundReason enum in transaction domain
- [ ] 1.2 Add refund_reason field to Refund model
**2. API Layer**
- [ ] 2.1 Update RefundRequest schema to include reason
- [ ] 2.2 Add validation for required reason field
**3. UI Components**
- [ ] 3.1 Add reason dropdown to refund form
- [ ] 3.2 Update refund confirmation to display selected reason
**4. Validation**
- [ ] 4.1 Verify refund flows work correctly (full and partial refunds)
- [ ] 4.2 Verify UI validation and display (dropdown, confirmation, history)
- [ ] 4.3 Verify refund reason persistence in transaction logs
## After Completion

- Confirm tasks were written to `spec/<current-spec>/tasks.md`
- Summarize the number of categories and tasks created
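A quick manual spot-check of the output is possible with `grep`, assuming the example format above (the patterns match the checkbox and bold numbered-category conventions; purely illustrative):

```bash
CURRENT=$(cat spec/.current)
TASKS=$(grep -c '^- \[ \]' "spec/$CURRENT/tasks.md")   # unchecked task lines
CATS=$(grep -c '^\*\*[0-9]' "spec/$CURRENT/tasks.md")  # bold numbered categories
echo "$CATS categories, $TASKS tasks."
```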