Initial commit by Zhongwei Li, 2025-11-30 09:06:38 +08:00 (commit ed3e4c84c3). 76 changed files with 20449 additions and 0 deletions.

---
name: managing-bd-tasks
description: Use for advanced bd operations - splitting tasks mid-flight, merging duplicates, changing dependencies, archiving epics, querying metrics, cross-epic dependencies
---
<skill_overview>
Advanced bd operations for managing complex task structures; bd is single source of truth, keep it accurate.
</skill_overview>
<rigidity_level>
HIGH FREEDOM - These are operational patterns, not rigid workflows. Adapt operations to your specific situation while following the core principles (keep bd accurate, merge don't delete, document changes).
</rigidity_level>
<quick_reference>
| Operation | When | Key Command |
|-----------|------|-------------|
| Split task | Task too large mid-flight | Create subtasks, add deps, close parent |
| Merge duplicates | Found duplicate tasks | Combine designs, move deps, close with reference |
| Change dependencies | Dependencies wrong/changed | `bd dep remove` then `bd dep add` |
| Archive epic | Epic complete, hide from views | `bd close bd-X --reason "Archived"` |
| Query metrics | Need status/velocity data | `bd list` + filters + `wc -l` |
| Cross-epic deps | Task depends on other epic | `bd dep add` works across epics |
| Bulk updates | Multiple tasks need same change | Loop with careful review first |
| Recover mistakes | Accidentally closed/wrong dep | `bd update --status` or `bd dep remove` |
**Core principle:** Track all work in bd, update as you go, never batch updates.
</quick_reference>
<when_to_use>
Use this skill for **advanced** bd operations:
- Split task that's too large (discovered mid-implementation)
- Merge duplicate tasks
- Reorganize dependencies after work started
- Archive completed epics (hide from views, keep history)
- Query bd for metrics (velocity, progress, bottlenecks)
- Manage cross-epic dependencies
- Bulk status updates
- Recover from bd mistakes
**For basic operations:** See skills/common-patterns/bd-commands.md (create, show, close, update)
</when_to_use>
<operations>
## Operation 1: Splitting Tasks Mid-Flight
**When:** A task is in progress but turns out to be too large.
**Example:** Started "Implement authentication" - realize it's 8+ hours of work across multiple areas.
**Process:**
### Step 1: Create subtasks for remaining work
```bash
# Original task bd-5 is in-progress
# Already completed: Login form
# Remaining work gets split:
bd create "Auth API endpoints" --type task --priority P1 --design "
POST /api/login and POST /api/logout endpoints.
## Success Criteria
- [ ] POST /api/login validates credentials, returns JWT
- [ ] POST /api/logout invalidates token
- [ ] Tests pass
"
# Returns bd-12
bd create "Session management" --type task --priority P1 --design "
JWT token tracking and validation.
## Success Criteria
- [ ] JWT generated on login
- [ ] Tokens validated on protected routes
- [ ] Token expiration handled
- [ ] Tests pass
"
# Returns bd-13
bd create "Password hashing" --type task --priority P1 --design "
Secure password hashing with bcrypt.
## Success Criteria
- [ ] Passwords hashed before storage
- [ ] Hash verification on login
- [ ] Tests pass
"
# Returns bd-14
```
### Step 2: Set up dependencies
```bash
# Password hashing must be done first
# API endpoints depend on password hashing
bd dep add bd-12 bd-14 # bd-12 depends on bd-14
# Session management depends on API endpoints
bd dep add bd-13 bd-12 # bd-13 depends on bd-12
# View tree
bd dep tree bd-5
```
### Step 3: Update original task and close
```bash
bd edit bd-5 --design "
Implement user authentication.
## Status
✓ Login form completed (frontend)
✗ Remaining work split into subtasks:
- bd-14: Password hashing (do first)
- bd-12: Auth API endpoints (depends on bd-14)
- bd-13: Session management (depends on bd-12)
## Success Criteria
- [x] Login form renders
- [ ] See subtasks for remaining criteria
"
bd close bd-5 --reason "Split into bd-12, bd-13, bd-14"
```
### Step 4: Work on subtasks in order
```bash
bd ready # Shows bd-14 (no dependencies)
bd update bd-14 --status in_progress
# Complete bd-14...
bd close bd-14
# Now bd-12 is unblocked
bd ready # Shows bd-12
```
---
## Operation 2: Merging Duplicate Tasks
**When:** You discover two tasks describe the same work.
**Example:**
```
bd-7: "Add email validation"
bd-9: "Validate user email addresses"
^ Duplicates
```
### Step 1: Choose which to keep
Based on:
- Which has more complete design?
- Which has more work done?
- Which has more dependencies?
**Example:** Keep bd-7 (more complete)
### Step 2: Merge designs
```bash
bd show bd-7
bd show bd-9
# Combine into bd-7
bd edit bd-7 --design "
Add email validation to user creation and update.
## Background
Originally tracked as bd-7 and bd-9 (now merged).
## Success Criteria
- [ ] Email validated on creation
- [ ] Email validated on update
- [ ] Rejects invalid formats
- [ ] Rejects empty strings
- [ ] Tests cover all cases
## Notes from bd-9
Need validation on update, not just creation.
"
```
### Step 3: Move dependencies
```bash
# Check bd-9 dependencies
bd show bd-9
# If bd-10 depended on bd-9, update to bd-7
bd dep remove bd-10 bd-9
bd dep add bd-10 bd-7
```
### Step 4: Close duplicate with reference
```bash
bd edit bd-9 --design "DUPLICATE: Merged into bd-7
This task was duplicate of bd-7. All work tracked there."
bd close bd-9
```
---
## Operation 3: Changing Dependencies
**When:** Dependencies were wrong or requirements changed.
**Example:** bd-10 depends on bd-8 and bd-9, but bd-9 got merged and bd-10 now also needs bd-11.
```bash
# Remove obsolete dependency
bd dep remove bd-10 bd-9
# Add new dependency
bd dep add bd-10 bd-11
# Verify
bd dep tree bd-1 # If bd-10 in epic bd-1
bd show bd-10 | grep "Blocking"
```
**Common scenarios:**
- Discovered hidden dependency during implementation
- Requirements changed mid-flight
- Tasks reordered for better flow
---
## Operation 4: Archiving Completed Epics
**When:** An epic is complete and you want to hide it from default views while keeping its history.
```bash
# Verify all tasks closed
bd list --parent bd-1 --status open
# Output: [empty] = all closed
# Archive epic
bd close bd-1 --reason "Archived - completed Oct 2025"
# Won't show in open listings
bd list --status open # bd-1 won't appear
# Still accessible
bd show bd-1 # Still shows full epic
```
**Use archived for:** Completed epics, shipped features, historical reference
**Use open/in-progress for:** Active work
**Use closed with note for:** Cancelled work (explain why)
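Cancelled work follows the same close-with-reason pattern; record the rationale in the design first so the audit trail explains itself. A sketch (bd-7 here is a hypothetical cancelled task):
```bash
# Capture WHY it was cancelled before closing
bd edit bd-7 --design "CANCELLED: Requirement dropped from v2 scope.
No replacement task; see epic notes for context."
bd close bd-7 --reason "Cancelled - requirement dropped from v2 scope"
```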
---
## Operation 5: Querying for Metrics
### Velocity
```bash
# Tasks closed this week
bd list --status closed | grep "closed_at" | grep "2025-10-" | wc -l
# Tasks closed by epic
bd list --parent bd-1 --status closed | wc -l
```
### Blocked vs Ready
```bash
# Ready to work on
bd ready
bd ready | grep "^bd-" | wc -l
# All open tasks
bd list --status open | wc -l
# Blocked = open - ready
```
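The subtraction in the last comment can be done directly in the shell. A minimal sketch with the two counts hardcoded (in practice, capture them from the commands above):

```bash
# Counts hardcoded for illustration; in practice:
#   open=$(bd list --status open | grep -c "^bd-")
#   ready=$(bd ready | grep -c "^bd-")
open=12
ready=5
blocked=$((open - ready))
echo "blocked=$blocked"   # prints blocked=7
```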
### Epic Progress
```bash
# Show tree
bd dep tree bd-1
# Total tasks in epic
bd list --parent bd-1 | grep "^bd-" | wc -l
# Completed tasks
bd list --parent bd-1 --status closed | grep "^bd-" | wc -l
# Percentage = (completed / total) * 100
```
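Likewise, the percentage can be computed with shell integer arithmetic (counts hardcoded for illustration; in practice they come from the two `wc -l` pipelines above):

```bash
total=8      # from: bd list --parent bd-1 | grep -c "^bd-"
completed=6  # from: bd list --parent bd-1 --status closed | grep -c "^bd-"
pct=$((completed * 100 / total))   # integer division
echo "${pct}% complete"            # prints 75% complete
```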
**For detailed metrics guidance:** See [resources/metrics-guide.md](resources/metrics-guide.md)
---
## Operation 6: Cross-Epic Dependencies
**When:** Task in one epic depends on task in different epic.
**Example:**
```
Epic bd-1: User Management
- bd-10: User CRUD API
Epic bd-2: Order Management
- bd-20: Order creation (needs user API)
```
```bash
# Add cross-epic dependency
bd dep add bd-20 bd-10
# bd-20 (in bd-2) depends on bd-10 (in bd-1)
# Check dependencies
bd show bd-20 | grep "Blocking"
# Check ready tasks
bd ready
# Won't show bd-20 until bd-10 closed
```
**Best practices:**
- Document cross-epic dependencies clearly
- Consider if epics should be merged
- Coordinate if different people own epics
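For the first practice, one option is to record the foreign-epic dependency in the dependent task's design so it is visible without running `bd dep tree`. A sketch using the bd-20/bd-10 example above (the wording is illustrative):
```bash
bd edit bd-20 --design "
Order creation.
## Dependencies
- bd-10: User CRUD API (CROSS-EPIC: lives in epic bd-1; coordinate with
  its owner before changing scope)
## Success Criteria
- [ ] Orders created via user-facing API
- [ ] Tests pass
"
```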
---
## Operation 7: Bulk Status Updates
**When:** Need to update multiple tasks.
**Example:** Mark all test tasks closed after suite complete.
```bash
# Get matching task IDs (assumes the ID is the first field of each list line)
bd list --parent bd-1 --status open | grep "test:" | awk '{print $1}' > test-tasks.txt
# Review the list BEFORE updating anything
cat test-tasks.txt
# Update each
while read -r task_id; do
  bd close "$task_id"
done < test-tasks.txt
# Verify
bd list --parent bd-1 --status open | grep "test:"
```
**Use bulk for:**
- Marking completed work closed
- Reopening related tasks
- Updating priorities
**Never bulk:**
- Thoughtless changes
- Hiding problems (closing unfinished tasks)
---
## Operation 8: Recovering from Mistakes
### Accidentally closed task
```bash
bd update bd-15 --status open
# Or if was in progress
bd update bd-15 --status in_progress
```
### Wrong dependency
```bash
bd dep remove bd-10 bd-8 # Remove wrong
bd dep add bd-10 bd-9 # Add correct
```
### Undo design changes
```bash
# bd has no undo, restore from git
git log -p -- .beads/issues.jsonl | grep -A 50 "bd-10"
# Find previous version, copy
bd edit bd-10 --design "[paste previous]"
```
### Epic structure wrong
1. Create new tasks with correct structure
2. Move work to new tasks
3. Close old tasks with reference
4. Don't delete (keep audit trail)
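The four steps might look like this in practice (bd-30 and bd-31 are hypothetical replacement tasks for a mis-structured bd-10):
```bash
# 1. Create tasks with the correct structure
bd create "User API (restructured)" --type task --priority P1   # returns bd-30
bd create "User UI (restructured)" --type task --priority P1    # returns bd-31
bd dep add bd-31 bd-30
# 2-3. Carry remaining work over, then close the old task with a reference
bd close bd-10 --reason "Restructured into bd-30, bd-31"
# 4. No deletion: bd show bd-10 still returns the full history
```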
</operations>
<examples>
<example>
<scenario>Developer closes duplicate without merging information</scenario>
<code>
# Found duplicates
bd-7: "Add email validation"
bd-9: "Validate user email addresses"
# Developer just closes bd-9
bd close bd-9
# Loses information from bd-9's design
# bd-9 mentioned validation on update (bd-7 didn't)
# Now that requirement is lost
# Work on bd-7 completes, but misses update validation
# Bug ships to production
</code>
<why_it_fails>
- Closed duplicate without reading its design
- Lost requirement mentioned only in duplicate
- Information not preserved
- Incomplete implementation ships
- bd not accurate source of truth
</why_it_fails>
<correction>
**Correct process:**
```bash
# Read BOTH tasks
bd show bd-7 # Only mentions validation on creation
bd show bd-9 # Mentions validation on update too
# Merge information
bd edit bd-7 --design "
Email validation for user creation and update.
## Background
Merged from bd-9.
## Success Criteria
- [ ] Validate on creation (from bd-7)
- [ ] Validate on update (from bd-9) ← Preserved!
- [ ] Tests for both cases
"
# Then close duplicate with reference
bd edit bd-9 --design "DUPLICATE: Merged into bd-7"
bd close bd-9
```
**What you gain:**
- All requirements preserved
- bd remains accurate
- No information lost
- Complete implementation
- Audit trail clear
</correction>
</example>
<example>
<scenario>Developer doesn't split large task, struggles through</scenario>
<code>
bd-15: "Implement payment processing" (started)
# 3 hours in, developer realizes:
# - Need Stripe API integration (4 hours)
# - Need payment validation (2 hours)
# - Need retry logic (3 hours)
# - Need receipt generation (2 hours)
# Total: 11 more hours!
# Developer thinks: "Too late to split, I'll power through"
# Works 14 hours straight
# Gets exhausted, makes mistakes
# Ships buggy code
# Has to fix in production
</code>
<why_it_fails>
- Didn't split when discovered size
- "Sunk cost" rationalization (already started)
- No clear stopping points
- Exhaustion leads to bugs
- Can't track progress granularly
- If interrupted, hard to resume
</why_it_fails>
<correction>
**Correct approach (split mid-flight):**
```bash
# 3 hours in, stop and split
bd edit bd-15 --design "
Implement payment processing.
## Status
✓ Completed: Payment form UI (3 hours)
✗ Split remaining work into subtasks:
- bd-20: Stripe API integration
- bd-21: Payment validation
- bd-22: Retry logic
- bd-23: Receipt generation
"
bd close bd-15 --reason "Split into bd-20, bd-21, bd-22, bd-23"
# Create subtasks with dependencies
bd create "Stripe API integration" ... # bd-20
bd create "Payment validation" ... # bd-21
bd create "Retry logic" ... # bd-22
bd create "Receipt generation" ... # bd-23
bd dep add bd-21 bd-20 # Validation needs API
bd dep add bd-22 bd-20 # Retry needs API
bd dep add bd-23 bd-22 # Receipts after retry works
# Work on one at a time
bd update bd-20 --status in_progress
# Complete bd-20 (4 hours)
bd close bd-20
# Take break
# Next day: bd-21
```
**What you gain:**
- Clear stopping points (can pause between tasks)
- Track progress granularly
- No exhaustion (spread over days)
- Better quality (not rushed)
- If interrupted, easy to resume
- Each subtask gets proper focus
</correction>
</example>
<example>
<scenario>Developer adds dependency but doesn't update dependent task</scenario>
<code>
# Initial state
bd-10: "Add user dashboard" (in progress)
bd-15: "Add analytics to dashboard" (blocked on bd-10)
# During bd-10 implementation, discover need for new API
bd create "Analytics API endpoints" ... # Creates bd-20
# Add dependency
bd dep add bd-15 bd-20 # bd-15 now depends on bd-20 too
# But bd-10 completes, closes
bd close bd-10
# bd-15 shows as ready (bd-10 closed)
bd ready # Shows bd-15
# Developer starts bd-15
bd update bd-15 --status in_progress
# Immediately blocked - needs bd-20!
# bd-20 not done yet
# Have to stop work on bd-15
# Time wasted
</code>
<why_it_fails>
- Added dependency but didn't document in bd-15
- bd-15's design doesn't mention bd-20 requirement
- Appears ready when not actually ready
- Wastes time starting work that's blocked
- Dependencies not obvious from task design
</why_it_fails>
<correction>
**Correct approach:**
```bash
# Create new API task
bd create "Analytics API endpoints" ... # bd-20
# Add dependency
bd dep add bd-15 bd-20
# UPDATE bd-15 to document new requirement
bd edit bd-15 --design "
Add analytics to dashboard.
## Dependencies
- bd-10: User dashboard (completed)
- bd-20: Analytics API endpoints (NEW - discovered during bd-10)
## Success Criteria
- [ ] Integrate with analytics API (bd-20)
- [ ] Display charts on dashboard
- [ ] Tests pass
"
# Close bd-10
bd close bd-10
# Check ready
bd ready # Does NOT show bd-15 (blocked on bd-20)
# Work on bd-20 first
bd update bd-20 --status in_progress
# Complete bd-20
bd close bd-20
# NOW bd-15 is truly ready
bd ready # Shows bd-15
```
**What you gain:**
- Dependencies documented in task design
- Clear why task is blocked
- No false "ready" signals
- Work proceeds in correct order
- No wasted time starting blocked work
</correction>
</example>
</examples>
<critical_rules>
## Rules That Have No Exceptions
1. **Keep bd accurate** → Single source of truth for all work
2. **Merge duplicates, don't just close** → Preserve information from both
3. **Split large tasks when discovered** → Not after struggling through
4. **Document dependency changes** → Update task designs when deps change
5. **Update as you go** → Never batch updates "for later"
## Common Excuses
All of these mean: **STOP. Follow the operation properly.**
- "Task too complex to split" (Every task can be broken down)
- "Just close duplicate" (Merge first, preserve information)
- "Won't track this in bd" (All work tracked, no exceptions)
- "bd is out of date, update later" (Later never comes, update now)
- "This dependency doesn't matter" (Dependencies prevent blocking, they matter)
- "Too much overhead to split" (More overhead to fail huge task)
</critical_rules>
<bd_best_practices>
**For detailed guidance on:**
- Task naming conventions
- Priority guidelines (P0-P4)
- Task granularity
- Success criteria
- Dependency management
**See:** [resources/task-naming-guide.md](resources/task-naming-guide.md)
</bd_best_practices>
<red_flags>
Watch for these patterns:
- **Multiple in-progress tasks** → Focus on one
- **Tasks stuck in-progress for days** → Blocked? Split it?
- **Many open tasks, no dependencies** → Prioritize!
- **Epics with 20+ tasks** → Too large, split epic
- **Closed tasks, incomplete criteria** → Not done, reopen
</red_flags>
<verification_checklist>
After advanced bd operations:
- [ ] bd still accurate (reflects reality)
- [ ] Dependencies correct (nothing blocked incorrectly)
- [ ] Duplicate information merged (not lost)
- [ ] Changes documented in task designs
- [ ] Ready tasks are actually unblocked
- [ ] Metrics queries return sensible numbers
- [ ] No orphaned tasks (all part of epics)
**Can't check all boxes?** Review operation and fix issues.
</verification_checklist>
<integration>
**This skill covers:** Advanced bd operations
**For basic operations:**
- skills/common-patterns/bd-commands.md
**Related skills:**
- hyperpowers:writing-plans (creating epics and tasks)
- hyperpowers:executing-plans (working through tasks)
- hyperpowers:verification-before-completion (closing tasks properly)
**CRITICAL:** Use bd CLI commands, never read `.beads/issues.jsonl` directly.
</integration>
<resources>
**Detailed guides:**
- [Metrics guide (cycle time, WIP limits)](resources/metrics-guide.md)
- [Task naming conventions](resources/task-naming-guide.md)
- [Dependency patterns](resources/dependency-patterns.md)
**When stuck:**
- Task seems unsplittable → Ask user how to break it down
- Duplicates complex → Merge designs carefully, don't rush
- Dependencies tangled → Draw diagram, untangle systematically
- bd out of sync → Stop everything, update bd first
</resources>

# bd Metrics Guide
This guide covers the key metrics for tracking work in bd.
## Cycle Time vs. Lead Time
**Two distinct time measurements:**
### Cycle Time
- **Definition**: Time from "work started" to "work completed"
- **Start**: When task moves to "in-progress" status
- **End**: When task moves to "closed" status
- **Measures**: How efficiently work flows through active development
- **Use**: Identify process inefficiencies, improve development speed
```bash
# Calculate cycle time for completed task
bd show bd-5 | grep "status.*in-progress" # Get start time
bd show bd-5 | grep "status.*closed" # Get end time
# Difference = cycle time
```
### Lead Time
- **Definition**: Time from "request created" to "delivered to customer"
- **Start**: When task is created (enters backlog)
- **End**: When task is deployed/delivered
- **Measures**: Overall responsiveness to requests
- **Use**: Set realistic expectations, measure total process duration
```bash
# Calculate lead time for completed task
bd show bd-5 | grep "created_at" # Get creation time
bd show bd-5 | grep "deployed_at" # Get deployment time (if tracked)
# Difference = lead time
```
### Key Differences
| Metric | Starts | Ends | Includes Waiting? | Measures |
|--------|--------|------|-------------------|----------|
| **Cycle Time** | In-progress | Closed | No | Development efficiency |
| **Lead Time** | Created | Deployed | Yes | Total responsiveness |
### Example
```
Task created: Monday 9am (enters backlog)
↓ [waits 2 days]
Task started: Wednesday 9am (moved to in-progress)
↓ [active work]
Task completed: Wednesday 5pm (moved to closed)
↓ [waits for deployment]
Task deployed: Thursday 2pm (delivered)
Cycle Time: 8 hours (Wednesday 9am → 5pm)
Lead Time: 3 days, 5 hours (Monday 9am → Thursday 2pm)
```
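Given two timestamps like those above, the gap can be computed with GNU `date` (the timestamp format is an assumption; adjust to whatever `bd show` actually prints):

```bash
# Cycle time in whole hours between two timestamps (GNU date)
started="2025-10-08 09:00:00"
closed="2025-10-08 17:00:00"
start_s=$(date -d "$started" +%s)
end_s=$(date -d "$closed" +%s)
hours=$(( (end_s - start_s) / 3600 ))
echo "cycle time: $hours hours"   # prints cycle time: 8 hours
```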
### Why Both Matter
- **Short cycle time, long lead time**: Work is efficient once started, but tasks wait too long in backlog
- Fix: Reduce WIP, start fewer tasks, finish faster
- **Long cycle time, short lead time**: Work starts immediately but takes forever to complete
- Fix: Split tasks smaller, remove blockers, improve focus
- **Both long**: Overall process is slow
- Fix: Address both backlog management AND development efficiency
### Tracking Over Time
```bash
# Average cycle time (manual calculation)
# For each closed task: (closed_at - started_at)
# Sum and divide by task count
# Trend analysis
# Week 1: Avg cycle time = 3 days
# Week 2: Avg cycle time = 2 days ✅ Improving
# Week 3: Avg cycle time = 4 days ❌ Getting worse
```
### Improvement Targets
- **Cycle time**: Reduce by splitting tasks, removing blockers, improving focus
- **Lead time**: Reduce by prioritizing backlog, reducing WIP, faster deployment
## Work in Progress (WIP)
```bash
# All in-progress tasks
bd list --status in_progress
# Count
bd list --status in_progress | grep "^bd-" | wc -l
```
### WIP Limits
Work in Progress limits prevent overcommitment and identify bottlenecks.
**Setting WIP limits:**
- **Personal WIP limit**: 1-2 tasks in-progress at a time
- **Team WIP limit**: Depends on team size and workflow stages
- **Rule of thumb**: WIP limit = (Team size ÷ 2) + 1
**Example for individual developer:**
```
✅ Good: 1 task in-progress, 0-1 in code review
❌ Bad: 5 tasks in-progress simultaneously
```
**Example for team of 6:**
```
Workflow stages and limits:
- Backlog: Unlimited
- Ready: 8 items max
- In Progress: 4 items max (team size ÷ 2 + 1)
- Code Review: 3 items max
- Testing: 2 items max
- Done: Unlimited
```
### Why WIP Limits Matter
1. **Focus:** Fewer tasks means deeper focus, faster completion
2. **Flow:** Prevents bottlenecks from accumulating
3. **Quality:** Less context switching, fewer mistakes
4. **Visibility:** High WIP indicates blocked work or overcommitment
### Monitoring WIP
```bash
# Check personal WIP
bd list --status in_progress | grep "assignee:me" | wc -l
# If > 2: Focus on finishing before starting new work
```
### Red Flags
- WIP consistently at or above limit (need more capacity or smaller tasks)
- WIP growing week-over-week (work piling up, not finishing)
- WIP high but velocity low (tasks blocked or too large)
### Response to High WIP
1. Finish existing tasks before starting new ones
2. Identify and remove blockers
3. Split large tasks
4. Add capacity (if chronically high)
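Step 1 can be enforced with a small guard script. A sketch with the in-progress count hardcoded (the commented pipeline shows where it would come from):

```bash
limit=2   # personal WIP limit
# In practice: wip=$(bd list --status in_progress | grep -c "^bd-")
wip=4
if [ "$wip" -gt "$limit" ]; then
  msg="WIP=$wip exceeds limit=$limit: finish work before starting more"
else
  msg="WIP=$wip within limit=$limit"
fi
echo "$msg"
```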
## Bottleneck Identification
```bash
# Find tasks that are blocking others
# (Tasks that many other tasks depend on)
for task in $(bd list --status open | grep "^bd-" | cut -d: -f1); do
  count=0
  for other in $(bd list --status open | grep "^bd-" | cut -d: -f1); do
    [ "$other" = "$task" ] && continue
    # Assumes "bd show" prints dependency lines containing "depends on bd-N";
    # -w prevents bd-1 from also matching bd-10
    bd show "$other" | grep -qw "depends on $task" && count=$((count + 1))
  done
  echo "$task: $count"
done | sort -t: -k2 -n -r
# Shows tasks with the most open dependents first (top bottlenecks)
```

# bd Task Naming and Quality Guidelines
This guide covers best practices for naming tasks, setting priorities, sizing work, and defining success criteria.
## Task Naming Conventions
### Principles
- **Actionable**: Start with action verbs (add, fix, update, remove, refactor, implement)
- **Specific**: Include enough context to understand without opening
- **Consistent**: Follow project-wide templates
### Templates by Task Type
#### User Stories
**Template:**
```
As a [persona], I want [something] so that [reason]
```
**Examples:**
```
As a customer, I want one-click checkout so that I can purchase quickly
As an admin, I want bulk user import so that I can onboard teams efficiently
As a developer, I want API rate limiting so that I can prevent abuse
```
**When to use:** Features from user perspective
#### Bug Reports
**Template 1 (Capability-focused):**
```
[User type] can't [action they should be able to do]
```
**Examples:**
```
New users can't view home screen after signup
Admin users can't export user data to CSV
Guest users can't add items to cart
```
**Template 2 (Event-focused):**
```
When [action/event], [system feature] doesn't work
```
**Examples:**
```
When clicking Submit, payment form doesn't validate
When uploading large files, progress bar freezes
When session expires, user isn't redirected to login
```
**When to use:** Describing broken functionality
#### Tasks (Implementation Work)
**Template:**
```
[Verb] [object] [context]
```
**Examples:**
```
feat(auth): Implement JWT token generation
fix(api): Handle empty email validation in user endpoint
test: Add integration tests for payment flow
refactor: Extract validation logic from UserService
docs: Update API documentation for v2 endpoints
```
**When to use:** Technical implementation tasks
#### Features (High-Level Capabilities)
**Template:**
```
[Verb] [capability] for [user/system]
```
**Examples:**
```
Add dark mode toggle for Settings page
Implement rate limiting for API endpoints
Enable two-factor authentication for admin users
Build export functionality for report data
```
**When to use:** Feature-level work (may become epic with multiple tasks)
### Context Guidelines
- **Which component**: "in login flow", "for user API", "in Settings page"
- **Which user type**: "for admins", "for guests", "for authenticated users"
- **Avoid jargon** in user stories (user perspective, not technical)
- **Be specific** in technical tasks (exact API, file, function)
### Good vs Bad Names
**Good names:**
- `feat(auth): Implement JWT token generation`
- `fix(api): Handle empty email validation in user endpoint`
- `As a customer, I want CSV export so that I can analyze my data`
- `test: Add integration tests for payment flow`
- `refactor: Extract validation logic from UserService`
**Bad names:**
- `fix stuff` (vague - what stuff?)
- `implement feature` (vague - which feature?)
- `work on backend` (vague - what work?)
- `Report` (noun, not action - should be "Generate Q4 Sales Report")
- `API endpoint` (incomplete - "Add GET /users endpoint" better)
## Priority Guidelines
Use bd's priority system consistently:
- **P0:** Critical production bug (drop everything)
- **P1:** Blocking other work (do next)
- **P2:** Important feature work (normal priority)
- **P3:** Nice to have (do when time permits)
- **P4:** Someday/maybe (backlog)
## Granularity Guidelines
**Good task size:**
- 2-4 hours of focused work
- Can complete in one sitting
- Clear deliverable
**Too large:**
- Takes multiple days
- Multiple independent pieces
- Should be split
**Too small:**
- Takes 15 minutes
- Too granular to track
- Combine with related tasks
## Success Criteria: Acceptance Criteria vs. Definition of Done
**Two distinct types of completion criteria:**
### Acceptance Criteria (Per-Task, Functional)
**Definition:** Specific, measurable requirements unique to each task that define functional completeness from user/business perspective.
**Scope:** Unique to each backlog item (bug, task, story)
**Purpose:** "Does this feature work correctly?"
**Owner:** Product owner/stakeholder defines, team validates
**Format:** Checklist or scenarios
```markdown
## Acceptance Criteria
- [ ] User can upload CSV files up to 10MB
- [ ] System validates CSV format before processing
- [ ] User sees progress bar during upload
- [ ] User receives success message with row count
- [ ] Invalid files show specific error messages
```
**Scenario format (Given/When/Then):**
```markdown
## Acceptance Criteria
Scenario 1: Valid file upload
Given a user is on the upload page
When they select a valid CSV file
Then the file uploads successfully
And they see confirmation with row count
Scenario 2: Invalid file format
Given a user selects a non-CSV file
When they try to upload
Then they see error: "Only CSV files supported"
```
### Definition of Done (Universal, Quality)
**Definition:** Universal checklist that applies to ALL work items to ensure consistent quality and release-readiness.
**Scope:** Applies to every single task (bugs, features, stories)
**Purpose:** "Is this work complete to our quality standards?"
**Owner:** Team defines and maintains (reviewed in retrospectives)
**Example DoD:**
```markdown
## Definition of Done (applies to all tasks)
- [ ] Code written and peer-reviewed
- [ ] Unit tests written and passing (>80% coverage)
- [ ] Integration tests passing
- [ ] No linter warnings
- [ ] Documentation updated (if public API)
- [ ] Manual testing completed (if UI)
- [ ] Deployed to staging environment
- [ ] Product owner accepted
- [ ] Commit references bd task ID
```
### Key Differences
| Aspect | Acceptance Criteria | Definition of Done |
|--------|--------------------|--------------------|
| **Scope** | Per-task (unique) | All tasks (universal) |
| **Focus** | Functional requirements | Quality standards |
| **Question** | "Does it work?" | "Is it done?" |
| **Owner** | Product owner | Team |
| **Changes** | Per task | Rarely (retrospectives) |
| **Examples** | "User can export data" | "Tests pass, code reviewed" |
### How to Use Both
**When creating a task:**
1. **Define Acceptance Criteria** (task-specific functional requirements)
2. **Reference Definition of Done** (don't duplicate it in task)
```markdown
bd create "Implement CSV file upload" --design "
## Acceptance Criteria
- [ ] User can upload CSV files up to 10MB
- [ ] System validates CSV format
- [ ] Progress bar shows during upload
- [ ] Success message displays row count
## Notes
Must also meet team's Definition of Done (see project wiki)
"
```
**Before closing a task:**
1. ✅ Verify all Acceptance Criteria met (functional)
2. ✅ Verify Definition of Done met (quality)
3. Only then close task
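The acceptance-criteria half of that check can be automated by scanning the design for unchecked boxes. A sketch where a here-doc stands in for real `bd show` output (whose exact format is an assumption):

```bash
# Fail the pre-close check if the design still contains unchecked "- [ ]" boxes
design=$(cat <<'EOF'
## Acceptance Criteria
- [x] User can upload CSV files up to 10MB
- [ ] Progress bar shows during upload
EOF
)
if printf '%s\n' "$design" | grep -q -- '- \[ \]'; then
  verdict="unchecked criteria remain - do not close"
else
  verdict="all criteria checked"
fi
echo "$verdict"
```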
**Bad practice:**
```markdown
## Success Criteria
- [ ] CSV upload works
- [ ] Tests pass ← This is DoD, not acceptance criteria
- [ ] Code reviewed ← This is DoD, not acceptance criteria
- [ ] No linter warnings ← This is DoD, not acceptance criteria
```
**Good practice:**
```markdown
## Acceptance Criteria (functional, task-specific)
- [ ] CSV upload handles files up to 10MB
- [ ] Validation rejects non-CSV formats
- [ ] Progress bar updates during upload
## Definition of Done (quality, universal - referenced, not duplicated)
See team DoD checklist (applies to all tasks)
```
## Dependency Management
**Good dependency usage:**
- Technical dependency (feature B needs feature A's code)
- Clear ordering (must do A before B)
- Unblocks work (completing A unblocks B)
**Bad dependency usage:**
- "Feels like should be done first" (vague)
- No technical relationship (just preference)
- Circular dependencies (A depends on B depends on A)