Initial commit

Zhongwei Li · 2025-11-29 18:28:37 +08:00 · commit ccc65b3f07
180 changed files with 53970 additions and 0 deletions

# Brief Template
## Greenfield Brief (v1.0)
Copy and fill this structure for `.planning/BRIEF.md` when starting a new project:
```markdown
# [Project Name]
**One-liner**: [What this is in one sentence]
## Problem
[What problem does this solve? Why does it need to exist?
2-3 sentences max.]
## Success Criteria
How we know it worked:
- [ ] [Measurable outcome 1]
- [ ] [Measurable outcome 2]
- [ ] [Measurable outcome 3]
## Constraints
[Any hard constraints: tech stack, timeline, budget, dependencies]
- [Constraint 1]
- [Constraint 2]
## Out of Scope
What we're NOT building (prevents scope creep):
- [Not doing X]
- [Not doing Y]
```
<guidelines>
- Keep under 50 lines
- Success criteria must be measurable/verifiable
- Out of scope prevents "while we're at it" creep
- This is the ONLY human-focused document
</guidelines>
## Brownfield Brief (v1.1+)
After shipping v1.0, update BRIEF.md to include current state:
```markdown
# [Project Name]
## Current State (Updated: YYYY-MM-DD)
**Shipped:** v[X.Y] [Name] (YYYY-MM-DD)
**Status:** [Production / Beta / Internal / Live with users]
**Users:** [If known: "~500 downloads, 50 DAU" or "Internal use only" or "N/A"]
**Feedback:** [Key themes from user feedback, or "Initial release, gathering feedback"]
**Codebase:**
- [X,XXX] lines of [primary language]
- [Key tech stack: framework, platform, deployment target]
- [Notable dependencies or architecture]
**Known Issues:**
- [Issue 1 from v1.x that needs addressing]
- [Issue 2]
- [Or "None" if clean slate]
## v[Next] Goals
**Vision:** [What's the goal for this next iteration?]
**Motivation:**
- [Why this work matters now]
- [User feedback driving it]
- [Technical debt or improvements needed]
**Scope (v[X.Y]):**
- [Feature/improvement 1]
- [Feature/improvement 2]
- [Feature/improvement 3]
**Success Criteria:**
- [ ] [Measurable outcome 1]
- [ ] [Measurable outcome 2]
- [ ] [Measurable outcome 3]
**Out of Scope:**
- [Not doing X in this version]
- [Not doing Y in this version]
---
<details>
<summary>Original Vision (v1.0 - Archived for reference)</summary>
**One-liner**: [What this is in one sentence]
## Problem
[What problem does this solve? Why does it need to exist?]
## Success Criteria
How we know it worked:
- [x] [Outcome 1] - Achieved
- [x] [Outcome 2] - Achieved
- [x] [Outcome 3] - Achieved
## Constraints
- [Constraint 1]
- [Constraint 2]
## Out of Scope
- [Not doing X]
- [Not doing Y]
</details>
```
<brownfield_guidelines>
**When to update BRIEF:**
- After completing each milestone (v1.0 → v1.1 → v2.0)
- When starting new phases after a shipped version
- Use `complete-milestone.md` workflow to update systematically
**Current State captures:**
- What shipped (version, date)
- Real-world status (production, beta, etc.)
- User metrics (if applicable)
- User feedback themes
- Codebase stats (LOC, tech stack)
- Known issues needing attention
**Next Goals captures:**
- Vision for next version
- Why now (motivation)
- What's in scope
- What's measurable
- What's explicitly out
**Original Vision:**
- Collapsed in `<details>` tag
- Reference for "where we came from"
- Shows evolution of product thinking
- Checkboxes marked [x] for achieved goals
This structure makes all new plans brownfield-aware automatically, because they read BRIEF and see:
- "v1.0 shipped"
- "2,450 lines of existing Swift code"
- "Users reporting X, requesting Y"
As a result, plans naturally reference existing files in @context.
</brownfield_guidelines>

# Continue-Here Template
Copy and fill this structure for `.planning/phases/XX-name/.continue-here.md`:
```yaml
---
phase: XX-name
task: 3
total_tasks: 7
status: in_progress
last_updated: 2025-01-15T14:30:00Z
---
```
```markdown
<current_state>
[Where exactly are we? What's the immediate context?]
</current_state>
<completed_work>
[What got done this session - be specific]
- Task 1: [name] - Done
- Task 2: [name] - Done
- Task 3: [name] - In progress, [what's done on it]
</completed_work>
<remaining_work>
[What's left in this phase]
- Task 3: [name] - [what's left to do]
- Task 4: [name] - Not started
- Task 5: [name] - Not started
</remaining_work>
<decisions_made>
[Key decisions and why - so next session doesn't re-debate]
- Decided to use [X] because [reason]
- Chose [approach] over [alternative] because [reason]
</decisions_made>
<blockers>
[Anything stuck or waiting on external factors]
- [Blocker 1]: [status/workaround]
</blockers>
<context>
[Mental state, "vibe", anything that helps resume smoothly]
[What were you thinking about? What was the plan?
This is the "pick up exactly where you left off" context.]
</context>
<next_action>
[The very first thing to do when resuming]
Start with: [specific action]
</next_action>
```
<yaml_fields>
Required YAML frontmatter:
- `phase`: Directory name (e.g., `02-authentication`)
- `task`: Current task number
- `total_tasks`: How many tasks in phase
- `status`: `in_progress`, `blocked`, `almost_done`
- `last_updated`: ISO timestamp (a sketch for generating one follows below)
</yaml_fields>
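A minimal sketch for filling the `last_updated` field, assuming a POSIX shell with GNU or BSD `date`:

```bash
# Print the current UTC time as an ISO 8601 timestamp for `last_updated`
date -u +"%Y-%m-%dT%H:%M:%SZ"
# e.g. 2025-01-15T14:30:00Z
```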
<guidelines>
- Be specific enough that a fresh Claude instance understands immediately
- Include WHY decisions were made, not just what
- The `<next_action>` should be actionable without reading anything else
- This file gets DELETED after resume - it's not permanent storage
</guidelines>

# ISSUES.md Template
This file is auto-created when Rule 5 (Log non-critical enhancements) is first triggered during execution.
Location: `.planning/ISSUES.md`
```markdown
# Project Issues Log
Non-critical enhancements discovered during execution. Address in future phases when appropriate.
## Open Enhancements
### ISS-001: [Brief description]
- **Discovered:** Phase [X] Plan [Y] Task [Z] (YYYY-MM-DD)
- **Type:** [Performance / Refactoring / UX / Testing / Documentation / Accessibility]
- **Description:** [What could be improved and why it would help]
- **Impact:** Low (works correctly today; this would be an enhancement)
- **Effort:** [Quick (<1hr) / Medium (1-4hr) / Substantial (>4hr)]
- **Suggested phase:** [Phase number where this makes sense, or "Future"]
### ISS-002: Add connection pooling for Redis
- **Discovered:** Phase 2 Plan 3 Task 6 (2025-11-23)
- **Type:** Performance
- **Description:** Redis client creates new connection per request. Connection pooling would reduce latency and handle connection failures better. Currently works but suboptimal under load.
- **Impact:** Low (works correctly, ~20ms overhead per request)
- **Effort:** Medium (2-3 hours - need to configure ioredis pool, test connection reuse)
- **Suggested phase:** Phase 5 (Performance optimization)
### ISS-003: Refactor UserService into smaller modules
- **Discovered:** Phase 1 Plan 2 Task 3 (2025-11-22)
- **Type:** Refactoring
- **Description:** UserService has grown to 400 lines with mixed concerns (auth, profile, settings). Would be cleaner as separate services (AuthService, ProfileService, SettingsService). Currently works but harder to test and reason about.
- **Impact:** Low (works correctly, just organizational)
- **Effort:** Substantial (4-6 hours - need to split, update imports, ensure no breakage)
- **Suggested phase:** Phase 7 (Code health milestone)
## Closed Enhancements
### ISS-XXX: [Brief description]
- **Status:** Resolved in Phase [X] Plan [Y] (YYYY-MM-DD)
- **Resolution:** [What was done]
- **Benefit:** [How it improved the codebase]
---
**Summary:** [X] open, [Y] closed
**Priority queue:** [List ISS numbers in priority order, or "Address as time permits"]
```
## Usage Guidelines
**When issues are added:**
- Auto-increment ISS numbers (ISS-001, ISS-002, etc.) - see the sketch after these guidelines
- Always include discovery context (Phase/Plan/Task and date)
- Be specific about impact and effort
- Suggested phase helps with roadmap planning
**When issues are resolved:**
- Move to "Closed Enhancements" section
- Document resolution and benefit
- Keeps history for reference
**Prioritization:**
- Quick wins (Quick effort, visible benefit) → Earlier phases
- Substantial refactors (Substantial effort, organizational benefit) → Dedicated "code health" phases
- Nice-to-haves (Low impact, high effort) → "Future" or never
**Integration with roadmap:**
- When planning new phases, scan ISSUES.md for relevant items
- Can create phases specifically for addressing accumulated issues
- Example: "Phase 8: Code Health - Address ISS-003, ISS-007, ISS-012"
## Example: Issues Driving Phase Planning
```markdown
# Roadmap excerpt
### Phase 6: Performance Optimization (Planned)
**Milestone Goal:** Address performance issues discovered during v1.0 usage
**Includes:**
- ISS-002: Redis connection pooling (Medium effort)
- ISS-015: Database query optimization (Quick)
- ISS-021: Image lazy loading (Medium)
**Excludes ISS-003 (refactoring):** Saving for dedicated code health phase
```
This creates traceability: enhancement discovered → logged → planned → addressed → documented.

# Milestone Entry Template
Add this entry to `.planning/MILESTONES.md` when completing a milestone:
```markdown
## v[X.Y] [Name] (Shipped: YYYY-MM-DD)
**Delivered:** [One sentence describing what shipped]
**Phases completed:** [X-Y] ([Z] plans total)
**Key accomplishments:**
- [Major achievement 1]
- [Major achievement 2]
- [Major achievement 3]
- [Major achievement 4]
**Stats:**
- [X] files created/modified
- [Y] lines of code (primary language)
- [Z] phases, [N] plans, [M] tasks
- [D] days from start to ship (or milestone to milestone)
**Git range:** `feat(XX-XX)` → `feat(YY-YY)`
**What's next:** [Brief description of next milestone goals, or "Project complete"]
---
```
<structure>
If MILESTONES.md doesn't exist, create it with this header (a shell sketch follows this block):
```markdown
# Project Milestones: [Project Name]
[Entries in reverse chronological order - newest first]
```
</structure>
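A minimal shell sketch of that conditional creation, assuming the standard `.planning/MILESTONES.md` location:

```bash
MILESTONES=.planning/MILESTONES.md

# Create the file with its header only if it doesn't exist yet
if [ ! -f "$MILESTONES" ]; then
  cat > "$MILESTONES" <<'EOF'
# Project Milestones: [Project Name]

[Entries in reverse chronological order - newest first]
EOF
fi
```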
<guidelines>
**When to create milestones:**
- Initial v1.0 MVP shipped
- Major version releases (v2.0, v3.0)
- Significant feature milestones (v1.1, v1.2)
- Before archiving planning (capture what was shipped)
**Don't create milestones for:**
- Individual phase completions (normal workflow)
- Work in progress (wait until shipped)
- Minor bug fixes that don't constitute a release
**Stats to include** (see the sketch after these guidelines):
- Count modified files: `git diff --stat <first-commit>..<last-commit> | tail -1` (resolve the `feat(XX-XX)` and `feat(YY-YY)` commits to hashes first)
- Count LOC: `find . -name "*.swift" -o -name "*.ts" | xargs wc -l` (or the relevant extension)
- Phase/plan/task counts from ROADMAP
- Timeline from first phase commit to last phase commit
**Git range format:**
- First commit of milestone → last commit of milestone
- Example: `feat(01-01)` → `feat(04-01)` for phases 1-4
</guidelines>
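A hedged sketch of gathering these stats, assuming bash and that the `feat(XX-XX)` markers are commit-message prefixes (per this skill's convention) rather than git refs; the prefixes and file extension below are taken from the example that follows and should be adjusted per milestone:

```bash
# Resolve the milestone boundary commits from their message prefixes
FIRST=$(git log --reverse --format='%H %s' | grep -m1 'feat(01-01)' | cut -d' ' -f1)
LAST=$(git log --format='%H %s' | grep -m1 'feat(04-01)' | cut -d' ' -f1)

# Files changed and lines added/removed (excludes the first commit's own changes)
git diff --stat "$FIRST..$LAST" | tail -1

# Rough LOC for the primary language (adjust the extension; skips node_modules)
find . -path ./node_modules -prune -o -name '*.swift' -print0 | xargs -0 cat | wc -l

# Days from the first milestone commit to the last
echo "$(( ( $(git log -1 --format=%ct "$LAST") - $(git log -1 --format=%ct "$FIRST") ) / 86400 )) days"
```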
<example>
```markdown
# Project Milestones: WeatherBar
## v1.1 Security & Polish (Shipped: 2025-12-10)
**Delivered:** Security hardening with Keychain integration and comprehensive error handling
**Phases completed:** 5-6 (3 plans total)
**Key accomplishments:**
- Migrated API key storage from plaintext to macOS Keychain
- Implemented comprehensive error handling for network failures
- Added Sentry crash reporting integration
- Fixed memory leak in auto-refresh timer
**Stats:**
- 23 files modified
- 650 lines of Swift added
- 2 phases, 3 plans, 12 tasks
- 8 days from v1.0 to v1.1
**Git range:** `feat(05-01)` → `feat(06-02)`
**What's next:** v2.0 SwiftUI redesign with widget support
---
## v1.0 MVP (Shipped: 2025-11-25)
**Delivered:** Menu bar weather app with current conditions and 3-day forecast
**Phases completed:** 1-4 (7 plans total)
**Key accomplishments:**
- Menu bar app with popover UI (AppKit)
- OpenWeather API integration with auto-refresh
- Current weather display with conditions icon
- 3-day forecast list with high/low temperatures
- Code signed and notarized for distribution
**Stats:**
- 47 files created
- 2,450 lines of Swift
- 4 phases, 7 plans, 28 tasks
- 12 days from start to ship
**Git range:** `feat(01-01)` → `feat(04-01)`
**What's next:** Security audit and hardening for v1.1
```
</example>

# Phase Prompt Template
Copy and fill this structure for `.planning/phases/XX-name/{phase}-{plan}-PLAN.md`:
**Naming:** Use `{phase}-{plan}-PLAN.md` format (e.g., `01-02-PLAN.md` for Phase 1, Plan 2)
```markdown
---
phase: XX-name
type: execute
domain: [optional - if domain skill loaded]
---
<objective>
[What this phase accomplishes - from roadmap phase goal]
Purpose: [Why this matters for the project]
Output: [What artifacts will be created]
</objective>
<execution_context>
@~/.claude/skills/create-plans/workflows/execute-phase.md
@~/.claude/skills/create-plans/templates/summary.md
[If plan contains checkpoint tasks (type="checkpoint:*"), add:]
@~/.claude/skills/create-plans/references/checkpoints.md
</execution_context>
<context>
@.planning/BRIEF.md
@.planning/ROADMAP.md
[If research exists:]
@.planning/phases/XX-name/FINDINGS.md
[Relevant source files:]
@src/path/to/relevant.ts
</context>
<tasks>
<task type="auto">
<name>Task 1: [Action-oriented name]</name>
<files>path/to/file.ext, another/file.ext</files>
<action>[Specific implementation - what to do, how to do it, what to avoid and WHY]</action>
<verify>[Command or check to prove it worked]</verify>
<done>[Measurable acceptance criteria]</done>
</task>
<task type="auto">
<name>Task 2: [Action-oriented name]</name>
<files>path/to/file.ext</files>
<action>[Specific implementation]</action>
<verify>[Command or check]</verify>
<done>[Acceptance criteria]</done>
</task>
<task type="checkpoint:decision" gate="blocking">
<decision>[What needs deciding]</decision>
<context>[Why this decision matters]</context>
<options>
<option id="option-a">
<name>[Option name]</name>
<pros>[Benefits and advantages]</pros>
<cons>[Tradeoffs and limitations]</cons>
</option>
<option id="option-b">
<name>[Option name]</name>
<pros>[Benefits and advantages]</pros>
<cons>[Tradeoffs and limitations]</cons>
</option>
</options>
<resume-signal>[How to indicate choice - "Select: option-a or option-b"]</resume-signal>
</task>
<task type="auto">
<name>Task 3: [Action-oriented name]</name>
<files>path/to/file.ext</files>
<action>[Specific implementation]</action>
<verify>[Command or check]</verify>
<done>[Acceptance criteria]</done>
</task>
<task type="checkpoint:human-verify" gate="blocking">
<what-built>[What Claude just built that needs verification]</what-built>
<how-to-verify>
1. Run: [command to start dev server/app]
2. Visit: [URL to check]
3. Test: [Specific interactions]
4. Confirm: [Expected behaviors]
</how-to-verify>
<resume-signal>Type "approved" to continue, or describe issues to fix</resume-signal>
</task>
[Continue for all tasks - mix of auto and checkpoints as needed...]
</tasks>
<verification>
Before declaring phase complete:
- [ ] [Specific test command]
- [ ] [Build/type check passes]
- [ ] [Behavior verification]
</verification>
<success_criteria>
- All tasks completed
- All verification checks pass
- No errors or warnings introduced
- [Phase-specific criteria]
</success_criteria>
<output>
After completion, create `.planning/phases/XX-name/{phase}-{plan}-SUMMARY.md`:
# Phase [X] Plan [Y]: [Name] Summary
**[Substantive one-liner - what shipped, not "phase complete"]**
## Accomplishments
- [Key outcome 1]
- [Key outcome 2]
## Files Created/Modified
- `path/to/file.ts` - Description
- `path/to/another.ts` - Description
## Decisions Made
[Key decisions and rationale, or "None"]
## Issues Encountered
[Problems and resolutions, or "None"]
## Next Step
[If more plans in this phase: "Ready for {phase}-{next-plan}-PLAN.md"]
[If phase complete: "Phase complete, ready for next phase"]
</output>
```
<key_elements>
From create-meta-prompts patterns:
- XML structure for Claude parsing
- @context references for file loading
- Task types: auto, checkpoint:human-action, checkpoint:human-verify, checkpoint:decision
- Action includes "what to avoid and WHY" (from intelligence-rules)
- Verification is specific and executable
- Success criteria is measurable
- Output specification includes SUMMARY.md structure
**Scope guidance:**
- Aim for 3-6 tasks per plan
- If planning >7 tasks, split into multiple plans (01-01, 01-02, etc.)
- Target ~80% context usage maximum
- See references/scope-estimation.md for splitting guidance
</key_elements>
<good_examples>
```markdown
---
phase: 01-foundation
type: execute
domain: next-js
---
<objective>
Set up Next.js project with authentication foundation.
Purpose: Establish the core structure and auth patterns all features depend on.
Output: Working Next.js app with JWT auth, protected routes, and user model.
</objective>
<execution_context>
@~/.claude/skills/create-plans/workflows/execute-phase.md
@~/.claude/skills/create-plans/templates/summary.md
</execution_context>
<context>
@.planning/BRIEF.md
@.planning/ROADMAP.md
@src/lib/db.ts
</context>
<tasks>
<task type="auto">
<name>Task 1: Add User model to database schema</name>
<files>prisma/schema.prisma</files>
<action>Add User model with fields: id (cuid), email (unique), passwordHash, createdAt, updatedAt. Add Session relation. Use @db.VarChar(255) for email to prevent index issues.</action>
<verify>npx prisma validate passes, npx prisma generate succeeds</verify>
<done>Schema valid, types generated, no errors</done>
</task>
<task type="auto">
<name>Task 2: Create login API endpoint</name>
<files>src/app/api/auth/login/route.ts</files>
<action>POST endpoint that accepts {email, password}, validates against User table using bcrypt, returns JWT in httpOnly cookie with 15-min expiry. Use jose library for JWT (not jsonwebtoken - it has CommonJS issues with Next.js).</action>
<verify>curl -X POST /api/auth/login -d '{"email":"test@test.com","password":"test"}' -H "Content-Type: application/json" returns 200 with Set-Cookie header</verify>
<done>Valid credentials return 200 + cookie, invalid return 401, missing fields return 400</done>
</task>
</tasks>
<verification>
Before declaring phase complete:
- [ ] `npm run build` succeeds without errors
- [ ] `npx prisma validate` passes
- [ ] Login endpoint responds correctly to valid/invalid credentials
- [ ] Protected route redirects unauthenticated users
</verification>
<success_criteria>
- All tasks completed
- All verification checks pass
- No TypeScript errors
- JWT auth flow works end-to-end
</success_criteria>
<output>
After completion, create `.planning/phases/01-foundation/01-01-SUMMARY.md`
</output>
```
</good_examples>
<bad_examples>
```markdown
# Phase 1: Foundation
## Tasks
### Task 1: Set up authentication
**Action**: Add auth to the app
**Done when**: Users can log in
```
This is useless. No XML structure, no @context, no verification, no specificity.
</bad_examples>

# Research Prompt Template
For phases requiring research before planning:
```markdown
---
phase: XX-name
type: research
topic: [research-topic]
---
<session_initialization>
Before beginning research, verify today's date:
!`date +%Y-%m-%d`
Use this date when searching for "current" or "latest" information.
Example: If today is 2025-11-22, search for "2025" not "2024".
</session_initialization>
<research_objective>
Research [topic] to inform [phase name] implementation.
Purpose: [What decision/implementation this enables]
Scope: [Boundaries]
Output: FINDINGS.md with structured recommendations
</research_objective>
<research_scope>
<include>
- [Question to answer]
- [Area to investigate]
- [Specific comparison if needed]
</include>
<exclude>
- [Out of scope for this research]
- [Defer to implementation phase]
</exclude>
<sources>
Official documentation (with exact URLs when known):
- https://example.com/official-docs
- https://example.com/api-reference
Search queries for WebSearch:
- "[topic] best practices {current_year}"
- "[topic] latest version"
Context7 MCP for library docs
Prefer current/recent sources (check date above)
</sources>
</research_scope>
<verification_checklist>
{If researching configuration/architecture with known components:}
□ Enumerate ALL known options/scopes (list them explicitly):
□ Option/Scope 1: [description]
□ Option/Scope 2: [description]
□ Option/Scope 3: [description]
□ Document exact file locations/URLs for each option
□ Verify precedence/hierarchy rules if applicable
□ Check for recent updates or changes to documentation
{For all research:}
□ Verify negative claims ("X is not possible") with official docs
□ Confirm all primary claims have authoritative sources
□ Check both current docs AND recent updates/changelogs
□ Test multiple search queries to avoid missing information
□ Check for environment/tool-specific variations
</verification_checklist>
<research_quality_assurance>
Before completing research, perform these checks:
<completeness_check>
- [ ] All enumerated options/components documented with evidence
- [ ] Official documentation cited for critical claims
- [ ] Contradictory information resolved or flagged
</completeness_check>
<blind_spots_review>
Ask yourself: "What might I have missed?"
- [ ] Are there configuration/implementation options I didn't investigate?
- [ ] Did I check for multiple environments/contexts?
- [ ] Did I verify claims that seem definitive ("cannot", "only", "must")?
- [ ] Did I look for recent changes or updates to documentation?
</blind_spots_review>
<critical_claims_audit>
For any statement like "X is not possible" or "Y is the only way":
- [ ] Is this verified by official documentation?
- [ ] Have I checked for recent updates that might change this?
- [ ] Are there alternative approaches I haven't considered?
</critical_claims_audit>
</research_quality_assurance>
<incremental_output>
**CRITICAL: Write findings incrementally to prevent token limit failures**
Instead of generating full FINDINGS.md at the end:
1. Create FINDINGS.md with structure skeleton
2. Write each finding as you discover it (append immediately)
3. Add code examples as found (append immediately)
4. Finalize summary and metadata at end
This ensures zero lost work if token limits are hit.
<workflow>
Step 1 - Initialize:
```bash
# Create skeleton file
cat > .planning/phases/XX-name/FINDINGS.md <<'EOF'
# [Topic] Research Findings
## Summary
[Will complete at end]
## Recommendations
[Will complete at end]
## Key Findings
[Append findings here as discovered]
## Code Examples
[Append examples here as found]
## Metadata
[Will complete at end]
EOF
```
Step 2 - Append findings as discovered:
After researching each aspect, immediately append to Key Findings section
Step 3 - Finalize at end:
Complete Summary, Recommendations, and Metadata sections
</workflow>
</incremental_output>
<output_structure>
Create `.planning/phases/XX-name/FINDINGS.md`:
# [Topic] Research Findings
## Summary
[2-3 paragraph executive summary]
## Recommendations
### Primary Recommendation
[What to do and why]
### Alternatives Considered
[What else was evaluated]
## Key Findings
### [Category 1]
- Finding with source URL
- Relevance to our case
### [Category 2]
- Finding with source URL
- Relevance
## Code Examples
[Relevant patterns, if applicable]
## Metadata
<metadata>
<confidence level="high|medium|low">
[Why this confidence level]
</confidence>
<dependencies>
[What's needed to proceed]
</dependencies>
<open_questions>
[What couldn't be determined]
</open_questions>
<assumptions>
[What was assumed]
</assumptions>
<quality_report>
<sources_consulted>
[List URLs of official documentation and primary sources]
</sources_consulted>
<claims_verified>
[Key findings verified with official sources]
</claims_verified>
<claims_assumed>
[Findings based on inference or incomplete information]
</claims_assumed>
<confidence_by_finding>
- Finding 1: High (official docs + multiple sources)
- Finding 2: Medium (single source)
- Finding 3: Low (inferred, requires verification)
</confidence_by_finding>
</quality_report>
</metadata>
</output_structure>
<success_criteria>
- All scope questions answered
- All verification checklist items completed
- Sources are current and authoritative
- Clear primary recommendation
- Metadata captures uncertainties
- Quality report distinguishes verified from assumed
- Ready to inform PLAN.md creation
</success_criteria>
```
<when_to_use>
Create RESEARCH.md before PLAN.md when:
- Technology choice unclear
- Best practices needed for unfamiliar domain
- API/library investigation required
- Architecture decision pending
- Multiple valid approaches exist
</when_to_use>
<example>
```markdown
---
phase: 02-auth
type: research
topic: JWT library selection for Next.js App Router
---
<research_objective>
Research JWT libraries to determine best option for Next.js 14 App Router authentication.
Purpose: Select JWT library before implementing auth endpoints
Scope: Compare jose, jsonwebtoken, and @auth/core for our use case
Output: FINDINGS.md with library recommendation
</research_objective>
<research_scope>
<include>
- ESM/CommonJS compatibility with Next.js 14
- Edge runtime support
- Token creation and validation patterns
- Community adoption and maintenance
</include>
<exclude>
- Full auth framework comparison (NextAuth vs custom)
- OAuth provider configuration
- Session storage strategies
</exclude>
<sources>
Official documentation (prioritize):
- https://github.com/panva/jose
- https://github.com/auth0/node-jsonwebtoken
Context7 MCP for library docs
Prefer current/recent sources
</sources>
</research_scope>
<success_criteria>
- Clear recommendation with rationale
- Code examples for selected library
- Known limitations documented
- Verification checklist completed
</success_criteria>
```
</example>

# Roadmap Template
Copy and fill this structure for `.planning/ROADMAP.md`:
## Initial Roadmap (v1.0 Greenfield)
```markdown
# Roadmap: [Project Name]
## Overview
[One paragraph describing the journey from start to finish]
## Phases
- [ ] **Phase 1: [Name]** - [One-line description]
- [ ] **Phase 2: [Name]** - [One-line description]
- [ ] **Phase 3: [Name]** - [One-line description]
- [ ] **Phase 4: [Name]** - [One-line description]
## Phase Details
### Phase 1: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Nothing (first phase)
**Plans**: [Number of plans, e.g., "3 plans" or "TBD after research"]
Plans:
- [ ] 01-01: [Brief description of first plan]
- [ ] 01-02: [Brief description of second plan]
- [ ] 01-03: [Brief description of third plan]
### Phase 2: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 1
**Plans**: [Number of plans]
Plans:
- [ ] 02-01: [Brief description]
### Phase 3: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 2
**Plans**: [Number of plans]
Plans:
- [ ] 03-01: [Brief description]
- [ ] 03-02: [Brief description]
### Phase 4: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 3
**Plans**: [Number of plans]
Plans:
- [ ] 04-01: [Brief description]
## Progress
| Phase | Plans Complete | Status | Completed |
|-------|----------------|--------|-----------|
| 1. [Name] | 0/3 | Not started | - |
| 2. [Name] | 0/1 | Not started | - |
| 3. [Name] | 0/2 | Not started | - |
| 4. [Name] | 0/1 | Not started | - |
```
<guidelines>
**Initial planning (v1.0):**
- 3-6 phases total (more = scope creep)
- Each phase delivers something coherent
- Phases can have 1+ plans (split if >7 tasks or multiple subsystems)
- Plans use naming: {phase}-{plan}-PLAN.md (e.g., 01-02-PLAN.md)
- No time estimates (this isn't enterprise PM)
- Progress table updated by transition workflow
- Plan count can be "TBD" initially, refined during planning
**After milestones ship:**
- Reorganize with milestone groupings (see below)
- Collapse completed milestones in `<details>` tags
- Add new milestone sections for upcoming work
- Keep continuous phase numbering (never restart at 01)
</guidelines>
<status_values>
- `Not started` - Haven't begun
- `In progress` - Currently working
- `Complete` - Done (add completion date)
- `Deferred` - Pushed to later (with reason)
</status_values>
## Milestone-Grouped Roadmap (After v1.0 Ships)
After completing first milestone, reorganize roadmap with milestone groupings:
```markdown
# Roadmap: [Project Name]
## Milestones
- ✅ **v1.0 MVP** - Phases 1-4 (shipped YYYY-MM-DD)
- 🚧 **v1.1 [Name]** - Phases 5-6 (in progress)
- 📋 **v2.0 [Name]** - Phases 7-10 (planned)
## Phases
<details>
<summary>✅ v1.0 MVP (Phases 1-4) - SHIPPED YYYY-MM-DD</summary>
### Phase 1: [Name]
**Goal**: [What this phase delivers]
**Plans**: 3 plans
Plans:
- [x] 01-01: [Brief description]
- [x] 01-02: [Brief description]
- [x] 01-03: [Brief description]
### Phase 2: [Name]
**Goal**: [What this phase delivers]
**Plans**: 2 plans
Plans:
- [x] 02-01: [Brief description]
- [x] 02-02: [Brief description]
### Phase 3: [Name]
**Goal**: [What this phase delivers]
**Plans**: 2 plans
Plans:
- [x] 03-01: [Brief description]
- [x] 03-02: [Brief description]
### Phase 4: [Name]
**Goal**: [What this phase delivers]
**Plans**: 1 plan
Plans:
- [x] 04-01: [Brief description]
</details>
### 🚧 v1.1 [Name] (In Progress)
**Milestone Goal:** [What v1.1 delivers]
#### Phase 5: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 4
**Plans**: 1 plan
Plans:
- [ ] 05-01: [Brief description]
#### Phase 6: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 5
**Plans**: 2 plans
Plans:
- [ ] 06-01: [Brief description]
- [ ] 06-02: [Brief description]
### 📋 v2.0 [Name] (Planned)
**Milestone Goal:** [What v2.0 delivers]
#### Phase 7: [Name]
**Goal**: [What this phase delivers]
**Depends on**: Phase 6
**Plans**: 3 plans
Plans:
- [ ] 07-01: [Brief description]
- [ ] 07-02: [Brief description]
- [ ] 07-03: [Brief description]
[... additional phases for v2.0 ...]
## Progress
| Phase | Milestone | Plans Complete | Status | Completed |
|-------|-----------|----------------|--------|-----------|
| 1. Foundation | v1.0 | 3/3 | Complete | YYYY-MM-DD |
| 2. Features | v1.0 | 2/2 | Complete | YYYY-MM-DD |
| 3. Polish | v1.0 | 2/2 | Complete | YYYY-MM-DD |
| 4. Launch | v1.0 | 1/1 | Complete | YYYY-MM-DD |
| 5. Security | v1.1 | 0/1 | Not started | - |
| 6. Hardening | v1.1 | 0/2 | Not started | - |
| 7. Redesign Core | v2.0 | 0/3 | Not started | - |
```
**Notes:**
- Milestone emoji: ✅ shipped, 🚧 in progress, 📋 planned
- Completed milestones collapsed in `<details>` for readability
- Current/future milestones expanded
- Continuous phase numbering (01-99)
- Progress table includes milestone column

# Summary Template
Standardize SUMMARY.md format for phase completion:
```markdown
# Phase [X]: [Name] Summary
**[Substantive one-liner describing outcome - NOT "phase complete" or "implementation finished"]**
## Accomplishments
- [Most important outcome]
- [Second key accomplishment]
- [Third if applicable]
## Files Created/Modified
- `path/to/file.ts` - What it does
- `path/to/another.ts` - What it does
## Decisions Made
[Key decisions with brief rationale, or "None - followed plan as specified"]
## Deviations from Plan
[If no deviations: "None - plan executed exactly as written"]
[If deviations occurred:]
### Auto-fixed Issues
**1. [Rule X - Category] Brief description**
- **Found during:** Task [N] ([task name])
- **Issue:** [What was wrong]
- **Fix:** [What was done]
- **Files modified:** [file paths]
- **Verification:** [How it was verified]
- **Commit:** [hash]
[... repeat for each auto-fix ...]
### Deferred Enhancements
Logged to .planning/ISSUES.md for future consideration:
- ISS-XXX: [Brief description] (discovered in Task [N])
- ISS-XXX: [Brief description] (discovered in Task [N])
---
**Total deviations:** [N] auto-fixed ([breakdown by rule]), [N] deferred
**Impact on plan:** [Brief assessment - e.g., "All auto-fixes necessary for correctness/security. No scope creep."]
## Issues Encountered
[Problems and how they were resolved, or "None"]
[Note: "Deviations from Plan" documents unplanned work that was handled automatically via deviation rules. "Issues Encountered" documents problems during planned work that required problem-solving.]
## Next Phase Readiness
[What's ready for next phase]
[Any blockers or concerns]
---
*Phase: XX-name*
*Completed: [date]*
```
<one_liner_rules>
The one-liner MUST be substantive:
**Good:**
- "JWT auth with refresh rotation using jose library"
- "Prisma schema with User, Session, and Product models"
- "Dashboard with real-time metrics via Server-Sent Events"
**Bad:**
- "Phase complete"
- "Authentication implemented"
- "Foundation finished"
- "All tasks done"
The one-liner should tell someone what actually shipped.
</one_liner_rules>
<example>
```markdown
# Phase 1: Foundation Summary
**JWT auth with refresh rotation using jose library, Prisma User model, and protected API middleware**
## Accomplishments
- User model with email/password auth
- Login/logout endpoints with httpOnly JWT cookies
- Protected route middleware checking token validity
- Refresh token rotation on each request
## Files Created/Modified
- `prisma/schema.prisma` - User and Session models
- `src/app/api/auth/login/route.ts` - Login endpoint
- `src/app/api/auth/logout/route.ts` - Logout endpoint
- `src/middleware.ts` - Protected route checks
- `src/lib/auth.ts` - JWT helpers using jose
## Decisions Made
- Used jose instead of jsonwebtoken (ESM-native, Edge-compatible)
- 15-min access tokens with 7-day refresh tokens
- Storing refresh tokens in database for revocation capability
## Deviations from Plan
### Auto-fixed Issues
**1. [Rule 2 - Missing Critical] Added password hashing with bcrypt**
- **Found during:** Task 2 (Login endpoint implementation)
- **Issue:** Plan didn't specify password hashing - storing plaintext would be critical security flaw
- **Fix:** Added bcrypt hashing on registration, comparison on login with salt rounds 10
- **Files modified:** src/app/api/auth/login/route.ts, src/lib/auth.ts
- **Verification:** Password hash test passes, plaintext never stored
- **Commit:** abc123f
**2. [Rule 3 - Blocking] Installed missing jose dependency**
- **Found during:** Task 4 (JWT token generation)
- **Issue:** jose package not in package.json, import failing
- **Fix:** Ran `npm install jose`
- **Files modified:** package.json, package-lock.json
- **Verification:** Import succeeds, build passes
- **Commit:** def456g
### Deferred Enhancements
Logged to .planning/ISSUES.md for future consideration:
- ISS-001: Add rate limiting to login endpoint (discovered in Task 2)
- ISS-002: Improve token refresh UX with auto-retry on 401 (discovered in Task 5)
---
**Total deviations:** 2 auto-fixed (1 missing critical, 1 blocking), 2 deferred
**Impact on plan:** Both auto-fixes essential for security and functionality. No scope creep.
## Issues Encountered
- jsonwebtoken CommonJS import failed in Edge runtime - switched to jose (planned library change, worked as expected)
## Next Phase Readiness
- Auth foundation complete, ready for feature development
- User registration endpoint needed before public launch
---
*Phase: 01-foundation*
*Completed: 2025-01-15*
```
</example>