Initial commit

Commit dc99518b87 by Zhongwei Li, 2025-11-30 09:02:26 +08:00
13 changed files with 1475 additions and 0 deletions

---
name: claude-workflow
description: Load for any CWF planning task including: explaining the workflow, answering planning questions, creating plans, amending plans, and implementing features. Contains conventions, phase structure, task formats, validation rules, and templates.
---
# Claude Workflow
Knowledge repository for Claude Workflow (CWF).
## Overview
CWF is a plan-driven development workflow using two complementary documents that work together to guide feature implementation:
**Plan Document (`{feature-name}-plan.md`):**
- Captures architectural context and design rationale
- Documents WHY decisions were made and WHAT the solution is
- **Structure defined in `plan-spec.md`**
**Tasklist Document (`{feature-name}-tasklist.md`):**
- Provides step-by-step execution guidance
- Documents WHEN to do tasks and HOW to implement them
- **Structure defined in `tasklist-spec.md`**
**Both documents follow the conformance requirements defined below.**
---
## Conformance and Tailoring
**All CWF planning documents (plans and tasklists) use RFC 2119 keywords to define requirements.**
The specifications in `plan-spec.md` and `tasklist-spec.md` use these keywords as described in RFC 2119.
- **MUST** / **REQUIRED** / **SHALL** - Mandatory requirements for all documents
- **SHOULD** / **RECOMMENDED** - Strongly recommended; include unless there's good reason not to
- **MAY** / **OPTIONAL** - Optional enhancements; include when they add value
- **MUST NOT** / **SHALL NOT** - Absolute prohibitions
- **SHOULD NOT** - Generally inadvisable; avoid unless there's good reason
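For example, requirements in the specifications read:
```markdown
Plan documents MUST include three core sections: Overview, Solution Design, and Implementation Strategy.
The strategy SHOULD explain WHY the tasklist is structured as it is.
Tasks MAY provide task-critical information in bulletpoints.
```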
---
## Checkpoints
Checkpoints are end-of-phase validation operations that provide quality control for AI-driven development.
**Purpose:**
- Validate code quality independent of functional testing
- Ensure AI-generated code meets project standards
- Catch issues early before accumulating technical debt
**Checkpoint Types:**
- **Self-review:** Agent reviews implementation against phase deliverable
- **Code quality:** Linting, formatting, type checking (project-specific tools)
- **Code complexity:** Complexity analysis (project-specific thresholds)
Human review occurs after checkpoints complete, when "Phase X Complete" is signaled.
**Where checkpoints appear:**
- **Plan:** Checkpoint strategy explains WHY these checkpoints and WHAT tools
- **Tasklist:** Checkpoint checklist specifies WHEN to run and HOW to execute
**Key principle:** Checkpoints are validation operations performed after phase task completion but before moving to the next phase. They are distinct from functional tests, which validate feature behavior.
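As a sketch, a phase's checkpoint checklist might look like this (assuming a project that already uses ruff; substitute your own tools):
```markdown
**Checkpoints:**
- [ ] Self-review: Verify implementation against the Phase 2 deliverable
- [ ] Code quality: Run `ruff check src/`
- [ ] Code complexity: Run `ruff check src/ --select C901`
```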
---
## CWF Workflow
The CWF planning workflow follows this command-driven flow:
```text
/brainstorm (optional) [Human runs]
        ↓
Design Summary [Agent writes]
        ↓
/write-plan [Human runs]
        ↓
Plan + Tasklist [Agent writes]
        ↓
/implement-plan [Human runs]
        ↓
Phase 1 [Agent implements] → Checkpoints [Agent runs] → Review [Human] → ✓ → /clear [Human runs]
        ↓
Phase 2 [Agent implements] → Checkpoints [Agent runs] → Review [Human] → ✓ → /clear [Human runs]
        ↓
[Changes?] → /amend-plan [Human runs] ──┐
        ↓                               │
Continue development [Agent] ←──────────┘
        ↓
Feature Complete ✓ [Human confirms]
```
### Stage Breakdown
**1. `/brainstorm` Command (Optional)**
- **Human:** Runs `/brainstorm` command (optional step for structured exploration)
- **Agent:**
- Systematically extracts requirements through guided questions
- Explores 2-3 alternative approaches with trade-offs
- Incrementally builds design with validation checkpoints
- Produces design summary document in `docs/brainstorms/`
- **Outcome:** Complete design context ready for `/write-plan`
- **Note:** Can be skipped in favor of informal planning discussion or written specification
**2. `/write-plan` Command**
- **Human:** Runs `/write-plan` command
- **Agent:** Generates two documents:
- Plan: Architectural context and WHY/WHAT decisions
- Tasklist: Step-by-step HOW/WHEN execution guidance
- Validates structure and consistency between documents
**3. Phase-by-Phase Implementation**
The implementation follows this repeating cycle:
- **Human:** Runs `/implement-plan` command
- **Agent:**
- Reads plan for architectural understanding
- Checks tasklist to identify completed tasks and current phase
- Works through tasks for current phase sequentially
- Marks tasks complete as work progresses
- Executes checkpoints (code quality, complexity checks, etc.)
- Signals phase completion for human review at phase boundary
- **Human:**
- Reviews phase results when agent signals completion
- Runs `/clear` to start fresh session for next phase
- **Repeats cycle:** Runs `/implement-plan` again for next phase
**Note:** Conversation history is lost after `/clear`; only plan, tasklist checkboxes, and committed code persist across cycles.
**4. `/amend-plan` Command (When Needed)**
- **Human:** Discusses amendment and runs `/amend-plan` when requirements change during development
- **Agent:**
- Adds tasks to incomplete phases
- Creates new phases for additional work
- Updates plan sections with new context
- Follows amendment safety rules
- **Agent:** Continues development with amended plan
**5. Feature Completion**
- **Agent:** Completes all phases and signals completion
- **Human:** Reviews and confirms feature is complete (✓)
## Quick Reference
| Need to understand... | Read This Reference | Contains |
|----------------------|---------------------|----------|
| **Plan document specification** | `references/plan-spec.md` | Plan structure requirements with RFC 2119 keywords |
| **Tasklist document specification** | `references/tasklist-spec.md` | Tasklist structure requirements with RFC 2119 keywords |
| **Amendment rules and safety** | `references/amendment.md` | Rules for safely modifying plans and tasklists |
| **Argument parsing for commands** | `references/parsing-arguments.md` | Command argument parsing logic and discovery patterns |
| **Feature naming and file structure** | `references/conventions.md` | Feature naming and file structure standards |
---
**Skill loaded.** CWF planning concepts and patterns are now available.

# Amendment Reference
Safety rules for modifying CWF plans and tasklists during implementation.
## Core Principle
**Completed work is immutable.** Tasks marked `[x]` represent implemented code and form a trusted implementation history. Changing completed work breaks trust in the plan as source of truth and creates confusion about what was actually built. This immutability enables reliable progress tracking, troubleshooting, and context preservation across sessions.
## Amendment Operations
| Operation | Allowed? | Rules | Example |
|-----------|----------|-------|---------|
| **Add tasks to incomplete phase** | ✅ Yes | Phase must have at least one `[ ]` task. Use next sequential ID. | Add `[P2.3]`, `[P2.4]` to Phase 2 (has incomplete tasks) |
| **Add new phase** | ✅ Yes | Use next phase number. Include Goal, Deliverable, Tasks, Checkpoint. All tasks start `[ ]`. | Create Phase 4 after Phase 3 |
| **Modify incomplete task description** | ✅ Yes | Only `[ ]` tasks. Preserve ID and checkbox, update description only. | Change `[ ] [P3.2] Add tests` → `[ ] [P3.2] Add unit tests with 90% coverage` |
| **Update plan sections** | ✅ Yes | Add subsections, clarify decisions, document new constraints. | Add "Caching Strategy" subsection to Technical Approach |
| **Modify completed task** | ❌ No | Code already built. Changing description misrepresents implementation. | Use new task/phase to modify implementation instead |
| **Add task to completed phase** | ❌ No | All tasks `[x]` means phase done. Adding creates inconsistency. | Create new phase for additional work |
| **Change task ID** | ❌ No | IDs are stable references in commits/discussions. Changing breaks references. | Never renumber task IDs |
## Allowed Operation Patterns
### Add Tasks to Incomplete Phase
```markdown
## Phase 2: Ranking (IN PROGRESS)
- [x] [P2.1] Implement TF-IDF scoring
- [x] [P2.2] Add document ranking
- [ ] [P2.3] Add caching layer ← NEW (sequential ID)
- [ ] [P2.4] Write caching tests ← NEW
```
### Add New Phase
```markdown
## Phase 4: Caching Optimization
**Goal:** Improve query performance with caching
**Deliverable:** LRU cache with 1000 entry limit
**Tasks:**
- [ ] [P4.1] Create cache.py
- [ ] [P4.2] Integrate with QueryRanker
- [ ] [P4.3] Add cache tests
**Phase 4 Checkpoint:** Cache operational, performance improved
```
### Modify Incomplete Task
```markdown
- [ ] [P2.3] Write tests
- [ ] [P2.3] Write unit tests in test_ranker.py ← Description updated, ID preserved
```
### Update Plan Section
```markdown
## Technical Approach
### Caching Strategy ← NEW SUBSECTION
- LRU cache, 1000 entry limit
- Cache key: query hash
- Invalidate on index update
```
## Before Amending
**Verify:**
- [ ] Task/phase to modify is incomplete (has `[ ]` tasks)
- [ ] Not changing any task IDs
- [ ] Not modifying completed tasks `[x]`
- [ ] New tasks use sequential IDs
**When to amend:** Requirements change, new constraints discovered, additional work needed in incomplete phases.
**When NOT to amend:** Never modify completed work—create new phases instead.

# Plan Document Specification
Specification for creating conformant plan documents in the CWF workflow.
---
## What is a Plan Document?
Plan documents capture **architectural context and design rationale**. They preserve WHY decisions were made and WHAT the solution is, enabling implementation across sessions after context has been cleared.
**Plan = WHY/WHAT** | Tasklist = WHEN/HOW
---
## Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
> **Note:** See `SKILL.md` for the conformance keyword definitions and tailoring guidance.
---
## Core Plan Sections
Plan documents MUST include three core sections: Overview, Solution Design, and Implementation Strategy.
### Section 1: Overview
Provides high-level summary of problem and solution.
**MUST include:**
- Problem statement (current pain point or gap)
- Feature purpose (solution being built)
- Scope (What is IN/OUT of scope)
**SHOULD include:**
- Success criteria (quantifiable completion validation)
**Example (Informative):**
```markdown
## Overview
### Problem
Users currently search documentation by manually scanning files or using basic text search. This is slow (10+ minutes per search) and misses relevant documents that use different terminology. Support tickets show 40% of questions are about "how to find X in the docs."
### Purpose
Add keyword-based document search with relevance ranking. Users enter search terms and receive ranked results within 1 second, improving discoverability and reducing support load.
### Scope
**IN scope:**
- Keyword search with boolean AND/OR operators
- TF-IDF relevance ranking
- Result filtering by document type
- Search result caching
**OUT of scope:**
- Natural language queries ("find me information about...")
- Semantic/embedding-based search
- Advanced operators (NEAR, wildcards, regex)
### Success Criteria
- Users can search by keywords and receive ranked results
- Search completes in <100ms for 10,000 documents
- Results include documents even with terminology variations
- Test coverage >80% for core search logic
- Zero regressions in existing functionality
```
---
### Section 2: Solution Design
Documents the complete solution architecture and technical approach.
#### 2.1 System Architecture
**MUST include:**
- Component overview (logical pieces and their responsibilities)
- Project structure (file tree with operation markers)
**SHOULD include:**
- Component relationships (dependencies and communication patterns)
- Relationship to existing codebase (where feature fits, what it extends/uses)
**File Tree Format:**
File trees MUST use operation markers:
- `[CREATE]` for new files
- `[MODIFY]` for modified files
- `[REMOVE]` for removed files
- No marker for existing unchanged files
**Example (Informative):**
````markdown
### System Architecture
**Core Components:**
- **QueryParser:** Parses user search strings into structured queries (operators, quoted phrases)
- **DocumentIndexer:** Builds and maintains TF-IDF index from document corpus
- **QueryRanker:** Ranks documents against query using cosine similarity
- **SearchCache:** LRU cache for frequent queries
- **SearchAPI:** HTTP endpoint exposing search functionality
**Project Structure:**
```
src/
├── search/
│ ├── __init__.py [CREATE]
│ ├── parser.py [CREATE]
│ ├── indexer.py [CREATE]
│ ├── ranker.py [CREATE]
│ └── cache.py [CREATE]
├── api/
│ └── search.py [CREATE]
├── models/
│ └── document.py [MODIFY]
└── tests/
└── search/
├── test_parser.py [CREATE]
└── test_ranker.py [CREATE]
```
**Component Relationships:**
- SearchAPI depends on QueryParser, SearchCache
- QueryRanker depends on DocumentIndexer
- SearchCache depends on QueryRanker
- All components use shared Document model
**Relationship to Existing Codebase:**
- Architectural layer: Service layer (alongside existing `src/api/` endpoints)
- Domain: Search functionality (new domain area)
- Extends: `BaseAPIHandler` pattern used throughout repository
- Uses: Existing `AuthMiddleware` for authentication
- Uses: Application `CacheManager` for result caching
- Follows: Repository's service-oriented architecture and dependency injection patterns
````
---
#### 2.2 Design Rationale
Documents reasoning behind structural and technical choices.
**MUST include:**
- Rationale for key design choices
**SHOULD include:**
- Alternatives considered and why not chosen
- Trade-offs accepted
**MAY include:**
- Constraints influencing decisions
- Principles or patterns applied
**Tip (Informative):** Format flexibly - inline rationale, comparison tables, or structured decision records all work. Focus on capturing WHY, not following a template.
**Example (Informative):**
```markdown
### Design Rationale
**Use TF-IDF with cosine similarity for ranking**
Well-understood algorithm with predictable behavior. No training data or ML infrastructure required.
Alternatives considered:
- BM25: Marginal improvement for our corpus size, added complexity not justified
- Neural/embedding-based: Requires GPU, training data, model management - overkill for current needs
Trade-offs accepted:
- Pro: Fast to implement, predictable results, no infrastructure dependencies
- Con: Doesn't understand semantic similarity, sensitive to exact keyword matches
```
---
#### 2.3 Technical Specification
Describes runtime behavior and operational requirements.
**MUST include:**
- Dependencies (libraries, external systems)
- Runtime behavior (algorithms, execution flow, state management)
**MAY include:**
- Error handling (failure detection and recovery)
- Configuration needs (runtime or deployment settings)
**Example (Informative):**
````markdown
### Technical Specification
**Dependencies:**
Required libraries (new):
- scikit-learn 1.3+ (TF-IDF vectorization, cosine similarity)
- nltk 3.8+ (text preprocessing, stopword removal)
Required systems:
- PostgreSQL (stores `documents` table)
- Redis (event stream for `document_updated` events)
- InfluxDB (search metrics and monitoring)
Existing (from project):
- FastAPI 0.100+ (API framework)
- SQLAlchemy 2.0+ (database ORM)
- pytest 7.4+ (testing framework)
**Runtime Behavior:**
1. Parse query → structured query (operators, phrases)
2. Check cache (LRU, 1000 entries)
3. On cache miss: vectorize query, compute cosine similarity, rank results
4. Return paginated results (25 per page)
**Error Handling:**
Invalid Input:
- Empty query → 400 "Query cannot be empty"
- Invalid operators → 400 "Invalid syntax: [specific error]"
- Query too long (>500 chars) → 400 "Query exceeds maximum length"
Runtime Errors:
- Index not ready → 503 "Search index is building, retry in [X] seconds"
- Timeout (>5s) → 408 "Query timeout, try simplifying search terms"
- No results found → 200 with empty list (not an error)
System Errors:
- Database unavailable → 500, log error, alert on-call
- Index corruption → Rebuild from database, log incident
**Configuration:**
```python
SEARCH_INDEX_PATH = "/data/search-index.pkl"
SEARCH_CACHE_SIZE = 1000
SEARCH_TIMEOUT_MS = 5000
```
````
---
### Section 3: Implementation Strategy
Describes high-level approach guiding phase and task structure.
**MUST include:**
- Development approach (incremental, outside-in, vertical slice, bottom-up, etc.)
**SHOULD include:**
- Testing approach (test-driven, integration-focused, comprehensive, etc.)
- Risk mitigation strategy (tackle unknowns first, safe increments, prototype early, etc.)
- Checkpoint strategy (quality and validation operations at phase boundaries)
The strategy SHOULD explain WHY the tasklist is structured as it is.
**MUST NOT include:**
- Step-by-step execution instructions or task checklists
**Example (Informative):**
```markdown
## Implementation Strategy
### Development Approach
**Incremental with Safe Checkpoints**
Build bottom-up with validation at each layer:
1. **Foundation First:** Core search components (indexer, ranker) before API
2. **Runnable Increments:** Each phase produces working, testable code
3. **Early Validation:** Algorithm performance validated early before building around it
### Testing Approach
Integration-focused with targeted unit tests:
- Unit tests for complex logic (parsing, scoring)
- Integration tests for component interactions
- E2E tests for critical user flows
### Checkpoint Strategy
Each phase ends with mandatory validation before proceeding:
- Self-review: Agent reviews implementation against phase deliverable
- Code quality: Linting and formatting with ruff
- Code complexity: Complexity check with Radon
These checkpoints ensure AI-generated code meets project standards before continuing to next phase.
```
**Note (Informative):** Checkpoint types are project-specific. Use only tools your project already has. If the project doesn't use linting or complexity analysis, omit those checkpoints.
---
## Context Independence
Plans MUST be self-contained. Implementation may occur in fresh sessions after context has been cleared. All architectural decisions and rationale must be in the plan document.
---
## Validation
Plans are conformant when they:
- Include all three core sections with required content
- Contain all three Solution Design subsections
- Use file tree markers correctly
- Document WHY for design decisions
- Are self-contained (no assumed conversation context)
- Contain no step-by-step execution instructions

# Tasklist Document Specification
Specification for creating conformant tasklist documents in CWF.
---
## What is a Tasklist Document?
Tasklist documents provide **step-by-step execution guidance**. They break features into phases and concrete, actionable tasks that implementers execute sequentially.
Plan = WHY/WHAT | **Tasklist = WHEN/HOW**
---
## Conformance
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
> **Note:** See `SKILL.md` for the conformance keyword definitions and tailoring guidance.
---
## Task Syntax
Every task in the tasklist MUST follow this markdown format:
```markdown
- [ ] [PX.Y] Task description with file/component specifics
```
**Task ID Format:** `[PX.Y]` where P = "Phase", X = phase number, Y = task number within phase
**Requirements:**
- Tasks MUST use checkboxes: `- [ ]` for incomplete, `- [x]` for completed
- Task numbering MUST start at 1 within each phase
- Numbering MUST be sequential within phases (no gaps)
- Task IDs MUST NOT be reused or skipped
- Tasks MUST NOT use markdown headings (`###`)
- Descriptions MUST specify the file or component being modified
- Tasks MAY provide task-critical information in bulletpoints
**Example (Informative):**
```markdown
- [x] [P1.1] Create query/ directory and __init__.py
- [x] [P1.2] Create QueryModel class with Pydantic in models.py
- [ ] [P1.3] Add validation to QueryModel (required fields, type checks)
- [ ] [P1.4] Write unit tests for QueryModel validation in test_models.py
```
---
## Phase Structure
Every phase MUST follow this standard structure:
```markdown
## Phase X: Descriptive Name
**Goal:** One-sentence description of what this phase accomplishes
**Deliverable:** Concrete outcome (e.g., "Working data models with validation passing tests")
**Tasks:**
- [ ] [PX.1] Specific atomic action - file/component detail
- [ ] [PX.2] Another specific action with clear scope
- [ ] [PX.3] Write tests for implemented functionality
- [ ] [PX.4] Run tests: pytest tests/module/
**Checkpoints:**
- [ ] Code quality: Run `ruff check src/`
- [ ] Code complexity: Run `ruff check src/ --select C901`
- [ ] Review: Self review implementation and verify phase deliverable achieved
**Phase X Complete:** Brief description of system state after phase completion
```
---
## Checkpoint Requirements
Checkpoints are end-of-phase validation operations performed after all tasks in a phase are complete.
**Requirements:**
- Checkpoints MUST use checkbox format: `- [ ] Checkpoint description`
- Checkpoints SHOULD be project-specific validation or quality operations
- Checkpoints MUST NOT duplicate functional test tasks (tests belong in Tasks section)
- Checkpoint commands SHOULD be concrete and executable (e.g., `ruff check src/`)
**Common Checkpoint Types:**
- **Self-review:** Agent reviews implementation against deliverable
- **Code quality:** Linting, formatting, type checking (e.g., ruff, black, mypy)
- **Code complexity:** Complexity analysis (e.g., radon cc)
**Note (Informative):** Use only tools your project already has. Checkpoints provide quality control for AI-driven development.
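**Example (Informative):** A checkpoint block for a Python project that already uses ruff and radon (illustrative tools; substitute your own):
```markdown
**Checkpoints:**
- [ ] Code quality: Run `ruff check src/`
- [ ] Code complexity: Run `radon cc src/ -n C`
- [ ] Review: Self-review implementation and verify phase deliverable achieved
```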
---
## Task Granularity Guidelines
Tasks SHOULD meet these quality criteria:
- **Time:** 5-20 minutes to complete
- **Atomic:** Completable in one go without interruption (single logical change)
- **Testable:** Clear done criteria (observable output or verifiable behavior)
- **File-specific:** Reference concrete files or components
**Example (Informative):**
```markdown
- [ ] [P1.1] Create query/ directory and __init__.py
- [ ] [P1.2] Create QueryModel class with Pydantic in models.py
- [ ] [P1.3] Add field validation to QueryModel (required fields, type checks)
- [ ] [P1.4] Write unit tests for QueryModel validation in test_models.py
```
---
## Task Ordering Principles
Tasks that depend on others MUST be ordered after their dependencies.
**Example (Informative):**
```markdown
- [ ] [P2.1] Create ranker.py with RankerClass stub
- [ ] [P2.2] Implement TF-IDF scoring in RankerClass.score()
- [ ] [P2.3] Add document ranking in RankerClass.rank()
- [ ] [P2.4] Write unit tests for TF-IDF scoring in test_ranker.py
- [ ] [P2.5] Write unit tests for document ranking in test_ranker.py
- [ ] [P2.6] Verify all tests pass: pytest tests/query/test_ranker.py
```
---
## Phase Complete Statement
Every phase MUST end with a "Phase X Complete" statement describing system state after phase completion.
The statement SHOULD be 1-3 sentences describing what capabilities now exist, what's ready for the next phase, and what validation has been completed.
**Example (Informative):**
- "Core data models validated, ready for parser implementation"
- "Query parser complete with DSL support, validated against test cases"
- "End-to-end query flow working with TF-IDF ranking, ready for optimization"
---
## Validation
Tasklists are conformant when they:
- Include required elements in every phase (Goal, Deliverable, Tasks, Checkpoints, Phase Complete)
- Use correct task ID format `[PX.Y]` with checkboxes
- Specify concrete files/components in task descriptions
- Order tasks after their dependencies
- Use checkpoints for quality/validation (not functional tests)
- Contain no architectural rationale or design alternatives (belongs in plan)

---
name: read-constitution
description: Load the constitution files located in the .constitution directory
---
## Instructions
Read each file in the `.constitution` directory using the Read tool:
1. Use the Glob tool to find all constitution files:
```yaml
pattern: "**/.constitution/**/*"
```
2a. If no files are found, there are no constitution files to read; stop.
2b. If a `.constitution` directory exists and contains files, use the Read tool to read each constitution file found.
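The Glob-then-Read steps above are roughly equivalent to this Python sketch (illustrative only; the skill itself uses Claude's Glob and Read tools, and `read_constitution` is a hypothetical helper name):

```python
from pathlib import Path


def read_constitution(root: str) -> dict[str, str]:
    """Return {path: contents} for every file under any .constitution directory.

    Uses the same glob pattern as the skill: "**/.constitution/**/*".
    Returns an empty dict when no .constitution files exist.
    """
    results = {}
    for path in sorted(Path(root).glob("**/.constitution/**/*")):
        if path.is_file():  # skip nested directories matched by the glob
            results[str(path)] = path.read_text()
    return results
```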