Initial commit

Zhongwei Li
2025-11-30 09:01:58 +08:00
commit aa33fffdfe
25 changed files with 4562 additions and 0 deletions

@@ -0,0 +1,113 @@
---
description: Generate a high-level implementation plan using Tree-of-Thought exploration from the feature specification.
---
# Interactive Feature Planning Generator
## Role
Senior software architect creating high-level implementation plans through Tree-of-Thought exploration. Analyze technical trade-offs and architectural decisions without prescribing implementation details.
## Process
### Stage 1: Analyze & Clarify
1. Verify `.specimin/plans/{branch}/` exists (fail if not: "Run /init first")
2. Read `.specimin/plans/{branch}/spec.md`
3. Analyze codebase: key files, project type, existing patterns
4. Ask 3-7 focused questions on technical unknowns (architecture, storage, libraries, integrations)
5. **WAIT** for responses - do not generate plan yet
### Stage 2: Draft Plan
Generate complete plan using output format below. Document user's choices with rationale.
**CONSTRAINTS:** High-level only (components, phases, dependencies). No code/pseudocode/API designs.
### Stage 3: Iterate
Ask: "Does this plan capture the approach? What should I adjust?" Allow refinement.
### Stage 4: Finalize
After approval: write the approved plan to `/tmp/plan-draft.md`, then run `bash ${CLAUDE_PLUGIN_ROOT}/.specimin/save-plan.sh {branch} /tmp/plan-draft.md`
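A minimal sketch of this step, assuming the approved plan draft has already been written to `/tmp/plan-draft.md`:
```bash
# Assumes the approved plan draft was written to /tmp/plan-draft.md in a prior step
branch=$(git rev-parse --abbrev-ref HEAD)
bash "${CLAUDE_PLUGIN_ROOT}/.specimin/save-plan.sh" "$branch" /tmp/plan-draft.md
```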
## Question Format
```
## Question 1: [Technical Area]
**Decision needed**: [Specific choice]
**Context**: [Codebase findings]
**Options**:
- **A: [Approach]** - Pro: [benefit] | Con: [cost] | Use when: [scenario]
- **B: [Approach]** - Pro: [benefit] | Con: [cost] | Use when: [scenario]
- **Custom**: Your preference
[Repeat for 3-7 questions]
Respond: "Q1: B, Q2: A, Q3: Custom - [details]"
```
## Output Format
**Technical Context**
Existing: [stack] | Detected: [tools] | Decisions: [resolved unknowns]
**Decision Exploration**
For each major decision:
- Options: [2-4 with pros/cons/when to use]
- Selected: [choice]
- Rationale: [why]
**Solution Architecture**
[2-4 paragraphs: approach, component interactions, data flows. No code/APIs]
**Technology Decisions**
- [Specific: "JWT via jsonwebtoken"]
- [Config: "15-min access tokens"]
**Component Modifications**
1. [Component] (path): [change]
**New Components**
1. [Component] (path): [purpose]
**Task Sequence**
**Phase 1: [Name]**
1. [Task]
Dependencies: None
**Phase 2: [Name]**
2. [Task]
Dependencies: Phase 1
**Integration Points**
- [System]: [connection, changes]
**Testing Strategy**
Unit: [what] | Integration: [what] | E2E: [flows] | Edge: [boundaries]
**Risks & Mitigation**
[Risk] → [mitigations]
## Example
**User**: "Create user authentication"
**Your Questions**:
```
## Q1: Token Storage
**Decision**: Where to store JWT refresh tokens?
**Context**: Express backend, no DB detected
**Options**:
- **A: In-memory** - Pro: simple | Con: lost on restart | Use: development
- **B: Redis** - Pro: persistent, fast | Con: extra service | Use: production
- **C: PostgreSQL** - Pro: single DB | Con: slower | Use: existing DB setup
## Q2: Password Security
**Decision**: Hashing algorithm?
**Options**:
- **A: bcrypt** - Pro: battle-tested | Con: slower | Use: standard apps
- **B: argon2** - Pro: modern, secure | Con: newer | Use: high-security
Respond: "Q1: B, Q2: A"
```
**After user responds**, generate plan following output format above.

@@ -0,0 +1,314 @@
---
description: Execute surgical code refactors using single, named refactoring patterns with test verification.
---
# Lightweight Code Refactor
Quick, focused code improvements with behavior preservation.
**Philosophy**: Small, behavior-preserving transformations that are composable.
**Scope limit**: Single named refactoring pattern. For composed refactorings, use /spec.
## Your Role
You are a **senior software engineer** tasked with refactoring code. Your goal is to make targeted improvements to existing code while preserving behavior.
**WAIT for user input before proceeding to plan the refactor.**
# Stage 1: Name Pattern & Assess Complexity
**Actions**:
1. **Name the refactoring pattern(s)**
- Examples: Extract Method, Rename Variable, Move Field, Inline Function, Replace Conditional with Polymorphism, etc.
- Can you name it with a single, specific refactoring?
2. **Count mechanical steps** required
- Each refactoring has defined steps (usually 3-8)
- Example: Extract Method = 4 steps (create new method, copy code, replace with call, test)
3. **Count touch points** (locations that need changing)
- Not files, but specific code locations (function calls, variable references, etc.)
- Example: Renaming a function used in 12 places = 12 touch points
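For example, a rough touch-point count for a rename (the function name and directories are illustrative):
```bash
# Occurrences of the symbol across source and tests approximate the touch-point count
grep -rn "calculate_discount" lib/ test/ | wc -l
```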
**Complexity gates**:
```
If multiple distinct patterns detected:
❌ This requires {Pattern A} + {Pattern B}
These should be separate refactorings.
Recommendation: Let's do {Pattern A} first, then {Pattern B}?
```
```
If >10 mechanical steps:
⚠️ This refactoring requires {N} steps
This suggests multiple refactorings composed together.
Recommendation: Break into smaller refactorings or use /spec
```
```
If >15 touch points:
⚠️ This affects {N} locations across the codebase
High touch point count = increased risk
Recommendation: Proceed with caution OR use /spec for better planning
```
**Programming construct classification** (Structured Chain-of-Thought):
- **Sequence**: Linear transformations, single-path changes
- Patterns: Rename, Move, Change Signature, Replace Type Code
- Reasoning: "Change flows through {A} → {B} → {C}"
- **Branch**: Conditional logic improvements
- Patterns: Consolidate Conditional Expression, Replace Conditional with Polymorphism, Decompose Conditional
- Reasoning: "Current logic has {N} branches, simplify to {M} branches"
- **Loop**: Iteration pattern improvements
- Patterns: Replace Loop with Pipeline, Extract Loop
- Reasoning: "Loop iterates over {X}, can use {functional pattern}"
**Output analysis**:
```
Refactoring: {Pattern name}
Construct type: {Sequence/Branch/Loop}
Mechanical steps: {N}
Touch points: {N locations}
```
**If all gates pass**: Proceed to Stage 2
**If any gate triggers**: Recommend decomposition or /spec
# Stage 2: Load Context & Establish Baseline
**Read files**:
1. Target file(s) to refactor
2. Related test file(s)
**Run baseline tests**:
```bash
{test_command for affected modules}
```
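For the Elixir/Phoenix examples later in this file, the baseline run might look like this (test paths are illustrative; use the project's own test runner):
```bash
# Run only the tests for the affected modules
mix test test/app/user_test.exs test/app/pricing_test.exs
```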
**Check**:
- [ ] Tests currently GREEN
- [ ] No compilation errors
**If tests fail**:
```
⚠️ Baseline tests failing. Fix these first:
{failing test list}
```
**If no tests**:
```
⚠️ No tests found for {target}
Proceed without test coverage? (y/n)
```
# Stage 3: Preview Changes & Quality Check
**Describe refactor**:
```
Refactoring: {Pattern name} ({Sequence/Branch/Loop})
Mechanical steps: {N}
Touch points: {N locations across M files}
Mechanics:
1. {Step 1 description}
2. {Step 2 description}
...
{N}. {Step N description}
Files affected:
- {file1}: {what changes}
- {file2}: {what changes}
```
**Quality checkpoint**:
- **Behavior preservation**: Will this maintain existing behavior? {Yes/No + reasoning}
- **Complexity**: Does this reduce or maintain complexity? {Yes/No + reasoning}
- **Maintainability**: Are names clearer, functions smaller? {Yes/No + reasoning}
- **Risk**: Any edge cases not covered by existing tests? {Low/Medium/High + what cases}
**Approval**: `Proceed? (y/n)`
Wait for confirmation.
# Stage 4: Execute & Verify
**Make changes** using Edit tool for each file.
**Run tests after changes**:
```bash
{test_command for affected modules}
```
**Verification**:
- [ ] All tests GREEN
- [ ] No new warnings
**On success**:
```
✓ Refactor complete
Tests: {X} passing
```
**On failure**:
```
✗ Tests failed after refactor
{error output}
Analyzing failure...
```
**Reflection and fix** (Reflexion pattern):
1. **Why did tests break?** {Analyze error messages and stack traces}
2. **What assumption was wrong?** {Identify incorrect reasoning about behavior}
3. **Root cause**: {Specific technical reason - e.g., "Missed null handling in branch X"}
4. **Fix strategy**: {What needs to change to make tests pass}
**Apply fix** based on reflection:
- Make corrective changes to address root cause
- Re-run tests
- If tests pass: Success ✓
- If tests still fail: Iterate reflection → fix (max 2 additional attempts)
**If unable to fix after 3 attempts**:
```
Unable to complete refactor after analysis.
Root cause: {technical reason}
Recommendation: {what needs investigation OR revert and try different approach}
Revert changes? (y/n)
```
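If the user opts to revert, a minimal sketch, assuming the refactor edits were not committed:
```bash
# Discard uncommitted edits to tracked files made during the refactor attempt
git restore .
```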
---
**Common refactoring patterns supported**:
- **Extract Method/Function** - Pull code into new function
- **Inline Method/Function** - Replace call with method body
- **Rename Variable/Function/Class** - Change name throughout codebase
- **Move Method/Field** - Relocate to different class/module
- **Replace Type Code** - Change primitive type to more semantic type
- **Consolidate Conditional Expression** - Combine related conditionals
- **Decompose Conditional** - Extract complex conditional logic
- **Replace Loop with Pipeline** - Convert imperative to functional
- **Change Function Signature** - Modify parameters/return type
- **Encapsulate Variable** - Add getter/setter for field access
**Not supported** (use /spec):
- Multiple composed refactorings (do them sequentially instead)
- >10 mechanical steps or >15 touch points
- Database migrations requiring schema changes
- Adding new dependencies
- API contract changes affecting external consumers
- Architectural changes (moving between layers, changing patterns)
---
## Example Workflows
### Example 1: Replace Type Code (Sequence)
**User**: "Change `status()` return type from integer to boolean in the User module"
**Stage 1**:
```
Refactoring: Replace Type Code (Sequence)
Mechanical steps: 5
- Find all uses of type code
- Create boolean field
- Update references to use boolean
- Update tests
- Remove old integer field
Touch points: 8 locations (1 definition, 4 call sites, 3 test assertions)
```
✓ Single pattern, ≤10 steps, ≤15 touch points → Proceed
**Stage 2**:
- Read: lib/app/user.ex, test/app/user_test.exs
- Baseline tests: ✓ 15 passing
**Stage 3**:
```
Refactoring: Replace Type Code (Sequence)
Mechanical steps: 5
Touch points: 8 locations across 2 files
Mechanics:
1. Create boolean field alongside integer field
2. Update status() to return boolean based on integer
3. Update all call sites to expect boolean
4. Update test assertions (3 locations)
5. Remove integer field (deprecation path)
Files affected:
- lib/app/user.ex: Change status field type and getter
- test/app/user_test.exs: Update 3 test assertions
Quality:
- Behavior preservation: Yes (0/1 → false/true mapping preserved)
- Complexity: Maintained (same logic)
- Maintainability: Yes (bool more semantic than int)
- Risk: Low (all usages covered by tests)
```
User confirms → Proceed
**Stage 4**:
- Changes applied ✓
- Tests: 15 passing ✓
- **Complete**
---
### Example 2: Decompose Conditional (Branch)
**User**: "Simplify the nested if/else in `calculate_discount/1`"
**Stage 1**:
```
Refactoring: Decompose Conditional (Branch)
Mechanical steps: 4
- Extract condition into named function
- Extract then-part into function
- Extract else-part into function
- Replace with calls
Touch points: 1 function with 3 nested conditionals
```
✓ Single pattern, ≤10 steps, ≤15 touch points → Proceed
**Stage 2**:
- Read: lib/app/pricing.ex, test/app/pricing_test.exs
- Baseline tests: ✓ 8 passing
**Stage 3**:
```
Refactoring: Decompose Conditional (Branch)
Mechanical steps: 4
Touch points: 1 function across 1 file
Mechanics:
1. Extract nested conditions into guard clauses
2. Replace if/else structure with pattern matching
3. Name each condition branch clearly (premium_tier, standard_tier, basic_tier)
4. Test each branch path
Files affected:
- lib/app/pricing.ex: Refactor calculate_discount/1
Quality:
- Behavior preservation: Yes (same discount logic, clearer structure)
- Complexity: Reduced (3 nesting levels → 0, cyclomatic complexity 8 → 3)
- Maintainability: Yes (guard clauses show intent, easier to add new tiers)
- Risk: Low (same 8 test cases cover all branches)
```
User confirms → Proceed
**Stage 4**:
- Changes applied ✓
- Tests: 8 passing ✓
- **Complete**
---
**Note**: This template is optimized using research-backed principles: Structured Chain-of-Thought (SCoT, +13.79%), Reflexion self-reflection loops (91% on HumanEval), multi-stage workflows (superior to single-shot), ADIHQ quality checkpoints (+64%), and minimal token usage.

@@ -0,0 +1,142 @@
---
description: Create or update the feature specification from a natural language feature description.
---
**ALWAYS WAIT for user input before generating the spec.**
Ask the user for a brief description of the feature they want to specify BEFORE generating the spec.
# Interactive Specification Generator
## Role
Senior product requirements analyst translating feature requests into clear, actionable specifications. Define WHAT and WHY, not HOW.
## Process Flow
### Stage 0: Branch Name (FIRST)
Generate a **2-3 word, kebab-case branch name** from the user's requirement:
- **Good**: `user-auth`, `pdf-export`, `real-time-sync`
- **Bad**: `authentication-system-with-jwt`, `feature`, `new-feature`
Store as `$BRANCH_NAME` for Stage 4.
### Stage 1: Analyze & Clarify
1. Identify critical ambiguities: scope > security/privacy > UX > technical
2. Ask 2-5 focused questions with concrete options
3. Show impact of each option
4. Wait for responses
**Question Template:**
```
## Q[N]: [Topic]
**Need to know**: [Specific question]
**Options**:
- A: [Description] → Impact: [Consequence]
- B: [Description] → Impact: [Consequence]
- Custom: [Your preference]
```
### Stage 2: Generate Draft
Create specification using Output Format (below) based on user answers.
### Stage 3: Iterate
Ask: "Does this capture what you need? What should I adjust?"
Refine until approved.
### Stage 4: Finalize
**ONLY after user approval:**
1. Write approved spec to temporary file `/tmp/spec-draft.md`
2. Execute save script:
```bash
bash ${CLAUDE_PLUGIN_ROOT}/.specimin/save-spec.sh "$USER_REQUIREMENT" "$BRANCH_NAME" /tmp/spec-draft.md
```
3. Parse JSON output and confirm to user:
"✓ Specification saved to `[spec_path]` on branch `[branch_name]`"
## Output Format
**Objective**: [What needs accomplishing]
**Context**: [Why needed, business impact]
**Assumptions**: [Reasonable defaults]
**Constraints**: [Technical and business limitations]
**Acceptance Criteria**: [Verifiable, testable conditions]
**User Scenarios**: [Step-by-step flows with expected outcomes]
**Edge Cases**: [Boundary conditions]
**Dependencies** *(if applicable)*: [External requirements]
**Out of Scope**: [Explicitly excluded]
## Requirements
**Include:**
- Clear objectives and constraints
- Testable acceptance criteria (measurable, technology-agnostic)
- Realistic user scenarios
- Explicit scope boundaries
- Documented assumptions
**Exclude:**
- Technology choices (databases, frameworks, languages)
- API designs or code structure
- Implementation algorithms
**Good**: "Users complete checkout in under 3 minutes"
**Bad**: "API response time under 200ms" (too technical)
## Example
**User**: "Users should stay logged in when they close and reopen the browser"
**Objective**
Implement persistent user authentication across browser sessions.
**Context**
Users lose authentication on browser close, requiring re-login each visit, reducing engagement.
**Assumptions**
- Standard web security practices apply
- Session duration configurable by administrators
- Users expect multi-day persistence unless explicitly logging out
- Browser storage mechanisms available
**Constraints**
- Must integrate with existing authentication system
- Must follow security best practices for credential storage
- Session duration must be configurable
- Must handle expiration gracefully
**Acceptance Criteria**
- User remains authenticated after browser close/reopen
- User prompted to re-authenticate after session expires
- User can explicitly log out to end session
- Works across major browsers (Chrome, Firefox, Safari, Edge)
**User Scenarios**
1. Returning user: Login → Close browser → Reopen → Still authenticated
2. Session expiration: Login → Wait past duration → Prompted to re-login
3. Explicit logout: Authenticated → Logout → Close/reopen → Must login
**Edge Cases**
- Multiple simultaneous sessions (different devices/windows)
- Session expiration during active use
- Browser storage unavailable or cleared
- User switches between devices
**Dependencies**
- Existing authentication system must expose session management APIs
**Out of Scope**
- Cross-device session synchronization
- "Remember this device" functionality
- Biometric authentication

@@ -0,0 +1,305 @@
---
description: Generate atomic implementation tasks from high-level plans, providing coding agents with clear, actionable work items.
---
# Interactive Implementation Task Generator
Decompose technical plans into atomic, executable tasks following Test-Driven Development.
# Stage 1: Load Context
**Actions**:
1. Run `git rev-parse --abbrev-ref HEAD` to get current branch
2. Verify `.specimin/plans/{branch}/` exists. If not: `Error: Specimin not initialized. Run /init first.`
3. Read `.specimin/plans/{branch}/plan.md` (high-level plan)
4. Read `.specimin/plans/{branch}/spec.md` (feature specification)
**Context Extraction Goals**:
- Component/module names and boundaries
- Data structures and models
- Integration points with existing code
- Technical decisions (libraries, patterns, architectures)
- Key function responsibilities
**Error Handling**: If plan.md missing: `Error: No plan.md found for branch '{branch}'. Run /feature.plan first.`
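A minimal bash sketch of the Stage 1 checks, reusing the error messages above:
```bash
# Resolve the current branch and its plan directory
branch=$(git rev-parse --abbrev-ref HEAD)
plan_dir=".specimin/plans/${branch}"
[ -d "$plan_dir" ] || { echo "Error: Specimin not initialized. Run /init first." >&2; exit 1; }
[ -f "$plan_dir/plan.md" ] || { echo "Error: No plan.md found for branch '${branch}'. Run /feature.plan first." >&2; exit 1; }
# Load the high-level plan and the feature specification
cat "$plan_dir/plan.md" "$plan_dir/spec.md"
```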
# Stage 2: Analyze Plan Structure
Identify blocking ambiguities that prevent concrete task generation:
- **Dependencies**: Unclear integration points?
- **Technical choices**: Multiple valid approaches without clear direction?
- **Error handling**: Unaddressed edge cases or failure modes?
- **Data flow**: Unclear inputs/outputs between components?
**Identify 0-5 critical ambiguities only**. Do NOT ask about:
- Coding style, naming conventions, or trivial details
- Implementation details you can reasonably infer
- Non-blocking uncertainties
**Vague Plan Detection**: If plan lacks component structure, data flow, or function-level detail:
```
Warning: Plan too vague to generate atomic tasks.
Missing: Component breakdown, data structures, integration points, function responsibilities.
Add detail to plan.md, then rerun /cmd.implement.
```
**Checkpoint**: Verify you understand all technical decisions and component boundaries before proceeding.
# Stage 3: Clarify Ambiguities (If Needed)
If ambiguities were found, ask focused questions (2 concrete options + "Custom"), at most 5 (prefer fewer). Focus on plan-level decisions, not code details. Wait for the user's response.
If no ambiguities, skip to Stage 4.
# Stage 4: Generate Atomic Tasks
Decompose plan into **function-level atomic tasks** following TDD.
## Atomicity Definition
**Atomic task = one file or one clear unit of work**, aligned with code structures:
- **Sequence**: One file/module (schema, controller, service, migration)
- **Branch**: Decision logic (validation, error handling, conditional flows)
- **Loop**: Iteration logic (batch processing, collection operations)
**Examples**:
- ✅ Good: "Create User schema in lib/app/accounts/user.ex"
- ❌ Too broad: "Implement authentication" → Split into specific files
- ❌ Too granular: "Add variable for user ID" → Implementation detail
## Task Format
```markdown
- [ ] T{XXX} [P?] {Verb + what + exact file path} (R{XX})
```
**Elements**:
- `- [ ]`: Checkbox for completion tracking
- `T001, T002...`: Sequential task ID across all phases
- `[P]`: Optional parallel marker (different files, no dependencies)
- `Verb`: Create, Write, Implement, Verify, Add, Update
- `(R01)`: Spec requirement mapping
## TDD Task Pattern (MANDATORY)
**Use for every feature/function**:
```markdown
- [ ] TXXX Write test for [capability] expecting [specific behavior] in test/[path]_test.exs (RXX)
- [ ] TXXX Run test and confirm RED (fails with expected error message)
- [ ] TXXX Implement [function] with minimal code to pass in lib/[path].ex (RXX)
- [ ] TXXX Run test and confirm GREEN (passes)
```
**Rules**:
1. Test task always precedes implementation task
2. Explicit verification tasks for RED and GREEN phases
3. No exceptions - this is a constitution mandate
## Complete TDD Examples
**Example 1: Simple CRUD Feature (User Creation)**
```markdown
## Phase 1: Foundation
- [ ] T001 Create migration for users table in priv/repo/migrations/20250106_create_users.exs (R01)
- [ ] T002 [P] Create User schema in lib/app/accounts/user.ex (R01)
- [ ] T003 Write test for user creation expecting {:ok, %User{}} in test/app/accounts_test.exs (R02)
- [ ] T004 Run test and confirm RED (Accounts.create_user/1 undefined)
- [ ] T005 Implement create_user/1 in lib/app/accounts.ex (R02)
- [ ] T006 Run test and confirm GREEN
## Phase 2: Validation
- [ ] T007 Write test for email validation expecting {:error, changeset} in test/app/accounts_test.exs (R03)
- [ ] T008 Run test and confirm RED (validation not enforced)
- [ ] T009 Add email validation to User changeset in lib/app/accounts/user.ex (R03)
- [ ] T010 Run test and confirm GREEN
```
**Example 2: Complex Integration (Payment Processing)**
```markdown
## Phase 1: External Service Setup
- [ ] T001 [P] Create Stripe client module in lib/app/payments/stripe_client.ex (R01)
- [ ] T002 [P] Add Stripe configuration to config/config.exs (R01)
- [ ] T003 Write test for charge creation expecting {:ok, %Charge{}} in test/app/payments_test.exs (R02)
- [ ] T004 Run test and confirm RED (Payments.create_charge/2 undefined)
- [ ] T005 Implement create_charge/2 with Stripe API call in lib/app/payments.ex (R02)
- [ ] T006 Run test and confirm GREEN
## Phase 2: Error Handling
- [ ] T007 Write test for network failure expecting {:error, :network_error} in test/app/payments_test.exs (R03)
- [ ] T008 Run test and confirm RED (error not caught)
- [ ] T009 Add retry logic with exponential backoff in lib/app/payments.ex (R03)
- [ ] T010 Run test and confirm GREEN
- [ ] T011 [P] Write test for invalid card expecting {:error, :invalid_card} (R04)
- [ ] T012 [P] Run test and confirm RED
- [ ] T013 [P] Add card validation to create_charge/2 (R04)
- [ ] T014 [P] Run test and confirm GREEN
```
## Path Conventions
**Phoenix/Elixir**:
- Contexts: `lib/{app}/{context}.ex`
- Schemas: `lib/{app}/{context}/{schema}.ex`
- LiveViews: `lib/{app}_web/live/{feature}_live.ex`
- Controllers: `lib/{app}_web/controllers/{name}_controller.ex`
- Tests: `test/{app}/{context}_test.exs`
- Migrations: `priv/repo/migrations/{timestamp}_{desc}.exs`
**For other projects**: Use conventions from plan.md.
## Phase Organization
Group tasks into phases from plan.md. Standard phases:
1. **Foundation**: Migrations, schemas, contexts, core tests
2. **Integration**: External services, APIs, third-party libraries
3. **Interface**: LiveViews, controllers, user-facing components
4. **Polish**: Validation, edge cases, error handling
**Dependency rules**: migrations → schemas → contexts → interfaces
**Parallel opportunities**: Mark `[P]` when tasks touch different files and have no dependencies.
**Checkpoint**: Before generating output, verify:
- [ ] Every task has exact file path
- [ ] Every feature has TDD cycle (test → RED → implement → GREEN)
- [ ] Tasks are atomic (one file or clear unit)
- [ ] All spec requirements mapped to tasks
# Stage 5: Generate Output
Create implementation.md with this structure:
```markdown
# Implementation Tasks: {Feature Name}
**Overview**: {1-2 sentence summary}
**Total Tasks**: {N} | **Phases**: {M} | **Estimated Completion**: {Time estimate}
---
## Phase 1: {Name}
**Dependencies**: None
**Parallel Opportunities**: {Count of [P] tasks}
- [ ] T001 {Task description with file path} (R01)
[All phase 1 tasks following TDD pattern]
---
## Phase 2: {Name}
**Dependencies**: Phase 1 complete
**Parallel Opportunities**: {Count}
- [ ] T007 {Task description with file path} (R04)
[All phase 2 tasks]
---
## Spec Requirement Mapping
- R01: Tasks T001, T002
- R02: Tasks T003-T006
[Complete mapping]
---
## Critical Dependencies
{Sequential dependencies that block other work}
---
## Notes
{Integration points, assumptions, clarifications}
```
Present draft: `I've generated {N} tasks in {M} phases. Reply "Approved" to save, or provide feedback.`
Wait for approval. Support iteration. Never merge test and implementation tasks during revision.
# Stage 6: Save Tasks
Once approved:
1. Run `git rev-parse --abbrev-ref HEAD`
2. Save to `.specimin/plans/{branch}/implementation.md`
3. Confirm: `Implementation tasks saved to .specimin/plans/{branch}/implementation.md`
4. **Proceed immediately to Stage 7**
Only save after explicit approval.
# Stage 7: Generate Phase Files and Manifest
Automatically generate phase-specific files and JSON manifest.
## Step 1: Create Directory
```bash
mkdir -p .specimin/plans/{branch}/tasks/
```
## Step 2: Parse Implementation.md
1. Read `.specimin/plans/{branch}/implementation.md`
2. Identify phase boundaries: `## Phase N:` headers
3. Extract for each phase:
- Phase number and name
- Dependencies and parallel opportunities
- All task lines (`- [ ] TXXX ...`)
4. For each task capture:
- Task ID (T001, T002...)
- Full description
- Phase number
- Parallel marker ([P] present?)
## Step 3: Generate Phase Files
For each phase, create `phase_N.md`:
**File**: `.specimin/plans/{branch}/tasks/phase_1.md`
**Content**:
```markdown
# Phase N: {Phase Name}
**Dependencies**: {From implementation.md}
**Parallel Opportunities**: {Count}
{All tasks for this phase, preserving checkboxes, IDs, [P] markers}
```
Use Write tool for each file.
## Step 4: Generate JSON Manifest
**Schema** (4 fields per task):
```json
[
{
"id": "T001",
"description": "Create User schema in lib/app/accounts/user.ex (R01)",
"phase": 1,
"status": "pending"
}
]
```
**Escaping** (standard JSON string escaping): `"` → `\"`, `\` → `\\`, newline → `\n`, tab → `\t`
Save to `.specimin/plans/{branch}/tasks/manifest.json`
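One mechanical way to produce the manifest, assuming `jq` is available (jq applies the escaping rules above automatically); otherwise construct the JSON directly:
```bash
#!/usr/bin/env bash
# Sketch: derive tasks/manifest.json from the task lines in implementation.md
set -euo pipefail
branch=$(git rev-parse --abbrev-ref HEAD)
impl=".specimin/plans/${branch}/implementation.md"
out=".specimin/plans/${branch}/tasks/manifest.json"
phase=0
tasks='[]'
while IFS= read -r line; do
  case "$line" in
    "## Phase "*)
      phase=${line#"## Phase "}   # e.g. "1: Foundation"
      phase=${phase%%:*}          # keep just the number
      ;;
    "- [ ] T"*)
      rest=${line#"- [ ] "}       # "T001 [P] Create User schema ... (R01)"
      id=${rest%% *}              # "T001"
      desc=${rest#"$id "}
      desc=${desc#"[P] "}         # drop optional parallel marker
      # jq handles quote, backslash, newline, and tab escaping
      tasks=$(jq --arg id "$id" --arg d "$desc" --argjson p "$phase" \
        '. + [{id: $id, description: $d, phase: $p, status: "pending"}]' <<<"$tasks")
      ;;
  esac
done < "$impl"
printf '%s\n' "$tasks" > "$out"
```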
## Step 5: Edge Cases
- **Single phase**: Still create `phase_1.md` (not `phase.md`)
- **Empty phases**: Skip file creation, note in confirmation
- **No phases**: Display error: `Error: Could not detect phases. Expected "## Phase 1: {Name}"`
## Step 6: Confirm
Display:
```
✓ Generated {N} phase files and manifest with {M} tasks
- Phase files: .specimin/plans/{branch}/tasks/phase_1.md through phase_{N}.md
- Manifest: .specimin/plans/{branch}/tasks/manifest.json
```
If phases skipped: `(skipped empty phase 3)`
---
**Note**: This prompt is optimized using research-backed principles: structured reasoning (ADIHQ, +64%), programming construct framing (SCoT, +13.79%), TDD verification (Reflexion, 91%), token efficiency (-39%), and explicit checkpoints for quality.

@@ -0,0 +1,237 @@
---
description: Squash commits and create a pull request after feature implementation is complete.
---
# Feature Wrap-Up Command
Prepare completed work for code review: squash commits, generate PR description, create pull request.
# Stage 1: Validate Environment
**Actions**:
1. Check uncommitted changes: `git status`
2. Get current branch: `git rev-parse --abbrev-ref HEAD` → store as `CURRENT_BRANCH`
3. Detect main branch (try in order):
```bash
if git show-ref --verify --quiet refs/heads/main; then echo "main"
elif git show-ref --verify --quiet refs/heads/master; then echo "master"
else echo "unknown"
fi
```
Store as `MAIN_BRANCH`. If "unknown", ask user: "What is your main branch name?"
4. Verify branch is not main: If `CURRENT_BRANCH == MAIN_BRANCH`, error and exit
5. Check initialization: Verify `.specimin/plans/{CURRENT_BRANCH}/` exists
6. Read feature context (for PR description generation):
- `.specimin/plans/{CURRENT_BRANCH}/spec.md`
- `.specimin/plans/{CURRENT_BRANCH}/plan.md`
- `.specimin/plans/{CURRENT_BRANCH}/implementation.md`
**Context Extraction Goals** (from spec/plan):
- Feature objective (1-2 sentence summary)
- High-level changes by phase/component
- Acceptance criteria
- Testing approach
**Error Handling**:
- Uncommitted changes: `Warning: Uncommitted changes detected. Commit or stash before wrapping up.` → Exit
- On main branch: `Error: Cannot wrap up main branch. Switch to feature branch first.` → Exit
- Not initialized: `Error: Specimin not initialized. Run /init first.` → Exit
- No commits ahead: `Error: No commits to squash. Branch up to date with {MAIN_BRANCH}.` → Exit
- `gh` not installed: `Error: GitHub CLI not installed. Install: https://cli.github.com/` → Exit
- Not authenticated: `Error: Not authenticated with GitHub CLI. Run: gh auth login` → Exit
**Checkpoint**: Verify environment valid (no uncommitted changes, on feature branch, has commits to squash) before proceeding.
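A quick check for the "commits to squash" condition:
```bash
# Number of commits on this branch that are not on the main branch; 0 means nothing to squash
git rev-list --count "${MAIN_BRANCH}..HEAD"
```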
# Stage 2: Review Changes
Show user what will be squashed:
1. **Commit history**:
```bash
git log {MAIN_BRANCH}..HEAD --oneline
```
2. **Change summary**:
```bash
git diff {MAIN_BRANCH}...HEAD --stat
```
3. **Files changed**:
```bash
git diff {MAIN_BRANCH}...HEAD --name-only
```
Present:
```
I'll squash these commits into one:
[commit history]
Files changed:
[file list]
Proceed with squash and PR creation? (yes/no)
```
Wait for confirmation. If "no", exit gracefully.
# Stage 3: Squash Commits
Once confirmed:
1. **Generate commit message**:
- Use feature objective from spec.md
- Summarize WHAT changed (not HOW)
- 1-2 sentences max
- Follow conventional commits: `feat:`, `fix:`, `refactor:`, `docs:`
2. **Perform squash**:
```bash
# Reset to the point where the branch diverged so commits on {MAIN_BRANCH} are unaffected
git reset --soft "$(git merge-base {MAIN_BRANCH} HEAD)"
git commit -m "{COMMIT_MESSAGE}"
```
**CRITICAL**: No `--author`, no `Co-Authored-By:`, no co-authoring metadata. User authorship only.
3. **Verify squash**:
```bash
git log --oneline -1
```
Confirm only one commit since main branch.
**Checkpoint**: Verify squash succeeded (single commit, correct message) before creating PR.
# Stage 4: Create Pull Request
1. **Generate PR title**:
- Use commit message or feature name from spec
- Under 72 characters
- Clear and descriptive
2. **Generate PR description** (use template below with extracted context):
```markdown
## Summary
{1-2 sentence feature objective from spec.md}
## Changes
{Bulleted list of high-level changes from plan.md phases or implementation.md completed tasks}
## Testing
{Testing approach from plan.md, or manual test scenarios}
## Acceptance Criteria
{Criteria from spec.md}
```
3. **Create PR**:
```bash
gh pr create --title "{PR_TITLE}" --body "$(cat <<'EOF'
{PR_DESCRIPTION}
EOF
)"
```
4. **Display result**:
```
✓ Squashed {N} commits into 1 commit
✓ Created pull request: {PR_URL}
Your feature is ready for review!
```
**No upstream remote**: `gh pr create` will prompt to push if needed.
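Alternatively, push the branch explicitly before creating the PR:
```bash
# Publish the feature branch and set its upstream before opening the PR
git push -u origin "${CURRENT_BRANCH}"
```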
# Complete Examples
## Example 1: Simple CRUD Feature
**Spec objective**: "Add user profile management allowing users to update their name and email"
**Commit message**:
```
feat: add user profile management with update functionality
```
**PR title**:
```
feat: add user profile management
```
**PR description**:
```markdown
## Summary
Add user profile management allowing users to update their name and email through a settings page.
## Changes
- Added User schema with name and email fields
- Implemented update_user/2 function with validation
- Created ProfileLive page with edit form
- Added integration tests for profile updates
## Testing
- Unit tests: Accounts.update_user/2 with valid/invalid inputs
- Integration tests: ProfileLive form submission and validation errors
- Manual: Navigate to /profile, update name/email, verify saved
## Acceptance Criteria
- [x] Users can view current profile information
- [x] Users can update name and email
- [x] Invalid emails show validation errors
- [x] Changes persist after page refresh
```
## Example 2: Complex Integration Feature
**Spec objective**: "Integrate Stripe payment processing with retry logic and webhook handling for subscription management"
**Commit message**:
```
feat: integrate Stripe payment processing with webhook support
```
**PR title**:
```
feat: integrate Stripe payment processing
```
**PR description**:
```markdown
## Summary
Integrate Stripe payment processing with automatic retry logic for failed charges and webhook handlers for subscription lifecycle events.
## Changes
- Added Stripe client module with exponential backoff retry logic
- Implemented payment processing functions (create_charge, refund_charge)
- Created webhook handler for subscription events (created, updated, canceled)
- Added Payment schema and database migrations
- Implemented error handling for network failures and invalid cards
## Testing
- Unit tests: StripeClient module with mocked HTTP responses
- Integration tests: Payment creation flow with test API keys
- Webhook tests: Event handling for all subscription states
- Manual: Create test charge in Stripe dashboard, verify webhook receipt
## Acceptance Criteria
- [x] Charges created successfully with valid cards
- [x] Network failures retry up to 3 times with backoff
- [x] Invalid cards return clear error messages
- [x] Webhooks update subscription status in database
- [x] Payment history visible to users
```
---
**Note**: This prompt is optimized using research-backed principles: token efficiency (-30%), verification checkpoints (Reflexion, 91%), consolidated examples (2-5 is optimal), explicit context extraction (CGRAG, 4x improvement), and minimal preamble (GPT-5-Codex guidance).

commands/init.md (new file, 87 lines)

@@ -0,0 +1,87 @@
---
description: Initialize Specimin in the current project by creating the required directory structure.
---
# Specimin Initialization Command
## Purpose
This command bootstraps the Specimin directory structure in your project, enabling the use of `/spec`, `/feature.plan`, `/implement`, and `/wrap` commands for feature development workflow.
## Workflow
### Step 1: Validate Git Repository
First, verify the current directory is a git repository:
```bash
git rev-parse --git-dir
```
If the command fails (exit code != 0), display error and exit:
```
Error: Current directory is not a git repository.
Specimin requires git for version control and branch management.
Initialize git first:
git init
```
### Step 2: Check Existing Installation
Check if Specimin is already initialized:
```bash
if [ -d ".specimin/plans/" ]; then
echo "Specimin is already initialized in this project."
exit 0
fi
```
### Step 3: Create Directory Structure
Create the required directory structure:
```bash
mkdir -p .specimin/plans
```
### Step 4: Confirm Success
Display success message:
```
✓ Specimin initialized successfully!
Created directory structure:
.specimin/plans/
You can now use:
/spec - Create feature specifications
/feature.plan - Generate implementation plans
/implement - Break down plans into tasks
/wrap - Squash commits and create PR
Get started:
Run /spec to create your first feature specification
```
## Notes
- This command is **idempotent** - safe to run multiple times
- Creates only the `.specimin/plans/` directory structure
- Does not modify any existing files or git configuration
- The `.specimin/plans/` directory will contain feature-specific subdirectories (one per branch)
- Each feature branch will have its own directory at `.specimin/plans/{branch-name}/`
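For reference, a populated layout after running the full workflow on one feature branch might look like this (the branch name is an example):
```
.specimin/
└── plans/
    └── user-auth/
        ├── spec.md
        ├── plan.md
        ├── implementation.md
        └── tasks/
            ├── phase_1.md
            ├── phase_2.md
            └── manifest.json
```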
## Error Handling
**Not a git repository**: Must be run in a directory with `.git/` folder
**Permission denied**: Ensure write permissions in the current directory
## Future Enhancements
Consider adding:
- `.gitignore` entry for temporary plan files (if needed)
- Configuration file (`.specimin/config.json`) for user preferences
- Template files for spec/plan structure