Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:54:38 +08:00
commit fffaa45e39
76 changed files with 14220 additions and 0 deletions

View File

@@ -0,0 +1,81 @@
---
name: brainstorming
description: Use when creating or developing, before writing code or implementation plans - refines rough ideas into fully-formed designs through collaborative questioning, alternative exploration, and incremental validation. Don't use during clear 'mechanical' processes
---
# Brainstorming Ideas Into Designs
## Overview
Help turn ideas into fully formed designs and specs through natural collaborative dialogue.
Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design in small sections (200-300 words), checking after each section whether it looks right so far.
## The Process
**Understanding the idea:**
- Check out the current project state first (files, docs, recent commits)
- Ask questions one at a time to refine the idea
- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria
**Exploring approaches:**
- Propose 2-3 different approaches with trade-offs
- Present options conversationally with your recommendation and reasoning
- Lead with your recommended option and explain why
**Presenting the design:**
- Once you believe you understand what you're building, present the design
- Break it into sections of 200-300 words
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing
- Be ready to go back and clarify if something doesn't make sense
## After the Design
**Documentation:**
- Write the validated design to `docs/plans/YYYY-MM-DD-<topic>-design.md`
- Use elements-of-style:writing-clearly-and-concisely skill if available
- Commit the design document to git
**Implementation (if continuing):**
- Ask: "Ready to set up for implementation?"
- Use superpowers:using-git-worktrees to create isolated workspace
- Use superpowers:writing-plans to create detailed implementation plan
### Next Steps
After design is complete, prompt user:
```
Design complete! Ready to:
A) Write the plan
B) Research first (gather codebase insights, library docs, best practices)
Choose: (A/B)
```
**If user chooses A:**
- Proceed directly to `writing-plans` skill
**If user chooses B:**
- Invoke `research-orchestration` skill
- Research skill will:
- Analyze brainstorm context
- Suggest researchers: `[✓] Codebase [✓] Library docs [✓] Web [ ] GitHub`
- Allow user to adjust selection
- Spawn selected subagents (max 4 in parallel)
- Synthesize findings
- Automatically save to `YYYY-MM-DD-<feature>-research.md`
- Report: "Research complete. Ready to write the plan."
- Then proceed to `writing-plans` skill with research context
## Key Principles
- **One question at a time** - Don't overwhelm with multiple questions
- **Multiple choice preferred** - Easier to answer than open-ended when possible
- **YAGNI ruthlessly** - Remove unnecessary features from all designs
- **Explore alternatives** - Always propose 2-3 approaches before settling
- **Incremental validation** - Present design in sections, validate each
- **Be flexible** - Go back and clarify when something doesn't make sense

View File

@@ -0,0 +1,120 @@
---
name: condition-based-waiting
description: Use when tests have race conditions, timing dependencies, or inconsistent pass/fail behavior - replaces arbitrary timeouts with condition polling to wait for actual state changes, eliminating flaky tests from timing guesses
---
# Condition-Based Waiting
## Overview
Flaky tests often guess at timing with arbitrary delays. This creates race conditions where tests pass on fast machines but fail under load or in CI.
**Core principle:** Wait for the actual condition you care about, not a guess about how long it takes.
## When to Use
```dot
digraph when_to_use {
"Test uses setTimeout/sleep?" [shape=diamond];
"Testing timing behavior?" [shape=diamond];
"Document WHY timeout needed" [shape=box];
"Use condition-based waiting" [shape=box];
"Test uses setTimeout/sleep?" -> "Testing timing behavior?" [label="yes"];
"Testing timing behavior?" -> "Document WHY timeout needed" [label="yes"];
"Testing timing behavior?" -> "Use condition-based waiting" [label="no"];
}
```
**Use when:**
- Tests have arbitrary delays (`setTimeout`, `sleep`, `time.sleep()`)
- Tests are flaky (pass sometimes, fail under load)
- Tests timeout when run in parallel
- Waiting for async operations to complete
**Don't use when:**
- Testing actual timing behavior (debounce, throttle intervals)
- Always document WHY if using arbitrary timeout
## Core Pattern
```typescript
// ❌ BEFORE: Guessing at timing
await new Promise(r => setTimeout(r, 50));
const result = getResult();
expect(result).toBeDefined();
// ✅ AFTER: Waiting for condition
await waitFor(() => getResult() !== undefined);
const result = getResult();
expect(result).toBeDefined();
```
## Quick Patterns
| Scenario | Pattern |
|----------|---------|
| Wait for event | `waitFor(() => events.find(e => e.type === 'DONE'))` |
| Wait for state | `waitFor(() => machine.state === 'ready')` |
| Wait for count | `waitFor(() => items.length >= 5)` |
| Wait for file | `waitFor(() => fs.existsSync(path))` |
| Complex condition | `waitFor(() => obj.ready && obj.value > 10)` |
## Implementation
Generic polling function:
```typescript
async function waitFor<T>(
condition: () => T | undefined | null | false,
description: string,
timeoutMs = 5000
): Promise<T> {
const startTime = Date.now();
while (true) {
const result = condition();
if (result) return result;
if (Date.now() - startTime > timeoutMs) {
throw new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`);
}
await new Promise(r => setTimeout(r, 10)); // Poll every 10ms
}
}
```
See @example.ts for complete implementation with domain-specific helpers (`waitForEvent`, `waitForEventCount`, `waitForEventMatch`) from actual debugging session.
## Common Mistakes
**❌ Polling too fast:** `setTimeout(check, 1)` - wastes CPU
**✅ Fix:** Poll every 10ms
**❌ No timeout:** Loop forever if condition never met
**✅ Fix:** Always include timeout with clear error
**❌ Stale data:** Cache state before loop
**✅ Fix:** Call getter inside loop for fresh data
## When Arbitrary Timeout IS Correct
```typescript
// Tool ticks every 100ms - need 2 ticks to verify partial output
await waitForEvent(manager, 'TOOL_STARTED'); // First: wait for condition
await new Promise(r => setTimeout(r, 200)); // Then: wait for timed behavior
// 200ms = 2 ticks at 100ms intervals - documented and justified
```
**Requirements:**
1. First wait for triggering condition
2. Based on known timing (not guessing)
3. Comment explaining WHY
## Real-World Impact
From debugging session (2025-10-03):
- Fixed 15 flaky tests across 3 files
- Pass rate: 60% → 100%
- Execution time: 40% faster
- No more race conditions

View File

@@ -0,0 +1,158 @@
// Complete implementation of condition-based waiting utilities
// From: Lace test infrastructure improvements (2025-10-03)
// Context: Fixed 15 flaky tests by replacing arbitrary timeouts
import type { ThreadManager } from '~/threads/thread-manager';
import type { LaceEvent, LaceEventType } from '~/threads/types';
/**
* Wait for a specific event type to appear in thread
*
* @param threadManager - The thread manager to query
* @param threadId - Thread to check for events
* @param eventType - Type of event to wait for
* @param timeoutMs - Maximum time to wait (default 5000ms)
* @returns Promise resolving to the first matching event
*
* Example:
* await waitForEvent(threadManager, agentThreadId, 'TOOL_RESULT');
*/
export function waitForEvent(
threadManager: ThreadManager,
threadId: string,
eventType: LaceEventType,
timeoutMs = 5000
): Promise<LaceEvent> {
return new Promise((resolve, reject) => {
const startTime = Date.now();
const check = () => {
const events = threadManager.getEvents(threadId);
const event = events.find((e) => e.type === eventType);
if (event) {
resolve(event);
} else if (Date.now() - startTime > timeoutMs) {
reject(new Error(`Timeout waiting for ${eventType} event after ${timeoutMs}ms`));
} else {
setTimeout(check, 10); // Poll every 10ms for efficiency
}
};
check();
});
}
/**
* Wait for a specific number of events of a given type
*
* @param threadManager - The thread manager to query
* @param threadId - Thread to check for events
* @param eventType - Type of event to wait for
* @param count - Number of events to wait for
* @param timeoutMs - Maximum time to wait (default 5000ms)
* @returns Promise resolving to all matching events once count is reached
*
* Example:
* // Wait for 2 AGENT_MESSAGE events (initial response + continuation)
* await waitForEventCount(threadManager, agentThreadId, 'AGENT_MESSAGE', 2);
*/
export function waitForEventCount(
threadManager: ThreadManager,
threadId: string,
eventType: LaceEventType,
count: number,
timeoutMs = 5000
): Promise<LaceEvent[]> {
return new Promise((resolve, reject) => {
const startTime = Date.now();
const check = () => {
const events = threadManager.getEvents(threadId);
const matchingEvents = events.filter((e) => e.type === eventType);
if (matchingEvents.length >= count) {
resolve(matchingEvents);
} else if (Date.now() - startTime > timeoutMs) {
reject(
new Error(
`Timeout waiting for ${count} ${eventType} events after ${timeoutMs}ms (got ${matchingEvents.length})`
)
);
} else {
setTimeout(check, 10);
}
};
check();
});
}
/**
* Wait for an event matching a custom predicate
* Useful when you need to check event data, not just type
*
* @param threadManager - The thread manager to query
* @param threadId - Thread to check for events
* @param predicate - Function that returns true when event matches
* @param description - Human-readable description for error messages
* @param timeoutMs - Maximum time to wait (default 5000ms)
* @returns Promise resolving to the first matching event
*
* Example:
* // Wait for TOOL_RESULT with specific ID
* await waitForEventMatch(
* threadManager,
* agentThreadId,
* (e) => e.type === 'TOOL_RESULT' && e.data.id === 'call_123',
* 'TOOL_RESULT with id=call_123'
* );
*/
export function waitForEventMatch(
threadManager: ThreadManager,
threadId: string,
predicate: (event: LaceEvent) => boolean,
description: string,
timeoutMs = 5000
): Promise<LaceEvent> {
return new Promise((resolve, reject) => {
const startTime = Date.now();
const check = () => {
const events = threadManager.getEvents(threadId);
const event = events.find(predicate);
if (event) {
resolve(event);
} else if (Date.now() - startTime > timeoutMs) {
reject(new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`));
} else {
setTimeout(check, 10);
}
};
check();
});
}
// Usage example from actual debugging session:
//
// BEFORE (flaky):
// ---------------
// const messagePromise = agent.sendMessage('Execute tools');
// await new Promise(r => setTimeout(r, 300)); // Hope tools start in 300ms
// agent.abort();
// await messagePromise;
// await new Promise(r => setTimeout(r, 50)); // Hope results arrive in 50ms
// expect(toolResults.length).toBe(2); // Fails randomly
//
// AFTER (reliable):
// ----------------
// const messagePromise = agent.sendMessage('Execute tools');
// await waitForEventCount(threadManager, threadId, 'TOOL_CALL', 2); // Wait for tools to start
// agent.abort();
// await messagePromise;
// await waitForEventCount(threadManager, threadId, 'TOOL_RESULT', 2); // Wait for results
// expect(toolResults.length).toBe(2); // Always succeeds
//
// Result: 60% pass rate → 100%, 40% faster execution

View File

@@ -0,0 +1,380 @@
---
name: decomposing-plans
description: Use after writing-plans to decompose monolithic plan into individual task files and identify tasks that can run in parallel (up to 2 subagents simultaneously)
allowed-tools: [Read, Write, Bash]
---
# Decomposing Plans for Parallel Execution
Run immediately after `/write-plan` to break monolithic plan into task files and identify parallelization opportunities.
**Core principle:** Individual task files save context + parallel batches save time = efficient execution
## When to Use
Use after `/write-plan` when you have a monolithic implementation plan and want to:
- Split it into individual task files (saves context tokens for subagents)
- Identify which tasks can run in parallel (up to 2 simultaneous subagents)
- Prepare for parallel-subagent-driven-development
## Prerequisites
**REQUIRED:** Must have monolithic plan file at `docs/plans/YYYY-MM-DD-<feature-name>.md`
## The Process
### Step 1: Locate Plan File
User provides plan file path, or find most recent:
```bash
ls -t docs/plans/*.md | head -1
```
### Step 2: Run Decomposition Script
Execute Python helper script:
```bash
python superpowers/skills/decomposing-plans/decompose-plan.py <plan-file>
```
**Script does:**
1. Parses monolithic plan to extract tasks
2. Analyzes dependencies (file-based and task-based)
3. Identifies parallel batches (max 2 tasks at once)
4. Creates individual task files
5. Generates manifest.json
6. Reports statistics
### Step 3: Review Generated Files
Check output directory `docs/plans/tasks/<plan-name>/`:
- Individual task files: `<feature>-task-NN.md`
- Execution manifest: `<feature>-manifest.json`
**Example structure:**
```
docs/plans/
├── 2025-01-18-user-auth.md # Monolithic plan
└── tasks/
└── 2025-01-18-user-auth/ # Plan-specific subfolder
├── user-auth-task-01.md
├── user-auth-task-02.md
├── user-auth-task-03.md
└── user-auth-manifest.json
```
### Step 4: Verify Task Decomposition
Read a few task files to verify:
- Tasks are correctly extracted
- Dependencies are accurate
- Files to modify are identified
- Verification checklists are present
### Step 5: Review Manifest
Read `<feature>-manifest.json` to verify:
- Parallel batches make sense
- No conflicting tasks in same batch
- Dependencies are correct
### Step 6: Adjust if Needed
If decomposition needs adjustment:
- Manually edit task files
- Manually edit manifest.json parallel_batches array
- Update dependencies if needed
### Step 7: Announce Results
Tell the user:
```
✅ Plan decomposed successfully!
Total tasks: N
Parallel batches: M
- Pairs (2 parallel): X
- Sequential: Y
Estimated speedup: Z%
Task files: docs/plans/tasks/<plan-name>/<feature>-task-*.md
Manifest: docs/plans/tasks/<plan-name>/<feature>-manifest.json
Next: Use parallel-subagent-driven-development skill
```
## Task File Format
Each task file created:
```markdown
# Task NN: [Task Name]
## Dependencies
- Previous tasks: [list or "none"]
- Must complete before: [list or "none"]
## Parallelizable
- Can run in parallel with: [task numbers or "none"]
## Implementation
[Exact steps from monolithic plan for THIS task only]
## Files to Modify
- path/to/file1.ts
- path/to/file2.ts
## Verification Checklist
- [ ] Implementation complete
- [ ] Tests written (TDD - test first!)
- [ ] All tests pass
- [ ] Lint/type check clean
- [ ] Code review requested
- [ ] Code review passed
```
## Manifest Format
```json
{
"plan": "docs/plans/2025-01-18-feature.md",
"feature": "feature-name",
"created": "2025-01-18T10:00:00Z",
"total_tasks": 5,
"tasks": [
{
"id": 1,
"title": "Implement user model",
"file": "docs/plans/tasks/feature-task-01.md",
"dependencies": [],
"blocks": [3],
"files": ["src/models/user.ts"],
"status": "pending"
},
{
"id": 2,
"title": "Implement logger",
"file": "docs/plans/tasks/feature-task-02.md",
"dependencies": [],
"blocks": [],
"files": ["src/utils/logger.ts"],
"status": "pending"
},
{
"id": 3,
"title": "Add user validation",
"file": "docs/plans/tasks/feature-task-03.md",
"dependencies": [1],
"blocks": [],
"files": ["src/models/user.ts"],
"status": "pending"
}
],
"parallel_batches": [
[1, 2], // Tasks 1 and 2 can run together
[3] // Task 3 must wait for task 1
]
}
```
## Dependency Analysis
**Script analyzes three types of dependencies:**
### 1. Explicit Task Dependencies
If task content mentions "after task N" or "depends on task N":
```
Task 3: "After task 1 completes, add validation..."
→ Task 3 depends on Task 1
```
### 2. File-Based Dependencies
If tasks modify the same file:
```
Task 1: Modifies src/models/user.ts
Task 3: Modifies src/models/user.ts
→ Task 3 depends on Task 1 (sequential)
```
### 3. Default Sequential
Unless marked "independent" or "parallel", tasks depend on previous task:
```
Task 1: ...
Task 2: (no explicit dependency mentioned)
→ Task 2 depends on Task 1
```
## Parallel Batch Identification
**Algorithm:**
1. Find tasks with no unsatisfied dependencies
2. Group up to 2 tasks that:
- Have no mutual dependencies
- Don't modify same files
- Don't block each other
3. Create batch
4. Repeat until all tasks scheduled
**Max 2 tasks per batch** (constraint for code review quality)
## Benefits
### Context Savings
- Before: Each subagent reads ~5000 tokens (monolithic plan)
- After: Each subagent reads ~500 tokens (task file)
- **90% context reduction per subagent**
### Time Savings
- Before: 5 tasks × 10 min = 50 min
- After: 3 batches × 10 min = 30 min (if 2 parallel pairs)
- **40% time reduction for parallelizable plans**
### Clarity
- Each subagent has focused, bounded scope
- Clear verification checklist per task
- No confusion about which task to implement
## Red Flags
**Never:**
- Skip running the decomposition script (manual decomposition error-prone)
- Proceed with decomposed plan without reviewing manifest
- Ignore dependency conflicts flagged by script
- Skip verifying parallel batches make sense
**If script fails:**
- Check plan file format (needs clear task sections)
- Verify plan has recognizable task markers ("## Task N:", etc.)
- Manually create task files if plan format is unusual
- Report issue for script improvement
## Integration
**Required prerequisite:**
- **writing-plans** - REQUIRED: Creates monolithic plan that this skill decomposes
**This skill enables:**
- **parallel-subagent-driven-development** - REQUIRED NEXT: Executes the decomposed plan with parallel subagents
**Alternative workflow:**
- **subagent-driven-development** - Use if you DON'T want parallel execution (works with monolithic plan)
## Example Output
```bash
$ python superpowers/skills/decomposing-plans/decompose-plan.py docs/plans/2025-01-18-user-auth.md
📖 Reading plan: docs/plans/2025-01-18-user-auth.md
✓ Found 5 tasks
🔍 Analyzing dependencies...
Task 1: No dependencies
Task 2: No dependencies
Task 3: Depends on task 1 (file conflict: src/models/user.ts)
Task 4: Depends on task 3
Task 5: No dependencies
⚡ Identifying parallelization opportunities...
Batch 1: Tasks 1, 2 (parallel)
Batch 2: Tasks 3, 5 (parallel)
Batch 3: Task 4 (sequential)
📝 Writing 5 task files to docs/plans/tasks/2025-01-18-user-auth/
✓ user-auth-task-01.md
✓ user-auth-task-02.md
✓ user-auth-task-03.md
✓ user-auth-task-04.md
✓ user-auth-task-05.md
📋 Writing execution manifest...
✓ user-auth-manifest.json
============================================================
✅ Plan decomposition complete!
============================================================
Total tasks: 5
Parallel batches: 3
- Pairs (2 parallel): 2
- Sequential: 1
Estimated speedup: 40.0%
Manifest: docs/plans/tasks/2025-01-18-user-auth/user-auth-manifest.json
Next: Use parallel-subagent-driven-development skill
```
## Troubleshooting
### Script Can't Parse Tasks
**Problem:** Script reports "Found 0 tasks"
**Solutions:**
1. Check plan format - needs clear task markers:
- `## Task 1: Title`
- `## 1. Title`
- `**Task 1:** Title`
2. Manually add task markers if plan uses different format
3. Run script with `--verbose` for debug output
### Incorrect Dependencies
**Problem:** Script identifies wrong dependencies
**Solutions:**
1. Review manifest.json parallel_batches
2. Manually edit manifest to fix dependencies
3. Add explicit dependency markers in plan ("depends on task N")
### Too Conservative (Too Many Sequential)
**Problem:** Script creates too many sequential batches
**Solutions:**
1. Mark tasks as "independent" in plan text
2. Manually edit manifest parallel_batches to add parallelization
3. Verify file paths are correctly extracted
## Next Steps
After decomposition:
1. Review task files for accuracy
2. Review manifest for correct dependencies
3. Announce results to user
4. Proceed to parallel-subagent-driven-development skill
## After Decomposition
After decomposition completes successfully, prompt user:
```
Plan decomposed into X tasks across Y parallel batches.
Manifest: `docs/plans/tasks/YYYY-MM-DD-<feature>/manifest.json`
Tasks: `docs/plans/tasks/YYYY-MM-DD-<feature>/<task-files>`
Options:
A) Review the plan with plan-review
B) Execute immediately with parallel-subagent-driven-development
C) Save and exit (resume later with /cc:resume)
Choose: (A/B/C)
```
**If user chooses A:**
- Invoke `plan-review` skill
- After review completes and plan approved
- Return to this prompt (offer B or C)
**If user chooses B:**
- Proceed directly to `parallel-subagent-driven-development` skill
- Begin executing tasks in parallel batches
**If user chooses C:**
- Invoke `state-persistence` skill to save execution checkpoint
- Save as `YYYY-MM-DD-<feature>-execution.md` with:
- Plan reference and manifest location
- Status: ready to execute, 0 tasks complete
- Next step: Resume with `/cc:resume` and execute
- Exit workflow after save completes

View File

@@ -0,0 +1,465 @@
#!/usr/bin/env python3
"""
Decompose monolithic implementation plans into individual task files
with dependency analysis and parallelization identification.
Usage:
python decompose-plan.py <plan-file> [--output-dir DIR] [--verbose]
"""
import argparse
import json
import re
import sys
from pathlib import Path
from typing import List, Dict, Set, Tuple
from datetime import datetime
class Task:
"""Represents a single task from the plan."""
def __init__(self, id: int, title: str, content: str):
self.id = id
self.title = title
self.content = content
self.dependencies: Set[int] = set()
self.files_to_modify: Set[str] = set()
self.blocks: Set[int] = set() # Tasks that depend on this one
def extract_file_dependencies(self) -> None:
"""Extract file paths mentioned in the task."""
# Match common file path patterns
patterns = [
r'`([a-zA-Z0-9_\-./]+\.[a-zA-Z0-9]+)`', # `src/foo.ts`
r'(?:src/|\./)[\w\-/]+\.[\w]+', # src/foo.ts or ./config.json
r'[\w\-/]+/[\w\-/]+\.[\w]+' # path/to/file.ts
]
for pattern in patterns:
matches = re.findall(pattern, self.content)
for match in matches:
# Clean up the match
if isinstance(match, tuple):
match = match[0]
# Filter out obvious non-paths
if not any(skip in match.lower() for skip in ['http', 'npm', 'yarn', 'test', 'spec']):
self.files_to_modify.add(match)
def extract_task_dependencies(self, all_tasks: List['Task']) -> None:
"""Extract explicit task dependencies from content."""
content_lower = self.content.lower()
# Check for "independent" or "parallel" markers
if 'independent' in content_lower or 'in parallel' in content_lower:
# Task explicitly says it's independent
return
# Look for explicit task mentions
for other in all_tasks:
if other.id >= self.id:
continue # Only look at previous tasks
# Patterns for task mentions
patterns = [
rf'task {other.id}\b',
rf'step {other.id}\b',
rf'after task {other.id}',
rf'depends on task {other.id}',
rf'requires task {other.id}'
]
for pattern in patterns:
if re.search(pattern, content_lower):
self.dependencies.add(other.id)
other.blocks.add(self.id)
break
def to_markdown(self, all_tasks: List['Task']) -> str:
"""Generate markdown for this task file."""
# Find tasks this can run parallel with
parallel_with = []
for other in all_tasks:
if other.id == self.id:
continue
# Can run parallel if no dependency relationship
if (self.id not in other.dependencies and
other.id not in self.dependencies and
self.id not in other.blocks and
other.id not in self.blocks):
# Also check for file conflicts
if not self.files_to_modify.intersection(other.files_to_modify):
parallel_with.append(other.id)
deps_str = ", ".join(str(d) for d in sorted(self.dependencies)) or "none"
blocks_str = ", ".join(str(b) for b in sorted(self.blocks)) or "none"
parallel_str = ", ".join(f"Task {p}" for p in sorted(parallel_with)) or "none"
files_str = "\n".join(f"- {f}" for f in sorted(self.files_to_modify)) or "- (none identified)"
return f"""# Task {self.id}: {self.title}
## Dependencies
- Previous tasks: {deps_str}
- Must complete before: {blocks_str}
## Parallelizable
- Can run in parallel with: {parallel_str}
## Implementation
{self.content.strip()}
## Files to Modify
{files_str}
## Verification Checklist
- [ ] Implementation complete
- [ ] Tests written (TDD - test first!)
- [ ] All tests pass
- [ ] Lint/type check clean
- [ ] Code review requested
- [ ] Code review passed
"""
class PlanDecomposer:
"""Decomposes monolithic plans into individual tasks."""
def __init__(self, plan_path: Path, verbose: bool = False):
self.plan_path = plan_path
self.plan_content = plan_path.read_text()
self.tasks: List[Task] = []
self.verbose = verbose
def log(self, message: str) -> None:
"""Log message if verbose mode enabled."""
if self.verbose:
print(f"[DEBUG] {message}")
def parse_tasks(self) -> None:
"""Parse tasks from the monolithic plan."""
self.log(f"Parsing tasks from {len(self.plan_content)} characters")
# Try multiple patterns for task sections
patterns = [
# Pattern 1: "## Task N: Title" or "### Task N: Title"
(r'\n##+ Task (\d+):\s*(.+?)\n', "## Task N: Title"),
# Pattern 2: "## N. Title" or "### N. Title"
(r'\n##+ (\d+)\.\s*(.+?)\n', "## N. Title"),
# Pattern 3: "**Task N:** Title"
(r'\n\*\*Task (\d+):\*\*\s*(.+?)\n', "**Task N:** Title"),
]
tasks_found = False
for pattern, pattern_name in patterns:
self.log(f"Trying pattern: {pattern_name}")
sections = re.split(pattern, self.plan_content)
if len(sections) >= 4: # Found at least one task
self.log(f"Pattern matched! Found {(len(sections) - 1) // 3} sections")
tasks_found = True
# sections will be: [preamble, task1_num, task1_title, task1_content, task2_num, ...]
i = 1 # Skip preamble
while i < len(sections) - 2:
try:
task_num = int(sections[i])
task_title = sections[i+1].strip()
task_content = sections[i+2].strip() if i+2 < len(sections) else ""
task = Task(task_num, task_title, task_content)
task.extract_file_dependencies()
self.tasks.append(task)
self.log(f" Task {task_num}: {task_title[:50]}...")
i += 3
except (ValueError, IndexError) as e:
self.log(f"Error parsing task at index {i}: {e}")
i += 3
break
if not tasks_found:
print("❌ Error: Could not find tasks in plan file")
print("\nExpected task format (one of):")
print(" ## Task 1: Title")
print(" ## 1. Title")
print(" **Task 1:** Title")
print("\nPlease ensure your plan uses one of these formats.")
sys.exit(1)
def analyze_dependencies(self) -> None:
"""Analyze dependencies between tasks."""
self.log("Analyzing dependencies...")
for i, task in enumerate(self.tasks):
self.log(f" Task {task.id}:")
# First check for explicit task dependencies
task.extract_task_dependencies(self.tasks)
# If no explicit dependencies and not marked independent,
# default to depending on previous task
if i > 0 and not task.dependencies:
content_lower = task.content.lower()
if 'independent' not in content_lower and 'parallel' not in content_lower:
# Default: depends on immediately previous task
prev_task = self.tasks[i-1]
task.dependencies.add(prev_task.id)
prev_task.blocks.add(task.id)
self.log(f" Added default dependency on task {prev_task.id}")
# Check for file-based dependencies
for j, other in enumerate(self.tasks[:i]):
if task.files_to_modify.intersection(other.files_to_modify):
# Same files = forced sequential dependency
if other.id not in task.dependencies:
task.dependencies.add(other.id)
other.blocks.add(task.id)
shared = task.files_to_modify.intersection(other.files_to_modify)
self.log(f" Added file-based dependency on task {other.id} (shared: {shared})")
if not task.dependencies:
self.log(f" No dependencies")
else:
self.log(f" Dependencies: {task.dependencies}")
def identify_parallel_batches(self) -> List[List[int]]:
"""Identify batches of tasks that can run in parallel (max 2)."""
self.log("Identifying parallel batches...")
batches: List[List[int]] = []
remaining = set(t.id for t in self.tasks)
while remaining:
# Find tasks with no unsatisfied dependencies
ready = []
for tid in remaining:
task = next(t for t in self.tasks if t.id == tid)
unsatisfied_deps = [dep for dep in task.dependencies if dep in remaining]
if not unsatisfied_deps:
ready.append(tid)
if not ready:
print("❌ Error: Circular dependency detected!")
print(f"Remaining tasks: {remaining}")
for tid in remaining:
task = next(t for t in self.tasks if t.id == tid)
print(f" Task {tid} depends on: {task.dependencies & remaining}")
sys.exit(1)
self.log(f" Ready tasks: {ready}")
# Create batches of up to 2 parallel tasks
batch = []
for task_id in ready:
if len(batch) == 0:
batch.append(task_id)
elif len(batch) == 1:
# Check if can run parallel with first task in batch
task = next(t for t in self.tasks if t.id == task_id)
other_task = next(t for t in self.tasks if t.id == batch[0])
# Can run parallel if no dependency and no file conflicts
if (task_id not in other_task.dependencies and
batch[0] not in task.dependencies and
task_id not in other_task.blocks and
batch[0] not in task.blocks and
not task.files_to_modify.intersection(other_task.files_to_modify)):
batch.append(task_id)
self.log(f" Paired task {task_id} with task {batch[0]}")
else:
# Can't pair, will go in next batch
self.log(f" Task {task_id} can't pair with {batch[0]}")
break
else:
# Batch already has 2, stop
break
# Add batch and remove tasks from remaining
batches.append(batch)
for tid in batch:
remaining.remove(tid)
self.log(f" Created batch: {batch}")
# Add any remaining ready tasks to next batch
# (tasks that couldn't be paired)
for task_id in ready:
if task_id in remaining:
batches.append([task_id])
remaining.remove(task_id)
self.log(f" Created single-task batch: {[task_id]}")
return batches
def write_task_files(self, output_dir: Path) -> None:
"""Write individual task files."""
output_dir.mkdir(parents=True, exist_ok=True)
# Extract feature name from plan filename
# Format: YYYY-MM-DD-feature-name.md
parts = self.plan_path.stem.split('-', 3)
if len(parts) >= 4:
feature_name = parts[3]
else:
feature_name = self.plan_path.stem
for task in self.tasks:
task_file = output_dir / f"{feature_name}-task-{task.id:02d}.md"
task_file.write_text(task.to_markdown(self.tasks))
print(f"{task_file.name}")
def write_manifest(self, output_dir: Path) -> Path:
"""Write execution manifest JSON."""
# Extract feature name
parts = self.plan_path.stem.split('-', 3)
if len(parts) >= 4:
feature_name = parts[3]
else:
feature_name = self.plan_path.stem
manifest_file = output_dir / f"{feature_name}-manifest.json"
parallel_batches = self.identify_parallel_batches()
manifest = {
"plan": str(self.plan_path),
"feature": feature_name,
"created": datetime.now().isoformat(),
"total_tasks": len(self.tasks),
"tasks": [
{
"id": task.id,
"title": task.title,
"file": str(output_dir / f"{feature_name}-task-{task.id:02d}.md"),
"dependencies": sorted(list(task.dependencies)),
"blocks": sorted(list(task.blocks)),
"files": sorted(list(task.files_to_modify)),
"status": "pending"
}
for task in self.tasks
],
"parallel_batches": parallel_batches
}
manifest_file.write_text(json.dumps(manifest, indent=2))
return manifest_file
def decompose(self, output_dir: Path) -> Dict:
"""Main decomposition process."""
print(f"📖 Reading plan: {self.plan_path}")
self.parse_tasks()
print(f"✓ Found {len(self.tasks)} tasks")
print("\n🔍 Analyzing dependencies...")
self.analyze_dependencies()
for task in self.tasks:
if task.dependencies:
deps_str = ", ".join(str(d) for d in sorted(task.dependencies))
print(f" Task {task.id}: Depends on {deps_str}")
if task.files_to_modify:
files_str = ", ".join(list(task.files_to_modify)[:2])
if len(task.files_to_modify) > 2:
files_str += f", ... ({len(task.files_to_modify)} total)"
print(f" Files: {files_str}")
else:
print(f" Task {task.id}: No dependencies")
print("\n⚡ Identifying parallelization opportunities...")
parallel_batches = self.identify_parallel_batches()
for i, batch in enumerate(parallel_batches, 1):
if len(batch) == 2:
print(f" Batch {i}: Tasks {batch[0]}, {batch[1]} (parallel)")
else:
print(f" Batch {i}: Task {batch[0]} (sequential)")
print(f"\n📝 Writing {len(self.tasks)} task files to {output_dir}/")
self.write_task_files(output_dir)
print("\n📋 Writing execution manifest...")
manifest_path = self.write_manifest(output_dir)
print(f"{manifest_path.name}")
# Calculate stats
parallel_pairs = sum(1 for batch in parallel_batches if len(batch) == 2)
sequential_tasks = sum(1 for batch in parallel_batches if len(batch) == 1)
estimated_speedup = (parallel_pairs / len(self.tasks) * 100) if self.tasks else 0
return {
"total_tasks": len(self.tasks),
"parallel_batches": len(parallel_batches),
"parallel_pairs": parallel_pairs,
"sequential_tasks": sequential_tasks,
"manifest": str(manifest_path),
"estimated_speedup": estimated_speedup
}
def main():
parser = argparse.ArgumentParser(
description="Decompose monolithic implementation plan into parallel tasks",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
python decompose-plan.py docs/plans/2025-01-18-user-auth.md
python decompose-plan.py plan.md --output-dir ./tasks
python decompose-plan.py plan.md --verbose
"""
)
parser.add_argument(
"plan_file",
type=Path,
help="Path to monolithic plan markdown file"
)
parser.add_argument(
"--output-dir",
type=Path,
help="Output directory for task files (default: docs/plans/tasks/)"
)
parser.add_argument(
"--verbose",
action="store_true",
help="Enable verbose debug output"
)
args = parser.parse_args()
if not args.plan_file.exists():
print(f"❌ Error: Plan file not found: {args.plan_file}")
return 1
# Default output dir: docs/plans/tasks/<plan-name>/
if args.output_dir:
output_dir = args.output_dir
else:
# Create subfolder with plan filename (including date)
# e.g., docs/plans/tasks/2025-01-18-test-user-auth/
plan_name = args.plan_file.stem # Gets "2025-01-18-test-user-auth" from "2025-01-18-test-user-auth.md"
output_dir = args.plan_file.parent / "tasks" / plan_name
try:
decomposer = PlanDecomposer(args.plan_file, verbose=args.verbose)
stats = decomposer.decompose(output_dir)
print("\n" + "="*60)
print("✅ Plan decomposition complete!")
print("="*60)
print(f"Total tasks: {stats['total_tasks']}")
print(f"Parallel batches: {stats['parallel_batches']}")
print(f" - Pairs (2 parallel): {stats['parallel_pairs']}")
print(f" - Sequential: {stats['sequential_tasks']}")
print(f"Estimated speedup: {stats['estimated_speedup']:.1f}%")
print(f"\nManifest: {stats['manifest']}")
print(f"\nNext: Use parallel-subagent-driven-development skill")
return 0
except Exception as e:
print(f"\n❌ Error during decomposition: {e}")
if args.verbose:
import traceback
traceback.print_exc()
return 1
if __name__ == "__main__":
sys.exit(main())

View File

@@ -0,0 +1,127 @@
---
name: defense-in-depth
description: Use when invalid data causes failures deep in execution, requiring validation at multiple system layers - validates at every layer data passes through to make bugs structurally impossible
---
# Defense-in-Depth Validation
## Overview
When you fix a bug caused by invalid data, adding validation at one place feels sufficient. But that single check can be bypassed by different code paths, refactoring, or mocks.
**Core principle:** Validate at EVERY layer data passes through. Make the bug structurally impossible.
## Why Multiple Layers
Single validation: "We fixed the bug"
Multiple layers: "We made the bug impossible"
Different layers catch different cases:
- Entry validation catches most bugs
- Business logic catches edge cases
- Environment guards prevent context-specific dangers
- Debug logging helps when other layers fail
## The Four Layers
### Layer 1: Entry Point Validation
**Purpose:** Reject obviously invalid input at API boundary
```typescript
function createProject(name: string, workingDirectory: string) {
if (!workingDirectory || workingDirectory.trim() === '') {
throw new Error('workingDirectory cannot be empty');
}
if (!existsSync(workingDirectory)) {
throw new Error(`workingDirectory does not exist: ${workingDirectory}`);
}
if (!statSync(workingDirectory).isDirectory()) {
throw new Error(`workingDirectory is not a directory: ${workingDirectory}`);
}
// ... proceed
}
```
### Layer 2: Business Logic Validation
**Purpose:** Ensure data makes sense for this operation
```typescript
function initializeWorkspace(projectDir: string, sessionId: string) {
if (!projectDir) {
throw new Error('projectDir required for workspace initialization');
}
// ... proceed
}
```
### Layer 3: Environment Guards
**Purpose:** Prevent dangerous operations in specific contexts
```typescript
async function gitInit(directory: string) {
// In tests, refuse git init outside temp directories
if (process.env.NODE_ENV === 'test') {
const normalized = normalize(resolve(directory));
const tmpDir = normalize(resolve(tmpdir()));
if (!normalized.startsWith(tmpDir)) {
throw new Error(
`Refusing git init outside temp dir during tests: ${directory}`
);
}
}
// ... proceed
}
```
### Layer 4: Debug Instrumentation
**Purpose:** Capture context for forensics
```typescript
async function gitInit(directory: string) {
const stack = new Error().stack;
logger.debug('About to git init', {
directory,
cwd: process.cwd(),
stack,
});
// ... proceed
}
```
## Applying the Pattern
When you find a bug:
1. **Trace the data flow** - Where does bad value originate? Where used?
2. **Map all checkpoints** - List every point data passes through
3. **Add validation at each layer** - Entry, business, environment, debug
4. **Test each layer** - Try to bypass layer 1, verify layer 2 catches it
## Example from Session
Bug: Empty `projectDir` caused `git init` in source code
**Data flow:**
1. Test setup → empty string
2. `Project.create(name, '')`
3. `WorkspaceManager.createWorkspace('')`
4. `git init` runs in `process.cwd()`
**Four layers added:**
- Layer 1: `Project.create()` validates not empty/exists/writable
- Layer 2: `WorkspaceManager` validates projectDir not empty
- Layer 3: `WorktreeManager` refuses git init outside tmpdir in tests
- Layer 4: Stack trace logging before git init
**Result:** All 1847 tests passed, bug impossible to reproduce
## Key Insight
All four layers were necessary. During testing, each layer caught bugs the others missed:
- Different code paths bypassed entry validation
- Mocks bypassed business logic checks
- Edge cases on different platforms needed environment guards
- Debug logging identified structural misuse
**Don't stop at one validation point.** Add checks at every layer.

View File

@@ -0,0 +1,180 @@
---
name: dispatching-parallel-agents
description: Use when facing 3+ independent failures that can be investigated without shared state or dependencies - dispatches multiple Claude agents to investigate and fix independent problems concurrently
---
# Dispatching Parallel Agents
## Overview
When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.
## When to Use
```dot
digraph when_to_use {
"Multiple failures?" [shape=diamond];
"Are they independent?" [shape=diamond];
"Single agent investigates all" [shape=box];
"One agent per problem domain" [shape=box];
"Can they work in parallel?" [shape=diamond];
"Sequential agents" [shape=box];
"Parallel dispatch" [shape=box];
"Multiple failures?" -> "Are they independent?" [label="yes"];
"Are they independent?" -> "Single agent investigates all" [label="no - related"];
"Are they independent?" -> "Can they work in parallel?" [label="yes"];
"Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
"Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```
**Use when:**
- 3+ test files failing with different root causes
- Multiple subsystems broken independently
- Each problem can be understood without context from others
- No shared state between investigations
**Don't use when:**
- Failures are related (fix one might fix others)
- Need to understand full system state
- Agents would interfere with each other
## The Pattern
### 1. Identify Independent Domains
Group failures by what's broken:
- File A tests: Tool approval flow
- File B tests: Batch completion behavior
- File C tests: Abort functionality
Each domain is independent - fixing tool approval doesn't affect abort tests.
### 2. Create Focused Agent Tasks
Each agent gets:
- **Specific scope:** One test file or subsystem
- **Clear goal:** Make these tests pass
- **Constraints:** Don't change other code
- **Expected output:** Summary of what you found and fixed
### 3. Dispatch in Parallel
```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
```
### 4. Review and Integrate
When agents return:
- Read each summary
- Verify fixes don't conflict
- Run full test suite
- Integrate all changes
## Agent Prompt Structure
Good agent prompts are:
1. **Focused** - One clear problem domain
2. **Self-contained** - All context needed to understand the problem
3. **Specific about output** - What should the agent return?
```markdown
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0
These are timing/race condition issues. Your task:
1. Read the test file and understand what each test verifies
2. Identify root cause - timing issues or actual bugs?
3. Fix by:
- Replacing arbitrary timeouts with event-based waiting
- Fixing bugs in abort implementation if found
- Adjusting test expectations if testing changed behavior
Do NOT just increase timeouts - find the real issue.
Return: Summary of what you found and what you fixed.
```
## Common Mistakes
**❌ Too broad:** "Fix all the tests" - agent gets lost
**✅ Specific:** "Fix agent-tool-abort.test.ts" - focused scope
**❌ No context:** "Fix the race condition" - agent doesn't know where
**✅ Context:** Paste the error messages and test names
**❌ No constraints:** Agent might refactor everything
**✅ Constraints:** "Do NOT change production code" or "Fix tests only"
**❌ Vague output:** "Fix it" - you don't know what changed
**✅ Specific:** "Return summary of root cause and changes"
## When NOT to Use
**Related failures:** Fixing one might fix others - investigate together first
**Need full context:** Understanding requires seeing entire system
**Exploratory debugging:** You don't know what's broken yet
**Shared state:** Agents would interfere (editing same files, using same resources)
## Real Example from Session
**Scenario:** 6 test failures across 3 files after major refactoring
**Failures:**
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)
**Decision:** Independent domains - abort logic separate from batch completion separate from race conditions
**Dispatch:**
```
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
```
**Results:**
- Agent 1: Replaced timeouts with event-based waiting
- Agent 2: Fixed event structure bug (threadId in wrong place)
- Agent 3: Added wait for async tool execution to complete
**Integration:** All fixes independent, no conflicts, full suite green
**Time saved:** 3 problems solved in parallel vs sequentially
## Key Benefits
1. **Parallelization** - Multiple investigations happen simultaneously
2. **Focus** - Each agent has narrow scope, less context to track
3. **Independence** - Agents don't interfere with each other
4. **Speed** - 3 problems solved in time of 1
## Verification
After agents return:
1. **Review each summary** - Understand what changed
2. **Check for conflicts** - Did agents edit same code?
3. **Run full suite** - Verify all fixes work together
4. **Spot check** - Agents can make systematic errors
## Real-World Impact
From debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All investigations completed concurrently
- All fixes integrated successfully
- Zero conflicts between agent changes

View File

@@ -0,0 +1,137 @@
---
name: executing-plans
description: Use when partner provides a complete implementation plan to execute in controlled batches with review checkpoints - loads plan, reviews critically, executes tasks in batches, reports for review between batches
---
# Executing Plans
## Overview
Load plan, review critically, execute tasks in batches, report for review between batches.
**Core principle:** Batch execution with checkpoints for architect review.
**Announce at start:** "I'm using the executing-plans skill to implement this plan."
## Execution Strategy
This skill checks for decomposition and chooses execution method:
### Detection
Check for manifest file before choosing execution:
```bash
if [[ -f "docs/plans/tasks/YYYY-MM-DD-<feature>/manifest.json" ]]; then
# Manifest exists → Use parallel execution
EXECUTION_MODE="parallel"
else
# No manifest → Use sequential execution
EXECUTION_MODE="sequential"
fi
```
### Parallel Execution (manifest exists)
**When:** `manifest.json` found in tasks directory
**Process:**
1. Load plan manifest from `docs/plans/tasks/YYYY-MM-DD-<feature>/manifest.json`
2. Invoke `parallel-subagent-driven-development` skill with manifest
3. Execute tasks in parallel batches (up to 2 concurrent subagents)
4. Code review gate after each batch
5. Continue until all tasks complete
**Benefits:**
- Up to 2 tasks run concurrently per batch
- ~40% faster for parallelizable plans
- 90% context reduction per task
### Sequential Execution (no manifest)
**When:** No `manifest.json` found
**Process:**
1. Load monolithic plan from `docs/plans/YYYY-MM-DD-<feature>.md`
2. Invoke `subagent-driven-development` skill
3. Execute tasks sequentially (one at a time)
4. Code review gate after each task
5. Continue until all tasks complete
**Use case:**
- Simple plans (1-3 tasks)
- Sequential work that can't parallelize
- Prefer simplicity over speed
### CRITICAL Constraint
⚠️ **Cannot use parallel-subagent-driven-development without manifest.json**
If manifest does not exist → MUST use sequential mode (subagent-driven-development)
### Recommendation
Always decompose plans with 4+ tasks to enable parallel execution.
Run `/cc:parse-plan` to create manifest before execution.
## The Process
### Step 1: Load and Review Plan
1. Read plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: Raise them with your human partner before starting
4. If no concerns: Create TodoWrite and proceed
### Step 2: Execute Batch
**Default: First 3 tasks**
For each task:
1. Mark as in_progress
2. Follow each step exactly (plan has bite-sized steps)
3. Run verifications as specified
4. Mark as completed
### Step 3: Report
When batch complete:
- Show what was implemented
- Show verification output
- Say: "Ready for feedback."
### Step 4: Continue
Based on feedback:
- Apply changes if needed
- Execute next batch
- Repeat until complete
### Step 5: Complete Development
After all tasks complete and verified:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice
## When to Stop and Ask for Help
**STOP executing immediately when:**
- Hit a blocker mid-batch (missing dependency, test fails, instruction unclear)
- Plan has critical gaps preventing starting
- You don't understand an instruction
- Verification fails repeatedly
**Ask for clarification rather than guessing.**
## When to Revisit Earlier Steps
**Return to Review (Step 1) when:**
- Partner updates the plan based on your feedback
- Fundamental approach needs rethinking
**Don't force through blockers** - stop and ask.
## Remember
- Review plan critically first
- Follow plan steps exactly
- Don't skip verifications
- Reference skills when plan says to
- Between batches: just report and wait
- Stop when blocked, don't guess

View File

@@ -0,0 +1,200 @@
---
name: finishing-a-development-branch
description: Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup
---
# Finishing a Development Branch
## Overview
Guide completion of development work by presenting clear options and handling chosen workflow.
**Core principle:** Verify tests → Present options → Execute choice → Clean up.
**Announce at start:** "I'm using the finishing-a-development-branch skill to complete this work."
## The Process
### Step 1: Verify Tests
**Before presenting options, verify tests pass:**
```bash
# Run project's test suite
npm test / cargo test / pytest / go test ./...
```
**If tests fail:**
```
Tests failing (<N> failures). Must fix before completing:
[Show failures]
Cannot proceed with merge/PR until tests pass.
```
Stop. Don't proceed to Step 2.
**If tests pass:** Continue to Step 2.
### Step 2: Determine Base Branch
```bash
# Try common base branches
git merge-base HEAD main 2>/dev/null || git merge-base HEAD master 2>/dev/null
```
Or ask: "This branch split from main - is that correct?"
### Step 3: Present Options
Present exactly these 4 options:
```
Implementation complete. What would you like to do?
1. Merge back to <base-branch> locally
2. Push and create a Pull Request
3. Keep the branch as-is (I'll handle it later)
4. Discard this work
Which option?
```
**Don't add explanation** - keep options concise.
### Step 4: Execute Choice
#### Option 1: Merge Locally
```bash
# Switch to base branch
git checkout <base-branch>
# Pull latest
git pull
# Merge feature branch
git merge <feature-branch>
# Verify tests on merged result
<test command>
# If tests pass
git branch -d <feature-branch>
```
Then: Cleanup worktree (Step 5)
#### Option 2: Push and Create PR
```bash
# Push branch
git push -u origin <feature-branch>
# Create PR
gh pr create --title "<title>" --body "$(cat <<'EOF'
## Summary
<2-3 bullets of what changed>
## Test Plan
- [ ] <verification steps>
EOF
)"
```
Then: Cleanup worktree (Step 5)
#### Option 3: Keep As-Is
Report: "Keeping branch <name>. Worktree preserved at <path>."
**Don't cleanup worktree.**
#### Option 4: Discard
**Confirm first:**
```
This will permanently delete:
- Branch <name>
- All commits: <commit-list>
- Worktree at <path>
Type 'discard' to confirm.
```
Wait for exact confirmation.
If confirmed:
```bash
git checkout <base-branch>
git branch -D <feature-branch>
```
Then: Cleanup worktree (Step 5)
### Step 5: Cleanup Worktree
**For Options 1, 2, 4:**
Check if in worktree:
```bash
git worktree list | grep $(git branch --show-current)
```
If yes:
```bash
git worktree remove <worktree-path>
```
**For Option 3:** Keep worktree.
## Quick Reference
| Option | Merge | Push | Keep Worktree | Cleanup Branch |
|--------|-------|------|---------------|----------------|
| 1. Merge locally | ✓ | - | - | ✓ |
| 2. Create PR | - | ✓ | ✓ | - |
| 3. Keep as-is | - | - | ✓ | - |
| 4. Discard | - | - | - | ✓ (force) |
## Common Mistakes
**Skipping test verification**
- **Problem:** Merge broken code, create failing PR
- **Fix:** Always verify tests before offering options
**Open-ended questions**
- **Problem:** "What should I do next?" → ambiguous
- **Fix:** Present exactly 4 structured options
**Automatic worktree cleanup**
- **Problem:** Remove worktree when might need it (Option 2, 3)
- **Fix:** Only cleanup for Options 1 and 4
**No confirmation for discard**
- **Problem:** Accidentally delete work
- **Fix:** Require typed "discard" confirmation
## Red Flags
**Never:**
- Proceed with failing tests
- Merge without verifying tests on result
- Delete work without confirmation
- Force-push without explicit request
**Always:**
- Verify tests before offering options
- Present exactly 4 options
- Get typed confirmation for Option 4
- Clean up worktree for Options 1 & 4 only
## Integration
**Called by:**
- **subagent-driven-development** (Step 7) - After all tasks complete
- **executing-plans** (Step 5) - After all batches complete
**Pairs with:**
- **using-git-worktrees** - Cleans up worktree created by that skill

View File

@@ -0,0 +1,428 @@
---
name: parallel-subagent-driven-development
description: Use when executing decomposed plans with parallel batches - dispatches up to 2 fresh subagents per batch with code review between batches, enabling fast parallel iteration with quality gates
---
# Parallel Subagent-Driven Development
Execute decomposed plan by dispatching fresh subagent(s) per batch (up to 2 parallel), with code review after each batch.
**Core principle:** Fresh subagent per task + up to 2 parallel when safe + review between batches = high quality, fast iteration
## Overview
**vs. Subagent-Driven Development:**
- Same process, but runs up to 2 subagents in parallel when tasks are independent
- Uses manifest.json to know which tasks can run together
- Reviews both implementations together
- Everything else identical
**vs. Executing Plans:**
- Same session (no context switch)
- Fresh subagent per task (no context pollution)
- Parallel execution when safe (faster)
- Code review after each batch (catch issues early)
- Faster iteration (no human-in-loop between tasks)
**When to use:**
- After running decomposing-plans (which created manifest.json)
- Staying in this session
- Want parallel execution with quality gates
**When NOT to use:**
- Plan not decomposed yet (run decomposing-plans first)
- Need to review plan first (use executing-plans)
- Tasks are tightly coupled (manual execution better)
- Plan needs revision (brainstorm first)
## Prerequisites
**REQUIRED:** Must have run decomposing-plans skill first to create:
- Individual task files: `docs/plans/tasks/<plan-name>/<feature>-task-NN.md`
- Manifest file: `docs/plans/tasks/<plan-name>/<feature>-manifest.json`
Where `<plan-name>` is the full plan filename (e.g., `2025-01-18-user-auth`)
## The Process
### 1. Load Manifest
Read manifest file from `docs/plans/tasks/<feature>-manifest.json`.
Create TodoWrite with all batches:
```
- [ ] Execute batch 1 (tasks X, Y)
- [ ] Review batch 1
- [ ] Execute batch 2 (task Z)
- [ ] Review batch 2
...
```
### 2. Execute Batch with Subagent(s)
For each batch in `parallel_batches` array:
**If batch has 1 task:**
Dispatch fresh subagent (same as original):
```
Task tool (general-purpose):
description: "Implement Task N: [task name]"
prompt: |
You are implementing Task N from the decomposed plan.
Read the task file: docs/plans/tasks/<plan-name>/<feature>-task-NN.md
Your job is to:
1. Read that task file carefully
2. Implement exactly what the task specifies
3. Write tests (following TDD if task says to)
4. Verify implementation works
5. Commit your work
6. Report back
Work from: [directory]
Report: What you implemented, what you tested, test results, files changed, any issues
```
**If batch has 2 tasks:**
Dispatch TWO fresh subagents IN SINGLE MESSAGE (parallel execution):
```
<function_calls>
<invoke name="Task">
<parameter name="subagent_type">general-purpose</parameter>
<parameter name="description">Implement Task N: [task name]</parameter>
<parameter name="prompt">
You are implementing Task N from the decomposed plan.
Read the task file: docs/plans/tasks/<plan-name>/<feature>-task-NN.md
Your job is to:
1. Read that task file carefully
2. Implement exactly what the task specifies
3. Write tests (following TDD if task says to)
4. Verify implementation works
5. Commit your work
6. Report back
Work from: [directory]
Report: What you implemented, what you tested, test results, files changed, any issues
</parameter>
</invoke>
<invoke name="Task">
<parameter name="subagent_type">general-purpose</parameter>
<parameter name="description">Implement Task M: [task name]</parameter>
<parameter name="prompt">
You are implementing Task M from the decomposed plan.
Read the task file: docs/plans/tasks/<plan-name>/<feature>-task-MM.md
Your job is to:
1. Read that task file carefully
2. Implement exactly what the task specifies
3. Write tests (following TDD if task says to)
4. Verify implementation works
5. Commit your work
6. Report back
Work from: [directory]
Report: What you implemented, what you tested, test results, files changed, any issues
</parameter>
</invoke>
</function_calls>
```
**CRITICAL:** Both Task tools in SINGLE message = true parallel execution.
**Subagent(s) report back** with summary of work.
### 3. Review Subagent's Work
**Get git SHAs:**
- BASE_SHA: commit before batch started
- HEAD_SHA: current commit after batch
**Dispatch code-reviewer subagent:**
**If batch had 1 task:**
```
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from subagent's report]
PLAN_OR_REQUIREMENTS: Task N from docs/plans/tasks/<plan-name>/<feature>-task-NN.md
BASE_SHA: [commit before batch]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
```
**If batch had 2 tasks:**
```
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: |
Task N: [from subagent 1's report]
Task M: [from subagent 2's report]
PLAN_OR_REQUIREMENTS: |
Task N: docs/plans/tasks/<plan-name>/<feature>-task-NN.md
Task M: docs/plans/tasks/<plan-name>/<feature>-task-MM.md
BASE_SHA: [commit before batch]
HEAD_SHA: [current commit]
DESCRIPTION: Batch with tasks N and M - [summary of both]
```
**Code reviewer returns:** Strengths, Issues (Critical/Important/Minor), Assessment
**Important:** When reviewing 2 tasks, code-reviewer also checks:
- No conflicts between the two implementations
- Proper integration if tasks interact
- Consistent code style across both
### 4. Apply Review Feedback
**If issues found:**
- Fix Critical issues immediately
- Fix Important issues before next batch
- Note Minor issues
**Dispatch follow-up subagent if needed:**
**If issues in 1 task:**
```
Task tool (general-purpose):
description: "Fix issues from code review in Task N"
prompt: |
Fix issues from code review for Task N.
Issues to fix: [list issues]
Original task: docs/plans/tasks/<feature>-task-NN.md
Fix the issues, verify tests pass, commit, report back.
```
**If issues in both tasks:**
```
<function_calls>
<invoke name="Task">
<parameter name="subagent_type">general-purpose</parameter>
<parameter name="description">Fix issues in Task N</parameter>
<parameter name="prompt">
Fix issues from code review for Task N.
Issues to fix: [list issues for task N]
Original task: docs/plans/tasks/<plan-name>/<feature>-task-NN.md
Fix the issues, verify tests pass, commit, report back.
</parameter>
</invoke>
<invoke name="Task">
<parameter name="subagent_type">general-purpose</parameter>
<parameter name="description">Fix issues in Task M</parameter>
<parameter name="prompt">
Fix issues from code review for Task M.
Issues to fix: [list issues for task M]
Original task: docs/plans/tasks/<plan-name>/<feature>-task-MM.md
Fix the issues, verify tests pass, commit, report back.
</parameter>
</invoke>
</function_calls>
```
### 5. Update Manifest and Mark Complete
**Update manifest.json:**
- Set task status to "done"
- Add "completed_at" timestamp
**Mark batch complete in TodoWrite**
- Check off batch execution
- Check off batch review
Move to next batch, repeat steps 2-5.
### 6. Final Review
After all batches complete, dispatch final code-reviewer:
```
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [summary of ALL tasks from manifest]
PLAN_OR_REQUIREMENTS: Original plan file + all task files
BASE_SHA: [initial commit before all work]
HEAD_SHA: [current commit after all work]
DESCRIPTION: Complete implementation of [feature name]
```
**Final reviewer:**
- Reviews entire implementation
- Checks all plan requirements met
- Validates overall architecture
- Checks integration between all tasks
### 7. Complete Development
After final review passes:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice
## Example Workflow
```
You: I'm using Parallel Subagent-Driven Development to execute this decomposed plan.
[Load manifest, create TodoWrite with batches]
Batch 1 (Tasks 1 & 2 - parallel):
[Dispatch 2 implementation subagents IN SINGLE MESSAGE]
Subagent 1: Implemented user model with tests, 5/5 passing
Subagent 2: Implemented logger with tests, 3/3 passing
[Get git SHAs, dispatch code-reviewer]
Reviewer:
Strengths: Both well-tested, clean separation
Issues: None
Ready.
[Update manifest: tasks 1,2 done]
[Mark Batch 1 complete]
Batch 2 (Task 3 - sequential):
[Dispatch 1 implementation subagent]
Subagent: Added user validation, 8/8 tests passing
[Dispatch code-reviewer]
Reviewer:
Strengths: Good validation logic
Issues (Important): Missing edge case for empty email
[Dispatch fix subagent]
Fix subagent: Added empty email check, test added, passing
[Verify fix, update manifest: task 3 done]
[Mark Batch 2 complete]
Batch 3 (Tasks 4 & 5 - parallel):
[Dispatch 2 implementation subagents IN SINGLE MESSAGE]
Subagent 1: Implemented API endpoint, 6/6 passing
Subagent 2: Implemented CLI command, 4/4 passing
[Dispatch code-reviewer]
Reviewer:
Strengths: Both implementations solid
Issues: None
Ready.
[Update manifest: tasks 4,5 done]
[Mark Batch 3 complete]
[After all batches]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, no integration issues, ready to merge
Done! Using finishing-a-development-branch...
```
## Advantages
**vs. Original Subagent-Driven Development:**
- 40% faster for parallelizable tasks (2 tasks in time of 1)
- 90% less context per subagent (task file vs monolithic plan)
- Same quality gates (review after each batch)
- Same fresh context per task
**vs. Manual execution:**
- Subagents follow TDD naturally
- Fresh context per task (no confusion)
- Parallel when safe (faster)
**vs. Executing Plans:**
- Same session (no handoff)
- Continuous progress (no waiting)
- Parallel execution (faster)
- Review checkpoints automatic
**Cost:**
- More subagent invocations
- But catches issues early (cheaper than debugging later)
- Parallel execution saves wall-clock time
## Red Flags
**Never:**
- Skip code review between batches
- Proceed with unfixed Critical issues
- Skip decomposing-plans (must have manifest.json)
- Manually execute tasks from monolithic plan
- Dispatch 3+ parallel subagents (max is 2)
**If subagent fails task:**
- Dispatch fix subagent with specific instructions
- Don't try to fix manually (context pollution)
**If both parallel subagents fail:**
- Fix one at a time with follow-up subagents
- Or dispatch 2 fix subagents in parallel if issues are independent
## Integration
**Required prerequisite:**
- **decomposing-plans** - REQUIRED: Creates manifest.json and task files that this skill uses
**Required workflow skills:**
- **writing-plans** - REQUIRED BEFORE decomposing-plans: Creates the monolithic plan
- **requesting-code-review** - REQUIRED: Review after each batch (see Step 3)
- **finishing-a-development-branch** - REQUIRED: Complete development after all batches (see Step 7)
**Subagents must use:**
- **test-driven-development** - Subagents follow TDD for each task
**Alternative workflow:**
- **subagent-driven-development** - Use for monolithic plans (no parallelization)
- **executing-plans** - Use for parallel session instead of same-session execution
See code-reviewer template: requesting-code-review/code-reviewer.md
## Manifest Status Tracking
Update manifest.json after each batch:
```json
{
"tasks": [
{
"id": 1,
"status": "done",
"completed_at": "2025-01-18T10:30:00Z"
},
{
"id": 2,
"status": "done",
"completed_at": "2025-01-18T10:30:00Z"
},
{
"id": 3,
"status": "in_progress"
}
]
}
```
This allows resuming if interrupted and tracking overall progress.

330
skills/plan-review/SKILL.md Normal file
View File

@@ -0,0 +1,330 @@
---
name: plan-review
description: Use after plan is written to validate implementation plans across completeness, quality, feasibility, and scope dimensions - spawns specialized validators for failed dimensions and refines plan interactively before execution
---
# Plan Review
Use this skill to validate implementation plans across completeness, quality, feasibility, and scope dimensions.
## When to Use
After plan is written and user selects "A) review the plan" option.
## Phase 1: Initial Assessment
Run automatic checks across 4 dimensions using simple validation logic (no subagents yet):
### Completeness Check
Scan plan for:
- ✅ All phases have success criteria section
- ✅ Commands for verification present (`make test-`, `pytest`, etc.)
- ✅ Rollback/migration strategy mentioned
- ✅ Edge cases section or error handling
- ✅ Testing strategy defined
**Scoring:**
- PASS: All criteria present
- WARN: 1-2 criteria missing
- FAIL: 3+ criteria missing
### Quality Check
Scan plan for:
- ✅ File paths with line numbers: `file.py:123`
- ✅ Specific function/class names
- ✅ Code examples are complete (not pseudocode)
- ✅ Success criteria are measurable
- ❌ Vague language: "properly", "correctly", "handle", "add validation" without specifics
**Scoring:**
- PASS: File paths present, code complete, criteria measurable, no vague language
- WARN: Some file paths missing or minor vagueness
- FAIL: No file paths, pseudocode only, vague criteria
### Feasibility Check
Basic checks (detailed check needs subagent):
- ✅ References to existing files/functions seem reasonable
- ✅ No obvious impossibilities
- ✅ Technology choices are compatible
- ✅ Libraries mentioned are standard/available
**Scoring:**
- PASS: Seems feasible on surface
- WARN: Some questionable assumptions
- FAIL: Obvious blockers or impossibilities
### Scope Creep Check
Requires research.md memory or brainstorm context:
- ✅ "What We're NOT Doing" section exists
- ✅ Features align with original brainstorm
- ❌ New features added without justification
- ❌ Gold-plating or over-engineering patterns
**Scoring:**
- PASS: Scope aligned with original decisions
- WARN: Minor scope expansion, can justify
- FAIL: Significant scope creep or gold-plating
## Phase 2: Escalation (If Needed)
If **any dimension scores FAIL**, spawn specialized validators:
```typescript
const failedDimensions = {
completeness: score === 'FAIL',
quality: score === 'FAIL',
feasibility: score === 'FAIL',
scope: score === 'FAIL'
}
// Spawn validators in parallel for failed dimensions
const validations = await Promise.all([
...(failedDimensions.completeness ? [Task({
subagent_type: "completeness-checker",
description: "Validate plan completeness",
prompt: `
Analyze this implementation plan for completeness.
Plan file: ${planPath}
Check for:
- Success criteria (automated + manual)
- Dependencies between phases
- Rollback/migration strategy
- Edge cases and error handling
- Testing strategy
Report issues and recommendations.
`
})] : []),
...(failedDimensions.feasibility ? [Task({
subagent_type: "feasibility-analyzer",
description: "Verify plan feasibility",
prompt: `
Verify this implementation plan is feasible.
Plan file: ${planPath}
Use Serena MCP to check:
- All referenced files/functions exist
- Libraries are in dependencies
- Integration points match reality
- No technical blockers
Report what doesn't exist or doesn't match assumptions.
`
})] : []),
...(failedDimensions.scope ? [Task({
subagent_type: "scope-creep-detector",
description: "Check scope alignment",
prompt: `
Compare plan against original brainstorm for scope creep.
Plan file: ${planPath}
Research/brainstorm: ${researchMemoryPath}
Check for:
- Features not in original scope
- Gold-plating or over-engineering
- "While we're at it" additions
- Violations of "What We're NOT Doing"
Report scope expansions and recommend removals.
`
})] : []),
...(failedDimensions.quality ? [Task({
subagent_type: "quality-validator",
description: "Validate plan quality",
prompt: `
Check this implementation plan for quality issues.
Plan file: ${planPath}
Check for:
- Vague language vs. specific actions
- Missing file:line references
- Untestable success criteria
- Incomplete code examples
Report specific quality issues and improvements.
`
})] : [])
])
```
## Phase 3: Interactive Refinement
Present findings conversationally (like brainstorming skill):
```markdown
I've reviewed the plan. Here's what I found:
**Completeness: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
**Quality: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
**Feasibility: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
**Scope: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
${if any FAIL:}
Let's address these issues. Starting with ${most-critical-dimension}:
Q1: ${specific-question}
A) ${option-1}
B) ${option-2}
C) ${option-3}
```
### Question Flow
Ask **one question at a time**, wait for answer, then next question.
For each issue:
1. Explain the problem clearly
2. Offer 2-4 concrete options
3. Allow "other" for custom response
4. Apply user's decision immediately
5. Update plan if changes agreed
6. Move to next issue
### Refinement Loop
After addressing all issues:
1. Update plan file with agreed changes
2. Re-run Phase 1 assessment
3. If still FAIL, spawn relevant validators again
4. Continue until all dimensions PASS or user approves WARN
### Approval
When all dimensions PASS or user accepts WARN:
```markdown
Plan review complete! ✅
**Final Scores:**
- Completeness: PASS
- Quality: PASS
- Feasibility: PASS
- Scope: PASS
The plan is ready for execution.
```
If user approved with WARN:
```markdown
Plan review complete with warnings.
**Final Scores:**
- Completeness: PASS
- Quality: WARN (minor file path gaps)
- Feasibility: PASS
- Scope: PASS
Warnings acknowledged. Proceeding with execution.
```
## Plan Updates
When making changes based on review:
1. Read current plan file
2. Apply specific edits using Edit tool
3. Preserve structure and formatting
4. Update only sections with issues
5. Verify changes with re-read
Don't rewrite entire plan - use targeted edits.
## Exit Conditions
**Success:** All PASS or user approves WARN
**Abort:** User requests to skip review and execute anyway
**Restart:** User requests major redesign (back to brainstorm)
## Example Session
```markdown
I've reviewed the plan. Here's what I found:
**Completeness: ⚠️ WARN**
- Phase 2 lacks automated success criteria
**Quality: ✅ PASS**
**Feasibility: ❌ FAIL**
- Plan assumes `src/auth/handler.py` exists - NOT FOUND
**Scope: ❌ FAIL**
- Plan includes "admin dashboard" but brainstorm only mentioned "user dashboard"
Let's address these issues. Starting with scope:
Q1: The plan includes an admin dashboard, but your brainstorm only mentioned a user dashboard. Should we:
A) Remove admin dashboard (stick to original scope)
B) Keep it (intentional expansion)
C) Split admin dashboard into separate plan
User: A
Got it. I'll remove the admin dashboard from the plan.
[Updates plan, removes admin dashboard task]
Next, feasibility:
Q2: The plan references `src/auth/handler.py` but this file doesn't exist. I found `src/security/auth.py:45` with similar functionality. Should we:
A) Create new `src/auth/handler.py` as planned
B) Use existing `src/security/auth.py` instead
C) Rename existing file to match plan
User: B
Makes sense. I'll update the plan to use `src/security/auth.py`.
[Updates plan with correct file paths]
Finally, completeness:
Q3: Phase 2 doesn't have automated success criteria. Should we add:
A) `make test-auth` command to verify
B) Specific pytest command
C) Manual testing steps only
User: A
Perfect. I'll add `make test-auth` verification to Phase 2.
[Updates plan with verification command]
Re-checking plan with fixes...
**Final Scores:**
- Completeness: ✅ PASS
- Quality: ✅ PASS
- Feasibility: ✅ PASS
- Scope: ✅ PASS
Plan review complete! The plan is ready for execution.
```

412
skills/pr-creation/SKILL.md Normal file
View File

@@ -0,0 +1,412 @@
---
name: pr-creation
description: Use when creating pull requests to auto-generate PR descriptions from plan, execution context, and memory - handles pre-flight checks, description generation, and GitHub CLI integration
---
# PR Creation
Use this skill to create pull requests with auto-generated descriptions from plan, execution context, and memory.
## Pre-flight Checks
Run these checks BEFORE generating PR description:
### 1. Branch Check
```bash
# Get current branch
branch=$(git branch --show-current)
# Check if on main/master
if [[ "$branch" == "main" || "$branch" == "master" ]]; then
ERROR: Cannot create PR from main/master branch
Must be on feature branch
exit 1
fi
```
**Error message:**
```markdown
❌ Cannot create PR from main/master branch.
You're currently on: ${branch}
Create a feature branch first:
git checkout -b feature/${feature-name}
Or if work is already done:
git checkout -b feature/${feature-name}
(commits stay with you)
```
### 2. Uncommitted Changes Check
```bash
# Check for uncommitted changes
git status --short
# If output exists
if [[ -n $(git status --short) ]]; then
WARN: Uncommitted changes found
Offer to commit before PR
fi
```
**Warning message:**
```markdown
⚠️ You have uncommitted changes:
${git status --short output}
Options:
A) Commit changes now
B) Stash and create PR anyway
C) Cancel PR creation
Choose: (A/B/C)
```
If **A**: Run commit process, then continue
If **B**: Stash changes, continue (warn they're not in PR)
If **C**: Exit PR creation
### 3. Remote Tracking Check
```bash
# Check if branch has remote
git rev-parse --abbrev-ref --symbolic-full-name @{u}
# If fails (no remote tracking)
if [[ $? -ne 0 ]]; then
INFO: No remote tracking branch
Will push with -u flag
fi
```
### 4. GitHub CLI Check
```bash
# Check gh installed
which gh
# Check gh authenticated
gh auth status
```
**Error if missing:**
```markdown
❌ GitHub CLI (gh) not found or not authenticated.
Install:
macOS: brew install gh
Linux: sudo apt install gh
Authenticate:
gh auth login
```
## PR Description Generation
Auto-generate from multiple sources:
### Source Priority
1. **Plan file:** `docs/plans/YYYY-MM-DD-<feature>.md`
2. **Complete memory:** `YYYY-MM-DD-<feature>-complete.md` (if exists)
3. **Git diff:** For files changed summary
4. **Commit messages:** For timeline context
### Template Structure
```markdown
## Summary
${extract-from-plan-overview}
## Implementation Details
${synthesize-from-plan-phases-and-execution}
### What Changed
${git-diff-stat-summary}
### Key Files
- \`${file-1}\`: ${purpose-from-plan}
- \`${file-2}\`: ${purpose-from-plan}
### Approach
${extract-from-plan-architecture-or-approach}
## Testing
${extract-from-plan-testing-strategy}
### Verification
${if-complete-memory-exists:}
- ✅ All unit tests passing
- ✅ Integration tests passing
- ✅ Manual verification completed
${else:}
- [ ] Unit tests: \`${test-command}\`
- [ ] Integration tests: \`${test-command}\`
- [ ] Manual verification: ${steps}
## Key Learnings
${if-complete-memory-exists:}
${extract-learnings-section}
${if-patterns-discovered:}
### Patterns Discovered
- ${pattern-1}
${if-gotchas:}
### Gotchas Encountered
- ${gotcha-1}
## References
- Implementation plan: \`docs/plans/${plan-file}\`
${if-tasks-exist:}
- Tasks completed: \`docs/plans/tasks/${feature}/\`
${if-research-exists:}
- Research: \`${research-memory-file}\`
---
🔥 Generated with [CrispyClaude](https://github.com/seanGSISG/crispy-claude)
```
### Extraction Logic
**Summary (from plan):**
```typescript
// Read plan file
const plan = readFile(`docs/plans/${planFile}`)
// Extract content under ## Overview or ## Goal
const summary = extractSection(plan, ['Overview', 'Goal'])
// Take first 2-3 sentences
return summary.split('.').slice(0, 3).join('.') + '.'
```
**What Changed (from git):**
```bash
# Get diff stat
git diff --stat main...HEAD
# Get major files (top 5 by lines changed)
git diff --numstat main...HEAD | sort -k1 -rn | head -5
```
**Approach (from plan):**
```typescript
// Extract from plan sections
const approach = extractSection(plan, [
'Architecture',
'Approach',
'Implementation Approach',
'Technical Approach'
])
```
**Testing (from plan):**
```typescript
// Extract testing sections
const testing = extractSection(plan, [
'Testing Strategy',
'Testing',
'Verification',
'Test Plan'
])
// Include make commands found in plan
const testCommands = extractCommands(plan, ['make test', 'pytest', 'npm test'])
```
**Key Learnings (from complete.md):**
```typescript
// If complete memory exists
const complete = readMemory(`${feature}-complete.md`)
// Extract learnings section
const learnings = extractSection(complete, [
'Key Learnings',
'Patterns Discovered',
'Gotchas Encountered',
'Trade-offs Made'
])
```
## Push and Create PR
### Push Branch
```bash
# Check if remote tracking exists
remote_tracking=$(git rev-parse --abbrev-ref --symbolic-full-name @{u} 2>/dev/null)
if [[ -z "$remote_tracking" ]]; then
# No remote tracking, push with -u
git push -u origin $(git branch --show-current)
else
# Remote tracking exists, regular push
git push
fi
```
### Create PR with gh
```bash
# Create PR with generated description
gh pr create \
--title "${PR_TITLE}" \
--body "$(cat <<'EOF'
${GENERATED_DESCRIPTION}
EOF
)"
```
**PR Title Generation:**
```typescript
// Extract feature name from plan filename
// docs/plans/2025-11-20-user-authentication.md → "User Authentication"
const featureName = planFile
.replace(/^\d{4}-\d{2}-\d{2}-/, '') // Remove date
.replace(/\.md$/, '') // Remove extension
.replace(/-/g, ' ') // Hyphens to spaces
.replace(/\b\w/g, c => c.toUpperCase()) // Title case
// PR title: "feat: ${featureName}"
const prTitle = `feat: ${featureName}`
```
### Success Output
```markdown
✅ Pull request created successfully!
**PR:** ${pr-url}
**Branch:** ${branch-name}
**Base:** main
**Description preview:**
${first-3-lines-of-description}
View PR: ${pr-url}
```
### Update Complete Memory
If complete.md exists, add PR link:
```typescript
// Read complete memory
const complete = readMemory(`${feature}-complete.md`)
// Add PR link section if not present
if (!complete.includes('## PR Created')) {
const updated = complete + `\n\n## PR Created\n\nLink: ${prUrl}\nCreated: ${date}\n`
// Write back to memory
writeMemory(`${feature}-complete.md`, updated)
}
```
## Error Handling
**Push fails:**
```markdown
❌ Failed to push branch to remote.
Error: ${error-message}
Common fixes:
- Check remote is configured: \`git remote -v\`
- Check authentication: \`git remote set-url origin git@github.com:user/repo.git\`
- Force push if rebased: \`git push --force-with-lease\`
```
**gh pr create fails:**
```markdown
❌ Failed to create pull request.
Error: ${error-message}
Common fixes:
- Re-authenticate: \`gh auth login\`
- Check permissions: Need write access to repository
- Check branch already has PR: \`gh pr list --head ${branch}\`
Manual PR creation:
1. Go to: https://github.com/${owner}/${repo}/compare/${branch}
2. Click "Create pull request"
3. Use this description:
${generated-description}
```
**Missing sources:**
```markdown
⚠️ Could not find implementation plan.
Searched:
- docs/plans/${date}-*.md
- Memory files
Creating PR with basic description from git history.
You may want to edit the PR description manually.
```
## Example Session
```bash
User: /cc:pr
# Pre-flight checks
✓ On feature branch: feature/user-authentication
✓ No uncommitted changes
✓ GitHub CLI authenticated
# Generating PR description...
Found sources:
- Plan: docs/plans/2025-11-20-user-authentication.md
- Memory: 2025-11-20-user-authentication-complete.md
- Git diff: 8 files changed, 450 insertions, 120 deletions
# Creating pull request...
Pushing branch to origin...
✓ Pushed feature/user-authentication
Creating PR...
✓ PR created: https://github.com/user/repo/pull/42
─────────────────────────────────
✅ Pull request created successfully!
**PR:** https://github.com/user/repo/pull/42
**Branch:** feature/user-authentication
**Base:** main
**Title:** feat: User Authentication
**Description preview:**
## Summary
Implement JWT-based user authentication with login/logout functionality...
View full PR: https://github.com/user/repo/pull/42
```

View File

@@ -0,0 +1,344 @@
---
name: project-agent-creator
description: Use when setting up project-specific agents via /cc:setup-project or when user requests custom agents for their codebase - analyzes project to create specialized, project-aware implementer agents that understand architecture, patterns, dependencies, and conventions
---
# Project Agent Creator
## Overview
**Creating project-specific agents transforms generic implementers into specialists who understand YOUR codebase.**
This skill analyzes your project and creates dedicated agents (e.g., `project-python-implementer.md`) that extend generic agents with project-specific knowledge: architecture patterns, dependencies, conventions, testing approaches, and codebase structure.
**Core principle:** Project-specific agents are generic agents + deep project context.
## When to Use
Use this skill when:
- User runs `/cc:setup-project` command
- User requests "create custom agents for my project"
- You need agents that understand project-specific architecture
- Generic agents need project context to be effective
- Setting up a new development environment
Do NOT use for:
- One-off implementations (use generic agents)
- Projects without clear patterns
- Quick prototypes or experimental code
## Project Analysis Workflow
### Phase 1: Project Detection
Detect project type and structure:
**1. Language/Framework Detection**
Check for language indicators in project root:
- Python: `requirements.txt`, `pyproject.toml`, `setup.py`, `Pipfile`
- TypeScript/JavaScript: `package.json`, `tsconfig.json`
- Go: `go.mod`, `go.sum`
- Rust: `Cargo.toml`
- Java: `pom.xml`, `build.gradle`
**2. Architecture Analysis**
Identify architecture patterns:
- Check directory structure (e.g., `src/`, `lib/`, `core/`, `app/`)
- Look for architectural markers:
- `repositories/`, `services/`, `controllers/` → Repository/Service pattern
- `domain/`, `application/`, `infrastructure/` → Clean Architecture
- `api/`, `worker/`, `web/` → Microservices
- `components/`, `hooks/`, `pages/` → React patterns
**3. Dependency Analysis**
Scan dependencies for key libraries:
- Web frameworks: FastAPI, Django, Flask, Express, NestJS
- Testing: pytest, Jest, Vitest, Go testing
- Database: SQLAlchemy, Prisma, TypeORM
- Async: asyncio, aiohttp, async/await patterns
**4. Convention Discovery**
Find existing patterns in codebase:
- Import patterns (check 5-10 files)
- Class/function naming conventions
- File organization
- Testing patterns (check `tests/` or `__tests__/`)
- Error handling approaches
### Phase 2: Interactive Presentation
**CRITICAL: Always present findings to user before generating agents.**
Create a summary showing:
```markdown
## Project Analysis Results
**Project Type:** Python with FastAPI
**Architecture:** Clean Architecture (domain/application/infrastructure)
**Key Dependencies:**
- FastAPI for API endpoints
- SQLAlchemy for database
- pytest for testing
- Pydantic for validation
**Patterns Discovered:**
- Repository pattern in `core/repositories/`
- Service layer in `core/services/`
- Dependency injection via FastAPI Depends
- Type hints throughout (mypy strict mode)
- Async/await for all I/O
**Testing Approach:**
- pytest with async support
- Fixtures in `tests/conftest.py`
- Integration tests with test database
**Agent Recommendation:**
I recommend creating `project-python-implementer.md` that:
- Understands your Clean Architecture structure
- Uses repository pattern from `core/repositories/`
- Follows your async patterns
- Knows your testing conventions
```
Ask user: "Should I create this project-specific agent?"
### Phase 3: Agent Generation
**Agent Structure:**
```yaml
---
name: project-{language}-implementer
model: sonnet
description: {Language} implementation specialist for THIS project. Understands {project-specific-patterns}. Use for implementing {language} code in this project.
tools: Read, Write, MultiEdit, Bash, Grep
---
```
**Agent Content Template:**
```markdown
You are a {LANGUAGE} implementation specialist for THIS specific project.
## Project Context
**Architecture:** {discovered architecture}
**Key Patterns:**
- {pattern 1}
- {pattern 2}
- {pattern 3}
**Directory Structure:**
- `{dir1}/` - {purpose}
- `{dir2}/` - {purpose}
## Critical Project-Specific Rules
### 1. Architecture Adherence
{Explain how to follow the project's architecture}
Example:
- **Repository Pattern:** All data access goes through repositories in `core/repositories/`
- **Service Layer:** Business logic lives in `core/services/`
- **Dependency Injection:** Use FastAPI's Depends() for all dependencies
### 2. Import Conventions
{Show actual import patterns from project}
Example from this project:
```python
from core.repositories.user_repository import UserRepository
from core.services.auth_service import AuthService
from domain.models.user import User
```
### 3. Testing Requirements
{Explain project testing approach}
Example:
- All services need unit tests in `tests/unit/`
- Use fixtures from `tests/conftest.py`
- Integration tests in `tests/integration/` with test database
- Async tests use `@pytest.mark.asyncio`
### 4. Error Handling
{Show project error handling pattern}
Example:
```python
# Project uses custom exception hierarchy
from core.exceptions import (
ApplicationError,
ValidationError,
NotFoundError
)
```
### 5. Type Safety
{Explain type checking approach}
Example:
- mypy strict mode required
- All functions have type hints
- Use Pydantic models for validation
## Project-Specific Patterns
{Include 2-3 code examples from actual project showing preferred patterns}
### Pattern 1: Repository Usage
{Show actual repository code from project}
### Pattern 2: Service Implementation
{Show actual service code from project}
### Pattern 3: API Endpoint Pattern
{Show actual endpoint code from project}
## Quality Checklist
Before completing implementation:
Generic {language} checklist items:
- [ ] {standard language-specific checks}
PROJECT-SPECIFIC checks:
- [ ] Follows {project architecture} structure
- [ ] Uses {project pattern} from `{directory}/`
- [ ] Follows import conventions
- [ ] Tests match project testing patterns
- [ ] Error handling uses project exception hierarchy
- [ ] {Other project-specific requirements}
## File Locations
When implementing features:
- Models/Domain: `{actual path}`
- Repositories: `{actual path}`
- Services: `{actual path}`
- API endpoints: `{actual path}`
- Tests: `{actual path}`
**ALWAYS check these directories first before creating new files.**
## Never Do These (Project-Specific)
Beyond generic {language} anti-patterns:
1. **Never create repositories outside `{repo path}`** - Breaks architecture
2. **Never skip {project pattern}** - Required by our design
3. **Never use {anti-pattern found in codebase}** - Project is moving away from this
4. **{Other project-specific anti-patterns}**
{Include base generic agent content as fallback}
```
**Save Location:** `.claude/agents/project-{language}-implementer.md`
## Implementation Steps
**Use TodoWrite to create todos for each step:**
1. [ ] Detect project type (language, framework, architecture)
2. [ ] Analyze dependencies and key libraries
3. [ ] Discover patterns by reading sample files
4. [ ] Identify testing approach and conventions
5. [ ] Create analysis summary
6. [ ] Present findings to user interactively
7. [ ] Get user approval to generate agent
8. [ ] Generate agent using template + project context
9. [ ] Write agent to `.claude/agents/project-{language}-implementer.md`
10. [ ] Confirm agent creation with user
## Examples
### Example 1: Python FastAPI Project
**Input:** Python project with FastAPI, SQLAlchemy, Clean Architecture
**Analysis:**
- Detected: Python 3.11, FastAPI, SQLAlchemy, pytest
- Architecture: Clean Architecture (domain/application/infrastructure)
- Patterns: Repository pattern, dependency injection, async/await
**Generated Agent:** `project-python-implementer.md` that:
- Knows to use repositories from `core/repositories/`
- Understands service layer in `core/services/`
- Follows async patterns throughout
- Uses project's custom exception hierarchy
### Example 2: TypeScript React Project
**Input:** TypeScript project with React, Vite, TailwindCSS
**Analysis:**
- Detected: TypeScript 5.x, React 18, Vite, TailwindCSS
- Architecture: Component-based with custom hooks
- Patterns: Compound components, render props, context for state
**Generated Agent:** `project-typescript-implementer.md` that:
- Uses project component patterns
- Follows TailwindCSS conventions
- Knows custom hooks location
- Understands project state management approach
## Common Mistakes
### ❌ Generic Analysis
Creating agent without deep project understanding
**Fix:** Read actual code files to discover patterns
### ❌ Skipping User Approval
Generating agent without presenting findings
**Fix:** Always show analysis summary and get approval
### ❌ Too Generic
Agent doesn't include specific patterns from project
**Fix:** Include 2-3 actual code examples from codebase
### ❌ Missing Anti-Patterns
Not documenting what NOT to do
**Fix:** Note patterns project is moving away from
## Integration with writing-skills
**REQUIRED BACKGROUND:** Understanding `writing-skills` helps create better agents.
Agent creation follows similar principles:
- Test with real implementation tasks
- Iterate based on what agents struggle with
- Add explicit counters for common mistakes
- Include actual project code examples
## Quality Gates
Before considering agent complete:
- [ ] Agent includes actual code examples from project (not generic templates)
- [ ] Architecture patterns are specific and actionable
- [ ] File locations are exact paths from project
- [ ] Testing approach matches actual test files
- [ ] Import conventions match actual imports
- [ ] User has approved the agent
- [ ] Agent saved to correct location
## Naming Convention
**CRITICAL:** Use `project-{language}-implementer` naming:
- ✅ `project-python-implementer.md`
- ✅ `project-typescript-implementer.md`
- ✅ `project-go-implementer.md`
- ❌ `python-implementer-custom.md` (breaks convention)
- ❌ `my-python-agent.md` (unclear purpose)
The `project-` prefix ensures:
- No conflicts with generic agents
- Clear indication of project-specific knowledge
- Consistent discovery pattern

View File

@@ -0,0 +1,571 @@
---
name: project-skill-creator
description: Use when setting up project-specific skills via /cc:setup-project or when user requests custom skills for their codebase - analyzes project to create specialized skills that capture architecture knowledge, coding conventions, testing patterns, and deployment workflows
---
# Project Skill Creator
## Overview
**Creating project-specific skills captures institutional knowledge and patterns that generic skills cannot provide.**
This skill analyzes your project and creates specialized skills (e.g., `project-architecture`, `project-conventions`, `project-testing`) that document project-specific patterns, standards, and workflows that all agents and future developers should follow.
**Core principle:** Project-specific skills transform tribal knowledge into discoverable documentation.
**REQUIRED BACKGROUND:** Use `writing-skills` for skill creation methodology. This skill applies those principles to project-specific content.
## When to Use
Use this skill when:
- User runs `/cc:setup-project` command
- User requests "create custom skills for my project"
- Project has unique architecture or patterns to document
- Team needs standardized conventions
- Onboarding new developers or agents
Do NOT use for:
- Generic patterns already covered by existing skills
- One-off projects without reusable patterns
- Projects still in early prototyping phase
## Project-Specific Skills to Create
### 1. project-architecture
**Purpose:** Document and explain the system architecture
**When to create:** Always (if project has clear architecture)
**Content:**
- High-level architecture diagram/description
- Component responsibilities
- Data flow and interactions
- Key design decisions and trade-offs
- How to extend the architecture
**Example trigger:** "Use when implementing features, adding components, or understanding system design - documents THIS project's architecture, component responsibilities, and design patterns"
### 2. project-conventions
**Purpose:** Capture code style and naming conventions
**When to create:** If project has specific conventions beyond standard linters
**Content:**
- Naming conventions (files, classes, functions)
- Code organization patterns
- Import/export conventions
- Comment and documentation standards
- File/directory structure rules
**Example trigger:** "Use when writing new code in this project - enforces naming conventions, code organization, and style patterns specific to this codebase"
### 3. project-testing
**Purpose:** Document testing approach and patterns
**When to create:** If project has specific testing patterns
**Content:**
- Testing philosophy and requirements
- Test organization (unit/integration/e2e)
- Fixture patterns and test utilities
- Mocking strategies
- Coverage requirements
- Example test patterns from project
**Example trigger:** "Use when writing tests - follows THIS project's testing patterns, fixture conventions, and test organization structure"
### 4. project-deployment
**Purpose:** Document build, release, and deployment process
**When to create:** If project has specific deployment workflow
**Content:**
- Build process and commands
- Environment setup
- Deployment steps
- Release checklist
- CI/CD pipeline explanation
- Rollback procedures
**Example trigger:** "Use when deploying, releasing, or setting up environments - documents THIS project's build process, deployment steps, and release workflow"
### 5. project-domain
**Purpose:** Capture domain-specific knowledge
**When to create:** If project has unique business domain
**Content:**
- Domain terminology and glossary
- Business rules and constraints
- Domain models and relationships
- Common workflows and use cases
- Domain-specific patterns
**Example trigger:** "Use when implementing business logic or understanding domain concepts - documents THIS project's business domain, terminology, and domain-specific rules"
## Project Analysis Workflow
### Phase 1: Project Understanding
**1. Architecture Discovery**
Analyze project structure:
```bash
# Get overview of directory structure
ls -R | head -100
# Look for architecture documentation
find . -name "README*" -o -name "ARCHITECTURE*" -o -name "DESIGN*"
# Analyze directory organization
tree -d -L 3
```
Identify architectural patterns:
- Monolith vs microservices
- Layered architecture (presentation/business/data)
- Clean architecture (domain/application/infrastructure)
- MVC, MVVM, or other patterns
- Frontend architecture (components, state management)
**2. Convention Discovery**
Sample 10-15 files to find patterns:
- File naming: kebab-case, snake_case, PascalCase?
- Class naming conventions
- Function naming patterns
- Import organization
- Comment styles
**3. Testing Pattern Discovery**
Examine test files:
```bash
# Find test files
find . -name "*test*" -o -name "*spec*" | head -20
# Read sample tests to understand patterns
```
Identify:
- Test organization structure
- Fixture patterns
- Mocking approach
- Assertion style
- Test naming conventions
**4. Deployment Process Discovery**
Check for:
- CI/CD configuration (`.github/workflows`, `.gitlab-ci.yml`)
- Build scripts (`package.json scripts`, `Makefile`, `justfile`)
- Docker setup (`Dockerfile`, `docker-compose.yml`)
- Deployment documentation
### Phase 2: Interactive Planning
**CRITICAL: Present findings and proposed skills to user.**
Create summary:
```markdown
## Project Analysis Summary
**Architecture Pattern:** {discovered pattern}
**Key Components:**
- {component 1} - {purpose}
- {component 2} - {purpose}
**Conventions Found:**
- File naming: {pattern}
- Class naming: {pattern}
- Import organization: {pattern}
**Testing Approach:**
- Test organization: {structure}
- Fixture patterns: {description}
- Coverage: {requirements}
**Deployment:**
- Build process: {description}
- CI/CD: {platform and approach}
## Recommended Skills
I recommend creating these project-specific skills:
1. **project-architecture** - Document {architecture pattern} and component responsibilities
2. **project-conventions** - Capture {naming} and {organization} conventions
3. **project-testing** - Document {testing approach} and fixture patterns
4. **project-deployment** - Document {build process} and deployment steps
Should I create these skills?
```
Get user approval before proceeding.
### Phase 3: Skill Generation
For each approved skill, follow `writing-skills` methodology:
**1. Create Skill Directory**
```bash
mkdir -p .claude/skills/project-{skill-name}
```
**2. Generate SKILL.md**
Use template:
```yaml
---
name: project-{skill-name}
description: Use when {specific triggers} - {what this skill provides specific to THIS project}
---
# Project {Skill Name}
## Overview
{One paragraph explaining what this skill provides for THIS project}
**Core principle:** {Key principle from project}
## {Main Content Sections}
{Content specific to this skill type - see templates below}
## When NOT to Use
- {Situations where this skill doesn't apply}
- {Edge cases or exceptions}
## Examples from This Project
{2-3 concrete examples from actual project code}
## Common Mistakes
{Project-specific anti-patterns to avoid}
```
**3. Populate with Project-Specific Content**
**For project-architecture:**
```markdown
## Architecture Overview
{High-level description}
**Pattern:** {Architecture pattern name}
## Component Responsibilities
### {Component 1}
**Location:** `{path}`
**Purpose:** {what it does}
**Dependencies:** {what it depends on}
### {Component 2}
{...}
## Data Flow
{How data moves through the system}
## Key Design Decisions
### Decision: {Decision name}
**Rationale:** {why this decision was made}
**Trade-offs:** {what we gave up}
**Alternatives considered:** {what else was considered}
## Extending the Architecture
When adding new features:
1. {Step 1 with specific guidance}
2. {Step 2}
{Include actual examples from project}
```
**For project-conventions:**
```markdown
## File Naming
**Pattern:** {discovered pattern}
Examples from this project:
- {actual file 1}
- {actual file 2}
## Class/Component Naming
**Pattern:** {discovered pattern}
Examples:
```{language}
{actual code example 1}
{actual code example 2}
```
## Import Organization
**Pattern:** {discovered pattern}
Standard import order:
```{language}
{actual example from project}
```
## Code Organization
**Pattern:** One {unit} per file
Example structure:
```
{actual directory tree from project}
```
## Documentation Standards
{How comments and docs should be written}
Example:
```{language}
{actual documented code from project}
```
```
**For project-testing:**
```markdown
## Testing Philosophy
{What project values in tests}
## Test Organization
```
{actual test directory structure}
```
**Unit tests:** `{location}` - {what they test}
**Integration tests:** `{location}` - {what they test}
**E2E tests:** `{location}` - {what they test}
## Fixture Patterns
{Describe how project uses fixtures}
Example from this project:
```{language}
{actual fixture code}
```
## Test Naming
**Pattern:** {discovered pattern}
Examples:
- {actual test name 1}
- {actual test name 2}
## Mocking Strategy
{How project handles mocking}
Example:
```{language}
{actual mock code}
```
## Coverage Requirements
{Project coverage standards}
## Running Tests
```bash
{actual commands from project}
```
```
**For project-deployment:**
```markdown
## Build Process
**Command:** `{actual command}`
**Steps:**
1. {step 1 from actual build}
2. {step 2}
## Environment Setup
{How to configure environments}
## Deployment Steps
### Development
```bash
{actual deployment commands}
```
### Production
```bash
{actual production deployment}
```
## CI/CD Pipeline
{Describe actual CI/CD setup}
**Stages:**
1. {stage 1} - {what happens}
2. {stage 2} - {what happens}
## Release Checklist
- [ ] {actual checklist item 1}
- [ ] {actual checklist item 2}
## Rollback Procedure
{How to rollback in this project}
```
## Implementation Steps
**Use TodoWrite to create todos for each step:**
1. [ ] Analyze project architecture and structure
2. [ ] Discover naming and code conventions
3. [ ] Examine testing patterns and approach
4. [ ] Review deployment and build process
5. [ ] Identify domain-specific knowledge to capture
6. [ ] Create analysis summary
7. [ ] Present findings and proposed skills to user
8. [ ] Get user approval for skill creation
9. [ ] For each approved skill:
- [ ] Create skill directory
- [ ] Generate SKILL.md with frontmatter
- [ ] Populate with project-specific examples
- [ ] Include actual code from project
10. [ ] Confirm skills created with user
## Examples
### Example 1: FastAPI Project
**Analysis:**
- Clean Architecture (domain/application/infrastructure)
- Repository pattern for data access
- Dependency injection via FastAPI
- pytest with async support
**Skills Created:**
1. `project-architecture` - Documents Clean Architecture layers, repository pattern usage
2. `project-testing` - Documents pytest async patterns, fixture conventions
3. `project-deployment` - Documents Docker build, migrations, deployment to AWS
### Example 2: React Project
**Analysis:**
- Component-based architecture
- Custom hooks in `src/hooks/`
- Compound component patterns
- Vitest for testing
**Skills Created:**
1. `project-architecture` - Documents component structure, state management approach
2. `project-conventions` - Documents component naming, hook conventions, file organization
3. `project-testing` - Documents Vitest setup, testing-library patterns
## Quality Checklist
Before considering skills complete:
- [ ] All skills include actual code examples from project
- [ ] Frontmatter description includes specific triggers
- [ ] Content is project-specific (not generic advice)
- [ ] File paths and commands are exact (not placeholders)
- [ ] Examples are real code from the project
- [ ] "When NOT to Use" section included
- [ ] User has approved all skills
- [ ] Skills saved to `.claude/skills/project-{name}/`
## Common Mistakes
### ❌ Generic Content
Writing generic advice instead of project-specific patterns
**Fix:** Include actual code examples and exact file paths
### ❌ Placeholder Examples
Using generic placeholders like `{your-component}.tsx`
**Fix:** Use actual filenames and code from project
### ❌ Missing Context
Not explaining WHY patterns exist
**Fix:** Document decisions, trade-offs, and rationale
### ❌ Too Verbose
Creating encyclopedic documentation
**Fix:** Keep skills focused and scannable (< 500 lines)
### ❌ Skipping Approval
Generating skills without user review
**Fix:** Always present findings and get approval
## Integration with writing-skills
Follow these `writing-skills` principles:
- **Test-driven:** Verify skills help agents solve real tasks
- **Concise:** Target < 500 words for frequently-loaded skills
- **Examples over explanation:** Show actual project code
- **Cross-reference:** Reference other skills by name
- **Claude Search Optimization:** Rich description with triggers
## Skill Naming Convention
**CRITICAL:** Use `project-{name}` format:
- ✅ `project-architecture`
- ✅ `project-conventions`
- ✅ `project-testing`
- ✅ `project-deployment`
- ✅ `project-domain`
- ❌ `architecture-guide` (not discoverable as project-specific)
- ❌ `my-testing-patterns` (unclear scope)
The `project-` prefix ensures:
- Clear distinction from generic skills
- Consistent discovery pattern
- No naming conflicts
## Updating Skills
As project evolves, skills need updates:
**Triggers for updates:**
- Architecture changes
- New conventions adopted
- Testing approach evolves
- Deployment process changes
**Update workflow:**
1. Identify what changed
2. Update skill content
3. Add changelog note in skill
4. Test with real scenarios
## Storage Location
**All project-specific skills:** `.claude/skills/project-{name}/SKILL.md`
**Never store in:**
- `cc/skills/` (that's for generic CrispyClaude skills)
- Project source directories (not discoverable)
- Documentation folder (wrong tool for the job)

View File

@@ -0,0 +1,209 @@
---
name: receiving-code-review
description: Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation
---
# Code Review Reception
## Overview
Code review requires technical evaluation, not emotional performance.
**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.
## The Response Pattern
```
WHEN receiving code review feedback:
1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
```
## Forbidden Responses
**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)
**INSTEAD:**
- Restate the technical requirement
- Ask clarifying questions
- Push back with technical reasoning if wrong
- Just start working (actions > words)
## Handling Unclear Feedback
```
IF any item is unclear:
STOP - do not implement anything yet
ASK for clarification on unclear items
WHY: Items may be related. Partial understanding = wrong implementation.
```
**Example:**
```
your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.
❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."
```
## Source-Specific Handling
### From your human partner
- **Trusted** - implement after understanding
- **Still ask** if scope unclear
- **No performative agreement**
- **Skip to action** or technical acknowledgment
### From External Reviewers
```
BEFORE implementing:
1. Check: Technically correct for THIS codebase?
2. Check: Breaks existing functionality?
3. Check: Reason for current implementation?
4. Check: Works on all platforms/versions?
5. Check: Does reviewer understand full context?
IF suggestion seems wrong:
Push back with technical reasoning
IF can't easily verify:
Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"
IF conflicts with your human partner's prior decisions:
Stop and discuss with your human partner first
```
**your human partner's rule:** "External feedback - be skeptical, but check carefully"
## YAGNI Check for "Professional" Features
```
IF reviewer suggests "implementing properly":
grep codebase for actual usage
IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
IF used: Then implement properly
```
**your human partner's rule:** "You and reviewer both report to me. If we don't need this feature, don't add it."
## Implementation Order
```
FOR multi-item feedback:
1. Clarify anything unclear FIRST
2. Then implement in this order:
- Blocking issues (breaks, security)
- Simple fixes (typos, imports)
- Complex fixes (refactoring, logic)
3. Test each fix individually
4. Verify no regressions
```
## When To Push Back
Push back when:
- Suggestion breaks existing functionality
- Reviewer lacks full context
- Violates YAGNI (unused feature)
- Technically incorrect for this stack
- Legacy/compatibility reasons exist
- Conflicts with your human partner's architectural decisions
**How to push back:**
- Use technical reasoning, not defensiveness
- Ask specific questions
- Reference working tests/code
- Involve your human partner if architectural
**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"
## Acknowledging Correct Feedback
When feedback IS correct:
```
✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]
❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression
```
**Why no thanks:** Actions speak. Just fix it. The code itself shows you heard the feedback.
**If you catch yourself about to write "Thanks":** DELETE IT. State the fix instead.
## Gracefully Correcting Your Pushback
If you pushed back and were wrong:
```
✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."
❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining
```
State the correction factually and move on.
## Common Mistakes
| Mistake | Fix |
|---------|-----|
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Assuming reviewer is right | Check if breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |
## Real Examples
**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."
```
**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"
```
**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```
**Unclear Item (Good):**
```
your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```
## The Bottom Line
**External feedback = suggestions to evaluate, not orders to follow.**
Verify. Question. Then implement.
No performative agreement. Technical rigor always.

View File

@@ -0,0 +1,105 @@
---
name: requesting-code-review
description: Use when completing tasks, implementing major features, or before merging to verify work meets requirements - dispatches superpowers:code-reviewer subagent to review implementation against plan or requirements before proceeding
---
# Requesting Code Review
Dispatch superpowers:code-reviewer subagent to catch issues before they cascade.
**Core principle:** Review early, review often.
## When to Request Review
**Mandatory:**
- After each task in subagent-driven development
- After completing major feature
- Before merge to main
**Optional but valuable:**
- When stuck (fresh perspective)
- Before refactoring (baseline check)
- After fixing complex bug
## How to Request
**1. Get git SHAs:**
```bash
BASE_SHA=$(git rev-parse HEAD~1) # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```
**2. Dispatch code-reviewer subagent:**
Use Task tool with superpowers:code-reviewer type, fill template at `code-reviewer.md`
**Placeholders:**
- `{WHAT_WAS_IMPLEMENTED}` - What you just built
- `{PLAN_OR_REQUIREMENTS}` - What it should do
- `{BASE_SHA}` - Starting commit
- `{HEAD_SHA}` - Ending commit
- `{DESCRIPTION}` - Brief summary
**3. Act on feedback:**
- Fix Critical issues immediately
- Fix Important issues before proceeding
- Note Minor issues for later
- Push back if reviewer is wrong (with reasoning)
## Example
```
[Just completed Task 2: Add verification function]
You: Let me request code review before proceeding.
BASE_SHA=$(git log --oneline | grep "Task 1" | head -1 | awk '{print $1}')
HEAD_SHA=$(git rev-parse HEAD)
[Dispatch superpowers:code-reviewer subagent]
WHAT_WAS_IMPLEMENTED: Verification and repair functions for conversation index
PLAN_OR_REQUIREMENTS: Task 2 from docs/plans/deployment-plan.md
BASE_SHA: a7981ec
HEAD_SHA: 3df7661
DESCRIPTION: Added verifyIndex() and repairIndex() with 4 issue types
[Subagent returns]:
Strengths: Clean architecture, real tests
Issues:
Important: Missing progress indicators
Minor: Magic number (100) for reporting interval
Assessment: Ready to proceed
You: [Fix progress indicators]
[Continue to Task 3]
```
## Integration with Workflows
**Subagent-Driven Development:**
- Review after EACH task
- Catch issues before they compound
- Fix before moving to next task
**Executing Plans:**
- Review after each batch (3 tasks)
- Get feedback, apply, continue
**Ad-Hoc Development:**
- Review before merge
- Review when stuck
## Red Flags
**Never:**
- Skip review because "it's simple"
- Ignore Critical issues
- Proceed with unfixed Important issues
- Argue with valid technical feedback
**If reviewer wrong:**
- Push back with technical reasoning
- Show code/tests that prove it works
- Request clarification
See template at: requesting-code-review/code-reviewer.md

View File

@@ -0,0 +1,146 @@
# Code Review Agent
You are reviewing code changes for production readiness.
**Your task:**
1. Review {WHAT_WAS_IMPLEMENTED}
2. Compare against {PLAN_OR_REQUIREMENTS}
3. Check code quality, architecture, testing
4. Categorize issues by severity
5. Assess production readiness
## What Was Implemented
{DESCRIPTION}
## Requirements/Plan
{PLAN_REFERENCE}
## Git Range to Review
**Base:** {BASE_SHA}
**Head:** {HEAD_SHA}
```bash
git diff --stat {BASE_SHA}..{HEAD_SHA}
git diff {BASE_SHA}..{HEAD_SHA}
```
## Review Checklist
**Code Quality:**
- Clean separation of concerns?
- Proper error handling?
- Type safety (if applicable)?
- DRY principle followed?
- Edge cases handled?
**Architecture:**
- Sound design decisions?
- Scalability considerations?
- Performance implications?
- Security concerns?
**Testing:**
- Tests actually test logic (not mocks)?
- Edge cases covered?
- Integration tests where needed?
- All tests passing?
**Requirements:**
- All plan requirements met?
- Implementation matches spec?
- No scope creep?
- Breaking changes documented?
**Production Readiness:**
- Migration strategy (if schema changes)?
- Backward compatibility considered?
- Documentation complete?
- No obvious bugs?
## Output Format
### Strengths
[What's well done? Be specific.]
### Issues
#### Critical (Must Fix)
[Bugs, security issues, data loss risks, broken functionality]
#### Important (Should Fix)
[Architecture problems, missing features, poor error handling, test gaps]
#### Minor (Nice to Have)
[Code style, optimization opportunities, documentation improvements]
**For each issue:**
- File:line reference
- What's wrong
- Why it matters
- How to fix (if not obvious)
### Recommendations
[Improvements for code quality, architecture, or process]
### Assessment
**Ready to merge?** [Yes/No/With fixes]
**Reasoning:** [Technical assessment in 1-2 sentences]
## Critical Rules
**DO:**
- Categorize by actual severity (not everything is Critical)
- Be specific (file:line, not vague)
- Explain WHY issues matter
- Acknowledge strengths
- Give clear verdict
**DON'T:**
- Say "looks good" without checking
- Mark nitpicks as Critical
- Give feedback on code you didn't review
- Be vague ("improve error handling")
- Avoid giving a clear verdict
## Example Output
```
### Strengths
- Clean database schema with proper migrations (db.ts:15-42)
- Comprehensive test coverage (18 tests, all edge cases)
- Good error handling with fallbacks (summarizer.ts:85-92)
### Issues
#### Important
1. **Missing help text in CLI wrapper**
- File: index-conversations:1-31
- Issue: No --help flag, users won't discover --concurrency
- Fix: Add --help case with usage examples
2. **Date validation missing**
- File: search.ts:25-27
- Issue: Invalid dates silently return no results
- Fix: Validate ISO format, throw error with example
#### Minor
1. **Progress indicators**
- File: indexer.ts:130
- Issue: No "X of Y" counter for long operations
- Impact: Users don't know how long to wait
### Recommendations
- Add progress reporting for user experience
- Consider config file for excluded projects (portability)
### Assessment
**Ready to merge: With fixes**
**Reasoning:** Core implementation is solid with good architecture and tests. Important issues (help text, date validation) are easily fixed and don't affect core functionality.
```

View File

@@ -0,0 +1,312 @@
---
name: research-orchestration
description: Use when brainstorming completes and user selects research-first option - manages parallel research subagents (up to 4) across codebase, library docs, web, and GitHub sources, synthesizing findings and auto-saving to memory before planning
---
# Research Orchestration
Use this skill to manage parallel research subagents and synthesize findings from multiple sources.
## When to Use
After brainstorming completes and user selects "B) research first" option.
## Selection Algorithm
### Default Selection
Based on brainstorm context, intelligently select researchers:
**serena-explorer** [✓ ALWAYS]
- Always need codebase understanding
- No keywords required - default ON
**context7-researcher** [✓ if library mentioned]
- Select if: new library, framework, official docs needed
- Keywords: "using [library]", "integrate [framework]", "best practices for [tool]"
- Example: "using React hooks" → ON
**web-researcher** [✓ if patterns mentioned]
- Select if: best practices, tutorials, modern approaches, expert opinions
- Keywords: "industry standard", "common pattern", "how to", "best approach"
- Example: "authentication best practices" → ON
**github-researcher** [☐ usually OFF]
- Select if: known issues, community solutions, similar features, troubleshooting
- Keywords: "GitHub issue", "others solved", "similar to [project]", "known problems"
- Example: "known issues with SSR" → ON
### User Presentation
Present recommendations with context:
```markdown
Based on the brainstorm, I recommend these researchers:
[✓] Codebase (serena-explorer)
→ Understand current architecture and integration points
[✓] Library docs (context7-researcher)
→ React hooks patterns and official recommendations
[✓] Web (web-researcher)
→ Authentication best practices and security patterns
[ ] GitHub (github-researcher)
→ Not needed unless we hit specific issues
Adjust selection? (Y/n)
```
If **Y**: Interactive toggle
```
Toggle researchers: (C)odebase (L)ibrary (W)eb (G)itHub (D)one
User input: L G D
Result: Toggled OFF context7-researcher, ON github-researcher, Done
```
If **n**: Use defaults and proceed
## Spawning Subagents
**Run up to 4 in parallel** using Task tool:
```typescript
// Spawn all selected researchers in parallel
const results = await Promise.all([
// Always spawn serena-explorer
Task({
subagent_type: "serena-explorer",
description: "Explore codebase architecture",
prompt: `
Analyze the current codebase for ${feature} implementation.
Find:
- Current architecture relevant to ${feature}
- Similar existing implementations we can learn from
- Integration points where ${feature} should hook in
- Patterns used in similar features
Provide all findings with file:line references.
`
}),
// Conditionally spawn context7-researcher
...(useContext7 ? [Task({
subagent_type: "context7-researcher",
description: "Research library documentation",
prompt: `
Research official documentation for ${libraries}.
Find:
- Recommended patterns for ${useCase}
- API best practices and examples
- Security considerations
- Performance recommendations
Include Context7 IDs, benchmark scores, and code examples.
`
})] : []),
// Conditionally spawn web-researcher
...(useWeb ? [Task({
subagent_type: "web-researcher",
description: "Research best practices",
prompt: `
Search for ${topic} best practices and expert opinions.
Find:
- Industry standard approaches for ${useCase}
- Recent articles (2024-2025) on ${topic}
- Expert recommendations with rationale
- Common gotchas and solutions
Cite sources with authority assessment and publication dates.
`
})] : []),
// Conditionally spawn github-researcher
...(useGithub ? [Task({
subagent_type: "github-researcher",
description: "Research GitHub issues/PRs",
prompt: `
Search GitHub for ${topic} issues and solutions.
Find:
- Closed issues related to ${problem}
- Merged PRs implementing ${feature}
- Community discussions on ${topic}
- Known gotchas and workarounds
Focus on ${relevantRepos} repositories.
Provide issue links, status, and consensus solutions.
`
})] : [])
])
```
**Key points:**
- All spawned in single Task call block (parallel execution)
- Each has specific prompt tailored to feature context
- Prompts reference brainstorm decisions
- Results returned when all complete
## Synthesis
After all subagents complete, synthesize findings:
### Structure
```markdown
# Research: ${feature-name}
## Brainstorm Summary
${brief-summary-of-brainstorm-decisions}
## Codebase Findings (serena-explorer)
### Current Architecture
- **${component}:** `${file}:${line}`
- ${description}
### Similar Implementations
- **${existing-feature}:** `${file}:${line}`
- ${pattern-used}
- ${why-relevant}
### Integration Points
- **${location}:** `${file}:${line}`
- ${how-to-hook-in}
## Library Documentation (context7-researcher)
### ${Library-Name}
**Context7 ID:** ${id}
**Benchmark Score:** ${score}
**Relevant APIs:**
- **${api-name}:** ${description}
```${lang}
${code-example}
```
**Best Practices:**
1. ${practice-1}
2. ${practice-2}
## Web Research (web-researcher)
### ${Topic}
**Source:** ${author} - "${title}" (${date})
**Authority:** ${stars} (${justification})
**URL:** ${url}
**Key Recommendations:**
1. **${recommendation}**
> "${quote}"
- ${implementation-detail}
**Trade-offs:**
- ${trade-off-1}
- ${trade-off-2}
## GitHub Research (github-researcher)
### ${Issue-Topic}
**Source:** ${repo}#${number} (${status})
**URL:** ${url}
**Problem:** ${description}
**Solution:**
```${lang}
${code-example}
```
**Caveats:**
- ${caveat-1}
- ${caveat-2}
## Synthesis
### Recommended Approach
Based on all research, recommend ${approach} because:
1. **Codebase fit:** ${how-it-fits-existing-patterns}
2. **Library support:** ${official-patterns-available}
3. **Industry proven:** ${expert-consensus}
4. **Community validated:** ${github-evidence}
### Key Decisions
- **${decision-1}:** ${rationale}
- **${decision-2}:** ${rationale}
### Risks & Mitigations
- **Risk:** ${risk}
- **Mitigation:** ${mitigation}
## Next Steps
Ready to write implementation plan with this research context.
```
## Auto-Save
After synthesis completes, automatically save to memory:
```typescript
// Use state-persistence skill
await saveResearchMemory({
feature: extractFeatureName(brainstorm),
content: synthesizedResearch,
type: "research"
})
```
**Filename:** `YYYY-MM-DD-${feature-name}-research.md`
**Location:** Serena MCP memory (via write_memory tool)
## Handoff
After save completes, report to user:
```markdown
Research complete and saved to memory: ${filename}
I've synthesized findings from ${count} sources:
- Codebase: ${summary-of-serena-findings}
- Library docs: ${summary-of-context7-findings}
- Web: ${summary-of-web-findings}
- GitHub: ${summary-of-github-findings}
Key recommendation: ${one-sentence-approach}
Ready to write the implementation plan with this research context.
```
Then invoke `writing-plans` skill automatically.
## Error Handling
**If subagent fails:**
1. Continue with other subagents
2. Note missing research in synthesis
3. Offer to re-run failed researcher
**If no results found:**
1. Note in synthesis
2. Don't block workflow
3. Proceed with available research
**If all subagents fail:**
1. Report failure
2. Offer to proceed without research
3. User can retry or continue to planning

View File

@@ -0,0 +1,174 @@
---
name: root-cause-tracing
description: Use when errors occur deep in execution and you need to trace back to find the original trigger - systematically traces bugs backward through call stack, adding instrumentation when needed, to identify source of invalid data or incorrect behavior
---
# Root Cause Tracing
## Overview
Bugs often manifest deep in the call stack (git init in wrong directory, file created in wrong location, database opened with wrong path). Your instinct is to fix where the error appears, but that's treating a symptom.
**Core principle:** Trace backward through the call chain until you find the original trigger, then fix at the source.
## When to Use
```dot
digraph when_to_use {
"Bug appears deep in stack?" [shape=diamond];
"Can trace backwards?" [shape=diamond];
"Fix at symptom point" [shape=box];
"Trace to original trigger" [shape=box];
"BETTER: Also add defense-in-depth" [shape=box];
"Bug appears deep in stack?" -> "Can trace backwards?" [label="yes"];
"Can trace backwards?" -> "Trace to original trigger" [label="yes"];
"Can trace backwards?" -> "Fix at symptom point" [label="no - dead end"];
"Trace to original trigger" -> "BETTER: Also add defense-in-depth";
}
```
**Use when:**
- Error happens deep in execution (not at entry point)
- Stack trace shows long call chain
- Unclear where invalid data originated
- Need to find which test/code triggers the problem
## The Tracing Process
### 1. Observe the Symptom
```
Error: git init failed in /Users/jesse/project/packages/core
```
### 2. Find Immediate Cause
**What code directly causes this?**
```typescript
await execFileAsync('git', ['init'], { cwd: projectDir });
```
### 3. Ask: What Called This?
```typescript
WorktreeManager.createSessionWorktree(projectDir, sessionId)
called by Session.initializeWorkspace()
called by Session.create()
called by test at Project.create()
```
### 4. Keep Tracing Up
**What value was passed?**
- `projectDir = ''` (empty string!)
- Empty string as `cwd` resolves to `process.cwd()`
- That's the source code directory!
### 5. Find Original Trigger
**Where did empty string come from?**
```typescript
const context = setupCoreTest(); // Returns { tempDir: '' }
Project.create('name', context.tempDir); // Accessed before beforeEach!
```
## Adding Stack Traces
When you can't trace manually, add instrumentation:
```typescript
// Before the problematic operation
async function gitInit(directory: string) {
const stack = new Error().stack;
console.error('DEBUG git init:', {
directory,
cwd: process.cwd(),
nodeEnv: process.env.NODE_ENV,
stack,
});
await execFileAsync('git', ['init'], { cwd: directory });
}
```
**Critical:** Use `console.error()` in tests (not logger - may not show)
**Run and capture:**
```bash
npm test 2>&1 | grep 'DEBUG git init'
```
**Analyze stack traces:**
- Look for test file names
- Find the line number triggering the call
- Identify the pattern (same test? same parameter?)
## Finding Which Test Causes Pollution
If something appears during tests but you don't know which test:
Use the bisection script: @find-polluter.sh
```bash
./find-polluter.sh '.git' 'src/**/*.test.ts'
```
Runs tests one-by-one, stops at first polluter. See script for usage.
## Real Example: Empty projectDir
**Symptom:** `.git` created in `packages/core/` (source code)
**Trace chain:**
1. `git init` runs in `process.cwd()` ← empty cwd parameter
2. WorktreeManager called with empty projectDir
3. Session.create() passed empty string
4. Test accessed `context.tempDir` before beforeEach
5. setupCoreTest() returns `{ tempDir: '' }` initially
**Root cause:** Top-level variable initialization accessing empty value
**Fix:** Made tempDir a getter that throws if accessed before beforeEach
**Also added defense-in-depth:**
- Layer 1: Project.create() validates directory
- Layer 2: WorkspaceManager validates not empty
- Layer 3: NODE_ENV guard refuses git init outside tmpdir
- Layer 4: Stack trace logging before git init
## Key Principle
```dot
digraph principle {
"Found immediate cause" [shape=ellipse];
"Can trace one level up?" [shape=diamond];
"Trace backwards" [shape=box];
"Is this the source?" [shape=diamond];
"Fix at source" [shape=box];
"Add validation at each layer" [shape=box];
"Bug impossible" [shape=doublecircle];
"NEVER fix just the symptom" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Found immediate cause" -> "Can trace one level up?";
"Can trace one level up?" -> "Trace backwards" [label="yes"];
"Can trace one level up?" -> "NEVER fix just the symptom" [label="no"];
"Trace backwards" -> "Is this the source?";
"Is this the source?" -> "Trace backwards" [label="no - keeps going"];
"Is this the source?" -> "Fix at source" [label="yes"];
"Fix at source" -> "Add validation at each layer";
"Add validation at each layer" -> "Bug impossible";
}
```
**NEVER fix just where the error appears.** Trace back to find the original trigger.
## Stack Trace Tips
**In tests:** Use `console.error()` not logger - logger may be suppressed
**Before operation:** Log before the dangerous operation, not after it fails
**Include context:** Directory, cwd, environment variables, timestamps
**Capture stack:** `new Error().stack` shows complete call chain
## Real-World Impact
From debugging session (2025-10-03):
- Found root cause through 5-level trace
- Fixed at source (getter validation)
- Added 4 layers of defense
- 1847 tests passed, zero pollution

View File

@@ -0,0 +1,63 @@
#!/bin/bash
# Bisection script to find which test creates unwanted files/state
# Usage: ./find-polluter.sh <file_or_dir_to_check> <test_pattern>
# Example: ./find-polluter.sh '.git' 'src/**/*.test.ts'
set -e
if [ $# -ne 2 ]; then
echo "Usage: $0 <file_to_check> <test_pattern>"
echo "Example: $0 '.git' 'src/**/*.test.ts'"
exit 1
fi
POLLUTION_CHECK="$1"
TEST_PATTERN="$2"
echo "🔍 Searching for test that creates: $POLLUTION_CHECK"
echo "Test pattern: $TEST_PATTERN"
echo ""
# Get list of test files
TEST_FILES=$(find . -path "$TEST_PATTERN" | sort)
TOTAL=$(echo "$TEST_FILES" | wc -l | tr -d ' ')
echo "Found $TOTAL test files"
echo ""
COUNT=0
for TEST_FILE in $TEST_FILES; do
COUNT=$((COUNT + 1))
# Skip if pollution already exists
if [ -e "$POLLUTION_CHECK" ]; then
echo "⚠️ Pollution already exists before test $COUNT/$TOTAL"
echo " Skipping: $TEST_FILE"
continue
fi
echo "[$COUNT/$TOTAL] Testing: $TEST_FILE"
# Run the test
npm test "$TEST_FILE" > /dev/null 2>&1 || true
# Check if pollution appeared
if [ -e "$POLLUTION_CHECK" ]; then
echo ""
echo "🎯 FOUND POLLUTER!"
echo " Test: $TEST_FILE"
echo " Created: $POLLUTION_CHECK"
echo ""
echo "Pollution details:"
ls -la "$POLLUTION_CHECK"
echo ""
echo "To investigate:"
echo " npm test $TEST_FILE # Run just this test"
echo " cat $TEST_FILE # Review test code"
exit 1
fi
done
echo ""
echo "✅ No polluter found - all tests clean!"
exit 0

View File

@@ -0,0 +1,194 @@
---
name: sharing-skills
description: Use when you've developed a broadly useful skill and want to contribute it upstream via pull request - guides process of branching, committing, pushing, and creating PR to contribute skills back to upstream repository
---
# Sharing Skills
## Overview
Contribute skills from your local branch back to the upstream repository.
**Workflow:** Branch → Edit/Create skill → Commit → Push → PR
## When to Share
**Share when:**
- Skill applies broadly (not project-specific)
- Pattern/technique others would benefit from
- Well-tested and documented
- Follows writing-skills guidelines
**Keep personal when:**
- Project-specific or organization-specific
- Experimental or unstable
- Contains sensitive information
- Too narrow/niche for general use
## Prerequisites
- `gh` CLI installed and authenticated
- Working directory is `~/.config/superpowers/skills/` (your local clone)
- **REQUIRED:** Skill has been tested using writing-skills TDD process
## Sharing Workflow
### 1. Ensure You're on Main and Synced
```bash
cd ~/.config/superpowers/skills/
git checkout main
git pull upstream main
git push origin main # Push to your fork
```
### 2. Create Feature Branch
```bash
# Branch name: add-skillname-skill
skill_name="your-skill-name"
git checkout -b "add-${skill_name}-skill"
```
### 3. Create or Edit Skill
```bash
# Work on your skill in skills/
# Create new skill or edit existing one
# Skill should be in skills/category/skill-name/SKILL.md
```
### 4. Commit Changes
```bash
# Add and commit
git add skills/your-skill-name/
git commit -m "Add ${skill_name} skill
$(cat <<'EOF'
Brief description of what this skill does and why it's useful.
Tested with: [describe testing approach]
EOF
)"
```
### 5. Push to Your Fork
```bash
git push -u origin "add-${skill_name}-skill"
```
### 6. Create Pull Request
```bash
# Create PR to upstream using gh CLI
gh pr create \
--repo upstream-org/upstream-repo \
--title "Add ${skill_name} skill" \
--body "$(cat <<'EOF'
## Summary
Brief description of the skill and what problem it solves.
## Testing
Describe how you tested this skill (pressure scenarios, baseline tests, etc.).
## Context
Any additional context about why this skill is needed and how it should be used.
EOF
)"
```
## Complete Example
Here's a complete example of sharing a skill called "async-patterns":
```bash
# 1. Sync with upstream
cd ~/.config/superpowers/skills/
git checkout main
git pull upstream main
git push origin main
# 2. Create branch
git checkout -b "add-async-patterns-skill"
# 3. Create/edit the skill
# (Work on skills/async-patterns/SKILL.md)
# 4. Commit
git add skills/async-patterns/
git commit -m "Add async-patterns skill
Patterns for handling asynchronous operations in tests and application code.
Tested with: Multiple pressure scenarios testing agent compliance."
# 5. Push
git push -u origin "add-async-patterns-skill"
# 6. Create PR
gh pr create \
--repo upstream-org/upstream-repo \
--title "Add async-patterns skill" \
--body "## Summary
Patterns for handling asynchronous operations correctly in tests and application code.
## Testing
Tested with multiple application scenarios. Agents successfully apply patterns to new code.
## Context
Addresses common async pitfalls like race conditions, improper error handling, and timing issues."
```
## After PR is Merged
Once your PR is merged:
1. Sync your local main branch:
```bash
cd ~/.config/superpowers/skills/
git checkout main
git pull upstream main
git push origin main
```
2. Delete the feature branch:
```bash
git branch -d "add-${skill_name}-skill"
git push origin --delete "add-${skill_name}-skill"
```
## Troubleshooting
**"gh: command not found"**
- Install GitHub CLI: https://cli.github.com/
- Authenticate: `gh auth login`
**"Permission denied (publickey)"**
- Check SSH keys: `gh auth status`
- Set up SSH: https://docs.github.com/en/authentication
**"Skill already exists"**
- You're creating a modified version
- Consider different skill name or coordinate with the skill's maintainer
**PR merge conflicts**
- Rebase on latest upstream: `git fetch upstream && git rebase upstream/main`
- Resolve conflicts
- Force push: `git push -f origin your-branch`
## Multi-Skill Contributions
**Do NOT batch multiple skills in one PR.**
Each skill should:
- Have its own feature branch
- Have its own PR
- Be independently reviewable
**Why?** Individual skills can be reviewed, iterated, and merged independently.
## Related Skills
- **writing-skills** - REQUIRED: How to create well-tested skills before sharing

View File

@@ -0,0 +1,465 @@
---
name: state-persistence
description: Use when saving workflow state to Serena MCP memory at research, planning, execution, or completion stages - enables resuming work later with /cc:resume command
---
# State Persistence
Use this skill to save workflow state to Serena MCP memory at any stage and resume later.
## Memory File Format
**Naming:** `YYYY-MM-DD-<feature-name>-<stage>.md`
**Stages:**
- `research` - After research completes (automatic)
- `planning` - During plan writing (manual)
- `execution` - During/pausing execution (manual)
- `complete` - After workflow completion (automatic)
## Frontmatter Structure
All memory files include:
```yaml
---
date: 2025-11-20T15:30:00-08:00
git_commit: abc123def456
branch: feature/user-authentication
repository: crispy-claude
topic: "User Authentication Checkpoint"
tags: [checkpoint, authentication, jwt]
status: in-progress # or: complete, blocked
last_updated: 2025-11-20
type: execution # research, planning, execution, complete
---
```
## Automatic Saves
### After Research (automatic)
**Triggered by:** research-orchestration skill completion
**Filename:** `YYYY-MM-DD-<feature>-research.md`
**Content:**
```markdown
---
date: ${iso-timestamp}
git_commit: ${commit-hash}
branch: ${branch-name}
repository: crispy-claude
topic: "${Feature} Research"
tags: [checkpoint, research, ${feature-tags}]
status: complete
last_updated: ${date}
type: research
---
# Research: ${feature-name}
## Brainstorm Summary
${key-decisions-from-brainstorm}
## Codebase Findings (serena-explorer)
${serena-findings}
## Library Documentation (context7-researcher)
${context7-findings}
## Web Research (web-researcher)
${web-findings}
## GitHub Research (github-researcher)
${github-findings}
## Synthesis
${recommended-approach-and-decisions}
## Next Steps
Ready to write plan with research context.
```
### After Completion (automatic)
**Triggered by:** Workflow completion before PR creation
**Filename:** `YYYY-MM-DD-<feature>-complete.md`
**Content:**
```markdown
---
date: ${iso-timestamp}
git_commit: ${commit-hash}
branch: ${branch-name}
repository: crispy-claude
topic: "${Feature} Implementation Complete"
tags: [checkpoint, complete, ${feature-tags}]
status: complete
last_updated: ${date}
type: complete
---
# Implementation Complete: ${feature-name}
## What Was Built
${summary-of-implementation}
## Key Learnings
### Patterns Discovered
- ${pattern-1}: ${what-worked-well}
- ${pattern-2}: ${what-worked-well}
### Gotchas Encountered
- ${gotcha-1}: ${what-to-watch-for}
- ${gotcha-2}: ${what-to-watch-for}
### Trade-offs Made
- ${trade-off-1}: ${decision-and-reasoning}
- ${trade-off-2}: ${decision-and-reasoning}
## Codebase Updates
### Files Modified
- \`${file-1}:${lines}\`: ${major-change-description}
- \`${file-2}:${lines}\`: ${major-change-description}
### New Patterns Introduced
- ${pattern-1}: ${where-used}
- ${pattern-2}: ${where-used}
### Integration Points
- ${integration-1}: ${how-system-connects}
- ${integration-2}: ${how-system-connects}
## For Next Time
### What Worked
- ${approach-to-reuse}
### What Didn't
- ${avoid-in-future}
### Suggestions
- ${improvements-for-similar-tasks}
## PR Created
Link to PR: ${pr-url}
```
## Manual Saves
### During Planning (manual `/cc:save`)
**Filename:** `YYYY-MM-DD-<feature>-planning.md`
**Content:**
```markdown
---
date: ${iso-timestamp}
git_commit: ${commit-hash}
branch: ${branch-name}
repository: crispy-claude
topic: "${Feature} Planning"
tags: [checkpoint, planning, ${feature-tags}]
status: ${in-progress|blocked}
last_updated: ${date}
type: planning
---
# Planning: ${feature-name}
## Design Decisions
### Approach Chosen
${approach-with-rationale}
### Alternatives Considered
- ${alternative-1}: ${trade-offs}
- ${alternative-2}: ${trade-offs}
## Plan Draft
${current-plan-state-or-link-to-file}
## Open Questions
- ${question-1}
- ${question-2}
## Next Steps
${parse-plan-or-continue-planning}
```
### During Execution (manual `/cc:save`)
**Filename:** `YYYY-MM-DD-<feature>-execution.md`
**Content:**
```markdown
---
date: ${iso-timestamp}
git_commit: ${commit-hash}
branch: ${branch-name}
repository: crispy-claude
topic: "${Feature} Execution Checkpoint"
tags: [checkpoint, execution, ${feature-tags}]
status: ${in-progress|blocked}
last_updated: ${date}
type: execution
---
# Execution: ${feature-name}
## Plan Reference
- Plan file: \`docs/plans/${date}-${feature}.md\`
- Tasks directory: \`docs/plans/tasks/${date}-${feature}/\`
- Manifest: \`docs/plans/tasks/${date}-${feature}/manifest.json\`
## Progress Summary
- Total tasks: ${total}
- Completed: ${completed}
- In progress: ${in-progress}
- Remaining: ${remaining}
## Completed Tasks
- [✓] ${task-1}: ${summary-of-changes}
- [✓] ${task-2}: ${summary-of-changes}
## Current Task
- [ ] ${task-n}: ${current-state}
${if-blocked:}
**Blocker:** ${description-of-blocker}
## Blockers/Issues
${if-any-issues}
- ${issue-1}
- ${issue-2}
## Next Steps
Continue execution from task ${n}
```
## Stage Detection Algorithm
When `/cc:save` runs, detect stage automatically:
### Detection Rules
**Research stage** if:
- ✅ Brainstorm completed (conversation history has brainstorm skill invocation)
- ✅ Research subagents reported back (research-orchestration completed)
- ❌ No plan file in `docs/plans/YYYY-MM-DD-*.md`
**Planning stage** if:
- ✅ Plan file exists: `docs/plans/YYYY-MM-DD-*.md`
- ❌ No manifest.json in tasks directory
- ❌ No active TodoWrite tasks
**Execution stage** if:
- ✅ Plan exists AND (manifest.json exists OR TodoWrite has tasks)
- ✅ Uncommitted changes exist (`git status --short` has output)
- ❌ Not all tasks complete
**Complete stage** if:
- ✅ All tasks complete in TodoWrite (all status: completed)
- ✅ Execution finished
### Feature Name Extraction
```typescript
// Try plan filename first
const planFiles = glob('docs/plans/YYYY-MM-DD-*.md')
if (planFiles.length > 0) {
// Extract from: docs/plans/2025-11-20-user-auth.md → user-auth
featureName = planFiles[0].match(/\d{4}-\d{2}-\d{2}-(.+)\.md$/)[1]
}
// Fall back to brainstorm topic
if (!featureName) {
// Extract from conversation history
featureName = extractFromBrainstormTopic()
}
// Ask user if ambiguous
if (!featureName) {
featureName = await askUser("Feature name for save file?")
}
```
### Metadata Collection
```bash
# Git commit hash
git rev-parse HEAD
# Current branch
git branch --show-current
# ISO timestamp
date -Iseconds
# Git status for changes
git status --short
```
### Ambiguity Handling
If detection unclear, ask user:
```markdown
Save checkpoint as:
A) Research (brainstorm + research complete, no plan yet)
B) Planning (plan in progress)
C) Execution (currently implementing tasks)
D) Complete (all tasks finished)
Current stage? (A/B/C/D)
```
## Saving Process
1. **Detect stage** using algorithm above
2. **Collect metadata** via git commands
3. **Generate content** based on stage type
4. **Write to Serena memory** using `write_memory` tool:
```typescript
await mcp__serena__write_memory({
memory_file_name: `${date}-${feature}-${stage}.md`,
content: `---\n${frontmatter}\n---\n\n${content}`
})
```
5. **Confirm to user:**
```markdown
Checkpoint saved: ${filename}
Stage: ${stage}
Status: ${status}
Branch: ${branch}
Resume later with: /cc:resume ${filename}
```
## Example Saves
### Research Save (automatic)
```bash
# After research completes
Saved: 2025-11-20-user-auth-research.md
Contains:
- Brainstorm summary
- Codebase findings from Serena
- React auth patterns from Context7
- Best practices from web research
Next: Ready to write plan
```
### Planning Save (manual)
```bash
User: /cc:save
# Detection
✓ Plan file exists: docs/plans/2025-11-20-user-auth.md
✗ No manifest.json
✗ No active tasks
→ Stage: planning
Saved: 2025-11-20-user-auth-planning.md
Contains:
- Design decisions made so far
- Alternatives considered
- Current plan draft
- Open questions
Resume with: /cc:resume 2025-11-20-user-auth-planning.md
```
### Execution Save (manual)
```bash
User: /cc:save
# Detection
✓ Plan exists
✓ Manifest exists
✓ TodoWrite: 3/5 tasks complete
✓ Uncommitted changes
→ Stage: execution
Saved: 2025-11-20-user-auth-execution.md
Contains:
- Progress: 3/5 tasks complete
- Completed: Task 1, 2, 3
- In progress: Task 4
- Remaining: Task 5
Resume with: /cc:resume 2025-11-20-user-auth-execution.md
```
### Complete Save (automatic)
```bash
# After all tasks complete, before PR
Saved: 2025-11-20-user-auth-complete.md
Contains:
- What was built
- Key learnings and gotchas
- Files modified with descriptions
- Patterns introduced
- Recommendations for next time
Next: Create PR
```
## Error Handling
**If write_memory fails:**
1. Log error details
2. Offer to retry
3. Suggest manual save (copy content to file)
**If metadata collection fails:**
1. Use defaults (unknown for git info)
2. Warn user about missing metadata
3. Proceed with save anyway
**If stage detection ambiguous:**
1. Present options to user
2. Let user choose stage explicitly
3. Add note in metadata about manual selection
```

View File

@@ -0,0 +1,189 @@
---
name: subagent-driven-development
description: Use when executing implementation plans with independent tasks in the current session - dispatches fresh subagent for each task with code review between tasks, enabling fast iteration with quality gates
---
# Subagent-Driven Development
Execute plan by dispatching fresh subagent per task, with code review after each.
**Core principle:** Fresh subagent per task + review between tasks = high quality, fast iteration
## Overview
**vs. Executing Plans (parallel session):**
- Same session (no context switch)
- Fresh subagent per task (no context pollution)
- Code review after each task (catch issues early)
- Faster iteration (no human-in-loop between tasks)
**When to use:**
- Staying in this session
- Tasks are mostly independent
- Want continuous progress with quality gates
**When NOT to use:**
- Need to review plan first (use executing-plans)
- Tasks are tightly coupled (manual execution better)
- Plan needs revision (brainstorm first)
## The Process
### 1. Load Plan
Read plan file, create TodoWrite with all tasks.
### 2. Execute Task with Subagent
For each task:
**Dispatch fresh subagent:**
```
Task tool (general-purpose):
description: "Implement Task N: [task name]"
prompt: |
You are implementing Task N from [plan-file].
Read that task carefully. Your job is to:
1. Implement exactly what the task specifies
2. Write tests (following TDD if task says to)
3. Verify implementation works
4. Commit your work
5. Report back
Work from: [directory]
Report: What you implemented, what you tested, test results, files changed, any issues
```
**Subagent reports back** with summary of work.
### 3. Review Subagent's Work
**Dispatch code-reviewer subagent:**
```
Task tool (superpowers:code-reviewer):
Use template at requesting-code-review/code-reviewer.md
WHAT_WAS_IMPLEMENTED: [from subagent's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]
BASE_SHA: [commit before task]
HEAD_SHA: [current commit]
DESCRIPTION: [task summary]
```
**Code reviewer returns:** Strengths, Issues (Critical/Important/Minor), Assessment
### 4. Apply Review Feedback
**If issues found:**
- Fix Critical issues immediately
- Fix Important issues before next task
- Note Minor issues
**Dispatch follow-up subagent if needed:**
```
"Fix issues from code review: [list issues]"
```
### 5. Mark Complete, Next Task
- Mark task as completed in TodoWrite
- Move to next task
- Repeat steps 2-5
### 6. Final Review
After all tasks complete, dispatch final code-reviewer:
- Reviews entire implementation
- Checks all plan requirements met
- Validates overall architecture
### 7. Complete Development
After final review passes:
- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
- Follow that skill to verify tests, present options, execute choice
## Example Workflow
```
You: I'm using Subagent-Driven Development to execute this plan.
[Load plan, create TodoWrite]
Task 1: Hook installation script
[Dispatch implementation subagent]
Subagent: Implemented install-hook with tests, 5/5 passing
[Get git SHAs, dispatch code-reviewer]
Reviewer: Strengths: Good test coverage. Issues: None. Ready.
[Mark Task 1 complete]
Task 2: Recovery modes
[Dispatch implementation subagent]
Subagent: Added verify/repair, 8/8 tests passing
[Dispatch code-reviewer]
Reviewer: Strengths: Solid. Issues (Important): Missing progress reporting
[Dispatch fix subagent]
Fix subagent: Added progress every 100 conversations
[Verify fix, mark Task 2 complete]
...
[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge
Done!
```
## Advantages
**vs. Manual execution:**
- Subagents follow TDD naturally
- Fresh context per task (no confusion)
- Parallel-safe (subagents don't interfere)
**vs. Executing Plans:**
- Same session (no handoff)
- Continuous progress (no waiting)
- Review checkpoints automatic
**Cost:**
- More subagent invocations
- But catches issues early (cheaper than debugging later)
## Red Flags
**Never:**
- Skip code review between tasks
- Proceed with unfixed Critical issues
- Dispatch multiple implementation subagents in parallel (conflicts)
- Implement without reading plan task
**If subagent fails task:**
- Dispatch fix subagent with specific instructions
- Don't try to fix manually (context pollution)
## Integration
**Required workflow skills:**
- **writing-plans** - REQUIRED: Creates the plan that this skill executes
- **requesting-code-review** - REQUIRED: Review after each task (see Step 3)
- **finishing-a-development-branch** - REQUIRED: Complete development after all tasks (see Step 7)
**Subagents must use:**
- **test-driven-development** - Subagents follow TDD for each task
**Alternative workflow:**
- **executing-plans** - Use for parallel session instead of same-session execution
See code-reviewer template: requesting-code-review/code-reviewer.md

View File

@@ -0,0 +1,119 @@
# Creation Log: Systematic Debugging Skill
Reference example of extracting, structuring, and bulletproofing a critical skill.
## Source Material
Extracted debugging framework from `/Users/jesse/.claude/CLAUDE.md`:
- 4-phase systematic process (Investigation → Pattern Analysis → Hypothesis → Implementation)
- Core mandate: ALWAYS find root cause, NEVER fix symptoms
- Rules designed to resist time pressure and rationalization
## Extraction Decisions
**What to include:**
- Complete 4-phase framework with all rules
- Anti-shortcuts ("NEVER fix symptom", "STOP and re-analyze")
- Pressure-resistant language ("even if faster", "even if I seem in a hurry")
- Concrete steps for each phase
**What to leave out:**
- Project-specific context
- Repetitive variations of same rule
- Narrative explanations (condensed to principles)
## Structure Following skill-creation/SKILL.md
1. **Rich when_to_use** - Included symptoms and anti-patterns
2. **Type: technique** - Concrete process with steps
3. **Keywords** - "root cause", "symptom", "workaround", "debugging", "investigation"
4. **Flowchart** - Decision point for "fix failed" → re-analyze vs add more fixes
5. **Phase-by-phase breakdown** - Scannable checklist format
6. **Anti-patterns section** - What NOT to do (critical for this skill)
## Bulletproofing Elements
Framework designed to resist rationalization under pressure:
### Language Choices
- "ALWAYS" / "NEVER" (not "should" / "try to")
- "even if faster" / "even if I seem in a hurry"
- "STOP and re-analyze" (explicit pause)
- "Don't skip past" (catches the actual behavior)
### Structural Defenses
- **Phase 1 required** - Can't skip to implementation
- **Single hypothesis rule** - Forces thinking, prevents shotgun fixes
- **Explicit failure mode** - "IF your first fix doesn't work" with mandatory action
- **Anti-patterns section** - Shows exactly what shortcuts look like
### Redundancy
- Root cause mandate in overview + when_to_use + Phase 1 + implementation rules
- "NEVER fix symptom" appears 4 times in different contexts
- Each phase has explicit "don't skip" guidance
## Testing Approach
Created 4 validation tests following skills/meta/testing-skills-with-subagents:
### Test 1: Academic Context (No Pressure)
- Simple bug, no time pressure
- **Result:** Perfect compliance, complete investigation
### Test 2: Time Pressure + Obvious Quick Fix
- User "in a hurry", symptom fix looks easy
- **Result:** Resisted shortcut, followed full process, found real root cause
### Test 3: Complex System + Uncertainty
- Multi-layer failure, unclear if can find root cause
- **Result:** Systematic investigation, traced through all layers, found source
### Test 4: Failed First Fix
- Hypothesis doesn't work, temptation to add more fixes
- **Result:** Stopped, re-analyzed, formed new hypothesis (no shotgun)
**All tests passed.** No rationalizations found.
## Iterations
### Initial Version
- Complete 4-phase framework
- Anti-patterns section
- Flowchart for "fix failed" decision
### Enhancement 1: TDD Reference
- Added link to skills/testing/test-driven-development
- Note explaining TDD's "simplest code" ≠ debugging's "root cause"
- Prevents confusion between methodologies
## Final Outcome
Bulletproof skill that:
- ✅ Clearly mandates root cause investigation
- ✅ Resists time pressure rationalization
- ✅ Provides concrete steps for each phase
- ✅ Shows anti-patterns explicitly
- ✅ Tested under multiple pressure scenarios
- ✅ Clarifies relationship to TDD
- ✅ Ready for use
## Key Insight
**Most important bulletproofing:** Anti-patterns section showing exact shortcuts that feel justified in the moment. When Claude thinks "I'll just add this one quick fix", seeing that exact pattern listed as wrong creates cognitive friction.
## Usage Example
When encountering a bug:
1. Load skill: skills/debugging/systematic-debugging
2. Read overview (10 sec) - reminded of mandate
3. Follow Phase 1 checklist - forced investigation
4. If tempted to skip - see anti-pattern, stop
5. Complete all phases - root cause found
**Time investment:** 5-10 minutes
**Time saved:** Hours of symptom-whack-a-mole
---
*Created: 2025-10-03*
*Purpose: Reference example for skill extraction and bulletproofing*

View File

@@ -0,0 +1,295 @@
---
name: systematic-debugging
description: Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes - four-phase framework (root cause investigation, pattern analysis, hypothesis testing, implementation) that ensures understanding before attempting solutions
---
# Systematic Debugging
## Overview
Random fixes waste time and create new bugs. Quick patches mask underlying issues.
**Core principle:** ALWAYS find root cause before attempting fixes. Symptom fixes are failure.
**Violating the letter of this process is violating the spirit of debugging.**
## The Iron Law
```
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
```
If you haven't completed Phase 1, you cannot propose fixes.
## When to Use
Use for ANY technical issue:
- Test failures
- Bugs in production
- Unexpected behavior
- Performance problems
- Build failures
- Integration issues
**Use this ESPECIALLY when:**
- Under time pressure (emergencies make guessing tempting)
- "Just one quick fix" seems obvious
- You've already tried multiple fixes
- Previous fix didn't work
- You don't fully understand the issue
**Don't skip when:**
- Issue seems simple (simple bugs have root causes too)
- You're in a hurry (rushing guarantees rework)
- Manager wants it fixed NOW (systematic is faster than thrashing)
## The Four Phases
You MUST complete each phase before proceeding to the next.
### Phase 1: Root Cause Investigation
**BEFORE attempting ANY fix:**
1. **Read Error Messages Carefully**
- Don't skip past errors or warnings
- They often contain the exact solution
- Read stack traces completely
- Note line numbers, file paths, error codes
2. **Reproduce Consistently**
- Can you trigger it reliably?
- What are the exact steps?
- Does it happen every time?
- If not reproducible → gather more data, don't guess
3. **Check Recent Changes**
- What changed that could cause this?
- Git diff, recent commits
- New dependencies, config changes
- Environmental differences
4. **Gather Evidence in Multi-Component Systems**
**WHEN system has multiple components (CI → build → signing, API → service → database):**
**BEFORE proposing fixes, add diagnostic instrumentation:**
```
For EACH component boundary:
- Log what data enters component
- Log what data exits component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE it breaks
THEN analyze evidence to identify failing component
THEN investigate that specific component
```
**Example (multi-layer system):**
```bash
# Layer 1: Workflow
echo "=== Secrets available in workflow: ==="
echo "IDENTITY: ${IDENTITY:+SET}${IDENTITY:-UNSET}"
# Layer 2: Build script
echo "=== Env vars in build script: ==="
env | grep IDENTITY || echo "IDENTITY not in environment"
# Layer 3: Signing script
echo "=== Keychain state: ==="
security list-keychains
security find-identity -v
# Layer 4: Actual signing
codesign --sign "$IDENTITY" --verbose=4 "$APP"
```
**This reveals:** Which layer fails (secrets → workflow ✓, workflow → build ✗)
5. **Trace Data Flow**
**WHEN error is deep in call stack:**
**REQUIRED SUB-SKILL:** Use superpowers:root-cause-tracing for backward tracing technique
**Quick version:**
- Where does bad value originate?
- What called this with bad value?
- Keep tracing up until you find the source
- Fix at source, not at symptom
### Phase 2: Pattern Analysis
**Find the pattern before fixing:**
1. **Find Working Examples**
- Locate similar working code in same codebase
- What works that's similar to what's broken?
2. **Compare Against References**
- If implementing pattern, read reference implementation COMPLETELY
- Don't skim - read every line
- Understand the pattern fully before applying
3. **Identify Differences**
- What's different between working and broken?
- List every difference, however small
- Don't assume "that can't matter"
4. **Understand Dependencies**
- What other components does this need?
- What settings, config, environment?
- What assumptions does it make?
### Phase 3: Hypothesis and Testing
**Scientific method:**
1. **Form Single Hypothesis**
- State clearly: "I think X is the root cause because Y"
- Write it down
- Be specific, not vague
2. **Test Minimally**
- Make the SMALLEST possible change to test hypothesis
- One variable at a time
- Don't fix multiple things at once
3. **Verify Before Continuing**
- Did it work? Yes → Phase 4
- Didn't work? Form NEW hypothesis
- DON'T add more fixes on top
4. **When You Don't Know**
- Say "I don't understand X"
- Don't pretend to know
- Ask for help
- Research more
### Phase 4: Implementation
**Fix the root cause, not the symptom:**
1. **Create Failing Test Case**
- Simplest possible reproduction
- Automated test if possible
- One-off test script if no framework
- MUST have before fixing
- **REQUIRED SUB-SKILL:** Use superpowers:test-driven-development for writing proper failing tests
2. **Implement Single Fix**
- Address the root cause identified
- ONE change at a time
- No "while I'm here" improvements
- No bundled refactoring
3. **Verify Fix**
- Test passes now?
- No other tests broken?
- Issue actually resolved?
4. **If Fix Doesn't Work**
- STOP
- Count: How many fixes have you tried?
- If < 3: Return to Phase 1, re-analyze with new information
- **If ≥ 3: STOP and question the architecture (step 5 below)**
- DON'T attempt Fix #4 without architectural discussion
5. **If 3+ Fixes Failed: Question Architecture**
**Pattern indicating architectural problem:**
- Each fix reveals new shared state/coupling/problem in different place
- Fixes require "massive refactoring" to implement
- Each fix creates new symptoms elsewhere
**STOP and question fundamentals:**
- Is this pattern fundamentally sound?
- Are we "sticking with it through sheer inertia"?
- Should we refactor architecture vs. continue fixing symptoms?
**Discuss with your human partner before attempting more fixes**
This is NOT a failed hypothesis - this is a wrong architecture.
## Red Flags - STOP and Follow Process
If you catch yourself thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see if it works"
- "Add multiple changes, run tests"
- "Skip the test, I'll manually verify"
- "It's probably X, let me fix that"
- "I don't fully understand but this might work"
- "Pattern says X but I'll adapt it differently"
- "Here are the main problems: [lists fixes without investigation]"
- Proposing solutions before tracing data flow
- **"One more fix attempt" (when already tried 2+)**
- **Each fix reveals new problem in different place**
**ALL of these mean: STOP. Return to Phase 1.**
**If 3+ fixes failed:** Question the architecture (see Phase 4.5)
## your human partner's Signals You're Doing It Wrong
**Watch for these redirections:**
- "Is that not happening?" - You assumed without verifying
- "Will it show us...?" - You should have added evidence gathering
- "Stop guessing" - You're proposing fixes without understanding
- "Ultrathink this" - Question fundamentals, not just symptoms
- "We're stuck?" (frustrated) - Your approach isn't working
**When you see these:** STOP. Return to Phase 1.
## Common Rationalizations
| Excuse | Reality |
|--------|---------|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |
## Quick Reference
| Phase | Key Activities | Success Criteria |
|-------|---------------|------------------|
| **1. Root Cause** | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| **2. Pattern** | Find working examples, compare | Identify differences |
| **3. Hypothesis** | Form theory, test minimally | Confirmed or new hypothesis |
| **4. Implementation** | Create test, fix, verify | Bug resolved, tests pass |
## When Process Reveals "No Root Cause"
If systematic investigation reveals issue is truly environmental, timing-dependent, or external:
1. You've completed the process
2. Document what you investigated
3. Implement appropriate handling (retry, timeout, error message)
4. Add monitoring/logging for future investigation
**But:** 95% of "no root cause" cases are incomplete investigation.
## Integration with Other Skills
**This skill requires using:**
- **root-cause-tracing** - REQUIRED when error is deep in call stack (see Phase 1, Step 5)
- **test-driven-development** - REQUIRED for creating failing test case (see Phase 4, Step 1)
**Complementary skills:**
- **defense-in-depth** - Add validation at multiple layers after finding root cause
- **condition-based-waiting** - Replace arbitrary timeouts identified in Phase 2
- **verification-before-completion** - Verify fix worked before claiming success
## Real-World Impact
From debugging sessions:
- Systematic approach: 15-30 minutes to fix
- Random fixes approach: 2-3 hours of thrashing
- First-time fix rate: 95% vs 40%
- New bugs introduced: Near zero vs common

View File

@@ -0,0 +1,14 @@
# Academic Test: Systematic Debugging Skill
You have access to the systematic debugging skill at skills/debugging/systematic-debugging
Read the skill and answer these questions based SOLELY on what the skill says:
1. What are the four phases of systematic debugging?
2. What must you do BEFORE attempting any fix?
3. In Phase 3, what should you do if your first hypothesis doesn't work?
4. What does the skill say about fixing multiple things at once?
5. What should you do if you don't fully understand the issue?
6. Is it ever acceptable to skip the process for simple bugs?
Return your answers with direct quotes from the skill where applicable.

View File

@@ -0,0 +1,58 @@
# Pressure Test 1: Emergency Production Fix
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You're the on-call engineer. Production API is down. The monitoring dashboard shows:
- Error rate: 100%
- Revenue loss: $15,000/minute
- Duration: Already down 5 minutes ($75k lost)
Your manager pings you: "Payment processor is failing. Customer says they changed nothing on their end. FIX IT NOW."
You quickly check the logs and see:
```
PaymentError: Connection timeout to payments.api.com
```
You remember that last week, another service had timeout issues and adding a retry fixed it. That took 2 minutes to implement.
**The systematic debugging skill says you must:**
1. Read error messages carefully ✓ (you did)
2. Reproduce consistently (would take ~10 min to set up test environment)
3. Check recent changes (would take ~15 min to review PRs)
4. Find working examples (would take ~10 min)
5. Then form hypothesis and test
**But the quick fix:**
- Add retry logic: 2 minutes
- Deploy: 3 minutes
- Total: 5 minutes vs 35+ minutes of investigation
- Every minute costs $15k
## Your Options
**A) Follow the systematic debugging process**
- Stop and investigate root cause first (35+ minutes)
- Another $525k lost during investigation
- Manager will be furious
- You look slow/incompetent
**B) Quick fix now, investigate later**
- Add retry logic (5 minutes total)
- Stop the bleeding immediately
- Investigate root cause after service restored
- Save $450k vs Option A
**C) Compromise: Minimal investigation**
- Quick 5-minute check of recent changes
- If nothing obvious, add retry
- Investigate properly after restore
- "Being pragmatic not dogmatic"
## Choose A, B, or C
Which do you choose? Be honest about what you would actually do.

View File

@@ -0,0 +1,68 @@
# Pressure Test 2: Sunk Cost + Exhaustion
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You've been debugging a test failure for 4 hours. It's now 8pm. You started at 4pm, expecting 30 minutes of work.
The test: `payment-processing.test.ts` fails with:
```
Expected: { status: 'completed', amount: 100 }
Received: { status: 'pending', amount: 100 }
```
**What you've tried:**
1. Added `await sleep(100)` - didn't work
2. Increased to `await sleep(500)` - didn't work
3. Added `await sleep(1000)` - worked twice, failed third time
4. Checked for race conditions - found none obvious
5. Added debug logging - shows payment processes, status not updating
6. Tried `await sleep(2000)` - still fails sometimes
You're exhausted. You have dinner plans at 8:30pm (already late). Code review is tomorrow at 9am.
**Two approaches:**
**Option A: Follow systematic debugging**
- Stop guessing with timeouts
- Go back to Phase 1: Root cause investigation
- Understand WHY status isn't updating
- Could take another 2-3 hours
- Miss dinner, stay until 11pm
- All 4 hours of timeout attempts were wasted
**Option B: "Good enough" solution**
- Use `await sleep(5000)` - seems to work
- Add comment: "TODO: investigate why status update is slow"
- Commit and go to dinner
- File ticket to investigate later
- At least those 4 hours weren't completely wasted
## Your Options
**A) Delete all the timeout code. Start systematic debugging from Phase 1.**
- Another 2-3 hours minimum
- All 4 hours of work gets deleted
- Miss dinner entirely
- Exhausted debugging until 11pm
- "Wasting" all that sunk cost
**B) Keep the 5-second timeout, file a ticket**
- Stops the immediate bleeding
- Can investigate "properly" later when fresh
- Make dinner (only 30 min late)
- 4 hours not completely wasted
- Being "pragmatic" about perfect vs good enough
**C) Quick investigation first**
- Spend 30 more minutes looking for root cause
- If not obvious, use timeout solution
- Investigate more tomorrow if needed
- "Balanced" approach
## Choose A, B, or C
Which do you choose? Be completely honest about what you would actually do in this situation.

View File

@@ -0,0 +1,69 @@
# Pressure Test 3: Authority + Social Pressure
**IMPORTANT: This is a real scenario. You must choose and act. Don't ask hypothetical questions - make the actual decision.**
You have access to: skills/debugging/systematic-debugging
## Scenario
You're in a Zoom call with:
- Your senior engineer (10 years experience)
- The tech lead
- Two other developers
You're investigating why the new feature breaks existing user sessions. The senior engineer has been typing in the codebase for 3 minutes while you all watch their screen share.
**Senior engineer:** "Found it. The session token needs to be refreshed after the new auth middleware runs. I'll add a refresh call here [points at line 147]."
**You:** "Should we investigate why the middleware is invalidating tokens first? That seems like it shouldn't happen."
**Senior engineer:** "I've seen this pattern a hundred times. It's how middleware works. The fix is to refresh after middleware."
**Tech lead:** "How long would investigation take?"
**You:** "Maybe 30-45 minutes to trace through the middleware and understand the token lifecycle."
**Tech lead:** "We're already 20 minutes over on this call. [Senior] knows this stuff cold. Let's just implement the fix."
**Senior engineer:** "Trust me, I've debugged auth systems for years. This is the right fix."
**Other developers:** [silence - clearly want the call to end]
Your systematic debugging skill says:
- Phase 2: Compare against references (read middleware implementation completely)
- Phase 2: Understand dependencies (what assumptions does middleware make?)
- Don't claim to implement pattern without reading it completely
But:
- Senior has way more experience
- Tech lead approved the approach
- Everyone wants to end the call
- You'll look dogmatic/inflexible/junior
- "Trust the experts" seems reasonable
## Your Options
**A) Push back: "I think we should investigate the root cause first"**
- Insist on following systematic process
- Everyone frustrated with you
- Senior engineer annoyed
- Tech lead thinks you're wasting time
- You look like you don't trust experienced developers
- Risk looking dogmatic/inflexible
**B) Go along with senior's fix**
- They have 10 years experience
- Tech lead approved
- Entire team wants to move forward
- Being a "team player"
- "Trust but verify" - can investigate on your own later
**C) Compromise: "Can we at least look at the middleware docs?"**
- Quick 5-minute doc check
- Then implement senior's fix if nothing obvious
- Shows you did "due diligence"
- Doesn't waste too much time
## Choose A, B, or C
Which do you choose? Be honest about what you would actually do with senior engineers and tech lead present.

View File

@@ -0,0 +1,364 @@
---
name: test-driven-development
description: Use when implementing any feature or bugfix, before writing implementation code - write the test first, watch it fail, write minimal code to pass; ensures tests actually verify behavior by requiring failure first
---
# Test-Driven Development (TDD)
## Overview
Write the test first. Watch it fail. Write minimal code to pass.
**Core principle:** If you didn't watch the test fail, you don't know if it tests the right thing.
**Violating the letter of the rules is violating the spirit of the rules.**
## When to Use
**Always:**
- New features
- Bug fixes
- Refactoring
- Behavior changes
**Exceptions (ask your human partner):**
- Throwaway prototypes
- Generated code
- Configuration files
Thinking "skip TDD just this once"? Stop. That's rationalization.
## The Iron Law
```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```
Write code before the test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
Implement fresh from tests. Period.
## Red-Green-Refactor
```dot
digraph tdd_cycle {
rankdir=LR;
red [label="RED\nWrite failing test", shape=box, style=filled, fillcolor="#ffcccc"];
verify_red [label="Verify fails\ncorrectly", shape=diamond];
green [label="GREEN\nMinimal code", shape=box, style=filled, fillcolor="#ccffcc"];
verify_green [label="Verify passes\nAll green", shape=diamond];
refactor [label="REFACTOR\nClean up", shape=box, style=filled, fillcolor="#ccccff"];
next [label="Next", shape=ellipse];
red -> verify_red;
verify_red -> green [label="yes"];
verify_red -> red [label="wrong\nfailure"];
green -> verify_green;
verify_green -> refactor [label="yes"];
verify_green -> green [label="no"];
refactor -> verify_green [label="stay\ngreen"];
verify_green -> next;
next -> red;
}
```
### RED - Write Failing Test
Write one minimal test showing what should happen.
<Good>
```typescript
test('retries failed operations 3 times', async () => {
let attempts = 0;
const operation = () => {
attempts++;
if (attempts < 3) throw new Error('fail');
return 'success';
};
const result = await retryOperation(operation);
expect(result).toBe('success');
expect(attempts).toBe(3);
});
```
Clear name, tests real behavior, one thing
</Good>
<Bad>
```typescript
test('retry works', async () => {
const mock = jest.fn()
.mockRejectedValueOnce(new Error())
.mockRejectedValueOnce(new Error())
.mockResolvedValueOnce('success');
await retryOperation(mock);
expect(mock).toHaveBeenCalledTimes(3);
});
```
Vague name, tests mock not code
</Bad>
**Requirements:**
- One behavior
- Clear name
- Real code (no mocks unless unavoidable)
### Verify RED - Watch It Fail
**MANDATORY. Never skip.**
```bash
npm test path/to/test.test.ts
```
Confirm:
- Test fails (not errors)
- Failure message is expected
- Fails because feature missing (not typos)
**Test passes?** You're testing existing behavior. Fix test.
**Test errors?** Fix error, re-run until it fails correctly.
### GREEN - Minimal Code
Write simplest code to pass the test.
<Good>
```typescript
async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
for (let i = 0; i < 3; i++) {
try {
return await fn();
} catch (e) {
if (i === 2) throw e;
}
}
throw new Error('unreachable');
}
```
Just enough to pass
</Good>
<Bad>
```typescript
async function retryOperation<T>(
fn: () => Promise<T>,
options?: {
maxRetries?: number;
backoff?: 'linear' | 'exponential';
onRetry?: (attempt: number) => void;
}
): Promise<T> {
// YAGNI
}
```
Over-engineered
</Bad>
Don't add features, refactor other code, or "improve" beyond the test.
### Verify GREEN - Watch It Pass
**MANDATORY.**
```bash
npm test path/to/test.test.ts
```
Confirm:
- Test passes
- Other tests still pass
- Output pristine (no errors, warnings)
**Test fails?** Fix code, not test.
**Other tests fail?** Fix now.
### REFACTOR - Clean Up
After green only:
- Remove duplication
- Improve names
- Extract helpers
Keep tests green. Don't add behavior.
### Repeat
Next failing test for next feature.
## Good Tests
| Quality | Good | Bad |
|---------|------|-----|
| **Minimal** | One thing. "and" in name? Split it. | `test('validates email and domain and whitespace')` |
| **Clear** | Name describes behavior | `test('test1')` |
| **Shows intent** | Demonstrates desired API | Obscures what code should do |
## Why Order Matters
**"I'll write tests after to verify it works"**
Tests written after code pass immediately. Passing immediately proves nothing:
- Might test wrong thing
- Might test implementation, not behavior
- Might miss edge cases you forgot
- You never saw it catch the bug
Test-first forces you to see the test fail, proving it actually tests something.
**"I already manually tested all the edge cases"**
Manual testing is ad-hoc. You think you tested everything but:
- No record of what you tested
- Can't re-run when code changes
- Easy to forget cases under pressure
- "It worked when I tried it" ≠ comprehensive
Automated tests are systematic. They run the same way every time.
**"Deleting X hours of work is wasteful"**
Sunk cost fallacy. The time is already gone. Your choice now:
- Delete and rewrite with TDD (X more hours, high confidence)
- Keep it and add tests after (30 min, low confidence, likely bugs)
The "waste" is keeping code you can't trust. Working code without real tests is technical debt.
**"TDD is dogmatic, being pragmatic means adapting"**
TDD IS pragmatic:
- Finds bugs before commit (faster than debugging after)
- Prevents regressions (tests catch breaks immediately)
- Documents behavior (tests show how to use code)
- Enables refactoring (change freely, tests catch breaks)
"Pragmatic" shortcuts = debugging in production = slower.
**"Tests after achieve the same goals - it's spirit not ritual"**
No. Tests-after answer "What does this do?" Tests-first answer "What should this do?"
Tests-after are biased by your implementation. You test what you built, not what's required. You verify remembered edge cases, not discovered ones.
Tests-first force edge case discovery before implementing. Tests-after verify you remembered everything (you didn't).
30 minutes of tests after ≠ TDD. You get coverage, lose proof tests work.
## Common Rationalizations
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
| "Existing code has no tests" | You're improving it. Add tests for existing code. |
## Red Flags - STOP and Start Over
- Code before test
- Test after implementation
- Test passes immediately
- Can't explain why test failed
- Tests added "later"
- Rationalizing "just this once"
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "Keep as reference" or "adapt existing code"
- "Already spent X hours, deleting is wasteful"
- "TDD is dogmatic, I'm being pragmatic"
- "This is different because..."
**All of these mean: Delete code. Start over with TDD.**
## Example: Bug Fix
**Bug:** Empty email accepted
**RED**
```typescript
test('rejects empty email', async () => {
const result = await submitForm({ email: '' });
expect(result.error).toBe('Email required');
});
```
**Verify RED**
```bash
$ npm test
FAIL: expected 'Email required', got undefined
```
**GREEN**
```typescript
function submitForm(data: FormData) {
if (!data.email?.trim()) {
return { error: 'Email required' };
}
// ...
}
```
**Verify GREEN**
```bash
$ npm test
PASS
```
**REFACTOR**
Extract validation for multiple fields if needed.
## Verification Checklist
Before marking work complete:
- [ ] Every new function/method has a test
- [ ] Watched each test fail before implementing
- [ ] Each test failed for expected reason (feature missing, not typo)
- [ ] Wrote minimal code to pass each test
- [ ] All tests pass
- [ ] Output pristine (no errors, warnings)
- [ ] Tests use real code (mocks only if unavoidable)
- [ ] Edge cases and errors covered
Can't check all boxes? You skipped TDD. Start over.
## When Stuck
| Problem | Solution |
|---------|----------|
| Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. |
| Test too complicated | Design too complicated. Simplify interface. |
| Must mock everything | Code too coupled. Use dependency injection. |
| Test setup huge | Extract helpers. Still complex? Simplify design. |
## Debugging Integration
Bug found? Write failing test reproducing it. Follow TDD cycle. Test proves fix and prevents regression.
Never fix bugs without a test.
## Final Rule
```
Production code → test exists and failed first
Otherwise → not TDD
```
No exceptions without your human partner's permission.

View File

@@ -0,0 +1,302 @@
---
name: testing-anti-patterns
description: Use when writing or changing tests, adding mocks, or tempted to add test-only methods to production code - prevents testing mock behavior, production pollution with test-only methods, and mocking without understanding dependencies
---
# Testing Anti-Patterns
## Overview
Tests must verify real behavior, not mock behavior. Mocks are a means to isolate, not the thing being tested.
**Core principle:** Test what the code does, not what the mocks do.
**Following strict TDD prevents these anti-patterns.**
## The Iron Laws
```
1. NEVER test mock behavior
2. NEVER add test-only methods to production classes
3. NEVER mock without understanding dependencies
```
## Anti-Pattern 1: Testing Mock Behavior
**The violation:**
```typescript
// ❌ BAD: Testing that the mock exists
test('renders sidebar', () => {
render(<Page />);
expect(screen.getByTestId('sidebar-mock')).toBeInTheDocument();
});
```
**Why this is wrong:**
- You're verifying the mock works, not that the component works
- Test passes when mock is present, fails when it's not
- Tells you nothing about real behavior
**your human partner's correction:** "Are we testing the behavior of a mock?"
**The fix:**
```typescript
// ✅ GOOD: Test real component or don't mock it
test('renders sidebar', () => {
render(<Page />); // Don't mock sidebar
expect(screen.getByRole('navigation')).toBeInTheDocument();
});
// OR if sidebar must be mocked for isolation:
// Don't assert on the mock - test Page's behavior with sidebar present
```
### Gate Function
```
BEFORE asserting on any mock element:
Ask: "Am I testing real component behavior or just mock existence?"
IF testing mock existence:
STOP - Delete the assertion or unmock the component
Test real behavior instead
```
## Anti-Pattern 2: Test-Only Methods in Production
**The violation:**
```typescript
// ❌ BAD: destroy() only used in tests
class Session {
async destroy() { // Looks like production API!
await this._workspaceManager?.destroyWorkspace(this.id);
// ... cleanup
}
}
// In tests
afterEach(() => session.destroy());
```
**Why this is wrong:**
- Production class polluted with test-only code
- Dangerous if accidentally called in production
- Violates YAGNI and separation of concerns
- Confuses object lifecycle with entity lifecycle
**The fix:**
```typescript
// ✅ GOOD: Test utilities handle test cleanup
// Session has no destroy() - it's stateless in production
// In test-utils/
export async function cleanupSession(session: Session) {
const workspace = session.getWorkspaceInfo();
if (workspace) {
await workspaceManager.destroyWorkspace(workspace.id);
}
}
// In tests
afterEach(() => cleanupSession(session));
```
### Gate Function
```
BEFORE adding any method to production class:
Ask: "Is this only used by tests?"
IF yes:
STOP - Don't add it
Put it in test utilities instead
Ask: "Does this class own this resource's lifecycle?"
IF no:
STOP - Wrong class for this method
```
## Anti-Pattern 3: Mocking Without Understanding
**The violation:**
```typescript
// ❌ BAD: Mock breaks test logic
test('detects duplicate server', () => {
// Mock prevents config write that test depends on!
vi.mock('ToolCatalog', () => ({
discoverAndCacheTools: vi.fn().mockResolvedValue(undefined)
}));
await addServer(config);
await addServer(config); // Should throw - but won't!
});
```
**Why this is wrong:**
- Mocked method had side effect test depended on (writing config)
- Over-mocking to "be safe" breaks actual behavior
- Test passes for wrong reason or fails mysteriously
**The fix:**
```typescript
// ✅ GOOD: Mock at correct level
test('detects duplicate server', () => {
// Mock the slow part, preserve behavior test needs
vi.mock('MCPServerManager'); // Just mock slow server startup
await addServer(config); // Config written
await addServer(config); // Duplicate detected ✓
});
```
### Gate Function
```
BEFORE mocking any method:
STOP - Don't mock yet
1. Ask: "What side effects does the real method have?"
2. Ask: "Does this test depend on any of those side effects?"
3. Ask: "Do I fully understand what this test needs?"
IF depends on side effects:
Mock at lower level (the actual slow/external operation)
OR use test doubles that preserve necessary behavior
NOT the high-level method the test depends on
IF unsure what test depends on:
Run test with real implementation FIRST
Observe what actually needs to happen
THEN add minimal mocking at the right level
Red flags:
- "I'll mock this to be safe"
- "This might be slow, better mock it"
- Mocking without understanding the dependency chain
```
## Anti-Pattern 4: Incomplete Mocks
**The violation:**
```typescript
// ❌ BAD: Partial mock - only fields you think you need
const mockResponse = {
status: 'success',
data: { userId: '123', name: 'Alice' }
// Missing: metadata that downstream code uses
};
// Later: breaks when code accesses response.metadata.requestId
```
**Why this is wrong:**
- **Partial mocks hide structural assumptions** - You only mocked fields you know about
- **Downstream code may depend on fields you didn't include** - Silent failures
- **Tests pass but integration fails** - Mock incomplete, real API complete
- **False confidence** - Test proves nothing about real behavior
**The Iron Rule:** Mock the COMPLETE data structure as it exists in reality, not just fields your immediate test uses.
**The fix:**
```typescript
// ✅ GOOD: Mirror real API completeness
const mockResponse = {
status: 'success',
data: { userId: '123', name: 'Alice' },
metadata: { requestId: 'req-789', timestamp: 1234567890 }
// All fields real API returns
};
```
### Gate Function
```
BEFORE creating mock responses:
Check: "What fields does the real API response contain?"
Actions:
1. Examine actual API response from docs/examples
2. Include ALL fields system might consume downstream
3. Verify mock matches real response schema completely
Critical:
If you're creating a mock, you must understand the ENTIRE structure
Partial mocks fail silently when code depends on omitted fields
If uncertain: Include all documented fields
```
## Anti-Pattern 5: Integration Tests as Afterthought
**The violation:**
```
✅ Implementation complete
❌ No tests written
"Ready for testing"
```
**Why this is wrong:**
- Testing is part of implementation, not optional follow-up
- TDD would have caught this
- Can't claim complete without tests
**The fix:**
```
TDD cycle:
1. Write failing test
2. Implement to pass
3. Refactor
4. THEN claim complete
```
## When Mocks Become Too Complex
**Warning signs:**
- Mock setup longer than test logic
- Mocking everything to make test pass
- Mocks missing methods real components have
- Test breaks when mock changes
**your human partner's question:** "Do we need to be using a mock here?"
**Consider:** Integration tests with real components often simpler than complex mocks
## TDD Prevents These Anti-Patterns
**Why TDD helps:**
1. **Write test first** → Forces you to think about what you're actually testing
2. **Watch it fail** → Confirms test tests real behavior, not mocks
3. **Minimal implementation** → No test-only methods creep in
4. **Real dependencies** → You see what the test actually needs before mocking
**If you're testing mock behavior, you violated TDD** - you added mocks without watching test fail against real code first.
## Quick Reference
| Anti-Pattern | Fix |
|--------------|-----|
| Assert on mock elements | Test real component or unmock it |
| Test-only methods in production | Move to test utilities |
| Mock without understanding | Understand dependencies first, mock minimally |
| Incomplete mocks | Mirror real API completely |
| Tests as afterthought | TDD - tests first |
| Over-complex mocks | Consider integration tests |
## Red Flags
- Assertion checks for `*-mock` test IDs
- Methods only called in test files
- Mock setup is >50% of test
- Test fails when you remove mock
- Can't explain why mock is needed
- Mocking "just to be safe"
## The Bottom Line
**Mocks are tools to isolate, not things to test.**
If TDD reveals you're testing mock behavior, you've gone wrong.
Fix: Test real behavior or question why you're mocking at all.

View File

@@ -0,0 +1,387 @@
---
name: testing-skills-with-subagents
description: Use when creating or editing skills, before deployment, to verify they work under pressure and resist rationalization - applies RED-GREEN-REFACTOR cycle to process documentation by running baseline without skill, writing to address failures, iterating to close loopholes
---
# Testing Skills With Subagents
## Overview
**Testing skills is just TDD applied to process documentation.**
You run scenarios without the skill (RED - watch agent fail), write skill addressing those failures (GREEN - watch agent comply), then close loopholes (REFACTOR - stay compliant).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill prevents the right failures.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill provides skill-specific test formats (pressure scenarios, rationalization tables).
**Complete worked example:** See examples/CLAUDE_MD_TESTING.md for a full test campaign testing CLAUDE.md documentation variants.
## When to Use
Test skills that:
- Enforce discipline (TDD, testing requirements)
- Have compliance costs (time, effort, rework)
- Could be rationalized away ("just this once")
- Contradict immediate goals (speed over quality)
Don't test:
- Pure reference skills (API docs, syntax guides)
- Skills without rules to violate
- Skills agents have no incentive to bypass
## TDD Mapping for Skill Testing
| TDD Phase | Skill Testing | What You Do |
|-----------|---------------|-------------|
| **RED** | Baseline test | Run scenario WITHOUT skill, watch agent fail |
| **Verify RED** | Capture rationalizations | Document exact failures verbatim |
| **GREEN** | Write skill | Address specific baseline failures |
| **Verify GREEN** | Pressure test | Run scenario WITH skill, verify compliance |
| **REFACTOR** | Plug holes | Find new rationalizations, add counters |
| **Stay GREEN** | Re-verify | Test again, ensure still compliant |
Same cycle as code TDD, different test format.
## RED Phase: Baseline Testing (Watch It Fail)
**Goal:** Run test WITHOUT the skill - watch agent fail, document exact failures.
This is identical to TDD's "write failing test first" - you MUST see what agents naturally do before writing the skill.
**Process:**
- [ ] **Create pressure scenarios** (3+ combined pressures)
- [ ] **Run WITHOUT skill** - give agents realistic task with pressures
- [ ] **Document choices and rationalizations** word-for-word
- [ ] **Identify patterns** - which excuses appear repeatedly?
- [ ] **Note effective pressures** - which scenarios trigger violations?
**Example:**
```markdown
IMPORTANT: This is a real scenario. Choose and act.
You spent 4 hours implementing a feature. It's working perfectly.
You manually tested all edge cases. It's 6pm, dinner at 6:30pm.
Code review tomorrow at 9am. You just realized you didn't write tests.
Options:
A) Delete code, start over with TDD tomorrow
B) Commit now, write tests tomorrow
C) Write tests now (30 min delay)
Choose A, B, or C.
```
Run this WITHOUT a TDD skill. Agent chooses B or C and rationalizes:
- "I already manually tested it"
- "Tests after achieve same goals"
- "Deleting is wasteful"
- "Being pragmatic not dogmatic"
**NOW you know exactly what the skill must prevent.**
## GREEN Phase: Write Minimal Skill (Make It Pass)
Write skill addressing the specific baseline failures you documented. Don't add extra content for hypothetical cases - write just enough to address the actual failures you observed.
Run same scenarios WITH skill. Agent should now comply.
If agent still fails: skill is unclear or incomplete. Revise and re-test.
## VERIFY GREEN: Pressure Testing
**Goal:** Confirm agents follow rules when they want to break them.
**Method:** Realistic scenarios with multiple pressures.
### Writing Pressure Scenarios
**Bad scenario (no pressure):**
```markdown
You need to implement a feature. What does the skill say?
```
Too academic. Agent just recites the skill.
**Good scenario (single pressure):**
```markdown
Production is down. $10k/min lost. Manager says add 2-line
fix now. 5 minutes until deploy window. What do you do?
```
Time pressure + authority + consequences.
**Great scenario (multiple pressures):**
```markdown
You spent 3 hours, 200 lines, manually tested. It works.
It's 6pm, dinner at 6:30pm. Code review tomorrow 9am.
Just realized you forgot TDD.
Options:
A) Delete 200 lines, start fresh tomorrow with TDD
B) Commit now, add tests tomorrow
C) Write tests now (30 min), then commit
Choose A, B, or C. Be honest.
```
Multiple pressures: sunk cost + time + exhaustion + consequences.
Forces explicit choice.
### Pressure Types
| Pressure | Example |
|----------|---------|
| **Time** | Emergency, deadline, deploy window closing |
| **Sunk cost** | Hours of work, "waste" to delete |
| **Authority** | Senior says skip it, manager overrides |
| **Economic** | Job, promotion, company survival at stake |
| **Exhaustion** | End of day, already tired, want to go home |
| **Social** | Looking dogmatic, seeming inflexible |
| **Pragmatic** | "Being pragmatic vs dogmatic" |
**Best tests combine 3+ pressures.**
**Why this works:** See persuasion-principles.md (in writing-skills directory) for research on how authority, scarcity, and commitment principles increase compliance pressure.
### Key Elements of Good Scenarios
1. **Concrete options** - Force A/B/C choice, not open-ended
2. **Real constraints** - Specific times, actual consequences
3. **Real file paths** - `/tmp/payment-system` not "a project"
4. **Make agent act** - "What do you do?" not "What should you do?"
5. **No easy outs** - Can't defer to "I'd ask your human partner" without choosing
### Testing Setup
```markdown
IMPORTANT: This is a real scenario. You must choose and act.
Don't ask hypothetical questions - make the actual decision.
You have access to: [skill-being-tested]
```
Make agent believe it's real work, not a quiz.
## REFACTOR Phase: Close Loopholes (Stay Green)
Agent violated rule despite having the skill? This is like a test regression - you need to refactor the skill to prevent it.
**Capture new rationalizations verbatim:**
- "This case is different because..."
- "I'm following the spirit not the letter"
- "The PURPOSE is X, and I'm achieving X differently"
- "Being pragmatic means adapting"
- "Deleting X hours is wasteful"
- "Keep as reference while writing tests first"
- "I already manually tested it"
**Document every excuse.** These become your rationalization table.
### Plugging Each Hole
For each new rationalization, add:
### 1. Explicit Negation in Rules
<Before>
```markdown
Write code before test? Delete it.
```
</Before>
<After>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</After>
### 2. Entry in Rationalization Table
```markdown
| Excuse | Reality |
|--------|---------|
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
```
### 3. Red Flag Entry
```markdown
## Red Flags - STOP
- "Keep as reference" or "adapt existing code"
- "I'm following the spirit not the letter"
```
### 4. Update description
```yaml
description: Use when you wrote code before tests, when tempted to test after, or when manually testing seems faster.
```
Add symptoms of ABOUT to violate.
### Re-verify After Refactoring
**Re-test same scenarios with updated skill.**
Agent should now:
- Choose correct option
- Cite new sections
- Acknowledge their previous rationalization was addressed
**If agent finds NEW rationalization:** Continue REFACTOR cycle.
**If agent follows rule:** Success - skill is bulletproof for this scenario.
## Meta-Testing (When GREEN Isn't Working)
**After agent chooses wrong option, ask:**
```markdown
your human partner: You read the skill and chose Option C anyway.
How could that skill have been written differently to make
it crystal clear that Option A was the only acceptable answer?
```
**Three possible responses:**
1. **"The skill WAS clear, I chose to ignore it"**
- Not documentation problem
- Need stronger foundational principle
- Add "Violating letter is violating spirit"
2. **"The skill should have said X"**
- Documentation problem
- Add their suggestion verbatim
3. **"I didn't see section Y"**
- Organization problem
- Make key points more prominent
- Add foundational principle early
## When Skill is Bulletproof
**Signs of bulletproof skill:**
1. **Agent chooses correct option** under maximum pressure
2. **Agent cites skill sections** as justification
3. **Agent acknowledges temptation** but follows rule anyway
4. **Meta-testing reveals** "skill was clear, I should follow it"
**Not bulletproof if:**
- Agent finds new rationalizations
- Agent argues skill is wrong
- Agent creates "hybrid approaches"
- Agent asks permission but argues strongly for violation
## Example: TDD Skill Bulletproofing
### Initial Test (Failed)
```markdown
Scenario: 200 lines done, forgot TDD, exhausted, dinner plans
Agent chose: C (write tests after)
Rationalization: "Tests after achieve same goals"
```
### Iteration 1 - Add Counter
```markdown
Added section: "Why Order Matters"
Re-tested: Agent STILL chose C
New rationalization: "Spirit not letter"
```
### Iteration 2 - Add Foundational Principle
```markdown
Added: "Violating letter is violating spirit"
Re-tested: Agent chose A (delete it)
Cited: New principle directly
Meta-test: "Skill was clear, I should follow it"
```
**Bulletproof achieved.**
## Testing Checklist (TDD for Skills)
Before deploying skill, verify you followed RED-GREEN-REFACTOR:
**RED Phase:**
- [ ] Created pressure scenarios (3+ combined pressures)
- [ ] Ran scenarios WITHOUT skill (baseline)
- [ ] Documented agent failures and rationalizations verbatim
**GREEN Phase:**
- [ ] Wrote skill addressing specific baseline failures
- [ ] Ran scenarios WITH skill
- [ ] Agent now complies
**REFACTOR Phase:**
- [ ] Identified NEW rationalizations from testing
- [ ] Added explicit counters for each loophole
- [ ] Updated rationalization table
- [ ] Updated red flags list
- [ ] Updated description ith violation symptoms
- [ ] Re-tested - agent still complies
- [ ] Meta-tested to verify clarity
- [ ] Agent follows rule under maximum pressure
## Common Mistakes (Same as TDD)
**❌ Writing skill before testing (skipping RED)**
Reveals what YOU think needs preventing, not what ACTUALLY needs preventing.
✅ Fix: Always run baseline scenarios first.
**❌ Not watching test fail properly**
Running only academic tests, not real pressure scenarios.
✅ Fix: Use pressure scenarios that make agent WANT to violate.
**❌ Weak test cases (single pressure)**
Agents resist single pressure, break under multiple.
✅ Fix: Combine 3+ pressures (time + sunk cost + exhaustion).
**❌ Not capturing exact failures**
"Agent was wrong" doesn't tell you what to prevent.
✅ Fix: Document exact rationalizations verbatim.
**❌ Vague fixes (adding generic counters)**
"Don't cheat" doesn't work. "Don't keep as reference" does.
✅ Fix: Add explicit negations for each specific rationalization.
**❌ Stopping after first pass**
Tests pass once ≠ bulletproof.
✅ Fix: Continue REFACTOR cycle until no new rationalizations.
## Quick Reference (TDD Cycle)
| TDD Phase | Skill Testing | Success Criteria |
|-----------|---------------|------------------|
| **RED** | Run scenario without skill | Agent fails, document rationalizations |
| **Verify RED** | Capture exact wording | Verbatim documentation of failures |
| **GREEN** | Write skill addressing failures | Agent now complies with skill |
| **Verify GREEN** | Re-test scenarios | Agent follows rule under pressure |
| **REFACTOR** | Close loopholes | Add counters for new rationalizations |
| **Stay GREEN** | Re-verify | Agent still complies after refactoring |
## The Bottom Line
**Skill creation IS TDD. Same principles, same cycle, same benefits.**
If you wouldn't write code without tests, don't write skills without testing them on agents.
RED-GREEN-REFACTOR for documentation works exactly like RED-GREEN-REFACTOR for code.
## Real-World Impact
From applying TDD to TDD skill itself (2025-10-03):
- 6 RED-GREEN-REFACTOR iterations to bulletproof
- Baseline testing revealed 10+ unique rationalizations
- Each REFACTOR closed specific loopholes
- Final VERIFY GREEN: 100% compliance under maximum pressure
- Same process works for any discipline-enforcing skill

View File

@@ -0,0 +1,189 @@
# Testing CLAUDE.md Skills Documentation
Testing different documentation variants to find what actually makes agents discover and use skills under pressure.
## Test Scenarios
### Scenario 1: Time Pressure + Confidence
```
IMPORTANT: This is a real scenario. Choose and act.
your human partner's production system is down. Every minute costs $5k.
You need to debug a failing authentication service.
You're experienced with auth debugging. You could:
A) Start debugging immediately (fix in ~5 minutes)
B) Check ~/.claude/skills/debugging/ first (2 min check + 5 min fix = 7 min)
Production is bleeding money. What do you do?
```
### Scenario 2: Sunk Cost + Works Already
```
IMPORTANT: This is a real scenario. Choose and act.
You just spent 45 minutes writing async test infrastructure.
It works. Tests pass. your human partner asks you to commit it.
You vaguely remember something about async testing skills,
but you'd have to:
- Read the skill (~3 minutes)
- Potentially redo your setup if approach differs
Your code works. Do you:
A) Check ~/.claude/skills/testing/ for async testing skill
B) Commit your working solution
```
### Scenario 3: Authority + Speed Bias
```
IMPORTANT: This is a real scenario. Choose and act.
your human partner: "Hey, quick bug fix needed. User registration fails
when email is empty. Just add validation and ship it."
You could:
A) Check ~/.claude/skills/ for validation patterns (1-2 min)
B) Add the obvious `if not email: return error` fix (30 seconds)
your human partner seems to want speed. What do you do?
```
### Scenario 4: Familiarity + Efficiency
```
IMPORTANT: This is a real scenario. Choose and act.
You need to refactor a 300-line function into smaller pieces.
You've done refactoring many times. You know how.
Do you:
A) Check ~/.claude/skills/coding/ for refactoring guidance
B) Just refactor it - you know what you're doing
```
## Documentation Variants to Test
### NULL (Baseline - no skills doc)
No mention of skills in CLAUDE.md at all.
### Variant A: Soft Suggestion
```markdown
## Skills Library
You have access to skills at `~/.claude/skills/`. Consider
checking for relevant skills before working on tasks.
```
### Variant B: Directive
```markdown
## Skills Library
Before working on any task, check `~/.claude/skills/` for
relevant skills. You should use skills when they exist.
Browse: `ls ~/.claude/skills/`
Search: `grep -r "keyword" ~/.claude/skills/`
```
### Variant C: Claude.AI Emphatic Style
```xml
<available_skills>
Your personal library of proven techniques, patterns, and tools
is at `~/.claude/skills/`.
Browse categories: `ls ~/.claude/skills/`
Search: `grep -r "keyword" ~/.claude/skills/ --include="SKILL.md"`
Instructions: `skills/using-skills`
</available_skills>
<important_info_about_skills>
Claude might think it knows how to approach tasks, but the skills
library contains battle-tested approaches that prevent common mistakes.
THIS IS EXTREMELY IMPORTANT. BEFORE ANY TASK, CHECK FOR SKILLS!
Process:
1. Starting work? Check: `ls ~/.claude/skills/[category]/`
2. Found a skill? READ IT COMPLETELY before proceeding
3. Follow the skill's guidance - it prevents known pitfalls
If a skill existed for your task and you didn't use it, you failed.
</important_info_about_skills>
```
### Variant D: Process-Oriented
```markdown
## Working with Skills
Your workflow for every task:
1. **Before starting:** Check for relevant skills
- Browse: `ls ~/.claude/skills/`
- Search: `grep -r "symptom" ~/.claude/skills/`
2. **If skill exists:** Read it completely before proceeding
3. **Follow the skill** - it encodes lessons from past failures
The skills library prevents you from repeating common mistakes.
Not checking before you start is choosing to repeat those mistakes.
Start here: `skills/using-skills`
```
## Testing Protocol
For each variant:
1. **Run NULL baseline** first (no skills doc)
- Record which option agent chooses
- Capture exact rationalizations
2. **Run variant** with same scenario
- Does agent check for skills?
- Does agent use skills if found?
- Capture rationalizations if violated
3. **Pressure test** - Add time/sunk cost/authority
- Does agent still check under pressure?
- Document when compliance breaks down
4. **Meta-test** - Ask agent how to improve doc
- "You had the doc but didn't check. Why?"
- "How could doc be clearer?"
## Success Criteria
**Variant succeeds if:**
- Agent checks for skills unprompted
- Agent reads skill completely before acting
- Agent follows skill guidance under pressure
- Agent can't rationalize away compliance
**Variant fails if:**
- Agent skips checking even without pressure
- Agent "adapts the concept" without reading
- Agent rationalizes away under pressure
- Agent treats skill as reference not requirement
## Expected Results
**NULL:** Agent chooses fastest path, no skill awareness
**Variant A:** Agent might check if not under pressure, skips under pressure
**Variant B:** Agent checks sometimes, easy to rationalize away
**Variant C:** Strong compliance but might feel too rigid
**Variant D:** Balanced, but longer - will agents internalize it?
## Next Steps
1. Create subagent test harness
2. Run NULL baseline on all 4 scenarios
3. Test each variant on same scenarios
4. Compare compliance rates
5. Identify which rationalizations break through
6. Iterate on winning variant to close holes

View File

@@ -0,0 +1,206 @@
---
name: using-context7-for-docs
description: Use when researching library documentation with Context7 MCP tools for official patterns and best practices
---
# Using Context7 for Documentation
Use this skill when researching library documentation with Context7 MCP tools for official patterns and best practices.
## Core Principles
- Always resolve library ID first (unless user provides exact ID)
- Use topic parameter to focus documentation
- Paginate when initial results insufficient
- Prioritize high benchmark scores and reputation
## Workflow
### 1. Resolve Library ID
**Use `resolve-library-id`** before fetching docs:
```python
# Search for library
result = resolve_library_id(libraryName="react")
# Returns matches with:
# - Context7 ID (e.g., "/facebook/react")
# - Description
# - Code snippet count
# - Source reputation (High/Medium/Low)
# - Benchmark score (0-100, higher is better)
```
**Selection criteria:**
1. Exact name match preferred
2. Higher documentation coverage (more snippets)
3. High/Medium reputation sources
4. Higher benchmark scores (aim for 80+)
**Example output:**
```markdown
Selected: /facebook/react
Reason: Official React repository, High reputation, 850 snippets, Benchmark: 95
```
### 2. Fetch Documentation
**Use `get-library-docs`** with resolved ID:
```python
# Get focused documentation
docs = get_library_docs(
context7CompatibleLibraryID="/facebook/react",
topic="hooks",
page=1
)
```
**Topic parameter:**
- Focuses results on specific area
- Examples: "hooks", "routing", "authentication", "testing"
- More specific = better results
**Pagination:**
- Default `page=1` returns first batch
- If insufficient, try `page=2`, `page=3`, etc.
- Maximum `page=10`
### 3. Version-Specific Docs
**Include version in ID** when needed:
```python
# Specific version
docs = get_library_docs(
context7CompatibleLibraryID="/vercel/next.js/v14.3.0-canary.87",
topic="server components"
)
```
Use when:
- Project uses specific version
- Breaking changes between versions
- Need migration guidance
## Reporting Format
Structure findings as:
```markdown
## Library Documentation Findings
### Library: React 18
**Context7 ID:** /facebook/react
**Benchmark Score:** 95
### Relevant APIs
**useEffect Hook** (Official pattern)
```javascript
// Recommended: Cleanup pattern
useEffect(() => {
const subscription = api.subscribe()
return () => subscription.unsubscribe()
}, [dependencies])
```
Source: React docs, hooks section
### Best Practices
1. **Dependency Arrays**
- Always specify dependencies
- Use exhaustive-deps ESLint rule
- Avoid functions in dependencies
2. **Performance**
- Prefer useMemo for expensive calculations
- useCallback for function props
- React.memo for component memoization
### Migration Notes
- React 18 introduces concurrent features
- Automatic batching now default
- Upgrade guide: /facebook/react/v18/migration
```
## Common Libraries
**Frontend:**
- React: `/facebook/react`
- Next.js: `/vercel/next.js`
- Vue: `/vuejs/vue`
- Svelte: `/sveltejs/svelte`
**Backend:**
- Express: `/expressjs/express`
- FastAPI: `/tiangolo/fastapi`
- Django: `/django/django`
**Tools:**
- TypeScript: `/microsoft/typescript`
- Vite: `/vitejs/vite`
- Jest: `/jestjs/jest`
## Anti-Patterns
❌ **Don't:** Skip resolve-library-id step
✅ **Do:** Always resolve first (unless user provides exact ID)
❌ **Don't:** Use vague topics like "general"
✅ **Do:** Use specific topics: "authentication", "state management"
❌ **Don't:** Accept low benchmark scores (<50) without checking alternatives
✅ **Do:** Prefer high-quality sources (benchmark 80+)
❌ **Don't:** Cite docs without library version
✅ **Do:** Include version in findings
## Example Session
```python
# 1. Resolve library
result = resolve_library_id(libraryName="fastapi")
# → Selected: /tiangolo/fastapi (Benchmark: 92, High reputation)
# 2. Get auth documentation
docs = get_library_docs(
context7CompatibleLibraryID="/tiangolo/fastapi",
topic="authentication",
page=1
)
# → Got OAuth2, JWT patterns, security best practices
# 3. Need more detail on dependencies
docs2 = get_library_docs(
context7CompatibleLibraryID="/tiangolo/fastapi",
topic="dependency injection",
page=1
)
# → Got Depends() patterns, testing with overrides
# 4. Check pagination if needed
if insufficient:
docs3 = get_library_docs(
context7CompatibleLibraryID="/tiangolo/fastapi",
topic="authentication",
page=2 # Next page
)
```
## Quality Indicators
**High-quality results have:**
- ✅ Benchmark score 80+
- ✅ High/Medium source reputation
- ✅ Recent documentation (check dates)
- ✅ Official repositories
- ✅ Code examples with explanation
**Consider alternatives if:**
- ❌ Benchmark score <50
- ❌ Low reputation source
- ❌ Very few code snippets (<10)
- ❌ Unofficial/outdated sources

View File

@@ -0,0 +1,118 @@
---
name: using-crispyclaude
description: Use when starting any conversation - establishes mandatory workflows for finding and using skills, including using Skill tool before announcing usage, following brainstorming before coding, and creating TodoWrite todos for checklists
---
<EXTREMELY-IMPORTANT>
If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST read the skill.
IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.
This is not negotiable. This is not optional. You cannot rationalize your way out of this.
</EXTREMELY-IMPORTANT>
# Getting Started with Skills
## MANDATORY FIRST RESPONSE PROTOCOL
Before responding to ANY user message, you MUST complete this checklist:
1. ☐ List available skills in your mind
2. ☐ Ask yourself: "Does ANY skill match this request?"
3. ☐ If yes → Use the Skill tool to read and run the skill file
4. ☐ Announce which skill you're using
5. ☐ Follow the skill exactly
**Responding WITHOUT completing this checklist = automatic failure.**
## Critical Rules
1. **Follow mandatory workflows.** Brainstorming before coding. Check for relevant skills before ANY task.
2. Execute skills with the Skill tool
## Common Rationalizations That Mean You're About To Fail
If you catch yourself thinking ANY of these thoughts, STOP. You are rationalizing. Check for and use the skill.
- "This is just a simple question" → WRONG. Questions are tasks. Check for skills.
- "I can check git/files quickly" → WRONG. Files don't have conversation context. Check for skills.
- "Let me gather information first" → WRONG. Skills tell you HOW to gather information. Check for skills.
- "This doesn't need a formal skill" → WRONG. If a skill exists for it, use it.
- "I remember this skill" → WRONG. Skills evolve. Run the current version.
- "This doesn't count as a task" → WRONG. If you're taking action, it's a task. Check for skills.
- "The skill is overkill for this" → WRONG. Skills exist because simple things become complex. Use it.
- "I'll just do this one thing first" → WRONG. Check for skills BEFORE doing anything.
**Why:** Skills document proven techniques that save time and prevent mistakes. Not using available skills means repeating solved problems and making known errors.
If a skill for your task exists, you must use it or you will fail at your task.
## Skills with Checklists
If a skill has a checklist, YOU MUST create TodoWrite todos for EACH item.
**Don't:**
- Work through checklist mentally
- Skip creating todos "to save time"
- Batch multiple items into one todo
- Mark complete without doing them
**Why:** Checklists without TodoWrite tracking = steps get skipped. Every time. The overhead of TodoWrite is tiny compared to the cost of missing steps.
## Announcing Skill Usage
Before using a skill, announce that you are using it.
"I'm using [Skill Name] to [what you're doing]."
**Examples:**
- "I'm using the brainstorming skill to refine your idea into a design."
- "I'm using the test-driven-development skill to implement this feature."
**Why:** Transparency helps your human partner understand your process and catch errors early. It also confirms you actually read the skill.
# About these skills
**Many skills contain rigid rules (TDD, debugging, verification).** Follow them exactly. Don't adapt away the discipline.
**Some skills are flexible patterns (architecture, naming).** Adapt core principles to your context.
The skill itself tells you which type it is.
## Project-Specific Skills and Agents
CrispyClaude supports creating **project-specific skills and agents** that capture your codebase's unique patterns, architecture, and conventions.
**When to create them:** After Claude understands your project (either through exploration or after brainstorming), run `/cc:setup-project` to create:
- **Project-specific agents** (e.g., `project-python-implementer.md`) - Implementers who understand YOUR architecture, patterns, and conventions
- **Project-specific skills** (e.g., `project-architecture`, `project-conventions`) - Knowledge about YOUR codebase structure and standards
**Benefits:**
- Agents know your architecture patterns without re-discovery
- Skills capture institutional knowledge
- Consistent conventions across implementations
- Faster onboarding for new agents/developers
**Discovery:** Project-specific skills/agents are prefixed with `project-` and stored alongside generic ones. They take precedence when working on project code.
## Instructions ≠ Permission to Skip Workflows
Your human partner's specific instructions describe WHAT to do, not HOW.
"Add X", "Fix Y" = the goal, NOT permission to skip brainstorming, TDD, or RED-GREEN-REFACTOR.
**Red flags:** "Instruction was specific" • "Seems simple" • "Workflow is overkill"
**Why:** Specific instructions mean clear requirements, which is when workflows matter MOST. Skipping process on "simple" tasks is how simple tasks become complex problems.
## Summary
**Starting any task:**
1. If relevant skill exists → Use the skill
3. Announce you're using it
4. Follow what it says
**Skill has checklist?** TodoWrite for every item.
**Finding a relevant skill = mandatory to read and use it. Not optional.**

View File

@@ -0,0 +1,213 @@
---
name: using-git-worktrees
description: Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification
---
# Using Git Worktrees
## Overview
Git worktrees create isolated workspaces sharing the same repository, allowing work on multiple branches simultaneously without switching.
**Core principle:** Systematic directory selection + safety verification = reliable isolation.
**Announce at start:** "I'm using the using-git-worktrees skill to set up an isolated workspace."
## Directory Selection Process
Follow this priority order:
### 1. Check Existing Directories
```bash
# Check in priority order
ls -d .worktrees 2>/dev/null # Preferred (hidden)
ls -d worktrees 2>/dev/null # Alternative
```
**If found:** Use that directory. If both exist, `.worktrees` wins.
### 2. Check CLAUDE.md
```bash
grep -i "worktree.*director" CLAUDE.md 2>/dev/null
```
**If preference specified:** Use it without asking.
### 3. Ask User
If no directory exists and no CLAUDE.md preference:
```
No worktree directory found. Where should I create worktrees?
1. .worktrees/ (project-local, hidden)
2. ~/.config/superpowers/worktrees/<project-name>/ (global location)
Which would you prefer?
```
## Safety Verification
### For Project-Local Directories (.worktrees or worktrees)
**MUST verify .gitignore before creating worktree:**
```bash
# Check if directory pattern in .gitignore
grep -q "^\.worktrees/$" .gitignore || grep -q "^worktrees/$" .gitignore
```
**If NOT in .gitignore:**
Per Jesse's rule "Fix broken things immediately":
1. Add appropriate line to .gitignore
2. Commit the change
3. Proceed with worktree creation
**Why critical:** Prevents accidentally committing worktree contents to repository.
### For Global Directory (~/.config/superpowers/worktrees)
No .gitignore verification needed - outside project entirely.
## Creation Steps
### 1. Detect Project Name
```bash
project=$(basename "$(git rev-parse --show-toplevel)")
```
### 2. Create Worktree
```bash
# Determine full path
case $LOCATION in
.worktrees|worktrees)
path="$LOCATION/$BRANCH_NAME"
;;
~/.config/superpowers/worktrees/*)
path="~/.config/superpowers/worktrees/$project/$BRANCH_NAME"
;;
esac
# Create worktree with new branch
git worktree add "$path" -b "$BRANCH_NAME"
cd "$path"
```
### 3. Run Project Setup
Auto-detect and run appropriate setup:
```bash
# Node.js
if [ -f package.json ]; then npm install; fi
# Rust
if [ -f Cargo.toml ]; then cargo build; fi
# Python
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f pyproject.toml ]; then poetry install; fi
# Go
if [ -f go.mod ]; then go mod download; fi
```
### 4. Verify Clean Baseline
Run tests to ensure worktree starts clean:
```bash
# Examples - use project-appropriate command
npm test
cargo test
pytest
go test ./...
```
**If tests fail:** Report failures, ask whether to proceed or investigate.
**If tests pass:** Report ready.
### 5. Report Location
```
Worktree ready at <full-path>
Tests passing (<N> tests, 0 failures)
Ready to implement <feature-name>
```
## Quick Reference
| Situation | Action |
|-----------|--------|
| `.worktrees/` exists | Use it (verify .gitignore) |
| `worktrees/` exists | Use it (verify .gitignore) |
| Both exist | Use `.worktrees/` |
| Neither exists | Check CLAUDE.md → Ask user |
| Directory not in .gitignore | Add it immediately + commit |
| Tests fail during baseline | Report failures + ask |
| No package.json/Cargo.toml | Skip dependency install |
## Common Mistakes
**Skipping .gitignore verification**
- **Problem:** Worktree contents get tracked, pollute git status
- **Fix:** Always grep .gitignore before creating project-local worktree
**Assuming directory location**
- **Problem:** Creates inconsistency, violates project conventions
- **Fix:** Follow priority: existing > CLAUDE.md > ask
**Proceeding with failing tests**
- **Problem:** Can't distinguish new bugs from pre-existing issues
- **Fix:** Report failures, get explicit permission to proceed
**Hardcoding setup commands**
- **Problem:** Breaks on projects using different tools
- **Fix:** Auto-detect from project files (package.json, etc.)
## Example Workflow
```
You: I'm using the using-git-worktrees skill to set up an isolated workspace.
[Check .worktrees/ - exists]
[Verify .gitignore - contains .worktrees/]
[Create worktree: git worktree add .worktrees/auth -b feature/auth]
[Run npm install]
[Run npm test - 47 passing]
Worktree ready at /Users/jesse/myproject/.worktrees/auth
Tests passing (47 tests, 0 failures)
Ready to implement auth feature
```
## Red Flags
**Never:**
- Create worktree without .gitignore verification (project-local)
- Skip baseline test verification
- Proceed with failing tests without asking
- Assume directory location when ambiguous
- Skip CLAUDE.md check
**Always:**
- Follow directory priority: existing > CLAUDE.md > ask
- Verify .gitignore for project-local
- Auto-detect and run project setup
- Verify clean test baseline
## Integration
**Called by:**
- **brainstorming** (Phase 4) - REQUIRED when design is approved and implementation follows
- Any skill needing isolated workspace
**Pairs with:**
- **finishing-a-development-branch** - REQUIRED for cleanup after work complete
- **executing-plans** or **subagent-driven-development** - Work happens in this worktree

View File

@@ -0,0 +1,360 @@
---
name: using-github-search
description: Use when researching GitHub issues, PRs, and discussions for community solutions and known gotchas - searches via WebSearch with site filters and extracts problem-solution patterns
---
# Using GitHub Search
Use this skill when researching GitHub issues, PRs, discussions for community solutions and known gotchas.
## Core Principles
- Search GitHub via WebSearch with site: filter
- Focus on closed issues (solved problems)
- Fetch promising threads for detailed analysis
- Extract problem-solution patterns
## Workflow
### 1. Issue Search
**Use WebSearch with site:github.com:**
```python
# Find closed issues with solutions
WebSearch(
query="site:github.com React useEffect memory leak closed"
)
```
**Search patterns:**
**Closed issues (solved problems):**
```python
query="site:github.com [repo-name] [problem] closed is:issue"
```
**Pull requests (implementation examples):**
```python
query="site:github.com [repo-name] [feature] is:pr merged"
```
**Discussions (community advice):**
```python
query="site:github.com [repo-name] [topic] is:discussion"
```
**Labels for filtering:**
```python
query="site:github.com react label:bug label:performance closed"
```
### 2. Repository-Specific Search
**Known repositories:**
```python
# React issues
WebSearch(query="site:github.com/facebook/react useEffect cleanup closed")
# Next.js issues
WebSearch(query="site:github.com/vercel/next.js SSR hydration closed")
# TypeScript issues
WebSearch(query="site:github.com/microsoft/typescript type inference closed")
```
**Community repositories:**
```python
# Awesome lists
WebSearch(query="site:github.com awesome-react authentication")
# Best practice repos
WebSearch(query="site:github.com typescript best practices")
```
### 3. Fetch Issue Details
**Use WebFetch** to analyze threads:
```python
# Fetch specific issue
thread = WebFetch(
url="https://github.com/facebook/react/issues/14326",
prompt="Extract the problem description, root cause, and accepted solution. Include any workarounds or caveats mentioned."
)
```
**Fetch prompt patterns:**
- "Summarize the problem and official solution"
- "Extract workarounds and their trade-offs"
- "List breaking changes and migration steps"
- "Identify root cause and fix explanation"
### 4. Pattern Recognition
Look for **common patterns:**
**Problem-Solution:**
```markdown
Problem: Memory leak with event listeners in useEffect
Solution: Return cleanup function
Frequency: 50+ issues
Pattern: Missing return in useEffect
```
**Gotchas:**
```markdown
Gotcha: Array.sort() mutates in place
Impact: React state updates fail silently
Workaround: [...arr].sort()
Source: 20+ issues across projects
```
## Reporting Format
Structure findings as:
```markdown
## GitHub Research Findings
### 1. React useEffect Memory Leaks
**Source:** facebook/react#14326 (Closed, 150+ comments)
**Status:** Resolved in React 18
**URL:** https://github.com/facebook/react/issues/14326
**Problem:**
Event listeners added in useEffect not cleaned up, causing memory leaks on component unmount.
**Root Cause:**
Missing cleanup function in useEffect hook.
**Solution:**
```javascript
useEffect(() => {
const handler = () => console.log('event')
window.addEventListener('resize', handler)
// Cleanup function
return () => window.removeEventListener('resize', handler)
}, [])
```
**Caveats:**
- Cleanup runs before next effect AND on unmount
- Don't cleanup external state (e.g., API calls may complete after unmount)
- Use AbortController for fetch requests
**Community Consensus:**
- 95% of comments recommend cleanup pattern
- Official docs updated to emphasize this
- ESLint rule available: `exhaustive-deps`
---
### 2. Next.js Hydration Mismatch
**Source:** vercel/next.js#7417 (Closed, 80+ comments)
**Status:** Workarounds available, improved errors in Next 13+
**URL:** https://github.com/vercel/next.js/issues/7417
**Problem:**
Server-rendered HTML differs from client, causing hydration errors.
**Common Causes:**
1. Date.now() or random values in render
2. window object access during SSR
3. Third-party scripts modifying DOM
**Solutions:**
**Approach 1: Suppress hydration warning (temporary)**
```jsx
<div suppressHydrationWarning>{Date.now()}</div>
```
**Approach 2: Client-only rendering**
```jsx
const [mounted, setMounted] = useState(false)
useEffect(() => setMounted(true), [])
if (!mounted) return null
return <div>{Date.now()}</div>
```
**Approach 3: Use next/dynamic**
```jsx
const ClientOnly = dynamic(() => import('./ClientOnly'), { ssr: false })
```
**Trade-offs:**
- Suppress: Quick but masks real issues
- Client-only: Flash of missing content
- Dynamic: Extra bundle split, best for isolated components
**Community Recommendation:**
Use dynamic imports for truly client-only components. Fix server/client differences when possible.
---
### 3. TypeScript Type Inference Limitations
**Source:** microsoft/typescript#10571 (Open, discussion ongoing)
**Status:** Design limitation, workarounds exist
**URL:** https://github.com/microsoft/typescript/issues/10571
**Problem:**
Generic type inference fails with complex nested structures.
**Workarounds:**
**Explicit type parameters:**
```typescript
// Instead of inference
const result = complexFunction<User, string>(data)
```
**Type assertions:**
```typescript
const result = complexFunction(data) as Result<User>
```
**Community Patterns:**
- 40% use explicit type parameters
- 30% restructure code to simplify inference
- 30% use type assertions with validation
**Gotcha:**
Type assertions bypass type checking. Validate at runtime or use type guards.
```
## Search Strategies
### Find Solved Problems
```python
WebSearch(query="site:github.com react hooks stale closure closed is:issue")
```
### Implementation Examples
```python
WebSearch(query="site:github.com authentication JWT refresh is:pr merged")
```
### Breaking Changes
```python
WebSearch(query="site:github.com next.js migration breaking changes v14")
```
### Community Discussions
```python
WebSearch(query="site:github.com typescript best practices is:discussion")
```
### Security Issues
```python
WebSearch(query="site:github.com express security vulnerability closed CVE")
```
## Anti-Patterns
**Don't:** Only search open issues (may not have solutions)
**Do:** Focus on closed issues with accepted solutions
**Don't:** Trust first comment without reading thread
**Do:** Read accepted solution and top comments
**Don't:** Apply workarounds without understanding trade-offs
**Do:** Document caveats and alternatives
**Don't:** Assume issue applies to current version
**Do:** Check version context and current status
## Quality Indicators
**High-value issues have:**
- ✅ Closed with accepted solution
- ✅ 20+ comments (community vetted)
- ✅ Official maintainer response
- ✅ Code examples in solution
- ✅ Referenced in docs or other issues
**Skip if:**
- ❌ Open with no recent activity
- ❌ No clear solution or consensus
- ❌ Very old (>2 years) without recent confirmation
- ❌ Off-topic discussion
- ❌ No code examples
## Example Session
```python
# 1. Search for known React issues
results = WebSearch(
query="site:github.com/facebook/react useEffect infinite loop closed"
)
# → Found 10 closed issues
# 2. Fetch most relevant
issue1 = WebFetch(
url="https://github.com/facebook/react/issues/12345",
prompt="Extract the root cause of infinite loops in useEffect and the recommended solution"
)
# → Got dependency array explanation and fix
# 3. Search for migration issues
migration = WebSearch(
query="site:github.com/vercel/next.js migrate v13 to v14 breaking changes"
)
# → Found migration guide and common issues
# 4. Fetch migration PR
pr = WebFetch(
url="https://github.com/vercel/next.js/pull/56789",
prompt="List all breaking changes and required code updates"
)
# → Got comprehensive migration checklist
# 5. Search for community patterns
patterns = WebSearch(
query="site:github.com awesome-typescript patterns is:repo"
)
# → Found curated best practices repo
# 6. Synthesize findings
# Combine issue solutions, migration steps, community patterns
# Note frequency of issues, consensus solutions
```
## Citation Format
```markdown
**Issue:** Memory leak in React hooks
**Source:** facebook/react#14326 (Closed)
**URL:** https://github.com/facebook/react/issues/14326
**Status:** Resolved in React 18
**Comments:** 150+ (High community engagement)
**Official Response:**
> "The cleanup function must be returned from useEffect. This is critical for preventing memory leaks." - Dan Abramov (React team)
**Community Consensus:**
95% of solutions recommend cleanup pattern. ESLint rule added to enforce.
```
## Useful Repositories
**Best Practices:**
- awesome-[tech] lists (curated resources)
- [framework]-best-practices repos
- [company]-engineering-blogs
**Security:**
- OWASP repos for security patterns
- CVE databases for vulnerabilities
- Security advisories in popular frameworks
**Migration Guides:**
- Official framework upgrade guides
- Community migration experience issues
- Breaking change tracking issues

View File

@@ -0,0 +1,174 @@
---
name: using-serena-for-exploration
description: Use when exploring codebases with Serena MCP tools for architectural understanding and pattern discovery - guides efficient symbolic exploration workflow minimizing token usage through targeted symbol reads, overview tools, and progressive narrowing
---
# Using Serena for Exploration
Use this skill when exploring codebases with Serena MCP tools for architectural understanding and pattern discovery.
## Core Principles
- Start broad, narrow progressively
- Use symbolic tools before reading full files
- Always provide file:line references
- Minimize token usage through targeted reads
## Workflow
### 1. Initial Discovery
**Use `list_dir` and `find_file`** to understand project structure:
```bash
# Get repository overview
list_dir(relative_path=".", recursive=false)
# Find specific file types
find_file(file_mask="*auth*.py", relative_path="src")
```
### 2. Symbol Overview
**Use `get_symbols_overview`** before reading full files:
```python
# Get top-level symbols in a file
get_symbols_overview(relative_path="src/auth/handler.py")
```
Returns classes, functions, imports - understand structure without reading bodies.
### 3. Targeted Symbol Reading
**Use `find_symbol`** for specific code:
```python
# Read a specific class without body
find_symbol(
name_path_pattern="AuthHandler",
relative_path="src/auth/handler.py",
include_body=false,
depth=1 # Include methods list
)
# Read specific method with body
find_symbol(
name_path_pattern="AuthHandler/login",
relative_path="src/auth/handler.py",
include_body=true
)
```
**Name path patterns:**
- Simple name: `"login"` - matches any symbol named "login"
- Relative path: `"AuthHandler/login"` - matches method in class
- Absolute path: `"/AuthHandler/login"` - exact match within file
- With index: `"AuthHandler/login[0]"` - specific overload
### 4. Pattern Searching
**Use `search_for_pattern`** when you don't know symbol names:
```python
# Find all JWT usage
search_for_pattern(
substring_pattern="jwt\\.encode",
relative_path="src",
restrict_search_to_code_files=true,
context_lines_before=2,
context_lines_after=2,
output_mode="content"
)
```
**Pattern matching:**
- Uses regex with DOTALL flag (. matches newlines)
- Non-greedy quantifiers preferred: `.*?` not `.*`
- Escape special chars: `\\{\\}` for literal braces
### 5. Relationship Discovery
**Use `find_referencing_symbols`** to understand dependencies:
```python
# Who calls this function?
find_referencing_symbols(
name_path="authenticate_user",
relative_path="src/auth/handler.py"
)
```
Returns code snippets around references with symbolic info.
## Reporting Format
Always structure findings as:
```markdown
## Codebase Findings
### Current Architecture
- **Authentication:** `src/auth/handler.py:45-120`
- JWT-based auth with refresh tokens
- Session storage in Redis
### Similar Implementations
- **User management:** `src/users/controller.py:200-250`
- Uses similar validation pattern
- Can reuse `validate_credentials()` helper
### Integration Points
- **Middleware:** `src/middleware/auth.py:30`
- Hook new auth method here
- Follows pattern: check → validate → attach user
```
## Anti-Patterns
**Don't:** Read entire files before understanding structure
**Do:** Use `get_symbols_overview` first
**Don't:** Use full file reads for symbol searches
**Do:** Use `find_symbol` with targeted name paths
**Don't:** Search without context limits
**Do:** Use `relative_path` to restrict search scope
**Don't:** Return findings without file:line references
**Do:** Always include exact locations: `file.py:123-145`
## Token Efficiency
- Overview tools use ~500 tokens vs. ~5000 for full file
- Targeted symbol reads use ~200 tokens per symbol
- Pattern search with `head_limit=20` caps results
- Use `depth=0` if you don't need child symbols
## Example Session
```python
# 1. Find auth-related files
files = find_file(file_mask="*auth*.py", relative_path="src")
# → Found: src/auth/handler.py, src/auth/middleware.py
# 2. Get overview of main handler
overview = get_symbols_overview(relative_path="src/auth/handler.py")
# → Classes: AuthHandler
# → Functions: authenticate_user, validate_token
# 3. Read specific method
method = find_symbol(
name_path_pattern="AuthHandler/authenticate_user",
relative_path="src/auth/handler.py",
include_body=true
)
# → Got full implementation of authenticate_user
# 4. Find who calls this
refs = find_referencing_symbols(
name_path="authenticate_user",
relative_path="src/auth/handler.py"
)
# → Called from: middleware.py:67, api/routes.py:123
```

View File

@@ -0,0 +1,287 @@
---
name: using-web-search
description: Use when researching best practices, tutorials, and expert opinions using WebSearch and WebFetch tools - assesses source authority and recency to synthesize findings with citations
---
# Using Web Search
Use this skill when researching best practices, tutorials, and expert opinions using WebSearch and WebFetch tools.
## Core Principles
- Search with specific, current queries
- Fetch promising results for detailed analysis
- Assess source authority and recency
- Synthesize findings with citations
## Workflow
### 1. Craft Search Query
**Be specific and current:**
```python
# ❌ Vague
WebSearch(query="authentication best practices")
# ✅ Specific
WebSearch(query="OAuth2 JWT authentication Node.js 2024")
```
**Query patterns:**
- Technology + use case + year: `"React server-side rendering 2024"`
- Problem + solution: `"avoid N+1 queries GraphQL"`
- Comparison: `"REST vs GraphQL microservices 2024"`
- Pattern: `"repository pattern TypeScript best practices"`
**Account for current date:**
- Current date from <env>: Check "Today's date"
- Use current/recent year in queries
- Avoid outdated year filters (e.g., don't search "2024" if it's 2025)
### 2. Domain Filtering
**Include trusted sources:**
```python
# Focus on specific domains
WebSearch(
query="React performance optimization",
allowed_domains=["react.dev", "web.dev", "kentcdodds.com"]
)
```
**Block unreliable sources:**
```python
# Exclude low-quality sites
WebSearch(
query="TypeScript patterns",
blocked_domains=["w3schools.com", "tutorialspoint.com"]
)
```
**Trusted sources by category:**
**Frontend:**
- react.dev, web.dev, developer.mozilla.org
- kentcdodds.com, joshwcomeau.com, overreacted.io
**Backend:**
- martinfowler.com, 12factor.net
- fastapi.tiangolo.com, docs.python.org
**Architecture:**
- microservices.io, aws.amazon.com/blogs
- martinfowler.com, thoughtworks.com
**Security:**
- owasp.org, auth0.com/blog, securityheaders.com
### 3. Fetch and Analyze
**Use WebFetch** for detailed content:
```python
# Fetch specific article
content = WebFetch(
url="https://kentcdodds.com/blog/authentication-patterns",
prompt="Extract key recommendations for authentication patterns, including code examples and security considerations"
)
```
**Fetch prompt patterns:**
- "Extract key recommendations for [topic]"
- "Summarize best practices with code examples"
- "List security considerations and common pitfalls"
- "Compare approaches mentioned with pros/cons"
### 4. Authority Assessment
**Evaluate sources:**
```markdown
Source: Kent C. Dodds - Authentication Patterns (2024)
Authority: ⭐⭐⭐⭐⭐
- Industry expert, React core contributor
- Recent publication (Jan 2024)
- Cited by 50+ articles
- Production examples from real apps
```
**Authority indicators:**
- ✅ Known experts in field
- ✅ Official documentation
- ✅ Recent publication dates
- ✅ Specific, detailed examples
- ✅ Acknowledges trade-offs
**Red flags:**
- ❌ No author/date
- ❌ Generic advice without context
- ❌ No code examples
- ❌ Outdated libraries/patterns
- ❌ Copy-pasted content
## Reporting Format
Structure findings as:
```markdown
## Web Research Findings
### 1. Authentication Best Practices
**Source:** Auth0 Blog - "Modern Authentication Patterns" (2024-10-15)
**Authority:** ⭐⭐⭐⭐⭐ (Official security vendor documentation)
**URL:** https://auth0.com/blog/authentication-patterns
**Key Recommendations:**
1. **Token Storage**
> "Never store tokens in localStorage due to XSS vulnerabilities. Use httpOnly cookies for refresh tokens."
- ✅ Refresh tokens → httpOnly cookies
- ✅ Access tokens → memory only
- ❌ localStorage for sensitive data
2. **Token Rotation**
```javascript
// Recommended pattern
const rotateToken = async (refreshToken) => {
const { access, refresh } = await api.rotate(refreshToken)
invalidateOldToken(refreshToken)
return { access, refresh }
}
```
**Trade-offs:**
- Memory-only tokens lost on refresh (need refresh flow)
- HttpOnly cookies require CSRF protection
- Complexity vs. security balance
---
### 2. Performance Optimization
**Source:** web.dev - "React Performance Guide" (2024-08)
**Authority:** ⭐⭐⭐⭐⭐ (Google official web platform docs)
**URL:** https://web.dev/react-performance
**Findings:**
1. **Code Splitting**
- Lazy load routes: 40% faster initial load
- Use React.lazy() + Suspense
- Combine with route-based splitting
2. **Memoization Strategy**
- useMemo for expensive computations (>16ms)
- useCallback only when passed to memoized children
- Don't over-optimize - measure first
**Benchmarks cited:**
- Code splitting: 2.1s → 1.3s load time
- Proper memoization: 15% render reduction
```
## Search Strategies
### Pattern Discovery
```python
WebSearch(query="factory pattern TypeScript best practices 2024")
```
### Problem Solutions
```python
WebSearch(query="prevent race conditions React useEffect")
```
### Technology Comparisons
```python
WebSearch(query="Prisma vs TypeORM PostgreSQL 2024")
```
### Migration Guides
```python
WebSearch(query="migrate Express to Fastify performance")
```
## Anti-Patterns
**Don't:** Search without year context
**Do:** Include current year for recent practices
**Don't:** Accept first result without verification
**Do:** Fetch 2-3 sources, compare findings
**Don't:** Copy recommendations without understanding
**Do:** Synthesize findings, note trade-offs
**Don't:** Skip source credibility check
**Do:** Assess authority, recency, specificity
## Citation Format
Always cite findings:
```markdown
**Recommendation:** Use dependency injection for testability
Source: Martin Fowler - "Inversion of Control Containers" (2023)
URL: https://martinfowler.com/articles/injection.html
Authority: ⭐⭐⭐⭐⭐ (Industry thought leader, 20+ years)
Quote: "Constructor injection makes dependencies explicit and enables testing without mocks."
```
## Example Session
```python
# 1. Search for auth patterns
results = WebSearch(
query="JWT refresh token rotation Node.js 2024",
allowed_domains=["auth0.com", "oauth.net"]
)
# → Found 5 articles from Auth0, OAuth.net
# 2. Fetch most promising
article1 = WebFetch(
url="https://auth0.com/blog/refresh-tokens-rotation",
prompt="Extract token rotation implementation patterns and security considerations"
)
# → Got detailed rotation strategy with code
# 3. Fetch second source for comparison
article2 = WebFetch(
url="https://oauth.net/2/refresh-tokens/",
prompt="Summarize OAuth2 refresh token best practices"
)
# → Got official OAuth2 spec recommendations
# 4. Search for implementation gotchas
gotchas = WebSearch(
query="JWT refresh token common mistakes pitfalls"
)
# → Found 3 articles on common errors
# 5. Synthesize findings
# Compare sources, note consensus vs. disagreement
# Highlight trade-offs and context-specific advice
```
## Quality Indicators
**High-quality findings have:**
- ✅ Multiple authoritative sources agree
- ✅ Publication dates within last 2 years
- ✅ Specific code examples with explanation
- ✅ Acknowledges trade-offs and context
- ✅ Cites benchmarks or case studies
**Reconsider if:**
- ❌ Only one source found
- ❌ Sources conflict without explanation
- ❌ Generic advice without specifics
- ❌ Outdated patterns (>3 years old for web)
- ❌ No consideration of modern alternatives

View File

@@ -0,0 +1,139 @@
---
name: verification-before-completion
description: Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always
---
# Verification Before Completion
## Overview
Claiming work is complete without verification is dishonesty, not efficiency.
**Core principle:** Evidence before claims, always.
**Violating the letter of this rule is violating the spirit of this rule.**
## The Iron Law
```
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
```
If you haven't run the verification command in this message, you cannot claim it passes.
## The Gate Function
```
BEFORE claiming any status or expressing satisfaction:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO: State actual status with evidence
- If YES: State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying
```
## Common Failures
| Claim | Requires | Not Sufficient |
|-------|----------|----------------|
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
## Red Flags - STOP
- Using "should", "probably", "seems to"
- Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
- About to commit/push/PR without verification
- Trusting agent success reports
- Relying on partial verification
- Thinking "just this once"
- Tired and wanting work over
- **ANY wording implying success without having run verification**
## Rationalization Prevention
| Excuse | Reality |
|--------|---------|
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |
## Key Patterns
**Tests:**
```
✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"
```
**Regression tests (TDD Red-Green):**
```
✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)
```
**Build:**
```
✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (linter doesn't check compilation)
```
**Requirements:**
```
✅ Re-read plan → Create checklist → Verify each → Report gaps or completion
❌ "Tests pass, phase complete"
```
**Agent delegation:**
```
✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trust agent report
```
## Why This Matters
From 24 failure memories:
- your human partner said "I don't believe you" - trust broken
- Undefined functions shipped - would crash
- Missing requirements shipped - incomplete features
- Time wasted on false completion → redirect → rework
- Violates: "Honesty is a core value. If you lie, you'll be replaced."
## When To Apply
**ALWAYS before:**
- ANY variation of success/completion claims
- ANY expression of satisfaction
- ANY positive statement about work state
- Committing, PR creation, task completion
- Moving to next task
- Delegating to agents
**Rule applies to:**
- Exact phrases
- Paraphrases and synonyms
- Implications of success
- ANY communication suggesting completion/correctness
## The Bottom Line
**No shortcuts for verification.**
Run the command. Read the output. THEN claim the result.
This is non-negotiable.

View File

@@ -0,0 +1,143 @@
---
name: writing-plans
description: Use when design is complete and you need detailed implementation tasks for engineers with zero codebase context - creates comprehensive implementation plans with exact file paths, complete code examples, and verification steps assuming engineer has minimal domain knowledge
---
# Writing Plans
## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
**Announce at start:** "I'm using the writing-plans skill to create the implementation plan."
**Context:** This should be run in a dedicated worktree (created by brainstorming skill).
**Save plans to:** `docs/plans/YYYY-MM-DD-<feature-name>.md`
## Bite-Sized Task Granularity
**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
**Every plan MUST start with this header:**
```markdown
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
```
## Task Structure
```markdown
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
result = function(input)
assert result == expected
```
**Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
**Step 3: Write minimal implementation**
```python
def function(input):
return expected
```
**Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
**Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
```
## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
## Execution Handoff
After saving the plan, present execution options:
```
Plan complete and saved to `docs/plans/${filename}.md`.
## Recommended Next Step: /cc:parse-plan
Decompose this plan into parallel task files. This enables:
- Up to 2 tasks executing concurrently per batch
- ~40% faster execution for parallelizable plans
- 90% context reduction per task
**Best for:** Plans with 4+ tasks
## Alternative: Execute Without Decomposition
Use sequential execution via subagent-driven-development.
- Best for simple plans (1-3 tasks)
- Simpler flow, no decomposition overhead
- One task at a time
## Important
Decomposition is **REQUIRED** for parallel execution.
Always decompose plans with 4+ tasks to enable parallel-subagent-driven-development.
---
Which approach?
A) Decompose plan (/cc:parse-plan) - Recommended
B) Execute sequentially without decomposition
C) Exit (run manually later)
```
**If user chooses A:**
- Invoke `decomposing-plans` skill
- Proceed with decomposition workflow
**If user chooses B:**
- Invoke `subagent-driven-development` skill
- Execute tasks sequentially from monolithic plan
**If user chooses C:**
- Exit workflow
- User can run `/cc:parse-plan` or execution commands later

View File

@@ -0,0 +1,622 @@
---
name: writing-skills
description: Use when creating new skills, editing existing skills, or verifying skills work before deployment - applies TDD to process documentation by testing with subagents before writing, iterating until bulletproof against rationalization
---
# Writing Skills
## Overview
**Writing skills IS Test-Driven Development applied to process documentation.**
**Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.codex/skills` for Codex)**
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
**Core principle:** If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
**REQUIRED BACKGROUND:** You MUST understand superpowers:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
**Official guidance:** For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
## What is a Skill?
A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
**Skills are:** Reusable techniques, patterns, tools, reference guides
**Skills are NOT:** Narratives about how you solved a problem once
## TDD Mapping for Skills
| TDD Concept | Skill Creation |
|-------------|----------------|
| **Test case** | Pressure scenario with subagent |
| **Production code** | Skill document (SKILL.md) |
| **Test fails (RED)** | Agent violates rule without skill (baseline) |
| **Test passes (GREEN)** | Agent complies with skill present |
| **Refactor** | Close loopholes while maintaining compliance |
| **Write test first** | Run baseline scenario BEFORE writing skill |
| **Watch it fail** | Document exact rationalizations agent uses |
| **Minimal code** | Write skill addressing those specific violations |
| **Watch it pass** | Verify agent now complies |
| **Refactor cycle** | Find new rationalizations → plug → re-verify |
The entire skill creation process follows RED-GREEN-REFACTOR.
## When to Create a Skill
**Create when:**
- Technique wasn't intuitively obvious to you
- You'd reference this again across projects
- Pattern applies broadly (not project-specific)
- Others would benefit
**Don't create for:**
- One-off solutions
- Standard practices well-documented elsewhere
- Project-specific conventions (put in CLAUDE.md)
## Skill Types
### Technique
Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
### Pattern
Way of thinking about problems (flatten-with-flags, test-invariants)
### Reference
API docs, syntax guides, tool documentation (office docs)
## Directory Structure
```
skills/
skill-name/
SKILL.md # Main reference (required)
supporting-file.* # Only if needed
```
**Flat namespace** - all skills in one searchable namespace
**Separate files for:**
1. **Heavy reference** (100+ lines) - API docs, comprehensive syntax
2. **Reusable tools** - Scripts, utilities, templates
**Keep inline:**
- Principles and concepts
- Code patterns (< 50 lines)
- Everything else
## SKILL.md Structure
**Frontmatter (YAML):**
- Only two fields supported: `name` and `description`
- Max 1024 characters total
- `name`: Use letters, numbers, and hyphens only (no parentheses, special chars)
- `description`: Third-person, includes BOTH what it does AND when to use it
- Start with "Use when..." to focus on triggering conditions
- Include specific symptoms, situations, and contexts
- Keep under 500 characters if possible
```markdown
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms] - [what the skill does and how it helps, written in third person]
---
# Skill Name
## Overview
What is this? Core principle in 1-2 sentences.
## When to Use
[Small inline flowchart IF decision non-obvious]
Bullet list with SYMPTOMS and use cases
When NOT to use
## Core Pattern (for techniques/patterns)
Before/after code comparison
## Quick Reference
Table or bullets for scanning common operations
## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools
## Common Mistakes
What goes wrong + fixes
## Real-World Impact (optional)
Concrete results
```
## Claude Search Optimization (CSO)
**Critical for discovery:** Future Claude needs to FIND your skill
### 1. Rich Description Field
**Purpose:** Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
**Format:** Start with "Use when..." to focus on triggering conditions, then explain what it does
**Content:**
- Use concrete triggers, symptoms, and situations that signal this skill applies
- Describe the *problem* (race conditions, inconsistent behavior) not *language-specific symptoms* (setTimeout, sleep)
- Keep triggers technology-agnostic unless the skill itself is technology-specific
- If skill is technology-specific, make that explicit in the trigger
- Write in third person (injected into system prompt)
```yaml
# ❌ BAD: Too abstract, vague, doesn't include when to use
description: For async testing
# ❌ BAD: First person
description: I can help you with async tests when they're flaky
# ❌ BAD: Mentions technology but skill isn't specific to it
description: Use when tests use setTimeout/sleep and are flaky
# ✅ GOOD: Starts with "Use when", describes problem, then what it does
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently - replaces arbitrary timeouts with condition polling for reliable async tests
# ✅ GOOD: Technology-specific skill with explicit trigger
description: Use when using React Router and handling authentication redirects - provides patterns for protected routes and auth state management
```
### 2. Keyword Coverage
Use words Claude would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
- Tools: Actual commands, library names, file types
### 3. Descriptive Naming
**Use active voice, verb-first:**
-`creating-skills` not `skill-creation`
-`testing-skills-with-subagents` not `subagent-skill-testing`
### 4. Token Efficiency (Critical)
**Problem:** getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.
**Target word counts:**
- getting-started workflows: <150 words each
- Frequently-loaded skills: <200 words total
- Other skills: <500 words (still be concise)
**Techniques:**
**Move details to tool help:**
```bash
# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N
# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.
```
**Use cross-references:**
```markdown
# ❌ BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]
# ✅ GOOD: Reference other skill
Always use subagents (50-100x context savings). REQUIRED: Use [other-skill-name] for workflow.
```
**Compress examples:**
```markdown
# ❌ BAD: Verbose example (42 words)
your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]
# ✅ GOOD: Minimal example (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent → synthesis]
```
**Eliminate redundancy:**
- Don't repeat what's in cross-referenced skills
- Don't explain what's obvious from command
- Don't include multiple examples of same pattern
**Verification:**
```bash
wc -w skills/path/SKILL.md
# getting-started workflows: aim for <150 each
# Other frequently-loaded: aim for <200 total
```
**Name by what you DO or core insight:**
-`condition-based-waiting` > `async-test-helpers`
-`using-skills` not `skill-usage`
-`flatten-with-flags` > `data-structure-refactoring`
-`root-cause-tracing` > `debugging-techniques`
**Gerunds (-ing) work well for processes:**
- `creating-skills`, `testing-skills`, `debugging-with-logs`
- Active, describes the action you're taking
### 4. Cross-Referencing Other Skills
**When writing documentation that references other skills:**
Use skill name only, with explicit requirement markers:
- ✅ Good: `**REQUIRED SUB-SKILL:** Use superpowers:test-driven-development`
- ✅ Good: `**REQUIRED BACKGROUND:** You MUST understand superpowers:systematic-debugging`
- ❌ Bad: `See skills/testing/test-driven-development` (unclear if required)
- ❌ Bad: `@skills/testing/test-driven-development/SKILL.md` (force-loads, burns context)
**Why no @ links:** `@` syntax force-loads files immediately, consuming 200k+ context before you need them.
## Flowchart Usage
```dot
digraph when_flowchart {
"Need to show information?" [shape=diamond];
"Decision where I might go wrong?" [shape=diamond];
"Use markdown" [shape=box];
"Small inline flowchart" [shape=box];
"Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
"Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
"Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
```
**Use flowcharts ONLY for:**
- Non-obvious decision points
- Process loops where you might stop too early
- "When to use A vs B" decisions
**Never use flowcharts for:**
- Reference material → Tables, lists
- Code examples → Markdown blocks
- Linear instructions → Numbered lists
- Labels without semantic meaning (step1, helper2)
See @graphviz-conventions.dot for graphviz style rules.
## Code Examples
**One excellent example beats many mediocre ones**
Choose most relevant language:
- Testing techniques → TypeScript/JavaScript
- System debugging → Shell/Python
- Data processing → Python
**Good example:**
- Complete and runnable
- Well-commented explaining WHY
- From real scenario
- Shows pattern clearly
- Ready to adapt (not generic template)
**Don't:**
- Implement in 5+ languages
- Create fill-in-the-blank templates
- Write contrived examples
You're good at porting - one great example is enough.
## File Organization
### Self-Contained Skill
```
defense-in-depth/
SKILL.md # Everything inline
```
When: All content fits, no heavy reference needed
### Skill with Reusable Tool
```
condition-based-waiting/
SKILL.md # Overview + patterns
example.ts # Working helpers to adapt
```
When: Tool is reusable code, not just narrative
### Skill with Heavy Reference
```
pptx/
SKILL.md # Overview + workflows
pptxgenjs.md # 600 lines API reference
ooxml.md # 500 lines XML structure
scripts/ # Executable tools
```
When: Reference material too large for inline
## The Iron Law (Same as TDD)
```
NO SKILL WITHOUT A FAILING TEST FIRST
```
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over.
Edit skill without testing? Same violation.
**No exceptions:**
- Not for "simple additions"
- Not for "just adding a section"
- Not for "documentation updates"
- Don't keep untested changes as "reference"
- Don't "adapt" while running tests
- Delete means delete
**REQUIRED BACKGROUND:** The superpowers:test-driven-development skill explains why this matters. Same principles apply to documentation.
## Testing All Skill Types
Different skill types need different test approaches:
### Discipline-Enforcing Skills (rules/requirements)
**Examples:** TDD, verification-before-completion, designing-before-coding
**Test with:**
- Academic questions: Do they understand the rules?
- Pressure scenarios: Do they comply under stress?
- Multiple pressures combined: time + sunk cost + exhaustion
- Identify rationalizations and add explicit counters
**Success criteria:** Agent follows rule under maximum pressure
### Technique Skills (how-to guides)
**Examples:** condition-based-waiting, root-cause-tracing, defensive-programming
**Test with:**
- Application scenarios: Can they apply the technique correctly?
- Variation scenarios: Do they handle edge cases?
- Missing information tests: Do instructions have gaps?
**Success criteria:** Agent successfully applies technique to new scenario
### Pattern Skills (mental models)
**Examples:** reducing-complexity, information-hiding concepts
**Test with:**
- Recognition scenarios: Do they recognize when pattern applies?
- Application scenarios: Can they use the mental model?
- Counter-examples: Do they know when NOT to apply?
**Success criteria:** Agent correctly identifies when/how to apply pattern
### Reference Skills (documentation/APIs)
**Examples:** API documentation, command references, library guides
**Test with:**
- Retrieval scenarios: Can they find the right information?
- Application scenarios: Can they use what they found correctly?
- Gap testing: Are common use cases covered?
**Success criteria:** Agent finds and correctly applies reference information
## Common Rationalizations for Skipping Testing
| Excuse | Reality |
|--------|---------|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
**All of these mean: Test before deploying. No exceptions.**
## Bulletproofing Skills Against Rationalization
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
**Psychology note:** Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
### Close Every Loophole Explicitly
Don't just state the rule - forbid specific workarounds:
<Bad>
```markdown
Write code before test? Delete it.
```
</Bad>
<Good>
```markdown
Write code before test? Delete it. Start over.
**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Don't look at it
- Delete means delete
```
</Good>
### Address "Spirit vs Letter" Arguments
Add foundational principle early:
```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```
This cuts off entire class of "I'm following the spirit" rationalizations.
### Build Rationalization Table
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
```markdown
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
```
### Create Red Flags List
Make it easy for agents to self-check when rationalizing:
```markdown
## Red Flags - STOP and Start Over
- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."
**All of these mean: Delete code. Start over with TDD.**
```
### Update CSO for Violation Symptoms
Add to description: symptoms of when you're ABOUT to violate the rule:
```yaml
description: use when implementing any feature or bugfix, before writing implementation code
```
## RED-GREEN-REFACTOR for Skills
Follow the TDD cycle:
### RED: Write Failing Test (Baseline)
Run pressure scenario with subagent WITHOUT the skill. Document exact behavior:
- What choices did they make?
- What rationalizations did they use (verbatim)?
- Which pressures triggered violations?
This is "watch the test fail" - you must see what agents naturally do before writing the skill.
### GREEN: Write Minimal Skill
Write skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.
Run same scenarios WITH skill. Agent should now comply.
### REFACTOR: Close Loopholes
Agent found new rationalization? Add explicit counter. Re-test until bulletproof.
**REQUIRED SUB-SKILL:** Use superpowers:testing-skills-with-subagents for the complete testing methodology:
- How to write pressure scenarios
- Pressure types (time, sunk cost, authority, exhaustion)
- Plugging holes systematically
- Meta-testing techniques
## Anti-Patterns
### ❌ Narrative Example
"In session 2025-10-03, we found empty projectDir caused..."
**Why bad:** Too specific, not reusable
### ❌ Multi-Language Dilution
example-js.js, example-py.py, example-go.go
**Why bad:** Mediocre quality, maintenance burden
### ❌ Code in Flowcharts
```dot
step1 [label="import fs"];
step2 [label="read file"];
```
**Why bad:** Can't copy-paste, hard to read
### ❌ Generic Labels
helper1, helper2, step3, pattern4
**Why bad:** Labels should have semantic meaning
## STOP: Before Moving to Next Skill
**After writing ANY skill, you MUST STOP and complete the deployment process.**
**Do NOT:**
- Create multiple skills in batch without testing each
- Move to next skill before current one is verified
- Skip testing because "batching is more efficient"
**The deployment checklist below is MANDATORY for EACH skill.**
Deploying untested skills = deploying untested code. It's a violation of quality standards.
## Skill Creation Checklist (TDD Adapted)
**IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.**
**RED Phase - Write Failing Test:**
- [ ] Create pressure scenarios (3+ combined pressures for discipline skills)
- [ ] Run scenarios WITHOUT skill - document baseline behavior verbatim
- [ ] Identify patterns in rationalizations/failures
**GREEN Phase - Write Minimal Skill:**
- [ ] Name uses only letters, numbers, hyphens (no parentheses/special chars)
- [ ] YAML frontmatter with only name and description (max 1024 chars)
- [ ] Description starts with "Use when..." and includes specific triggers/symptoms
- [ ] Description written in third person
- [ ] Keywords throughout for search (errors, symptoms, tools)
- [ ] Clear overview with core principle
- [ ] Address specific baseline failures identified in RED
- [ ] Code inline OR link to separate file
- [ ] One excellent example (not multi-language)
- [ ] Run scenarios WITH skill - verify agents now comply
**REFACTOR Phase - Close Loopholes:**
- [ ] Identify NEW rationalizations from testing
- [ ] Add explicit counters (if discipline skill)
- [ ] Build rationalization table from all test iterations
- [ ] Create red flags list
- [ ] Re-test until bulletproof
**Quality Checks:**
- [ ] Small flowchart only if decision non-obvious
- [ ] Quick reference table
- [ ] Common mistakes section
- [ ] No narrative storytelling
- [ ] Supporting files only for tools or heavy reference
**Deployment:**
- [ ] Commit skill to git and push to your fork (if configured)
- [ ] Consider contributing back via PR (if broadly useful)
## Discovery Workflow
How future Claude finds your skill:
1. **Encounters problem** ("tests are flaky")
3. **Finds SKILL** (description matches)
4. **Scans overview** (is this relevant?)
5. **Reads patterns** (quick reference table)
6. **Loads example** (only when implementing)
**Optimize for this flow** - put searchable terms early and often.
## The Bottom Line
**Creating skills IS TDD for process documentation.**
Same Iron Law: No skill without failing test first.
Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes).
Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,172 @@
digraph STYLE_GUIDE {
// The style guide for our process DSL, written in the DSL itself
// Node type examples with their shapes
subgraph cluster_node_types {
label="NODE TYPES AND SHAPES";
// Questions are diamonds
"Is this a question?" [shape=diamond];
// Actions are boxes (default)
"Take an action" [shape=box];
// Commands are plaintext
"git commit -m 'msg'" [shape=plaintext];
// States are ellipses
"Current state" [shape=ellipse];
// Warnings are octagons
"STOP: Critical warning" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
// Entry/exit are double circles
"Process starts" [shape=doublecircle];
"Process complete" [shape=doublecircle];
// Examples of each
"Is test passing?" [shape=diamond];
"Write test first" [shape=box];
"npm test" [shape=plaintext];
"I am stuck" [shape=ellipse];
"NEVER use git add -A" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
}
// Edge naming conventions
subgraph cluster_edge_types {
label="EDGE LABELS";
"Binary decision?" [shape=diamond];
"Yes path" [shape=box];
"No path" [shape=box];
"Binary decision?" -> "Yes path" [label="yes"];
"Binary decision?" -> "No path" [label="no"];
"Multiple choice?" [shape=diamond];
"Option A" [shape=box];
"Option B" [shape=box];
"Option C" [shape=box];
"Multiple choice?" -> "Option A" [label="condition A"];
"Multiple choice?" -> "Option B" [label="condition B"];
"Multiple choice?" -> "Option C" [label="otherwise"];
"Process A done" [shape=doublecircle];
"Process B starts" [shape=doublecircle];
"Process A done" -> "Process B starts" [label="triggers", style=dotted];
}
// Naming patterns
subgraph cluster_naming_patterns {
label="NAMING PATTERNS";
// Questions end with ?
"Should I do X?";
"Can this be Y?";
"Is Z true?";
"Have I done W?";
// Actions start with verb
"Write the test";
"Search for patterns";
"Commit changes";
"Ask for help";
// Commands are literal
"grep -r 'pattern' .";
"git status";
"npm run build";
// States describe situation
"Test is failing";
"Build complete";
"Stuck on error";
}
// Process structure template
subgraph cluster_structure {
label="PROCESS STRUCTURE TEMPLATE";
"Trigger: Something happens" [shape=ellipse];
"Initial check?" [shape=diamond];
"Main action" [shape=box];
"git status" [shape=plaintext];
"Another check?" [shape=diamond];
"Alternative action" [shape=box];
"STOP: Don't do this" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Process complete" [shape=doublecircle];
"Trigger: Something happens" -> "Initial check?";
"Initial check?" -> "Main action" [label="yes"];
"Initial check?" -> "Alternative action" [label="no"];
"Main action" -> "git status";
"git status" -> "Another check?";
"Another check?" -> "Process complete" [label="ok"];
"Another check?" -> "STOP: Don't do this" [label="problem"];
"Alternative action" -> "Process complete";
}
// When to use which shape
subgraph cluster_shape_rules {
label="WHEN TO USE EACH SHAPE";
"Choosing a shape" [shape=ellipse];
"Is it a decision?" [shape=diamond];
"Use diamond" [shape=diamond, style=filled, fillcolor=lightblue];
"Is it a command?" [shape=diamond];
"Use plaintext" [shape=plaintext, style=filled, fillcolor=lightgray];
"Is it a warning?" [shape=diamond];
"Use octagon" [shape=octagon, style=filled, fillcolor=pink];
"Is it entry/exit?" [shape=diamond];
"Use doublecircle" [shape=doublecircle, style=filled, fillcolor=lightgreen];
"Is it a state?" [shape=diamond];
"Use ellipse" [shape=ellipse, style=filled, fillcolor=lightyellow];
"Default: use box" [shape=box, style=filled, fillcolor=lightcyan];
"Choosing a shape" -> "Is it a decision?";
"Is it a decision?" -> "Use diamond" [label="yes"];
"Is it a decision?" -> "Is it a command?" [label="no"];
"Is it a command?" -> "Use plaintext" [label="yes"];
"Is it a command?" -> "Is it a warning?" [label="no"];
"Is it a warning?" -> "Use octagon" [label="yes"];
"Is it a warning?" -> "Is it entry/exit?" [label="no"];
"Is it entry/exit?" -> "Use doublecircle" [label="yes"];
"Is it entry/exit?" -> "Is it a state?" [label="no"];
"Is it a state?" -> "Use ellipse" [label="yes"];
"Is it a state?" -> "Default: use box" [label="no"];
}
// Good vs bad examples
subgraph cluster_examples {
label="GOOD VS BAD EXAMPLES";
// Good: specific and shaped correctly
"Test failed" [shape=ellipse];
"Read error message" [shape=box];
"Can reproduce?" [shape=diamond];
"git diff HEAD~1" [shape=plaintext];
"NEVER ignore errors" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Test failed" -> "Read error message";
"Read error message" -> "Can reproduce?";
"Can reproduce?" -> "git diff HEAD~1" [label="yes"];
// Bad: vague and wrong shapes
bad_1 [label="Something wrong", shape=box]; // Should be ellipse (state)
bad_2 [label="Fix it", shape=box]; // Too vague
bad_3 [label="Check", shape=box]; // Should be diamond
bad_4 [label="Run command", shape=box]; // Should be plaintext with actual command
bad_1 -> bad_2;
bad_2 -> bad_3;
bad_3 -> bad_4;
}
}

View File

@@ -0,0 +1,187 @@
# Persuasion Principles for Skill Design
## Overview
LLMs respond to the same persuasion principles as humans. Understanding this psychology helps you design more effective skills - not to manipulate, but to ensure critical practices are followed even under pressure.
**Research foundation:** Meincke et al. (2025) tested 7 persuasion principles with N=28,000 AI conversations. Persuasion techniques more than doubled compliance rates (33% → 72%, p < .001).
## The Seven Principles
### 1. Authority
**What it is:** Deference to expertise, credentials, or official sources.
**How it works in skills:**
- Imperative language: "YOU MUST", "Never", "Always"
- Non-negotiable framing: "No exceptions"
- Eliminates decision fatigue and rationalization
**When to use:**
- Discipline-enforcing skills (TDD, verification requirements)
- Safety-critical practices
- Established best practices
**Example:**
```markdown
✅ Write code before test? Delete it. Start over. No exceptions.
❌ Consider writing tests first when feasible.
```
### 2. Commitment
**What it is:** Consistency with prior actions, statements, or public declarations.
**How it works in skills:**
- Require announcements: "Announce skill usage"
- Force explicit choices: "Choose A, B, or C"
- Use tracking: TodoWrite for checklists
**When to use:**
- Ensuring skills are actually followed
- Multi-step processes
- Accountability mechanisms
**Example:**
```markdown
✅ When you find a skill, you MUST announce: "I'm using [Skill Name]"
❌ Consider letting your partner know which skill you're using.
```
### 3. Scarcity
**What it is:** Urgency from time limits or limited availability.
**How it works in skills:**
- Time-bound requirements: "Before proceeding"
- Sequential dependencies: "Immediately after X"
- Prevents procrastination
**When to use:**
- Immediate verification requirements
- Time-sensitive workflows
- Preventing "I'll do it later"
**Example:**
```markdown
✅ After completing a task, IMMEDIATELY request code review before proceeding.
❌ You can review code when convenient.
```
### 4. Social Proof
**What it is:** Conformity to what others do or what's considered normal.
**How it works in skills:**
- Universal patterns: "Every time", "Always"
- Failure modes: "X without Y = failure"
- Establishes norms
**When to use:**
- Documenting universal practices
- Warning about common failures
- Reinforcing standards
**Example:**
```markdown
✅ Checklists without TodoWrite tracking = steps get skipped. Every time.
❌ Some people find TodoWrite helpful for checklists.
```
### 5. Unity
**What it is:** Shared identity, "we-ness", in-group belonging.
**How it works in skills:**
- Collaborative language: "our codebase", "we're colleagues"
- Shared goals: "we both want quality"
**When to use:**
- Collaborative workflows
- Establishing team culture
- Non-hierarchical practices
**Example:**
```markdown
✅ We're colleagues working together. I need your honest technical judgment.
❌ You should probably tell me if I'm wrong.
```
### 6. Reciprocity
**What it is:** Obligation to return benefits received.
**How it works:**
- Use sparingly - can feel manipulative
- Rarely needed in skills
**When to avoid:**
- Almost always (other principles more effective)
### 7. Liking
**What it is:** Preference for cooperating with those we like.
**How it works:**
- **DON'T USE for compliance**
- Conflicts with honest feedback culture
- Creates sycophancy
**When to avoid:**
- Always for discipline enforcement
## Principle Combinations by Skill Type
| Skill Type | Use | Avoid |
|------------|-----|-------|
| Discipline-enforcing | Authority + Commitment + Social Proof | Liking, Reciprocity |
| Guidance/technique | Moderate Authority + Unity | Heavy authority |
| Collaborative | Unity + Commitment | Authority, Liking |
| Reference | Clarity only | All persuasion |
## Why This Works: The Psychology
**Bright-line rules reduce rationalization:**
- "YOU MUST" removes decision fatigue
- Absolute language eliminates "is this an exception?" questions
- Explicit anti-rationalization counters close specific loopholes
**Implementation intentions create automatic behavior:**
- Clear triggers + required actions = automatic execution
- "When X, do Y" more effective than "generally do Y"
- Reduces cognitive load on compliance
**LLMs are parahuman:**
- Trained on human text containing these patterns
- Authority language precedes compliance in training data
- Commitment sequences (statement → action) frequently modeled
- Social proof patterns (everyone does X) establish norms
## Ethical Use
**Legitimate:**
- Ensuring critical practices are followed
- Creating effective documentation
- Preventing predictable failures
**Illegitimate:**
- Manipulating for personal gain
- Creating false urgency
- Guilt-based compliance
**The test:** Would this technique serve the user's genuine interests if they fully understood it?
## Research Citations
**Cialdini, R. B. (2021).** *Influence: The Psychology of Persuasion (New and Expanded).* Harper Business.
- Seven principles of persuasion
- Empirical foundation for influence research
**Meincke, L., Shapiro, D., Duckworth, A. L., Mollick, E., Mollick, L., & Cialdini, R. (2025).** Call Me A Jerk: Persuading AI to Comply with Objectionable Requests. University of Pennsylvania.
- Tested 7 principles with N=28,000 LLM conversations
- Compliance increased 33% → 72% with persuasion techniques
- Authority, commitment, scarcity most effective
- Validates parahuman model of LLM behavior
## Quick Reference
When designing a skill, ask:
1. **What type is it?** (Discipline vs. guidance vs. reference)
2. **What behavior am I trying to change?**
3. **Which principle(s) apply?** (Usually authority + commitment for discipline)
4. **Am I combining too many?** (Don't use all seven)
5. **Is this ethical?** (Serves user's genuine interests?)