Initial commit

Zhongwei Li
2025-11-30 09:01:45 +08:00
commit befff05008
38 changed files with 9964 additions and 0 deletions

commands/adr.md (new file, 118 lines)

---
description: >
Create Architecture Decision Records (ADR) in MADR format with Skills integration.
Records architecture decisions with context and rationale. Auto-numbering (0001, 0002, ...), saves to docs/adr/.
allowed-tools: Read, Write, Bash(ls:*), Bash(find:*), Bash(cat:*), Grep, Glob
model: inherit
argument-hint: "[decision title]"
---
# /adr - Architecture Decision Record Creator
## Purpose
Creates high-quality Architecture Decision Records using the ADR Creator Skill.
**Detailed process**: [@~/.claude/skills/adr-creator/SKILL.md]
## Usage
```bash
/adr "Decision title"
```
**Examples:**
```bash
/adr "Adopt TypeScript strict mode"
/adr "Use Auth.js for authentication"
/adr "Introduce Turborepo for monorepo"
```
## Execution Flow (6 Phases)
```text
Phase 1: Pre-Check
├─ Duplicate check, naming rules, ADR number assignment
Phase 2: Template Selection
├─ 1. Tech Selection / 2. Architecture Pattern / 3. Process Change / 4. Default
Phase 3: Information Collection
├─ Context, Options, Decision Outcome, Consequences
Phase 4: ADR Generation
├─ Generate in MADR format
Phase 5: Validation
├─ Required sections, format, quality check
Phase 6: Index Update
└─ Auto-update docs/adr/README.md
```
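The auto-numbering in Phase 1 can be sketched in shell. This is a minimal, illustrative version: the `docs/adr/NNNN-title.md` layout is assumed from the Output section, and `dir` stands in for the project's `docs/adr/` directory.

```shell
# Minimal sketch of Phase 1 auto-numbering (assumes docs/adr/NNNN-title.md layout)
dir=$(mktemp -d)/docs/adr            # stand-in for the project's docs/adr/
mkdir -p "$dir"
touch "$dir/0001-initial-tech.md" "$dir/0002-adopt-react.md"
# Highest existing four-digit prefix, or 0 when the directory is empty
last=$(ls "$dir" | grep -o '^[0-9]\{4\}' | sort -n | tail -1)
# Force base 10 so zero-padded prefixes are not parsed as octal
next=$(printf "%04d" $((10#${last:-0} + 1)))
echo "Next ADR number: $next"        # Next ADR number: 0003
```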
## Output
```text
docs/adr/
├── README.md (auto-updated)
├── 0001-initial-tech.md
├── 0002-adopt-react.md
└── 0023-your-new-adr.md (newly created)
```
## Configuration
Customizable via environment variables:
```bash
ADR_DIRECTORY="docs/adr" # ADR storage location
ADR_DUPLICATE_THRESHOLD="0.7" # Duplicate detection threshold
ADR_AUTO_VALIDATE="true" # Auto-validation
ADR_AUTO_INDEX="true" # Auto-index update
```
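A minimal sketch of how a script might consume these variables, falling back to the documented defaults when they are unset:

```shell
# Apply documented defaults when the environment variables are unset
ADR_DIRECTORY="${ADR_DIRECTORY:-docs/adr}"
ADR_DUPLICATE_THRESHOLD="${ADR_DUPLICATE_THRESHOLD:-0.7}"
ADR_AUTO_VALIDATE="${ADR_AUTO_VALIDATE:-true}"
ADR_AUTO_INDEX="${ADR_AUTO_INDEX:-true}"
echo "$ADR_DIRECTORY $ADR_DUPLICATE_THRESHOLD"
```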
## Best Practices
### Title Guidelines
```text
✅ Good: "Adopt Zustand for State Management"
✅ Good: "Migrate to PostgreSQL for User Data"
❌ Bad: "State Management" (too abstract)
❌ Bad: "Fix bug" (not ADR scope)
```
### Status Management
- `proposed` → Under consideration
- `accepted` → Approved
- `deprecated` → No longer recommended
- `superseded` → Replaced by another ADR
## Related Commands
- `/adr:rule <number>` - Generate project rule from ADR
- `/research` - Technical investigation before ADR creation
- `/think` - Planning before major decisions
## Error Handling
### Skill Not Found
```text
⚠️ ADR Creator Skill not found
Continuing in normal mode (interactive)
```
### Pre-Check Failure
```text
❌ Issues detected in Pre-Check
Actions: Change title / Review similar ADR / Consider consolidation
```
## References
- [ADR Creator Skill](~/.claude/skills/adr-creator/SKILL.md) - Detailed documentation
- [MADR Official Site](https://adr.github.io/madr/)

commands/adr/rule.md (new file, 600 lines)

---
description: >
Generate project rules from ADR automatically and integrate with CLAUDE.md. Converts decision into AI-executable format.
Saves to docs/rules/, auto-integrates with .claude/CLAUDE.md. Enables AI to follow project-specific decisions.
Use when ADR decision should affect AI behavior and enforce project-specific patterns.
allowed-tools: Read, Write, Edit, Bash(ls:*), Bash(cat:*), Grep, Glob
model: inherit
argument-hint: "[ADR number]"
---
# /adr:rule - Generate Rule from ADR
## Purpose
Analyze the specified ADR (Architecture Decision Record) and automatically convert it into AI-executable rule format. Generated rules are automatically appended to the project's `.claude/CLAUDE.md` and will be reflected in subsequent AI operations.
## Usage
```bash
/adr:rule <ADR-number>
```
**Examples:**
```bash
/adr:rule 0001
/adr:rule 12
/adr:rule 0003
```
## Execution Flow
### 1. Read ADR File
```bash
# Zero-pad the ADR number (force base 10 so inputs like "0008" are not parsed as octal)
ADR_NUM=$(printf "%04d" "$((10#$1))")
# Find the ADR file
ADR_FILE=$(ls docs/adr/${ADR_NUM}-*.md 2>/dev/null | head -1)
# Abort if no matching file exists
if [ -z "$ADR_FILE" ]; then
  echo "❌ Error: ADR-${ADR_NUM} not found"
  exit 1
fi
```
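Zero-padding deserves care: arithmetic on a zero-prefixed argument like `0008` is parsed as octal and rejected, so forcing base 10 is safer. A minimal demonstration (the `pad` helper is illustrative, not part of the command):

```shell
# Force base-10 interpretation before padding to four digits
pad() { printf "%04d" "$((10#$1))"; }
pad 12; echo      # 0012
pad 0008; echo    # 0008  (a plain $((0008)) would fail as an invalid octal constant)
```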
### 2. Parse ADR Content
Use Read tool to load ADR file and extract the following sections:
- **Title**: Basis for rule name
- **Decision Outcome**: Core of the rule
- **Rationale**: Why this rule is needed
- **Consequences (Positive/Negative)**: Considerations when applying rule
**Parsing example:**
```markdown
# Input ADR (0001-typescript-strict-mode.md)
Title: Adopt TypeScript strict mode
Decision: Enable TypeScript strict mode
Rationale: Improve type safety, early bug detection
↓ Analysis
Rule Name: TYPESCRIPT_STRICT_MODE
Priority: P2 (Development Rule)
Application: When writing TypeScript code
Instructions: Always write in strict mode, avoid any
```
### 3. Generate Rule Filename
```bash
# Convert title to UPPER_SNAKE_CASE
# Example: "Adopt TypeScript strict mode" → "ADOPT_TYPESCRIPT_STRICT_MODE"
# (trim leading verbs such as "ADOPT_" by hand if a shorter rule name is preferred)
RULE_NAME=$(echo "$TITLE" | \
  tr '[:lower:]' '[:upper:]' | \
  sed 's/ /_/g' | \
  sed 's/[^A-Z0-9_]//g' | \
  sed 's/__*/_/g')
RULE_FILE="docs/rules/${RULE_NAME}.md"
```
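The same pipeline applied end to end to another example title (the input is hypothetical; note that punctuation such as `.` is stripped):

```shell
TITLE="Use Auth.js for authentication"
# Uppercase, spaces to underscores, strip other characters, collapse repeats
RULE_NAME=$(echo "$TITLE" | tr '[:lower:]' '[:upper:]' | sed 's/ /_/g' | sed 's/[^A-Z0-9_]//g' | sed 's/__*/_/g')
echo "$RULE_NAME"   # USE_AUTHJS_FOR_AUTHENTICATION
```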
### 4. Generate Rule File
````markdown
# [Rule Name]
Priority: P2
Source: ADR-[number]
Created: [YYYY-MM-DD]
## Application Conditions
[When to apply this rule - derived from ADR "Decision Outcome"]
## Execution Instructions
[Specific instructions for AI - generated from ADR "Decision Outcome" and "Rationale"]
### Requirements
- [Must do item 1]
- [Must do item 2]
### Prohibitions
- [Must NOT do item 1]
- [Must NOT do item 2]
## Examples
### ✅ Good Example
```[language]
[Code example following ADR decision]
```
### ❌ Bad Example
```[language]
[Pattern to avoid]
```
## Background
[Quoted from ADR "Context and Problem Statement"]
## Expected Benefits
[Quoted from ADR "Positive Consequences"]
## Caveats
[Quoted from ADR "Negative Consequences"]
## References
- ADR: [relative path]
- Created: [YYYY-MM-DD]
- Last Updated: [YYYY-MM-DD]
---
*This rule was automatically generated from ADR-[number]*
````
### 5. Integrate with CLAUDE.md
Automatically append reference to project's `.claude/CLAUDE.md`:
```bash
# Check if .claude/CLAUDE.md exists
if [ ! -f ".claude/CLAUDE.md" ]; then
# Create if doesn't exist
mkdir -p .claude
cat > .claude/CLAUDE.md << 'EOF'
# CLAUDE.md
## Project Rules
Project-specific rules.
EOF
fi
# Check existing references (avoid duplicates)
if ! grep -q "docs/rules/${RULE_NAME}.md" .claude/CLAUDE.md; then
# Append after "## Project Rules" section
# Or create new section
fi
```
**Append format:**
```markdown
## Project Rules
Generated from ADR:
- **[Rule Name]**: [@docs/rules/[RULE_NAME].md](docs/rules/[RULE_NAME].md) (ADR-[number])
```
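The duplicate-guarded append can be sketched as follows. This is a simplified, illustrative version: `RULE_NAME` and `ADR_NUM` are hard-coded examples, it runs in a throwaway directory, and it appends at the end of the file rather than directly under the heading.

```shell
# Idempotent append sketch (hypothetical RULE_NAME/ADR_NUM; throwaway directory)
cd "$(mktemp -d)"
RULE_NAME="TYPESCRIPT_STRICT_MODE"
ADR_NUM="0001"
mkdir -p .claude
touch .claude/CLAUDE.md
for i in 1 2; do   # run twice to show the duplicate guard
  if ! grep -q "docs/rules/${RULE_NAME}.md" .claude/CLAUDE.md; then
    # Create the section header if it is missing
    grep -q '^## Project Rules' .claude/CLAUDE.md || \
      printf '## Project Rules\n\nGenerated from ADR:\n' >> .claude/CLAUDE.md
    printf -- '- **%s**: [@docs/rules/%s.md](docs/rules/%s.md) (ADR-%s)\n' \
      "$RULE_NAME" "$RULE_NAME" "$RULE_NAME" "$ADR_NUM" >> .claude/CLAUDE.md
  fi
done
count=$(grep -c "docs/rules/${RULE_NAME}.md" .claude/CLAUDE.md)
echo "$count"   # 1 — the second pass was skipped by the guard
```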
### 6. Completion Message
```text
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Rule Generated
📄 ADR: docs/adr/0001-typescript-strict-mode.md
📋 Rule: docs/rules/TYPESCRIPT_STRICT_MODE.md
🔗 Integrated: .claude/CLAUDE.md (updated)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Generated Rule
**Rule Name:** TYPESCRIPT_STRICT_MODE
**Priority:** P2
**Application:** When writing TypeScript code
### Execution Instructions
- Always enable TypeScript strict mode
- Avoid using any, use proper type definitions
- Prioritize type-safe implementation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This rule will be automatically applied from next AI execution.
```
## Rule Generation Logic
### Automatic Priority Determination
Determine priority automatically from ADR content:
| Condition | Priority | Example |
|-----------|----------|---------|
| Security-related | P0 | Authentication, Authorization |
| Language/Framework settings | P1 | TypeScript strict, Linter config |
| Development process | P2 | Commit conventions, Code review |
| Recommendations | P3 | Performance optimization |
**Determination logic:**
```javascript
// Normalize once so matching is case-insensitive
const t = title.toLowerCase();
if (t.includes('security') || t.includes('auth')) {
  priority = 'P0';
} else if (t.includes('typescript') || t.includes('config')) {
  priority = 'P1';
} else if (t.includes('process') || t.includes('convention')) {
  priority = 'P2';
} else {
  priority = 'P3';
}
```
### Generate Execution Instructions
Extract specific execution instructions from ADR "Decision Outcome" and "Rationale":
**Conversion examples:**
| ADR Content | Generated Execution Instruction |
|-------------|----------------------------------|
| "Enable TypeScript strict mode" | "Always write code in strict mode" |
| "Use Auth.js for authentication" | "Use Auth.js library for authentication, avoid custom implementations" |
| "Monorepo structure" | "Clarify dependencies between packages, place common code in shared package" |
### Extract Requirements and Prohibitions
Derived from ADR "Rationale" and "Consequences":
**Requirements (Must do):**
- Positive parts of decision content
- Specific actions to implement
**Prohibitions (Must NOT):**
- Anti-patterns to avoid
- Actions leading to negative consequences
### Generate Code Examples
Automatically generate good and bad examples based on ADR technical content:
````markdown
// TypeScript strict mode example
### ✅ Good Example
```typescript
// Clear type definition
interface User {
id: number;
name: string;
}
function getUser(id: number): User {
// implementation
}
```
### ❌ Bad Example
```typescript
// Using any
function getUser(id: any): any {
// implementation
}
```
````
## Error Handling
### 1. ADR Not Found
```text
❌ Error: ADR-0001 not found
Check docs/adr/ directory:
ls docs/adr/
Available ADRs:
- 0002-use-react-query.md
- 0003-monorepo-structure.md
Specify correct number:
/adr:rule 0002
```
### 2. Invalid Number Format
```text
❌ Error: Invalid ADR number "abc"
ADR number must be numeric:
Correct examples:
/adr:rule 1
/adr:rule 0001
/adr:rule 12
Wrong examples:
/adr:rule abc
/adr:rule one
```
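The validation implied by this error message can be sketched with a `case` pattern; the `validate_adr_num` helper is illustrative:

```shell
# Reject empty input or any input containing a non-digit; otherwise zero-pad
validate_adr_num() {
  case "$1" in
    ''|*[!0-9]*) echo "invalid" ;;
    *) printf "%04d\n" "$((10#$1))" ;;
  esac
}
validate_adr_num abc    # invalid
validate_adr_num 12     # 0012
```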
### 3. Failed to Create docs/rules/ Directory
```text
❌ Error: Cannot create docs/rules/ directory
Check permissions:
ls -la docs/
Or create manually:
mkdir -p docs/rules
chmod +w docs/rules
```
### 4. CLAUDE.md Not Found
```text
⚠️ Warning: .claude/CLAUDE.md not found
Create new file? (Y/n)
> Y
✅ Created .claude/CLAUDE.md
✅ Added rule reference
```
### 5. Rule File Already Exists
```text
⚠️ Warning: docs/rules/TYPESCRIPT_STRICT_MODE.md already exists
Overwrite? (y/N)
> n
❌ Cancelled
To review existing rule:
cat docs/rules/TYPESCRIPT_STRICT_MODE.md
```
## CLAUDE.md Integration Patterns
### For New Projects
```markdown
# CLAUDE.md
## Project Rules
Project-specific rules.
### Architecture Decisions
Generated from ADR:
- **TYPESCRIPT_STRICT_MODE**: [@docs/rules/TYPESCRIPT_STRICT_MODE.md] (ADR-0001)
```
### For Existing CLAUDE.md
```markdown
# CLAUDE.md
## Existing Section
[Existing content]
## Project Rules
[Existing rules]
### Architecture Decisions
Generated from ADR:
- **TYPESCRIPT_STRICT_MODE**: [@docs/rules/TYPESCRIPT_STRICT_MODE.md] (ADR-0001)
- **REACT_QUERY_USAGE**: [@docs/rules/REACT_QUERY_USAGE.md] (ADR-0002)
```
**Duplicate check:**
```bash
# Check if same rule is already added
if grep -q "docs/rules/${RULE_NAME}.md" .claude/CLAUDE.md; then
echo "⚠️ This rule is already added"
exit 0
fi
```
## Usage Examples
### Example 1: Convert TypeScript Config to Rule
```bash
# Step 1: Create ADR
/adr "Adopt TypeScript strict mode"
# Step 2: Generate rule
/adr:rule 0001
```
**Generated rule (`docs/rules/TYPESCRIPT_STRICT_MODE.md`):**
````markdown
# TYPESCRIPT_STRICT_MODE
Priority: P1
Source: ADR-0001
Created: 2025-10-01
## Application Conditions
Apply to all files when writing TypeScript code
## Execution Instructions
### Requirements
- Set `strict: true` in `tsconfig.json`
- Avoid using `any` type, use proper type definitions
- Leverage type inference, explicit type annotations only where necessary
- Clearly distinguish between `null` and `undefined`
### Prohibitions
- Easy use of `any` type
- Overuse of `@ts-ignore` comments
- Excessive use of type assertions (`as`)
## Examples
### ✅ Good Example
```typescript
interface User {
id: number;
name: string;
email: string | null;
}
function getUser(id: number): User | undefined {
// implementation
}
```
### ❌ Bad Example
```typescript
function getUser(id: any): any {
// @ts-ignore
return data;
}
```
## Background
Aim to improve type safety and early bug detection.
The codebase is becoming complex and needs protection through types.
## Expected Benefits
- Compile-time error detection
- Improved IDE completion
- Safer refactoring
## Caveats
- Migrating existing code takes time (apply gradually)
- Higher learning curve for beginners
## References
- ADR: docs/adr/0001-typescript-strict-mode.md
- Created: 2025-10-01
- Last Updated: 2025-10-01
---
*This rule was automatically generated from ADR-0001*
````
### Example 2: Convert Authentication Rule
```bash
/adr "Use Auth.js for authentication"
/adr:rule 0002
```
### Example 3: Convert Architecture Rule
```bash
/adr "Introduce Turborepo for monorepo"
/adr:rule 0003
```
## Best Practices
### 1. Generate Rules Immediately After ADR Creation
```bash
# Don't forget to convert decision to rule immediately
/adr "New decision"
/adr:rule [number] # Execute without forgetting
```
### 2. Regular Rule Reviews
```bash
# Periodically check rules
ls -la docs/rules/
# Review old rules
cat docs/rules/*.md
```
### 3. Team Sharing
```bash
# Include rule files in git management
git add docs/rules/*.md .claude/CLAUDE.md
git commit -m "docs: add architecture decision rules"
```
### 4. Rule Updates
When ADR is updated, regenerate the rule:
```bash
# Update ADR
vim docs/adr/0001-typescript-strict-mode.md
# Regenerate rule
/adr:rule 0001 # Confirm overwrite
```
## Related Commands
- `/adr [title]` - Create ADR
- `/research` - Technical research
- `/review` - Review rule application
## Tips
1. **Convert Immediately**: Execute right after ADR creation so decisions aren't forgotten
2. **Verify Priority**: After generation, check rule file to verify appropriate priority
3. **Confirm CLAUDE.md**: Test AI behavior after integration to verify reflection
4. **Team Agreement**: Review with team before converting to rule
## FAQ
**Q: Is rule generation completely automatic?**
A: Yes, it automatically generates rules from ADR and integrates with CLAUDE.md.
**Q: Can generated rules be edited?**
A: Yes, you can directly edit files in `docs/rules/`.
**Q: What if I want to delete a rule?**
A: Delete the rule file and manually remove reference from CLAUDE.md.
**Q: Can I create one rule from multiple ADRs?**
A: Currently the mapping is 1:1. To combine multiple ADRs, create the rule manually.
**Q: What if AI doesn't recognize the rule?**
A: Check that reference path in `.claude/CLAUDE.md` is correct. Relative paths are important.

commands/auto-test.md (new file, 147 lines)

---
description: >
Automatically execute tests after file modifications and invoke /fix command if tests fail using SlashCommand tool.
Streamlines test-fix cycle with automatic conditional execution. Can be triggered via hooks in settings.json.
Use after code changes to verify functionality and automatically attempt fixes on failure.
allowed-tools: SlashCommand, Bash(npm test:*), Bash(yarn test:*), Bash(pnpm test:*)
model: inherit
---
# /auto-test - Automatic Test Runner with SlashCommand Integration
## Purpose
Run tests after file modifications and, when failures are detected, explicitly invoke the `/fix` command via the SlashCommand tool.
## Workflow Instructions
Follow this sequence when invoked:
### Step 1: Execute Tests
Run the project's test command:
```bash
# Auto-detect the test runner; pick the package manager from the lockfile
# (a plain `npm test || yarn test` would fall through on test *failures*, not absence)
if [ -f "package.json" ] && grep -q '"test":' package.json; then
  if [ -f "pnpm-lock.yaml" ]; then pnpm test
  elif [ -f "yarn.lock" ]; then yarn test
  else npm test
  fi
elif [ -f "pubspec.yaml" ]; then
  flutter test
elif [ -f "Makefile" ] && grep -q "^test:" Makefile; then
  make test
else
  echo "No test command found"
  exit 1
fi
```
### Step 2: Analyze Test Results
After test execution:
- Parse the output for test failures
- Count failed tests
- Extract error messages and stack traces
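Counting failures from the runner output might look like this; the summary format is an assumption based on Jest-style output, so other runners need different patterns:

```shell
# Hypothetical Jest-style summary line captured from test output
SUMMARY="Tests:       3 failed, 12 passed, 15 total"
# Pull out the number preceding "failed"; default to 0 if absent
FAILED=$(echo "$SUMMARY" | grep -o '[0-9][0-9]* failed' | grep -o '[0-9][0-9]*')
FAILED=${FAILED:-0}
echo "$FAILED"   # 3
```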
### Step 3: Invoke /fix if Tests Fail
**IMPORTANT**: If any tests fail, you MUST use the SlashCommand tool to invoke `/fix`:
1. Prepare context for /fix:
- Failed test names
- Error messages
- Relevant file paths
2. **Use SlashCommand tool with this exact format**:
```markdown
Use the SlashCommand tool to execute: /fix
Context to pass to /fix:
- Failed tests: [list test names]
- Error messages: [specific error details]
- Affected files: [file paths from stack traces]
```
3. Wait for /fix command to complete
4. Re-run tests to verify fixes
## Example Execution
```markdown
User: /auto-test
Claude: Running tests...
[Executes: npm test]
Result: 3 tests failed out of 15 total
Claude: Tests failed. Using SlashCommand tool to invoke /fix...
[Uses SlashCommand tool to call: /fix]
Context passed to /fix:
- Failed tests: auth.test.ts::login, auth.test.ts::logout, user.test.ts::profile
- Error messages:
- Expected 200, got 401 in auth.test.ts:42
- Undefined user object in user.test.ts:28
- Affected files: src/auth.ts, src/user.ts
[/fix command executes and applies fixes]
Claude: Re-running tests...
[Executes: npm test]
Result: All 15 tests passed ✓
```
## Requirements for SlashCommand Tool
- `/fix` command must be available in `.claude/commands/`
- `/fix` must have proper `allowed-tools` configured
- This command requires `SlashCommand` to be in the allow list in settings.json permissions
## Usage Patterns
```bash
# Manual execution
/auto-test
# Automatic trigger after file modifications (via hooks)
# Explicitly enable by configuring settings.json
```
## Hook Integration Configuration
Explicitly enable automatic execution by adding to settings.json:
```json
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit|MultiEdit",
"hooks": [
{
"type": "command",
"command": "claude --command '/auto-test'"
}
]
}
]
}
}
```
## Key Benefits
- 🚀 **Complete Automation**: No manual test runs after file changes
- 🔄 **Continuous Execution**: Automatically attempts fixes when tests fail
- 📊 **Faster Feedback**: Shortens the edit-test-fix cycle
## Critical Notes
- Strictly requires SlashCommand tool availability
- Test commands are intelligently auto-detected based on environment
- `/fix` command is explicitly invoked when corrections are necessary

commands/branch.md (new file, 155 lines)

---
description: >
Analyze current Git changes and suggest appropriate branch names following conventional patterns (feature/fix/chore/docs).
Uses branch-generator agent to analyze diff and file patterns. Provides 3-5 naming suggestions with rationale.
Use before creating a new branch when you need help with naming conventions.
allowed-tools: Task
model: inherit
---
# /branch - Git Branch Name Generator
Analyze current Git changes and suggest appropriate branch names following conventional patterns.
**Implementation**: This command delegates to the specialized `branch-generator` subagent for optimal performance and context efficiency.
## How It Works
When invoked, this command:
1. Launches the `branch-generator` subagent via Task tool
2. Subagent analyzes git diff and status (no codebase context needed)
3. Generates conventional branch names
4. Returns multiple naming alternatives
## Usage
### Basic Usage
```bash
/branch
```
Analyzes current changes and suggests branch names.
### With Context
```bash
/branch "Adding user authentication with OAuth"
```
Incorporates description into suggestions.
### With Ticket
```bash
/branch "PROJ-456"
```
Includes ticket number in branch name.
## Branch Naming Conventions
### Type Prefixes
| Prefix | Use Case |
|--------|----------|
| `feature/` | New functionality |
| `fix/` | Bug fixes |
| `hotfix/` | Emergency fixes |
| `refactor/` | Code improvements |
| `docs/` | Documentation |
| `test/` | Test additions/fixes |
| `chore/` | Maintenance tasks |
| `perf/` | Performance improvements |
| `style/` | Formatting/styling |
### Format
```text
<type>/<scope>-<description>
<type>/<ticket>-<description>
<type>/<description>
```
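Assembling a name in this format can be sketched as follows (the inputs are hypothetical; the real command derives them from the git diff):

```shell
type="feature"; scope="auth"; desc="Add OAuth support"
# Lowercase the description and turn spaces into hyphens
slug=$(echo "$desc" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
branch="${type}/${scope}-${slug}"
echo "$branch"   # feature/auth-add-oauth-support
```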
### Good Examples
```bash
✅ feature/auth-add-oauth-support
✅ fix/api-resolve-timeout-issue
✅ docs/readme-update-install-steps
✅ feature/PROJ-123-user-search
```
### Bad Examples
```bash
❌ new-feature (no type prefix)
❌ feature/ADD_USER (uppercase)
❌ fix/bug (too vague)
❌ update_code (wrong separator)
```
## Output Format
The command provides:
- **Current status**: Current branch, files changed
- **Analysis**: Change type, primary scope, key changes
- **Recommended name**: Most appropriate based on analysis
- **Alternatives**: With scope, descriptive, concise versions
- **Usage instructions**: How to create or rename the branch
## Integration with Workflow
Works seamlessly with:
- `/commit` - Create branch first, then commit
- `/pr` - Generate PR description after branching
- `/think` - Planning before branching
## Technical Details
### Subagent Benefits
- **90% context reduction**: Only git operations, no codebase loading
- **2-3x faster execution**: Lightweight agent optimized for git analysis
- **Specialized logic**: Dedicated to branch name generation
- **Parallel execution**: Can run concurrently with other operations
### Git Operations Used
The subagent only executes git commands:
- `git branch --show-current` - Check current branch
- `git status` - Check file status
- `git diff` - Analyze changes
- No file system access or code parsing
## Related Commands
- `/commit` - Generate commit messages
- `/pr` - Create PR descriptions
- `/research` - Investigation before branching
## Best Practices
1. **Create branch early**: Before making changes
2. **Clear naming**: Be specific about changes
3. **Follow conventions**: Stick to project patterns
4. **Include tickets**: Link to issues when applicable
5. **Keep concise**: 50 characters or less
## Context Efficiency
This command is optimized for minimal context usage:
- ✅ No codebase files loaded
- ✅ Only git metadata analyzed
- ✅ Fast execution (<5 seconds)
- ✅ Can run in parallel with other tasks
---
**Note**: For implementation details, see `.claude/agents/git/branch-generator.md`

commands/code.md (new file, 727 lines)

---
description: >
Implement code following TDD/RGRC cycle (Red-Green-Refactor-Commit) with real-time test feedback and quality checks.
Use for feature implementation, refactoring, or bug fixes when you have clear understanding (≥70%) of requirements.
Applies SOLID principles, DRY, and progressive enhancement. Includes dynamic quality discovery and confidence scoring.
allowed-tools: Bash(npm run), Bash(npm run:*), Bash(yarn run), Bash(yarn run:*), Bash(yarn:*), Bash(pnpm run), Bash(pnpm run:*), Bash(pnpm:*), Bash(bun run), Bash(bun run:*), Bash(bun:*), Bash(make:*), Bash(git status:*), Bash(git log:*), Bash(ls:*), Bash(cat:*), Edit, MultiEdit, Write, Read, Glob, Grep, LS, Task
model: inherit
argument-hint: "[implementation description]"
---
# /code - Advanced Implementation with Dynamic Quality Assurance
## Purpose
Perform code implementation with real-time test feedback, dynamic quality discovery, and confidence-scored decisions.
## Usage Modes
- **Standalone**: Implement specific features or bug fixes
- **Workflow**: Code based on `/research` results, then proceed to `/test`
## Prerequisites (Workflow Mode)
- SOW created in `/think`
- Technical research completed in `/research`
- For standalone use, implementation details must be clear
## Dynamic Project Context
### Current Git Status
```bash
!`git status --porcelain`
```
### Package.json Check
```bash
!`ls package.json`
```
### NPM Scripts Available
```bash
!`npm run || yarn run || pnpm run || bun run`
```
### Config Files
```bash
!`ls *.json`
```
### Recent Commits
```bash
!`git log --oneline -5`
```
## Specification Context (Auto-Detection)
### Discover Latest Spec
Search for spec.md in SOW workspace:
```bash
!`find .claude/workspace/sow ~/.claude/workspace/sow -name "spec.md" -type f 2>/dev/null | sort -r | head -1`
```
### Load Specification for Implementation
**If spec.md exists**, use it as implementation guide:
- **Functional Requirements (FR-xxx)**: Define what to implement
- **API Specifications**: Provide exact request/response structures
- **Data Models**: Show expected data structures and validation rules
- **UI Specifications**: Define layout, validation, and interactions
- **Test Scenarios**: Guide test case creation with Given-When-Then
- **Implementation Checklist**: Track implementation progress
**If spec.md does not exist**:
- Proceed with implementation based on available requirements
- Consider running `/think` first to generate specification
- Document assumptions and design decisions inline
This ensures implementation aligns with specification from the start.
## Integration with Skills
This command references the following Skills for implementation guidance:
- [@~/.claude/skills/tdd-test-generation/SKILL.md] - TDD/RGRC cycle, Baby Steps, systematic test design
- [@~/.claude/skills/frontend-patterns/SKILL.md] - Frontend component design patterns (Container/Presentational, Hooks, State Management, Composition)
- [@~/.claude/skills/code-principles/SKILL.md] - Fundamental software development principles (SOLID, DRY, Occam's Razor, Miller's Law, YAGNI)
## Implementation Principles
### Applied Development Rules
- [@~/.claude/skills/tdd-test-generation/SKILL.md] - Test-Driven Development with Baby Steps (primary)
- [@~/.claude/skills/code-principles/SKILL.md] - Fundamental software principles (SOLID, DRY, Occam's Razor, YAGNI)
- [@~/.claude/skills/frontend-patterns/SKILL.md] - Frontend component design patterns
- [@~/.claude/rules/development/PROGRESSIVE_ENHANCEMENT.md](~/.claude/rules/development/PROGRESSIVE_ENHANCEMENT.md) - CSS-first approach for UI
- [@~/.claude/rules/development/READABLE_CODE.md](~/.claude/rules/development/READABLE_CODE.md) - Code readability and clarity
### Principle Hierarchy
**TDD/RGRC is the primary implementation cycle**. With test-generator enhancement:
- **Phase 0 (Preparation)**: test-generator creates test scaffold from spec.md
- **Red & Green phases**: Focus on functionality only
- **Refactor phase**: Apply SOLID and DRY principles
- **Commit phase**: Save stable state
This ensures:
1. Tests align with specification (Phase 0)
2. Code first works (TDD Red-Green)
3. Code becomes clean and maintainable (Refactor with SOLID/DRY)
### 0. Test Preparation (Phase 0 - NEW)
**Purpose**: Generate initial test cases from specification before starting TDD cycle.
**When to use**: When spec.md exists and contains test scenarios.
Use test-generator to extract and generate test code from specification:
```typescript
Task({
subagent_type: "test-generator",
description: "Generate tests from specification",
prompt: `
Feature: "${featureDescription}"
Spec: ${specContent}
Generate test code:
1. FR-xxx requirements → test cases [✓]
2. Given-When-Then scenarios → executable tests [✓]
3. Baby steps order: simple → complex [→]
4. Edge cases and error handling [→]
Output: Test file using project framework (detect from package.json).
Mark: [✓] from spec, [→] inferred, [?] unclear.
`
})
```
**Benefits**:
- Automatic test scaffold from specification
- Baby steps ordering built-in
- Consistent with spec.md requirements
- Faster TDD cycle start
**Integration with TDD**:
```text
Phase 0: test-generator creates test scaffold
Phase 1 (Red): Run generated tests (they fail)
Phase 2 (Green): Implement to pass tests
...
```
### 1. Test-Driven Development (TDD) as t_wada would
**Goal**: "Clean code that works" (動作するきれいなコード) - Ron Jeffries
#### Baby Steps - The Foundation of TDD
**Core Principle**: Make the smallest possible change at each step
##### Why Baby Steps Matter
- **Immediate error localization**: When test fails, the cause is in the last tiny change
- **Continuous working state**: Code is always seconds away from green
- **Rapid feedback**: Each step takes 1-2 minutes max
- **Confidence building**: Small successes compound into major features
##### Baby Steps in Practice
```typescript
// ❌ Big Step - Multiple changes at once
function calculateTotal(items, tax, discount) {
const subtotal = items.reduce((sum, item) => sum + item.price, 0);
const afterTax = subtotal * (1 + tax);
const afterDiscount = afterTax * (1 - discount);
return afterDiscount;
}
// ✅ Baby Steps - One change at a time
// Step 1: Return zero (make test pass minimally)
function calculateTotal(items) {
return 0;
}
// Step 2: Basic sum (next test drives this)
function calculateTotal(items) {
return items.reduce((sum, item) => sum + item.price, 0);
}
// Step 3: Add tax support (only when test requires it)
// ... continue in tiny increments
```
##### Baby Steps Rhythm
1. **Write smallest failing test** (30 seconds)
2. **Make it pass with minimal code** (1 minute)
3. **Run tests** (10 seconds)
4. **Tiny refactor if needed** (30 seconds)
5. **Commit if green** (20 seconds)
Total cycle: ~2 minutes
#### Test Generation from Plan (Pre-Red)
Before entering the RGRC cycle, automatically generate tests from SOW:
```bash
# 1. Check for SOW with test plan
.claude/workspace/sow/[feature-name]/sow.md
# 2. If test plan exists, invoke test-generator
Task(
subagent_type="test-generator",
description="Generate tests from SOW",
prompt="Generate tests from SOW test plan"
)
# 3. Verify generated tests
npm test -- --listTests | grep -E "\.test\.|\.spec\."
```
**When to generate:**
- SOW contains "Test Plan" section
- No existing tests for planned features
- User requests test generation
**Skip conditions:**
- Tests already exist
- No SOW test plan defined
- Quick fix mode
#### Enhanced RGRC Cycle with Real-time Feedback
1. **Red Phase** (Confidence Target: 0.9)
```bash
npm test -- --testNamePattern="[current test]" | grep -E "FAIL|PASS"
```
- Write failing test with clear intent (or use generated tests)
- Verify failure reason matches expectation
- Document understanding via test assertions
- **Exit Criteria**: Test fails for expected reason
2. **Green Phase** (Confidence Target: 0.7)
```bash
npm test -- --watch --testNamePattern="[current test]"
```
- Minimal implementation to pass
- Quick solutions acceptable
- Focus on functionality over form
- **Exit Criteria**: Test passes consistently
3. **Refactor Phase** (Confidence Target: 0.95)
```bash
npm test | tail -5 | grep -E "Passing|Failing"
```
- Apply SOLID principles
- Remove duplication (DRY)
- Improve naming and structure
- Extract abstractions
- **Exit Criteria**: All tests green, code clean
4. **Commit Phase** (Confidence Target: 1.0)
- Quality checks pass
- Coverage maintained/improved
- Ready for stable commit
- User executes git commands
#### Advanced TodoWrite Integration
Real-time tracking with confidence scoring:
```markdown
# Implementation: [Feature Name]
## Scenarios (Total Confidence: 0.85)
1. ⏳ User registration with valid email [C: 0.9]
2. ⏳ Registration fails with invalid email [C: 0.8]
3. ⏳ Duplicate email prevention [C: 0.85]
## Current RGRC Cycle - Scenario 1
### Red Phase (Started: 14:23)
1.1 ✅ Write failing test [C: 0.95] ✓ 2 min
1.2 ✅ Verify correct failure [C: 0.9] ✓ 30 sec
### Green Phase (Active: 14:26)
1.3 ❌ Implement registration logic [C: 0.7] ⏱️ 3 min
1.4 ⏳ Test passes consistently [C: pending]
### Refactor Phase (Pending)
1.5 ⏳ Apply SOLID principles [C: pending]
1.6 ⏳ Extract validation logic [C: pending]
### Quality Gates
- 🧪 Tests: 12/14 passing
- 📊 Coverage: 78% (target: 80%)
- 🔍 Lint: 2 warnings
- 🔷 Types: All passing
```
## Progress Display
### RGRC Cycle Progress Visualization
Display TDD cycle progress with real-time updates:
```markdown
📋 Implementation Task: User Authentication Feature
🔴 Red → 🟢 Green → 🔵 Refactor → ✅ Commit
Current Cycle: Scenario 2/5
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔴 Red Phase [████████████] Complete
🟢 Green Phase [████████░░░░] 70%
🔵 Refactor [░░░░░░░░░░░░] Waiting
✅ Commit [░░░░░░░░░░░░] Waiting
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Current: Minimal implementation until test passes...
Elapsed: 8 min | Remaining scenarios: 3
```
### Quality Check Progress
Parallel quality checks during implementation:
```markdown
Quality checks in progress:
├─ 🧪 Tests [████████████] ✅ 45/45 passing
├─ 📊 Coverage [████████░░░░] ⚠️ 78% (Target: 80%)
├─ 🔍 Lint [████████████] ✅ 0 errors, 2 warnings
├─ 🔷 TypeCheck [████████████] ✅ All types valid
└─ 🎨 Format [████████████] ✅ Formatted
Quality Score: 92% | Confidence: HIGH
```
### Implementation Mode Indicators
#### TDD Mode (Default)
```markdown
📋 TDD Progress:
Cycle 3/8: Green Phase
⏳ Current test: should handle edge cases
📝 Lines written: 125 | Tests: 18/20
```
#### Quick Implementation Mode
```markdown
⚡ Quick implementation... [████████░░] 80%
Skipping some quality checks for speed
```
### 2. SOLID Principles During Implementation
Apply during Refactor phase:
- **SRP**: Each class/function has one reason to change
- **OCP**: Extend functionality without modifying existing code
- **LSP**: Derived classes must be substitutable for base classes
- **ISP**: Clients shouldn't depend on unused interfaces
- **DIP**: Depend on abstractions, not concrete implementations
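Two of these principles (SRP and DIP) can be shown in one short sketch; the `Logger`/`OrderService` names are hypothetical, chosen only to illustrate the dependency direction:

```typescript
// DIP: the service depends on the Logger abstraction, not a concrete class.
interface Logger {
  log(message: string): void;
}

class ConsoleLogger implements Logger {
  log(message: string): void {
    console.log(`[app] ${message}`);
  }
}

class MemoryLogger implements Logger {
  entries: string[] = [];
  log(message: string): void {
    this.entries.push(message);
  }
}

class OrderService {
  // Any conforming Logger can be injected - no edit to this class needed (OCP).
  constructor(private logger: Logger) {}

  placeOrder(id: string): void {
    // SRP: this class only coordinates orders; logging details live elsewhere.
    this.logger.log(`order placed: ${id}`);
  }
}

// In tests, inject MemoryLogger instead of touching the console.
const memory = new MemoryLogger();
new OrderService(memory).placeOrder("A-1");
console.log(memory.entries[0]); // "order placed: A-1"
```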
### 3. DRY (Don't Repeat Yourself) Principle
**"Every piece of knowledge must have a single, unambiguous, authoritative representation"**
- Extract repeated logic into functions
- Create configuration objects for repeated values
- Use composition for repeated structure
- Avoid copy-paste programming
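A minimal sketch of the extraction step, with a hypothetical `normalizeRequired` helper replacing checks that were previously copy-pasted into each handler:

```typescript
// Hypothetical DRY refactor: repeated "trim, lowercase, non-empty" validation
// extracted into one authoritative function.
function normalizeRequired(value: string, field: string): string {
  const normalized = value.trim().toLowerCase();
  if (normalized.length === 0) {
    throw new Error(`${field} is required`);
  }
  return normalized;
}

// Every caller now shares the single representation of this rule.
const username = normalizeRequired("  Alice ", "username");
const email = normalizeRequired("USER@Example.COM ", "email");
console.log(username, email); // "alice user@example.com"
```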
### 4. Consistency with Existing Code
- Follow coding conventions
- Utilize existing patterns and libraries
- Maintain naming convention consistency
## Hierarchical Implementation Process
### Phase 1: Context Discovery & Planning
Analyze with confidence scoring:
1. **Code Context**: Understand existing patterns (C: 0.0-1.0)
2. **Dependencies**: Verify required libraries available
3. **Conventions**: Detect and follow project standards
4. **Test Structure**: Identify test patterns to follow
### Phase 2: Parallel Quality Execution
Run quality checks simultaneously:
```typescript
// Execute these in parallel, not sequentially
const qualityChecks = [
Bash({ command: "npm run lint" }),
Bash({ command: "npm run type-check" }),
Bash({ command: "npm test -- --findRelatedTests" }),
Bash({ command: "npm run format:check" })
];
```
### Phase 3: Confidence-Based Decisions
Make implementation choices based on evidence:
- **High Confidence (>0.8)**: Proceed with implementation
- **Medium (0.5-0.8)**: Add defensive checks
- **Low (<0.5)**: Research before implementing
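The thresholds above can be expressed as a small decision helper (the action names are illustrative):

```typescript
// Confidence thresholds mirroring the list above.
type Action = "implement" | "implement-with-guards" | "research-first";

function decideByConfidence(confidence: number): Action {
  if (confidence > 0.8) return "implement";            // high: proceed
  if (confidence >= 0.5) return "implement-with-guards"; // medium: defensive checks
  return "research-first";                              // low: gather evidence first
}

console.log(decideByConfidence(0.9));  // "implement"
console.log(decideByConfidence(0.65)); // "implement-with-guards"
console.log(decideByConfidence(0.3));  // "research-first"
```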
### Phase 4: Code Implementation with TDD
Follow the RGRC cycle defined above:
- **Red**: Write failing test first
- **Green**: Minimal code to pass
- **Refactor**: Apply SOLID and DRY principles
- **Commit**: Save stable state
### Phase 5: Dynamic Quality Checks
#### Automatic Discovery
```bash
!`cat package.json`
```
#### Parallel Execution
````markdown
## Quality Check Results
### Linting (Confidence: 0.95)
```bash
npm run lint | tail -5
```
- Status: ✅ Passing
- Issues: 0 errors, 2 warnings
- Time: 1.2s
### Type Checking (Confidence: 0.98)
```bash
npm run type-check | tail -5
```
- Status: ✅ All types valid
- Files checked: 47
- Time: 3.4s
### Tests (Confidence: 0.92)
```bash
npm test -- --passWithNoTests | grep -E "Tests:|Snapshots:"
```
- Status: ✅ 45/45 passing
- Coverage: 82%
- Time: 8.7s
### Format Check (Confidence: 0.90)
```bash
npm run format:check | tail -3
```
- Status: ⚠️ 3 files need formatting
- Auto-fixable: Yes
- Time: 0.8s
````
#### Quality Score Calculation
```text
Overall Quality Score: (Lint*0.3 + Types*0.3 + Tests*0.3 + Format*0.1) = 0.93
Confidence Level: HIGH - Ready for commit
```
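The score calculation can be written as a small helper. The weights come from the formula above; the 0-1 pass-rate inputs and the HIGH/MEDIUM/LOW cutoffs are assumptions for illustration:

```typescript
// Weighted quality score: lint 0.3, types 0.3, tests 0.3, format 0.1.
// Each input is a 0-1 pass rate (e.g. tests passing / tests total).
interface QualityInputs {
  lint: number;
  types: number;
  tests: number;
  format: number;
}

function qualityScore({ lint, types, tests, format }: QualityInputs): number {
  const score = lint * 0.3 + types * 0.3 + tests * 0.3 + format * 0.1;
  return Math.round(score * 100) / 100; // round to two decimals
}

function confidenceLevel(score: number): "HIGH" | "MEDIUM" | "LOW" {
  if (score >= 0.9) return "HIGH";   // ready for commit
  if (score >= 0.7) return "MEDIUM"; // improve before commit
  return "LOW";                      // not ready
}

const score = qualityScore({ lint: 0.95, types: 0.98, tests: 0.92, format: 0.9 });
console.log(score, confidenceLevel(score));
```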
### Phase 6: Functionality Verification
- Verify on development server
- Validate edge cases
- Check performance
## Advanced Features
### Real-time Test Monitoring
Watch test results during development:
```bash
npm test -- --watch --coverage
```
### Code Complexity Analysis
Track complexity during implementation:
```bash
npx complexity-report src/ | grep -E "Complexity|Maintainability"
```
### Performance Profiling
For performance-critical code:
```bash
npm run profile
```
### Security Scanning
Automatic vulnerability detection:
```bash
npm audit --production | grep -E "found|Severity"
```
## Implementation Patterns
### Pattern Selection by Confidence
```markdown
## Available Patterns (Choose based on context)
### High Confidence Patterns (>0.9)
1. **Factory Pattern** - Object creation
- When: Multiple similar objects
- Confidence: 0.95
- Example in: src/factories/
2. **Repository Pattern** - Data access
- When: Database operations
- Confidence: 0.92
- Example in: src/repositories/
### Medium Confidence Patterns (0.7-0.9)
1. **Observer Pattern** - Event handling
- When: Loose coupling needed
- Confidence: 0.85
- Consider: Built-in EventEmitter
### Experimental Patterns (<0.7)
1. **New architectural pattern**
- Confidence: 0.6
- Recommendation: Prototype first
```
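A minimal sketch of the Repository pattern named above, using an in-memory store; the `User` shape and method names are illustrative, not a project API:

```typescript
// Repository pattern: business logic depends on the interface, so the
// storage backend can change without touching callers.
interface User {
  id: string;
  email: string;
}

interface UserRepository {
  save(user: User): void;
  findById(id: string): User | undefined;
}

class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();

  save(user: User): void {
    this.users.set(user.id, user);
  }

  findById(id: string): User | undefined {
    return this.users.get(id);
  }
}

// Swapping in a database-backed implementation later is a one-line change here.
const repo: UserRepository = new InMemoryUserRepository();
repo.save({ id: "u1", email: "a@example.com" });
console.log(repo.findById("u1")?.email); // "a@example.com"
```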
## Risk Mitigation
### Common Implementation Risks
| Risk | Probability | Impact | Mitigation | Confidence |
|------|------------|--------|------------|------------|
| Breaking existing tests | Medium | High | Run full suite before/after | 0.95 |
| Performance regression | Low | High | Profile critical paths | 0.88 |
| Security vulnerability | Low | Critical | Security scan + review | 0.92 |
| Inconsistent patterns | Medium | Medium | Follow existing examples | 0.90 |
| Missing edge cases | High | Medium | Comprehensive test cases | 0.85 |
## Definition of Done with Confidence Metrics
Implementation complete when all metrics achieved:
```markdown
## Completion Checklist
### Core Implementation
- ✅ All RGRC cycles complete [C: 0.95]
- ✅ Feature works as specified [C: 0.93]
- ✅ Edge cases handled [C: 0.88]
### Quality Metrics
- ✅ All tests passing (47/47) [C: 1.0]
- ✅ Coverage ≥ 80% (current: 82%) [C: 0.95]
- ✅ Zero lint errors [C: 0.98]
- ✅ Zero type errors [C: 1.0]
- ⚠️ 2 lint warnings (documented) [C: 0.85]
### Code Quality
- ✅ SOLID principles applied [C: 0.90]
- ✅ DRY - No duplication [C: 0.92]
- ✅ Readable code standards [C: 0.88]
- ✅ Consistent with codebase [C: 0.94]
### Documentation
- ✅ Code comments where needed [C: 0.85]
- ✅ README updated if required [C: 0.90]
- ✅ API docs current [C: 0.87]
### Overall Confidence: 0.92 (HIGH)
Status: ✅ READY FOR REVIEW
```
If confidence < 0.8 on any critical metric, continue improving.
## Decision Framework
### When Implementation Confidence is Low
```markdown
## Low Confidence Detected (< 0.7)
### Issue: [Uncertain about implementation approach]
Options:
1. **Research More** (/research)
- Time: +30 min
- Confidence gain: +0.3
2. **Prototype First**
- Time: +15 min
- Confidence gain: +0.2
3. **Consult Documentation**
- Time: +10 min
- Confidence gain: +0.15
Recommendation: Option 1 for complex features
```
### Quality Gate Failures
```markdown
## Quality Gate Failed
### Issue: Coverage dropped below 80%
Current: 78% (-2% from main)
Uncovered lines: src/auth/validator.ts:45-52
Actions:
1. ❌ Add tests for uncovered lines
2. ⏳ Or document why not testable
3. ⏳ Or adjust threshold (not recommended)
Proceeding without resolution? (y/N)
```
## Usage Examples
### Basic Implementation
```bash
/code "Add user authentication"
# Standard TDD implementation
```
### With Confidence Threshold
```bash
/code --confidence 0.9 "Critical payment logic"
# Requires 90% confidence before proceeding
```
### Fast Mode (Skip Some Checks)
```bash
/code --fast "Simple UI update"
# Minimal quality checks for low-risk changes
```
### With Specific Pattern
```bash
/code --pattern repository "Database access layer"
# Use repository pattern for implementation
```
## Applied Development Principles
### TDD/RGRC
[@~/.claude/rules/development/TDD_RGRC.md](~/.claude/rules/development/TDD_RGRC.md) - Red-Green-Refactor-Commit cycle
Application:
- **Baby Steps**: Smallest possible change
- **Red**: Write failing test first
- **Green**: Minimal code to pass test
- **Refactor**: Improve clarity
- **Commit**: Manual commit after each cycle
### Occam's Razor
[@~/.claude/rules/reference/OCCAMS_RAZOR.md](~/.claude/rules/reference/OCCAMS_RAZOR.md) - "Entities should not be multiplied without necessity"
Application:
- **Simplest Solution**: Minimal implementation that meets requirements
- **Avoid Unnecessary Complexity**: Don't abstract until proven
- **Question Every Abstraction**: Is it truly necessary?
- **Avoid Premature Optimization**: Only for measured needs
## Next Steps
- **High Confidence (>0.9)** → Ready for `/test` or review
- **Medium (0.7-0.9)** → Consider additional testing
- **Low (<0.7)** → Need `/research` or planning
- **Quality Issues** → Fix before proceeding
- **All Green** → Ready for PR/commit

163
commands/commit.md Normal file

@@ -0,0 +1,163 @@
---
description: >
Analyze Git diff and generate Conventional Commits format messages automatically. Uses commit-generator agent.
Detects commit type (feat/fix/chore/docs), scope, breaking changes. Focuses on "why" rather than "what".
Use after staging changes when ready to commit.
Git差分を分析してConventional Commits形式のメッセージを自動生成。型、スコープ、破壊的変更を検出。
allowed-tools: Task
model: inherit
---
# /commit - Git Commit Message Generator
Analyze staged changes and generate appropriate commit messages following Conventional Commits specification.
**Implementation**: This command delegates to the specialized `commit-generator` subagent for optimal performance and context efficiency.
## How It Works
When invoked, this command:
1. Launches the `commit-generator` subagent via Task tool
2. Subagent analyzes git diff and status (no codebase context needed)
3. Generates Conventional Commits format messages
4. Returns multiple message alternatives
## Usage
### Basic Usage
```bash
/commit
```
Analyzes staged changes and suggests messages.
### With Context
```bash
/commit "Related to authentication flow"
```
Incorporates context into message generation.
### With Issue Number
```bash
/commit "#123"
```
Includes issue reference in commit message.
## Conventional Commits Format
```text
<type>(<scope>): <subject>
[optional body]
[optional footer]
```
### Commit Types
| Type | Use Case |
|------|----------|
| `feat` | New feature |
| `fix` | Bug fix |
| `docs` | Documentation |
| `style` | Formatting |
| `refactor` | Code restructuring |
| `perf` | Performance improvement |
| `test` | Testing |
| `chore` | Maintenance |
| `ci` | CI/CD changes |
| `build` | Build system changes |
### Subject Line Rules
1. **Limit to 72 characters**
2. **Use imperative mood** ("add" not "added")
3. **Don't capitalize first letter after type**
4. **No period at the end**
5. **Be specific but concise**
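The mechanical rules above can be sketched as a rough checker. This is a heuristic only (real tooling such as commitlint covers far more cases), and imperative mood cannot be checked mechanically here:

```typescript
// Heuristic check of the subject-line rules: shape, length, casing, period.
const TYPES = ["feat", "fix", "docs", "style", "refactor", "perf", "test", "chore", "ci", "build"];

function checkSubject(line: string): string[] {
  const problems: string[] = [];
  if (line.length > 72) problems.push("longer than 72 characters");
  const match = line.match(/^([a-z]+)(\([^)]+\))?(!)?: (.+)$/);
  if (!match) {
    problems.push("missing '<type>(<scope>): <subject>' shape");
    return problems;
  }
  const [, type, , , subject] = match;
  if (!TYPES.includes(type)) problems.push(`unknown type '${type}'`);
  if (/^[A-Z]/.test(subject)) problems.push("subject starts with a capital");
  if (subject.endsWith(".")) problems.push("subject ends with a period");
  return problems;
}

console.log(checkSubject("feat(auth): add OAuth2 authentication support")); // []
console.log(checkSubject("feat: Added new feature.")); // capitalized + trailing period
```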
## Good Examples
```markdown
✅ feat(auth): add OAuth2 authentication support
✅ fix(api): resolve timeout in user endpoint
✅ docs(readme): update installation instructions
✅ perf(search): optimize database queries
```
## Bad Examples
```markdown
❌ Fixed bug (no type, too vague)
❌ feat: Added new feature. (capitalized, period)
❌ update code (no type, not specific)
❌ FEAT(AUTH): ADD LOGIN (all caps)
```
## Output Format
The command provides:
- **Analysis summary**: Files changed, lines added/deleted, detected type/scope
- **Recommended message**: Most appropriate based on analysis
- **Alternative formats**: Detailed, concise, with issue reference
- **Usage instructions**: How to commit with the generated message
## Integration with Workflow
Works seamlessly with:
- `/branch` - Create branch first
- `/pr` - Generate PR description after commits
- `/test` - Ensure tests pass before committing
## Technical Details
### Subagent Benefits
- **90% context reduction**: Only git operations, no codebase loading
- **2-3x faster execution**: Lightweight agent optimized for git analysis
- **Specialized logic**: Dedicated to commit message generation
- **Parallel execution**: Can run concurrently with other operations
### Git Operations Used
The subagent only executes git commands:
- `git diff --staged` - Analyze changes
- `git status` - Check file status
- `git log` - Learn commit style
- No file system access or code parsing
## Related Commands
- `/branch` - Generate branch names from changes
- `/pr` - Create PR descriptions
- `/review` - Code review before committing
## Best Practices
1. **Stage related changes**: Group logically related changes
2. **Commit frequently**: Small, focused commits are better
3. **Review before committing**: Check the suggested message
4. **Include breaking changes**: Always note breaking changes
5. **Reference issues**: Link commits to issues when applicable
## Context Efficiency
This command is optimized for minimal context usage:
- ✅ No codebase files loaded
- ✅ Only git metadata analyzed
- ✅ Fast execution (<5 seconds)
- ✅ Can run in parallel with other tasks
---
**Note**: For implementation details, see `.claude/agents/git/commit-generator.md`

157
commands/context.md Normal file

@@ -0,0 +1,157 @@
---
description: >
Diagnose current context usage and provide token optimization recommendations.
Displays token usage, file count, session cost. Helps identify context-heavy operations.
Use when context limits are approaching or to optimize session efficiency.
現在のコンテキスト使用状況を診断し、トークン最適化の推奨事項を提供。
allowed-tools: Read, Glob, Grep, LS, Bash(wc:*), Bash(du:*), Bash(find:*)
model: inherit
---
# /context - Context Diagnostics & Optimization
## Purpose
Diagnose current context usage and provide token optimization recommendations.
## Dynamic Context Analysis
### Session Statistics
```bash
!`wc -l ~/.claude/CLAUDE.md ~/.claude/rules/*/*.md 2>/dev/null | tail -1`
```
### Current Working Files
```bash
!`find . -type f \( -name "*.md" -o -name "*.json" -o -name "*.ts" -o -name "*.tsx" \) | grep -v node_modules | wc -l`
```
### Modified Files in Session
```bash
!`git status --porcelain 2>/dev/null | wc -l`
```
### Memory Usage Estimate
```bash
!`du -sh ~/.claude 2>/dev/null`
```
## Context Optimization Strategies
### 1. File Analysis
- **Large Files Detection**: Identify files over 500 lines
- **Redundant Files**: Detect unused files
- **Pattern Files**: Suggest compression for repetitive patterns
### 2. Token Usage Breakdown
```markdown
## Token Usage Analysis
- System Prompts: ~[calculated]
- User Messages: ~[calculated]
- Tool Results: ~[calculated]
- Total Context: ~[calculated]
```
### 3. Optimization Recommendations
Based on analysis, recommended optimizations:
1. **File Chunking**: Split large files
2. **Selective Loading**: Load only necessary parts
3. **Context Pruning**: Remove unnecessary information
4. **Compression**: Compress repetitive information
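The usage figures in the report can be approximated with the common chars-per-token heuristic (roughly 4 characters per token; this is an estimate, not an exact tokenizer count). The 200k limit and 90% warning threshold follow the Best Practices below:

```typescript
// Rough token estimation for context reports.
const CHARS_PER_TOKEN = 4;     // heuristic, not an exact tokenizer
const CONTEXT_LIMIT = 200_000; // model context window
const WARN_RATIO = 0.9;        // warn at 180k tokens (90%)

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

function contextReport(texts: string[]): { tokens: number; utilization: number; warn: boolean } {
  const tokens = texts.reduce((sum, t) => sum + estimateTokens(t), 0);
  const utilization = tokens / CONTEXT_LIMIT;
  return { tokens, utilization, warn: utilization >= WARN_RATIO };
}

const report = contextReport(["x".repeat(8_000)]);
console.log(report.tokens, report.warn); // 2000 false
```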
## Usage Examples
### Basic Context Check
```bash
/context
# Display current context usage
```
### With Optimization
```bash
/context --optimize
# Detailed analysis with optimization suggestions
```
### Token Limit Check
```bash
/context --check-limit
# Check usage against token limit
```
## Output Format
```markdown
📊 Context Diagnostic Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📈 Usage:
- Current Tokens: ~XXXk / 200k
- Utilization: XX%
- Estimated Remaining: ~XXXk tokens
📁 File Statistics:
- Loaded: XX files
- Total Lines: XXXX lines
- Largest File: [filename] (XXX lines)
⚠️ Warnings:
- [Warning if approaching 200k limit]
- [Warning for large files]
💡 Optimization Suggestions:
1. [Specific suggestion]
2. [Specific suggestion]
📝 Session Info:
- Start Time: [timestamp]
- Files Modified: XX
- Estimated Cost: $X.XX
━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Integration with Other Commands
- `/doctor` - Full system diagnostics
- `/status` - Status check
- `/cost` - Cost calculation
## Best Practices
1. **Regular Checks**: Run periodically during large tasks
2. **Warning at Limit**: Alert at 180k tokens (90%)
3. **Auto-optimization**: Suggest automatic compression when needed
## Advanced Features
### Context History
View past context usage history:
```bash
ls -la ~/.claude/logs/sessions/latest-session.json 2>/dev/null
```
### Real-time Monitoring
Track usage in real-time:
```bash
echo "Current context estimation in progress..."
```
## Notes
- Leverages `exceeds_200k_tokens` flag added in Version 1.0.86
- Settings changes reflect immediately without restart (v1.0.90)
- Session statistics saved automatically via SessionEnd hook

529
commands/fix.md Normal file

@@ -0,0 +1,529 @@
---
description: >
Rapidly fix small bugs and minor improvements in development environment with dynamic problem detection and parallel quality verification.
Use for well-understood (≥80%) small-scale fixes in development only. NOT for production emergencies (use /hotfix instead).
Applies Occam's Razor (simplest solution), Progressive Enhancement (CSS-first), and TIDYINGS (clean as you go).
開発環境で小さなバグの修正や軽微な改善を素早く実行。よく理解された小規模な修正に使用。本番緊急時は/hotfixを使用。
allowed-tools: Bash(git diff:*), Bash(git ls-files:*), Bash(npm test:*), Bash(npm run), Bash(npm run:*), Bash(yarn run), Bash(yarn run:*), Bash(pnpm run), Bash(pnpm run:*), Bash(bun run), Bash(bun run:*), Bash(ls:*), Edit, MultiEdit, Read, Grep, Task
model: inherit
argument-hint: "[bug or issue description]"
---
# /fix - Advanced Quick Fix with Dynamic Analysis
## Purpose
Rapidly fix small bugs with dynamic problem detection, confidence scoring, and parallel quality verification.
## Usage
For simple fixes that don't require extensive planning or research.
## Dynamic Problem Context
### Recent Changes Analysis
```bash
!`git diff HEAD~1 --stat`
```
### Test Status Check
```bash
!`find . \( -name "*test*" -o -name "*spec*" \) -not -path "*/node_modules/*" | head -20`
```
### Quality Commands Discovery
```bash
!`npm run || yarn run || pnpm run || bun run`
```
### Related Files Detection
```bash
!`git ls-files --modified`
```
## Phase 0.5: Deep Root Cause Analysis
**Purpose**: Identify the true root cause, not just surface symptoms, before attempting fixes.
### Step 1: Explore Bug Context
Use Explore agent for rapid context gathering (30 seconds):
```typescript
Task({
subagent_type: "Explore",
thoroughness: "quick",
description: "Explore bug-related code",
prompt: `
Bug: "${bugDescription}"
Find (30s):
1. Related files: mentioned, recent changes, similar patterns [✓]
2. Dependencies: interacting components, shared utilities [✓]
3. Recent commits (last 10), PRs [✓]
4. Similar past issues [→]
Return: Findings with file:line evidence. Mark: [✓/→/?].
`
})
```
### Step 2: Root Cause Analysis
Use root-cause-reviewer for deep analysis:
```typescript
Task({
subagent_type: "root-cause-reviewer",
description: "Identify root cause",
prompt: `
Bug: "${bugDescription}"
Context: ${exploreFindings}
5 Whys analysis:
1. Symptom vs root cause (dig 5x deep) [✓]
2. Similar bugs: pattern or isolated? [→]
3. Fix strategy: symptom or root cause? [→]
4. Impact: affected areas, side effects, tests [→]
Return: Root cause [✓] with evidence, fix approach [→], prevention [→], risks [?].
`
})
```
### Benefits of Phase 0.5
```yaml
Before (without Phase 0.5):
- Quick surface-level fix
- May miss root cause
- Bug might recur elsewhere
- Confidence: 0.75-0.85
After (with Phase 0.5):
- Root cause identified
- Comprehensive fix
- Prevention measures included
- Confidence: 0.95+
```
### Cost-Benefit Analysis
| Aspect | Cost | Benefit |
|--------|------|---------|
| **Time** | +30-60s | Fewer iterations, no recurrence |
| **Expense** | +$0.05-0.15 | Permanent fix vs temporary patch |
| **Quality** | None | Root cause resolution |
| **Prevention** | None | Similar bugs prevented |
**ROI**: High - One-time investment prevents multiple future fixes.
## Hierarchical Fix Process
### Phase 1: Problem Analysis (Enhanced with Phase 0.5 - Confidence Target: 0.95+)
**Integration**: Uses findings from Phase 0.5 for higher confidence decisions.
Dynamic root cause identification:
1. **Issue Detection**: Analyze symptoms and error patterns
- **Enhanced**: Leverage Explore findings for context
2. **Impact Scope**: Determine affected files and components
- **Enhanced**: Use dependency analysis from Phase 0.5
3. **Root Cause**: Identify why not just what
- **Enhanced**: Apply root-cause-reviewer insights
4. **Fix Strategy**: Choose simplest effective approach
- **Enhanced**: Based on root cause, not symptoms
- **Enhanced**: Include prevention measures from Phase 0.5
### Phase 2: Targeted Implementation (Confidence Target: 0.90)
Apply fix with confidence scoring:
- **High Confidence (>0.9)**: Direct fix implementation
- **Medium (0.7-0.9)**: Add defensive checks
- **Low (<0.7)**: Research before fixing
### Phase 3: Parallel Verification (Confidence Target: 0.95)
Simultaneous quality checks:
```typescript
// Execute in parallel, not sequentially
const checks = [
Bash({ command: "npm test -- --findRelatedTests" }),
Bash({ command: "npm run lint -- --fix" }),
Bash({ command: "npm run type-check" })
];
```
## Enhanced Execution with Confidence Metrics
### 1. Dynamic Problem Analysis
#### Issue Classification
```markdown
[TEMPLATE: Problem Analysis Section]
Category: [UI/Logic/Performance/Type/Test]
- **Symptoms**: [Observable behavior]
- **Evidence**: [Error messages, test failures]
- **Root Cause**: [Why it's happening]
- **Confidence**: [0.0-1.0 score]
```
#### Recent Context
```bash
git log --oneline -5 --grep="fix"
```
### 2. Smart Implementation
#### Fix Approach Selection
```markdown
[TEMPLATE: Fix Strategy Section]
Selected Approach: [Name]
- **Implementation**: [How to fix]
- **Rationale**: [Why this approach]
- **Risk Level**: [Low/Medium/High]
- **Alternative**: [If confidence < 0.8]
```
#### Progressive Enhancement Check
```markdown
[TEMPLATE: CSS-First Analysis]
- Can CSS solve this? [Yes/No]
- If Yes: [CSS solution]
- If No: [Why JS is needed]
```
### 3. Real-time Verification
#### Parallel Quality Execution
Run quality checks in parallel:
```bash
npm test -- --findRelatedTests | grep -E "PASS|FAIL" | head -5
npm run lint | tail -3
npm run type-check | tail -3
```
#### Regression Check
```bash
npm test -- --onlyChanged | grep -E "Test Suites:"
```
## Advanced TodoWrite Integration
Real-time tracking with confidence scoring:
```markdown
[TODO LIST TEMPLATE]
# Fix: [Issue Description]
## Analysis Phase (Confidence: X.XX)
1. ✅ Problem identified [C: 0.95] ✓ 2 min
2. ❌ Root cause analysis [C: 0.85] ⏱️ Active
3. ⏳ Fix strategy selection [C: pending]
## Implementation Phase
4. ⏳ Apply targeted fix [C: pending]
5. ⏳ Update related tests [C: pending]
## Verification Phase
6. ⏳ Run quality checks (parallel) [C: pending]
7. ⏳ Confirm fix resolves issue [C: pending]
## Metrics
- 🎯 Problem Clarity: 85%
- 🔧 Fix Confidence: 90%
- ✅ Tests Passing: 12/12
- 📊 Coverage Impact: +2%
```
## Definition of Done with Confidence Scoring
```markdown
[COMPLETION CHECKLIST TEMPLATE]
### Problem Resolution
- ✅ Root cause identified [C: 0.92]
- ✅ Fix addresses cause not symptom [C: 0.88]
- ✅ Minimal complexity solution [C: 0.95]
### Quality Metrics
- ✅ All related tests pass [C: 1.0]
- ✅ No new lint errors [C: 0.98]
- ✅ Type safety maintained [C: 1.0]
- ✅ No regressions detected [C: 0.93]
### Code Standards
- ✅ Follows existing patterns [C: 0.90]
- ✅ Progressive Enhancement applied [C: 0.87]
- ✅ Documentation updated if needed [C: 0.85]
### Overall Confidence: 0.91 (HIGH)
Status: ✅ FIX COMPLETE
```
If any metric has confidence < 0.8, continue improving.
## Progress Display
### Quick Fix Progress Visualization
Display fix progress with confidence tracking:
```markdown
📋 Fix Task: Button Alignment Issue
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Analysis: [████████████] Complete (Confidence: 92%)
Implementation: [████████░░░░] 70% In progress...
Verification: [░░░░░░░░░░░░] Waiting
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Current: Editing CSS file...
Changes: 2 files | 15 lines | Elapsed: 3 min
```
### Parallel Quality Checks
Show concurrent quality verification:
```markdown
🔍 Quality Verification (Parallel Execution):
├─ 🧪 Tests [████████████] ✅ 12/12 passing
├─ 🔍 Lint [████████████] ✅ No new issues
├─ 🔷 Types [████████████] ✅ All valid
└─ 📊 Regress [████████░░░░] ⏳ Checking...
Fix Confidence: 90% | Status: Safe
```
### Fix Mode Indicators
```markdown
⚡ Quick fix mode
Focus: Minimal change for maximum impact
Scope: 2 files | Risk: Low | Time: < 5 min
```
## Enhanced Output Format
````markdown
[FIX SUMMARY TEMPLATE]
## 🔧 Fix Summary
### Problem
- **Issue**: [Description]
- **Category**: [UI/Logic/Performance/Type]
- **Root Cause**: [Why it happened]
- **Confidence**: 0.XX
### Solution Applied
- **Approach**: [Fix strategy used]
- **Files Modified**:
  - `path/to/file.ts` - [What changed]
  - `path/to/test.ts` - [Test updates]
- **Progressive Enhancement**: [CSS-first approach used?]
### Verification Results
- **Tests**: ✅ 15/15 passing
- **Lint**: ✅ No issues
- **Types**: ✅ All valid
- **Coverage**: 82% (+2%)
- **Regression**: None detected
### Confidence Metrics
| Stage | Confidence | Result |
|-------|------------|--------|
| Analysis | 0.88 | Root cause found |
| Implementation | 0.92 | Clean fix applied |
| Verification | 0.95 | All checks pass |
| **Overall** | **0.91** | **HIGH CONFIDENCE** |
### Performance Impact
```bash
git diff HEAD --stat | tail -3
```
- Lines changed: XX
- Files affected: X
- Complexity: Low
````
## Decision Framework
### When to Use `/fix` (Confidence Check)
```markdown
✅ High Confidence Scenarios (>0.85)
- Single-file fixes with clear scope
- Test failures with obvious causes
- Typo and naming corrections
- CSS-solvable UI issues
- Simple logic errors
- Configuration updates
⚠️ Medium Confidence (0.6-0.85)
- Two-file coordinated changes
- Missing error handling
- Performance optimizations
- State management fixes
❌ Low Confidence (<0.6) - Use different command
- Multi-file refactoring → /code
- Unknown root cause → /research
- New features → /think → /code
- Production issues → /hotfix
```
### Automatic Command Switching
If confidence drops below 0.6 during analysis:
```markdown
[LOW CONFIDENCE ALERT TEMPLATE]
Issue: Fix scope exceeds /fix capabilities
Recommended action:
- For investigation: /research
- For planning: /think
- For implementation: /code
Switch command? (Y/n)
```
## Example Usage with Confidence
### High Confidence Fix
```markdown
/fix "Fix button alignment issue in header"
# Confidence: 0.92 - CSS-first solution available
```
### Medium Confidence Fix
```markdown
/fix "Resolve state update not triggering re-render"
# Confidence: 0.75 - May need investigation
```
### Auto-escalation Example
```markdown
/fix "Optimize database query performance"
# Confidence: 0.45 - Suggests /research → /think instead
```
## Next Steps Based on Outcome
### Success Path (Confidence >0.9)
- Document learnings
- Add regression test
- Update related documentation
### Partial Success (Confidence 0.7-0.9)
- Review with `/research` for completeness
- Consider follow-up `/fix` for remaining issues
### Escalation Path (Confidence <0.7)
```markdown
[WORKFLOW RECOMMENDATION TEMPLATE]
Based on analysis, suggesting:
1. /research - Investigate deeper
2. /think - Plan comprehensive solution
3. /code - Implement with full TDD
```
## Advanced Features
### Pattern Learning
Track successful fixes for similar issues:
```bash
git log --grep="fix" --oneline | grep -i "[similar keyword]" | head -3
```
### Auto-discovery
Find related issues that might need fixing:
```bash
grep -r "TODO\|FIXME\|HACK" --include="*.ts" --include="*.tsx" | head -5
```
### Quality Trend
Monitor fix impact on codebase health:
```bash
npm run lint | grep -E "problems|warnings"
```
## Applied Principles
This command applies Progressive Enhancement from [@~/.claude/rules/development/PROGRESSIVE_ENHANCEMENT.md](~/.claude/rules/development/PROGRESSIVE_ENHANCEMENT.md):
- CSS-first approach for UI issues
- Root cause analysis
- Minimal complexity solutions
## Command Differentiation Guide
### Use `/fix` when
- 🔧 Working in development environment
- Issue is small and well-understood
- Fix can be tested normally
- No production emergency
- Standard deployment timeline is acceptable
### Use `/hotfix` instead when
- 🚨 Production is down or impaired
- Security vulnerability in production
- Users are actively affected
- Immediate deployment required
- Emergency response needed
### Key Difference
- **fix**: Rapid development fixes with normal testing/deployment
- **hotfix**: Emergency production fixes requiring immediate action
## Applied Development Principles
### Occam's Razor
[@~/.claude/rules/reference/OCCAMS_RAZOR.md](~/.claude/rules/reference/OCCAMS_RAZOR.md) - "Entities should not be multiplied without necessity"
Application in fixes:
- **Minimal Change**: Simplest change that fixes the issue
- **Avoid Restructuring**: Don't change surrounding code
- **Minimize Side Effects**: Fix stays focused on the problem
- **Avoid Over-generalization**: Solve just the current issue
### TIDYINGS
[@~/.claude/rules/development/TIDYINGS.md](~/.claude/rules/development/TIDYINGS.md) - Clean as you go
Application in fixes:
- **Clean Only What You Touch**: Improve only fix-related code
- **Leave It Better**: Better than before, not perfect
- **Incremental Improvement**: Don't try to fix everything at once
- **Boy Scout Rule**: Leave it cleaner than you found it

180
commands/full-cycle.md Normal file

@@ -0,0 +1,180 @@
---
description: >
Orchestrate complete development cycle through SlashCommand tool integration, executing from research through implementation, testing, and validation.
Chains multiple commands: /research → /think → /code → /test → /review → /validate with conditional execution and error handling.
TodoWrite integration for progress tracking. Use for comprehensive feature development requiring full workflow automation.
SlashCommandツール統合により、研究から実装、テスト、検証まで完全な開発サイクルを統括。
allowed-tools: SlashCommand, TodoWrite, Read, Write, Edit, MultiEdit
model: inherit
argument-hint: "[feature or task description]"
---
# /full-cycle - Complete Development Cycle Automation
## Purpose
Systematically orchestrate the complete development cycle through SlashCommand tool integration, executing each phase from research through implementation, testing, and validation.
## Workflow Instructions
Follow this command sequence when invoked. Use the **SlashCommand tool** to execute each command:
### Phase 1: Research
**Use SlashCommand tool to execute**: `/research [task description]`
- Explore codebase structure and understand existing implementation
- Document findings for context
- On failure: Terminate workflow and report to user
### Phase 2: Planning
**Use SlashCommand tool to execute**: `/think [feature description]`
- Create comprehensive SOW with acceptance criteria
- Define implementation approach and risks
- On failure: May retry once or ask user for clarification
### Phase 3: Implementation
**Use SlashCommand tool to execute**: `/code [implementation details]`
- Implement following TDD/RGRC cycle
- Apply SOLID principles and code quality standards
- On failure: **Use SlashCommand tool to execute `/fix`** and retry
### Phase 4: Testing
**Use SlashCommand tool to execute**: `/test`
- Run all tests (unit, integration, E2E)
- Verify quality standards
- On failure: **Use SlashCommand tool to execute `/fix`** and re-test
### Phase 5: Review
**Use SlashCommand tool to execute**: `/review`
- Multi-agent code review for quality, security, performance
- Generate actionable recommendations
- On failure: Document issues for manual review
### Phase 6: Validation
**Use SlashCommand tool to execute**: `/validate`
- Verify implementation against SOW criteria
- Check coverage and performance metrics
- On failure: Report missing requirements
## Progress Tracking
Use **TodoWrite** tool throughout to track progress:
```markdown
Development Cycle Progress:
- [ ] Research phase (Use SlashCommand: /research)
- [ ] Planning phase (Use SlashCommand: /think)
- [ ] Implementation phase (Use SlashCommand: /code)
- [ ] Testing phase (Use SlashCommand: /test)
- [ ] Review phase (Use SlashCommand: /review)
- [ ] Validation phase (Use SlashCommand: /validate)
```
Update each task status as commands complete.
## Error Handling Strategy
When a command fails:
1. **For /code or /test failures**: Automatically use SlashCommand to invoke `/fix`
2. **For /research or /think failures**: Ask user for clarification
3. **For /review failures**: Continue with documented issues
4. **For /validate failures**: Report specific criteria that failed
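The four rules above can be sketched as a small dispatch function. This is illustrative only — the real handling is performed via the SlashCommand tool, not by shell:

```shell
# Illustrative dispatch of the four failure-handling rules above.
handle_failure() {
  case "$1" in
    code|test)      echo "invoke /fix automatically, then retry" ;;
    research|think) echo "ask user for clarification" ;;
    review)         echo "continue with documented issues" ;;
    validate)       echo "report the specific criteria that failed" ;;
    *)              echo "unknown phase: $1" ;;
  esac
}

handle_failure test      # → invoke /fix automatically, then retry
handle_failure review    # → continue with documented issues
```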
## Conditional Execution
After each phase, evaluate results:
- If test coverage < 80%: Consider additional test implementation
- If critical security issues found: Prioritize fixes before proceeding
- If performance issues detected: May need optimization pass
## Example Execution
```markdown
User: /full-cycle "Add user authentication feature"
Claude: Starting full development cycle...
[Uses SlashCommand to execute: /research user authentication]
✓ Research complete - found existing auth patterns
[Uses SlashCommand to execute: /think Add OAuth2 authentication]
✓ SOW created with 8 acceptance criteria
[Uses SlashCommand to execute: /code Implement OAuth2 login flow]
✓ Implementation complete - 15 files modified
[Uses SlashCommand to execute: /test]
⚠ 3 tests failed
[Uses SlashCommand to execute: /fix]
✓ Fixes applied
[Uses SlashCommand to execute: /test]
✓ All tests passing
[Uses SlashCommand to execute: /review]
✓ Review complete - 2 medium priority issues found
[Uses SlashCommand to execute: /validate]
✓ All acceptance criteria met
Complete! Feature successfully implemented and validated.
```
## Usage Specifications
```bash
# Standard execution
/full-cycle
# Selectively skip phases
/full-cycle --skip=research,think
# Initiate from specific phase
/full-cycle --start-from=code
# Dry-run mode (display plan without execution)
/full-cycle --dry-run
```
## Integration Benefits
1. **🔄 Complete Automation**: Minimizes manual intervention throughout workflow
2. **📊 Progress Visibility**: Seamlessly integrates with TodoWrite for transparent tracking
3. **🛡️ Error Resilience**: Intelligent retry mechanisms with automatic corrections
4. **⚡ Optimized Execution**: Ensures optimal command sequence and timing
## Configuration Specification
Customize behavior through settings.json:
```json
{
"full_cycle": {
"default_sequence": ["research", "think", "code", "test", "review"],
"error_handling": "stop_on_failure",
"parallel_execution": true,
"auto_commit": false
}
}
```
## Critical Requirements
- Strictly requires SlashCommand tool (v1.0.123+)
- Execution permissions must be explicitly configured for each command
- Automatic corrections utilize `/fix` only when available
- Comprehensive summary report generated upon completion

156
commands/gemini/search.md Normal file
View File

@@ -0,0 +1,156 @@
---
name: search
description: Run Google searches via the Gemini CLI
allowed-tools: Bash(gemini:*), TodoWrite
priority: medium
suitable_for:
scale: [small, medium, large]
type: [research, exploration]
understanding: "any"
urgency: [low, medium, high]
aliases: [gsearch, google]
timeout: 30
context:
files_changed: "none"
lines_changed: "0"
new_features: false
breaking_changes: false
---
# /gemini:search - Google Search via Gemini
## Purpose
Use Gemini CLI to perform Google searches and get comprehensive results with AI-powered insights.
## Usage
Describe what you want to search:
- "Latest React performance optimization techniques"
- "TypeScript 5.0 new features"
- "Best practices for API security 2024"
## Execution Strategy
### 1. Query Optimization
- Enhance search terms for better results
- Add relevant keywords and timeframes
- Focus on authoritative sources
### 2. Search via Gemini
```bash
gemini --prompt "Search and summarize: {{query}}
Focus on:
- Latest information (prioritize recent sources)
- Authoritative sources
- Practical examples
- Key insights and trends"
```
### 3. TodoWrite Integration
Track search progress:
```markdown
# Search: [topic]
1. ⏳ Execute search
2. ⏳ Analyze results
3. ⏳ Extract key findings
```
## Search Types
### Technical Research
```bash
gemini -p "Technical search: {{query}}
Include:
- Official documentation
- GitHub repositories
- Stack Overflow solutions
- Technical blog posts"
```
### Best Practices
```bash
gemini -p "Best practices search: {{query}}
Focus on:
- Industry standards
- Expert recommendations
- Case studies
- Common pitfalls"
```
### Troubleshooting
```bash
gemini -p "Troubleshooting search: {{query}}
Find:
- Common causes
- Solution approaches
- Similar issues
- Workarounds"
```
## Output Format
```markdown
## Search Results: [Query]
### Key Findings
- [Main insight 1]
- [Main insight 2]
- [Main insight 3]
### Relevant Sources
1. [Source with brief description]
2. [Source with brief description]
### Recommended Actions
- [Next step based on findings]
```
## When to Use
- Researching new technologies
- Finding best practices
- Troubleshooting errors
- Exploring implementation approaches
- Staying updated with trends
## When NOT to Use
- Simple factual queries (use WebSearch)
- Local codebase search (use Grep/Glob)
- API documentation (use official docs)
## Example Usage
```markdown
/gemini:search "React Server Components production deployment"
/gemini:search "Solving N+1 query problem in GraphQL"
/gemini:search "Kubernetes autoscaling best practices 2024"
```
## Tips
1. **Be specific** - Include context and constraints
2. **Add timeframe** - "2024", "latest", "recent"
3. **Specify domain** - "TypeScript", "React", "Node.js"
4. **Request format** - "with examples", "step-by-step"
## Prerequisites
- Gemini CLI installed and configured
- Internet connection
- Valid Gemini API credentials
## Next Steps
- Promising findings → `/research` for deeper dive
- Implementation ideas → `/think` for planning
- Quick fixes found → `/fix` to apply

159
commands/hotfix.md Normal file
View File

@@ -0,0 +1,159 @@
---
description: >
Emergency fixes for critical production issues ONLY. For production-impacting problems, security vulnerabilities, or immediate deployment needs.
5-min triage, 15-min fix, 10-min test. Minimal process overhead with required rollback plan.
NOT for development fixes (use /fix instead). High severity (critical/security) only.
本番環境の緊急対応が必要な重大な問題を修正。本番影響、セキュリティ脆弱性、即座のデプロイが必要な場合のみ。
allowed-tools: Bash(git diff:*), Bash(git status:*), Bash(git log:*), Bash(git show:*), Edit, MultiEdit, Read, Write, Glob, Grep, Task
model: inherit
argument-hint: "[critical issue description]"
---
# /hotfix - Emergency Hot Fix
## Purpose
Apply critical fixes to production issues with minimal process overhead while maintaining quality.
## Usage
For urgent production bugs requiring immediate attention.
## Workflow
Streamlined critical path: Quick analysis → Fix → Test → Deploy readiness
## Safety Checks
**MANDATORY**: Before proceeding with hotfix:
1. **Impact Assessment**: Is production truly affected?
2. **Rollback Ready**: Note current version/commit
3. **Minimum Fix**: Scope the smallest possible change
4. **Team Alert**: Notify stakeholders
5. **Test Plan**: Define critical path testing
⚠️ **WARNING**: Production changes carry high risk. Double-check everything.
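Step 2 ("Rollback Ready") can be sketched as a one-liner that pins the current commit before any change is made. The function and file name are illustrative, not part of the workflow's tooling:

```shell
# Record the currently deployed commit so the rollback plan
# always has a concrete target to revert to (illustrative).
note_rollback_point() {
  git rev-parse HEAD > .hotfix-rollback 2>/dev/null || return 1
  echo "Rollback target: $(cat .hotfix-rollback)"
}
```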
## Execution Steps
### 1. Triage (5 min)
- Confirm production impact
- Identify root cause
- Define minimum fix
### 2. Fix (15 min)
- Apply focused change
- Stability > Elegance
- Document technical debt
### 3. Test (10 min)
- Verify issue resolved
- Check critical paths only
- No comprehensive testing
### 4. Deploy Ready
- Clear commit message
- Rollback documented
- Team notified
## TodoWrite Integration
Emergency tracking (keep it simple):
```markdown
# HOTFIX: [Critical issue]
1. ⏳ Triage & Assess
2. ⏳ Emergency Fix
3. ⏳ Critical Testing
```
## Output Format
```markdown
## 🚨 HOTFIX Summary
- Critical Issue: [Description]
- Severity: [Critical/High]
- Root Cause: [Brief explanation]
## Changes Made
- Files Modified: [List with specific changes]
- Risk Assessment: [Low/Medium/High]
- Rollback Plan: [How to revert if needed]
## Verification
- Issue Resolved: [Yes/No]
- Tests Passed: [List of tests]
- Side Effects: [None/Listed]
## Follow-up Required
- Technical Debt: [What needs cleanup]
- Full Testing: [What needs comprehensive testing]
- Documentation: [What needs updating]
```
## When to Use
- Production is down
- Security vulnerabilities
- Data corruption risks
- Critical user-facing bugs
- Regulatory compliance issues
## When NOT to Use
- Feature requests
- Performance improvements
- Refactoring
- Non-critical bugs
## Core Principles
- Fix first, perfect later
- Document everything
- Test the critical path
- Plan for rollback
- Schedule follow-up
## Example Usage
```markdown
/hotfix "Payment processing returning 500 errors"
/hotfix "User data exposed in API response"
/hotfix "Login completely broken after deployment"
```
## Post-Hotfix Actions
1. **Immediate**: Deploy and monitor
2. **Within 24h**: Run `/reflect` to document lessons
3. **Within 1 week**: Use full workflow to properly fix
4. **Update**: Add test cases to prevent recurrence
## Command Differentiation Guide
### Use `/hotfix` when
- 🚨 Production environment is affected
- System is down or severely impaired
- Security vulnerability discovered
- Data integrity at risk
- Regulatory compliance issue
- Users cannot complete critical actions
### Use `/fix` instead when
- 🔧 Working in development environment
- Issue is minor or cosmetic
- No immediate user impact
- Can wait for normal deployment cycle
- Testing can follow standard process
### Key Difference
- **hotfix**: Emergency production fixes with immediate deployment need
- **fix**: Rapid development fixes following normal deployment flow

159
commands/pr.md Normal file
View File

@@ -0,0 +1,159 @@
---
description: >
Analyze branch changes and generate comprehensive PR description automatically. Uses pr-generator agent.
Examines all commits from branch divergence, not just latest. Creates summary, test plan, and checklist.
Use when ready to create pull request and need description text.
ブランチの変更内容を分析して包括的なPR説明文を自動生成。分岐点からのすべてのコミットを検査。
allowed-tools: Task
model: inherit
---
# /pr - Pull Request Description Generator
Analyze all changes in the current branch compared to the base branch and generate comprehensive PR descriptions.
**Implementation**: This command delegates to the specialized `pr-generator` subagent for optimal performance and context efficiency.
## How It Works
When invoked, this command:
1. Launches the `pr-generator` subagent via Task tool
2. Subagent detects base branch dynamically (main/master/develop)
3. Analyzes git diff, commit history, and file changes
4. Generates comprehensive PR descriptions
5. Returns multiple template alternatives
## Usage
### Basic Usage
```bash
/pr
```
Generates PR description from current branch changes.
### With Issue Reference
```bash
/pr "#456"
```
Links PR to specific issue.
### With Custom Context
```bash
/pr "This PR implements the new authentication flow discussed in the team meeting"
```
Incorporates additional context into the description.
## PR Description Structure
### Essential Sections
1. **Summary**: High-level overview
2. **Motivation**: Why these changes
3. **Changes**: Detailed breakdown
4. **Testing**: Verification steps
5. **Related**: Linked issues/PRs
### Optional Sections
- **Screenshots**: For UI changes
- **Breaking Changes**: For API modifications
- **Performance Impact**: For optimizations
- **Migration Guide**: For breaking changes
## Output Format
The command provides:
- **Branch analysis**: Current/base branches, commits, files, lines changed
- **Change summary**: Type, affected components, breaking changes, test coverage
- **Recommended template**: Comprehensive PR description
- **Alternative formats**: Detailed, concise, custom versions
- **Usage instructions**: How to create PR with description
## Integration with Workflow
Works seamlessly with:
- `/branch` - Create branch first
- `/commit` - Make commits
- `/pr` - Generate PR description
- `/review` - Code review after PR
## Technical Details
### Subagent Benefits
- **90% context reduction**: Only git operations, no codebase loading
- **2-3x faster execution**: Lightweight agent optimized for git analysis
- **Specialized logic**: Dedicated to PR description generation
- **Parallel execution**: Can run concurrently with other operations
### Git Operations Used
The subagent only executes git commands:
- `git symbolic-ref` - Detect base branch
- `git diff` - Compare branches
- `git log` - Analyze commits
- `git status` - Check current state
- No file system access or code parsing
### Base Branch Detection
The subagent automatically detects the base branch:
1. Attempts: `git symbolic-ref refs/remotes/origin/HEAD`
2. Falls back to: `main` → `master` → `develop`
3. Never assumes without verification
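A minimal shell sketch of that detection order — the function name is illustrative, and the subagent performs the equivalent logic internally:

```shell
# Sketch of the detection order described above: origin/HEAD first,
# then the common base-branch names in order.
detect_base_branch() {
  local base
  base=$(git symbolic-ref --short refs/remotes/origin/HEAD 2>/dev/null | sed 's|^origin/||')
  if [ -z "$base" ]; then
    local candidate
    for candidate in main master develop; do
      if git show-ref --verify --quiet "refs/heads/$candidate" 2>/dev/null; then
        base=$candidate
        break
      fi
    done
  fi
  printf '%s\n' "${base:-unknown}"
}
```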
## Related Commands
- `/branch` - Generate branch names
- `/commit` - Generate commit messages
- `/review` - Code review
## Best Practices
1. **Create PR after commits**: Ensure all changes are committed
2. **Include context**: Provide motivation and goals
3. **Add testing steps**: Help reviewers verify
4. **Link issues**: Connect to relevant issues
5. **Review before submitting**: Check generated description
## Context Efficiency
This command is optimized for minimal context usage:
- ✅ No codebase files loaded
- ✅ Only git metadata analyzed
- ✅ Fast execution (<10 seconds)
- ✅ Can run in parallel with other tasks
## Smart Features
### Automatic Detection
- Issue numbers from commits/branch
- Change type (feature/fix/refactor)
- Breaking changes
- Test coverage
- Affected components
### Pattern Recognition
- API changes
- UI updates
- Database modifications
- Configuration changes
- Dependency updates
---
**Note**: For implementation details, see `.claude/agents/git/pr-generator.md`

660
commands/research.md Normal file
View File

@@ -0,0 +1,660 @@
---
description: >
Perform project research and technical investigation without implementation. Explore codebase structure, technology stack, dependencies, and patterns.
  Use when understanding is low (<30%) and you need to learn before implementing. Documents findings persistently for future reference.
Uses Task agent for complex searches with efficient parallel execution.
プロジェクト理解と技術調査を行う(実装なし)。コードベース構造、技術スタック、依存関係、パターンを探索。
allowed-tools: Bash(find:*), Bash(tree:*), Bash(ls:*), Bash(git log:*), Bash(git diff:*), Bash(grep:*), Bash(cat:*), Bash(cat package.json:*), Bash(head:*), Bash(wc:*), Read, Glob, Grep, LS, Task
model: inherit
argument-hint: "[research topic or question]"
---
# /research - Advanced Project Research & Technical Investigation
## Purpose
Investigate codebase with dynamic discovery, parallel search execution, and confidence-based findings (✓/→/?), without implementation commitment.
**Output Verifiability**: All findings include evidence, distinguish facts from inferences, and explicitly state unknowns per AI Operation Principle #4.
## Dynamic Project Discovery
### Recent Commit History
```bash
!`git log --oneline -10 || echo "Not a git repository"`
```
### Technology Stack
```bash
!`ls -la package.json pyproject.toml go.mod Cargo.toml pom.xml build.gradle 2>/dev/null | head -5 || echo "No standard project files found"`
```
### Modified Files
```bash
!`git diff --name-only HEAD~1 2>/dev/null | head -10 || echo "No recent changes"`
```
### Documentation Files
```bash
!`find . -name "*.md" | grep -v node_modules | head -10 || echo "No documentation found"`
```
### Core File Count
```bash
!`find . -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" | grep -v node_modules | wc -l`
```
## Quick Context Analysis
### Test Framework Detection
```bash
!`grep -E "jest|mocha|vitest|pytest|unittest" package.json 2>/dev/null | head -3 || echo "No test framework detected"`
```
### API Endpoints
```bash
!`grep -r "app.get\|app.post\|app.put\|app.delete\|router." --include="*.js" --include="*.ts" | head -10 || echo "No API endpoints found"`
```
### Configuration Files
```bash
!`ls -la .env* .config* *.config.* 2>/dev/null | head -10 || echo "No configuration files found"`
```
### Package Dependencies
```bash
!`grep -E '"dependencies"|"devDependencies"' -A 10 package.json 2>/dev/null || echo "No package.json found"`
```
### Recent Issues/TODOs
```bash
!`grep -r "TODO\|FIXME\|HACK" --include="*.ts" --include="*.tsx" --include="*.js" | head -10 || echo "No TODOs found"`
```
## Hierarchical Research Process
### Phase 1: Scope Discovery
Analyze project to understand:
1. **Architecture**: Identify project structure and patterns
2. **Technology**: Detect frameworks, libraries, tools
3. **Conventions**: Recognize coding standards and practices
4. **Entry Points**: Find main files, exports, APIs
### Phase 2: Parallel Investigation
Execute searches concurrently for efficiency:
- **Pattern Search**: Multiple grep operations in parallel
- **File Discovery**: Simultaneous glob patterns
- **Dependency Tracing**: Parallel import analysis
- **Documentation Scan**: Concurrent README/docs reading
### Phase 3: Synthesis & Scoring
Consolidate findings with confidence levels:
1. **Confidence Scoring**: Rate each finding (0.0-1.0)
2. **Pattern Recognition**: Identify recurring themes
3. **Relationship Mapping**: Connect related components
4. **Priority Assessment**: Rank by importance
## Context-Based Automatic Level Selection
The `/research` command automatically determines the appropriate investigation depth based on context analysis, eliminating the need for manual level specification.
### How Context Analysis Works
When you run `/research`, the AI automatically:
1. **Analyzes Current Context**
- Conversation history and flow
- Previous research findings
- Current work phase (planning/implementation/debugging)
- User's implied intent
2. **Selects Optimal Level**
- **Quick Scan (30 sec)**: Initial project exploration, overview requests
- **Standard Research (2-3 min)**: Implementation preparation, specific inquiries
- **Deep Investigation (5 min)**: Problem solving, comprehensive analysis
3. **Explains the Decision**
```text
🔍 Research Level: Standard (auto-selected)
Reason: Implementation context detected - gathering detailed information
```
### Context Determination Criteria
```markdown
## Automatic Level Selection Logic
### Quick Scan Selected When:
- First interaction with a project
- User asks for overview or summary
- No specific problem to solve
- General exploration needed
### Standard Research Selected When:
- Following up on previous findings
- Preparing for implementation
- Specific component investigation
- Normal development workflow
### Deep Investigation Selected When:
- Debugging or troubleshooting context
- Previous research was insufficient
- Complex system analysis needed
- Multiple interconnected components involved
```
## Research Strategies
### Quick Scan (Auto-selected: 30 sec)
Surface-level understanding:
- Project structure overview
- Main technologies identification
- Key file discovery
### Standard Research (Auto-selected: 2-3 min)
Balanced depth and breadth:
- Core architecture understanding
- Key patterns identification
- Main dependencies analysis
- Implementation-ready insights
### Deep Investigation (Auto-selected: 5 min)
Comprehensive investigation:
- Complete architecture mapping
- All patterns and relationships
- Full dependency graph
- Historical context (git history)
- Root cause analysis
### Manual Override (Optional)
While context-based selection is automatic, you can override if needed:
- `/research --quick` - Force quick scan
- `/research --deep` - Force deep investigation
- Default behavior: Automatic context-based selection
## Efficient Search Patterns
### Parallel Execution Example
```typescript
// Execute these simultaneously, not sequentially
const searches = [
Grep({ pattern: "class.*Controller", glob: "**/*.ts" }),
Grep({ pattern: "export.*function", glob: "**/*.js" }),
Glob({ pattern: "**/*.test.*" }),
Glob({ pattern: "**/api/**" })
];
```
### Smart Pattern Selection
Based on initial discovery:
- **React Project**: Search for hooks, components, context
- **API Project**: Search for routes, controllers, middleware
- **Library Project**: Search for exports, types, tests
## Confidence-Based Findings
### Finding Classification
Use both numeric scores (0.0-1.0) and visual markers (✓/→/?) for clarity:
- **✓ High Confidence (> 0.8)**: Directly verified from code/files
- **→ Medium Confidence (0.5 - 0.8)**: Reasonable inference from evidence
- **? Low Confidence (< 0.5)**: Assumption requiring verification
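The score-to-marker mapping can be sketched as a small helper, assuming only the thresholds above (illustrative; the parentheses around the whole ternary keep awk from reading `>` as output redirection):

```shell
# Map a numeric confidence score (0.0-1.0) to its visual marker,
# using the thresholds above: >0.8 ✓, 0.5-0.8 →, <0.5 ?.
confidence_marker() {
  awk -v s="$1" 'BEGIN { print ((s > 0.8) ? "✓" : (s >= 0.5) ? "→" : "?") }'
}

confidence_marker 0.95   # ✓
confidence_marker 0.7    # →
confidence_marker 0.3    # ?
```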
```markdown
## ✓ High Confidence Findings (> 0.8)
### Authentication System (0.95)
- **Location**: src/auth/* (verified)
- **Type**: JWT-based (confirmed by imports)
- **Evidence**: Multiple JWT imports, token validation middleware
- **Dependencies**: jsonwebtoken, bcrypt (package.json:12-15)
- **Entry Points**: auth.controller.ts, auth.middleware.ts
## → Medium Confidence Findings (0.5 - 0.8)
### State Management (0.7)
- **Pattern**: Redux-like pattern detected
- **Evidence**: Actions, reducers folders found (src/store/)
- **Uncertainty**: Actual library unclear (Redux/MobX/Zustand?)
- **Reason**: Folder structure suggests Redux, but no explicit import found yet
## ? Low Confidence Findings (< 0.5)
### Possible Patterns
- [?] May use microservices (0.4) - multiple service folders observed
- [?] Might have WebSocket support (0.3) - socket.io in dependencies, no usage found
```
## TodoWrite Integration
Automatic task tracking:
```markdown
# Research: [Topic]
1. ⏳ Discover project structure (30 sec)
2. ⏳ Identify technology stack (30 sec)
3. ⏳ Execute parallel searches (2 min)
4. ⏳ Analyze findings (1 min)
5. ⏳ Score confidence levels (30 sec)
6. ⏳ Synthesize report (1 min)
```
## Task Agent Usage
### When to Use Explore Agent
Use `Explore` agent for:
- **Complex Investigations**: 10+ related searches
- **Exploratory Analysis**: Unknown structure
- **Relationship Mapping**: Understanding connections
- **Historical Research**: Git history analysis
- **Parallel Execution**: Multiple searches simultaneously
- **Result Structuring**: Clean, organized output
- **Fast Codebase Exploration**: Haiku-powered for efficiency
### Explore Agent Integration
```typescript
// Explore agent with automatic level selection
Task({
subagent_type: "Explore",
  thoroughness: determineThoroughnessLevel(), // "quick" | "medium" | "very thorough"
description: "Codebase exploration and investigation",
prompt: `
Topic: "${researchTopic}"
Investigate:
1. Architecture: organization, entry points (file:line), patterns [✓]
2. Tech stack: frameworks (versions), languages, testing [✓]
3. Key components: modules, APIs, config [✓]
4. Code patterns: conventions, practices [→]
5. Relationships: dependencies, data flow, integration [→]
Report format:
- Coverage, confidence [✓/→/?]
- Key findings by confidence with evidence (file:line)
- Verification notes: verified, inferred, unknown
`
})
// Helper function for determining thoroughness level
function determineThoroughnessLevel() {
// Auto-select based on context
if (isQuickScanNeeded()) return "quick"; // 30s overview
if (isDeepDiveNeeded()) return "very thorough"; // 5m comprehensive
return "medium"; // 2-3m balanced (default)
}
```
### Context-Aware Research Task (Default)
```typescript
// Default: Explore agent with automatic thoroughness selection
Task({
subagent_type: "Explore",
thoroughness: "medium", // Auto-adjusted based on context if needed
description: "Codebase investigation",
prompt: `
Research Topic: "${topic}"
Explore the codebase and provide structured findings:
1. Architecture & Structure [✓]
2. Technology Stack [✓]
3. Key Components & Patterns [→]
4. Relationships & Dependencies [→]
5. Unknowns requiring investigation [?]
Include:
- Confidence markers (✓/→/?)
- Evidence (file:line references)
- Clear distinction between facts and inferences
Return organized report with verification notes.
`
})
```
### Manual Override Examples
```typescript
// Force quick scan (when you know you need just an overview)
Task({
subagent_type: "Explore",
thoroughness: "quick",
description: "Quick overview scan",
prompt: `
Topic: "${topic}"
Provide quick overview (30 seconds):
- Top 5 key findings
- Basic architecture
- Main technologies
- Entry points
Each finding: one line with confidence marker (✓/→/?)
`
})
// Force deep investigation (when you know you need everything)
Task({
subagent_type: "Explore",
thoroughness: "very thorough",
description: "Comprehensive investigation",
prompt: `
Topic: "${topic}"
Comprehensive analysis (5 minutes):
- Complete architecture mapping
- All patterns and relationships
- Historical context (git history)
- Root cause analysis
- Security and performance considerations
Return detailed findings with full evidence and context.
Include confidence markers and verification notes.
`
})
```
## Advanced Features
### Cross-Reference Analysis
Connect findings across different areas:
```bash
grep -l "AuthController" **/*.ts | xargs grep -l "UserService"
```
### Import Dependency Graph
Trace module dependencies:
```bash
grep -h "^import.*from" **/*.ts | sed "s/.*from ['\"]\.\/\(.*\)['\"].*/\1/" | sort | uniq -c | sort -rn | head -10
```
### Pattern Frequency Analysis
Identify common patterns:
```bash
grep -oh "use[A-Z][a-zA-Z]*" **/*.tsx | sort | uniq -c | sort -rn | head -10
```
### Historical Context
Understand evolution:
```bash
git log --oneline --since="3 months ago" --pretty=format:"%h %s" | head -10
```
## Output Format
**IMPORTANT**: Apply Output Verifiability principle - use ✓/→/? markers with evidence.
```markdown
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 Research Context (Context Engineering Structure)
🎯 Purpose
- Why this research is being conducted
- What we aim to achieve
📋 Prerequisites
- [✓] Known constraints & requirements (verified)
- [→] Inferred environment & configuration
- [?] Unknown dependencies (need verification)
📊 Available Data
- Related files: [file paths discovered]
- Tech stack: [frameworks/libraries identified]
- Existing implementation: [what was found]
🔒 Constraints
- Security requirements
- Performance limitations
- Compatibility constraints
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Research Summary
- **Scope**: [What was researched]
- **Duration**: [Time taken]
- **Overall Confidence**: [Score with marker: ✓/→/?]
- **Coverage**: [% of codebase analyzed]
## Key Discoveries
### ✓ Architecture (0.9)
- **Pattern**: [MVC/Microservices/Monolith]
- **Structure**: [Description]
- **Entry Points**: [Main files with line numbers]
- **Evidence**: [Specific files/imports that confirm this]
### ✓ Technology Stack (0.95)
- **Framework**: [React/Vue/Express/etc]
- **Language**: [TypeScript/JavaScript]
- **Key Libraries**: [List with versions from package.json:line]
- **Evidence**: [package.json dependencies, import statements]
### → Code Patterns (0.7)
- **Design Patterns**: [Observer/Factory/etc]
- **Conventions**: [Naming/Structure]
- **Best Practices**: [Identified patterns]
- **Inference Basis**: [Why we believe this - folder structure/naming]
## Findings by Confidence
### ✓ High Confidence (>0.8)
1. [✓] [Finding] - Evidence: [file:line, specific code reference]
2. [✓] [Finding] - Evidence: [direct observation]
### → Medium Confidence (0.5-0.8)
1. [→] [Finding] - Inferred from: [logical reasoning]
2. [→] [Finding] - Likely because: [supporting evidence]
### ? Low Confidence (<0.5)
1. [?] [Possible pattern] - Needs verification: [what to check]
2. [?] [Assumption] - Unknown: [what information is missing]
## Relationships Discovered
- [✓] [Component A] → [Component B]: [Relationship] (verified in imports)
- [→] [Service X] ← [Module Y]: [Dependency] (inferred from structure)
## Recommendations
1. **Immediate Focus**: [Most important areas]
2. **Further Investigation**: [Areas needing deeper research with specific unknowns]
3. **Implementation Approach**: [If moving to /code]
## References
- Key Files: [List with absolute paths and relevant line numbers]
- Documentation: [Absolute paths to docs]
- External Resources: [URLs if found]
## Verification Notes
- **What was directly verified**: [List with ✓]
- **What was inferred**: [List with →]
- **What remains unknown**: [List with ?]
```
## Persistent Documentation
For significant findings, save to:
```bash
.claude/workspace/research/YYYY-MM-DD-[topic].md
```
Include:
- Architecture diagrams (ASCII)
- Dependency graphs
- Key code snippets
- Future reference notes
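Composing that dated filename can be sketched as follows — the slug rule (lowercase, spaces to hyphens) is an assumption for illustration:

```shell
# Compose the dated research-note path described above.
research_note_path() {
  # Lowercase the topic and replace spaces with hyphens to form a slug.
  slug=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
  printf '.claude/workspace/research/%s-%s.md\n' "$(date +%F)" "$slug"
}

research_note_path "Auth System"
# e.g. .claude/workspace/research/2025-01-15-auth-system.md
```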
### Context Engineering Integration
**IMPORTANT**: Always save structured context for `/think` integration.
**Context File**: `.claude/workspace/research/[timestamp]-[topic]-context.md`
**Format**:
```markdown
# Research Context: [Topic]
Generated: [Timestamp]
Overall Confidence: [✓/→/?] [Score]
## 🎯 Purpose
[Why this research was conducted]
[What we aim to achieve with this knowledge]
## 📋 Prerequisites
### Verified Facts (✓)
- [✓] [Fact] - Evidence: [file:line or source]
### Working Assumptions (→)
- [→] [Assumption] - Based on: [reasoning]
### Unknown/Needs Verification (?)
- [?] [Unknown] - Need to check: [what/where]
## 📊 Available Data
### Related Files
- [file paths with relevance notes]
### Technology Stack
- [frameworks/libraries with versions]
### Existing Implementation
- [what was found with evidence]
## 🔒 Constraints
### Security
- [security requirements identified]
### Performance
- [performance limitations discovered]
### Compatibility
- [compatibility constraints found]
## 📌 Key Findings Summary
[Brief summary of most important discoveries for quick reference]
## 🔗 References
- Detailed findings: [link to full research doc]
- Related SOWs: [if any exist]
```
**Usage Flow**:
1. `/research` generates both detailed findings AND structured context
2. Context file is automatically saved for `/think` to discover
3. `/think` reads latest context file to inform planning
## Usage Examples
### Default Context-Based Selection (Recommended)
```bash
/research "authentication system"
# AI automatically selects appropriate level based on context
# Example: Selects "Standard" if preparing for implementation
# Example: Selects "Deep" if debugging authentication issues
# Example: Selects "Quick" if first time exploring the project
```
### Automatic Level Selection Examples
```bash
# Scenario 1: First project exploration
/research
# → Auto-selects Quick scan (30s overview)
# Scenario 2: After finding a bug
/research "user validation error"
# → Auto-selects Deep investigation (root cause analysis)
# Scenario 3: During implementation planning
/research "database schema"
# → Auto-selects Standard research (implementation details)
```
### Manual Override (When Needed)
```bash
# Force quick overview
/research --quick "API structure"
# Force deep analysis
/research --deep "complete system architecture"
# Default (recommended): Let AI decide
/research "payment processing"
```
## Best Practices
1. **Start Broad**: Get overview before diving deep
2. **Parallel Search**: Execute multiple searches simultaneously
3. **Apply Output Verifiability**: Use ✓/→/? markers with evidence
- [✓] Direct verification: Include file paths and line numbers
- [→] Logical inference: Explain reasoning
- [?] Assumptions: Explicitly state what needs confirmation
4. **Score Confidence**: Always rate findings reliability (0.0-1.0)
5. **Document Patterns**: Note recurring themes
6. **Map Relationships**: Connect components with evidence
7. **Save Important Findings**: Persist for future reference
8. **Admit Unknowns**: Never pretend to know - explicitly list gaps
## Performance Tips
### Optimize Searches
- Use specific globs: `**/*.controller.ts` not `**/*`
- Limit depth when possible: `-maxdepth 3`
- Exclude irrelevant: `-not -path "*/test/*"`
### Efficient Patterns
- Batch related searches together
- Use Task agent for 10+ operations
- Cache common queries results
## Next Steps
- **Found Issues** → `/fix` for targeted solutions
- **Need Planning** → `/think` for architecture decisions
- **Ready to Build** → `/code` for implementation
- **Documentation Needed** → Create comprehensive docs

485
commands/review.md Normal file
View File

@@ -0,0 +1,485 @@
---
description: >
Orchestrate multiple specialized review agents with dynamic context analysis, hierarchical task decomposition, and confidence-based filtering.
Use after code changes or when comprehensive quality assessment is needed. Includes security, performance, accessibility, type safety, and more.
All findings include evidence (file:line) and confidence markers (✓/→/?) per Output Verifiability principles.
複数の専門エージェントによるコードレビューを実行。セキュリティ、パフォーマンス、アクセシビリティなど包括的な品質評価。
allowed-tools: Bash(git diff:*), Bash(git status:*), Bash(git log:*), Bash(git show:*), Read, Glob, Grep, LS, Task
model: inherit
argument-hint: "[target files or scope]"
---
# /review - Advanced Code Review Orchestrator
## Purpose
Orchestrate multiple specialized review agents with dynamic context analysis, hierarchical task decomposition, and confidence-based filtering.
**Output Verifiability**: All review findings include evidence (file:line), distinguish verified issues (✓) from inferred problems (→), per AI Operation Principle #4.
## Integration with Skills
This command explicitly references the following Skills:
- [@~/.claude/skills/security-review/SKILL.md] - Security review knowledge based on OWASP Top 10
Other Skills are automatically loaded through each review agent's dependencies:
- `performance-reviewer``performance-optimization` skill
- `readability-reviewer``readability-review` skill
- `progressive-enhancer``progressive-enhancement` skill
## Dynamic Context Analysis
### Git Status
Check git status:
```bash
!`git status --porcelain`
```
### Files Changed
List changed files:
```bash
!`git diff --name-only HEAD`
```
### Recent Commits
View recent commits:
```bash
!`git log --oneline -10`
```
### Change Statistics
Show change statistics:
```bash
!`git diff --stat HEAD`
```
## Specification Context (Auto-Detection)
### Discover Latest Spec
Search for spec.md in SOW workspace using Glob tool (approved):
```markdown
Use Glob tool to find spec.md:
- Pattern: ".claude/workspace/sow/**/spec.md"
- Alternative: "~/.claude/workspace/sow/**/spec.md"
Select the most recent spec.md if multiple exist (check modification time).
```
### Load Specification
**If spec.md exists**, load it for review context:
- Provides functional requirements for alignment checking
- Enables "specification vs implementation" verification
- Implements Article 2's approach: spec.md in review prompts
- Allows reviewers to identify gaps such as "the specification defines this case, but the implementation does not handle it"
**If spec.md does not exist**:
- Review proceeds with code-only analysis
- Focus on code quality, security, and best practices
- Consider creating specification with `/think` for future reference
## Execution
Invoke the review-orchestrator agent to perform comprehensive code review:
```typescript
Task({
subagent_type: "review-orchestrator",
description: "Comprehensive code review",
prompt: `
Execute comprehensive code review with the following requirements:
### Review Context
- Changed files: Use git status and git diff to identify scope
- Recent commits: Analyze recent changes for context
- Project type: Detect technology stack automatically
- Specification: If spec.md exists in workspace, verify implementation aligns with specification requirements
- Check if all functional requirements (FR-xxx) are implemented
- Identify missing features defined in spec
- Flag deviations from API specifications
- Compare actual vs expected behavior per spec
### Review Process
1. **Context Discovery**: Analyze repository structure and technology stack
2. **Parallel Reviews**: Launch specialized review agents concurrently
- structure-reviewer, readability-reviewer, progressive-enhancer
- type-safety-reviewer, design-pattern-reviewer, testability-reviewer
- performance-reviewer, accessibility-reviewer
- document-reviewer (if .md files present)
3. **Filtering & Consolidation**: Apply confidence filters and deduplication
### Output Requirements
- All findings MUST include evidence (file:line)
- Use confidence markers: ✓ (>0.8), → (0.5-0.8)
- Only include findings with confidence >0.7
- Group by severity: Critical, High, Medium, Low
- Provide actionable recommendations
Report results in Japanese.
`
})
```
## Hierarchical Review Process Details
### Phase 1: Context Discovery
Analyze repository and determine review scope:
1. Analyze repository structure and technology stack
2. Identify review scope (changed files, directories)
3. Detect code patterns and existing quality standards
4. Determine applicable review categories
### Phase 2: Parallel Specialized Reviews
Launch multiple review agents concurrently:
- Each agent focuses on specific aspect
- Independent execution for efficiency
- Collect raw findings with confidence scores
### Phase 3: Filtering and Consolidation
Apply multi-level filtering with evidence requirements:
1. **Confidence Filter**: Only issues with >0.7 confidence
2. **Evidence Requirement**: All findings MUST include:
- File path with line number (e.g., `src/auth.ts:42`)
- Specific code reference or pattern
- Reasoning for the issue
3. **False Positive Filter**: Apply exclusion rules
4. **Deduplication**: Merge similar findings
5. **Prioritization**: Sort by impact and severity
**Confidence Mapping**:
- ✓ High Confidence (>0.8): Verified issue with direct code evidence
- → Medium Confidence (0.5-0.8): Inferred problem with reasoning
- ? Low Confidence (<0.5): Not included in output (too uncertain)
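As a minimal sketch, the filter → marker → deduplicate → prioritize pipeline could look like this (the `Finding` shape and function name are illustrative, not part of any agent API):

```typescript
interface Finding {
  file: string;       // e.g. "src/auth.ts"
  line: number;
  message: string;
  confidence: number; // 0.0-1.0
}

function consolidate(findings: Finding[]): (Finding & { marker: string })[] {
  const seen = new Set<string>();
  return findings
    // Confidence filter: only issues above the 0.7 threshold survive
    .filter((f) => f.confidence > 0.7)
    // Confidence mapping: ✓ for verified (>0.8), → for inferred
    .map((f) => ({ ...f, marker: f.confidence > 0.8 ? "✓" : "→" }))
    // Deduplication: merge findings reported at the same location
    .filter((f) => {
      const key = `${f.file}:${f.line}:${f.message}`;
      if (seen.has(key)) return false;
      seen.add(key);
      return true;
    })
    // Prioritization: highest confidence first
    .sort((a, b) => b.confidence - a.confidence);
}
```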
## Review Agents and Their Focus
### Core Architecture Reviewers
- `review-orchestrator`: Coordinates all review activities
- `structure-reviewer`: Code organization, DRY violations, coupling
- `root-cause-reviewer`: Deep problem analysis, architectural debt
### Quality Assurance Reviewers
- `readability-reviewer`: Code clarity, naming, complexity
- `type-safety-reviewer`: TypeScript coverage, any usage, type assertions
- `testability-reviewer`: Test design, mocking, coverage gaps
### Specialized Domain Reviewers
- Security review (via `security-review` skill): OWASP Top 10 vulnerabilities, auth issues, data exposure
- `accessibility-reviewer`: WCAG compliance, keyboard navigation, ARIA
- `performance-reviewer`: Bottlenecks, bundle size, rendering issues
- `design-pattern-reviewer`: Pattern consistency, React best practices
- `progressive-enhancer`: CSS-first solutions, graceful degradation
- `document-reviewer`: README quality, API docs, inline comments
## Exclusion Rules
### Automatic Exclusions (False Positive Prevention)
1. **Style Issues**: Formatting, indentation (handled by linters)
2. **Minor Naming**: Unless severely misleading
3. **Test Files**: Focus on production code unless requested
4. **Generated Code**: Build outputs, vendor files
5. **Documentation**: Unless specifically reviewing docs
6. **Theoretical Issues**: Without concrete exploitation path
7. **Performance Micro-optimizations**: Unless measurable impact
8. **Missing Features**: Feature requests are excluded; report actual bugs/issues only
### Context-Aware Exclusions
- Framework-specific patterns (React/Angular/Vue idioms)
- Project conventions (detected from existing code)
- Language-specific safety (memory-safe languages)
- Environment assumptions (browser vs Node.js)
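A hypothetical predicate for the automatic exclusions might look like the following; the path patterns and category names are assumptions for illustration, not the actual filter rules:

```typescript
// Illustrative false-positive gate: returns true when a finding should
// be excluded before it reaches the report.
function isExcluded(filePath: string, category: string): boolean {
  const generated = [/^dist\//, /^build\//, /\/vendor\//];   // generated code
  const testFiles = [/\.test\.[jt]sx?$/, /\/__tests__\//];   // test files
  if (generated.some((re) => re.test(filePath))) return true;
  if (testFiles.some((re) => re.test(filePath))) return true; // unless tests were requested
  if (category === "style") return true;                      // linters handle formatting
  return false;
}
```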
## Output Format with Confidence Scoring
**IMPORTANT**: Use both numeric scores (0.0-1.0) and visual markers (✓/→) for clarity.
```markdown
[REVIEW OUTPUT TEMPLATE]
Review Summary
- Files Reviewed: [Count and list]
- Total Issues: [Count by severity with markers]
- Review Coverage: [Percentage]
- Overall Confidence: [✓/→] [Average score]
## ✓ Critical Issues 🚨 (Confidence > 0.9)
Issue #1: [Title]
- **Marker**: [✓] High Confidence
- **File**: path/to/file.ts:42-45
- **Category**: security|performance|accessibility|etc
- **Confidence**: 0.95
- **Evidence**: [Specific code snippet or pattern found]
- **Description**: [Detailed explanation]
- **Impact**: [User/system impact]
- **Recommendation**: [Specific fix with code example]
- **References**: [Related files, docs, or standards]
## ✓ High Priority ⚠️ (Confidence > 0.8)
Issue #2: [Title]
- **Marker**: [✓] High Confidence
- **File**: path/to/another.ts:123
- **Evidence**: [Direct observation]
- **Description**: [Issue explanation]
- **Recommendation**: [Fix with example]
## → Medium Priority 💡 (Confidence 0.7-0.8)
Issue #3: [Title]
- **Marker**: [→] Medium Confidence
- **File**: path/to/file.ts:200
- **Inference**: [Reasoning behind this finding]
- **Description**: [Issue explanation]
- **Recommendation**: [Suggested improvement]
- **Note**: Verify this inference before implementing fix
Improvement Opportunities
[→] Lower confidence suggestions (0.5-0.7) for consideration
- Mark as [→] to indicate these are recommendations, not confirmed issues
Metrics
- Code Quality Score: [A-F rating] [✓/→]
- Technical Debt Estimate: [Hours] [✓/→]
- Test Coverage Gap: [Percentage] [✓]
- Security Posture: [Rating] [✓/→]
Recommended Actions
1. **Immediate** [✓]: [Critical fixes with evidence]
2. **Next Sprint** [✓/→]: [High priority items]
3. **Backlog** [→]: [Nice-to-have improvements]
Evidence Summary
- **Verified Issues** [✓]: [Count] - Direct code evidence
- **Inferred Problems** [→]: [Count] - Based on patterns/reasoning
- **Total Confidence**: [Overall score]
```
## Review Strategies
### Quick Review (2-3 min)
Focus areas:
- Security vulnerabilities
- Critical bugs
- Breaking changes
- Accessibility violations
Command: `/review --quick`
### Standard Review (5-7 min)
Everything in Quick, plus:
- Performance issues
- Type safety problems
- Test coverage gaps
- Code organization
Command: `/review` (default)
### Deep Review (10+ min)
Comprehensive analysis:
- All standard checks
- Root cause analysis
- Technical debt assessment
- Refactoring opportunities
- Architecture evaluation
Command: `/review --deep`
### Focused Review
Target specific areas:
- `/review --security` - Security focus
- `/review --performance` - Performance focus
- `/review --accessibility` - A11y focus
- `/review --architecture` - Design patterns
## TodoWrite Integration
Automatic task creation:
```markdown
[TODO LIST TEMPLATE]
Code Review: [Target]
1. ⏳ Context discovery and scope analysis
2. ⏳ Execute specialized review agents (parallel)
3. ⏳ Filter and validate findings (confidence > 0.7)
4. ⏳ Consolidate and prioritize results
5. ⏳ Generate actionable recommendations
```
## Custom Review Instructions
Support for project-specific rules:
- `.claude/review-rules.md` - Project conventions
- `.claude/exclusions.md` - Custom exclusions
- `.claude/review-focus.md` - Priority areas
## Advanced Features
### Incremental Reviews
Compare against baseline:
```bash
!`git diff origin/main...HEAD --name-only`
```
### Pattern Detection
Identify recurring issues:
- Similar problems across files
- Systemic architectural issues
- Common anti-patterns
### Learning Mode
Track and improve:
- False positive patterns
- Project-specific idioms
- Team preferences
## Usage Examples
### Basic Review
```bash
/review
# Reviews all changed files with standard depth
```
### Targeted Review
```bash
/review "authentication module"
# Focuses on auth-related code
```
### Security Audit
```bash
/review --security --deep
# Comprehensive security analysis
```
### Pre-PR Review
```bash
/review --compare main
# Reviews changes against main branch
```
### Component Review
```bash
/review "src/components" --accessibility
# A11y review of components directory
```
## Best Practices
1. **Review Early**: Catch issues before they compound
2. **Review Incrementally**: Small, frequent reviews > large, rare ones
3. **Apply Output Verifiability**:
- **Always provide evidence**: File paths with line numbers
- **Use confidence markers**: ✓ for verified, → for inferred
- **Explain reasoning**: Why is this an issue?
- **Reference standards**: Link to docs, best practices, or past issues
- **Never guess**: If uncertain, mark as [→] and explain the inference
4. **Act on High Confidence**: Focus on ✓ (>0.8) issues first
5. **Validate Inferences**: [→] markers require verification before fixing
6. **Track Patterns**: Identify recurring problems
7. **Customize Rules**: Add project-specific exclusions
8. **Iterate on Feedback**: Tune confidence thresholds
## Integration Points
### Pre-commit Hook
```bash
claude review --quick || exit 1
```
### CI/CD Pipeline
```yaml
- name: Code Review
run: claude review --security --performance
```
### PR Comments
Results formatted for GitHub/GitLab comments
## Applied Development Principles
### Output Verifiability (AI Operation Principle #4)
All review findings MUST follow Output Verifiability:
- **Distinguish verified from inferred**: Use ✓/→ markers
- **Provide evidence**: File:line for every issue
- **State confidence explicitly**: Numeric + visual marker
- **Explain reasoning**: Why is this problematic?
- **Admit uncertainty**: [→] when inferred, never pretend to know
### Principles Guide
[@~/.claude/rules/PRINCIPLES_GUIDE.md](~/.claude/rules/PRINCIPLES_GUIDE.md) - Foundation for review prioritization
Application:
- **Priority Matrix**: Categorize issues by Essential > Default > Contextual principles
- **Conflict Resolution**: Decision criteria for DRY vs Readable, SOLID vs Simple, etc.
- **Red Flags**: Method chains > 3 levels, can't understand in 1 minute, "just in case" implementations
### Documentation Rules
[@~/.claude/docs/DOCUMENTATION_RULES.md](~/.claude/docs/DOCUMENTATION_RULES.md) - Review report format and structure
Application:
- **Clarity First**: Understandability over completeness
- **Consistency**: Unified report format with ✓/→ markers
- **Actionable Recommendations**: Specific improvement actions with evidence
## Next Steps After Review
- **Critical Issues** → `/hotfix` for production issues
- **Bugs** → `/fix` for development fixes
- **Refactoring** → `/think``/code` for improvements
- **Performance** → Targeted optimization with metrics
- **Tests** → `/test` with coverage goals
- **Documentation** → Update based on findings

122
commands/sow.md Normal file
View File

@@ -0,0 +1,122 @@
---
description: >
Display current SOW progress status, showing acceptance criteria completion, key metrics, and build status.
Read-only viewer for active work monitoring. Lists and views Statement of Work documents stored in workspace.
Use to check implementation progress anytime during development.
SOW文書の一覧表示と閲覧。受け入れ基準の完了状況、主要メトリクス、ビルドステータスを表示。
allowed-tools: Read, Bash(ls:*), Bash(find:*), Bash(cat:*)
model: inherit
---
# /sow - SOW Document Viewer
## Purpose
List and view Statement of Work (SOW) documents stored in the workspace.
**Simplified**: Read-only viewer for planning documents.
## Functionality
### List SOWs
```bash
!`ls -la ~/.claude/workspace/sow/`
```
### View Latest SOW
```bash
!`ls -t ~/.claude/workspace/sow/*/sow.md | head -1 | xargs cat`
```
### View Specific SOW
```bash
# By date or feature name
!`cat ~/.claude/workspace/sow/[directory]/sow.md`
```
## Output Format
```markdown
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📚 Available SOW Documents
1. 2025-01-14-oauth-authentication
Created: 2025-01-14
Status: Draft
2. 2025-01-13-api-refactor
Created: 2025-01-13
Status: Active
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
To view a specific SOW:
/sow "oauth-authentication"
```
## Usage Examples
### List All SOWs
```bash
/sow
```
Shows all available SOW documents with creation dates.
### View Latest SOW
```bash
/sow --latest
```
Displays the most recently created SOW.
### View Specific SOW
```bash
/sow "feature-name"
```
Shows the SOW for a specific feature.
## Integration with Workflow
```markdown
1. Create SOW: /think "feature"
2. View SOW: /sow
3. Track Tasks: Use TodoWrite independently
4. Reference SOW during implementation
```
## Simplified Design
- **Read-only**: No modification capabilities
- **Static documents**: SOWs are planning references
- **Clear separation**: SOW for planning, TodoWrite for execution
## Related Commands
- `/think` - Create new SOW
- `/todos` - View current tasks (separate from SOW)
## Applied Principles
### Single Responsibility
- SOW viewer only views documents
- No complex synchronization
### Occam's Razor
- Simple file listing and viewing
- No unnecessary features
### Progressive Enhancement
- Start with basic viewing
- Add search/filter if needed later

217
commands/test.md Normal file
View File

@@ -0,0 +1,217 @@
---
description: >
Run project tests and validate code quality through comprehensive testing. Automatically discovers test commands from package.json, README, or project configuration.
Handles unit, integration, and E2E tests with progress tracking via TodoWrite. Includes browser testing for UI changes when applicable.
Use after implementation to verify functionality and quality standards.
プロジェクトのテストを実行し、包括的なテストでコード品質を検証。ユニット、統合、E2Eテストに対応。
allowed-tools: Bash(npm test), Bash(npm run), Bash(yarn test), Bash(yarn run), Bash(pnpm test), Bash(pnpm run), Bash(bun test), Bash(bun run), Bash(npx), Read, Glob, Grep, TodoWrite, Task
model: inherit
argument-hint: "[test scope or specific tests]"
---
# /test - Test Execution & Quality Validation
## Purpose
Run project tests and ensure code quality through comprehensive testing and validation.
## Initial Discovery
### Check Package Manager
```bash
!`ls package*.json 2>/dev/null | head -1`
```
## Test Execution Process
### 1. Run Tests
Use TodoWrite to track testing progress.
First, check available scripts in package.json:
```bash
cat package.json
```
Then run appropriate test command:
```bash
npm test
```
Alternative package managers:
```bash
yarn test
```
```bash
pnpm test
```
```bash
bun test
```
### 2. Coverage Analysis
Run tests with coverage:
```bash
npm test -- --coverage
```
Check coverage directory:
```bash
ls coverage/
```
### 3. Test Gap Analysis
**Purpose**: Automatically identify missing tests and suggest improvements based on coverage data.
Use test-generator agent to analyze coverage gaps:
```typescript
Task({
subagent_type: "test-generator",
description: "Analyze test coverage gaps",
prompt: `
Test Results: ${testResults}
Coverage: ${coverageData}
Analyze gaps:
1. Uncovered code: files <80%, untested functions, branches [✓]
2. Missing scenarios: edge cases, error paths, boundaries [→]
3. Quality issues: shallow tests, missing assertions [→]
4. Generate test code for priority areas [→]
Return: Test code snippets (not descriptions), coverage improvement estimate.
Mark: [✓] verified gaps, [→] suggested tests.
`
})
```
#### Gap Analysis Output
```markdown
## Test Coverage Gaps
### High Priority (< 50% coverage)
- **File**: src/utils/validation.ts
- **Lines**: 45-67 (23 lines uncovered)
- **Issue**: No tests for error cases
- **Suggested Test**:
```typescript
describe('validation', () => {
it('should handle invalid input', () => {
expect(() => validate(null)).toThrow('Invalid input')
})
})
```
### Medium Priority (50-80% coverage)
- **File**: src/services/api.ts
- **Lines**: 120-135
- **Issue**: Network error handling not tested
### Edge Cases Not Covered
1. Boundary conditions (empty arrays, null values)
2. Concurrent operations
3. Timeout scenarios
### Estimated Impact
- Adding suggested tests: 65% → 85% coverage
- Effort: ~2 hours
- Critical paths covered: 95%+
```
### 4. Quality Checks
#### Linting
```bash
npm run lint
```
#### Type Checking
```bash
npm run type-check
```
Alternative:
```bash
npx tsc --noEmit
```
#### Format Check
```bash
npm run format:check
```
## Result Analysis
### Test Results Summary
Provide clear summary of:
- Total tests run
- Passed/Failed breakdown
- Execution time
- Coverage percentage (if measured)
### Failure Analysis
For failed tests:
1. Identify test file and line number
2. Analyze failure reason
3. Suggest specific fix
4. Link to relevant code
### Coverage Report
When coverage is available:
- Line coverage percentage
- Branch coverage percentage
- Uncovered critical paths
- Suggestions for improvement
## TodoWrite Integration
Automatic task tracking:
```markdown
1. Discover test infrastructure
2. Run test suite
3. Analyze failures (if any)
4. Generate coverage report
5. Analyze test gaps (NEW - test-generator)
6. Execute quality checks
7. Summarize results
```
**Enhanced with test-generator**: Step 5 now includes automated gap analysis and test suggestions.
## Best Practices
1. **Fix Immediately**: Don't accumulate test debt
2. **Monitor Coverage**: Track trends over time
3. **Prioritize Failures**: Fix broken tests before adding new ones
4. **Document Issues**: Keep failure patterns for future reference
## Next Steps
Based on results:
- **Failed Tests** → Use `/fix` to address specific failures
- **Low Coverage** → Add tests for uncovered critical paths
- **All Green** → Ready for commit/PR
- **Quality Issues** → Fix lint/type errors first

663
commands/think.md Normal file
View File

@@ -0,0 +1,663 @@
---
description: >
Create a comprehensive Statement of Work (SOW) for feature development or problem solving.
Use when planning complex tasks, defining acceptance criteria, or structuring implementation approaches.
Ideal for tasks requiring detailed analysis, risk assessment, and structured planning documentation.
構造化された計画文書SOWを生成。複雑なタスクの計画、受け入れ基準の定義、実装アプローチの構造化が必要な場合に使用。
allowed-tools: Bash(git log:*), Bash(git diff:*), Bash(git branch:*), Read, Write, Glob, Grep, LS, Task
model: inherit
argument-hint: "[feature or problem description]"
---
# /think - Simple SOW Generator
## Purpose
Create a comprehensive Statement of Work (SOW) as a static planning document for feature development or problem solving.
**Simplified**: Focus on planning and documentation without complex automation.
**Output Verifiability**: All analyses, assumptions, and solutions marked with ✓/→/? to distinguish facts from inferences per AI Operation Principle #4.
## Context Engineering Integration
**IMPORTANT**: Before starting analysis, check for existing research context.
### Automatic Context Discovery
```bash
# Search for latest research context files
!`find .claude/workspace/research ~/.claude/workspace/research -name "*-context.md" 2>/dev/null | sort -r | head -1`
```
### Context Loading Strategy
1. **Check for recent research**: Look for context files in the last 24 hours
2. **Prioritize project-local**: `.claude/workspace/research/` over global `~/.claude/workspace/research/`
3. **Extract key information**: Purpose, Prerequisites, Available Data, Constraints
4. **Integrate into planning**: Use research findings to inform SOW creation
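The loading strategy above can be sketched as follows. This is a simplified, non-recursive version (the real search walks subdirectories), and the helper name and mtime-based notion of "most recent" are assumptions:

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Find the newest *-context.md file, preferring the project-local
// workspace over the global one. Within each root, "most recent"
// is taken to mean the latest modification time.
function latestContext(): string | null {
  const roots = [
    path.join(".claude", "workspace", "research"),                // project-local first
    path.join(os.homedir(), ".claude", "workspace", "research"),  // global fallback
  ];
  for (const root of roots) {
    if (!fs.existsSync(root)) continue;
    const candidates = fs
      .readdirSync(root)
      .filter((f) => f.endsWith("-context.md"))
      .map((f) => path.join(root, f))
      .sort((a, b) => fs.statSync(b).mtimeMs - fs.statSync(a).mtimeMs);
    if (candidates.length > 0) return candidates[0];              // newest wins
  }
  return null;
}
```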
### Context-Informed Planning
If research context is found:
- **🎯 Purpose**: Align SOW goals with research objectives
- **📋 Prerequisites**: Build on verified facts, validate assumptions
- **📊 Available Data**: Reference discovered files and stack
- **🔒 Constraints**: Respect identified limitations
**Benefits**:
- **Higher confidence**: Planning based on actual codebase knowledge
- **Fewer assumptions**: Replace unknowns with verified facts
- **Better estimates**: Realistic based on discovered complexity
- **Aligned goals**: Purpose-driven from research to implementation
## Codebase Analysis Phase
### Purpose
Before creating SOW, analyze the existing codebase to understand:
- Similar existing implementations
- Current architecture patterns
- Impact scope and affected modules
- Dependencies and integration points
### When to Invoke Plan Agent
**Default behavior** - Plan agent is invoked by default unless explicitly greenfield:
```typescript
// Decision logic
const needsCodebaseAnalysis =
// Default: invoke Plan agent unless explicitly greenfield
!/new project|from scratch|prototype|poc/i.test(featureDescription);
```
**Skip conditions** - Plan agent is NOT invoked only for:
- Greenfield projects ("new project", "from scratch")
- Prototypes and POCs
- Standalone utilities
- Documentation-only changes
### Plan Agent Invocation
When conditions are met, invoke Plan agent with medium thoroughness:
```typescript
Task({
subagent_type: "Plan",
model: "haiku",
thoroughness: "medium",
description: "Analyze codebase for feature context",
prompt: `
Feature: "${featureDescription}"
Investigate:
1. Existing patterns: similar implementations (file:line), architecture [✓]
2. Related modules: affected files, dependencies, integration points [✓]
3. Tech stack: libraries, conventions, testing [✓]
4. Recommendations: approach, reusable modules, challenges [→]
5. Impact: scope estimate, breaking changes, risks [→]
Return findings with markers: [✓] verified, [→] inferred, [?] unknown.
`
})
```
### Integration with SOW Generation
Plan agent findings are incorporated into SOW sections:
**Problem Analysis**:
```markdown
### Current State [✓]
${planFindings.existingPatterns}
- Located at: ${planFindings.fileReferences}
- Currently using: ${planFindings.techStack}
```
**Solution Design**:
```markdown
### Recommended Approach [→]
Based on codebase analysis:
- Follow pattern from: ${planFindings.similarImplementation}
- Integrate with: ${planFindings.integrationPoints}
- Reuse: ${planFindings.reusableModules}
```
**Dependencies**:
```markdown
### Existing [✓]
${planFindings.currentDependencies}
### New [→]
${inferredNewDependencies}
```
**Risks & Mitigations**:
```markdown
### High Confidence Risks [✓]
${planFindings.identifiedRisks}
### Potential Risks [→]
${planFindings.impactAssessment}
```
### Benefits
- **Higher confidence**: Planning based on actual codebase knowledge
- **Fewer assumptions**: Replace [?] unknowns with [✓] verified facts
- **Better alignment**: Solutions consistent with existing patterns
- **Realistic estimates**: Based on discovered complexity
- **Reduced surprises**: Identify integration challenges upfront
### Cost-Benefit Analysis
| Metric | Without Plan | With Plan |
|--------|-------------|-----------|
| **SOW Accuracy** | [→] 65-75% | [✓] 85-95% |
| **Assumptions** | Many [?] items | Mostly [✓] items |
| **Implementation surprises** | High | Low |
| **Additional cost** | $0 | +$0.05-0.15 |
| **Additional time** | 0s | +5-15s |
[→] Haiku-powered for minimal cost impact
## Dynamic Project Context
### Current State Analysis
```bash
!`git branch --show-current`
!`git log --oneline -5`
```
## SOW Structure
### Required Sections
**IMPORTANT**: Use ✓/→/? markers throughout to distinguish facts, inferences, and assumptions.
```markdown
# SOW: [Feature Name]
Version: 1.0.0
Status: Draft
Created: [Date]
## Executive Summary
[High-level overview with confidence markers]
## Problem Analysis
Use markers to indicate confidence:
- [✓] Verified issues (confirmed by logs, user reports, code review)
- [→] Inferred problems (reasonable deduction from symptoms)
- [?] Suspected issues (requires investigation)
## Assumptions & Prerequisites
**NEW SECTION**: Explicitly state what we're assuming to be true.
### Verified Facts (✓)
- [✓] Fact 1 - Evidence: [source]
- [✓] Fact 2 - Evidence: [file:line]
### Working Assumptions (→)
- [→] Assumption 1 - Based on: [reasoning]
- [→] Assumption 2 - Inferred from: [evidence]
### Unknown/Needs Verification (?)
- [?] Unknown 1 - Need to check: [what/where]
- [?] Unknown 2 - Requires: [action needed]
## Solution Design
### Proposed Approach
[Main solution with confidence level]
### Alternatives Considered
1. [✓/→/?] Option A - [Pro/cons with evidence]
2. [✓/→/?] Option B - [Pro/cons with reasoning]
### Recommendation
[Chosen solution] - Confidence: [✓/→/?]
Rationale: [Evidence-based reasoning]
## Test Plan
### Unit Tests (Priority: High)
- [ ] Function: `calculateDiscount(count)` - Returns 20% for 15+ purchases
- [ ] Function: `calculateDiscount(count)` - Returns 10% for <15 purchases
- [ ] Function: `calculateDiscount(count)` - Handles zero/negative input
### Integration Tests (Priority: Medium)
- [ ] API: POST /users - Creates user with valid data
- [ ] API: POST /users - Rejects duplicate email with 409
### E2E Tests (Priority: Low)
- [ ] User registration flow - Complete signup process
## Acceptance Criteria
Mark each with confidence:
- [ ] [✓] Criterion 1 - Directly from requirements
- [ ] [→] Criterion 2 - Inferred from user needs
- [ ] [?] Criterion 3 - Assumed, needs confirmation
## Implementation Plan
[Phases and steps with dependencies noted]
## Success Metrics
- [✓] Metric 1 - Measurable: [how]
- [→] Metric 2 - Estimated: [basis]
## Risks & Mitigations
### High Confidence Risks (✓)
- Risk 1 - Evidence: [past experience, data]
- Mitigation: [specific action]
### Potential Risks (→)
- Risk 2 - Inferred from: [analysis]
- Mitigation: [contingency plan]
### Unknown Risks (?)
- Risk 3 - Monitor: [what to watch]
- Mitigation: [preparedness]
## Verification Checklist
Before starting implementation, verify:
- [ ] All [?] items have been investigated
- [ ] Assumptions [→] have been validated
- [ ] Facts [✓] have current evidence
```
## Key Features
### 1. Problem Definition
- Clear articulation of the issue
- Impact assessment
- Stakeholder identification
- **Confidence markers**: ✓ verified / → inferred / ? suspected
### 2. Assumptions & Prerequisites
- Explicit statement of what's assumed
- Distinction between facts and inferences
- Clear identification of unknowns
- **Prevents**: Building on false assumptions
### 3. Solution Exploration
- Multiple approaches considered
- Trade-off analysis with evidence
- Recommendation with rationale
- **Each option marked**: ✓/→/? for confidence level
### 4. Acceptance Criteria
- Clear, testable criteria
- User-facing outcomes
- Technical requirements
- **Confidence per criterion**: Know which need verification
### 5. Risk Assessment
- Technical risks with evidence
- Timeline risks with reasoning
- Mitigation strategies
- **Categorized by confidence**: ✓ confirmed / → potential / ? unknown
## Output
### Dual-Document Generation
The `/think` command generates **two complementary documents**:
1. **SOW (Statement of Work)** - High-level planning
2. **Spec (Specification)** - Implementation-ready details
### Generation Process
**IMPORTANT**: Both documents MUST be generated using the Write tool:
1. **Create output directory**: `.claude/workspace/sow/[timestamp]-[feature-name]/`
2. **Generate sow.md**: Use SOW template structure (sections 1-8)
3. **Generate spec.md**: Use Spec template structure (sections 1-10)
4. **Confirm creation**: Display save locations with ✅ indicators
**Example**:
```bash
# After generating both documents
✅ SOW saved to: .claude/workspace/sow/2025-01-18-auth-feature/sow.md
✅ Spec saved to: .claude/workspace/sow/2025-01-18-auth-feature/spec.md
```
### Output Location (Auto-Detection)
Both files are saved using git-style directory search:
1. **Search upward** from current directory for `.claude/` directory
2. **If found**: Save to `.claude/workspace/sow/[timestamp]-[feature]/` (project-local)
3. **If not found**: Save to `~/.claude/workspace/sow/[timestamp]-[feature]/` (global)
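The git-style upward search can be sketched like this (the helper name and return shape are illustrative):

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Walk upward from `start` looking for a `.claude/` directory,
// the same way git searches for `.git/`. Fall back to the global
// ~/.claude when no project-local directory is found.
function resolveClaudeDir(start: string = process.cwd()): { dir: string; local: boolean } {
  let current = path.resolve(start);
  for (;;) {
    const candidate = path.join(current, ".claude");
    if (fs.existsSync(candidate) && fs.statSync(candidate).isDirectory()) {
      return { dir: candidate, local: true };
    }
    const parent = path.dirname(current);
    if (parent === current) break; // reached the filesystem root
    current = parent;
  }
  return { dir: path.join(os.homedir(), ".claude"), local: false };
}
```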
**Output Structure**:
```text
.claude/workspace/sow/[timestamp]-[feature]/
├── sow.md # Statement of Work (planning)
└── spec.md # Specification (implementation details)
```
**Feedback**: The save location is displayed with context indicator:
- `✅ SOW saved to: .claude/workspace/sow/... (Project-local: .claude/ detected)`
- `✅ Spec saved to: .claude/workspace/sow/... (Project-local: .claude/ detected)`
**Benefits**:
- **Zero-config**: Automatically adapts to project structure
- **Team sharing**: Project-local enables git-based sharing
- **Personal notes**: Global storage for exploratory work
- **Flexible**: Create `.claude/` to switch to project-local mode
- **Integrated workflow**: spec.md automatically used by `/review` and `/code`
Features:
- **SOW**: Planning document with acceptance criteria
- **Spec**: Implementation-ready specifications
- Clear acceptance criteria with confidence markers
- Explicit assumptions and prerequisites section
- Risk assessment by confidence level
- Implementation roadmap
- **Output Verifiability**: All claims marked ✓/→/? with evidence
## Integration
```bash
/think "feature" # Create planning SOW
/todos # Track implementation separately
```
## Example Usage
```bash
/think "Add user authentication with OAuth"
```
Generates:
- Comprehensive planning document
- 8-12 Acceptance Criteria
- Risk assessment
- Implementation phases
## Simplified Workflow
1. **Planning Phase**
- Use `/think` to create SOW
- **Automatic codebase analysis** (Plan agent invoked if applicable)
- Review and refine plan with verified context
2. **Execution Phase**
- Use TodoWrite for task tracking
- Reference SOW for requirements
3. **Review Phase**
- Check against acceptance criteria
- Update documentation as needed
## Related Commands
- `/code` - Implementation with TDD
- `/test` - Testing and verification
- `/review` - Code review
## Specification Document (spec.md) Structure
The spec.md provides implementation-ready details that complement the high-level SOW.
### Spec Template
```markdown
# Specification: [Feature Name]
Version: 1.0.0
Based on: SOW v1.0.0
Last Updated: [Date]
---
## 1. Functional Requirements
### 1.1 Core Functionality
[✓] FR-001: [Clear, testable requirement]
- Input: [Expected inputs with types]
- Output: [Expected outputs]
- Validation: [Validation rules]
[→] FR-002: [Inferred requirement]
- [Details with confidence marker]
### 1.2 Edge Cases
[→] EC-001: [Edge case description]
- Action: [How to handle]
- [?] To confirm: [Unclear aspects]
---
## 2. API Specification (Backend features)
### 2.1 [HTTP METHOD] /path/to/endpoint
**Request**:
```json
{
"field": "type and example"
}
```
**Response (Success - 200)**:
```json
{
"result": "expected structure"
}
```
**Response (Error - 4xx/5xx)**:
```json
{
"error": "ERROR_CODE",
"message": "Human-readable message"
}
```
---
## 3. Data Model
### 3.1 [Entity Name]
```typescript
interface EntityName {
id: string; // Description, constraints
field: type; // Purpose, validation rules
created_at: Date;
updated_at: Date;
}
```
### 3.2 Relationships
- Entity A → Entity B: [Relationship type and constraints]
---
## 4. UI Specification (Frontend features)
### 4.1 [Screen/Component Name]
**Layout**:
- Element 1: position, sizing, placeholder text
- Element 2: styling details
**Validation**:
- Real-time: [When to validate]
- Submit: [Final validation rules]
- Error display: [How to show errors]
**Responsive**:
- Mobile (< 768px): [Layout adjustments]
- Desktop: [Default layout]
### 4.2 User Interactions
- Action: [User action] → Result: [System response]
---
## 5. Non-Functional Requirements
### 5.1 Performance
[✓] NFR-001: [Measurable performance requirement]
[→] NFR-002: [Inferred performance target]
### 5.2 Security
[✓] NFR-003: [Security requirement with evidence]
[→] NFR-004: [Security consideration]
### 5.3 Accessibility
[→] NFR-005: [Accessibility standard]
[?] NFR-006: [Needs confirmation]
---
## 6. Dependencies
### 6.1 External Libraries
[✓] library-name: ^version (purpose)
[→] optional-library: ^version (if needed, basis)
### 6.2 Internal Services
[✓] ServiceName: [Purpose and interface]
[?] UnclearService: [Needs investigation]
---
## 7. Test Scenarios
### 7.1 Unit Tests
```typescript
describe('FeatureName', () => {
it('[✓] handles typical case', () => {
// Given: setup
// When: action
// Then: expected result
});
it('[→] handles edge case', () => {
// Inferred behavior
});
});
```
### 7.2 Integration Tests
[Key integration scenarios]
### 7.3 E2E Tests (if UI)
[Critical user flows]
---
## 8. Known Issues & Assumptions
### Assumptions (→)
1. [Assumption 1 - basis for assumption]
2. [Assumption 2 - needs confirmation]
### Unknown / Need Verification (?)
1. [Unknown 1 - what needs checking]
2. [Unknown 2 - where to verify]
---
## 9. Implementation Checklist
- [ ] [Specific implementation step 1]
- [ ] [Specific implementation step 2]
- [ ] Unit tests (coverage >80%)
- [ ] Integration tests
- [ ] Documentation update
---
## 10. References
- SOW: `sow.md`
- Related specs: [Links to related specifications]
- API docs: [Links to API documentation]
```
### Spec Generation Guidelines
1. **Use ✓/→/? markers consistently** - Align with SOW's Output Verifiability
2. **Be implementation-ready** - Developers should be able to code directly from the spec
3. **Include concrete examples** - API requests/responses, data structures, UI layouts
4. **Define test scenarios** - Clear Given-When-Then format
5. **Document unknowns explicitly** - [?] markers for items requiring clarification
### When to Generate Spec
- **Always with SOW**: Spec is generated alongside SOW by default
- **Update together**: When SOW changes, update spec accordingly
- **Reference in reviews**: `/review` will automatically reference spec.md
## Applied Principles
### Output Verifiability (AI Operation Principle #4)
- **Distinguish facts from inferences**: ✓/→/? markers
- **Provide evidence**: File paths, line numbers, sources
- **State confidence levels**: Explicit about certainty
- **Admit unknowns**: [?] for items needing investigation
- **Prevents**: Building plans on false assumptions
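Because the markers are plain text, they stay machine-checkable. As an illustration, the marker counts in a generated spec can be tallied with `grep -F`; the helper name `count_markers` is hypothetical, not part of `/think`:

```bash
# Sketch: tally confidence markers in a spec/SOW file.
# count_markers is a hypothetical helper, not part of /think.
count_markers() {
  spec_file="$1"
  verified=$(grep -cF '[✓]' "$spec_file" || true)
  inferred=$(grep -cF '[→]' "$spec_file" || true)
  unknown=$(grep -cF '[?]' "$spec_file" || true)
  echo "verified=$verified inferred=$inferred unknown=$unknown"
}
```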
### Occam's Razor
- Simple, static documents
- No complex automation
- Clear separation of concerns
### Progressive Enhancement
- Start with basic SOW
- Add detail as needed
- No premature optimization
- **Assumptions section**: Identify what to verify first
### Readable Documentation
- Clear structure
- Plain language
- Actionable criteria
- **Confidence markers**: Visual clarity for trust level
148
commands/validate.md Normal file
@@ -0,0 +1,148 @@
---
description: >
  Display SOW acceptance criteria as a checklist for manual verification of completed work.
  Shows criteria from the latest (or a named) SOW so each item can be reviewed against the implementation.
  Use when ready to verify implementation conformance manually.
allowed-tools: Read, Bash(ls:*), Bash(cat:*), Grep
model: inherit
---
# /validate - SOW Criteria Checker
## Purpose
Display SOW acceptance criteria for manual verification against completed work.
**Simplified**: Manual checklist review tool.
## Functionality
### Display Acceptance Criteria
```bash
# Show criteria from latest SOW
!`ls -t ~/.claude/workspace/sow/*/sow.md | head -1 | xargs grep -A 20 "Acceptance Criteria"`
```
### Manual Review Process
1. Display SOW criteria
2. Review each item manually
3. Check against implementation
4. Document findings
## Output Format
```markdown
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 SOW Validation Checklist
Feature: User Authentication
Created: 2025-01-14
## Acceptance Criteria:
□ AC-01: User registration with email
→ Check: Does registration form exist?
→ Check: Email validation implemented?
□ AC-02: Password requirements enforced
→ Check: Min 8 characters?
→ Check: Special character required?
□ AC-03: OAuth integration
→ Check: Google OAuth working?
→ Check: GitHub OAuth working?
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Manual Review Required:
- Test each feature
- Verify against criteria
- Document any gaps
```
## Usage Examples
### Validate Latest SOW
```bash
/validate
```
Shows acceptance criteria from the most recent SOW.
### Validate Specific SOW
```bash
/validate "feature-name"
```
Shows criteria for specific feature.
## Manual Validation Process
### Step 1: Review Criteria
```markdown
- Read each acceptance criterion
- Understand requirements
- Note any ambiguities
```
### Step 2: Test Implementation
```markdown
- Run application
- Test each feature
- Document behavior
```
### Step 3: Compare Results
```markdown
- Match behavior to criteria
- Identify gaps
- Note improvements needed
```
## Integration with Workflow
```markdown
1. Complete implementation
2. Run /validate to see criteria
3. Manually test each item
4. Update documentation with results
```
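The display step of that workflow can be assisted by a short pipeline that surfaces the open checklist items; this is a sketch assuming the □ marker format shown above, and the helper name `open_criteria` is hypothetical:

```bash
# Sketch: list unchecked acceptance criteria (□ lines) from a SOW file.
# open_criteria is a hypothetical helper, not part of /validate.
open_criteria() {
  grep -F '□' "$1" || echo "(no open criteria found)"
}
```

The judgment step stays manual; the pipeline only lists the items to check.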
## Simplified Approach
- **No automation**: Human judgment required
- **Clear checklist**: Easy to follow
- **Manual process**: Thorough verification
## Related Commands
- `/think` - Create SOW with criteria
- `/sow` - View full SOW document
- `/test` - Run automated tests
## Applied Principles
### Occam's Razor
- Simple checklist display
- No complex validation logic
- Human judgment valued
### Single Responsibility
- Only displays criteria
- Validation done manually
### Progressive Enhancement
- Start with manual process
- Automate later if needed
205
commands/workflow/create.md Normal file
@@ -0,0 +1,205 @@
---
description: >
Create reusable browser automation workflows using Chrome DevTools MCP through interactive step recording.
Triggers interactive workflow builder, executes steps in real browser, saves as discoverable slash command.
Use when creating E2E tests, monitoring, or automation workflows.
allowed-tools: Read, Write, Task, mcp__chrome-devtools__*
model: inherit
argument-hint: "[workflow-name]"
---
# /workflow:create - Browser Workflow Generator
Create reusable browser automation workflows using Chrome DevTools MCP through interactive step recording.
## Purpose
Generate custom browser automation workflows that can be:
- Recorded interactively through conversation
- Saved as reusable slash commands
- Executed repeatedly with `/workflow-name`
- Used for E2E testing, monitoring, or automation
## Usage
```bash
/workflow:create "workflow-name"
```
Example:
```bash
/workflow:create "login-test"
```
## How It Works
1. **Interactive Recording**: AI guides you through defining each step
2. **Live Execution**: Steps are executed in real browser as you define them
3. **Markdown Generation**: Workflow is saved as a command file
4. **Automatic Discovery**: Generated command becomes available immediately
## Workflow Recording Process
### Step 1: Initialization
```txt
You: /workflow:create "my-workflow"
AI: Creating workflow 'my-workflow'. Let's define the steps.
What should we do first?
```
### Step 2: Define Steps Interactively
```txt
You: Navigate to https://example.com
AI: ✓ Navigated to https://example.com
[Shows page snapshot]
What's next?
You: Click the login button
AI: [Shows elements with UIDs]
Which element? (Provide UID or description)
You: uid:abc123
AI: ✓ Clicked login button
What's next?
You: Type "user@example.com" in email field
AI: [Shows input fields]
Which field? (Provide UID)
You: uid:def456
AI: ✓ Typed in email field
What's next?
You: done
AI: ✓ Workflow recorded!
Saved to: .claude/commands/workflows/my-workflow.md
Execute with: /my-workflow
```
## Generated Workflow File Structure
```markdown
# /my-workflow
Brief description of what this workflow does
## Steps
1. Navigate to https://example.com
2. Click element (uid: abc123) - login button
3. Fill element (uid: def456) with "user@example.com"
4. Click element (uid: ghi789) - submit button
5. Wait for text "Welcome" to appear
## Usage
\```bash
/my-workflow
\```
## Notes
- Created: 2025-10-02
- Chrome DevTools MCP required
```
## Available Actions
During recording, you can use these actions:
- **Navigate**: `Navigate to <URL>`
- **Click**: `Click <element description>` (AI will show available elements)
- **Fill**: `Type "<text>" in <field>` (AI will show input fields)
- **Wait**: `Wait for "<text>" to appear`
- **Screenshot**: `Take screenshot`
- **Scroll**: `Scroll to <element>`
- **Done**: `done` (finish recording)
## Chrome DevTools MCP Integration
This command uses `mcp__chrome-devtools__*` tools:
- `navigate_page` - Navigate to URLs
- `take_snapshot` - Identify page elements
- `click` - Click elements by UID
- `fill` - Fill form fields
- `wait_for` - Wait for conditions
- `take_screenshot` - Capture screenshots
## File Location
Generated workflows are saved to:
```txt
.claude/commands/workflows/<workflow-name>.md
```
Once saved, the workflow becomes a discoverable slash command:
```bash
/workflow-name
```
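Saved workflows can also be enumerated directly from that directory; a minimal sketch, where the helper name `list_workflows` is hypothetical and not a built-in command:

```bash
# Sketch: print saved workflow files as slash-command names.
# list_workflows is a hypothetical helper, not a built-in command.
list_workflows() {
  for f in "$1"/*.md; do
    [ -e "$f" ] || continue
    printf '/%s\n' "$(basename "$f" .md)"
  done
}
```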
## Use Cases
- **E2E Testing**: Automate UI testing workflows
- **Monitoring**: Regular checks of critical user flows
- **Data Collection**: Scraping or form automation
- **Regression Testing**: Verify features after changes
- **Onboarding**: Document and automate setup processes
## Example Workflows
### Login Test
```bash
/workflow:create "login-test"
→ Interactive steps to test login flow
→ Saved as /login-test
```
### Price Monitor
```bash
/workflow:create "check-price"
→ Navigate to product page
→ Extract price element
→ Take screenshot
→ Saved as /check-price
```
## Tips
1. **Be Specific**: Describe elements clearly for accurate selection
2. **Use Snapshots**: Review page snapshots before selecting elements
3. **Add Waits**: Include wait steps for dynamic content
4. **Test As You Go**: Each step executes immediately for verification
5. **Edit Later**: Generated Markdown files can be manually edited
## Limitations
- Requires Chrome DevTools MCP to be configured
- Complex conditional logic requires manual editing
- JavaScript execution is supported but must be added manually
- Each workflow runs in a fresh browser session
## Related Commands
- `/test` - Run comprehensive tests including browser tests
- `/auto-test` - Automatic test runner with fixes
- `/fix` - Quick bug fixes
## Technical Details
**Storage Format**: Markdown (human-editable)
**Execution Method**: Slash command system
**MCP Tool**: Chrome DevTools MCP
**Auto-discovery**: Via `.claude/commands/workflows/` directory
---
*Generated workflows are immediately available as slash commands without restart.*