| description | allowed-tools | model | argument-hint |
|---|---|---|---|
| Orchestrate multiple specialized review agents with dynamic context analysis, hierarchical task decomposition, and confidence-based filtering. Use after code changes or when comprehensive quality assessment is needed. Includes security, performance, accessibility, type safety, and more. All findings include evidence (file:line) and confidence markers (✓/→/?) per Output Verifiability principles. | Bash(git diff:*), Bash(git status:*), Bash(git log:*), Bash(git show:*), Read, Glob, Grep, LS, Task | inherit | [target files or scope] |
# /review - Advanced Code Review Orchestrator

## Purpose
Orchestrate multiple specialized review agents with dynamic context analysis, hierarchical task decomposition, and confidence-based filtering.
Output Verifiability: All review findings include evidence (file:line) and distinguish verified issues (✓) from inferred problems (→), per AI Operation Principle #4.
## Integration with Skills
This command explicitly references the following Skills:
- [@~/.claude/skills/security-review/SKILL.md] - Security review knowledge based on OWASP Top 10
Other Skills are automatically loaded through each review agent's dependencies:
- performance-reviewer → performance-optimization skill
- readability-reviewer → readability-review skill
- progressive-enhancer → progressive-enhancement skill
## Dynamic Context Analysis

### Git Status
Check git status:
!`git status --porcelain`
### Files Changed
List changed files:
!`git diff --name-only HEAD`
### Recent Commits
View recent commits:
!`git log --oneline -10`
### Change Statistics
Show change statistics:
!`git diff --stat HEAD`
## Specification Context (Auto-Detection)

### Discover Latest Spec

Search for spec.md in the SOW workspace using the Glob tool (approved in allowed-tools):
Use Glob tool to find spec.md:
- Pattern: ".claude/workspace/sow/**/spec.md"
- Alternative: "~/.claude/workspace/sow/**/spec.md"
Select the most recent spec.md if multiple exist (check modification time).
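A minimal sketch of this discovery step, written in the same tool-call style as the Task example under Execution. Whether Glob results arrive newest-first, or carry timestamps at all, is an assumption to verify against the tool's actual behavior:

```typescript
// Sketch only: assumes Glob returns an array of matching paths.
const matches = await Glob({ pattern: ".claude/workspace/sow/**/spec.md" });

// Fall back to the home-directory pattern when the project workspace has no spec.
const candidates = matches.length > 0
  ? matches
  : await Glob({ pattern: "~/.claude/workspace/sow/**/spec.md" });

// Assumed newest-first ordering; if the tool returns bare paths in arbitrary
// order, stat each file and sort by modification time instead.
const latestSpec = candidates[0];
```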
### Load Specification
If spec.md exists, load it for review context:
- Provides functional requirements for alignment checking
- Enables "specification vs implementation" verification
- Implements Article 2's approach: spec.md in review prompts
- Allows reviewers to identify gaps like "the specification defines this behavior, but the implementation does not handle that case"
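Where a spec is found, the requirement-coverage check might look like the sketch below, continuing from the discovery sketch above. The FR-xxx ID convention and the Read call shape are assumptions:

```typescript
// Sketch: pull requirement IDs out of spec.md so reviewers can verify that
// each one maps to an implementation. Adjust the regex to the project's scheme.
const specText: string = await Read({ file_path: latestSpec });
const requirementIds = [...new Set(specText.match(/FR-\d+/g) ?? [])];
// Each ID can then be grepped for in the changed files; IDs with no hits are
// candidate "missing feature defined in spec" findings.
```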
If spec.md does not exist:
- Review proceeds with code-only analysis
- Focus on code quality, security, and best practices
- Consider creating a specification with /think for future reference
## Execution
Invoke the review-orchestrator agent to perform comprehensive code review:
Task({
subagent_type: "review-orchestrator",
description: "Comprehensive code review",
prompt: `
Execute comprehensive code review with the following requirements:
### Review Context
- Changed files: Use git status and git diff to identify scope
- Recent commits: Analyze recent changes for context
- Project type: Detect technology stack automatically
- Specification: If spec.md exists in workspace, verify implementation aligns with specification requirements
  - Check if all functional requirements (FR-xxx) are implemented
  - Identify missing features defined in spec
  - Flag deviations from API specifications
  - Compare actual vs expected behavior per spec
### Review Process
1. **Context Discovery**: Analyze repository structure and technology stack
2. **Parallel Reviews**: Launch specialized review agents concurrently
- structure-reviewer, readability-reviewer, progressive-enhancer
- type-safety-reviewer, design-pattern-reviewer, testability-reviewer
- performance-reviewer, accessibility-reviewer
- document-reviewer (if .md files present)
3. **Filtering & Consolidation**: Apply confidence filters and deduplication
### Output Requirements
- All findings MUST include evidence (file:line)
- Use confidence markers: ✓ (>0.8), → (0.5-0.8)
- Only include findings with confidence >0.7
- Group by severity: Critical, High, Medium, Low
- Provide actionable recommendations
Report results in Japanese.
`
})
## Hierarchical Review Process Details

### Phase 1: Context Discovery
Analyze repository and determine review scope:
- Analyze repository structure and technology stack
- Identify review scope (changed files, directories)
- Detect code patterns and existing quality standards
- Determine applicable review categories
### Phase 2: Parallel Specialized Reviews
Launch multiple review agents concurrently:
- Each agent focuses on specific aspect
- Independent execution for efficiency
- Collect raw findings with confidence scores
### Phase 3: Filtering and Consolidation
Apply multi-level filtering with evidence requirements:
- Confidence Filter: Only issues with >0.7 confidence
- Evidence Requirement: All findings MUST include:
  - File path with line number (e.g., src/auth.ts:42)
  - Specific code reference or pattern
  - Reasoning for the issue
- False Positive Filter: Apply exclusion rules
- Deduplication: Merge similar findings
- Prioritization: Sort by impact and severity
Confidence Mapping:
- ✓ High Confidence (>0.8): Verified issue with direct code evidence
- → Medium Confidence (0.5-0.8): Inferred problem with reasoning
- ? Low Confidence (<0.5): Not included in output (too uncertain)
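The filter and marker rules above can be summarized in a short sketch; the Finding shape is illustrative, not a real schema:

```typescript
// Illustrative only: a finding as the Phase 3 filter sees it.
interface Finding {
  file: string;        // evidence is mandatory: file path...
  line: number;        // ...with line number, e.g. src/auth.ts:42
  confidence: number;  // 0.0 - 1.0
  description: string;
}

// Map numeric confidence to the visual markers used in reports.
function marker(confidence: number): "✓" | "→" {
  return confidence > 0.8 ? "✓" : "→";
}

// Keep only findings above the 0.7 threshold; 0.5-0.7 items are routed to
// "Improvement Opportunities" rather than the main report, and <0.5 is dropped.
function filterFindings(findings: Finding[]): Finding[] {
  return findings
    .filter((f) => f.confidence > 0.7)
    .sort((a, b) => b.confidence - a.confidence);
}
```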
## Review Agents and Their Focus

### Core Architecture Reviewers

- review-orchestrator: Coordinates all review activities
- structure-reviewer: Code organization, DRY violations, coupling
- root-cause-reviewer: Deep problem analysis, architectural debt
### Quality Assurance Reviewers

- readability-reviewer: Code clarity, naming, complexity
- type-safety-reviewer: TypeScript coverage, any usage, type assertions
- testability-reviewer: Test design, mocking, coverage gaps
### Specialized Domain Reviewers

- Security review (via security-review skill): OWASP Top 10 vulnerabilities, auth issues, data exposure
- accessibility-reviewer: WCAG compliance, keyboard navigation, ARIA
- performance-reviewer: Bottlenecks, bundle size, rendering issues
- design-pattern-reviewer: Pattern consistency, React best practices
- progressive-enhancer: CSS-first solutions, graceful degradation
- document-reviewer: README quality, API docs, inline comments
## Exclusion Rules

### Automatic Exclusions (False Positive Prevention)
- Style Issues: Formatting, indentation (handled by linters)
- Minor Naming: Unless severely misleading
- Test Files: Focus on production code unless requested
- Generated Code: Build outputs, vendor files
- Documentation: Unless specifically reviewing docs
- Theoretical Issues: Without concrete exploitation path
- Performance Micro-optimizations: Unless measurable impact
- Missing Features: requests for new functionality are excluded; only actual bugs/issues are reported
### Context-Aware Exclusions
- Framework-specific patterns (React/Angular/Vue idioms)
- Project conventions (detected from existing code)
- Language-specific safety (memory-safe languages)
- Environment assumptions (browser vs Node.js)
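One way the automatic exclusions might be encoded, as a sketch only; the patterns are examples, and real projects would extend them via .claude/exclusions.md:

```typescript
// Illustrative exclusion predicate; not a definitive list.
const EXCLUDED_PATH_PATTERNS: RegExp[] = [
  /\.(test|spec)\.[jt]sx?$/, // test files, unless the review targets them
  /\/(dist|build)\//,        // generated build outputs
  /\/vendor\//,              // vendored third-party code
];

function isExcluded(filePath: string): boolean {
  return EXCLUDED_PATH_PATTERNS.some((re) => re.test(filePath));
}
```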
## Output Format with Confidence Scoring
IMPORTANT: Use both numeric scores (0.0-1.0) and visual markers (✓/→) for clarity.
[REVIEW OUTPUT TEMPLATE]
## Review Summary
- Files Reviewed: [Count and list]
- Total Issues: [Count by severity with markers]
- Review Coverage: [Percentage]
- Overall Confidence: [✓/→] [Average score]
## ✓ Critical Issues 🚨 (Confidence > 0.9)
Issue #1: [Title]
- **Marker**: [✓] High Confidence
- **File**: path/to/file.ts:42-45
- **Category**: security|performance|accessibility|etc
- **Confidence**: 0.95
- **Evidence**: [Specific code snippet or pattern found]
- **Description**: [Detailed explanation]
- **Impact**: [User/system impact]
- **Recommendation**: [Specific fix with code example]
- **References**: [Related files, docs, or standards]
## ✓ High Priority ⚠️ (Confidence > 0.8)
Issue #2: [Title]
- **Marker**: [✓] High Confidence
- **File**: path/to/another.ts:123
- **Evidence**: [Direct observation]
- **Description**: [Issue explanation]
- **Recommendation**: [Fix with example]
## → Medium Priority 💡 (Confidence 0.7-0.8)
Issue #3: [Title]
- **Marker**: [→] Medium Confidence
- **File**: path/to/file.ts:200
- **Inference**: [Reasoning behind this finding]
- **Description**: [Issue explanation]
- **Recommendation**: [Suggested improvement]
- **Note**: Verify this inference before implementing fix
## Improvement Opportunities
[→] Lower confidence suggestions (0.5-0.7) for consideration
- Mark as [→] to indicate these are recommendations, not confirmed issues
## Metrics
- Code Quality Score: [A-F rating] [✓/→]
- Technical Debt Estimate: [Hours] [✓/→]
- Test Coverage Gap: [Percentage] [✓]
- Security Posture: [Rating] [✓/→]
## Recommended Actions
1. **Immediate** [✓]: [Critical fixes with evidence]
2. **Next Sprint** [✓/→]: [High priority items]
3. **Backlog** [→]: [Nice-to-have improvements]
## Evidence Summary
- **Verified Issues** [✓]: [Count] - Direct code evidence
- **Inferred Problems** [→]: [Count] - Based on patterns/reasoning
- **Total Confidence**: [Overall score]
## Review Strategies

### Quick Review (2-3 min)
Focus areas:
- Security vulnerabilities
- Critical bugs
- Breaking changes
- Accessibility violations
Command: /review --quick
### Standard Review (5-7 min)

Includes everything in Quick, plus:
- Performance issues
- Type safety problems
- Test coverage gaps
- Code organization
Command: /review (default)
### Deep Review (10+ min)
Comprehensive analysis:
- All standard checks
- Root cause analysis
- Technical debt assessment
- Refactoring opportunities
- Architecture evaluation
Command: /review --deep
### Focused Review

Target specific areas:

- /review --security - Security focus
- /review --performance - Performance focus
- /review --accessibility - A11y focus
- /review --architecture - Design patterns
## TodoWrite Integration
Automatic task creation:
[TODO LIST TEMPLATE]
Code Review: [Target]
1. ⏳ Context discovery and scope analysis
2. ⏳ Execute specialized review agents (parallel)
3. ⏳ Filter and validate findings (confidence > 0.7)
4. ⏳ Consolidate and prioritize results
5. ⏳ Generate actionable recommendations
## Custom Review Instructions

Support for project-specific rules:

- .claude/review-rules.md - Project conventions
- .claude/exclusions.md - Custom exclusions
- .claude/review-focus.md - Priority areas
## Advanced Features

### Incremental Reviews
Compare against baseline:
!`git diff origin/main...HEAD --name-only`
### Pattern Detection
Identify recurring issues:
- Similar problems across files
- Systemic architectural issues
- Common anti-patterns
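One way to detect recurrence, sketched against the illustrative Finding shape from the filtering example; the normalization key is an assumption:

```typescript
// Same illustrative shape as in the filtering sketch above.
type Finding = { file: string; line: number; confidence: number; description: string };

// Group findings on a normalized description so "magic number 42" and
// "magic number 7" land in the same bucket; three or more hits suggests a
// systemic issue rather than a one-off.
function recurringPatterns(findings: Finding[], minCount = 3): Finding[][] {
  const byKey = new Map<string, Finding[]>();
  for (const f of findings) {
    const key = f.description.toLowerCase().replace(/\d+/g, "#");
    byKey.set(key, [...(byKey.get(key) ?? []), f]);
  }
  return [...byKey.values()].filter((group) => group.length >= minCount);
}
```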
### Learning Mode
Track and improve:
- False positive patterns
- Project-specific idioms
- Team preferences
## Usage Examples

### Basic Review
/review
# Reviews all changed files with standard depth
### Targeted Review
/review "authentication module"
# Focuses on auth-related code
### Security Audit
/review --security --deep
# Comprehensive security analysis
### Pre-PR Review
/review --compare main
# Reviews changes against main branch
### Component Review
/review "src/components" --accessibility
# A11y review of components directory
## Best Practices
- Review Early: Catch issues before they compound
- Review Incrementally: Small, frequent reviews > large, rare ones
- Apply Output Verifiability:
  - Always provide evidence: File paths with line numbers
  - Use confidence markers: ✓ for verified, → for inferred
  - Explain reasoning: Why is this an issue?
  - Reference standards: Link to docs, best practices, or past issues
  - Never guess: If uncertain, mark as [→] and explain the inference
- Act on High Confidence: Focus on ✓ (>0.8) issues first
- Validate Inferences: [→] markers require verification before fixing
- Track Patterns: Identify recurring problems
- Customize Rules: Add project-specific exclusions
- Iterate on Feedback: Tune confidence thresholds
## Integration Points

### Pre-commit Hook
claude review --quick || exit 1
### CI/CD Pipeline
- name: Code Review
run: claude review --security --performance
### PR Comments
Results formatted for GitHub/GitLab comments
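A hedged sketch of that formatting step; the comment layout is a suggestion, not a required schema:

```typescript
// Same illustrative shape and marker rule as in the filtering sketch above.
type Finding = { file: string; line: number; confidence: number; description: string };
const marker = (c: number): string => (c > 0.8 ? "✓" : "→");

// Render one finding as a GitHub/GitLab-flavored markdown comment body.
function toComment(f: Finding): string {
  return [
    `**[${marker(f.confidence)}] ${f.description}**`,
    "",
    `File: \`${f.file}:${f.line}\``,
    `Confidence: ${f.confidence.toFixed(2)}`,
  ].join("\n");
}
```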
## Applied Development Principles

### Output Verifiability (AI Operation Principle #4)
All review findings MUST follow Output Verifiability:
- Distinguish verified from inferred: Use ✓/→ markers
- Provide evidence: File:line for every issue
- State confidence explicitly: Numeric + visual marker
- Explain reasoning: Why is this problematic?
- Admit uncertainty: [→] when inferred, never pretend to know
### Principles Guide
@~/.claude/rules/PRINCIPLES_GUIDE.md - Foundation for review prioritization
Application:
- Priority Matrix: Categorize issues by Essential > Default > Contextual principles
- Conflict Resolution: Decision criteria for DRY vs Readable, SOLID vs Simple, etc.
- Red Flags: Method chains > 3 levels, can't understand in 1 minute, "just in case" implementations
### Documentation Rules
@~/.claude/docs/DOCUMENTATION_RULES.md - Review report format and structure
Application:
- Clarity First: Understandability over completeness
- Consistency: Unified report format with ✓/→ markers
- Actionable Recommendations: Specific improvement actions with evidence
## Next Steps After Review

- Critical Issues → /hotfix for production issues
- Bugs → /fix for development fixes
- Refactoring → /think → /code for improvements
- Performance → Targeted optimization with metrics
- Tests → /test with coverage goals
- Documentation → Update based on findings