Initial commit
Author: Zhongwei Li
Date: 2025-11-29 18:50:24 +08:00
Commit: f172746dc6
52 changed files with 17406 additions and 0 deletions

# Multi-Agent Review Strategies
This file describes how to coordinate multiple specialized agents for comprehensive code review and quality assurance.
## Agent Overview
### Available Agents for Review
```yaml
workflow-coordinator:
  role: "Pre-review validation and workflow state management"
  use_first: true
  validates:
    - Implementation phase completed
    - All specification tasks done
    - Tests passing before review
    - Ready for review/completion phase
  coordinates: "Transition from implementation to review"
  criticality: MANDATORY

refactorer:
  role: "Code quality and style review"
  specializes:
    - Code readability and clarity
    - SOLID principles compliance
    - Design pattern usage
    - Code smell detection
    - Complexity reduction
  focus: "Making code maintainable and clean"

security:
  role: "Security vulnerability review"
  specializes:
    - Authentication/authorization review
    - Input validation checking
    - Vulnerability scanning
    - Encryption implementation
    - Security best practices
  focus: "Making code secure"

qa:
  role: "Test quality and coverage review"
  specializes:
    - Test coverage analysis
    - Test quality assessment
    - Edge case identification
    - Mock appropriateness
    - Performance testing
  focus: "Making code well-tested"

implementer:
  role: "Documentation and feature completeness"
  specializes:
    - Documentation completeness
    - API documentation review
    - Code comment quality
    - Example code validation
    - Feature implementation gaps
  focus: "Making code documented and complete"

architect:
  role: "Architecture and design review"
  specializes:
    - Component boundary validation
    - Dependency direction checking
    - Abstraction level assessment
    - Scalability evaluation
    - Design pattern application
  focus: "Making code architecturally sound"
  when_needed: "Major changes, new components, refactoring"
```
---
## Agent Selection Rules
### MANDATORY: Workflow Coordinator First
**Always Start with workflow-coordinator:**
```yaml
Pre-Review Protocol:
  1. ALWAYS use the workflow-coordinator agent first
  2. workflow-coordinator validates:
     - Implementation phase is complete
     - All tasks in the specification are done
     - Tests are passing
     - No blocking issues remain
  3. If validation fails:
     - Do NOT proceed to review
     - Report incomplete work to the user
     - Guide the user to complete implementation
  4. If validation passes:
     - Proceed to multi-agent review

NEVER skip workflow-coordinator validation!
```
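A minimal sketch of that gate, assuming a simple state record (the `WorkflowState` fields and function name are illustrative, not a real coordinator API):

```python
from dataclasses import dataclass

@dataclass
class WorkflowState:
    # Illustrative fields mirroring what workflow-coordinator validates
    implementation_complete: bool
    tasks_remaining: int
    tests_passing: bool
    blocking_issues: int

def ready_for_review(state: WorkflowState) -> tuple[bool, list[str]]:
    """Return (ok, reasons); review may only start when ok is True."""
    reasons = []
    if not state.implementation_complete:
        reasons.append("implementation phase not complete")
    if state.tasks_remaining > 0:
        reasons.append(f"{state.tasks_remaining} specification task(s) still open")
    if not state.tests_passing:
        reasons.append("tests are failing")
    if state.blocking_issues > 0:
        reasons.append(f"{state.blocking_issues} blocking issue(s) remain")
    return (len(reasons) == 0, reasons)
```

When the check fails, the reasons list is exactly what gets reported back to the user instead of starting the review.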
### Task-Based Agent Selection
**Use this matrix to determine which agents to invoke:**
```yaml
Authentication/Authorization Feature:
  agents:
    - security: Security requirements and review (primary)
    - refactorer: Code quality and structure
    - qa: Security and integration testing
    - implementer: Documentation completeness
  reason: Security-critical feature needs security focus

API Development:
  agents:
    - refactorer: Code structure and patterns (primary)
    - implementer: API documentation
    - qa: API testing and validation
  reason: API quality and documentation critical

Performance Optimization:
  agents:
    - refactorer: Code efficiency review (primary)
    - qa: Performance testing validation
    - architect: Architecture implications (if major)
  reason: Performance changes need quality + testing

Security Fix:
  agents:
    - security: Security fix validation (primary)
    - qa: Security test coverage
    - implementer: Security documentation
  reason: Security fixes need security expert review

Refactoring:
  agents:
    - refactorer: Code quality improvements (primary)
    - architect: Design pattern compliance (if structural)
    - qa: No-regression validation
  reason: Refactoring needs quality focus + safety

Bug Fix:
  agents:
    - refactorer: Code quality of fix (primary)
    - qa: Regression test addition
  reason: Simple bug fixes need basic review

Documentation Update:
  agents:
    - implementer: Documentation quality (primary)
  reason: Documentation changes need content review only

New Feature (Standard):
  agents:
    - refactorer: Code quality and structure
    - security: Security implications
    - qa: Test coverage and quality
    - implementer: Documentation completeness
  reason: Standard features need comprehensive review

Major System Change:
  agents:
    - architect: Architecture validation (primary)
    - refactorer: Code quality review
    - security: Security implications
    - qa: Comprehensive testing
    - implementer: Documentation update
  reason: Major changes need all-hands review
```
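The matrix can be encoded as a simple lookup; the agent names match the matrix above, while the change-type keys and the `structural` flag are assumptions of this sketch:

```python
# Hypothetical encoding of the selection matrix; first agent is the primary.
REVIEW_MATRIX = {
    "auth_feature": ["security", "refactorer", "qa", "implementer"],
    "api_development": ["refactorer", "implementer", "qa"],
    "performance": ["refactorer", "qa"],
    "security_fix": ["security", "qa", "implementer"],
    "refactoring": ["refactorer", "qa"],
    "bug_fix": ["refactorer", "qa"],
    "docs_update": ["implementer"],
    "new_feature": ["refactorer", "security", "qa", "implementer"],
    "major_change": ["architect", "refactorer", "security", "qa", "implementer"],
}

def select_agents(change_type: str, structural: bool = False) -> list[str]:
    """Unknown change types fall back to the standard new-feature set."""
    agents = list(REVIEW_MATRIX.get(change_type, REVIEW_MATRIX["new_feature"]))
    # Conditional agent: architect joins for structural refactors or major changes
    if structural and "architect" not in agents:
        agents.append("architect")
    return agents
```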
---
## Agent Coordination Patterns
### Pattern 1: Parallel Review (Standard)
**When to Use:** Most feature reviews where agents can work independently.
```yaml
Parallel Review Pattern:
  Spawn 4 agents simultaneously:
    - refactorer: Review code quality
    - security: Review security
    - qa: Review testing
    - implementer: Review documentation
  Each agent focuses on their domain independently
  Wait for all agents to complete
  Consolidate results into unified review summary

Advantages:
  - Fast (all reviews happen simultaneously)
  - Comprehensive (all domains covered)
  - Independent (no agent blocking others)

Time: ~5-10 minutes
```
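A minimal sketch of the parallel pattern using `asyncio` (the `run_agent` stub stands in for whatever actually spawns an agent; it is not a real API):

```python
import asyncio

async def run_agent(name: str, diff: str) -> dict:
    # Stand-in for a real agent invocation; replace with the actual spawn call.
    await asyncio.sleep(0)
    return {"agent": name, "findings": [], "verdict": "pass"}

async def parallel_review(diff: str) -> list[dict]:
    agents = ["refactorer", "security", "qa", "implementer"]
    # gather() runs all reviews concurrently; no agent blocks another,
    # and results come back in the same order the agents were listed
    return await asyncio.gather(*(run_agent(a, diff) for a in agents))

results = asyncio.run(parallel_review("<diff>"))
```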
**Example: New Feature Review**
```yaml
Review Authentication Feature:
Spawn in Parallel:
Use the refactorer agent to:
- Review code structure and organization
- Check SOLID principles compliance
- Identify code smells
- Suggest improvements
Use the security agent to:
- Review authentication logic
- Check password hashing
- Validate token handling
- Identify security risks
Use the qa agent to:
- Analyze test coverage
- Check edge case handling
- Validate test quality
- Identify missing tests
Use the implementer agent to:
- Review API documentation
- Check code comments
- Validate examples
- Identify doc gaps
Wait for All Completions
Consolidate:
- Combine all agent findings
- Identify common themes
- Prioritize issues
- Generate unified review summary
```
### Pattern 2: Sequential Review with Validation
**When to Use:** Critical features where one review informs the next.
```yaml
Sequential Review Pattern:
  Step 1: First agent reviews
    → Wait for completion
    → Analyze findings
  Step 2: Second agent reviews (builds on first)
    → Wait for completion
    → Analyze findings
  Step 3: Third agent reviews (builds on previous)
    → Wait for completion
    → Final analysis

Advantages:
  - Each review informs the next
  - Can adjust focus based on findings
  - Deeper analysis possible

Time: ~15-20 minutes
```
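The chain can be sketched as a fold over agents, where each agent receives the accumulated findings of its predecessors (the report shape and `stub_agent` are illustrative assumptions):

```python
def sequential_review(diff, agents, run_agent):
    """Each agent sees the accumulated findings of the agents before it."""
    context, reports = [], []
    for name in agents:
        report = run_agent(name, diff, context)
        reports.append(report)
        context.extend(report["findings"])  # the next agent builds on this
    return reports

# Stub agent: records how many prior findings it saw, adds one of its own.
def stub_agent(name, diff, context):
    return {"agent": name, "saw": len(context), "findings": [f"{name}-finding"]}

reports = sequential_review("<diff>", ["security", "refactorer", "qa"], stub_agent)
```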
**Example: Security-Critical Feature Review**
```yaml
Review Security Feature:
Step 1: Use the security agent to:
- Deep security analysis
- Identify vulnerabilities
- Define security requirements
- Assess risk level
Output: Security requirements document + vulnerability report
Step 2: Use the refactorer agent to:
- Review code with security context
- Check if vulnerabilities addressed
- Ensure secure coding patterns
- Validate security requirements met
Context: Security agent's findings
Output: Code quality + security compliance report
Step 3: Use the qa agent to:
- Review tests with security focus
- Ensure vulnerabilities tested
- Check security edge cases
- Validate security test coverage
Context: Security requirements + code review
Output: Test coverage + security testing report
Step 4: Use the implementer agent to:
- Document security implementation
- Document security tests
- Add security examples
- Document threat model
Context: All previous reviews
Output: Documentation completeness report
```
### Pattern 3: Iterative Review with Fixing
**When to Use:** When issues are expected and fixes are needed during review.
```yaml
Iterative Review Pattern:
  Loop:
    1. Agent reviews and identifies issues
    2. Same agent (or another) fixes issues
    3. Re-validate fixed issues
    4. If new issues: repeat
    5. If clean: proceed to next agent

Advantages:
  - Issues fixed during review
  - Continuous improvement
  - Final review is clean

Time: ~20-30 minutes (depends on issues)
```
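The loop can be sketched as follows; `review` and `fix` are placeholders for real agent calls, and the iteration cap guards against a fix cycle that never converges:

```python
def iterative_review(review, fix, max_iterations=5):
    """Loop review -> fix until the reviewer reports no issues (or we give up)."""
    for i in range(1, max_iterations + 1):
        issues = review()
        if not issues:
            return {"clean": True, "iterations": i}
        fix(issues)
    return {"clean": False, "iterations": max_iterations}

# Stub: two issues fixed on the first pass, clean on the second review.
outstanding = ["long function", "duplicate code"]
result = iterative_review(
    review=lambda: list(outstanding),
    fix=lambda issues: outstanding.clear(),
)
```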
**Example: Refactoring Review with Fixes**
```yaml
Review Refactored Code:
Iteration 1 - Code Quality:
Use the refactorer agent to:
- Review code structure
- Identify 3 issues:
• Function too long (85 lines)
• Duplicate code in 2 places
• Complex nested conditionals
Use the refactorer agent to:
- Break long function into 4 smaller ones
- Extract duplicate code to shared utility
- Flatten nested conditionals
Validate: All issues fixed ✅
Iteration 2 - Testing:
Use the qa agent to:
- Review test coverage
- Identify 2 issues:
• New functions not tested
• Edge case missing
Use the qa agent to:
- Add tests for new functions
- Add edge case test
Validate: Coverage now 89% ✅
Iteration 3 - Documentation:
Use the implementer agent to:
- Review documentation
- Identify 1 issue:
• Refactored functions missing docstrings
Use the implementer agent to:
- Add docstrings to all new functions
- Update examples
Validate: All documented ✅
Final: All issues addressed, ready to ship
```
### Pattern 4: Focused Review (Subset)
**When to Use:** Small changes or specific review needs.
```yaml
Focused Review Pattern:
  Select 1-2 agents based on change type:
    - Bug fix → refactorer + qa
    - Docs update → implementer only
    - Security fix → security + qa
    - Performance → refactorer + qa
  Only review relevant aspects
  Skip unnecessary reviews

Advantages:
  - Fast (minimal agents)
  - Focused (relevant only)
  - Efficient (no wasted effort)

Time: ~3-5 minutes
```
**Example: Bug Fix Review**
```yaml
Review Bug Fix:
Use the refactorer agent to:
- Review fix code quality
- Check if fix is clean
- Ensure no new issues introduced
- Validate fix approach
Use the qa agent to:
- Verify regression test added
- Check test quality
- Ensure bug scenario covered
- Validate no other tests broken
Skip:
- security (not security-related)
- implementer (no doc changes)
- architect (no design changes)
Result: Fast, focused review in ~5 minutes
```
---
## Review Aspect Coordination
### Code Quality Review (refactorer)
**Review Focus:**
```yaml
Readability:
- Variable/function names clear
- Code organization logical
- Comments appropriate
- Formatting consistent
Design:
- DRY principle applied
- SOLID principles followed
- Abstractions appropriate
- Patterns used correctly
Maintainability:
- Functions focused and small
- Complexity low
- Dependencies minimal
- No code smells
Consistency:
- Follows codebase conventions
- Naming patterns consistent
- Error handling uniform
- Style guide compliance
```
**Review Output Format:**
```yaml
Code Quality Review by refactorer:
✅ Strengths:
• Clean separation of concerns in auth module
• Consistent error handling with custom exceptions
• Good use of dependency injection pattern
• Function sizes appropriate (avg 25 lines)
⚠️ Suggestions:
• Consider extracting UserValidator to separate class
• Could simplify nested conditionals in authenticate()
• Opportunity to cache user lookups for performance
• Some variable names could be more descriptive (e.g., 'data' → 'user_data')
🚨 Required Fixes:
• None
Complexity Metrics:
• Average cyclomatic complexity: 3.2 (target: <5) ✅
• Max function length: 42 lines (target: <50) ✅
• Duplicate code: 0.8% (target: <2%) ✅
```
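Threshold enforcement like the complexity metrics above can be sketched as a small gate check (the gate names and limits mirror the targets in the sample report; the function itself is illustrative):

```python
# Illustrative gates matching the "target:" values in the sample report.
QUALITY_GATES = {
    "avg_cyclomatic_complexity": 5.0,  # target: < 5
    "max_function_length": 50,         # target: < 50 lines
    "duplicate_code_pct": 2.0,         # target: < 2%
}

def check_gates(metrics: dict) -> dict:
    """Return {gate: (value, limit)} for every gate that fails."""
    failures = {}
    for key, limit in QUALITY_GATES.items():
        if metrics[key] >= limit:  # targets are strict upper bounds
            failures[key] = (metrics[key], limit)
    return failures  # an empty dict means every gate passed

failures = check_gates({
    "avg_cyclomatic_complexity": 3.2,
    "max_function_length": 42,
    "duplicate_code_pct": 0.8,
})
```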
### Security Review (security)
**Review Focus:**
```yaml
Authentication:
- Password storage secure (hashing)
- Token validation robust
- Session management safe
- MFA properly implemented
Authorization:
- Permission checks present
- RBAC correctly implemented
- Resource ownership validated
- No privilege escalation
Input Validation:
- All inputs sanitized
- SQL injection prevented
- XSS prevented
- CSRF protection active
Data Protection:
- Sensitive data encrypted
- Secure communication (HTTPS)
- No secrets in code
- PII handling compliant
Vulnerabilities:
- No known CVEs in deps
- No hardcoded credentials
- No insecure algorithms
- No information leakage
```
**Review Output Format:**
```yaml
Security Review by security:
✅ Strengths:
• Password hashing uses bcrypt with cost 12 (recommended)
• JWT validation includes expiry, signature, and issuer checks
• Input sanitization comprehensive across all endpoints
• No hardcoded secrets or credentials found
⚠️ Suggestions:
• Consider adding rate limiting to login endpoint (prevent brute force)
• Add logging for failed authentication attempts (security monitoring)
• Consider implementing password complexity requirements
• Could add request signing for critical API operations
🚨 Required Fixes:
• None - all critical security measures in place
Vulnerability Scan:
• Dependencies: 0 critical, 0 high, 1 low (acceptable)
• Code: No security vulnerabilities detected
• Secrets: No hardcoded secrets found
```
### Test Coverage Review (qa)
**Review Focus:**
```yaml
Coverage Metrics:
- Overall coverage ≥80%
- Critical paths 100%
- Edge cases covered
- Error paths tested
Test Quality:
- Assertions meaningful
- Test names descriptive
- Tests isolated
- No flaky tests
- Mocks appropriate
Test Types:
- Unit tests for logic
- Integration for flows
- E2E for critical paths
- Performance if needed
Test Organization:
- Clear structure
- Good fixtures
- Helper functions
- Easy to maintain
```
**Review Output Format:**
```yaml
Test Coverage Review by qa:
✅ Strengths:
• Coverage at 87% (target: 80%) - exceeds requirement ✅
• Critical auth paths 100% covered
• Good edge case coverage (token expiry, invalid tokens, etc.)
• Test names clear and descriptive
• Tests properly isolated with fixtures
⚠️ Suggestions:
• Could add tests for token refresh edge cases (concurrent requests)
• Consider adding load tests for auth endpoints (performance validation)
• Some assertions could be more specific (e.g., check exact error message)
• Could add property-based tests for token generation
🚨 Required Fixes:
• None
Coverage Breakdown:
• src/auth/jwt.py: 92% (23/25 lines)
• src/auth/service.py: 85% (34/40 lines)
• src/auth/validators.py: 100% (15/15 lines)
Test Counts:
• Unit tests: 38 passed
• Integration tests: 12 passed
• Security tests: 8 passed
• Total: 58 tests, 0 failures
```
### Documentation Review (implementer)
**Review Focus:**
```yaml
API Documentation:
- All endpoints documented
- Parameters described
- Responses documented
- Examples provided
Code Documentation:
- Functions have docstrings
- Complex logic explained
- Public APIs documented
- Types annotated
Project Documentation:
- README up to date
- Setup instructions clear
- Architecture documented
- Examples working
Completeness:
- No missing docs
- Accurate and current
- Easy to understand
- Maintained with code
```
**Review Output Format:**
```yaml
Documentation Review by implementer:
✅ Strengths:
• API documentation complete with OpenAPI specs
• All public functions have clear docstrings
• README updated with authentication section
• Examples provided and tested
⚠️ Suggestions:
• Could add more code examples for token refresh flow
• Consider adding architecture diagram for auth flow
• Some docstrings could include example usage
• Could document error codes more explicitly
🚨 Required Fixes:
• None
Documentation Coverage:
• Public functions: 100% (all documented)
• API endpoints: 100% (all in OpenAPI)
• README: Up to date ✅
• Examples: 3 working examples included
```
### Architecture Review (architect)
**When to Invoke:**
```yaml
Trigger Architecture Review:
- New system components added
- Major refactoring done
- Cross-module dependencies changed
- Database schema modified
- API contract changes
- Performance-critical features
Skip for:
- Small bug fixes
- Documentation updates
- Minor refactoring
- Single-file changes
```
**Review Focus:**
```yaml
Component Boundaries:
- Clear separation of concerns
- Dependencies flow correctly
- No circular dependencies
- Proper abstraction layers
Scalability:
- Horizontal scaling supported
- No obvious bottlenecks
- Database queries optimized
- Caching appropriate
Maintainability:
- Easy to extend
- Easy to test
- Low coupling
- High cohesion
Future-Proofing:
- Flexible design
- Easy to modify
- Minimal technical debt
- Clear upgrade path
```
**Review Output Format:**
```yaml
Architecture Review by architect:
✅ Strengths:
• Clean layered architecture maintained
• Auth module well-isolated from other concerns
• JWT implementation abstracted (easy to swap if needed)
• Good use of dependency injection for testability
⚠️ Suggestions:
• Consider event-driven approach for audit logging (scalability)
• Could abstract session storage interface (flexibility)
• May want to add caching layer for user lookups (performance)
• Consider adding rate limiting at architecture level
🚨 Required Fixes:
• None
Architecture Health:
• Coupling: Low ✅
• Cohesion: High ✅
• Complexity: Manageable ✅
• Scalability: Good ✅
• Technical debt: Low ✅
```
---
## Review Consolidation
### Collecting Agent Reviews
**Consolidation Strategy:**
```yaml
Step 1: Collect All Reviews
  - Wait for all agents to complete
  - Gather all review outputs
  - Organize by agent

Step 2: Identify Common Themes
  - Issues mentioned by multiple agents
  - Conflicting suggestions (rare)
  - Critical vs. nice-to-have

Step 3: Prioritize Findings
  - 🚨 Required Fixes (blocking)
  - ⚠️ Suggestions (improvements)
  - ✅ Strengths (positive feedback)

Step 4: Generate Unified Summary
  - Overall assessment
  - Critical issues (if any)
  - Key improvements suggested
  - Ready-to-ship decision
```
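The four steps can be sketched as a merge over per-agent reports (the report dict shape is an assumption of this sketch):

```python
from collections import Counter

def consolidate(reviews):
    """Merge per-agent findings; any required fix blocks shipping."""
    required = [f for r in reviews for f in r.get("required", [])]
    suggestions = [s for r in reviews for s in r.get("suggestions", [])]
    # A suggestion raised by more than one agent becomes a common theme
    themes = [t for t, n in Counter(suggestions).items() if n > 1]
    return {
        "critical_issues": len(required),
        "suggestions": len(suggestions),
        "common_themes": themes,
        "ready_to_ship": len(required) == 0,
    }

summary = consolidate([
    {"agent": "refactorer", "required": [], "suggestions": ["add rate limiting"]},
    {"agent": "security", "required": [], "suggestions": ["add rate limiting", "log failures"]},
])
```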
### Unified Review Summary Template
```yaml
📊 Multi-Agent Review Summary
Code Quality (refactorer): ✅ EXCELLENT
Strengths: [Top 3 strengths]
Suggestions: [Top 2-3 suggestions]
Security (security): ✅ SECURE
Strengths: [Top 3 strengths]
Suggestions: [Top 2-3 suggestions]
Testing (qa): ✅ WELL-TESTED
Strengths: [Coverage metrics + top strengths]
Suggestions: [Top 2-3 suggestions]
Documentation (implementer): ✅ COMPLETE
Strengths: [Documentation coverage]
Suggestions: [Top 2-3 suggestions]
Architecture (architect): ✅ SOLID (if included)
Strengths: [Architecture assessment]
Suggestions: [Top 2-3 suggestions]
Overall Assessment: ✅ READY TO SHIP
Critical Issues: [Count] (must be 0 to ship)
Suggestions: [Count] (nice-to-have improvements)
Quality Score: [Excellent/Good/Needs Work]
Recommendation: [Ship / Fix Critical Issues / Consider Suggestions]
```
### Example Consolidated Review
```yaml
📊 Multi-Agent Review Summary
Code Quality (refactorer): ✅ EXCELLENT
Strengths:
• Clean architecture with excellent separation of concerns
• Consistent code style and naming conventions
• Low complexity (avg 3.2, target <5)
Suggestions:
• Consider extracting UserValidator class
• Simplify nested conditionals in authenticate()
Security (security): ✅ SECURE
Strengths:
• Robust bcrypt password hashing (cost 12)
• Comprehensive JWT validation
• No hardcoded secrets or credentials
Suggestions:
• Add rate limiting to prevent brute force
• Add security event logging
Testing (qa): ✅ WELL-TESTED
Strengths:
• 87% coverage (exceeds 80% target)
• All critical paths fully tested
• Good edge case coverage
Suggestions:
• Add tests for concurrent token refresh
• Consider load testing auth endpoints
Documentation (implementer): ✅ COMPLETE
Strengths:
• All APIs documented with OpenAPI
• Clear docstrings on all functions
• README updated with examples
Suggestions:
• Add architecture diagram
• More code examples for token flow
Overall Assessment: ✅ READY TO SHIP
Critical Issues: 0
Suggestions: 8 nice-to-have improvements
Quality Score: Excellent
Recommendation: SHIP - All quality gates passed. Consider addressing suggestions in future iteration.
```
---
## Agent Communication Best Practices
### Clear Context Handoff
**When Chaining Agents:**
```yaml
Good Context Handoff:
Use the security agent to review authentication
→ Output: Security review with 3 suggestions
Use the implementer agent to document security measures
Context: Security review identified token expiry, hashing, validation
Task: Document these security features in API docs
Bad Context Handoff:
Use the security agent to review authentication
Use the implementer agent to add docs
Problem: implementer doesn't know what security found
```
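A handoff can be made explicit by folding the previous agent's findings into the next agent's prompt; this sketch assumes a simple report dict:

```python
def handoff(previous_report, task):
    """Build a prompt for the next agent that carries the prior findings."""
    findings = "; ".join(previous_report["findings"]) or "none"
    return (
        f"Context: the {previous_report['agent']} agent found: {findings}. "
        f"Task: {task}"
    )

prompt = handoff(
    {"agent": "security", "findings": ["token expiry", "hashing", "validation"]},
    "Document these security features in the API docs.",
)
```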
### Explicit Review Boundaries
**Define What Each Agent Reviews:**
```yaml
Good Boundary Definition:
Use the refactorer agent to review code quality:
- Focus: Code structure, naming, patterns
- Scope: src/auth/ directory only
- Exclude: Security aspects (security agent will cover)
Bad Boundary Definition:
Use the refactorer agent to review the code
Problem: Unclear scope and focus
```
### Validation After Each Review
**Always Validate Agent Output:**
```yaml
Review Validation:
After agent completes:
1. Check review is comprehensive
2. Verify findings are actionable
3. Ensure no critical issues missed
4. Validate suggestions are reasonable
If issues:
- Re-prompt agent with clarifications
- Use different agent for second opinion
- Escalate to user if uncertain
```
---
## Quality Checkpoint Triggers
### Automatic Agent Invocation
**Based on Code Metrics:**
```yaml
High Complexity Detected:
  If cyclomatic complexity >10:
    → Use the refactorer agent to:
      - Analyze complex functions
      - Suggest simplifications
      - Break into smaller functions

Security Patterns Found:
  If authentication/encryption code:
    → Use the security agent to:
      - Review security implementation
      - Validate secure patterns
      - Check for vulnerabilities

Low Test Coverage:
  If coverage <80%:
    → Use the qa agent to:
      - Identify untested code
      - Suggest test cases
      - Improve coverage

Missing Documentation:
  If docstring coverage <90%:
    → Use the implementer agent to:
      - Identify missing docs
      - Generate docstrings
      - Add examples

Circular Dependencies:
  If circular deps detected:
    → Use the architect agent to:
      - Analyze dependency structure
      - Suggest refactoring
      - Break circular references
```
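These triggers amount to a metric-to-agent dispatch table; a sketch (the metric names are assumptions, not real tool output):

```python
def triggered_agents(metrics):
    """Map code metrics to the agents that should be invoked automatically."""
    agents = []
    if metrics.get("cyclomatic_complexity", 0) > 10:
        agents.append("refactorer")
    if metrics.get("touches_security_code", False):
        agents.append("security")
    if metrics.get("coverage_pct", 100) < 80:
        agents.append("qa")
    if metrics.get("docstring_pct", 100) < 90:
        agents.append("implementer")
    if metrics.get("circular_deps", 0) > 0:
        agents.append("architect")
    return agents
```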
---
## Multi-Agent Review Best Practices
### DO:
```yaml
✅ Best Practices:
- ALWAYS use workflow-coordinator first
- Use parallel reviews for speed when possible
- Provide clear context to each agent
- Validate each agent's output
- Consolidate findings into unified summary
- Focus agents on their expertise areas
- Skip unnecessary agents for simple changes
- Use sequential review for critical features
```
### DON'T:
```yaml
❌ Anti-Patterns:
- Skip workflow-coordinator validation
- Use all agents for every review (overkill)
- Let agents review outside their expertise
- Forget to consolidate findings
- Accept reviews without validation
- Chain agents without clear handoff
- Run sequential when parallel would work
- Use parallel when sequential needed
```
---
*Comprehensive multi-agent review strategies for quality assurance and code validation*

---
# Review Modes - Different Review Strategies
This file describes the different review modes available and when to use each one.
## Mode Overview
```yaml
Available Modes:
  full: Complete 5-phase review pipeline (default)
  quick: Fast review for small changes
  commit-only: Generate commits without PR
  validate-only: Quality checks and fixes only
  pr-only: Create PR from existing commits
  analysis: Deep code quality analysis
  archive-spec: Move completed spec to completed/

Mode Selection:
  - Auto-detect based on context
  - User specifies with flags
  - Optimize for common workflows
```
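Mode auto-detection might look like the following sketch; the signals and their precedence are assumptions of this sketch, not the actual detection rules:

```python
def detect_mode(files_changed, has_commits, docs_only, user_flag=None):
    """Pick a review mode from simple signals about the working tree."""
    if user_flag:            # an explicit flag always wins
        return user_flag
    if has_commits:
        return "pr-only"     # commits exist, only the PR is missing
    if docs_only or files_changed <= 3:
        return "quick"       # small or docs-only changes
    return "full"            # default: the complete 5-phase pipeline
```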
---
## Full Review Mode (Default)
### Overview
```yaml
Full Review Mode:
  phases: [Validate, Fix, Commit, Review, Ship]
  time: 15-30 minutes
  coverage: Comprehensive
  output: Complete PR with rich context

When to Use:
  - Completed feature ready to ship
  - Major changes need thorough review
  - Want comprehensive quality validation
  - Need multi-agent review insights
  - Creating important PR

When NOT to Use:
  - Small quick fixes (use quick mode)
  - Just need commits (use commit-only)
  - Already have commits (use pr-only)
  - Just checking quality (use validate-only)
```
### Workflow
```yaml
Phase 1: Comprehensive Validation (🔍)
- Multi-domain quality checks
- Security vulnerability scanning
- Test coverage analysis
- Documentation completeness
- Quality gate enforcement
Phase 2: Intelligent Auto-Fixing (⚡)
- Simple issue direct fixes
- Complex issue agent delegation
- Parallel fix execution
- Validation after fixes
Phase 3: Smart Commit Generation (📝)
- Change analysis and grouping
- Commit classification
- Conventional commit format
- Specification integration
Phase 4: Multi-Agent Review (🤖)
- refactorer: Code quality review
- security: Security review
- qa: Test coverage review
- implementer: Documentation review
- architect: Architecture review (if needed)
- Consolidated review summary
Phase 5: PR Creation & Shipping (🚀)
- PR title and description generation
- Quality metrics inclusion
- Review insights integration
- Automation setup
- Specification archiving
```
### Example Usage
```bash
# Command-based
/review
# Conversation-based
"Review my changes and create a PR"
"Ready to ship this feature"
"Comprehensive review of authentication implementation"
```
### Expected Output
```yaml
Output Components:
1. Quality Validation Report:
- All quality gates status
- Issues found and fixed
- Metrics (coverage, linting, etc.)
2. Generated Commits:
- List of commits created
- Conventional commit format
- Specification references
3. Multi-Agent Review Summary:
- Code quality insights
- Security assessment
- Test coverage analysis
- Documentation completeness
- Overall recommendation
4. PR Details:
- PR number and URL
- Title and description preview
- Automation applied (labels, reviewers)
- Specification archive status
Time: ~15-30 minutes
```
---
## Quick Review Mode
### Overview
```yaml
Quick Review Mode:
  phases: [Basic Validate, Auto-Fix, Simple Commit, Single Review, Basic PR]
  time: 3-5 minutes
  coverage: Essential checks only
  output: Simple PR with basic context

When to Use:
  - Small changes (1-3 files)
  - Documentation updates
  - Minor bug fixes
  - Quick hotfixes
  - Low-risk changes

When NOT to Use:
  - Major features (use full review)
  - Security changes (use full review)
  - Complex refactoring (use full review)
  - Need detailed analysis (use analysis mode)
```
### Workflow
```yaml
Phase 1: Basic Validation (🔍)
- Linting check only
- Quick test run
- No deep analysis
- Skip: Security scan, coverage analysis
Phase 2: Auto-Fix Only (⚡)
- Formatting fixes
- Linting auto-fixes
- Skip: Agent delegation
- Skip: Complex fixes
Phase 3: Simple Commit (📝)
- One commit for all changes
- Basic conventional format
- Skip: Intelligent grouping
- Skip: Complex classification
Phase 4: Single Agent Review (🤖)
- Use refactorer agent only
- Quick code quality check
- Skip: Security, QA, implementer reviews
- Skip: Consolidated summary
Phase 5: Basic PR (🚀)
- Simple title and description
- Basic quality metrics
- Skip: Detailed review insights
- Skip: Complex automation
```
### Example Usage
```bash
# Command-based
/review --quick
# Conversation-based
"Quick review for this small fix"
"Fast review, just need to ship docs"
"Simple review for typo fixes"
```
### Expected Output
```yaml
Output Components:
1. Basic Validation:
- Tests: ✅ Passed
- Linting: ✅ Clean
2. Single Commit:
- "fix(api): correct typo in error message"
3. Quick Review:
- Code quality: ✅ Good
- No major issues found
4. Simple PR:
- PR #124 created
- Basic description
- Ready for merge
Time: ~3-5 minutes
```
---
## Commit-Only Mode
### Overview
```yaml
Commit-Only Mode:
  phases: [Basic Validate, Auto-Fix, Smart Commit]
  time: 5-10 minutes
  coverage: Commit generation focused
  output: Organized commits, no PR

When to Use:
  - Want organized commits but not ready for PR
  - Working on long-running branch
  - Need to commit progress
  - Plan to create PR later
  - Want conventional commits without review

When NOT to Use:
  - Ready to ship (use full review)
  - Need quality validation (use validate-only)
  - Already have commits (no need)
```
### Workflow
```yaml
Phase 1: Basic Validation (🔍)
- Run linting
- Run tests
- Basic quality checks
- Ensure changes compile/run
Phase 2: Simple Auto-Fixing (⚡)
- Format code
- Fix simple linting issues
- Skip: Complex agent fixes
Phase 3: Smart Commit Generation (📝)
- Analyze all changes
- Group related changes
- Classify by type
- Generate conventional commits
- Include specification references
Phases Skipped:
- Multi-agent review
- PR creation
```
### Example Usage
```bash
# Command-based
/review --commit-only
# Conversation-based
"Generate commits from my changes"
"Create organized commits but don't make PR yet"
"I want proper commits but I'm not done with the feature"
```
### Expected Output
```yaml
Output Components:
1. Validation Status:
- Tests: ✅ Passed
- Linting: ✅ Clean (auto-fixed)
2. Generated Commits:
- 3 commits created:
• "feat(auth): implement JWT generation"
• "test(auth): add JWT generation tests"
• "docs(auth): document JWT implementation"
3. Summary:
- Commits created and pushed
- No PR created (as requested)
- Ready to continue work or create PR later
Time: ~5-10 minutes
```
---
## Validate-Only Mode
### Overview
```yaml
Validate-Only Mode:
  phases: [Comprehensive Validate, Auto-Fix]
  time: 5-10 minutes
  coverage: Quality checks and fixes
  output: Validation report with fixes

When to Use:
  - Check code quality before committing
  - Want to fix issues without committing
  - Unsure if ready for review
  - Need quality metrics
  - Want to ensure quality gates pass

When NOT to Use:
  - Ready to commit (use commit-only)
  - Ready to ship (use full review)
  - Just need PR (use pr-only)
```
### Workflow
```yaml
Phase 1: Comprehensive Validation (🔍)
- Multi-domain quality checks
- Security vulnerability scanning
- Test coverage analysis
- Documentation completeness
- Build validation
Phase 2: Intelligent Auto-Fixing (⚡)
- Simple issue direct fixes
- Complex issue agent delegation
- Parallel fix execution
- Re-validation after fixes
Phases Skipped:
- Commit generation
- Multi-agent review
- PR creation
```
### Example Usage
```bash
# Command-based
/review --validate-only
# Conversation-based
"Check if my code passes quality gates"
"Validate and fix issues but don't commit"
"Make sure my changes are good quality"
```
### Expected Output
```yaml
Output Components:
1. Initial Validation Report:
Code Quality: ⚠️ 3 issues
- 2 formatting issues
- 1 unused import
Security: ✅ Clean
- No vulnerabilities
Testing: ✅ Passed
- Coverage: 87%
Documentation: ⚠️ 1 issue
- 1 missing docstring
2. Auto-Fix Results:
- Formatted 2 files
- Removed unused import
- Added missing docstring
3. Final Validation:
Code Quality: ✅ Clean
Security: ✅ Clean
Testing: ✅ Passed
Documentation: ✅ Complete
Status: ✅ All quality gates passing
Ready to commit when you're ready
Time: ~5-10 minutes
```
---
## PR-Only Mode
### Overview
```yaml
PR-Only Mode:
  phases: [Multi-Agent Review, PR Creation]
  time: 10-15 minutes
  coverage: Review and PR only
  output: PR with review insights

When to Use:
  - Commits already created manually
  - Just need PR creation
  - Want review insights without re-validation
  - Already validated and fixed issues
  - Ready to ship existing commits

When NOT to Use:
  - No commits yet (use commit-only or full)
  - Need quality validation (use validate-only)
  - Need fixes (use full review)
```
### Workflow
```yaml
Phase 1: Verify Commits (✓)
- Check commits exist
- Analyze commit history
- Extract PR context
Phase 2: Multi-Agent Review (🤖)
- refactorer: Code quality review
- security: Security review
- qa: Test coverage review
- implementer: Documentation review
- Consolidated review summary
Phase 3: PR Creation (🚀)
- Extract title from commits
- Generate comprehensive description
- Include review insights
- Setup automation (labels, reviewers)
- Archive specification
Phases Skipped:
- Validation
- Auto-fixing
- Commit generation
```
### Example Usage
```bash
# Command-based
/review --pr-only
# Conversation-based
"Create PR from my existing commits"
"I already committed, just need the PR"
"Make a PR with review insights"
```
### Expected Output
```yaml
Output Components:
1. Commit Analysis:
- Found 3 commits:
• "feat(auth): implement JWT generation"
• "test(auth): add JWT tests"
• "docs(auth): document JWT"
2. Multi-Agent Review:
- Code quality: ✅ Excellent
- Security: ✅ Secure
- Testing: ✅ Well-tested
- Documentation: ✅ Complete
3. PR Created:
- PR #125: "feat: JWT Authentication"
- URL: https://github.com/user/repo/pull/125
- Labels: enhancement, security
- Reviewers: @security-team
- Specification: spec-feature-auth-001 archived
Time: ~10-15 minutes
```
---
## Deep Analysis Mode
### Overview
```yaml
Deep Analysis Mode:
phases: [Comprehensive Validate, Extended Review]
time: 20-30 minutes
coverage: In-depth analysis and metrics
output: Detailed quality report
When to Use:
- Need comprehensive quality insights
- Want to understand technical debt
- Planning refactoring
- Assessing code health
- Before major release
When NOT to Use:
- Just need quick check (use validate-only)
- Ready to ship (use full review)
- Simple changes (use quick mode)
```
### Workflow
```yaml
Phase 1: Comprehensive Validation (🔍)
- All standard quality checks
- Plus: Complexity analysis
- Plus: Technical debt assessment
- Plus: Performance profiling
- Plus: Architecture health
Phase 2: Extended Multi-Agent Review (🤖)
- All agents review (refactorer, security, qa, implementer, architect)
- Plus: Detailed metrics collection
- Plus: Historical comparison
- Plus: Trend analysis
- Plus: Actionable recommendations
Phase 3: Analysis Report Generation (📊)
- Code quality trends
- Security posture
- Test coverage evolution
- Documentation completeness
- Architecture health score
- Technical debt quantification
- Refactoring opportunities
- Performance bottlenecks
Phases Skipped:
- Commit generation
- PR creation
```
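As a rough sketch of the complexity-analysis step above: real runs would use dedicated tooling (e.g. radon for Python), but the idea of scoring a file by its branch count can be shown with plain shell. The sample file is a stand-in:

```shell
# Work in a throwaway directory with a stub source file.
cd "$(mktemp -d)"
cat > sample.py <<'EOF'
def authenticate(user):
    if user is None:
        return False
    for role in user.roles:
        if role == "admin":
            return True
    return False
EOF

# One point per line containing a branch keyword, plus one for the entry path.
branches=$(grep -cE '\b(if|elif|for|while|case)\b' sample.py)
score=$((branches + 1))
echo "sample.py complexity ~ $score"
```

This proxy over-counts keywords appearing in strings or comments; it only illustrates why the analysis phase tracks a per-file number that can trend up or down between runs.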
### Example Usage
```bash
# Command-based
/review --analysis
# Conversation-based
"Deep analysis of code quality"
"Comprehensive quality report"
"Assess codebase health"
```
### Expected Output
```yaml
Output Components:
1. Quality Metrics:
Code Quality:
- Overall score: 8.5/10
- Complexity: 3.2 avg (↓ from 3.8)
- Duplication: 1.2% (↓ from 2.1%)
- Maintainability: 85/100
Security:
- Security score: 9/10
- Vulnerabilities: 0 critical, 1 low
- Auth patterns: Excellent
- Data protection: Strong
Testing:
- Coverage: 87% (↑ from 82%)
- Test quality: 8/10
- Edge cases: Well covered
- Performance: No regressions
Documentation:
- Completeness: 92%
- API docs: 100%
- Code comments: 88%
- Examples: 3 provided
2. Trends:
- Code quality improving ↑
- Test coverage growing ↑
- Complexity decreasing ↓
- Tech debt reducing ↓
3. Recommendations:
Refactoring Opportunities:
- Extract UserValidator class (medium priority)
- Simplify authenticate() method (low priority)
- Consider caching layer (enhancement)
Performance Optimizations:
- Add database query caching
- Optimize token validation path
Security Hardening:
- Add rate limiting to auth endpoints
- Implement request signing
Technical Debt:
- Total: ~3 days of work
- High priority: 1 day
- Medium: 1.5 days
- Low: 0.5 days
Time: ~20-30 minutes
```
---
## Specification Archiving Mode
### Overview
```yaml
Specification Archiving Mode:
phases: [Verify Completion, Move Spec, Generate Summary]
time: 2-3 minutes
coverage: Specification management
output: Archived spec with completion summary
When to Use:
- Specification work complete
- All tasks and acceptance criteria met
- PR merged (or ready to merge)
- Want to archive completed work
- Clean up active specifications
When NOT to Use:
- Specification not complete
- PR not created yet (use full review)
- Still working on tasks
```
### Workflow
```yaml
Phase 1: Verify Completion (✓)
- Check all tasks completed
- Verify acceptance criteria met
- Confirm quality gates passed
- Check PR exists (if applicable)
Phase 2: Move Specification (📁)
- From: .quaestor/specs/active/<spec-id>.md
- To: .quaestor/specs/completed/<spec-id>.md
- Update status → "completed"
- Add completion_date
- Link PR URL
Phase 3: Generate Archive Summary (📝)
- What was delivered
- Key decisions made
- Lessons learned
- Performance metrics
- Completion evidence
```
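On disk, Phase 2 amounts to a move plus two metadata edits. A minimal sketch, assuming the `.quaestor/specs/` layout shown above (run from the repository root; the spec file here is a one-line stub):

```shell
# Throwaway directory standing in for a repository root.
cd "$(mktemp -d)"
spec_id="spec-feature-auth-001"
active=".quaestor/specs/active"
completed=".quaestor/specs/completed"
mkdir -p "$active" "$completed"
printf 'status: active\n' > "$active/$spec_id.md"

# Move the spec, flip its status, and stamp the completion date.
mv "$active/$spec_id.md" "$completed/$spec_id.md"
sed 's/^status: active$/status: completed/' "$completed/$spec_id.md" > tmp \
  && mv tmp "$completed/$spec_id.md"
printf 'completion_date: %s\n' "$(date +%F)" >> "$completed/$spec_id.md"
```

The PR link and archive summary would be appended the same way; they are omitted here to keep the sketch to the file operations.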
### Example Usage
```bash
# Command-based
/review --archive-spec spec-feature-auth-001
# Conversation-based
"Archive completed specification spec-feature-auth-001"
"Move spec-feature-auth-001 to completed"
"Mark authentication spec as complete"
```
### Expected Output
```yaml
Output Components:
1. Verification:
✅ All tasks completed (8/8)
✅ Acceptance criteria met (5/5)
✅ Quality gates passed
✅ PR exists (#123)
2. Archive Action:
Moved: spec-feature-auth-001.md
From: .quaestor/specs/active/
To: .quaestor/specs/completed/
Status: completed
Completion Date: 2025-10-19
3. Completion Summary:
Delivered:
- JWT authentication with refresh tokens
- Comprehensive test suite (87% coverage)
- API documentation
- Security review passed
Key Decisions:
- JWT over sessions for scalability
- bcrypt cost factor 12 for security
- Refresh token rotation every 7 days
Lessons Learned:
- Token expiry edge cases need careful testing
- Rate limiting should be in initial design
Metrics:
- Timeline: 3 days (estimated: 5 days) ✅
- Quality: All gates passed ✅
- Tests: 58 tests, 87% coverage ✅
- Security: 0 vulnerabilities ✅
Links:
- PR: #123
- Commits: abc123, def456, ghi789
Time: ~2-3 minutes
```
---
## Mode Comparison Matrix
```yaml
Feature Comparison:
Full Quick Commit Validate PR Analysis Archive
Validation ✅ ⚡ ⚡ ✅ ❌ ✅ ✅
Auto-Fixing ✅ ⚡ ⚡ ✅ ❌ ❌ ❌
Commit Generation ✅ ⚡ ✅ ❌ ❌ ❌ ❌
Multi-Agent Review ✅ ⚡ ❌ ❌ ✅ ✅✅ ❌
PR Creation ✅ ⚡ ❌ ❌ ✅ ❌ ❌
Deep Analysis ❌ ❌ ❌ ❌ ❌ ✅ ❌
Spec Archiving ✅ ❌ ❌ ❌ ✅ ❌ ✅
Legend:
✅ = Full feature
⚡ = Simplified version
✅✅ = Extended version
❌ = Not included
Time Comparison:
Mode Time Best For
──────────── ────────────── ────────────────────────────
Full 15-30 min Complete feature shipping
Quick 3-5 min Small changes, hotfixes
Commit 5-10 min Progress commits
Validate 5-10 min Quality check before commit
PR 10-15 min PR from existing commits
Analysis 20-30 min Deep quality insights
Archive 2-3 min Spec completion tracking
```
---
## Mode Selection Guidelines
### Decision Tree
```yaml
Choose Mode Based on Situation:
Do you have uncommitted changes?
No → Do you want a PR?
Yes → Use: pr-only
No → Use: analysis (for insights)
Yes → Are you ready to ship?
Yes → Use: full (comprehensive review + PR)
No → Do you want to commit?
Yes → Use: commit-only (commits without PR)
No → Do you need quality check?
Yes → Use: validate-only (check + fix)
No → Continue working
Is this a small change (<5 files)?
Yes → Use: quick (fast review)
No → Use: full (comprehensive review)
Do you need detailed metrics?
Yes → Use: analysis (deep insights)
No → Use: appropriate mode above
Is specification complete?
Yes → Use: archive-spec (after PR merged)
No → Continue implementation
```
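The decision tree above can be condensed into a small helper. Arguments are yes/no answers to the tree's questions (a `-` marks an answer the branch never reaches); the mode names match this document:

```shell
# choose_mode <uncommitted?> <ready_to_ship?> <want_pr?> <small_change?>
choose_mode() {
  uncommitted=$1 ready_to_ship=$2 want_pr=$3 small_change=$4
  if [ "$uncommitted" = no ]; then
    # Nothing to commit: either ship existing commits or gather insights.
    if [ "$want_pr" = yes ]; then echo pr-only; else echo analysis; fi
  elif [ "$ready_to_ship" = yes ]; then
    # Ready to ship: small changes take the fast path.
    if [ "$small_change" = yes ]; then echo quick; else echo full; fi
  else
    # Work in progress: save organized commits without a PR.
    echo commit-only
  fi
}

choose_mode no - yes -    # prints "pr-only"
```

The validate-only and archive-spec branches are intentionally left out: the first is a mid-loop check rather than an endpoint, and the second only applies after a PR merges.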
### Situational Recommendations
```yaml
Situation → Recommended Mode:
"I finished the feature and want to ship"
→ full: Complete review, commits, PR
"Quick typo fix in docs"
→ quick: Fast review and simple PR
"I want to save progress but not done"
→ commit-only: Organized commits, no PR
"Is my code good quality?"
→ validate-only: Check and fix issues
"I already committed, need PR"
→ pr-only: Review and PR creation
"How healthy is this codebase?"
→ analysis: Comprehensive metrics
"Feature done, PR merged"
→ archive-spec: Move spec to completed/
"Working on experimental feature"
→ commit-only: Save progress commits
"About to start refactoring"
→ analysis: Understand current state
"Hotfix for production"
→ quick: Fast review and ship
```
---
## Combining Modes
### Sequential Mode Usage
```yaml
Common Workflows:
Development → Validation → Commit → Review → Ship:
1. During development: validate-only (check quality)
2. End of day: commit-only (save progress)
3. Feature complete: full (review and PR)
4. After merge: archive-spec (archive spec)
Before Refactoring → During → After:
1. Before: analysis (understand current state)
2. During: validate-only (ensure quality)
3. After: full (review refactoring + PR)
Long Feature → Progress → Ship:
1. Daily: commit-only (save progress)
2. Weekly: validate-only (quality check)
3. Done: full (comprehensive review + PR)
4. Merged: archive-spec (archive spec)
```
---
*Comprehensive review mode documentation with clear guidelines for when to use each mode*

---
name: Reviewing and Shipping
description: Validate quality with multi-agent review, auto-fix issues, generate organized commits, and create PRs with rich context. Use after completing features to ensure quality gates pass and ship confidently.
allowed-tools: [Read, Edit, MultiEdit, Bash, Grep, Glob, TodoWrite, Task]
---
# Reviewing and Shipping
I help you ship code confidently: validate quality, fix issues, generate commits, review with agents, and create pull requests.
## When to Use Me
**Review & validate:**
- "Review my changes"
- "Check if code is ready to ship"
- "Validate quality gates"
**Create pull request:**
- "Create a PR"
- "Ship this feature"
- "Make a pull request for spec-feature-001"
**Generate commits:**
- "Generate commits from my changes"
- "Create organized commits"
## Quick Start
**Most common:** Just completed work and want to ship it
```
"Review and ship this feature"
```
I'll automatically:
1. Validate quality (tests, linting, security)
2. Fix any issues
3. Generate organized commits
4. Review with agents
5. Create PR with rich description
**Just need PR:** Already validated and committed
```
"Create a PR for spec-feature-001"
```
I'll skip validation and just create the PR.
## How I Work - Conditional Workflow
I detect what you need and adapt:
### Mode 1: Full Review & Ship (Default)
**When:** "Review my changes", "Ship this"
**Steps:** Validate → Fix → Commit → Review → PR
**Load:** `@WORKFLOW.md` for complete 5-phase process
---
### Mode 2: Quick Review
**When:** "Quick review", small changes
**Steps:** Basic validation → Fast commits → Simple PR
**Load:** `@MODES.md` for quick mode details
---
### Mode 3: Create PR Only
**When:** "Create a PR", "Make pull request"
**Steps:** Generate PR description from spec/commits → Submit
**Load:** `@PR.md` for PR creation details
---
### Mode 4: Generate Commits Only
**When:** "Generate commits", "Organize my commits"
**Steps:** Analyze changes → Create atomic commits
**Load:** `@COMMITS.md` for commit strategies
---
### Mode 5: Validate Only
**When:** "Validate my code", "Check quality"
**Steps:** Run quality gates → Report results
**Load:** `@WORKFLOW.md` Phase 1
---
### Mode 6: Deep Analysis
**When:** "Analyze code quality", "Review for issues"
**Steps:** Multi-agent review → Detailed report
**Load:** `@AGENTS.md` for review strategies
---
## Progressive Loading Pattern
**Don't load all files!** Only load what's needed for your workflow:
```yaml
User Intent Detection:
"review my changes" → Load @WORKFLOW.md (full 5-phase)
"create a PR" → Load @PR.md (PR creation only)
"generate commits" → Load @COMMITS.md (commit organization)
"quick review" → Load @MODES.md (mode selection)
"validate code" → Load @WORKFLOW.md Phase 1 (validation)
```
## The 5-Phase Workflow
**When running full review:**
### Phase 1: Validate 🔍
- Run tests, linting, type checking
- Security scan
- Documentation check
**See @WORKFLOW.md Phase 1 for validation details**
### Phase 2: Auto-Fix ⚡
- Fix simple issues (formatting)
- Delegate complex issues to agents
- Re-validate
**See @AGENTS.md for fix strategies**
### Phase 3: Generate Commits 📝
- Group related changes
- Create atomic commits
- Conventional commit format
**See @COMMITS.md for commit generation**
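One way the grouping step might work, sketched in shell: changed files are bucketed by top-level directory, and each bucket gets one conventional-commit subject. The file paths and the directory-to-type mapping are illustrative assumptions:

```shell
# Throwaway directory with a stub list of changed files
# (in practice: git diff --name-only).
cd "$(mktemp -d)"
printf 'src/auth/jwt.py\nsrc/auth/tokens.py\ntests/test_jwt.py\n' > changed.txt

# The first path segment acts as the grouping key; map it to a commit type.
msgs=$(cut -d/ -f1 changed.txt | sort -u | while read -r m; do
  case $m in
    tests) echo "test: update $m" ;;
    docs)  echo "docs: update $m" ;;
    *)     echo "feat: update $m" ;;
  esac
done)
printf '%s\n' "$msgs"
```

Real grouping also looks at whether files change together logically, not just where they live; the directory heuristic is only the starting point.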
### Phase 4: Multi-Agent Review 🤖
- Security review
- Code quality review
- Test coverage review
**See @AGENTS.md for review coordination**
### Phase 5: Create PR 🚀
- Generate description from spec/commits
- Include quality report
- Submit to GitHub
**See @PR.md for PR creation**
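A sketch of assembling the Phase 5 `gh pr create` invocation. The command is only printed here, not executed; the title, label, and body content are assumptions you would derive from the spec and commits:

```shell
title="feat: JWT Authentication"
body_file="$(mktemp)"
printf '## Summary\nImplements spec-feature-001.\n' > "$body_file"

# Print the command for inspection instead of running it.
cmd="gh pr create --title \"$title\" --body-file $body_file --label enhancement"
echo "$cmd"
```

Writing the description to a file and passing `--body-file` avoids quoting problems with multi-line markdown bodies.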
## Key Features
### Smart Quality Validation
✅ Language-specific validation (Python, Rust, JS, Go)
✅ Multi-domain checks (code, security, tests, docs)
✅ Automatic fixing of common issues
✅ Clear pass/fail reporting
### Intelligent Commit Generation
✅ Groups related changes by module
✅ Atomic commits (one logical change)
✅ Conventional commit format
✅ Links to specifications
### Multi-Agent Review
✅ Parallel agent execution
✅ Domain-specific expertise
✅ Actionable suggestions
✅ Required fix identification
### Rich PR Creation
✅ Spec-driven descriptions
✅ Quality metrics included
✅ Test coverage reported
✅ Links to specifications
✅ Review insights attached
## Common Workflows
### After Implementing a Feature
```
User: "Review and ship spec-feature-001"
Me:
1. Validate: Run tests, linting, security scan
2. Fix: Auto-fix formatting, delegate complex issues
3. Commit: Generate organized commits
4. Review: Multi-agent code review
5. PR: Create comprehensive pull request
```
### Just Need a PR
```
User: "Create PR for spec-feature-001"
Me:
1. Find completed spec
2. Generate PR description
3. Create GitHub PR
4. Report URL
```
### Want to Validate First
```
User: "Validate my code"
Me:
1. Run all quality gates
2. Report results (✅ or ❌)
3. If issues: List them with fix suggestions
4. Ask: "Fix issues and ship?" or "Just report?"
```
## Supporting Files (Load on Demand)
- **@WORKFLOW.md** (1329 lines) - Complete 5-phase process
- **@AGENTS.md** (995 lines) - Multi-agent coordination
- **@MODES.md** (869 lines) - Different workflow modes
- **@COMMITS.md** (1049 lines) - Commit generation strategies
- **@PR.md** (1094 lines) - PR creation with rich context
**Total if all loaded:** 5336 lines
**Typical usage:** 200-1500 lines (only what's needed)
## Success Criteria
**Full workflow complete when:**
- ✅ All quality gates passed
- ✅ Issues fixed or documented
- ✅ Commits properly organized
- ✅ Multi-agent review complete
- ✅ PR created with rich context
- ✅ Spec updated (if applicable)
**PR-only complete when:**
- ✅ Spec found (if spec-driven)
- ✅ PR description generated
- ✅ GitHub PR created
- ✅ URL returned to user
## Next Steps After Using
- PR created → Wait for team review
- Quality issues found → Use implementing-features to fix
- Want to iterate → Make changes, run me again
---
*I handle the entire "code is done, make it shippable" workflow. From validation to PR creation, I ensure quality and create comprehensive documentation for reviewers.*
