Initial commit

skills/debugging-issues/SKILL.md (new file, +500 lines)

---
name: Debugging Issues
description: Systematically debug issues with reproduction steps, error analysis, hypothesis testing, and root cause fixes. Use when investigating bugs, analyzing production incidents, or troubleshooting unexpected behavior.
---

# Debugging Issues

## Purpose
Provides systematic approaches to debugging, troubleshooting techniques, and error-analysis strategies.

## When to Use
- Investigating bugs or unexpected behavior
- Analyzing error messages and stack traces
- Troubleshooting system issues
- Performance debugging
- Root cause analysis
- Production incident response

## Systematic Debugging Process

### 1. Reproduce the Issue
**Goal**: Create a consistent way to trigger the bug

**Steps:**
- [ ] Document exact steps to reproduce
- [ ] Identify required preconditions
- [ ] Note the environment (OS, browser, versions)
- [ ] Create a minimal reproduction case
- [ ] Verify it reproduces consistently

**Example:**
```yaml
reproduction_steps:
  - action: "Login as admin user"
  - action: "Navigate to /dashboard"
  - action: "Click 'Export Data' button"
  - expected: "CSV file downloads"
  - actual: "Error 500 appears"
  - frequency: "Occurs every time"
```

### 2. Isolate the Problem
**Goal**: Narrow down where the issue occurs

**Techniques:**
```yaml
isolation_methods:
  Divide and Conquer:
    description: "Split the system in half and test which half has the issue"
    example: "Comment out half the code, see if the error persists"

  Binary Search:
    description: "Use git bisect or similar to find the breaking commit"
    command: "git bisect start && git bisect bad && git bisect good v1.0"

  Component Isolation:
    description: "Test each component individually"
    example: "Test database, API, and frontend separately"

  Environment Comparison:
    description: "Compare working vs broken environments"
    checklist:
      - Different OS?
      - Different versions?
      - Different configurations?
      - Different data?
```

### 3. Analyze Logs and Errors
**Goal**: Gather evidence about what's going wrong

**Log Analysis:**
```yaml
log_analysis:
  error_messages:
    - Read the full error message
    - Note the error type/code
    - Identify the failing component

  stack_traces:
    - Start from the bottom (where the error was raised)
    - Identify the first non-library code
    - Check function arguments at that point

  correlation:
    - Check logs before the error
    - Look for patterns
    - Correlate with user actions
    - Check timestamps
```

**Common Error Patterns:**
```python
# NullPointerException / AttributeError
# Usually: accessing a property of a None/null object
# Fix: add null checks or ensure the object is initialized

# IndexError / ArrayIndexOutOfBoundsException
# Usually: accessing an array index that doesn't exist
# Fix: check the array length before accessing

# KeyError / property not found
# Usually: accessing a dict/object key that doesn't exist
# Fix: use .get() with a default, or check whether the key exists

# TypeError / type mismatch
# Usually: wrong type passed to a function
# Fix: validate types, add type hints

# ConnectionError / Timeout
# Usually: network issues or the service is down
# Fix: add retry logic, check service health
```
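The fixes listed above can be sketched as defensive Python; the function names and the `user` shape here are hypothetical, chosen only to illustrate the guards:

```python
# Defensive versions of the common failure patterns above.

def get_email_domain(user):
    # AttributeError/TypeError guard: user or its email may be None
    if user is None or user.get("email") is None:
        return None
    email = user["email"]
    # IndexError guard: split may not yield exactly two parts
    parts = email.split("@")
    if len(parts) != 2:
        return None
    return parts[1]

def get_setting(config, key):
    # KeyError guard: fall back to a default instead of raising
    return config.get(key, "default")

print(get_email_domain({"email": "alice@example.com"}))  # example.com
print(get_email_domain({"email": None}))                 # None
print(get_setting({}, "theme"))                          # default
```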

### 4. Form Hypothesis
**Goal**: Develop a theory about what's causing the issue

**Hypothesis Framework:**
```yaml
hypothesis_template:
  observation: "What did you observe?"
  theory: "What do you think is causing it?"
  prediction: "If the theory is correct, what else would be true?"
  test: "How can you test this?"

example:
  observation: "API returns 500 error on POST /users"
  theory: "Input validation is rejecting a valid email format"
  prediction: "If true, a different email format should work"
  test: "Try with various email formats"
```

### 5. Test the Hypothesis
**Goal**: Verify or disprove your theory

**Testing Approaches:**
```yaml
testing_methods:
  Add Logging:
    description: "Add detailed logs around the suspected area"
    example: |
      logger.debug(f"Input data: {data}")
      logger.debug(f"Validation result: {is_valid}")

  Add Breakpoints:
    description: "Pause execution to inspect state"
    tools:
      - "pdb for Python"
      - "debugger for JavaScript"
      - "gdb for C/C++"

  Change One Thing:
    description: "Modify one variable at a time"
    example: "Change the input value, run again, observe the result"

  Write Failing Test:
    description: "Create a test that reproduces the bug"
    benefit: "Ensures the fix works and prevents regression"
```

### 6. Implement Fix
**Goal**: Resolve the root cause

**Fix Strategies:**
```yaml
fix_approaches:
  Quick Fix:
    when: "Production is down"
    approach: "Minimal change to restore service"
    followup: "Proper fix later"

  Root Cause Fix:
    when: "You have time to do it right"
    approach: "Fix the underlying cause"
    benefit: "Prevents similar bugs"

  Workaround:
    when: "The fix is complex and a temporary solution is needed"
    approach: "Add special handling"
    document: "Explain why the workaround exists"
```

### 7. Verify the Fix
**Goal**: Ensure the issue is resolved

**Verification Checklist:**
- [ ] Original bug is fixed
- [ ] No new bugs introduced
- [ ] All tests pass
- [ ] Edge cases handled
- [ ] Code reviewed
- [ ] Deployed to test environment
- [ ] Tested in a production-like environment

## Debugging Techniques

### Print Debugging
```python
# Simple but effective
def calculate_total(items):
    print(f"DEBUG: items = {items}")
    total = sum(item.price for item in items)
    print(f"DEBUG: total = {total}")
    return total
```

### Interactive Debugging
```python
# Python pdb
import pdb; pdb.set_trace()
# Python 3.7+: the built-in breakpoint() does the same thing

# Common commands:
# n (next)     - Execute next line
# s (step)     - Step into function
# c (continue) - Continue execution
# p variable   - Print variable
# l (list)     - Show code context
# q (quit)     - Exit debugger
```

### Rubber Duck Debugging
```yaml
rubber_duck_method:
  step_1: "Get a rubber duck (or a patient colleague)"
  step_2: "Explain your code line by line"
  step_3: "Explain what you expect to happen"
  step_4: "Explain what actually happens"
  step_5: "Often you'll realize the issue while explaining"
```

### Binary Search Debugging
```bash
# Find which commit introduced a bug
git bisect start
git bisect bad          # Current commit is bad
git bisect good v1.0    # v1.0 was working

# Git will check out commits for you to test
# After each test, mark the commit as good or bad:
git bisect good   # if it works
git bisect bad    # if it's broken

# Git will report the problematic commit
```

### Adding Instrumentation
```python
# Add metrics to understand behavior
import time
from functools import wraps

def timing_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # perf_counter() is a monotonic clock, better suited to timing than time.time()
        start = time.perf_counter()
        result = func(*args, **kwargs)
        duration = time.perf_counter() - start
        print(f"{func.__name__} took {duration:.2f}s")
        return result
    return wrapper

@timing_decorator
def slow_function():
    # Your code here
    pass
```

## Common Debugging Scenarios

### Performance Issues
```yaml
performance_debugging:
  profile_the_code:
    python: "python -m cProfile script.py"
    node: "node --prof script.js"

  identify_bottlenecks:
    - Look for functions called many times
    - Check for slow database queries
    - Identify large memory allocations

  optimize:
    - Cache repeated calculations
    - Use more efficient algorithms
    - Add database indexes
    - Implement pagination
```

### Memory Leaks
```yaml
memory_leak_debugging:
  detect:
    - Monitor memory usage over time
    - Look for steadily increasing memory
    - Check for unclosed resources

  common_causes:
    - Unclosed file handles
    - Unclosed database connections
    - Event listeners not removed
    - Circular references
    - Large objects not garbage collected

  fix:
    - Use context managers (with statement)
    - Explicitly close connections
    - Remove event listeners
    - Break circular references
```
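The first fix above, using a context manager, guarantees that cleanup runs even if an exception interrupts the work. A minimal runnable sketch:

```python
import os
import tempfile

# Without a context manager, an exception between open() and close()
# leaks the file handle. The with statement closes it automatically.
path = os.path.join(tempfile.mkdtemp(), "data.txt")

with open(path, "w") as f:
    f.write("hello")
# f is closed here, even if the write had raised

print(f.closed)  # True
```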

### Race Conditions
```yaml
race_condition_debugging:
  symptoms:
    - Intermittent failures
    - Hard to reproduce
    - Timing-dependent

  detection:
    - Add logging with timestamps
    - Use thread/process IDs in logs
    - Add artificial delays to expose timing issues

  solutions:
    - Add proper locking (mutex, semaphore)
    - Use atomic operations
    - Redesign to avoid shared state
    - Use message queues
```
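The classic shared-counter race, with the lock-based fix from the solutions list, can be sketched as:

```python
import threading

counter = 0
lock = threading.Lock()

def increment_unsafe(n):
    # Shown for contrast: counter += 1 is a read-modify-write,
    # not atomic across threads, so updates can be lost.
    global counter
    for _ in range(n):
        counter += 1

def increment_safe(n):
    global counter
    for _ in range(n):
        with lock:  # mutex: one thread updates at a time
            counter += 1

threads = [threading.Thread(target=increment_safe, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — always correct with the lock
```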

### Database Issues
```yaml
database_debugging:
  slow_queries:
    identify: "EXPLAIN ANALYZE <query>"
    solutions:
      - Add indexes
      - Optimize joins
      - Reduce the data fetched
      - Use connection pooling

  deadlocks:
    detect: "Check database logs for deadlock errors"
    prevent:
      - Acquire locks in a consistent order
      - Keep transactions short
      - Use appropriate isolation levels

  connection_issues:
    symptoms: "Connection refused, timeout errors"
    check:
      - Database is running
      - Connection string is correct
      - Firewall/network allows the connection
      - Connection pool is not exhausted
```

## Error Analysis Patterns

### Stack Trace Reading
```text
# Example stack trace
Traceback (most recent call last):
  File "app.py", line 45, in main
    process_user(user_data)
  File "services.py", line 23, in process_user
    validate_email(user_data['email'])
  File "validators.py", line 12, in validate_email
    if '@' not in email:
TypeError: argument of type 'NoneType' is not iterable

# Analysis:
# 1. Error: TypeError at line 12 in validators.py
# 2. Cause: the 'email' variable is None
# 3. Origin: user_data['email'] is likely None, passed from services.py line 23
# 4. Fix: add a None check before validation
```
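The fix from step 4 of the analysis, sketched against a reconstruction of the validator (the exact original code is not shown, so this is illustrative):

```python
def validate_email(email):
    # Guard against None before the membership test that raised the TypeError
    if email is None:
        raise ValueError("email is required")
    if "@" not in email:
        raise ValueError(f"invalid email: {email!r}")
    return email

print(validate_email("alice@example.com"))  # alice@example.com
try:
    validate_email(None)
except ValueError as e:
    print(e)  # email is required
```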

### Error Message Interpretation
```yaml
error_interpretation:
  "Connection refused":
    likely_causes:
      - Service not running
      - Wrong port
      - Firewall blocking

  "Permission denied":
    likely_causes:
      - Insufficient file permissions
      - User lacks required role
      - Protected resource

  "Resource not found":
    likely_causes:
      - Typo in path/URL
      - Resource deleted
      - Wrong environment

  "Timeout":
    likely_causes:
      - Service too slow
      - Network issues
      - Infinite loop
      - Deadlock
```

## Debugging Checklist

### Before Starting
- [ ] Can you reproduce the issue?
- [ ] Do you have access to logs?
- [ ] Do you have a test environment?
- [ ] Is there a recent change that might have caused it?

### During Debugging
- [ ] Have you isolated the problem area?
- [ ] Have you checked the logs?
- [ ] Have you formed a hypothesis?
- [ ] Have you tested your hypothesis?
- [ ] Are you changing one thing at a time?

### Before Closing
- [ ] Is the original issue fixed?
- [ ] Have you written a test for this bug?
- [ ] Have you checked for similar bugs?
- [ ] Have you documented the root cause?
- [ ] Have you shared the knowledge with the team?

## Production Debugging

### Safe Debugging in Production
```yaml
production_debugging:
  do:
    - Add detailed logging
    - Monitor metrics
    - Use feature flags to isolate issues
    - Take snapshots/backups before changes
    - Have a rollback plan ready

  dont:
    - Don't use debugger breakpoints (they freeze the service)
    - Don't make changes without review
    - Don't restart services unnecessarily
    - Don't expose sensitive data in logs
```

### Incident Response
```yaml
incident_response:
  immediate:
    - Assess severity
    - Notify stakeholders
    - Start an incident log
    - Begin mitigation

  mitigation:
    - Restore service (roll back if needed)
    - Implement a workaround
    - Monitor closely

  resolution:
    - Identify the root cause
    - Implement the proper fix
    - Test thoroughly
    - Deploy the fix

  followup:
    - Write a postmortem
    - Update runbooks
    - Add monitoring/alerts
    - Share learnings
```

## Tools and Resources

### Debugging Tools
```yaml
tools_by_language:
  python:
    - "pdb - Interactive debugger"
    - "ipdb - Enhanced pdb"
    - "memory_profiler - Memory profiling"
    - "cProfile - Performance profiling"

  javascript:
    - "Chrome DevTools"
    - "Node.js debugger"
    - "VS Code debugger"

  general:
    - "git bisect - Find the breaking commit"
    - "curl - Test APIs"
    - "tcpdump - Network debugging"
    - "strace/dtrace - System call tracing"
```

---
*Use this skill when debugging issues or conducting root cause analysis.*

skills/implementing-features/AGENTS.md (new file, +515 lines)

# Agent Orchestration Strategies

This file describes how to coordinate multiple specialized agents for complex implementation tasks.

## Agent Overview

### Available Agents for Implementation

```yaml
workflow-coordinator:
  role: "Workflow validation and phase coordination"
  use_first: true
  validates:
    - Planning phase completed
    - Specification exists in active/
    - Prerequisites met
  coordinates: "Transition from planning to implementation"

implementer:
  role: "Core feature development"
  specializes:
    - Building new features
    - Implementing specifications
    - Writing production code
    - Updating documentation

architect:
  role: "System design and architecture"
  specializes:
    - Architecture decisions
    - Component design
    - System refactoring
    - Design patterns

security:
  role: "Security review and implementation"
  specializes:
    - Authentication systems
    - Authorization logic
    - Encryption implementation
    - Security best practices

qa:
  role: "Quality assurance and testing"
  specializes:
    - Test creation
    - Coverage analysis
    - Test strategy
    - Quality validation

refactorer:
  role: "Code improvement and consistency"
  specializes:
    - Code refactoring
    - Consistency enforcement
    - Code smell removal
    - Multi-file updates

researcher:
  role: "Code exploration and analysis"
  specializes:
    - Dependency mapping
    - Pattern identification
    - Impact analysis
    - Codebase exploration
```

---

## Agent Selection Rules

### Task-Based Selection

**Use this matrix to determine which agents to invoke:**

```yaml
Authentication Feature:
  primary: architect      # Design auth flow
  secondary: security     # Security requirements
  tertiary: implementer   # Build feature
  final: qa               # Create tests

API Development:
  primary: architect      # Design API structure
  secondary: implementer  # Build endpoints
  tertiary: qa            # Create API tests

Bug Fix:
  primary: researcher     # Find root cause
  secondary: implementer  # Fix the bug
  tertiary: qa            # Add regression test

Refactoring:
  primary: researcher     # Analyze impact
  secondary: architect    # Design new structure
  tertiary: refactorer    # Update consistently
  final: qa               # Validate no regressions

Multi-file Changes:
  primary: researcher     # Map dependencies
  secondary: refactorer   # Update consistently
  tertiary: qa            # Ensure nothing breaks

Performance Optimization:
  primary: researcher     # Profile and analyze
  secondary: implementer  # Implement optimization
  tertiary: qa            # Performance tests

Security Feature:
  primary: security       # Define requirements
  secondary: implementer  # Build securely
  tertiary: qa            # Security tests
```

---

## Agent Chaining Patterns

### Sequential Chaining

**When to Use:** Tasks that must be done in a specific order.

**Pattern:**
```yaml
Step 1: Use agent A to complete task
  → Wait for completion
Step 2: Use agent B to build on A's work
  → Wait for completion
Step 3: Use agent C to finalize
  → Wait for completion
```

**Example: Authentication System**
```yaml
Step 1: Use the architect agent to:
  - Design authentication flow
  - Define session management strategy
  - Plan token structure

Step 2: Use the security agent to:
  - Review the architect's design
  - Add security requirements
  - Define encryption standards

Step 3: Use the implementer agent to:
  - Implement the auth flow per design
  - Apply the security requirements
  - Build according to specifications

Step 4: Use the qa agent to:
  - Create unit tests
  - Create integration tests
  - Create security tests
```
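The sequential pattern above reduces to a simple chain where each step consumes the previous step's output. A minimal sketch — `run_agent` is a hypothetical stand-in for whatever mechanism invokes an agent:

```python
def run_agent(name, task, context=None):
    # Hypothetical agent invocation; returns the agent's output
    # bundled with the context it was given.
    return {"agent": name, "task": task, "context": context}

# Sequential chain: each step waits for, and builds on, the previous output.
design = run_agent("architect", "design auth flow")
review = run_agent("security", "review design", context=design)
build = run_agent("implementer", "implement auth flow", context=review)
tests = run_agent("qa", "create tests", context=build)

print(tests["agent"])             # qa
print(tests["context"]["agent"])  # implementer
```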

### Parallel Coordination

**When to Use:** Independent tasks that can be done simultaneously.

**Pattern:**
```yaml
Spawn Multiple Agents in Parallel:
  - Agent A: Task 1 (independent)
  - Agent B: Task 2 (independent)
  - Agent C: Task 3 (independent)

Wait for All Completions

Consolidate Results
```

**Example: Feature with Multiple Components**
```yaml
Parallel Tasks:
  - Use the implementer agent to: Build API endpoints
  - Use the qa agent to: Create test fixtures
  - Use the researcher agent to: Document existing patterns

All agents work simultaneously on independent tasks.

After All Complete:
  - Integrate API with tests
  - Apply documented patterns
  - Validate complete feature
```
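The fan-out/fan-in shape of parallel coordination can be sketched with a thread pool; `run_agent` is again a hypothetical stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(name, task):
    # Hypothetical agent invocation
    return f"{name}: {task} done"

tasks = [
    ("implementer", "build API endpoints"),
    ("qa", "create test fixtures"),
    ("researcher", "document existing patterns"),
]

# Fan out: independent tasks run simultaneously
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda t: run_agent(*t), tasks))

# Fan in: consolidate only after all have completed
for r in results:
    print(r)
```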

### Iterative Refinement

**When to Use:** Gradual improvement with feedback loops.

**Pattern:**
```yaml
Loop:
  1. Use agent to make changes
  2. Validate changes
  3. If issues found:
     - Use agent to fix issues
     - Validate again
  4. Repeat until quality gates pass
```

**Example: Code Refactoring**
```yaml
Iteration 1:
  - Use refactorer agent to simplify function
  - Run tests → 2 failures
  - Use refactorer agent to fix test compatibility
  - Run tests → All pass

Iteration 2:
  - Use refactorer agent to extract duplicate code
  - Run linter → 3 style issues
  - Use refactorer agent to fix style
  - Run linter → Clean

Iteration 3:
  - Validate: All quality gates pass
  - Complete: Refactoring done
```
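The loop above is a validate-and-retry shape. A toy sketch with hypothetical `run_agent` and `quality_gates_pass` helpers, including an iteration cap so a fix that never converges cannot loop forever:

```python
def run_agent(name, task):
    # Hypothetical agent invocation
    return task

def quality_gates_pass(state):
    # Hypothetical validation: tests + linter + type checker
    return state["issues"] == 0

state = {"issues": 2}
max_iterations = 5  # cap: bail out rather than loop forever

for i in range(max_iterations):
    if quality_gates_pass(state):
        break
    run_agent("refactorer", f"fix {state['issues']} remaining issues")
    state["issues"] -= 1  # in this toy model, each pass resolves one issue

print(state["issues"])  # 0
```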

---

## Agent Coordination Strategies

### Strategy 1: Single Agent (Simple Tasks)

**Use When:**
- Single file modification
- Simple bug fix
- Documentation update
- Straightforward feature

**Pattern:**
```yaml
Single Agent:
  - Use the implementer agent to:
    - Make the change
    - Add tests
    - Update documentation
```

**Example:**
```
User: "Add a max_length validation to the username field"

Use the implementer agent to:
- Add max_length=50 to the User.username field
- Add a validation test for max_length
- Update the API documentation with the constraint
```

### Strategy 2: Agent Pairs (Moderate Complexity)

**Use When:**
- Design + implementation needed
- Security review required
- Test coverage important

**Pattern:**
```yaml
Agent Pair:
  Primary Agent: Core work
  Secondary Agent: Validation/enhancement
```

**Example:**
```
User: "Implement password reset functionality"

Step 1: Use the architect agent to:
- Design the password reset flow
- Plan the token generation strategy
- Define security requirements

Step 2: Use the implementer agent to:
- Implement the designed flow
- Build according to the security requirements
- Add comprehensive tests
```

### Strategy 3: Agent Chain (High Complexity)

**Use When:**
- System-wide changes
- Architecture modifications
- Security-critical features
- Major refactoring

**Pattern:**
```yaml
Agent Chain:
  Phase 1: Research & Design
    - researcher: Analyze impact
    - architect: Design solution

  Phase 2: Implementation
    - implementer: Build core
    - security: Review (if needed)

  Phase 3: Quality Assurance
    - qa: Comprehensive testing
    - refactorer: Final polish
```

**Example:**
```
User: "Migrate from sessions to JWT authentication"

Phase 1 - Analysis:
Use the researcher agent to:
- Find all session usage
- Map authentication dependencies
- Identify breaking changes

Phase 2 - Design:
Use the architect agent to:
- Design the JWT implementation
- Plan the migration strategy
- Define backwards compatibility

Phase 3 - Security:
Use the security agent to:
- Review the JWT implementation plan
- Add security requirements
- Define token validation rules

Phase 4 - Implementation:
Use the implementer agent to:
- Implement the JWT manager
- Add token validation
- Build according to the security requirements

Phase 5 - Migration:
Use the refactorer agent to:
- Update all authentication calls
- Remove session dependencies
- Ensure consistency

Phase 6 - Testing:
Use the qa agent to:
- Create unit tests
- Create integration tests
- Create security tests
- Validate the migration
```

### Strategy 4: Parallel + Sequential Hybrid

**Use When:**
- Multiple independent components with dependencies
- Complex features with parallel work streams

**Pattern:**
```yaml
Parallel Phase:
  - Agent A: Independent task 1
  - Agent B: Independent task 2

Sequential Phase (after parallel work completes):
  - Agent C: Integration work
  - Agent D: Final validation
```

**Example:**
```
User: "Add real-time notifications with WebSockets"

Parallel Phase:
Use the architect agent to:
- Design the WebSocket architecture

Use the implementer agent to (simultaneously):
- Set up the WebSocket server configuration
- Create notification data models

Sequential Phase:
Use the implementer agent to:
- Implement WebSocket handlers
- Connect them to the notification models
- Add client connection management

Use the qa agent to:
- Create WebSocket connection tests
- Create notification delivery tests
- Test connection stability
```

---

## Agent Communication Patterns

### Explicit Handoff

**Pattern:** Clearly state what the next agent should do based on the previous agent's work.

```yaml
Step 1: Use the researcher agent to map all API endpoints
  → Output: List of 47 endpoints in api_map.md

Step 2: Use the architect agent to design the new API structure
  Context: Review the 47 endpoints in api_map.md
  Task: Design a consolidated API with RESTful patterns

Step 3: Use the refactorer agent to update the endpoints
  Context: Follow the new structure from the architect
  Task: Update all 47 endpoints to match the design
```

### Context Sharing

**Pattern:** Ensure agents have the necessary context from previous work.

```yaml
Context for Next Agent:
  Previous Work: "architect agent designed auth flow"
  Artifacts: "auth_design.md with flow diagram"
  Requirements: "Must follow JWT pattern with refresh tokens"

Use the implementer agent with this context to:
  - Implement the auth flow from auth_design.md
  - Use the JWT with refresh token pattern
  - Follow the security guidelines from the design
```

### Validation Loops

**Pattern:** Use agents to validate each other's work.

```yaml
Create → Validate → Fix Loop:

Step 1: Use the implementer agent to build the feature

Step 2: Use the security agent to review the implementation
  → If issues found:
    Document security concerns

Step 3: Use the implementer agent to address the security concerns
  Context: Security review findings
  Task: Fix identified issues

Step 4: Use the security agent to re-review
  → If clean: Proceed
  → If issues: Repeat the loop
```

---

## Quality Checkpoints with Agents

### Code Quality Triggers

**Automatic Agent Invocation Based on Code Metrics:**

```yaml
Function Length > 50 Lines:
  → Use the refactorer agent to:
    - Break into smaller functions
    - Extract helper methods
    - Improve readability

Nesting Depth > 3:
  → Use the refactorer agent to:
    - Flatten conditional logic
    - Extract nested blocks
    - Simplify control flow

Duplicate Code Detected:
  → Use the refactorer agent to:
    - Extract common functionality
    - Create shared utilities
    - Apply the DRY principle

Circular Dependencies Found:
  → Use the architect agent to:
    - Review the dependency structure
    - Redesign component relationships
    - Break circular references

Performance Concerns:
  → Use the implementer agent to:
    - Add performance measurements
    - Identify bottlenecks
    - Implement optimizations

Security Patterns Detected:
  → Use the security agent to:
    - Review authentication code
    - Validate authorization logic
    - Check encryption usage
```

---

## Agent Coordination Best Practices

### DO:
- ✅ Use the workflow-coordinator first to validate workflow state
- ✅ Be explicit about which agent to use and why
- ✅ Provide clear context when chaining agents
- ✅ Validate after each agent completes
- ✅ Use parallel agents for independent tasks
- ✅ Chain agents for dependent tasks

### DON'T:
- ❌ Skip workflow-coordinator validation
- ❌ Use the wrong agent for the task
- ❌ Chain agents without a clear handoff
- ❌ Run dependent tasks in parallel
- ❌ Forget to validate agent output
- ❌ Over-complicate simple tasks

---

*Comprehensive agent orchestration strategies for complex implementation tasks.*
200
skills/implementing-features/QUALITY.md
Normal file
200
skills/implementing-features/QUALITY.md
Normal file
@@ -0,0 +1,200 @@
|
||||
# Quality Standards - Language Dispatch
|
||||
|
||||
This file provides an overview of quality standards and directs you to language-specific quality gates.
|
||||
|
||||
## When to Load This File
|
||||
|
||||
- User asks: "What are the quality standards?"
|
||||
- Need overview of validation approach
|
||||
- Choosing which language file to load
|
||||
|
||||
## Quality Philosophy
|
||||
|
||||
**All implementations must pass these gates:**
|
||||
- ✅ Linting (0 errors, warnings with justification)
|
||||
- ✅ Formatting (consistent code style)
|
||||
- ✅ Tests (all passing, appropriate coverage)
|
||||
- ✅ Type checking (if language supports it)
|
||||
- ✅ Documentation (comprehensive and current)
|
||||
|
||||
## Language-Specific Standards

**Load the appropriate file based on detected project language:**

### Python Projects
**When to load:** `pyproject.toml`, `requirements.txt`, or `*.py` files detected

**Load:** `@languages/PYTHON.md`

**Quick commands:**
```bash
ruff check . && ruff format . && mypy . && pytest
```

---

### Rust Projects
**When to load:** `Cargo.toml` or `*.rs` files detected

**Load:** `@languages/RUST.md`

**Quick commands:**
```bash
cargo clippy -- -D warnings && cargo fmt --check && cargo test
```

---

### JavaScript/TypeScript Projects
**When to load:** `package.json`, `tsconfig.json`, or `*.js`/`*.ts` files detected

**Load:** `@languages/JAVASCRIPT.md`

**Quick commands:**
```bash
# TypeScript
npx eslint . && npx prettier --check . && npx tsc --noEmit && npm test

# JavaScript
npx eslint . && npx prettier --check . && npm test
```

---

### Go Projects
**When to load:** `go.mod` or `*.go` files detected

**Load:** `@languages/GO.md`

**Quick commands:**
```bash
gofmt -w . && golangci-lint run && go test ./...
```

---

### Other Languages
**When to load:** No specific language detected, or unsupported language (PHP, Ruby, C++, C#, Java, etc.)

**Load:** `@languages/GENERIC.md`

**Provides:** General quality principles applicable across languages

---
## Progressive Loading Pattern

**Don't load all language files!** Only load the relevant one:

1. **Detect project language** (from file extensions, config files)
2. **Load specific standards** for that language only
3. **Apply language-specific validation** commands
4. **Fall back to generic** if the language is not covered
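The detection step above can be sketched as a small helper. The marker files mirror the sections in this document; the function name and extension map are illustrative assumptions, not part of Quaestor:

```python
# Marker files and extensions per language, mirroring the sections above.
LANGUAGE_MARKERS = [
    ("python", {"pyproject.toml", "requirements.txt"}, {".py"}),
    ("rust", {"Cargo.toml"}, {".rs"}),
    ("javascript", {"package.json", "tsconfig.json"}, {".js", ".ts"}),
    ("go", {"go.mod"}, {".go"}),
]

def detect_language(filenames):
    """Return the language whose standards file to load, or 'generic'."""
    names = set(filenames)
    extensions = {n[n.rfind("."):] for n in names if "." in n}
    for language, markers, _exts in LANGUAGE_MARKERS:
        if names & markers:           # config files take priority
            return language
    for language, _markers, exts in LANGUAGE_MARKERS:
        if extensions & exts:         # fall back to file extensions
            return language
    return "generic"                  # load @languages/GENERIC.md
```

Config files win over extensions so a Python repo containing a stray `*.js` asset still dispatches to the Python standards.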
## Continuous Validation

**Every 3 Edits:**
```yaml
Checkpoint:
  1. Run relevant tests
  2. Check linting
  3. Verify type checking (if applicable)
  4. If any fail:
     - Fix immediately
     - Re-validate
  5. Continue implementation
```
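A checkpoint like this can be scripted. A minimal sketch, with an assumed helper name; the demo command uses the running interpreter so it always exists, whereas a real checkpoint would pass commands such as `["ruff", "check", "."]` and `["pytest"]`:

```python
import subprocess
import sys

def run_checkpoint(commands):
    """Run each gate command; return (all_passed, failing commands)."""
    failures = []
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(cmd)  # fix immediately, then re-validate
    return len(failures) == 0, failures

# Demo with a command guaranteed to exist: the interpreter itself.
passed, failing = run_checkpoint([[sys.executable, "-c", "pass"]])
```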
## Pre-Completion Validation

**Before marking work complete:**
```yaml
Full Quality Suite:
  1. Run full test suite
  2. Run full linter
  3. Run type checker
  4. Check documentation
  5. Review specification compliance
  6. Verify all acceptance criteria met

If ANY fail:
  - Fix issues
  - Re-run full suite
  - Only complete when all pass
```
## Quality Enforcement Strategy

```yaml
Detect Language:
  - Check for language-specific files (pyproject.toml, Cargo.toml, etc.)
  - Identify from file extensions
  - User can override if auto-detection fails

Load Standards:
  - Load @languages/PYTHON.md for Python
  - Load @languages/RUST.md for Rust
  - Load @languages/JAVASCRIPT.md for JS/TS
  - Load @languages/GO.md for Go
  - Load @languages/GENERIC.md for others

Apply Validation:
  - Run language-specific commands
  - Check against language-specific standards
  - Enforce coverage requirements
  - Validate documentation completeness

Report Results:
  - Clear pass/fail for each gate
  - Specific error messages
  - Actionable fix suggestions
```
## When Standards Apply

**During Implementation:**
- After every 3 edits (checkpoint validation)
- Before declaring task complete (full validation)
- When explicitly requested by user

**Quality Gates Must Pass:**
- To move from implementation → review phase
- To mark specification acceptance criteria complete
- Before creating pull request
## Cross-Language Principles

**These apply regardless of language:**

```yaml
SOLID Principles:
  - Single Responsibility
  - Open/Closed
  - Liskov Substitution
  - Interface Segregation
  - Dependency Inversion

Code Quality:
  - No duplication
  - Clear naming
  - Reasonable function size (<= 50 lines guideline)
  - Low nesting depth (<= 3 levels)
  - Proper error handling

Testing:
  - Unit tests for business logic
  - Integration tests for workflows
  - Edge case coverage
  - Error path coverage
  - Reasonable coverage targets

Documentation:
  - README for setup
  - API documentation
  - Complex logic explained
  - Usage examples
```

---

*Load language-specific files for detailed standards - avoid loading all language contexts unnecessarily*
179
skills/implementing-features/SKILL.md
Normal file
@@ -0,0 +1,179 @@
---
name: Implementing Features
description: Execute specification-driven implementation with automatic quality gates, multi-agent orchestration, and progress tracking. Use when building features from specs, fixing bugs with test coverage, or refactoring with validation.
allowed-tools: [Read, Write, Edit, MultiEdit, Bash, Glob, Grep, TodoWrite, Task]
---

# Implementing Features

I help you execute production-quality implementations with auto-detected language standards, intelligent agent orchestration, and specification integration.

## When to Use Me

**Auto-activate when:**
- Invoked via `/quaestor:implement` slash command
- User mentions "build [specific feature]" or "fix [specific bug]" with context
- Continuing implementation after planning phase is complete
- User says "continue implementation" or "resume implementing"
- Coordinating multi-agent implementation of an active specification

**Do NOT auto-activate when:**
- User says only "implement" or "implement it" (slash command handles this)
- User is still in planning/research phase
- Request is vague without feature details
## Supporting Files

This skill uses several supporting files for detailed workflows:

- **@WORKFLOW.md** - 4-phase implementation process (Discovery → Planning → Implementation → Validation)
- **@AGENTS.md** - Agent orchestration strategies and coordination patterns
- **@QUALITY.md** - Language-specific quality standards and validation gates
- **@SPECS.md** - Specification integration and tracking protocols

## My Process

I follow a structured 4-phase workflow to ensure quality and completeness:
### Phase 1: Discovery & Research 🔍

**Specification Integration:**
- Check `.quaestor/specs/active/` for in-progress work
- Search `.quaestor/specs/draft/` for matching specifications
- Move draft spec → active folder (if space available, max 3)
- Update spec status → "in_progress"

**Research Protocol:**
- Analyze codebase patterns & conventions
- Identify dependencies & integration points
- Determine required agents based on task requirements

**See @WORKFLOW.md Phase 1 for complete discovery process**

### Phase 2: Planning & Approval 📋

**Present Implementation Strategy:**
- Architecture decisions & trade-offs
- File changes & new components required
- Quality gates & validation approach
- Risk assessment & mitigation

**MANDATORY: Get user approval before proceeding**

**See @WORKFLOW.md Phase 2 for planning details**
### Phase 3: Implementation ⚡

**Agent Orchestration:**
- **Multi-file operations** → Use researcher + implementer agents
- **System refactoring** → Use architect + refactorer agents
- **Test creation** → Use qa agent for comprehensive coverage
- **Security implementation** → Use security + implementer agents

**Quality Cycle** (every 3 edits):
```
Execute → Validate → Fix (if ❌) → Continue
```

**See @AGENTS.md for complete agent coordination strategies**

### Phase 4: Validation & Completion ✅

**Quality Validation:**
1. Detect project language (Python, Rust, JS/TS, Go, or Generic)
2. Load language-specific standards from @QUALITY.md
3. Run validation pipeline for detected language
4. Fix any issues and re-validate

**Completion Criteria:**
- ✅ All tests passing
- ✅ Zero linting errors
- ✅ Type checking clean (if applicable)
- ✅ Documentation complete
- ✅ Specification status updated

**See @QUALITY.md for dispatch to language-specific standards:**
- `@languages/PYTHON.md` - Python projects
- `@languages/RUST.md` - Rust projects
- `@languages/JAVASCRIPT.md` - JS/TS projects
- `@languages/GO.md` - Go projects
- `@languages/GENERIC.md` - Other languages
## Auto-Intelligence

### Project Detection
- **Language**: Auto-detect → Python|Rust|JS|Go|Generic standards
- **Scope**: Assess changes → Single-file|Multi-file|System-wide
- **Context**: Identify requirements → architecture|security|testing|refactoring

### Execution Strategy
- **System-wide**: Comprehensive planning with multiple agent coordination
- **Feature Development**: Iterative implementation with testing
- **Bug Fixes**: Focused resolution with validation
## Agent Coordination

**I coordinate with specialized agents based on task requirements:**

- **workflow-coordinator** - First! Validates workflow state and ensures planning phase completed
- **implementer** - Builds features according to specification
- **architect** - Designs system architecture when needed
- **security** - Reviews auth, encryption, or access control
- **qa** - Creates comprehensive tests alongside implementation
- **refactorer** - Ensures consistency across multiple files
- **researcher** - Maps dependencies for multi-file changes

**See @AGENTS.md for agent chaining patterns and coordination strategies**
## Specification Integration

**Auto-Update Protocol:**

**Pre-Implementation:**
- Check `.quaestor/specs/draft/` for matching spec ID
- Move spec from draft/ → active/ (max 3 active)
- Declare: "Working on Spec: [ID] - [Title]"
- Update phase status in spec file

**Post-Implementation:**
- Update phase status → "completed"
- Track acceptance criteria completion
- Move spec to completed/ when all phases done
- Create git commit with spec reference

**See @SPECS.md for complete specification integration details**
## Quality Gates

**Code Quality Checkpoints:**
- Function exceeds 50 lines → Use refactorer agent to break into smaller functions
- Nesting depth exceeds 3 → Use refactorer agent to simplify logic
- Circular dependencies detected → Use architect agent to review design
- Performance implications unclear → Use implementer agent to add measurements

**See @QUALITY.md for language-specific quality gates and standards**

## Success Criteria

- ✅ Workflow coordinator validates planning phase completed
- ✅ Specification identified and moved to active/
- ✅ User approval obtained for implementation strategy
- ✅ All quality gates passed (linting, tests, type checking)
- ✅ Documentation updated
- ✅ Specification status updated and tracked
- ✅ Ready for review phase
## Final Response

When implementation is complete:
```
Implementation complete. All quality gates passed.
Specification [ID] updated to completed status.
Ready for review and PR creation.
```

**See @WORKFLOW.md for complete workflow details**

---

*Intelligent implementation with agent orchestration, quality gates, and specification tracking*
539
skills/implementing-features/SPECS.md
Normal file
@@ -0,0 +1,539 @@
# Specification Integration & Tracking

This file describes how to integrate with Quaestor's specification system for tracking implementation progress.

## Specification Folder Structure

```yaml
.quaestor/specs/
├── draft/      # Planned specifications (not yet started)
├── active/     # In-progress implementations (max 3)
├── completed/  # Finished implementations
└── archived/   # Old/cancelled specifications
```

---
## Specification Lifecycle

### States and Transitions

```yaml
States:
  draft: "Specification created but not started"
  active: "Currently being implemented"
  completed: "Implementation finished and validated"
  archived: "Old or cancelled"

Transitions:
  draft → active: "Start implementation"
  active → completed: "Finish implementation"
  active → draft: "Pause work"
  any → archived: "Cancel or archive"

Limits:
  active: "Maximum 3 active specs"
  draft: "Unlimited"
  completed: "Unlimited"
  archived: "Unlimited"
```
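The transition table can be enforced with a small guard. A sketch only; the names are assumptions, and "any → archived" is expanded into its concrete pairs:

```python
# Legal transitions from the lifecycle table above.
ALLOWED_TRANSITIONS = {
    ("draft", "active"),      # start implementation
    ("active", "completed"),  # finish implementation
    ("active", "draft"),      # pause work
    ("draft", "archived"),    # any → archived
    ("active", "archived"),
    ("completed", "archived"),
}

def can_transition(current, target):
    """Return True when moving a spec from `current` to `target` is legal."""
    return (current, target) in ALLOWED_TRANSITIONS
```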
---

## Phase 1: Specification Discovery

### No Arguments Provided

**Discovery Protocol:**
```yaml
Step 1: Check Active Specs
  Location: .quaestor/specs/active/*.md
  Purpose: Find in-progress work
  Output: List of active specifications

Step 2: Check Draft Specs (if no active)
  Location: .quaestor/specs/draft/*.md
  Purpose: Find available work
  Output: List of draft specifications

Step 3: Present to User
  Format:
    "Found 2 specifications:
    - [active] spec-feature-001: User Authentication
    - [draft] spec-feature-002: Data Export API

    Which would you like to work on?"

Step 4: User Selection
  User provides: spec ID or description
  Match: Find corresponding specification
  Activate: Move draft → active (if needed)
```
### Arguments Provided

**Match Specification by ID or Description:**
```yaml
Argument Examples:
  - "spec-feature-001"
  - "feature-001"
  - "001"
  - "user authentication"
  - "auth system"

Matching Strategy:
  1. Exact ID match: spec-feature-001.md
  2. Partial ID match: Contains "feature-001"
  3. Description match: Title contains "user authentication"
  4. Fuzzy match: Similar words in title

Result:
  Match Found:
    → Load specification
    → Display: "Found: spec-feature-001 - User Authentication System"
    → Activate if in draft/
  No Match:
    → Display: "No matching specification found"
    → Suggest: "Available specs: [list]"
    → Ask: "Would you like to create a new spec?"
```
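The four-step matching strategy could look like this in code. A sketch only; the helper name, the demo spec table, and the 0.5 fuzzy threshold are assumptions:

```python
from difflib import SequenceMatcher

def match_spec(query, specs):
    """specs maps spec ID -> title. Returns the matching ID or None."""
    q = query.lower().strip()
    for spec_id in specs:                      # 1. exact ID match
        if q == spec_id.lower():
            return spec_id
    for spec_id in specs:                      # 2. partial ID match
        if q in spec_id.lower():
            return spec_id
    for spec_id, title in specs.items():       # 3. description match
        if q in title.lower():
            return spec_id
    best, best_score = None, 0.0               # 4. fuzzy match on title
    for spec_id, title in specs.items():
        score = SequenceMatcher(None, q, title.lower()).ratio()
        if score > best_score:
            best, best_score = spec_id, score
    return best if best_score >= 0.5 else None

# Demo with two hypothetical specs.
SPECS = {
    "spec-feature-001": "User Authentication System",
    "spec-feature-002": "Data Export API",
}
```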
---

## Phase 2: Specification Activation

### Pre-Activation Validation

**Before Moving to Active:**
```yaml
Validation Checks:
  1. Spec Location:
     - If already active: "Already working on this spec"
     - If in completed: "Spec already completed"
     - If in draft: Proceed with activation

  2. Active Limit:
     - Count: Active specs in .quaestor/specs/active/
     - Limit: Maximum 3 active specs
     - If at limit: "Active limit reached (3 specs). Complete one before starting another."
     - If under limit: Proceed with activation

  3. Specification Validity:
     - Check: Has phases defined
     - Check: Has acceptance criteria
     - If invalid: "Specification incomplete. Please update before starting."
```
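These three checks reduce to a small validator. A sketch under the assumption that the spec's location and metadata are already known; the function and constant names are illustrative:

```python
MAX_ACTIVE = 3

def validate_activation(location, active_count, has_phases, has_criteria):
    """Return a list of blocking errors; an empty list means activation may proceed."""
    errors = []
    if location == "active":          # check 1: spec location
        errors.append("Already working on this spec")
    elif location == "completed":
        errors.append("Spec already completed")
    if active_count >= MAX_ACTIVE:    # check 2: active limit
        errors.append(
            f"Active limit reached ({MAX_ACTIVE} specs). "
            "Complete one before starting another.")
    if not (has_phases and has_criteria):  # check 3: validity
        errors.append("Specification incomplete. Please update before starting.")
    return errors
```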
### Activation Process

**Move from Draft to Active:**
```yaml
Atomic Operation:
  1. Read Specification:
     Source: .quaestor/specs/draft/spec-feature-001.md
     Parse: Extract metadata and phases

  2. Update Status:
     Field: status
     Change: "draft" → "in_progress"
     Add: start_date (current date)

  3. Move File:
     From: .quaestor/specs/draft/spec-feature-001.md
     To: .quaestor/specs/active/spec-feature-001.md
     Method: Git mv (preserves history)

  4. Confirm:
     Display: "✅ Activated: spec-feature-001 - User Authentication"
     Display: "Status: in_progress"
     Display: "Phases: 4 total, 0 completed"
```
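Steps 1-3 can be sketched as below. This uses a plain filesystem move for illustration only; in a real repository you would use `git mv` as described above. The helper name, demo spec file, and hard-coded date are assumptions:

```python
from pathlib import Path
import tempfile

def activate_spec(specs_root, spec_name, today="2024-01-15"):
    """Move a draft spec to active/ and flip its status to in_progress."""
    src = Path(specs_root) / "draft" / spec_name
    dst = Path(specs_root) / "active" / spec_name
    text = src.read_text(encoding="utf-8")
    # Update status and record the start date in one pass.
    text = text.replace("status: draft",
                        f"status: in_progress\nstart_date: {today}", 1)
    dst.parent.mkdir(parents=True, exist_ok=True)
    src.unlink()                      # stand-in for `git mv`
    dst.write_text(text, encoding="utf-8")
    return dst

# Demo against a throwaway directory with a hypothetical spec file.
root = Path(tempfile.mkdtemp())
(root / "draft").mkdir()
(root / "draft" / "spec-feature-001.md").write_text(
    "---\nid: spec-feature-001\nstatus: draft\n---\n", encoding="utf-8")
moved = activate_spec(root, "spec-feature-001.md")
```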
---

## Phase 3: Progress Tracking

### Phase Status Updates

**During Implementation:**
```yaml
Phase Tracking:
  Format in Specification:
    ## Phases

    ### Phase 1: Authentication Flow Design
    - [ ] Task 1
    - [ ] Task 2
    Status: ⏳ in_progress

    ### Phase 2: JWT Implementation
    - [ ] Task 1
    - [ ] Task 2
    Status: ⏳ pending

  Update Protocol:
    1. Complete tasks: Mark checkboxes [x]
    2. Update status: pending → in_progress → completed
    3. Add notes: Implementation details
    4. Track blockers: If any issues

  Example Update:
    ### Phase 1: Authentication Flow Design
    - [x] Design login flow
    - [x] Design registration flow
    - [x] Design password reset flow
    Status: ✅ completed

    Implementation Notes:
    - Used JWT with 15min access, 7day refresh
    - Implemented token rotation for security
    - Added rate limiting on auth endpoints
```
### Acceptance Criteria Tracking

**Track Progress Against Criteria:**
```yaml
Acceptance Criteria Format:
  ## Acceptance Criteria

  - [ ] AC1: Users can register with email/password
  - [ ] AC2: Users can log in and receive JWT
  - [ ] AC3: Tokens expire after 15 minutes
  - [ ] AC4: Refresh tokens work correctly
  - [ ] AC5: Rate limiting prevents brute force

Update During Implementation:
  As Each Criterion Met:
    - Mark checkbox: [x]
    - Add evidence: Link to test or code
    - Validate: Ensure actually working

  Example:
    - [x] AC1: Users can register with email/password
      ✓ Implemented in auth/registration.py
      ✓ Tests: test_registration_flow.py (8 tests passing)
```
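Checkbox progress in a spec file can be counted mechanically. A sketch; the function name is an assumption:

```python
import re

CHECKBOX = re.compile(r"^\s*- \[( |x)\]", re.MULTILINE | re.IGNORECASE)

def criteria_progress(markdown):
    """Return (completed, total) acceptance-criteria checkboxes."""
    boxes = CHECKBOX.findall(markdown)
    done = sum(1 for mark in boxes if mark.lower() == "x")
    return done, len(boxes)

# Demo on a fragment shaped like the format above.
sample = """\
- [x] AC1: Users can register with email/password
- [x] AC2: Users can log in and receive JWT
- [ ] AC3: Tokens expire after 15 minutes
- [ ] AC4: Refresh tokens work correctly
- [ ] AC5: Rate limiting prevents brute force
"""
done, total = criteria_progress(sample)
```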
---

## Phase 4: Completion & Transition

### Completion Criteria

**Before Moving to Completed:**
```yaml
All Must Be True:
  1. All Phases Completed:
     - Every phase status: ✅ completed
     - All phase tasks: [x] checked

  2. All Acceptance Criteria Met:
     - Every criterion: [x] checked
     - Evidence provided for each
     - Tests passing for each

  3. Quality Gates Passed:
     - All tests passing
     - Linting clean
     - Type checking passed
     - Documentation complete

  4. No Blockers:
     - All issues resolved
     - No pending decisions
     - Ready for review
```
### Move to Completed

**Atomic Transition:**
```yaml
Operation:
  1. Update Specification:
     Field: status
     Change: "in_progress" → "completed"
     Add: completion_date (current date)
     Add: final_notes (summary of implementation)

  2. Move File:
     From: .quaestor/specs/active/spec-feature-001.md
     To: .quaestor/specs/completed/spec-feature-001.md
     Method: Git mv (preserves history)

  3. Create Commit:
     Message: "feat: implement spec-feature-001 - User Authentication"
     Body: Include spec summary and changes
     Reference: Link to specification

  4. Confirm:
     Display: "✅ Completed: spec-feature-001 - User Authentication"
     Display: "Status: completed"
     Display: "All phases completed, all criteria met"
     Display: "Ready for review and PR creation"
```

---
## Specification File Format

### Markdown Structure

**Required Sections:**
```markdown
---
id: spec-feature-001
title: User Authentication System
status: in_progress
priority: high
type: feature
start_date: 2024-01-15
---

# User Authentication System

## Overview
Brief description of what this spec implements.

## Phases

### Phase 1: Phase Name
- [ ] Task 1
- [ ] Task 2
Status: ⏳ in_progress

### Phase 2: Phase Name
- [ ] Task 1
- [ ] Task 2
Status: ⏳ pending

## Acceptance Criteria
- [ ] AC1: Criterion 1
- [ ] AC2: Criterion 2

## Technical Details
Technical implementation notes.

## Testing Strategy
How this will be tested.

## Implementation Notes
Notes added during implementation.
```
### Metadata Fields

```yaml
Required Fields:
  id: "Unique identifier (spec-feature-001)"
  title: "Human-readable title"
  status: "draft|in_progress|completed|archived"
  priority: "low|medium|high|critical"
  type: "feature|bugfix|refactor|docs|other"

Optional Fields:
  start_date: "When implementation started"
  completion_date: "When implementation finished"
  estimated_hours: "Time estimate"
  actual_hours: "Actual time spent"
  assignee: "Who implemented it"
  blockers: "Any blocking issues"
```
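A minimal check of the required fields might look like this. A sketch, assuming simple `key: value` frontmatter; the parser name is illustrative, and a real implementation would use a YAML library:

```python
REQUIRED_FIELDS = {"id", "title", "status", "priority", "type"}

def parse_frontmatter(text):
    """Parse `key: value` pairs between the leading --- markers.
    Returns (metadata, sorted missing required fields)."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, sorted(REQUIRED_FIELDS)
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":     # closing marker ends the block
            break
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta, sorted(REQUIRED_FIELDS - meta.keys())

# Demo on the example frontmatter from this document.
doc = """---
id: spec-feature-001
title: User Authentication System
status: in_progress
priority: high
type: feature
start_date: 2024-01-15
---
# User Authentication System
"""
meta, missing = parse_frontmatter(doc)
```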
---

## Integration with Git

### Commit Messages

**Reference Specifications in Commits:**
```yaml
Format:
  type(scope): message

  Spec: spec-feature-001
  Description: Detailed description

Example:
  feat(auth): implement JWT authentication

  Spec: spec-feature-001
  - Add JWT token generation
  - Implement refresh token rotation
  - Add authentication middleware

  All acceptance criteria met.
  Tests: 42 new tests added (100% coverage)
```

### Git History Preservation

**Using Git MV:**
```yaml
Benefit:
  - Preserves file history across moves
  - Maintains specification evolution
  - Enables tracking changes over time

Command:
  git mv .quaestor/specs/draft/spec-feature-001.md \
         .quaestor/specs/active/spec-feature-001.md

History:
  - See full edit history
  - Track progress over time
  - Understand evolution of spec
```

---
## Auto-Update Protocol

### Pre-Implementation

**When Starting Implementation:**
```yaml
Actions:
  1. Find Specification:
     - Search draft/ and active/
     - Match by ID or description

  2. Activate Specification:
     - Move draft → active (if needed)
     - Update status → in_progress
     - Add start date

  3. Declare Intent:
     Output: "🎯 Working on Spec: spec-feature-001 - User Authentication System"
     Output: "Status: in_progress (moved to active/)"
     Output: "Phases: 4 total, starting Phase 1"

  4. Present Plan:
     - Show implementation strategy
     - Get user approval
     - Begin implementation
```

### During Implementation

**Progress Updates:**
```yaml
After Completing Each Phase:
  1. Update Specification:
     - Mark phase tasks complete: [x]
     - Update phase status: completed
     - Add implementation notes

  2. Track Progress:
     Output: "✅ Phase 1 complete (1/4 phases)"
     Output: "  - All tasks finished"
     Output: "  - Implementation notes added"
     Output: "Starting Phase 2..."

After Completing Acceptance Criterion:
  1. Update Specification:
     - Mark criterion complete: [x]
     - Add evidence (tests, code references)

  2. Track Progress:
     Output: "✅ AC1 met: Users can register"
     Output: "  - Tests: test_registration.py (8 passing)"
     Output: "  - Code: auth/registration.py"
     Output: "Progress: 1/5 criteria met"
```

### Post-Implementation

**When Implementation Complete:**
```yaml
Actions:
  1. Validate Completion:
     - All phases: ✅ completed
     - All criteria: [x] met
     - Quality gates: Passed

  2. Update Specification:
     - Status → completed
     - Add completion date
     - Add final summary

  3. Move to Completed:
     - From: active/spec-feature-001.md
     - To: completed/spec-feature-001.md
     - Method: Git mv

  4. Create Commit:
     - Reference spec in message
     - Include summary of changes
     - Link to relevant files

  5. Declare Complete:
     Output: "✅ Implementation Complete"
     Output: "Specification: spec-feature-001"
     Output: "Status: completed (moved to completed/)"
     Output: "All 4 phases completed, all 5 criteria met"
     Output: "Ready for review and PR creation"
```

---
## Error Handling

### Specification Not Found

```yaml
Issue: No matching specification

Actions:
  1. Search all folders: draft/, active/, completed/
  2. Try fuzzy matching on title
  3. If still no match:
     Output: "❌ No matching specification found"
     Output: "Available specifications:"
     Output: [List active and draft specs]
     Output: "Would you like to create a new spec?"

  4. If user wants to create:
     Delegate to spec-writing skill
```

### Active Limit Reached

```yaml
Issue: Already 3 active specs

Actions:
  1. Count active specs
  2. If at limit:
     Output: "❌ Active limit reached (3 specs)"
     Output: "Currently active:"
     Output: [List 3 active specs with progress]
     Output: "Complete one before starting another"

  3. Suggest:
     Output: "Would you like to:"
     Output: "1. Continue one of the active specs"
     Output: "2. Move one back to draft"
```

### Invalid Specification

```yaml
Issue: Spec missing required fields

Actions:
  1. Validate specification structure
  2. Check required fields: id, title, phases, criteria
  3. If invalid:
     Output: "❌ Specification incomplete"
     Output: "Missing: [list missing fields]"
     Output: "Please update specification before starting"

  4. Suggest fix:
     Output: "Use spec-writing skill to update specification"
```

---

*Complete specification integration for tracking implementation progress with Quaestor*
480
skills/implementing-features/WORKFLOW.md
Normal file
@@ -0,0 +1,480 @@
# Implementation Workflow - Complete 4-Phase Process

This file describes the detailed workflow for executing production-quality implementations.

## Workflow Overview: Research → Plan → Implement → Validate

```yaml
Phase 1: Discovery & Research (🔍)
  - Specification discovery and activation
  - Codebase analysis and pattern identification
  - Dependency mapping
  - Agent requirement determination

Phase 2: Planning & Approval (📋)
  - Strategy presentation
  - Architecture decisions
  - Risk assessment
  - MANDATORY user approval

Phase 3: Implementation (⚡)
  - Agent-orchestrated development
  - Quality cycle (every 3 edits)
  - Continuous validation
  - Documentation updates

Phase 4: Validation & Completion (✅)
  - Language-specific quality gates
  - Test execution
  - Specification status update
  - Completion confirmation
```

---
## Phase 1: Discovery & Research 🔍

### Specification Discovery

**No Arguments Provided?**
```yaml
Discovery Protocol:
  1. Check: .quaestor/specs/active/*.md (current work in progress)
  2. If empty: Check .quaestor/specs/draft/*.md (available work)
  3. Match: spec ID from user request
  4. Output: "Found spec: [ID] - [Title]" OR "No matching specification"
```

**Specification Activation:**
```yaml
🎯 Context Check:
  - Scan: .quaestor/specs/draft/*.md for matching spec
  - Validate: Max 3 active specs (enforce limit)
  - Move: draft spec → active/ folder
  - Update: spec status → "in_progress"
  - Track: implementation progress in spec phases
```
### Codebase Research

**Research Protocol:**
1. **Pattern Analysis**
   - Identify existing code conventions
   - Determine file organization patterns
   - Understand naming conventions
   - Map testing strategies

2. **Dependency Mapping**
   - Identify affected modules
   - Map integration points
   - Understand data flow
   - Detect circular dependencies

3. **Agent Determination**
   - Assess task complexity
   - Determine required agent specializations
   - Plan agent coordination strategy
   - Identify potential bottlenecks

**Example Research Output:**
```
🔍 Research Complete:

Specification: spec-feature-001 - User Authentication System
Status: Moved to active/

Codebase Analysis:
- Pattern: Repository pattern with service layer
- Testing: pytest with 75% coverage requirement
- Dependencies: auth module, user module, database layer

Required Agents:
- architect: Design auth flow and session management
- security: Review authentication implementation
- implementer: Build core functionality
- qa: Create comprehensive test suite
```

---
## Phase 2: Clarification & Decision 🤔

### MANDATORY: Ask User to Make Key Decisions

**After research, identify decisions the user must make BEFORE planning:**

#### 1. Approach Selection (when 2+ valid options exist)
```
Use AskUserQuestion tool:
- Present 2-3 architectural approaches
- Include pros/cons and trade-offs for each
- Explain complexity and maintenance implications
- Wait for user to choose before proceeding
```

**Example:**
- Approach A: REST API - Simple, widely understood, but less efficient
- Approach B: GraphQL - Flexible queries, but steeper learning curve
- Approach C: gRPC - High performance, but requires protobuf setup

#### 2. Scope Boundaries
```
Ask clarifying questions:
- "Should this also handle [related feature]?"
- "Include [edge case scenario]?"
- "Support [additional requirement]?"
```

**Example:** "Should user authentication also include password reset functionality, or handle that separately?"

#### 3. Priority Trade-offs
```
When trade-offs exist, ask the user to decide:
- "Optimize for speed OR memory efficiency?"
- "Prioritize simplicity OR flexibility?"
- "Focus on performance OR maintainability?"
```

**Example:** "This can be implemented for speed (caching, more memory) or simplicity (no cache, easier to maintain). Which priority?"

#### 4. Integration Decisions
```
Clarify connections to existing systems:
- "Integrate with existing [system] OR standalone?"
- "Use [library A] OR [library B]?"
- "Follow [pattern X] OR [pattern Y]?"
```

**Example:** "Should this use the existing Redis cache or create a new in-memory cache?"

**Only proceed to planning after the user has made these decisions.**

---
## Phase 3: Planning & Approval 📋
|
||||
|
||||
### Present Implementation Strategy
|
||||
|
||||
**MANDATORY Components:**
|
||||
|
||||
1. **Architecture Decisions**
|
||||
- Design approach and rationale
|
||||
- Component structure
|
||||
- Data flow diagrams (if complex)
|
||||
- Integration strategy
|
||||
|
||||
2. **File Changes**
|
||||
- New files to create
|
||||
- Existing files to modify
|
||||
- Deletions (if any)
|
||||
- Configuration updates
|
||||
|
||||
3. **Quality Gates**
|
||||
- Testing strategy
|
||||
- Validation checkpoints
|
||||
- Coverage requirements
|
||||
- Performance benchmarks
|
||||
|
||||
4. **Risk Assessment**
|
||||
- Potential breaking changes
|
||||
- Migration requirements
|
||||
- Backwards compatibility concerns
|
||||
- Mitigation strategies
|
||||
|
||||
### Example Planning Output
|
||||
|
||||
```markdown
|
||||
## Implementation Strategy for spec-feature-001
|
||||
|
||||
### Architecture Decisions
|
||||
- Use JWT for stateless authentication
|
||||
- Implement refresh token rotation
|
||||
- Store sessions in Redis for scalability
|
||||
- Use bcrypt for password hashing (cost factor: 12)
|
||||
|
||||
**Trade-offs:**
|
||||
- ✅ Stateless = better scalability
|
||||
- ⚠️ Redis dependency added
|
||||
- ✅ Refresh rotation = better security
|
||||
|
||||
### File Changes
|
||||
**New Files:**
|
||||
- `src/auth/jwt_manager.py` - JWT generation and validation
|
||||
- `src/auth/session_store.py` - Redis session management
|
||||
- `tests/test_auth_flow.py` - Authentication flow tests
|
||||
|
||||
**Modified Files:**
|
||||
- `src/auth/service.py` - Add JWT authentication
|
||||
- `src/config.py` - Add auth configuration
|
||||
- `requirements.txt` - Add PyJWT, redis dependencies
|
||||
|
||||
### Quality Gates
|
||||
- Unit tests: All auth functions
|
||||
- Integration tests: Complete auth flow
|
||||
- Security tests: Token validation, expiry, rotation
|
||||
- Coverage target: 90% for auth module
|
||||
|
||||
### Risk Assessment
|
||||
- ⚠️ Breaking change: Session format changes
|
||||
- Migration: Clear existing sessions on deploy
|
||||
- Backwards compat: Old tokens expire gracefully
|
||||
- Mitigation: Feature flag for gradual rollout
|
||||
```
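
The JWT decision in the example above can be illustrated with a minimal, stdlib-only sketch of HS256 signing and verification. The claim names, TTL, and `SECRET` are hypothetical; a real implementation would use PyJWT as listed in `requirements.txt`.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # hypothetical key; load from config in real code


def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWT uses."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_token(user_id: str, ttl: int = 3600) -> str:
    """Build a signed HS256 token: header.payload.signature."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": user_id, "exp": int(time.time()) + ttl}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str) -> bool:
    """Check the signature and expiry; reject anything malformed."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return False
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return False
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims["exp"] > time.time()
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids timing side channels when comparing signatures.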

### Get User Approval

**MANDATORY: Wait for explicit approval before proceeding to Phase 4**

Approval phrases:
- "Proceed"
- "Looks good"
- "Go ahead"
- "Approved"
- "Start implementation"

---

## Phase 4: Implementation ⚡

### Agent-Orchestrated Development

**Agent Selection Matrix:**

```yaml
Task Type → Agent Strategy:

System Architecture:
  - Use architect agent to design solution
  - Use implementer agent to build components

Multi-file Changes:
  - Use researcher agent to map dependencies
  - Use refactorer agent to update consistently

Security Features:
  - Use security agent to define requirements
  - Use implementer agent to build securely
  - Use qa agent to create security tests

Test Creation:
  - Use qa agent for comprehensive coverage
  - Use implementer agent for test fixtures

Performance Optimization:
  - Use researcher agent to profile hotspots
  - Use refactorer agent to optimize code
  - Use qa agent to create performance tests
```
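
A matrix like this is easier to keep consistent as data than as prose. A minimal sketch, using the agent and task names from the table above (the key spellings and the implementer-only fallback are assumptions):

```python
# Task type → ordered agent pipeline, mirroring the matrix above.
AGENT_MATRIX: dict[str, list[str]] = {
    "system_architecture": ["architect", "implementer"],
    "multi_file_changes": ["researcher", "refactorer"],
    "security_features": ["security", "implementer", "qa"],
    "test_creation": ["qa", "implementer"],
    "performance_optimization": ["researcher", "refactorer", "qa"],
}


def agents_for(task_type: str) -> list[str]:
    """Return the agent pipeline for a task, defaulting to implementer-only."""
    return AGENT_MATRIX.get(task_type, ["implementer"])
```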

### Quality Cycle (Every 3 Edits)

**Continuous Validation:**
```yaml
After Every 3 Code Changes:
  1. Execute: Run relevant tests
  2. Validate: Check linting and type checking
  3. Fix: If ❌, address issues immediately
  4. Continue: Proceed with next changes

Example:
  Edit 1: Create auth/jwt_manager.py
  Edit 2: Add JWT generation method
  Edit 3: Add JWT validation method
  → RUN QUALITY CYCLE
    Execute: pytest tests/test_jwt.py
    Validate: ruff check auth/jwt_manager.py
    Fix: Address any issues
    Continue: Next 3 edits
```
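
One way to mechanize the every-3-edits rule is a small counter that emits the validation commands to run once the cycle triggers. A sketch, assuming the Python tooling shown above; the threshold and command shapes are illustrative:

```python
EDITS_PER_CYCLE = 3  # run the quality cycle after this many edits


class QualityCycle:
    """Track edited files and emit validation commands every N changes."""

    def __init__(self) -> None:
        self.edits: list[str] = []

    def record(self, path: str) -> list[str]:
        """Record one edit; return commands to run when the cycle triggers."""
        self.edits.append(path)
        if len(self.edits) % EDITS_PER_CYCLE != 0:
            return []  # cycle not due yet
        recent = self.edits[-EDITS_PER_CYCLE:]
        commands = [f"ruff check {p}" for p in sorted(set(recent))]
        commands.append("pytest")
        return commands
```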

### Implementation Patterns

**Single-File Feature:**
```yaml
Pattern:
  1. Create/modify file
  2. Add documentation
  3. Create tests
  4. Validate quality
  5. Update specification
```

**Multi-File Feature:**
```yaml
Pattern:
  1. Use researcher agent → Map dependencies
  2. Use architect agent → Design components
  3. Use implementer agent → Build core functionality
  4. Use refactorer agent → Ensure consistency
  5. Use qa agent → Create comprehensive tests
  6. Validate quality gates
  7. Update specification
```

**System Refactoring:**
```yaml
Pattern:
  1. Use researcher agent → Analyze impact
  2. Use architect agent → Design new structure
  3. Use refactorer agent → Update all files
  4. Use qa agent → Validate no regressions
  5. Validate quality gates
  6. Update documentation
```

### Code Quality Checkpoints

**Automatic Refactoring Triggers:**
- Function exceeds 50 lines → Use refactorer agent to break into smaller functions
- Nesting depth exceeds 3 → Use refactorer agent to simplify logic
- Circular dependencies detected → Use architect agent to review design
- Duplicate code found → Use refactorer agent to extract common functionality
- Performance implications unclear → Use implementer agent to add measurements

---

## Phase 5: Validation & Completion ✅

### Language-Specific Validation

**Python:**
```bash
ruff check . --fix              # Linting
ruff format .                   # Formatting
pytest -v                       # Tests
mypy . --ignore-missing-imports # Type checking
```

**Rust:**
```bash
cargo clippy -- -D warnings # Linting
cargo fmt                   # Formatting
cargo test                  # Tests
cargo check                 # Type checking
```

**JavaScript/TypeScript:**
```bash
npx eslint . --fix     # Linting
npx prettier --write . # Formatting
npm test               # Tests
npx tsc --noEmit       # Type checking (TS only)
```

**Generic (Any Language):**
- Syntax validation
- Error handling review
- Documentation completeness
- Test coverage assessment

### Completion Criteria

**All Must Pass:**
- ✅ All tests passing (no skipped tests without justification)
- ✅ Zero linting errors (warnings acceptable with comment)
- ✅ Type checking clean (if applicable to language)
- ✅ Documentation complete (functions, classes, modules)
- ✅ Specification status updated (phases marked complete)
- ✅ No unhandled edge cases
- ✅ Performance within acceptable bounds

### Specification Update

**Post-Implementation Protocol:**
```yaml
Update Specification:
  - Mark completed phases: ✅ in spec file
  - Update acceptance criteria status
  - Add implementation notes (if needed)
  - Check if all phases complete → Move to completed/
  - Generate commit message with spec reference

Example:
  Phase 1: Authentication Flow Design - ✅ Complete
  Phase 2: JWT Implementation - ✅ Complete
  Phase 3: Session Management - ✅ Complete
  Phase 4: Security Testing - ✅ Complete

  → All phases complete
  → Move spec-feature-001 from active/ to completed/
  → Ready for review and PR creation
```
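
The move from `active/` to `completed/` can be sketched with `pathlib`. The `✅` phase marker and directory names come from the example above; the `Phase` line format check is an assumption:

```python
from pathlib import Path


def all_phases_complete(spec_text: str) -> bool:
    """A phase line counts as done when it carries the ✅ marker."""
    phases = [line for line in spec_text.splitlines() if line.startswith("Phase")]
    return bool(phases) and all("✅" in line for line in phases)


def archive_if_done(spec_path: Path, completed_dir: Path) -> bool:
    """Move the spec file into completed/ when every phase is marked ✅."""
    if not all_phases_complete(spec_path.read_text()):
        return False
    completed_dir.mkdir(parents=True, exist_ok=True)
    spec_path.rename(completed_dir / spec_path.name)
    return True
```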

### Final Validation

**Before Declaring Complete:**
1. Run full test suite: `uv run pytest` or equivalent
2. Check git status: No unintended changes
3. Verify specification: All acceptance criteria met
4. Review documentation: Complete and accurate
5. Confirm quality gates: All passed

### Completion Response

**Standard Response Format:**
```
✅ Implementation Complete

Specification: spec-feature-001 - User Authentication System
Status: All phases completed, moved to completed/

Quality Gates:
- ✅ Tests: 42 passed, 0 failed
- ✅ Linting: 0 errors, 0 warnings
- ✅ Type checking: Clean
- ✅ Coverage: 92% (target: 90%)

Changes:
- 3 new files created
- 2 existing files modified
- 42 tests added
- 0 breaking changes

Ready for review phase. Use /review command to validate and create PR.
```

---

## Error Handling & Recovery

### Common Issues

**Issue: Tests Failing**
```yaml
Recovery:
  1. Analyze: Identify root cause
  2. Fix: Address failing tests
  3. Validate: Re-run test suite
  4. Continue: If fixed, proceed; if persistent, use qa agent for analysis
```

**Issue: Linting Errors**
```yaml
Recovery:
  1. Auto-fix: Run linter with --fix flag
  2. Manual: Address remaining issues
  3. Validate: Re-run linter
  4. Continue: Proceed when clean
```

**Issue: Type Checking Errors**
```yaml
Recovery:
  1. Analyze: Identify type mismatches
  2. Fix: Add proper type annotations
  3. Validate: Re-run type checker
  4. Continue: Proceed when clean
```

**Issue: Specification Conflict**
```yaml
Recovery:
  1. Review: Check specification requirements
  2. Discuss: Clarify with user if ambiguous
  3. Adjust: Modify implementation or specification
  4. Continue: Proceed with aligned understanding
```

---

*Complete workflow for production-quality implementation with quality gates and specification tracking*

190
skills/implementing-features/languages/GENERIC.md
Normal file
@@ -0,0 +1,190 @@
# Generic Language Quality Standards

**Load this file when:** Implementing in languages without specific quality standards (PHP, Ruby, C++, C#, etc.)

## General Quality Gates

```yaml
Syntax & Structure:
  - Valid syntax (runs without parse errors)
  - Consistent indentation (2 or 4 spaces)
  - Clear variable naming
  - Functions <= 50 lines (guideline)
  - Nesting depth <= 3 levels

Testing:
  - Unit tests for core functionality
  - Integration tests for workflows
  - Edge case coverage
  - Error path testing
  - Reasonable coverage (>= 70%)

Documentation:
  - README with setup instructions
  - Function/method documentation
  - Complex algorithms explained
  - API documentation (if library)
  - Usage examples

Error Handling:
  - Proper exception/error handling
  - No swallowed errors
  - Meaningful error messages
  - Graceful failure modes
  - Resource cleanup

Code Quality:
  - No code duplication
  - Clear separation of concerns
  - Meaningful names
  - Single responsibility principle
  - No magic numbers/strings
```

## Quality Checklist

**Before Declaring Complete:**
- [ ] Code runs without errors
- [ ] All tests pass
- [ ] Documentation complete
- [ ] Error handling in place
- [ ] No obvious code smells
- [ ] Functions reasonably sized
- [ ] Clear variable names
- [ ] No TODO comments left
- [ ] Resources properly managed
- [ ] Code reviewed for clarity

## SOLID Principles

**Apply regardless of language:**

```yaml
Single Responsibility:
  - Each class/module has one reason to change
  - Clear, focused purpose
  - Avoid "god objects"

Open/Closed:
  - Open for extension, closed for modification
  - Use interfaces/traits for extensibility
  - Avoid modifying working code

Liskov Substitution:
  - Subtypes must be substitutable for base types
  - Honor contracts in inheritance
  - Avoid breaking parent behavior

Interface Segregation:
  - Many specific interfaces > one general interface
  - Clients shouldn't depend on unused methods
  - Keep interfaces focused

Dependency Inversion:
  - Depend on abstractions, not concretions
  - High-level modules independent of low-level
  - Use dependency injection
```
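
Dependency inversion in particular is easy to show concretely. A minimal Python sketch; the `Notifier` protocol and class names are illustrative, not part of any standard:

```python
from typing import Protocol


class Notifier(Protocol):
    """Abstraction the high-level module depends on."""

    def send(self, message: str) -> None: ...


class ConsoleNotifier:
    """One concrete low-level implementation."""

    def send(self, message: str) -> None:
        print(f"notify: {message}")


class OrderService:
    """High-level module: receives its dependency, never constructs it."""

    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, order_id: str) -> str:
        # Business logic would go here; then notify via the abstraction.
        self._notifier.send(f"order {order_id} placed")
        return order_id
```

Because `OrderService` only sees the abstraction, tests can inject a recording fake instead of a real notifier.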

## Code Smell Detection

**Watch for these issues:**

```yaml
Long Methods:
  - Threshold: > 50 lines
  - Action: Extract smaller methods
  - Tool: Refactorer agent

Deep Nesting:
  - Threshold: > 3 levels
  - Action: Flatten with early returns
  - Tool: Refactorer agent

Duplicate Code:
  - Detection: Similar code blocks
  - Action: Extract to shared function
  - Tool: Refactorer agent

Large Classes:
  - Threshold: > 300 lines
  - Action: Split responsibilities
  - Tool: Architect + Refactorer agents

Magic Numbers:
  - Detection: Unexplained constants
  - Action: Named constants
  - Tool: Implementer agent

Poor Naming:
  - Detection: Unclear variable names
  - Action: Rename to be descriptive
  - Tool: Refactorer agent
```

## Example Quality Pattern

**Pseudocode showing good practices:**

```
// Good: Clear function with single responsibility
function loadConfiguration(filePath: string): Config {
    // Early validation
    if (!fileExists(filePath)) {
        throw FileNotFoundError("Config not found: " + filePath)
    }

    try {
        // Clear steps
        content = readFile(filePath)
        config = parseYAML(content)
        validateConfig(config)
        return config
    } catch (error) {
        // Proper error context
        throw ConfigError("Failed to load config from " + filePath, error)
    }
}

// Good: Named constants instead of magic numbers
const MAX_RETRY_ATTEMPTS = 3
const TIMEOUT_MS = 5000

// Good: Early returns instead of deep nesting
function processUser(user: User): Result {
    if (!user.isActive) {
        return Result.error("User not active")
    }

    if (!user.hasPermission) {
        return Result.error("Insufficient permissions")
    }

    if (!user.isVerified) {
        return Result.error("User not verified")
    }

    // Main logic only runs if all checks pass
    return Result.success(doProcessing(user))
}
```
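
The early-returns pattern above can be rendered as runnable Python. The `User` fields and `Result` type mirror the pseudocode and are illustrative:

```python
from dataclasses import dataclass


@dataclass
class User:
    is_active: bool
    has_permission: bool
    is_verified: bool
    name: str = "anonymous"


@dataclass
class Result:
    ok: bool
    value: str = ""

    @staticmethod
    def error(message: str) -> "Result":
        return Result(ok=False, value=message)

    @staticmethod
    def success(value: str) -> "Result":
        return Result(ok=True, value=value)


def process_user(user: User) -> Result:
    """Early returns keep nesting depth at one level."""
    if not user.is_active:
        return Result.error("User not active")
    if not user.has_permission:
        return Result.error("Insufficient permissions")
    if not user.is_verified:
        return Result.error("User not verified")
    # Main logic only runs once all guards pass.
    return Result.success(f"processed {user.name}")
```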

## Language-Specific Commands

**Find and use the standard tools for your language:**

```yaml
Python: ruff, pytest, mypy
Rust: cargo clippy, cargo test, cargo fmt
JavaScript/TypeScript: eslint, prettier, jest/vitest
Go: golangci-lint, go test, gofmt
Java: checkstyle, junit, maven/gradle
C#: dotnet format, xunit, roslyn analyzers
Ruby: rubocop, rspec, yard
PHP: phpcs, phpunit, psalm/phpstan
C++: clang-tidy, gtest, clang-format
```

---

*Generic quality standards applicable across programming languages*

162
skills/implementing-features/languages/GO.md
Normal file
@@ -0,0 +1,162 @@
# Go Quality Standards

**Load this file when:** Implementing features in Go projects

## Validation Commands

```bash
# Linting
golangci-lint run

# Formatting
gofmt -w .
# OR
go fmt ./...

# Tests
go test ./...

# Coverage
go test -cover ./...

# Race Detection
go test -race ./...

# Full Validation Pipeline
gofmt -w . && golangci-lint run && go test ./...
```

## Required Standards

```yaml
Code Style:
  - Follow: Effective Go guidelines
  - Formatting: gofmt (automatic)
  - Naming: MixedCaps, not snake_case
  - Package names: Short, concise, lowercase

Testing:
  - Framework: Built-in testing package
  - Coverage: >= 75%
  - Test files: *_test.go
  - Table-driven tests: Prefer for multiple cases
  - Benchmarks: Include for performance-critical code

Documentation:
  - Package: Package-level doc comment
  - Exported: All exported items documented
  - Examples: Provide examples for complex APIs
  - README: Clear usage instructions

Error Handling:
  - Return errors, don't panic
  - Use errors.New or fmt.Errorf
  - Wrap errors with context (fmt.Errorf with %w)
  - Check all errors explicitly
  - No ignored errors (use _ = explicitly)
```

## Quality Checklist

**Before Declaring Complete:**
- [ ] Code formatted (`gofmt` or `go fmt`)
- [ ] No linting issues (`golangci-lint run`)
- [ ] All tests pass (`go test ./...`)
- [ ] No race conditions (`go test -race ./...`)
- [ ] Test coverage >= 75%
- [ ] All exported items documented
- [ ] All errors checked explicitly
- [ ] No panics in library code
- [ ] Proper error wrapping with context
- [ ] Resource cleanup with defer

## Example Quality Pattern

```go
package config

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Config represents the application configuration.
type Config struct {
	APIKey  string `yaml:"api_key"`
	Timeout int    `yaml:"timeout"`
}

// LoadConfig loads configuration from a YAML file.
//
// It returns an error if the file doesn't exist or contains invalid YAML.
//
// Example:
//
//	config, err := LoadConfig("config.yaml")
//	if err != nil {
//	    log.Fatal(err)
//	}
func LoadConfig(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read config file %s: %w", path, err)
	}

	var config Config
	if err := yaml.Unmarshal(data, &config); err != nil {
		return nil, fmt.Errorf("failed to parse YAML in %s: %w", path, err)
	}

	return &config, nil
}
```

**Table-Driven Test Example:**
```go
package config

import (
	"reflect"
	"testing"
)

func TestLoadConfig(t *testing.T) {
	tests := []struct {
		name    string
		path    string
		want    *Config
		wantErr bool
	}{
		{
			name:    "valid config",
			path:    "testdata/valid.yaml",
			want:    &Config{APIKey: "test-key", Timeout: 30},
			wantErr: false,
		},
		{
			name:    "missing file",
			path:    "testdata/missing.yaml",
			want:    nil,
			wantErr: true,
		},
		{
			name:    "invalid yaml",
			path:    "testdata/invalid.yaml",
			want:    nil,
			wantErr: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := LoadConfig(tt.path)
			if (err != nil) != tt.wantErr {
				t.Errorf("LoadConfig() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			if !reflect.DeepEqual(got, tt.want) {
				t.Errorf("LoadConfig() = %v, want %v", got, tt.want)
			}
		})
	}
}
```

---

*Go-specific quality standards for production-ready code*

154
skills/implementing-features/languages/JAVASCRIPT.md
Normal file
@@ -0,0 +1,154 @@
# JavaScript/TypeScript Quality Standards

**Load this file when:** Implementing features in JavaScript or TypeScript projects

## Validation Commands

**JavaScript:**
```bash
# Linting
npx eslint . --fix

# Formatting
npx prettier --write .

# Tests
npm test

# Full Validation Pipeline
npx eslint . && npx prettier --check . && npm test
```

**TypeScript:**
```bash
# Linting
npx eslint . --fix

# Formatting
npx prettier --write .

# Type Checking
npx tsc --noEmit

# Tests
npm test

# Full Validation Pipeline
npx eslint . && npx prettier --check . && npx tsc --noEmit && npm test
```

## Required Standards

```yaml
Code Style:
  - Line length: 100-120 characters
  - Semicolons: Consistent (prefer using them)
  - Quotes: Single or double (consistent)
  - Trailing commas: Always in multiline

Testing:
  - Framework: Jest, Mocha, or Vitest
  - Coverage: >= 80%
  - Test files: *.test.js, *.spec.js
  - Mocking: Prefer dependency injection
  - Async: Use async/await, not callbacks

Documentation:
  - JSDoc for all exported functions
  - README for packages
  - Type definitions (TypeScript or JSDoc)
  - API documentation for libraries

TypeScript Specific:
  - Strict mode enabled
  - No 'any' types (use 'unknown' if needed)
  - Proper interface/type definitions
  - Generic types where appropriate
  - Discriminated unions for state

Error Handling:
  - Try/catch for async operations
  - Error boundaries (React)
  - Proper promise handling
  - No unhandled promise rejections
```

## Quality Checklist

**Before Declaring Complete:**
- [ ] No linting errors (`eslint .`)
- [ ] Code formatted (`prettier --check .`)
- [ ] Type checking passes (TS: `tsc --noEmit`)
- [ ] All tests pass (`npm test`)
- [ ] Test coverage >= 80%
- [ ] No 'any' types (TypeScript)
- [ ] All exported functions have JSDoc
- [ ] Async operations properly handled
- [ ] Error boundaries implemented (React)
- [ ] No console.log in production code

## Example Quality Pattern

**TypeScript:**
```typescript
import fs from 'node:fs';
import yaml from 'yaml'; // assuming the 'yaml' package

interface Config {
  apiKey: string;
  timeout: number;
}

/**
 * Load configuration from YAML file.
 *
 * @param configPath - Path to configuration file
 * @returns Parsed configuration object
 * @throws {Error} If file doesn't exist or YAML is invalid
 *
 * @example
 * ```ts
 * const config = await loadConfig('./config.yaml');
 * console.log(config.apiKey);
 * ```
 */
export async function loadConfig(configPath: string): Promise<Config> {
  if (!fs.existsSync(configPath)) {
    throw new Error(`Config not found: ${configPath}`);
  }

  try {
    const contents = await fs.promises.readFile(configPath, 'utf-8');
    const config = yaml.parse(contents) as Config;
    return config;
  } catch (error) {
    // In strict mode the caught value is `unknown`; narrow before use.
    const message = error instanceof Error ? error.message : String(error);
    throw new Error(`Invalid YAML in ${configPath}: ${message}`);
  }
}
```

**JavaScript with JSDoc:**
```javascript
import fs from 'node:fs';
import yaml from 'yaml'; // assuming the 'yaml' package

/**
 * @typedef {Object} Config
 * @property {string} apiKey - API key for service
 * @property {number} timeout - Request timeout in ms
 */

/**
 * Load configuration from YAML file.
 *
 * @param {string} configPath - Path to configuration file
 * @returns {Promise<Config>} Parsed configuration object
 * @throws {Error} If file doesn't exist or YAML is invalid
 */
export async function loadConfig(configPath) {
  if (!fs.existsSync(configPath)) {
    throw new Error(`Config not found: ${configPath}`);
  }

  try {
    const contents = await fs.promises.readFile(configPath, 'utf-8');
    const config = yaml.parse(contents);
    return config;
  } catch (error) {
    throw new Error(`Invalid YAML in ${configPath}: ${error.message}`);
  }
}
```

---

*JavaScript/TypeScript-specific quality standards for production-ready code*

101
skills/implementing-features/languages/PYTHON.md
Normal file
@@ -0,0 +1,101 @@
# Python Quality Standards

**Load this file when:** Implementing features in Python projects

## Validation Commands

```bash
# Linting
ruff check . --fix

# Formatting
ruff format .

# Tests
pytest -v

# Type Checking
mypy . --ignore-missing-imports

# Coverage
pytest --cov --cov-report=html

# Full Validation Pipeline
ruff check . && ruff format . && mypy . && pytest
```

## Required Standards

```yaml
Code Style:
  - Line length: 120 characters (configurable)
  - Imports: Sorted with isort style
  - Docstrings: Google or NumPy style
  - Type hints: Everywhere (functions, methods, variables)

Testing:
  - Framework: pytest
  - Coverage: >= 80%
  - Test files: test_*.py or *_test.py
  - Fixtures: Prefer pytest fixtures over setup/teardown
  - Assertions: Use pytest assertions, not unittest

Documentation:
  - All modules: Docstring with purpose
  - All classes: Docstring with attributes
  - All functions: Docstring with args, returns, raises
  - Complex logic: Inline comments for clarity

Error Handling:
  - Use specific exceptions (not bare except)
  - Custom exceptions for domain errors
  - Proper exception chaining
  - Clean resource management (context managers)
```

## Quality Checklist

**Before Declaring Complete:**
- [ ] All functions have type hints
- [ ] All functions have docstrings (Google/NumPy style)
- [ ] No linting errors (`ruff check .`)
- [ ] Code formatted consistently (`ruff format .`)
- [ ] Type checking passes (`mypy .`)
- [ ] All tests pass (`pytest`)
- [ ] Test coverage >= 80%
- [ ] No bare except clauses
- [ ] Proper exception handling
- [ ] Resources properly managed

## Example Quality Pattern

```python
from pathlib import Path
from typing import Any

import yaml


def load_config(config_path: Path) -> dict[str, Any]:
    """Load configuration from YAML file.

    Args:
        config_path: Path to configuration file

    Returns:
        Dictionary containing configuration values

    Raises:
        FileNotFoundError: If config file doesn't exist
        ValueError: If config file is invalid YAML
    """
    if not config_path.exists():
        raise FileNotFoundError(f"Config not found: {config_path}")

    try:
        with config_path.open() as f:
            return yaml.safe_load(f)
    except yaml.YAMLError as e:
        raise ValueError(f"Invalid YAML in {config_path}") from e
```

---

*Python-specific quality standards for production-ready code*

120
skills/implementing-features/languages/RUST.md
Normal file
@@ -0,0 +1,120 @@
# Rust Quality Standards
|
||||
|
||||
**Load this file when:** Implementing features in Rust projects
|
||||
|
||||
## Validation Commands
|
||||
|
||||
```bash
|
||||
# Linting
|
||||
cargo clippy -- -D warnings
|
||||
|
||||
# Formatting
|
||||
cargo fmt
|
||||
|
||||
# Tests
|
||||
cargo test
|
||||
|
||||
# Type Checking (implicit)
|
||||
cargo check
|
||||
|
||||
# Documentation
|
||||
cargo doc --no-deps --open
|
||||
|
||||
# Full Validation Pipeline
|
||||
cargo clippy -- -D warnings && cargo fmt --check && cargo test
|
||||
```
|
||||
|
||||
## Required Standards
|
||||
|
||||
```yaml
|
||||
Code Style:
|
||||
- Follow: Rust API guidelines
|
||||
- Formatting: rustfmt (automatic)
|
||||
- Naming: snake_case for functions, PascalCase for types
|
||||
- Modules: Clear separation of concerns
|
||||
|
||||
Testing:
|
||||
- Framework: Built-in test framework
|
||||
- Coverage: >= 75%
|
||||
- Unit tests: In same file with #[cfg(test)]
|
||||
- Integration tests: In tests/ directory
|
||||
- Doc tests: In documentation examples
|
||||
|
||||
Documentation:
|
||||
- All public items: /// documentation
|
||||
- Modules: //! module-level docs
|
||||
- Examples: Working examples in docs
|
||||
- Safety: Document unsafe blocks thoroughly
|
||||
|
||||
Error Handling:
|
||||
- Use Result<T, E> for fallible operations
|
||||
- Use Option<T> for optional values
|
||||
- No .unwrap() in production code
- Custom error types with thiserror or anyhow
- Proper error context with context/wrap_err
```

## Quality Checklist

**Before Declaring Complete:**
- [ ] No clippy warnings (`cargo clippy -- -D warnings`)
- [ ] Code formatted (`cargo fmt --check`)
- [ ] All tests pass (`cargo test`)
- [ ] No `unwrap()` calls in production code
- [ ] `Result<T, E>` used for all fallible operations
- [ ] All public items documented
- [ ] Examples in documentation tested
- [ ] Unsafe blocks documented with safety comments
- [ ] Proper error types defined
- [ ] Resource cleanup handled (`Drop` trait if needed)

## Example Quality Pattern

```rust
use std::path::Path;
use thiserror::Error;

#[derive(Error, Debug)]
pub enum ConfigError {
    #[error("Config file not found: {0}")]
    NotFound(String),
    #[error("Failed to read config: {0}")]
    Io(#[from] std::io::Error),
    #[error("Invalid YAML: {0}")]
    InvalidYaml(#[from] serde_yaml::Error),
}

/// Load configuration from a YAML file.
///
/// # Arguments
///
/// * `path` - Path to the configuration file
///
/// # Returns
///
/// Returns the parsed configuration or an error.
///
/// # Errors
///
/// Returns `ConfigError::NotFound` if the file doesn't exist.
/// Returns `ConfigError::Io` if the file can't be read.
/// Returns `ConfigError::InvalidYaml` if parsing fails.
///
/// # Examples
///
/// ```no_run
/// let config = load_config(Path::new("config.yaml"))?;
/// ```
pub fn load_config(path: &Path) -> Result<Config, ConfigError> {
    if !path.exists() {
        return Err(ConfigError::NotFound(path.display().to_string()));
    }

    let contents = std::fs::read_to_string(path)?;
    let config: Config = serde_yaml::from_str(&contents)?;
    Ok(config)
}
```

---

*Rust-specific quality standards for production-ready code*
351
skills/initializing-project/DETECTION.md
Normal file
@@ -0,0 +1,351 @@
# Detection & Analysis

This file contains detailed patterns for project analysis, framework detection, and agent orchestration.

## Phase 1: Agent-Orchestrated Discovery

I coordinate specialized agents for parallel analysis:

### Agent Execution Strategy

```yaml
Parallel Agent Execution:
  Framework & Dependencies:
    agent: researcher
    timeout: 10 seconds
    analyzes:
      - Primary programming language and framework
      - Dependencies from package.json/requirements.txt/Cargo.toml/go.mod
      - Test framework and current coverage
      - Build tools and scripts
    output: "Structured YAML with framework, dependencies, and tools"

  Architecture Patterns:
    agent: architect
    timeout: 10 seconds
    analyzes:
      - Architecture patterns (MVC, DDD, VSA, Clean Architecture)
      - Component relationships and boundaries
      - API design patterns
      - Database architecture
      - Technical debt and complexity hotspots
    output: "Structured analysis with patterns, strengths, and concerns"

  Security Assessment:
    agent: security
    timeout: 10 seconds
    analyzes:
      - Security patterns and anti-patterns
      - Common vulnerabilities
      - Authentication/authorization approach
      - Data handling and encryption
      - Dependency security
    output: "Security assessment with risks and recommendations"
```

### Result Consolidation

After all agents complete, consolidate findings:

```yaml
consolidated_analysis:
  framework: "[from researcher agent]"
  language: "[detected primary language]"
  architecture: "[from architect agent]"
  security: "[from security agent]"
  complexity: "[calculated score 0.0-1.0]"
  phase: "[new|growth|legacy based on analysis]"
```
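In Python terms, the consolidation step can be sketched as a small merge that tolerates missing or partial agent output. The field and key names here are illustrative assumptions, not Quaestor's actual API:

```python
from dataclasses import dataclass


@dataclass
class ConsolidatedAnalysis:
    """Merged view of all agent findings, with safe defaults."""
    framework: str = "unknown"
    language: str = "unknown"
    architecture: str = "unknown"
    security: str = "not assessed"
    complexity: float = 0.0  # 0.0-1.0


def consolidate(agent_results: dict) -> ConsolidatedAnalysis:
    """Merge per-agent findings; a missing agent falls back to defaults."""
    researcher = agent_results.get("researcher") or {}
    architect = agent_results.get("architect") or {}
    security = agent_results.get("security") or {}
    return ConsolidatedAnalysis(
        framework=researcher.get("framework", "unknown"),
        language=researcher.get("language", "unknown"),
        architecture=architect.get("pattern", "unknown"),
        security=security.get("summary", "not assessed"),
        complexity=float(architect.get("complexity", 0.0)),
    )
```

Because every field has a default, a failed agent degrades to "unknown" rather than aborting the whole analysis.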
## Phase 2.1: Language Detection

Detect the primary language from package files:

```yaml
language_detection_patterns:
  Python:
    files: [requirements.txt, pyproject.toml, setup.py, Pipfile]
    confidence: high
    indicators:
      - "import statements in .py files"
      - "pip/poetry/pipenv config"

  TypeScript:
    files: [package.json with "typescript" dependency, tsconfig.json]
    confidence: high
    indicators:
      - ".ts or .tsx files"
      - "typescript in devDependencies"

  JavaScript:
    files: [package.json without typescript]
    confidence: medium
    indicators:
      - ".js or .jsx files"
      - "node_modules directory"

  Rust:
    files: [Cargo.toml, Cargo.lock]
    confidence: high
    indicators:
      - ".rs files"
      - "cargo workspace config"

  Go:
    files: [go.mod, go.sum]
    confidence: high
    indicators:
      - ".go files"
      - "go directive in go.mod"

  Java:
    files: [pom.xml, build.gradle, build.gradle.kts]
    confidence: high
    indicators:
      - ".java files"
      - "maven/gradle config"

  Ruby:
    files: [Gemfile, Gemfile.lock]
    confidence: high
    indicators:
      - ".rb files"
      - "bundler config"
```

### Detection Algorithm

```
1. Check for language-specific config files (highest confidence)
2. Count files by extension in the src/ directory
3. Parse package manager files to identify the language
4. Assign confidence scores based on indicators
5. Select the language with the highest confidence
```
## Phase 2.2: Load Language Configuration

For the detected language, load defaults from `src/quaestor/core/languages.yaml`:

```yaml
# Example for Python
python:
  lint_command: "ruff check ."
  format_command: "ruff format ."
  test_command: "pytest"
  coverage_command: "pytest --cov"
  type_check_command: "mypy ."
  quick_check_command: "ruff check . && pytest -x"
  full_check_command: "ruff check . && ruff format --check . && mypy . && pytest"
  code_formatter: "ruff"
  testing_framework: "pytest"
  coverage_threshold_percent: ">= 80%"

# Example for TypeScript/JavaScript
typescript:
  lint_command: "eslint ."
  format_command: "prettier --write ."
  test_command: "npm test"
  coverage_command: "npm run test:coverage"
  type_check_command: "tsc --noEmit"
  quick_check_command: "eslint . && npm test"
  full_check_command: "eslint . && prettier --check . && tsc --noEmit && npm test"
  code_formatter: "prettier"
  testing_framework: "jest"
  coverage_threshold_percent: ">= 80%"

# Example for Rust
rust:
  lint_command: "cargo clippy"
  format_command: "cargo fmt"
  test_command: "cargo test"
  coverage_command: "cargo tarpaulin"
  type_check_command: "cargo check"
  quick_check_command: "cargo clippy && cargo test"
  full_check_command: "cargo clippy && cargo fmt --check && cargo test"
  code_formatter: "rustfmt"
  testing_framework: "cargo test"
  coverage_threshold_percent: ">= 75%"
```
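A minimal loader for these defaults might look like the following. The dictionary mirrors a subset of the YAML above; the real implementation reads `src/quaestor/core/languages.yaml` rather than hard-coding values, and the generic fallback shown here is an assumption:

```python
# Illustrative defaults mirroring part of the YAML above.
LANGUAGE_DEFAULTS = {
    "python": {
        "lint_command": "ruff check .",
        "test_command": "pytest",
        "coverage_threshold_percent": ">= 80%",
    },
    "rust": {
        "lint_command": "cargo clippy",
        "test_command": "cargo test",
        "coverage_threshold_percent": ">= 75%",
    },
}


def load_language_config(language: str) -> dict:
    """Return the known defaults, or a conservative generic set.

    Unknown languages get None commands (flagged for manual review)
    but keep a sensible coverage target.
    """
    generic = {
        "lint_command": None,
        "test_command": None,
        "coverage_threshold_percent": ">= 80%",
    }
    return {**generic, **LANGUAGE_DEFAULTS.get(language, {})}
```

The merge order means language-specific values always win over the generic defaults.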
## Framework-Specific Intelligence

### React/Frontend Projects

```yaml
react_detection:
  indicators:
    - "react" in package.json dependencies
    - ".jsx or .tsx files"
    - "src/components/" directory structure

  analysis:
    state_management:
      patterns: [Redux, Context API, Zustand, Recoil, MobX]
      detection: "Search for store/context setup files"

    component_patterns:
      types: [HOC, Hooks, Render Props, Class Components]
      detection: "Analyze component file structure"

    architecture:
      types: [SPA, SSR, Static Site]
      detection: "Check for Next.js, Gatsby, or CRA setup"

  quality_gates:
    defaults: "ESLint + Prettier + TypeScript"
    detection: "Parse .eslintrc and tsconfig.json"
```

### Python/Backend Projects

```yaml
python_detection:
  frameworks:
    Django:
      indicators: ["django" in requirements, manage.py, settings.py]
      architecture: "MTV (Model-Template-View)"

    FastAPI:
      indicators: ["fastapi" in requirements, main.py with app = FastAPI()]
      architecture: "API-first, async-native"

    Flask:
      indicators: ["flask" in requirements, app.py]
      architecture: "Microframework, flexible"

  patterns:
    detection: [MVC, Repository, Service Layer, Domain-Driven Design]
    analysis: "Examine directory structure and import patterns"

  testing:
    frameworks: [pytest, unittest, nose2]
    detection: "Check test files and conftest.py"

  quality_gates:
    defaults: "ruff + mypy + pytest"
    detection: "Parse pyproject.toml and setup.cfg"
```

### Rust Projects

```yaml
rust_detection:
  frameworks:
    Axum:
      indicators: ["axum" in Cargo.toml, "use axum::"]
      architecture: "Async, Tower middleware"

    Rocket:
      indicators: ["rocket" in Cargo.toml, "#[rocket::"]
      architecture: "Type-safe, batteries-included"

    Actix:
      indicators: ["actix-web" in Cargo.toml, "use actix_web::"]
      architecture: "Actor-based, high performance"

  patterns:
    detection: [Hexagonal, Clean Architecture, Layered]
    analysis: "Examine module structure and trait boundaries"

  testing:
    default: "cargo test with built-in test framework"
    detection: "#[test] and #[cfg(test)] attributes"

  quality_gates:
    defaults: "clippy + rustfmt + cargo test"
    detection: "Cargo.toml and clippy.toml"
```
## Project Phase Detection

Analyze git history and project metrics to determine the project phase:

```yaml
phase_detection:
  Startup (0-6 months):
    indicators:
      - Commit count: < 200
      - Contributors: 1-3
      - Files: < 100
      - Test coverage: < 60%
    focus: "MVP Foundation, Core Features, User Feedback"

  Growth (6-18 months):
    indicators:
      - Commit count: 200-1000
      - Contributors: 3-10
      - Files: 100-500
      - Test coverage: 60-80%
    focus: "Performance, Feature Expansion, Production Hardening"

  Enterprise (18+ months):
    indicators:
      - Commit count: > 1000
      - Contributors: > 10
      - Files: > 500
      - Test coverage: > 80%
    focus: "Architecture Evolution, Scalability, Platform Maturation"
```
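The threshold table can be collapsed into a small classifier. Treating "any metric over the enterprise line" as enterprise is an assumption about how the bands combine; the real analysis weighs git history more carefully:

```python
def detect_phase(commits: int, contributors: int, files: int) -> str:
    """Map repository metrics to a project phase using the table's bands."""
    if commits > 1000 or contributors > 10 or files > 500:
        return "enterprise"
    if commits >= 200 or contributors > 3 or files >= 100:
        return "growth"
    return "startup"
```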
## Error Handling & Graceful Degradation

```yaml
error_handling:
  researcher_agent_fails:
    fallback:
      - Use basic file detection (package.json, requirements.txt)
      - Count files by extension
      - Parse package manager files directly
    log: "Framework detection limited - manual review recommended"
    continue: true

  architect_agent_fails:
    fallback:
      - Use simplified pattern detection based on folder structure
      - Check for common patterns (models/, views/, controllers/)
      - Infer from file naming conventions
    log: "Architecture analysis incomplete - patterns may be missed"
    continue: true

  security_agent_fails:
    fallback:
      - Flag for manual security review
      - Skip security-specific recommendations
      - Use generic security best practices
    log: "Security assessment skipped - manual review required"
    continue: true

  timeout_handling:
    total_time_limit: 30 seconds
    individual_agent_timeout: 10 seconds
    strategy: "Kill agent on timeout, use partial results if available"

  missing_config_files:
    strategy: "Use sensible defaults for detected language"
    log: "Using default configuration for [language]"
    continue: true
```
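The timeout-plus-fallback behavior can be sketched with `concurrent.futures`. This is a simplified model: a timed-out thread cannot actually be killed, so a production version would isolate agents in subprocesses. The function names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as AgentTimeout


def run_agent_with_fallback(agent_fn, fallback_fn, timeout_s: float = 10.0):
    """Run one agent; on timeout or crash, degrade to a cheap heuristic.

    Returns (result, note) where note is None on clean success, or a
    log-style message explaining which fallback path was taken.
    """
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(agent_fn)
        try:
            return future.result(timeout=timeout_s), None
        except AgentTimeout:
            return fallback_fn(), "timeout - used fallback"
        except Exception as exc:
            return fallback_fn(), f"agent failed ({exc}) - used fallback"
```

Matching the YAML above, the caller logs the note and continues rather than aborting initialization.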
## Performance Optimization

```yaml
optimization:
  parallel_execution:
    - Run all 3 agents simultaneously using the Task tool
    - Agents are independent, no sequential dependencies
    - Reduces total analysis time from 30s to ~10s

  caching:
    - Cache language detection results during the session
    - Cache parsed package manager files
    - Avoid redundant file system scans

  early_termination:
    - If all agents complete in < 5s, proceed immediately
    - Don't wait for the full timeout period
```
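The parallel-execution point can be illustrated directly: running independent agents on a thread pool makes total wall time roughly that of the slowest agent rather than the sum. The function name is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor


def run_agents_parallel(agents: dict) -> dict:
    """Run independent agent callables at once; collect results by name."""
    with ThreadPoolExecutor(max_workers=max(len(agents), 1)) as pool:
        futures = {name: pool.submit(fn) for name, fn in agents.items()}
        return {name: future.result() for name, future in futures.items()}
```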

---

*This file provides comprehensive detection patterns and agent orchestration strategies for intelligent project analysis.*
485
skills/initializing-project/EXAMPLES.md
Normal file
@@ -0,0 +1,485 @@
# Complete Initialization Examples

This file provides complete end-to-end examples of project initialization for different project types.

## Example 1: Python FastAPI Project

### User Request
```
User: "Initialize Quaestor in this Python project"
```

### My Response - Phase 1 & 2 (Analysis)
```
🔍 Analyzing your project...

Spawning specialized agents for analysis:
- researcher: Analyzing dependencies and framework
- architect: Evaluating architecture patterns
- security: Assessing security posture

[Agents complete in ~8 seconds]
```

### My Response - Phase 3 (Validation)
```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: FastAPI with SQLAlchemy
- Architecture: Hexagonal (Ports & Adapters)
- Complexity: 0.65/1.0 (moderate)
- Phase: Growth (6-18 months)

**Quality Standards:**
- Testing: pytest with 75% coverage
- Linting: ruff with pyproject.toml config
- Type checking: mypy in strict mode
- CI/CD: GitHub Actions detected
- Security: No major vulnerabilities detected

**Files Analyzed:**
- pyproject.toml: FastAPI, SQLAlchemy, pytest dependencies
- src/domain/: Clean domain layer detected
- src/infrastructure/: Repository pattern found
- tests/: Good test coverage structure

## Your Options:
- ✅ Proceed with detected setup
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup

What would you prefer for the initial setup?
```

### User Response
```
User: "Proceed with detected setup"
```

### My Response - Phase 4 (Completion)
```
✅ Initialization complete!

**Generated Documentation:**
- .quaestor/AGENT.md (AI behavioral rules)
- .quaestor/ARCHITECTURE.md (with Python quality standards)
- CLAUDE.md (main entry point)

**Directory Structure:**
- .quaestor/specs/draft/
- .quaestor/specs/active/
- .quaestor/specs/completed/
- .quaestor/specs/archived/

**Next Steps:**
1. Review .quaestor/ARCHITECTURE.md Section 3 to customize commands
2. Create your first specification: "Create a spec for..."
3. Start development with the Research → Plan → Implement workflow
```

### Generated ARCHITECTURE.md (Section 3 excerpt)
```markdown
## 3. CODE QUALITY STANDARDS

### Linting and Formatting
- **Linter**: `ruff check .`
- **Formatter**: `ruff format .`
- **Code Formatter**: ruff
- **Quick Check**: `ruff check . && pytest -x`
- **Full Validation**: `ruff check . && ruff format --check . && mypy . && pytest`

### Testing Requirements
- **Test Runner**: `pytest`
- **Coverage**: `pytest --cov`
- **Coverage Threshold**: >= 80%
- **Testing Framework**: pytest
```

---

## Example 2: React TypeScript Project with Modifications

### User Request
```
User: "Set up Quaestor with intelligent project analysis"
```

### My Response - Phase 1 & 2 (Analysis)
```
🔍 Analyzing your project...

Spawning specialized agents:
- researcher: Framework and dependency analysis
- architect: Architecture pattern evaluation
- security: Security assessment

[Analysis complete in ~9 seconds]
```

### My Response - Phase 3 (Validation)
```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: React with TypeScript
- Architecture: Component-based with Redux state management
- Complexity: 0.70/1.0 (high)
- Phase: Growth (6-18 months)

**Quality Standards:**
- Testing: Jest with React Testing Library, 82% coverage
- Linting: ESLint with Airbnb config
- Type checking: TypeScript strict mode
- CI/CD: GitHub Actions detected
- Security: 2 outdated dependencies (non-critical)

**Files Analyzed:**
- package.json: React 18.2, TypeScript 5.0, Jest, ESLint
- src/components/: Hooks-based component architecture
- src/store/: Redux Toolkit slices and sagas
- tests/: Comprehensive test suite

## Your Options:
- ✅ Proceed with detected setup
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup

What would you prefer for the initial setup?
```

### User Response (Modification Path)
```
User: "Modify detected patterns"
```

### My Follow-up
```
What would you like to adjust?

Options to modify:
- Coverage threshold (currently >= 80%)
- Linting rules
- Architecture pattern description
- Project phase classification
- Quality standards
```

### User Modification
```
User: "Change coverage requirement to 90% and add stricter ESLint rules"
```

### My Confirmation
```
Updated:
- Coverage threshold: >= 90%
- ESLint: Added stricter rules to config

Anything else to modify?
```

### User Confirmation
```
User: "No, proceed"
```

### My Response - Phase 4 (Completion)
```
✅ Initialization complete with your customizations!

**Generated Documentation:**
- .quaestor/AGENT.md
- .quaestor/ARCHITECTURE.md (with TypeScript quality standards + your customizations)
- CLAUDE.md

**Customizations Applied:**
- Coverage threshold set to >= 90%
- Stricter ESLint configuration documented

**Next Steps:**
1. Review .quaestor/ARCHITECTURE.md - your customizations are in Section 3
2. Run your first quality check: `npm run lint && npm test`
3. Create specifications and start development
```

---

## Example 3: Rust Axum Project with Custom Architecture

### User Request
```
User: "Analyze my project structure and initialize Quaestor"
```

### My Response - Phase 3 (Validation)
```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: Axum with Tokio and SQLx
- Architecture: Layered with clear module boundaries
- Complexity: 0.55/1.0 (moderate)
- Phase: Startup (0-6 months)

**Quality Standards:**
- Testing: cargo test with 68% coverage
- Linting: clippy with custom rules
- Type checking: Rust's built-in type system
- CI/CD: No CI detected
- Security: All dependencies up to date

**Files Analyzed:**
- Cargo.toml: axum 0.7, tokio 1.35, sqlx 0.7
- src/lib.rs: Well-structured module hierarchy
- src/handlers/: Clean separation of concerns
- tests/: Integration tests present

## Your Options:
- ✅ Proceed with detected setup
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup

What would you prefer?
```

### User Response (Custom Architecture Path)
```
User: "Custom architecture description"
```

### My Follow-up
```
Please describe your project architecture. I'll use your description instead of the detected patterns.

What I'd like to know:
- Architecture pattern (if different from "Layered")
- Key components and responsibilities
- Design principles you're following
- Quality standards and testing strategy
```

### User Description
```
User: "We use Hexagonal Architecture with Domain-Driven Design. The domain layer is completely isolated, the application layer orchestrates use cases, and the infrastructure layer handles persistence and external APIs. We follow CQRS for complex operations and aim for 85% test coverage with property-based testing."
```

### My Confirmation
```
Got it! I'll document your architecture as:

- **Pattern**: Hexagonal Architecture with Domain-Driven Design
- **Layers**: Domain (isolated), Application (orchestration), Infrastructure (adapters)
- **Patterns**: CQRS for complex operations
- **Testing**: 85% coverage with property-based testing

Does this accurately represent your architecture?
```

### User Confirmation
```
User: "Yes, perfect"
```

### My Response - Phase 4 (Completion)
```
✅ Initialization complete with your custom architecture!

**Generated Documentation:**
- .quaestor/AGENT.md
- .quaestor/ARCHITECTURE.md (with your Hexagonal/DDD architecture)
- CLAUDE.md

**Your Architecture:**
- Pattern: Hexagonal Architecture with Domain-Driven Design
- CQRS pattern documented
- 85% coverage target configured
- Property-based testing noted

**Next Steps:**
1. Review .quaestor/ARCHITECTURE.md Section 2 - your architecture is documented
2. Your custom testing strategy is in Section 5
3. Start implementing specifications with your architecture patterns
```

---

## Example 4: Minimal Setup for New Project

### User Request
```
User: "Initialize Quaestor in this project"
```

### Context
The user has an empty/new project with minimal files.

### My Response - Phase 3 (Validation)
```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: Unable to detect (minimal files found)
- Architecture: Unknown (new project)
- Complexity: 0.0/1.0 (new project)
- Phase: Startup (0-6 months)

**Quality Standards:**
- No existing quality tools detected
- No tests found
- No CI/CD configured

**Files Analyzed:**
- No significant project files found
- This appears to be a new project

**Recommendation:** Consider using the minimal setup and configuring quality standards manually as your project develops.

## Your Options:
- ✅ Proceed with detected setup (basic defaults)
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup (recommended for new projects)

What would you prefer?
```

### User Response
```
User: "Start with minimal setup"
```

### My Response - Phase 4 (Completion)
```
✅ Minimal setup complete!

**Generated Documentation:**
- .quaestor/AGENT.md (standard workflow rules)
- .quaestor/ARCHITECTURE.md (template - customize as you develop)
- CLAUDE.md

**What to do next:**
1. Edit .quaestor/ARCHITECTURE.md as your project takes shape:
   - Fill in Section 1: Project Configuration
   - Define Section 2: Architecture Overview
   - Configure Section 3: Code Quality Standards (add commands as you adopt tools)

2. Install quality tools for your language/framework:
   - Python: `pip install ruff mypy pytest`
   - TypeScript: `npm install -D eslint prettier jest`
   - Rust: Already has clippy, rustfmt, cargo test

3. Return anytime to re-analyze:
   - "Analyze my project" to update configuration
   - All skills remain available via plugin/CLI

**Minimal Structure Created:**
- .quaestor/specs/draft/
- .quaestor/specs/active/
- .quaestor/specs/completed/
- .quaestor/specs/archived/
```

---

## Example 5: Error Handling - Agent Failure

### Scenario
One or more agents fail during analysis.

### My Response (Graceful Degradation)
```
🔍 Analyzing your project...

Spawning specialized agents:
- researcher: ✓ Complete (Python FastAPI detected)
- architect: ⚠ Timeout (10s limit reached)
- security: ✓ Complete (No major issues)

Note: Architecture analysis incomplete. Using basic pattern detection.

## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: FastAPI with SQLAlchemy
- Architecture: Basic pattern detection (architect agent timed out)
  - Detected: Repository pattern in infrastructure/
  - Detected: Domain models in domain/
  - Recommendation: Review and customize the architecture section
- Complexity: 0.50/1.0 (estimated from available metrics)
- Phase: Growth (6-18 months)

**Quality Standards:**
- Testing: pytest detected
- Linting: ruff detected
- Type checking: mypy in pyproject.toml
- Security: No major vulnerabilities

**Files Analyzed:**
- pyproject.toml: Dependencies analyzed
- src/: Basic structure reviewed
- tests/: Test suite found

⚠ **Note**: Architecture analysis was incomplete. The generated ARCHITECTURE.md will have a basic template in Section 2. Please review and customize it based on your actual architecture.

## Your Options:
- ✅ Proceed with detected setup (with basic architecture template)
- 🔄 Modify detected patterns
- 📝 Custom architecture description (recommended due to incomplete analysis)
- 🚫 Start with minimal setup

What would you prefer?
```

This shows graceful degradation when agents fail - I continue with available data and clearly communicate limitations.

---

## Common Workflows Summary

### Quick Accept (Most Common)
```
1. User: "Initialize Quaestor"
2. Me: [Analysis + Validation with all detected info]
3. User: "Proceed"
4. Me: [Complete setup]
```

### Modification Path
```
1. User: "Initialize Quaestor"
2. Me: [Analysis + Validation]
3. User: "Modify detected patterns"
4. Me: "What to adjust?"
5. User: [Specific changes]
6. Me: [Confirmation]
7. User: "Proceed"
8. Me: [Complete with modifications]
```

### Custom Architecture Path
```
1. User: "Initialize Quaestor"
2. Me: [Analysis + Validation]
3. User: "Custom architecture description"
4. Me: "Please describe your architecture"
5. User: [Detailed architecture explanation]
6. Me: [Confirmation of understanding]
7. User: "Yes"
8. Me: [Complete with custom architecture]
```

### Minimal Setup Path
```
1. User: "Initialize Quaestor" (in an empty/new project)
2. Me: [Analysis shows minimal project]
3. User: "Minimal setup"
4. Me: [Create basic templates for the user to customize]
```

---

*These examples demonstrate the complete initialization workflow across different project types and user interaction patterns.*
104
skills/initializing-project/SKILL.md
Normal file
@@ -0,0 +1,104 @@
---
name: Initializing Project
description: Intelligent project analysis with auto-framework detection and adaptive setup. Use when user wants to initialize Quaestor, setup a new project, or analyze existing project structure.
allowed-tools: [Read, Bash, Glob, Grep, Edit, Write, Task, TodoWrite]
---

# Initializing Project

I help you intelligently initialize Quaestor in your project with automatic framework detection, architecture analysis, and customized documentation generation.

## When to Use Me

- User says "initialize project", "setup quaestor", "analyze my project structure"
- Starting Quaestor in a new or existing project
- Migrating an existing project to Quaestor
- Need intelligent project analysis and setup
- User asks "how do I set up quaestor?"

## Supporting Files

This skill uses several supporting files for detailed workflows:

- **@DETECTION.md** - Language and framework detection patterns, agent orchestration
- **@TEMPLATES.md** - Document templates, variable mappings, skills installation
- **@VALIDATION.md** - Mandatory user validation workflow
- **@EXAMPLES.md** - Full initialization examples for different project types

## My Process

I follow a 4-phase workflow to intelligently initialize your project:

### Phase 1: Project Analysis 🔍

I coordinate specialized agents (researcher, architect, security) to analyze your project in parallel. They examine:
- Framework and dependencies
- Architecture patterns and design
- Security posture and vulnerabilities
- Project complexity and phase

**See @DETECTION.md for detailed agent orchestration and detection patterns**

### Phase 2: Document Generation ⚡

I detect your project's language, load language-specific configurations, and generate customized documentation:
- AGENT.md (AI behavioral rules)
- ARCHITECTURE.md (with your language's quality standards)
- CLAUDE.md (main entry point)

**See @TEMPLATES.md for document templates and variable mappings**

### Phase 3: User Validation ✅ **[MANDATORY]**

I present my analysis and **MUST** get your approval before proceeding. You'll see:
- Detected framework and architecture
- Quality standards and tools
- Options to proceed, modify, customize, or use minimal setup

**See @VALIDATION.md for the complete validation workflow**

### Phase 4: Setup Completion 🚀

After your approval, I create the directory structure, generate all documentation, install skills, and provide next steps.

## Error Handling

I handle failures gracefully:
- **Agent failures**: Fall back to basic detection, continue with available data
- **Time limits**: 30s total, 10s per agent
- **Missing data**: Use sensible defaults and flag for manual review

**See @DETECTION.md for detailed error handling strategies**

## Next Steps After Initialization

After successful initialization:

### 1. Review and Customize Documentation
- `.quaestor/AGENT.md` - AI behavioral rules
- `.quaestor/ARCHITECTURE.md` - Architecture and quality standards (edit Section 3 to customize commands)

### 2. Start Development
- Create specifications: "Create a spec for [feature]" or use the spec-writing skill
- Check progress: "What's the current project status?"
- Implement: "Implement spec-feature-001" or use the `/impl` command
- Review: "Review my changes and create a PR" or use the `/review` command

### 3. Available Skills
All Quaestor skills are available via the plugin or CLI installation. Skills are loaded from the plugin/package and accessed directly by Claude Code.

**See @TEMPLATES.md for customization details and @EXAMPLES.md for complete workflows**

## Success Criteria

- ✅ Framework and architecture accurately detected
- ✅ USER VALIDATION COMPLETED (mandatory)
- ✅ ARCHITECTURE.md generated with language-specific quality standards
- ✅ Directory structure created
- ✅ Project ready for specification-driven development

**See @EXAMPLES.md for complete initialization walkthroughs**

---

*I provide intelligent project initialization with automatic framework detection, architecture analysis, and customized documentation generation. Just tell me to initialize your project, and I'll handle the rest!*
|
||||
280
skills/initializing-project/TEMPLATES.md
Normal file
@@ -0,0 +1,280 @@
# Document Templates & Generation

This file describes the document generation process, template variables, and skills installation.

## Generated Documents Overview

```yaml
generated_documents:
  AGENT.md:
    location: ".quaestor/AGENT.md"
    source: "src/quaestor/agent.md"
    processing: "Copy template as-is"
    purpose: "AI behavioral rules and workflow enforcement"

  ARCHITECTURE.md:
    location: ".quaestor/ARCHITECTURE.md"
    source: "src/quaestor/architecture.md"
    processing: "Jinja2 template with variable substitution"
    purpose: "Project architecture, patterns, and quality standards"

  CLAUDE.md:
    location: "CLAUDE.md"  # project root
    source: "src/quaestor/include.md"
    processing: "Merge with existing or create new"
    purpose: "Main entry point with Quaestor configuration"
```

## ARCHITECTURE.md Template Variables

ARCHITECTURE.md uses Jinja2 templating with variables populated from the detected language config:

### Section 1: Project Configuration

```yaml
template_variables:
  project_name: "[Detected from directory name or git config]"
  project_type: "[Detected from framework: web-api, web-app, library, cli, etc.]"
  language_display_name: "[Human-readable: Python, TypeScript, Rust]"
  primary_language: "[Code: python, typescript, rust]"
  config_system_version: "2.0"
  strict_mode: "[true if complexity > 0.7, else false]"

  build_tool: "[Detected: cargo, npm, pip, gradle]"
  package_manager: "[Detected: cargo, npm/yarn, pip/poetry, maven]"
  language_server: "[Optional: pyright, rust-analyzer, tsserver]"
  virtual_env: "[Optional: venv, conda, nvm]"
  dependency_management: "[Detected from package files]"
```

### Section 3: Code Quality Standards

These variables are populated from `src/quaestor/core/languages.yaml`:

```yaml
quality_standards:
  lint_command: "{{ lint_command }}"          # e.g., "ruff check ."
  format_command: "{{ format_command }}"      # e.g., "ruff format ."
  test_command: "{{ test_command }}"          # e.g., "pytest"
  coverage_command: "{{ coverage_command }}"  # e.g., "pytest --cov"
  type_check_command: "{{ type_check_command }}"  # e.g., "mypy ."

  quick_check_command: "{{ quick_check_command }}"
  # e.g., "ruff check . && pytest -x"

  full_check_command: "{{ full_check_command }}"
  # e.g., "ruff check . && ruff format --check . && mypy . && pytest"

  code_formatter: "{{ code_formatter }}"        # e.g., "ruff"
  testing_framework: "{{ testing_framework }}"  # e.g., "pytest"
  coverage_threshold_percent: "{{ coverage_threshold_percent }}"  # e.g., ">= 80%"
```

### Section 6: Security & Performance

```yaml
security_performance:
  has_security_scanner: "{{ has_security_scanner }}"    # "true" or "false"
  security_scan_command: "{{ security_scan_command }}"  # e.g., "bandit -r ."
  security_scanner: "{{ security_scanner }}"            # e.g., "bandit"

  has_profiler: "{{ has_profiler }}"              # "true" or "false"
  profile_command: "{{ profile_command }}"        # e.g., "py-spy top"
  performance_budget: "{{ performance_budget }}"  # e.g., "< 200ms p95"
```

### Section 8: Quality Thresholds

```yaml
quality_thresholds:
  coverage_threshold_percent: "{{ coverage_threshold_percent }}"
  max_duplication: "{{ max_duplication }}"      # e.g., "3%"
  max_debt_hours: "{{ max_debt_hours }}"        # e.g., "40 hours"
  max_bugs_per_kloc: "{{ max_bugs_per_kloc }}"  # e.g., "0.5"

  current_coverage: "{{ current_coverage }}"        # e.g., "0% (not yet measured)"
  current_duplication: "{{ current_duplication }}"  # e.g., "N/A"
  current_debt: "{{ current_debt }}"                # e.g., "N/A"
  current_bug_density: "{{ current_bug_density }}"  # e.g., "N/A"
  main_config_available: "{{ main_config_available }}"  # true or false
```

### Section 10: Project Standards

```yaml
project_standards:
  max_build_time: "{{ max_build_time }}"    # e.g., "< 5 minutes"
  max_bundle_size: "{{ max_bundle_size }}"  # e.g., "< 250KB gzipped"
  memory_threshold: "{{ memory_threshold }}"  # e.g., "< 512MB"
  retry_configuration: "{{ retry_configuration }}"  # e.g., "3 retries with exponential backoff"
  fallback_behavior: "{{ fallback_behavior }}"  # e.g., "Fail gracefully with user message"
  rule_enforcement: "{{ rule_enforcement }}"    # e.g., "Enforced on commit (pre-commit hooks)"
  pre_edit_script: "{{ pre_edit_script }}"      # e.g., "ruff check"
  post_edit_script: "{{ post_edit_script }}"    # e.g., "ruff format"
```

## Variable Population Process

### Step 1: Detect Language

Use the patterns from @DETECTION.md to identify the primary language.

### Step 2: Load Language Config

```python
# Read src/quaestor/core/languages.yaml and extract the section
# for the detected language
import yaml

with open("src/quaestor/core/languages.yaml") as f:
    config = yaml.safe_load(f)
lang_config = config[detected_language]  # e.g., config["python"]
```

### Step 3: Populate Template

```python
# Use Jinja2 to render the template
from jinja2 import Template

template = Template(open("src/quaestor/architecture.md").read())
rendered = template.render(**lang_config, **project_metadata)
```

### Step 4: Write Output

```python
# Write the rendered document to .quaestor/ARCHITECTURE.md
output_path = ".quaestor/ARCHITECTURE.md"
with open(output_path, "w") as f:
    f.write(rendered)
```
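
The four steps above can be combined into one small rendering function. This is a dependency-free sketch using the stdlib `string.Template` as a stand-in for Jinja2 (so `{{ var }}` becomes `$var` here); the real pipeline renders the template with Jinja2 as shown in Step 3, and the function and sample names below are illustrative, not Quaestor's actual API.

```python
from string import Template

def render_architecture(template_text: str, lang_config: dict, project_metadata: dict) -> str:
    """Merge language config and project metadata, then substitute variables."""
    variables = {**lang_config, **project_metadata}
    # safe_substitute leaves unknown $placeholders intact instead of raising
    return Template(template_text).safe_substitute(variables)

template_text = "# $project_name\n- Linter: $lint_command\n- Tests: $test_command"
rendered = render_architecture(
    template_text,
    {"lint_command": "ruff check .", "test_command": "pytest"},
    {"project_name": "demo-api"},
)
print(rendered)
```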

## Language-Specific Template Examples

### Python Project

```markdown
## 3. CODE QUALITY STANDARDS

### Linting and Formatting
- **Linter**: `ruff check .`
- **Formatter**: `ruff format .`
- **Code Formatter**: ruff
- **Quick Check**: `ruff check . && pytest -x`
- **Full Validation**: `ruff check . && ruff format --check . && mypy . && pytest`

### Testing Requirements
- **Test Runner**: `pytest`
- **Coverage**: `pytest --cov`
- **Coverage Threshold**: >= 80%
- **Testing Framework**: pytest
```

### TypeScript Project

```markdown
## 3. CODE QUALITY STANDARDS

### Linting and Formatting
- **Linter**: `eslint .`
- **Formatter**: `prettier --write .`
- **Code Formatter**: prettier
- **Quick Check**: `eslint . && npm test`
- **Full Validation**: `eslint . && prettier --check . && tsc --noEmit && npm test`

### Testing Requirements
- **Test Runner**: `npm test`
- **Coverage**: `npm run test:coverage`
- **Coverage Threshold**: >= 80%
- **Testing Framework**: jest
```

### Rust Project

```markdown
## 3. CODE QUALITY STANDARDS

### Linting and Formatting
- **Linter**: `cargo clippy`
- **Formatter**: `cargo fmt`
- **Code Formatter**: rustfmt
- **Quick Check**: `cargo clippy && cargo test`
- **Full Validation**: `cargo clippy && cargo fmt --check && cargo test`

### Testing Requirements
- **Test Runner**: `cargo test`
- **Coverage**: `cargo tarpaulin`
- **Coverage Threshold**: >= 75%
- **Testing Framework**: cargo test
```

## Skills Availability

All Quaestor skills are available via:
- **Plugin installation**: Skills loaded from the plugin package
- **CLI installation** (`uvx quaestor`): Skills loaded from the installed package

Skills are NOT copied to the project directory. They remain in the plugin/package and are accessed directly by Claude Code.

## CLAUDE.md Merging

### Merge Strategy

```yaml
claude_md_handling:
  if_exists:
    strategy: "Merge Quaestor config with existing content"
    process:
      - Check for existing QUAESTOR CONFIG markers
      - "If found: Replace old config with new"
      - "If not found: Prepend Quaestor config to existing content"
    preserve: "All user custom content"

  if_not_exists:
    strategy: "Create new CLAUDE.md from template"
    content: "src/quaestor/include.md"
```
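
The merge strategy above can be sketched with plain string handling on the QUAESTOR CONFIG markers: replace the old marker-delimited block when it exists, otherwise prepend the config, preserving all user content either way. The function name is illustrative, not Quaestor's actual API.

```python
START = "<!-- QUAESTOR CONFIG START -->"
END = "<!-- QUAESTOR CONFIG END -->"

def merge_claude_md(existing: str, new_config: str) -> str:
    """Merge a Quaestor config block into existing CLAUDE.md content."""
    if START in existing and END in existing:
        # Replace the old config block between the markers with the new one
        before = existing.split(START, 1)[0]
        after = existing.split(END, 1)[1]
        return before + new_config + after
    # No markers found: prepend the config, keeping all user content
    return new_config + "\n\n" + existing

config = f"{START}\nRead @.quaestor/AGENT.md first.\n{END}"
print(merge_claude_md("# My project notes\n", config))
```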

### CLAUDE.md Template

```markdown
<!-- QUAESTOR CONFIG START -->
[!IMPORTANT]
**Claude:** This project uses Quaestor for AI context management.
Please read the following files in order:
@.quaestor/AGENT.md - AI behavioral rules and workflow enforcement
@.quaestor/ARCHITECTURE.md - Project architecture, standards, and quality guidelines
@.quaestor/specs/active/ - Active specifications and implementation details
<!-- QUAESTOR CONFIG END -->

<!-- Your custom content below -->
```

## Customization After Generation

Users can customize the generated ARCHITECTURE.md:

### Common Customizations

```markdown
# Example: Customize test command for specific project needs

# Before (default)
- **Test Runner**: `pytest`

# After (customized for project)
- **Test Runner**: `pytest -xvs --cov=src --cov-report=html`

# Example: Add project-specific linting rules

# Before (default)
- **Linter**: `ruff check .`

# After (customized)
- **Linter**: `ruff check . --select E,F,W,I,N --ignore E501`
```

### Where to Customize

1. Open `.quaestor/ARCHITECTURE.md`
2. Navigate to **Section 3: CODE QUALITY STANDARDS**
3. Edit command values directly
4. Save the file - changes take effect immediately

**Important**: Users should edit `.quaestor/ARCHITECTURE.md` directly, not the template files in `src/quaestor/`.

---

*This file provides complete documentation templates and variable mappings for intelligent document generation.*
367
skills/initializing-project/VALIDATION.md
Normal file
@@ -0,0 +1,367 @@
# User Validation Workflow

This file describes the mandatory user validation process that must occur before project initialization completes.

## Phase 3: User Validation ✅ **[MANDATORY]**

⚠️ **CRITICAL ENFORCEMENT RULE:**

```yaml
before_phase_4:
  MUST_PRESENT_ANALYSIS:
    - framework_detection_results
    - architecture_pattern_analysis
    - quality_standards_detected
    - project_phase_determination

  MUST_GET_USER_CHOICE:
    options:
      - "✅ Proceed with detected setup"
      - "🔄 Modify detected patterns"
      - "📝 Custom architecture description"
      - "🚫 Start with minimal setup"

  VIOLATION_CONSEQUENCES:
    - if_skipped: "IMMEDIATE STOP - Restart from Phase 3"
    - required_response: "I must validate this analysis with you before proceeding"
```

## Validation Template

I **MUST** present this analysis to the user:

```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: [detected_framework]
- Architecture: [detected_pattern]
- Complexity: [score]/1.0
- Phase: [project_phase]

**Quality Standards:**
[detected_tools_and_standards]

**Files Analyzed:**
[list_of_key_files_examined]

## Your Options:
- ✅ Proceed with detected setup
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup

What would you prefer for the initial setup?
```

## Validation Components

### 1. Detected Configuration

Present the consolidated analysis from Phase 1:

```yaml
configuration_display:
  framework:
    format: "[Framework Name] with [Key Libraries]"
    examples:
      - "FastAPI with SQLAlchemy"
      - "React with TypeScript and Redux"
      - "Axum with Tokio and SQLx"

  architecture:
    format: "[Pattern Name] ([Key Characteristics])"
    examples:
      - "Hexagonal (Ports & Adapters)"
      - "Component-based with Redux state management"
      - "Clean Architecture with DDD"

  complexity:
    format: "[score]/1.0 ([complexity_level])"
    levels:
      - "0.0-0.3: Low"
      - "0.3-0.6: Moderate"
      - "0.6-0.8: High"
      - "0.8-1.0: Very High"

  phase:
    format: "[phase_name] ([duration])"
    phases:
      - "Startup (0-6 months)"
      - "Growth (6-18 months)"
      - "Enterprise (18+ months)"
      - "Legacy (maintenance mode)"
```
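
The complexity bands above can be expressed as a small helper (hypothetical names; this skill specifies the bands but not an implementation). Boundary scores fall into the higher band, and strict mode turns on when complexity exceeds 0.7, matching the `strict_mode` rule in TEMPLATES.md.

```python
def classify_complexity(score: float) -> str:
    """Map a 0.0-1.0 complexity score to its display level."""
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "moderate"
    if score < 0.8:
        return "high"
    return "very high"

def strict_mode(score: float) -> bool:
    """strict_mode is true only when complexity > 0.7."""
    return score > 0.7

print(classify_complexity(0.65), strict_mode(0.65))  # high False
```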

### 2. Quality Standards

Present the detected tools and standards:

```yaml
quality_standards_display:
  testing:
    format: "[framework] with [coverage]% coverage"
    examples:
      - "pytest with 75% coverage"
      - "jest with React Testing Library"
      - "cargo test with 80% coverage"

  linting:
    format: "[linter] with [config_file] config"
    examples:
      - "ruff with pyproject.toml config"
      - "ESLint with Airbnb config"
      - "clippy with custom clippy.toml"

  type_checking:
    format: "[type_checker] in [mode] mode"
    examples:
      - "mypy in strict mode"
      - "TypeScript strict mode"
      - "Rust's built-in type system"

  ci_cd:
    format: "[ci_system] detected"
    examples:
      - "GitHub Actions detected"
      - "GitLab CI configured"
      - "No CI detected"

  security:
    format: "[status]"
    examples:
      - "No major vulnerabilities detected"
      - "2 outdated dependencies found"
      - "Security scan recommended"
```

### 3. Files Analyzed

Show what was examined to build confidence:

```yaml
files_display:
  format: "- [file_path]: [what_was_found]"
  examples:
    - "pyproject.toml: FastAPI, SQLAlchemy, pytest dependencies"
    - "src/domain/: Clean domain layer detected"
    - "src/infrastructure/: Repository pattern found"
    - "tests/: Good test coverage structure"
    - "package.json: React 18, TypeScript 5, Jest"
    - "src/components/: Hooks-based components"
    - "Cargo.toml: axum, tokio, sqlx dependencies"
    - "src/lib.rs: Layered module structure"
```

## User Response Handling

### Option 1: ✅ Proceed with detected setup

```yaml
proceed:
  user_says: "Proceed with detected setup" | "Looks good" | "Yes" | "Continue"
  action: "Move to Phase 4 with all detected settings"
  no_changes: true
```

### Option 2: 🔄 Modify detected patterns

```yaml
modify:
  user_says: "Modify detected patterns" | "Change something" | "Adjust"
  follow_up_questions:
    - "What would you like to change?"
    - "Which aspect needs adjustment? (framework/architecture/quality standards)"

  common_modifications:
    - Change complexity threshold
    - Adjust test coverage requirements
    - Modify linting rules
    - Update architecture pattern choice
    - Change project phase classification

  example_dialogue:
    user: "Modify detected patterns"
    me: "What would you like to adjust?"
    user: "Change coverage requirement to 90%"
    me: "Updated coverage threshold to 90%. Anything else?"
    user: "No, proceed"
    me: "[Move to Phase 4 with modifications]"
```

### Option 3: 📝 Custom architecture description

```yaml
custom:
  user_says: "Custom architecture" | "I'll describe it" | "Let me explain"
  follow_up: "Please describe your project architecture"

  collect_information:
    - Architecture pattern (if different from detected)
    - Key components and their responsibilities
    - Technology choices and rationale
    - Quality standards and thresholds
    - Testing strategy

  example_dialogue:
    user: "Custom architecture description"
    me: "Please describe your architecture approach"
    user: "We use Clean Architecture with CQRS, event sourcing for writes..."
    me: "Got it. I'll use your custom architecture description. Proceed?"
    user: "Yes"
    me: "[Move to Phase 4 with custom architecture]"
```

### Option 4: 🚫 Start with minimal setup

```yaml
minimal:
  user_says: "Minimal setup" | "Keep it simple" | "Basic only"
  action: "Create minimal configuration without detected patterns"

  minimal_setup_includes:
    - Basic AGENT.md (standard workflow rules)
    - Basic ARCHITECTURE.md template (user fills in later)
    - CLAUDE.md entry point
    - Directory structure
    - No language-specific customization
    - No framework detection applied

  example_dialogue:
    user: "Start with minimal setup"
    me: "I'll create a minimal setup without framework-specific customization."
    me: "[Move to Phase 4 with minimal config]"
```

## Validation Examples

### Example 1: Python FastAPI Project

```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: FastAPI with SQLAlchemy
- Architecture: Hexagonal (Ports & Adapters)
- Complexity: 0.65/1.0 (high)
- Phase: Growth (6-18 months)

**Quality Standards:**
- Testing: pytest with 75% coverage
- Linting: ruff with pyproject.toml config
- Type checking: mypy in strict mode
- CI/CD: GitHub Actions detected
- Security: No major vulnerabilities detected

**Files Analyzed:**
- pyproject.toml: FastAPI, SQLAlchemy, pytest dependencies
- src/domain/: Clean domain layer detected
- src/infrastructure/: Repository pattern found
- tests/: Good test coverage structure

## Your Options:
- ✅ Proceed with detected setup
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup

What would you prefer for the initial setup?
```

### Example 2: React TypeScript Project

```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: React with TypeScript
- Architecture: Component-based with Redux state management
- Complexity: 0.70/1.0 (high)
- Phase: Growth (6-18 months)

**Quality Standards:**
- Testing: Jest with React Testing Library
- Linting: ESLint with Airbnb config
- Type checking: TypeScript strict mode
- CI/CD: GitHub Actions detected
- Security: 2 outdated dependencies (non-critical)

**Files Analyzed:**
- package.json: React 18.2, TypeScript 5.0, Jest, ESLint
- src/components/: Hooks-based component architecture
- src/store/: Redux Toolkit slices and sagas
- tests/: 82% test coverage

## Your Options:
- ✅ Proceed with detected setup
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup

What would you prefer for the initial setup?
```

### Example 3: Rust Axum Project

```
## Project Analysis Validation ✋

**Detected Configuration:**
- Framework: Axum with Tokio and SQLx
- Architecture: Layered with clear module boundaries
- Complexity: 0.55/1.0 (moderate)
- Phase: Startup (0-6 months)

**Quality Standards:**
- Testing: cargo test with 68% coverage
- Linting: clippy with custom rules
- Type checking: Rust's built-in system
- CI/CD: No CI detected
- Security: All dependencies up to date

**Files Analyzed:**
- Cargo.toml: axum 0.7, tokio 1.35, sqlx 0.7
- src/lib.rs: Well-structured module hierarchy
- src/handlers/: Clean separation of concerns
- tests/: Integration tests present

## Your Options:
- ✅ Proceed with detected setup
- 🔄 Modify detected patterns
- 📝 Custom architecture description
- 🚫 Start with minimal setup

What would you prefer for the initial setup?
```

## Enforcement Rules

### ❌ NEVER Skip Validation

```yaml
prohibited_actions:
  - Proceeding to Phase 4 without user approval
  - Assuming user wants default configuration
  - Auto-selecting "Proceed" option
  - Skipping validation "to save time"

required_behavior:
  - ALWAYS present full analysis
  - ALWAYS wait for explicit user choice
  - ALWAYS confirm understanding of user's choice
  - ALWAYS document which option was chosen
```

### ✅ Required Confirmation

Before moving to Phase 4, I must have:
1. Presented the complete analysis to the user
2. Shown all 4 options
3. Received an explicit user selection
4. Confirmed understanding of the selection

Only then can I proceed to Phase 4.

---

*This validation workflow ensures users have full control and understanding of their project setup before any files are generated.*
226
skills/initializing-project/languages.yaml
Normal file
@@ -0,0 +1,226 @@
# Language-specific configurations for Quaestor templates
# This file defines tooling, commands, and conventions for different programming languages

python:
  primary_language: python
  lint_command: "ruff check ."
  format_command: "ruff format ."
  test_command: pytest
  coverage_command: "pytest --cov"
  type_check_command: "mypy ."
  security_scan_command: "bandit -r src/"
  profile_command: "python -m cProfile"
  coverage_threshold: 80
  type_checking: true
  performance_target_ms: 200
  commit_prefix: feat
  quick_check_command: "ruff check . && pytest -x"
  full_check_command: "ruff check . && ruff format --check . && mypy . && pytest"
  precommit_install_command: "pre-commit install"
  doc_style_example: |
    def example_function(param: str) -> str:
        """
        Brief description of what the function does.

        Args:
            param: Description of the parameter

        Returns:
            Description of return value

        Raises:
            ValueError: When param is invalid
        """
        pass

javascript:
  primary_language: javascript
  lint_command: "npx eslint ."
  format_command: "npx prettier --write ."
  test_command: "npm test"
  coverage_command: "npm test -- --coverage"
  type_check_command: "npx tsc --noEmit"
  security_scan_command: "npm audit"
  profile_command: "node --prof"
  coverage_threshold: 80
  type_checking: false
  performance_target_ms: 100
  commit_prefix: feat
  quick_check_command: "npm run lint && npm test -- --bail"
  full_check_command: "npm run lint && npm run prettier:check && npm test"
  precommit_install_command: "husky install"
  doc_style_example: |
    /**
     * Brief description of what the function does.
     *
     * @param {string} param - Description of the parameter
     * @returns {string} Description of return value
     * @throws {Error} When param is invalid
     */
    function exampleFunction(param) {
      // Implementation
    }

typescript:
  primary_language: typescript
  lint_command: "npx eslint . --ext .ts,.tsx"
  format_command: "npx prettier --write ."
  test_command: "npm test"
  coverage_command: "npm test -- --coverage"
  type_check_command: "npx tsc --noEmit"
  security_scan_command: "npm audit"
  profile_command: "node --prof"
  coverage_threshold: 80
  type_checking: true
  performance_target_ms: 100
  commit_prefix: feat
  quick_check_command: "npm run lint && npm run type-check && npm test -- --bail"
  full_check_command: "npm run lint && npm run prettier:check && npm run type-check && npm test"
  precommit_install_command: "husky install"
  doc_style_example: |
    /**
     * Brief description of what the function does.
     *
     * @param param - Description of the parameter
     * @returns Description of return value
     * @throws {Error} When param is invalid
     */
    function exampleFunction(param: string): string {
      // Implementation
    }

rust:
  primary_language: rust
  lint_command: "cargo clippy -- -D warnings"
  format_command: "cargo fmt"
  test_command: "cargo test"
  coverage_command: "cargo tarpaulin"
  type_check_command: "cargo check"
  security_scan_command: "cargo audit"
  profile_command: "cargo bench"
  coverage_threshold: 80
  type_checking: true
  performance_target_ms: 50
  commit_prefix: feat
  quick_check_command: "cargo check && cargo clippy && cargo test -- --fail-fast"
  full_check_command: "cargo fmt -- --check && cargo clippy -- -D warnings && cargo test"
  precommit_install_command: "pre-commit install"
  doc_style_example: |
    /// Brief description of what the function does.
    ///
    /// # Arguments
    ///
    /// * `param` - Description of the parameter
    ///
    /// # Returns
    ///
    /// Description of return value
    ///
    /// # Errors
    ///
    /// Returns `Error` when param is invalid
    pub fn example_function(param: &str) -> Result<String, Error> {
        // Implementation
    }

go:
  primary_language: go
  lint_command: "golangci-lint run"
  format_command: "go fmt ./..."
  test_command: "go test ./..."
  coverage_command: "go test -cover ./..."
  type_check_command: "go vet ./..."
  security_scan_command: "gosec ./..."
  profile_command: "go test -cpuprofile=cpu.prof"
  coverage_threshold: 80
  type_checking: true
  performance_target_ms: 50
  commit_prefix: feat
  quick_check_command: "go vet ./... && go test -short ./..."
  full_check_command: "go fmt ./... && go vet ./... && golangci-lint run && go test ./..."
  precommit_install_command: "pre-commit install"
  doc_style_example: |
    // ExampleFunction does a brief description of what the function does.
    //
    // Parameters:
    //   - param: Description of the parameter
    //
    // Returns:
    //   - string: Description of return value
    //   - error: When param is invalid
    func ExampleFunction(param string) (string, error) {
        // Implementation
    }

java:
  primary_language: java
  lint_command: "mvn checkstyle:check"
  format_command: "mvn spotless:apply"
  test_command: "mvn test"
  coverage_command: "mvn jacoco:report"
  type_check_command: "mvn compile"
  security_scan_command: "mvn dependency-check:check"
  profile_command: "mvn test -Dtest.profile=true"
  coverage_threshold: 80
  type_checking: true
  performance_target_ms: 100
  commit_prefix: feat
  quick_check_command: "mvn compile && mvn test -Dtest=*UnitTest"
  full_check_command: "mvn checkstyle:check && mvn compile && mvn test"
  precommit_install_command: "pre-commit install"
  doc_style_example: |
    /**
     * Brief description of what the method does.
     *
     * @param param Description of the parameter
     * @return Description of return value
     * @throws IllegalArgumentException When param is invalid
     */
    public String exampleMethod(String param) {
        // Implementation
    }

ruby:
  primary_language: ruby
  lint_command: "rubocop"
  format_command: "rubocop --autocorrect"
  test_command: "bundle exec rspec"
  coverage_command: "bundle exec rspec --format progress"
  type_check_command: "bundle exec sorbet tc"
  security_scan_command: "bundle audit"
  profile_command: "ruby-prof"
  coverage_threshold: 80
  type_checking: false
  performance_target_ms: 150
  commit_prefix: feat
  quick_check_command: "rubocop && bundle exec rspec --fail-fast"
  full_check_command: "rubocop && bundle exec rspec && bundle audit"
  precommit_install_command: "pre-commit install"
  doc_style_example: |
    # Brief description of what the method does.
    #
    # @param param [String] Description of the parameter
    # @return [String] Description of return value
    # @raise [ArgumentError] When param is invalid
    def example_method(param)
      # Implementation
    end

# Default configuration for unknown project types
unknown:
  primary_language: "unknown"
  lint_command: "# Configure your linter"
  format_command: "# Configure your formatter"
  test_command: "# Configure your test runner"
  coverage_command: "# Configure coverage tool"
  type_check_command: null
  security_scan_command: null
  profile_command: null
  coverage_threshold: null
  type_checking: false
  performance_target_ms: 200
  commit_prefix: chore
  quick_check_command: "make check"
  full_check_command: "make validate"
  precommit_install_command: "pre-commit install"
  doc_style_example: null
602
skills/managing-specifications/LIFECYCLE.md
Normal file
@@ -0,0 +1,602 @@
# Specification Lifecycle Management

This file provides complete details for managing specifications through their lifecycle. Only load when user needs management operations beyond basic "show status" or "activate spec".

## When to Load This File

- User asks about lifecycle details: "How do I move specs?", "What are the rules?"
- User wants batch operations: "Show all high priority specs"
- User needs validation info: "Why can't I activate this spec?"
- User wants progress tracking details: "How is progress calculated?"

## The Folder-Based Lifecycle

Specifications move through three folders representing their state:

```
.quaestor/specs/
├── draft/       # New specs, not started (unlimited)
│   └── spec-*.md
├── active/      # Work in progress (MAX 3 enforced)
│   └── spec-*.md
└── completed/   # Finished work (archived, unlimited)
    └── spec-*.md
```

**Core principle**: The folder IS the state. No separate tracking database needed.

## State Transitions

```
draft/ → active/ → completed/
   ↑        ↓
   └────────┘
(can move back to draft if needed)
```

### Draft → Active (Activation)

**When**: User starts working on a specification

**Command**: "activate spec-feature-001" or "start working on spec-feature-001"

**Process**:
1. Check: Is spec in draft/ folder?
2. Check: Are there < 3 specs in active/?
3. Move file: `draft/spec-*.md` → `active/`
4. Update frontmatter: `status: draft` → `status: active`
5. Add timestamp: `updated_at: [current time]`
6. Report success
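The six steps above can be sketched in a few lines. This is a minimal illustration only, assuming specs live under `.quaestor/specs/` with `status:` and `updated_at:` lines in their YAML frontmatter; the `activate` function name is invented here:

```python
import re
from datetime import datetime
from pathlib import Path

SPECS = Path(".quaestor/specs")
MAX_ACTIVE = 3

def activate(spec_id: str) -> str:
    src = SPECS / "draft" / f"{spec_id}.md"
    if not src.exists():                                   # 1. must be in draft/
        return f"❌ {spec_id} not found in draft/"
    if len(list((SPECS / "active").glob("spec-*.md"))) >= MAX_ACTIVE:
        return "❌ Cannot activate - 3 specifications already active"  # 2. limit
    text = src.read_text()
    # 4-5. update frontmatter status and timestamp
    text = re.sub(r"^status: draft$", "status: active", text, count=1, flags=re.M)
    stamp = datetime.now().isoformat(timespec="seconds")
    text = re.sub(r"^updated_at: .*$", f"updated_at: {stamp}", text, count=1, flags=re.M)
    (SPECS / "active" / src.name).write_text(text)         # 3. move the file
    src.unlink()
    return f"✅ Activated {spec_id}"                        # 6. report success
```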
**Active Limit Enforcement**:
```yaml
limit: 3 active specifications maximum

if_limit_reached:
  action: Block activation
  message: |
    ❌ Cannot activate - 3 specifications already active:
    - spec-feature-001 (80% complete)
    - spec-feature-002 (40% complete)
    - spec-bugfix-001 (95% complete)

    💡 Suggestion: Complete spec-bugfix-001 first (almost done!)

  user_options:
    - Complete an active spec
    - Move an active spec back to draft
    - Choose which active spec to pause
```

### Active → Completed (Completion)

**When**: All acceptance criteria are checked off

**Command**: "complete spec-feature-001" or "mark spec-feature-001 as done"

**Validation Before Completion**:
```yaml
required_checks:
  - spec_in_active_folder: true
  - all_criteria_checked: true  # All [ ] became [x]
  - status_field: "active"

optional_warnings:
  - no_test_scenarios: "⚠️ No test scenarios documented"
  - no_branch_linked: "⚠️ No branch linked to spec"
  - estimated_hours_missing: "⚠️ No time estimate"
```

**Process**:
1. Verify: All checkboxes marked `[x]`
2. Verify: Spec is in active/ folder
3. Move file: `active/spec-*.md` → `completed/`
4. Update frontmatter: `status: active` → `status: completed`
5. Add completion timestamp: `updated_at: [current time]`
6. Report success + suggest PR creation
**If Incomplete**:
```
❌ Cannot complete spec-feature-001

Progress: 3/5 criteria complete (60%)

⏳ Remaining:
- [ ] User can reset password via email
- [ ] Session timeout after 24 hours

Mark these complete or continue implementation with /impl spec-feature-001
```

### Completed → Draft (Reopening)

**When**: Need to reopen completed work (rare)

**Command**: "reopen spec-feature-001" or "move spec-feature-001 back to draft"

**Process**:
1. Move file: `completed/spec-*.md` → `draft/`
2. Update frontmatter: `status: completed` → `status: draft`
3. Uncheck acceptance criteria (reset to `[ ]`)
4. Add note about why reopened

### Active → Draft (Pausing)

**When**: Need to pause work temporarily

**Command**: "pause spec-feature-001" or "move spec-feature-001 to draft"

**Process**:
1. Move file: `active/spec-*.md` → `draft/`
2. Update frontmatter: `status: active` → `status: draft`
3. Preserve progress (don't uncheck criteria)
4. Add note about why paused
## Progress Tracking

### Calculation Method

Progress is calculated by parsing checkbox completion:

```markdown
## Acceptance Criteria
- [x] User can login with email and password        ✓ Complete
- [x] Invalid credentials show error message        ✓ Complete
- [x] Sessions persist across browser restarts      ✓ Complete
- [ ] User can logout and clear session             ✗ Incomplete
- [ ] Password reset via email                      ✗ Incomplete

Progress: 3/5 = 60%
```

**Algorithm**:
```python
import re

def calculate_progress(spec_content):
    total = len(re.findall(r"^- \[[ x]\]", spec_content, flags=re.M))
    completed = len(re.findall(r"^- \[x\]", spec_content, flags=re.M))
    percentage = (completed / total) * 100 if total else 0
    return {
        'total': total,
        'completed': completed,
        'percentage': percentage
    }
```
**What counts as a checkbox**:
- `- [ ]` or `- [x]` in acceptance criteria section
- `- [ ]` or `- [x]` in test scenarios (optional)
- Checkboxes in other sections (optional)

### Progress Visualization

```
📊 spec-feature-001: User Authentication
Progress: [████████░░] 80%

✅ Completed (4):
- User can login with email and password
- Invalid credentials show error message
- Sessions persist across browser restarts
- User can logout and clear session

⏳ Remaining (1):
- Password reset via email

Last updated: 2 hours ago
Branch: feat/user-authentication
```
## Status Dashboard

### Basic Status Check

**Command**: "show spec status" or "what's my spec status?"

**Output**:
```
📊 Specification Status

📁 Draft: 5 specifications
- spec-feature-003: User Profile Management [high]
- spec-feature-004: API Rate Limiting [medium]
- spec-bugfix-002: Fix memory leak [critical]
- spec-refactor-001: Simplify auth logic [medium]
- spec-docs-001: API documentation [low]

📋 Active: 2/3 slots used
- spec-feature-001: User Authentication [████████░░] 80%
  Branch: feat/user-authentication

- spec-feature-002: Email Notifications [████░░░░░░] 40%
  Branch: feat/email-notifications

✅ Completed: 12 specifications
- Last completed: spec-bugfix-001 (2 days ago)

💡 You can activate 1 more specification
```

### Detailed Status for Single Spec

**Command**: "status of spec-feature-001" or "show me spec-feature-001 progress"

**Output**:
```
📊 spec-feature-001: User Authentication

Status: Active
Progress: [████████░░] 80% (4/5 criteria)
Priority: High
Branch: feat/user-authentication
Created: 3 days ago
Updated: 2 hours ago

✅ Completed:
- User can login with email and password
- Invalid credentials show error message
- Sessions persist across browser restarts
- User can logout and clear session

⏳ Remaining:
- Password reset via email

Next steps:
- Continue implementation: /impl spec-feature-001
- Mark complete when done: "complete spec-feature-001"
```
## Batch Operations

### Filter by Type

**Command**: "show all feature specs" or "list bugfix specs"

```bash
# Search across all folders (recursive, so it works without shell globstar)
grep -rl "type: feature" .quaestor/specs/
```
**Output**:
```
📂 Feature Specifications (8 total)

Draft (4):
- spec-feature-003: User Profile Management
- spec-feature-004: API Rate Limiting
- spec-feature-005: Search functionality
- spec-feature-006: Export to CSV

Active (2):
- spec-feature-001: User Authentication [80%]
- spec-feature-002: Email Notifications [40%]

Completed (2):
- spec-feature-000: Initial setup
- spec-feature-007: Login page
```
### Filter by Priority

**Command**: "show high priority specs" or "what critical specs do we have?"

```bash
grep -rl "priority: critical" .quaestor/specs/
```
|
||||
|
||||
**Output**:
|
||||
```
|
||||
🚨 Critical Priority Specifications
|
||||
|
||||
Draft:
|
||||
- spec-bugfix-002: Fix memory leak [Not started]
|
||||
|
||||
Active:
|
||||
- None
|
||||
|
||||
💡 Consider activating spec-bugfix-002 (critical priority)
|
||||
```
|
||||
|
||||
### Check Dependencies
|
||||
|
||||
**Command**: "what specs are blocked?" or "show spec dependencies"
|
||||
|
||||
**Output**:
|
||||
```
|
||||
📊 Specification Dependencies
|
||||
|
||||
Blocked (waiting on other specs):
|
||||
- spec-feature-003 (Requires: spec-feature-001)
|
||||
- spec-feature-005 (Requires: spec-feature-002, spec-feature-003)
|
||||
|
||||
Blocking others:
|
||||
- spec-feature-001 (Blocks: spec-feature-003, spec-refactor-001)
|
||||
|
||||
Ready to start (no dependencies):
|
||||
- spec-feature-004
|
||||
- spec-bugfix-002
|
||||
- spec-docs-001
|
||||
```
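The blocked/blocking report above can be derived by scanning each spec's Dependencies section. A minimal sketch, assuming unfinished specs list dependencies as `- **Requires**: spec-...` lines (per the template); `find_blocked` is an illustrative name:

```python
import re
from pathlib import Path

def find_blocked(specs_dir: str = ".quaestor/specs") -> dict:
    """Map each draft/active spec to the spec IDs it still requires."""
    blocked = {}
    for state in ("draft", "active"):
        for path in Path(specs_dir, state).glob("spec-*.md"):
            # Collect every "**Requires**: spec-a, spec-b" line, dropping comments
            found = re.findall(r"\*\*Requires\*\*:\s*([^\n#]+)", path.read_text())
            deps = [d.strip() for chunk in found for d in chunk.split(",") if d.strip()]
            if deps:
                blocked[path.stem] = deps
    return blocked
```

Inverting this mapping gives the "Blocking others" list for free.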
## Metadata Management

### Update Priority

**Command**: "set spec-feature-001 priority to critical"

**Process**:
1. Read spec file
2. Update frontmatter: `priority: medium` → `priority: critical`
3. Update timestamp
4. Save file

### Link to Branch

**Command**: "link spec-feature-001 to feat/user-auth"

**Process**:
1. Read spec file
2. Add/update metadata: `branch: feat/user-auth`
3. Update timestamp
4. Save file

### Add Technical Notes

**Command**: "add note to spec-feature-001: using JWT for tokens"

**Process**:
1. Read spec file
2. Append to metadata section or create notes field
3. Update timestamp
4. Save file
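All three operations share the same read/update/timestamp/save shape. A hedged sketch of that shape (field names follow the spec template; the `set_field` helper is invented here):

```python
import re
from datetime import datetime
from pathlib import Path

def set_field(path: str, field: str, value: str) -> None:
    """Rewrite one frontmatter field in place and refresh updated_at."""
    p = Path(path)
    text = p.read_text()
    # Replace the first "field: ..." line (adds nothing if the field is absent)
    text = re.sub(rf"^{field}: .*$", f"{field}: {value}", text, count=1, flags=re.M)
    stamp = datetime.now().isoformat(timespec="seconds")
    text = re.sub(r"^updated_at: .*$", f"updated_at: {stamp}", text, count=1, flags=re.M)
    p.write_text(text)
```

Usage would look like `set_field(spec_path, "priority", "critical")` or `set_field(spec_path, "branch", "feat/user-auth")`.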
## Validation Rules

### Before Activation

```yaml
checks:
  valid_frontmatter:
    - id field exists and is unique
    - type is valid (feature|bugfix|refactor|etc)
    - priority is set
    - timestamps present

  content_quality:
    - title is not empty
    - description has content
    - rationale provided
    - at least 1 acceptance criterion

  warnings:
    - no test scenarios (⚠️ warn but allow)
    - estimated_hours missing (⚠️ warn but allow)
```
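A minimal validator sketch for these activation checks. The detection heuristics here are assumptions (e.g. spotting criteria by `- [ ]` lines and frontmatter by simple `key: value` matching); a real implementation would parse the YAML frontmatter properly:

```python
import re

VALID_TYPES = {"feature", "bugfix", "refactor", "documentation",
               "performance", "security", "testing"}

def validate_for_activation(spec_text: str):
    """Return (errors, warnings); activation is blocked only by errors."""
    fm = dict(re.findall(r"^(\w+): *(.+)$", spec_text, flags=re.M))
    errors, warnings = [], []
    if not fm.get("id"):
        errors.append("missing id")
    if fm.get("type") not in VALID_TYPES:
        errors.append("invalid type")
    if not fm.get("priority"):
        errors.append("priority not set")
    if not re.search(r"^- \[[ x]\]", spec_text, flags=re.M):
        errors.append("no acceptance criteria")
    if "## Test Scenarios" not in spec_text:
        warnings.append("⚠️ no test scenarios")
    if "estimated_hours" not in fm:
        warnings.append("⚠️ estimated_hours missing")
    return errors, warnings
```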
### Before Completion

```yaml
checks:
  required:
    - all checkboxes marked [x]
    - spec is in active/ folder
    - status field is "active"

  warnings:
    - no branch linked (⚠️ warn but allow)
    - no test scenarios (⚠️ warn but allow)
    - estimated_hours vs actual time
```

## Error Handling

### Spec Not Found

```
❌ Specification 'spec-feature-999' not found

Searched in:
- .quaestor/specs/draft/
- .quaestor/specs/active/
- .quaestor/specs/completed/

💡 Run "show draft specs" to see available specifications
```

### Active Limit Reached

```
❌ Cannot activate - already at maximum (3 active specs)

Active specs:
1. spec-feature-001 (80% complete - almost done!)
2. spec-feature-002 (40% complete)
3. spec-refactor-001 (10% complete - just started)

Options:
- Complete spec-feature-001 (almost finished)
- Pause spec-refactor-001 (just started)
- Continue with one of the active specs

💡 The 3-spec limit encourages finishing work before starting new features
```

### Incomplete Spec

```
❌ Cannot complete spec-feature-001

Progress: 3/5 criteria (60%)

Missing:
- [ ] User can reset password via email
- [ ] Session timeout after 24 hours

Options:
- Continue implementation: /impl spec-feature-001
- Mark these criteria complete manually
- Split into new spec: "create spec for password reset"

💡 All acceptance criteria must be checked before completion
```
## Git Integration

### Stage Spec Changes

When moving specs, stage the changes for commit:

```bash
# Stage all spec folder changes
git add .quaestor/specs/draft/
git add .quaestor/specs/active/
git add .quaestor/specs/completed/

# Commit with descriptive message
git commit -m "chore: activate spec-feature-001"
git commit -m "chore: complete spec-feature-001 - user authentication"
```

### Commit Message Patterns

```yaml
activation:
  format: "chore: activate [spec-id]"
  example: "chore: activate spec-feature-003"

completion:
  format: "chore: complete [spec-id] - [brief title]"
  example: "chore: complete spec-feature-001 - user authentication"

batch_update:
  format: "chore: update spec statuses"
  example: "chore: update spec statuses (2 completed, 1 activated)"
```
## Progress History

### Track Updates

**Command**: "when was spec-feature-001 last updated?"

```bash
# Read frontmatter
grep "updated_at:" .quaestor/specs/active/spec-feature-001.md
```

**Output**:
```
spec-feature-001 last updated: 2025-01-19T14:30:00 (2 hours ago)
```

### Show Velocity

**Command**: "how many specs completed this week?"

```bash
# Check completed folder; file modification time approximates completion time
find .quaestor/specs/completed/ -name "*.md" -mtime -7
```

**Output**:
```
📊 Velocity Report (Last 7 Days)

Completed: 3 specifications
- spec-feature-007: Login page (2 days ago)
- spec-bugfix-001: Memory leak fix (4 days ago)
- spec-docs-002: API docs update (6 days ago)

Average: 0.43 specs/day
Weekly rate: 3 specs/week
```
## Advanced Operations

### Bulk Priority Update

**Command**: "set all draft bugfix specs to high priority"

**Process**:
1. Find all draft specs with `type: bugfix`
2. Update each: `priority: medium` → `priority: high`
3. Report changes

### Archive Old Completed Specs

**Command**: "archive specs completed > 90 days ago"

```bash
# Create archive folder
mkdir -p .quaestor/specs/archived/

# Move old completed specs
find .quaestor/specs/completed/ -name "*.md" -mtime +90 \
  -exec mv {} .quaestor/specs/archived/ \;
```

### Generate Status Report

**Command**: "generate spec status report"

**Output**: Markdown file with:
- Current active specs and progress
- Draft specs by priority
- Recently completed specs
- Velocity metrics
- Blocked specs
## Best Practices

### Keep Active Limit Low

The 3-spec limit is intentional:
- ✅ Forces focus on completion
- ✅ Reduces context switching
- ✅ Makes priorities clear
- ✅ Encourages finishing work

### Link Specs to Branches

When starting work:
```yaml
# In spec frontmatter
branch: feat/user-authentication
```

Benefits:
- Easy to find related code
- Track implementation progress
- Connect commits to specs

### Update Progress Regularly

Check off criteria as you complete them:
```markdown
- [x] User can login   ← Mark done immediately
- [ ] User can logout  ← Next to work on
```

Benefits:
- Accurate progress tracking
- Visibility into what's left
- Motivation from seeing progress

### Use Priority Ruthlessly

```yaml
priority: critical  # Drop everything, do now
priority: high      # Schedule this week
priority: medium    # Normal priority
priority: low       # Nice to have, do when time allows
```

### Review Draft Specs Weekly

Prune specs that are no longer needed:
- Requirements changed
- Feature no longer wanted
- Superseded by other work

---

*This guide provides complete lifecycle management details. Return to SKILL.md for overview or WRITING.md for spec creation guidance.*
154
skills/managing-specifications/SKILL.md
Normal file
@@ -0,0 +1,154 @@
---
name: Managing Specifications
description: Create, manage, and track specifications through their full lifecycle from draft to completion. Use when user wants to plan features, create specs, check progress, activate work, or complete specifications.
allowed-tools: [Write, Read, Edit, Bash, Glob, Grep]
---

# Managing Specifications

I help you work with specifications: creating new specs from requirements, managing their lifecycle (draft → active → completed), and tracking progress automatically.

## When to Use Me

**Auto-activate when:**
- Invoked via `/quaestor:plan` slash command
- User describes a feature with details: "I want to add user authentication with JWT tokens"
- User explicitly requests spec creation: "Create a spec for X", "Write a specification for Y"
- User asks about spec status: "What specs are active?", "Show spec progress"
- User wants to activate work: "Start working on spec-feature-001"
- User wants to complete: "Mark spec-feature-001 as done"
- User checks progress: "How's the authentication feature going?"

**Do NOT auto-activate when:**
- User says only "plan" or "plan it" (slash command handles this)
- User is making general requests without specification context
- Request needs more clarification before creating a spec

## Quick Start

**New to specs?** Just describe what you want to build:
```
"I want to add email notifications when orders are placed"
```

I'll ask a few questions, then create a complete specification for you.

**Have specs already?** Tell me what you need:
```
"Show me my active specs"
"Activate spec-feature-003"
"What's the progress on spec-auth-001?"
```
## How I Work

I detect what you need and adapt automatically:

### Mode 1: Creating Specifications

When you describe a feature or ask me to create a spec, I:
1. Ask clarifying questions (if needed)
2. Generate a unique spec ID (e.g., `spec-feature-001`)
3. Create `.quaestor/specs/draft/[spec-id].md`
4. Report next steps

**See @WRITING.md for the complete specification template and writing process**

### Mode 2: Managing Lifecycle

When you ask about spec status or want to move specs, I:
1. Check current state (scan all folders)
2. Perform the requested action (activate, complete, show status)
3. Update spec metadata automatically
4. Report changes

**See @LIFECYCLE.md for folder-based lifecycle management and progress tracking**

## The 3-Folder System

Specifications live in folders that represent their state:

```
.quaestor/specs/
├── draft/      # New specs (unlimited)
├── active/     # Work in progress (MAX 3)
└── completed/  # Finished work (archived)
```

**The folder IS the state** - no complex tracking needed!

**Why max 3 active?** Forces focus on finishing work before starting new features.

## Progressive Workflows

I provide just enough information for your current task, with details available when needed:

### Creating Your First Spec

**Minimal workflow** (I guide you):
```
You: "Add user authentication"
Me: I'll ask 3-5 questions
Me: Create spec-feature-001.md in draft/
```

### Managing Existing Specs

**Common operations**:
- `"Show active specs"` → List with progress bars
- `"Activate spec-feature-003"` → Move draft/ → active/
- `"Complete spec-auth-001"` → Move active/ → completed/

### Deep Dive Available

When you need more details:
- **@WRITING.md** - Complete template, field descriptions, examples
- **@LIFECYCLE.md** - All lifecycle operations, validation rules, batch operations
- **@TEMPLATE.md** - Field-by-field guide to spec structure
## Key Features

### Smart Spec Creation
✅ Auto-generate unique IDs
✅ Forgiving template (auto-corrects common mistakes)
✅ No placeholders - only real values
✅ Rich metadata (priority, type, timestamps)

### Automatic Lifecycle Management
✅ Folder-based states (simple and visual)
✅ 3-active-spec limit (enforced automatically)
✅ Progress calculation from checkboxes
✅ Metadata updates on state changes

### Progress Tracking
✅ Parse checkbox completion: `- [x]` vs `- [ ]`
✅ Calculate percentage: 4/5 complete = 80%
✅ Visual progress bars
✅ Branch linking support

## Success Criteria

**Creating specs:**
- ✅ Spec has unique ID and proper frontmatter
- ✅ All fields have actual values (no placeholders)
- ✅ Acceptance criteria defined with checkboxes
- ✅ Saved to `.quaestor/specs/draft/[spec-id].md`

**Managing specs:**
- ✅ State transitions work correctly (draft → active → completed)
- ✅ 3-active limit enforced
- ✅ Progress calculated accurately from checkboxes
- ✅ Metadata updates automatically

## Next Steps After Using This Skill

Once you have specifications:
1. **Activate a spec**: "Activate spec-feature-001"
2. **Implement it**: Use `/impl spec-feature-001` or the implementing-features skill
3. **Track progress**: "What's the status?" or "Show active specs"
4. **Complete it**: "Complete spec-feature-001" when all criteria checked
5. **Ship it**: Use the reviewing-and-shipping skill to create a PR

---

*I handle both creation and management of specifications. Just tell me what you need - I'll detect the mode and guide you through it with minimal upfront context.*
697
skills/managing-specifications/TEMPLATE.md
Normal file
@@ -0,0 +1,697 @@
# Specification Template Reference

This file provides a field-by-field reference for the specification template. Only load when user asks specific questions about template structure or field meanings.

## When to Load This File

- User asks: "What does the rationale field mean?"
- User wants field examples: "Show me examples of good acceptance criteria"
- User is confused about a specific section
- User wants to understand optional vs required fields

## Complete Template Structure

```markdown
---
# FRONTMATTER (YAML metadata)
id: spec-TYPE-NNN                 # Required: Unique identifier
type: feature                     # Required: Category of work
status: draft                     # Required: Current state
priority: medium                  # Required: Urgency level
created_at: 2025-01-19T10:00:00   # Required: Creation timestamp
updated_at: 2025-01-19T10:00:00   # Required: Last modified timestamp
---

# Title                           # Required: Clear, descriptive name

## Description                    # Required: What needs to be done
What exactly needs to be implemented, fixed, or changed.
Be specific about functionality, scope, and affected components.

## Rationale                      # Required: Why this matters
Business value, technical benefit, or problem being solved.
Explain the impact if this is not done.

## Dependencies                   # Optional: Related specifications
- **Requires**: spec-001          # Must be done before this
- **Blocks**: spec-003            # This blocks other work
- **Related**: spec-004           # Context, not blocking

## Risks                          # Optional: Potential issues
- Technical risks
- Schedule risks
- Dependency risks

## Success Metrics                # Optional: Measurable outcomes
- Performance targets
- Usage metrics
- Quality metrics

## Acceptance Criteria            # Required: How to know it's done
- [ ] Specific, testable criterion
- [ ] Another specific criterion
- [ ] Include error cases
- [ ] Minimum 3 criteria recommended

## Test Scenarios                 # Required: How to verify

### Happy path test               # At least one success case
**Given**: Initial state
**When**: Action taken
**Then**: Expected result

### Error case test               # At least one failure case
**Given**: Invalid input
**When**: Action attempted
**Then**: Appropriate error shown

## Metadata                       # Optional: Additional info
estimated_hours: 8
technical_notes: Implementation notes
branch: feat/feature-name         # Added when work starts
```
## Field Reference

### Frontmatter Fields

#### id (Required)
**Format**: `spec-TYPE-NNN`

**Purpose**: Unique identifier that never changes

**Type Prefixes**:
- `spec-feature-NNN` - New functionality
- `spec-bugfix-NNN` - Fix broken behavior
- `spec-refactor-NNN` - Improve code structure
- `spec-perf-NNN` - Performance improvements
- `spec-sec-NNN` - Security enhancements
- `spec-test-NNN` - Test coverage
- `spec-docs-NNN` - Documentation

**Numbering**: Zero-padded 3 digits (001, 002, ..., 999)

**Examples**:
```yaml
id: spec-feature-001
id: spec-bugfix-023
id: spec-refactor-005
```

**Rules**:
- Generated automatically from type + next available number
- Never changes once created
- Must be unique across all specs
- Must be unique across all specs
|
||||
|
||||
---
|
||||
|
||||
#### type (Required)
|
||||
**Values**: `feature | bugfix | refactor | documentation | performance | security | testing`
|
||||
|
||||
**Purpose**: Categorize the work type
|
||||
|
||||
**Descriptions**:
|
||||
```yaml
|
||||
feature:
|
||||
description: "New functionality or capability"
|
||||
examples:
|
||||
- "User authentication system"
|
||||
- "Export to PDF feature"
|
||||
- "Real-time notifications"
|
||||
|
||||
bugfix:
|
||||
description: "Fix broken or incorrect behavior"
|
||||
examples:
|
||||
- "Fix memory leak in processor"
|
||||
- "Correct calculation error"
|
||||
- "Resolve null pointer exception"
|
||||
|
||||
refactor:
|
||||
description: "Improve code structure without changing behavior"
|
||||
examples:
|
||||
- "Consolidate authentication logic"
|
||||
- "Simplify database queries"
|
||||
- "Extract reusable components"
|
||||
|
||||
documentation:
|
||||
description: "Add or improve documentation"
|
||||
examples:
|
||||
- "API documentation"
|
||||
- "Add code comments"
|
||||
- "Update README"
|
||||
|
||||
performance:
|
||||
description: "Improve speed, efficiency, or resource usage"
|
||||
examples:
|
||||
- "Optimize database queries"
|
||||
- "Implement caching"
|
||||
- "Reduce memory usage"
|
||||
|
||||
security:
|
||||
description: "Security improvements or vulnerability fixes"
|
||||
examples:
|
||||
- "Add input validation"
|
||||
- "Implement rate limiting"
|
||||
- "Fix SQL injection vulnerability"
|
||||
|
||||
testing:
|
||||
description: "Add or improve test coverage"
|
||||
examples:
|
||||
- "Add unit tests for auth module"
|
||||
- "Implement E2E tests"
|
||||
- "Improve test coverage to 80%"
|
||||
```
**Auto-Correction**: Parser auto-corrects common mistakes
- "removal" → "refactor"
- "fix" → "bugfix"
- "test" → "testing"

---

#### status (Required, Auto-Managed)
**Values**: `draft | active | completed`

**Purpose**: Track current state (managed by folder location)

**Note**: Folder location is the source of truth; this field is kept in sync

```yaml
status: draft      # In .quaestor/specs/draft/
status: active     # In .quaestor/specs/active/
status: completed  # In .quaestor/specs/completed/
```
---

#### priority (Required)
**Values**: `critical | high | medium | low`

**Purpose**: Indicate urgency and importance

**Guidelines**:
```yaml
critical:
  when: "Production down, security vulnerability, data loss risk"
  sla: "Drop everything, fix immediately"
  examples:
    - "Production outage"
    - "Security breach"
    - "Data corruption"

high:
  when: "Important feature, significant bug, blocking other work"
  sla: "Schedule this week"
  examples:
    - "Key customer feature"
    - "Major bug affecting users"
    - "Blocking other development"

medium:
  when: "Normal priority work, planned features, minor bugs"
  sla: "Schedule in sprint"
  examples:
    - "Planned feature work"
    - "Minor bug fixes"
    - "Technical debt"

low:
  when: "Nice to have, minor improvements, future work"
  sla: "Do when time allows"
  examples:
    - "Nice to have features"
    - "Minor improvements"
    - "Future enhancements"
```
---

#### created_at / updated_at (Required, Auto)
**Format**: ISO 8601 timestamp `YYYY-MM-DDTHH:MM:SS`

**Purpose**: Track when spec was created and last modified

**Examples**:
```yaml
created_at: 2025-01-19T10:30:00
updated_at: 2025-01-19T14:45:00
```

**Auto-Management**:
- `created_at`: Set once when spec is created
- `updated_at`: Updated whenever spec content changes

---

### Content Sections

#### Title (Required)
**Location**: First H1 heading after frontmatter

**Purpose**: Clear, descriptive name of the work

**Guidelines**:
```yaml
do:
  - Use clear, descriptive names
  - Be specific about what's being done
  - Include key context

dont:
  - Use vague terms: "Fix bug", "Update code"
  - Use technical jargon without context
  - Make it too long (> 80 chars)
```

**Examples**:
```markdown
# Good
- User Authentication System with JWT Tokens
- Fix Memory Leak in Background Job Processor
- Refactor Payment Validation Logic
- Optimize Database Query Performance

# Bad
- Auth
- Fix Bug
- Update Code
- Make It Faster
```
---

#### Description (Required)
**Location**: `## Description` section

**Purpose**: Detailed explanation of what needs to be done

**What to Include**:
- Specific functionality or changes
- Scope and boundaries
- Key components affected
- Current state vs desired state

**Example Structure**:
```markdown
## Description
[Opening paragraph: What needs to be done]

[Current state: How things work now]

[Desired state: How things should work]

[Scope: What's included and excluded]

[Key components: What parts of system affected]
```

**Good Example**:
```markdown
## Description
Implement a user authentication system with email/password login,
JWT-based session management, and secure password storage using bcrypt.

Current state: No authentication system exists. All endpoints are public.

Desired state: Users must authenticate to access protected endpoints.
Sessions persist for 24 hours with automatic renewal. Passwords are
securely hashed and never stored in plain text.

Scope includes:
- Login/logout endpoints
- JWT token generation and validation
- Password hashing with bcrypt
- Session management middleware

Scope excludes:
- OAuth/social login (future enhancement)
- Password reset (separate spec: spec-auth-002)
- Multi-factor authentication (future enhancement)
```
|
||||
|
||||
---

#### Rationale (Required)
**Location**: `## Rationale` section

**Purpose**: Explain WHY this work matters

**What to Include**:
- Business value or technical benefit
- Problem being solved
- Impact if not done
- Alignment with goals

**Example Structure**:
```markdown
## Rationale
[Why this is needed]

[Problem being solved]

[Business/technical impact]

[What happens if not done]
```

**Good Example**:
```markdown
## Rationale
User authentication is essential for protecting user data and enabling
personalized features. Currently, all endpoints are public, exposing
sensitive user information and preventing per-user customization.

Problem solved: Unauthorized access to user data and inability to track
user-specific actions.

Business impact: Enables premium features, protects user privacy, meets
security compliance requirements.

If not done: Cannot launch paid features, risk data breaches, fail
security audits, lose customer trust.
```

---

#### Dependencies (Optional)
**Location**: `## Dependencies` section

**Purpose**: Link to related specifications

**Format**:
```markdown
## Dependencies
- **Requires**: spec-001, spec-002
- **Blocks**: spec-003, spec-004
- **Related**: spec-005
```

**Relationship Types**:
```yaml
Requires:
  meaning: "These must be completed before this spec can start"
  use_when: "Hard dependency on other work"
  example: "Requires: spec-email-001 (email service must exist)"

Blocks:
  meaning: "This spec prevents other specs from starting"
  use_when: "Other work depends on this being done"
  example: "Blocks: spec-auth-002 (password reset needs auth)"

Related:
  meaning: "Related for context, not blocking"
  use_when: "Useful context, not a hard dependency"
  example: "Related: spec-user-001 (user profile system)"
```

---

#### Risks (Optional)
**Location**: `## Risks` section

**Purpose**: Identify potential issues or challenges

**Categories**:
```yaml
technical_risks:
  - "Complex integration with third-party service"
  - "Database migration required"
  - "Performance impact on existing features"

schedule_risks:
  - "Depends on external team's timeline"
  - "Blocked by infrastructure work"
  - "May require more time than estimated"

dependency_risks:
  - "Third-party API may change"
  - "Requires approval from security team"
  - "Depends on unstable library"

mitigation:
  - "Include how to reduce or handle each risk"
```

**Good Example**:
```markdown
## Risks
- **Performance risk**: Auth middleware adds latency to every request.
  Mitigation: Cache token validation, use fast JWT library.

- **Security risk**: Password storage vulnerability if implemented wrong.
  Mitigation: Use battle-tested bcrypt library, security review required.

- **Schedule risk**: Depends on database migration (spec-db-001).
  Mitigation: Can implement with mock data, migrate later.
```

---

#### Success Metrics (Optional)
**Location**: `## Success Metrics` section

**Purpose**: Define measurable outcomes

**What to Include**:
- Performance targets
- Usage metrics
- Quality metrics
- Business metrics

**Good Example**:
```markdown
## Success Metrics
- Authentication latency < 200ms (p95)
- Session validation < 50ms (p95)
- Zero security vulnerabilities in audit
- 99.9% uptime for auth service
- 100% of protected endpoints require auth
- Password hashing takes 200-300ms (bcrypt security)
```

---

#### Acceptance Criteria (Required)
**Location**: `## Acceptance Criteria` section

**Purpose**: Define what "done" means with testable criteria

**Format**: Checklist with `- [ ]` or `- [x]`

**Guidelines**:
```yaml
do:
  - Make each criterion specific and testable
  - Include happy path and error cases
  - Minimum 3 criteria (typically 5-8)
  - Use action verbs: "User can...", "System will..."
  - Be precise with numbers and timeframes

dont:
  - Use vague criteria: "System works well"
  - Forget error cases
  - Make criteria too large (break down if > 10)
  - Forget non-functional requirements
```

**Good Example**:
```markdown
## Acceptance Criteria
- [ ] User can login with valid email and password
- [ ] Invalid credentials return 401 with error message
- [ ] Successful login returns JWT token valid for 24 hours
- [ ] Token automatically refreshes 1 hour before expiration
- [ ] User can logout and invalidate their token
- [ ] Logout clears session and prevents token reuse
- [ ] Protected endpoints return 401 without valid token
- [ ] Passwords are hashed with bcrypt (cost factor 12)
- [ ] Login attempts are rate-limited (5 per minute)
```

**Bad Example**:
```markdown
## Acceptance Criteria
- [ ] Login works
- [ ] Errors handled
- [ ] Security is good
```

---

#### Test Scenarios (Required)
**Location**: `## Test Scenarios` section

**Purpose**: Describe how to verify acceptance criteria

**Format**: Given/When/Then (BDD style)

**Minimum**: 2 scenarios (happy path + error case)

**Structure**:
```markdown
### [Scenario Name]
**Given**: [Initial state / preconditions]
**When**: [Action taken]
**Then**: [Expected result]
```

**Good Example**:
```markdown
## Test Scenarios

### Successful login
**Given**: User has account with email "user@example.com" and password "SecurePass123"
**When**: User submits correct credentials to /login endpoint
**Then**: System returns 200 status with JWT token valid for 24 hours

### Invalid password
**Given**: User exists with email "user@example.com"
**When**: User submits incorrect password
**Then**: System returns 401 status with error "Invalid credentials"

### Token expiration
**Given**: User has JWT token that expired 1 minute ago
**When**: User attempts to access protected endpoint
**Then**: System returns 401 status with error "Token expired"

### Rate limiting
**Given**: User has attempted login 5 times in last minute
**When**: User attempts 6th login
**Then**: System returns 429 status with error "Too many attempts"
```
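
Given/When/Then scenarios map almost one-to-one onto automated tests. A sketch of the rate-limiting scenario above as a test; `FakeAuthClient` is a stand-in invented for illustration, not a real client API.

```python
class FakeAuthClient:
    """Stand-in for a real test client, invented for this sketch."""

    def __init__(self, limit=5):
        self.limit = limit
        self.attempts = 0

    def login(self, email, password):
        self.attempts += 1
        if self.attempts > self.limit:
            return 429  # Too many attempts
        return 401      # wrong password in this sketch

def test_rate_limiting():
    client = FakeAuthClient()
    # Given: user has attempted login 5 times in the last minute
    for _ in range(5):
        client.login("user@example.com", "wrong-password")
    # When: user attempts a 6th login / Then: system returns 429
    assert client.login("user@example.com", "wrong-password") == 429
```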

---

#### Metadata (Optional)
**Location**: `## Metadata` section

**Purpose**: Additional information for tracking

**Common Fields**:
```markdown
## Metadata
estimated_hours: 8
actual_hours: 10
technical_notes: Using JWT library "jsonwebtoken", bcrypt cost factor 12
branch: feat/user-authentication
assignee: @developer-name
labels: security, backend, high-priority
```

**Field Meanings**:
```yaml
estimated_hours:
  description: "Time estimate before starting"
  use: "Planning and capacity"

actual_hours:
  description: "Actual time spent (filled after completion)"
  use: "Improve future estimates"

technical_notes:
  description: "Implementation details, library choices, etc."
  use: "Context for implementers"

branch:
  description: "Git branch name for this work"
  use: "Link spec to code changes"
  added_when: "Work starts (activation)"

assignee:
  description: "Who's working on this"
  use: "Track ownership"

labels:
  description: "Tags for categorization"
  use: "Filtering and reporting"
```

---

## Template Validation

### Required Fields Check
```yaml
must_have:
  - id
  - type
  - status
  - priority
  - created_at
  - updated_at
  - title
  - description
  - rationale
  - acceptance_criteria (at least 1)
  - test_scenarios (at least 1)

can_warn_if_missing:
  - dependencies
  - risks
  - success_metrics
  - metadata
```

### Quality Checks
```yaml
description:
  min_length: 50 characters
  recommendation: "2-4 paragraphs"

rationale:
  min_length: 30 characters
  recommendation: "Explain business/technical value"

acceptance_criteria:
  min_count: 3
  recommendation: "5-8 criteria typical"
  format: "Use checkboxes [ ] or [x]"

test_scenarios:
  min_count: 2
  recommendation: "At least happy path + error case"
  format: "Use Given/When/Then"
```
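
A minimal sketch of how these checks could be applied to a parsed spec. The field names and thresholds mirror the lists above; the `spec` dict shape and the helper itself are assumptions for illustration, not part of the skill.

```python
REQUIRED_FIELDS = ["id", "type", "status", "priority", "created_at",
                   "updated_at", "title", "description", "rationale"]

def validate_spec(spec: dict) -> list:
    """Return a list of problems; an empty list means the spec passes."""
    errors = [f"missing required field: {name}"
              for name in REQUIRED_FIELDS if not spec.get(name)]
    if len(spec.get("description", "")) < 50:
        errors.append("description under 50 characters")
    if len(spec.get("rationale", "")) < 30:
        errors.append("rationale under 30 characters")
    if len(spec.get("acceptance_criteria", [])) < 3:
        errors.append("fewer than 3 acceptance criteria")
    if len(spec.get("test_scenarios", [])) < 2:
        errors.append("need at least happy path + error case scenarios")
    return errors
```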

---

## Quick Reference

### Minimal Valid Spec
```markdown
---
id: spec-feature-001
type: feature
status: draft
priority: medium
created_at: 2025-01-19T10:00:00
updated_at: 2025-01-19T10:00:00
---

# Feature Title

## Description
What needs to be done.

## Rationale
Why this matters.

## Acceptance Criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3

## Test Scenarios

### Happy path
**Given**: Initial state
**When**: Action
**Then**: Result
```

### Complete Spec
See WRITING.md for complete examples with all optional sections filled.

---

*This template reference provides field-by-field details. Return to SKILL.md for overview, WRITING.md for creation process, or LIFECYCLE.md for management operations.*
515
skills/managing-specifications/WRITING.md
Normal file
@@ -0,0 +1,515 @@
# Specification Writing Guide

This file provides complete details for creating specifications. Only load when user needs template details or writing guidance.

## When to Load This File

- User asks: "What fields does a spec have?", "Show me the template"
- User wants examples of well-written specs
- User is confused about spec format
- Creating first spec and needs structure guidance

## The Markdown Specification Template

```markdown
---
id: spec-TYPE-NNN
type: feature # feature, bugfix, refactor, documentation, performance, security, testing
status: draft
priority: medium # critical, high, medium, or low
created_at: 2025-01-10T10:00:00
updated_at: 2025-01-10T10:00:00
---

# Descriptive Title

## Description
What needs to be done. Be specific and detailed.
Multiple paragraphs are fine.

## Rationale
Why this is needed.
What problem it solves.
Business or technical justification.

## Dependencies
- **Requires**: spec-001, spec-002 (specs that must be completed first)
- **Blocks**: spec-003 (specs that can't start until this is done)
- **Related**: spec-004 (related specs for context)

## Risks
- Risk description if any
- Another risk if applicable

## Success Metrics
- Measurable success metric
- Another measurable metric

## Acceptance Criteria
- [ ] User can do X
- [ ] System performs Y
- [ ] Feature handles Z
- [ ] Error cases handled gracefully
- [ ] Performance meets requirements

## Test Scenarios

### Happy path test
**Given**: Initial state
**When**: Action taken
**Then**: Expected result

### Error case test
**Given**: Invalid input
**When**: Action attempted
**Then**: Appropriate error message shown

## Metadata
estimated_hours: 8
technical_notes: Any technical considerations
branch: feat/feature-name (added when work starts)
```

## Spec ID Generation

I generate unique IDs based on type and existing specs:

```yaml
id_patterns:
  feature: "spec-feature-NNN"
  bugfix: "spec-bugfix-NNN"
  refactor: "spec-refactor-NNN"
  performance: "spec-perf-NNN"
  security: "spec-sec-NNN"
  testing: "spec-test-NNN"
  documentation: "spec-docs-NNN"

generation_process:
  1. Check existing specs in draft/ folder
  2. Find highest number for that type
  3. Increment by 1
  4. Zero-pad to 3 digits: 001, 002, etc.
```

**Example**: If `spec-feature-001` and `spec-feature-002` exist, next is `spec-feature-003`
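
The generation process above can be sketched as a small helper. The `.quaestor/specs/draft` path comes from this guide; the function name and signature are illustrative, not part of the tool.

```python
import re
from pathlib import Path

def next_spec_id(spec_type: str, draft_dir: str = ".quaestor/specs/draft") -> str:
    """Illustrative sketch: find the highest existing number for a type, add 1, zero-pad."""
    pattern = re.compile(rf"spec-{re.escape(spec_type)}-(\d+)\.md$")
    numbers = [int(m.group(1))
               for path in Path(draft_dir).glob(f"spec-{spec_type}-*.md")
               if (m := pattern.search(path.name))]
    return f"spec-{spec_type}-{max(numbers, default=0) + 1:03d}"
```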

## Field Descriptions

### Frontmatter Fields

**id** (required): Unique identifier
- Format: `spec-TYPE-NNN`
- Auto-generated from type and sequence
- Never changes once created

**type** (required): Category of work
- `feature` - New functionality
- `bugfix` - Fix broken behavior
- `refactor` - Improve code structure
- `documentation` - Docs/comments
- `performance` - Speed/efficiency
- `security` - Security improvements
- `testing` - Test coverage

**status** (auto-managed): Current state
- `draft` - Not started
- `active` - Work in progress
- `completed` - Finished
- **Folder determines status**, not this field

**priority** (required): Urgency level
- `critical` - Drop everything, do this now
- `high` - Important, schedule soon
- `medium` - Normal priority
- `low` - Nice to have, do when time allows

**created_at** / **updated_at** (auto): ISO timestamps
- Format: `2025-01-10T14:30:00`
- Created when spec is written
- Updated when spec is modified

### Content Sections

**Title** (required): Clear, descriptive name
- Bad: "Auth", "Fix bug", "Update code"
- Good: "User Authentication System", "Fix memory leak in processor", "Refactor payment validation logic"

**Description** (required): What needs to be done
- Be specific about functionality
- Include scope and boundaries
- Mention key components affected
- Multiple paragraphs encouraged

**Rationale** (required): Why this matters
- Business value or technical benefit
- Problem being solved
- Impact if not done

**Dependencies** (optional): Related specs
- **Requires**: Must be completed first
- **Blocks**: Prevents other specs from starting
- **Related**: Provides context

**Risks** (optional): Potential issues
- Technical risks
- Schedule risks
- Dependency risks

**Success Metrics** (optional): Measurable outcomes
- Performance targets
- Usage metrics
- Quality metrics

**Acceptance Criteria** (required): How to know it's done
- Use checkboxes: `- [ ]` and `- [x]`
- Be specific and testable
- Include error cases
- Minimum 3 criteria recommended

**Test Scenarios** (required): How to verify
- Happy path (success case)
- Error cases (failure handling)
- Edge cases (boundary conditions)
- Use Given/When/Then format

**Metadata** (optional): Additional info
- `estimated_hours`: Time estimate
- `technical_notes`: Implementation notes
- `branch`: Git branch name

## Writing Process

### Step 1: Interactive Requirements Gathering

**ALWAYS ask clarifying questions using the AskUserQuestion tool:**

#### Required Information
If any of these are missing, ask:
- **Title**: "What are we building/fixing?" (if not provided)
- **Type**: Present options using AskUserQuestion - Feature, Bugfix, Refactor, Performance, Security, Testing, Documentation
- **Description**: "What exactly needs to be done?"
- **Scope**: "Should this include [related functionality]?"
- **Priority**: Ask to choose - Critical, High, Medium, or Low?

#### Decision Points
When multiple approaches exist, use AskUserQuestion:

**Example Question Pattern:**
```yaml
question: "I see multiple approaches for implementing [feature]. Which direction should we take?"
options:
  - label: "Approach A: [name]"
    description: "[description with trade-offs like: Simple but limited, Fast but complex, etc.]"
  - label: "Approach B: [name]"
    description: "[description with trade-offs]"
  - label: "Approach C: [name]"
    description: "[description with trade-offs]"
```

#### Trade-off Clarifications
When design choices exist, ask the user to decide:
- "Optimize for speed or simplicity?"
- "Comprehensive feature OR minimal initial version?"
- "Integrate with [existing system] OR standalone?"
- "High performance OR easier maintenance?"

**Always use structured questions (AskUserQuestion tool) rather than open-ended prompts.**

### Step 2: Generate Unique ID

Check existing specs and create the next available ID:
```bash
# Check what exists
ls .quaestor/specs/draft/spec-feature-*.md

# If spec-feature-001 and spec-feature-002 exist
# Create spec-feature-003
```

### Step 3: Fill Template with Actual Values

**Always use real values, never placeholders:**

✅ Good:
```yaml
id: spec-feature-001
title: User Authentication System
description: Implement secure login with JWT tokens and password hashing
```

❌ Bad:
```yaml
id: [SPEC_ID]
title: [Feature Title]
description: TODO: Add description
```

### Step 4: Create Checkboxes for Criteria

Make criteria specific and testable:

✅ Good:
```markdown
- [ ] User can login with email and password
- [ ] Invalid credentials show error within 500ms
- [ ] Session expires after 24 hours
- [ ] Logout clears session completely
```

❌ Bad:
```markdown
- [ ] Login works
- [ ] Errors handled
- [ ] Security is good
```

### Step 5: Save to Draft Folder

Write to `.quaestor/specs/draft/[spec-id].md`

All specs start in draft/ regardless of when they'll be worked on.

### Step 6: Report Success

Tell the user:
- Spec ID created
- File location
- Next steps: "Run `/impl spec-feature-001` to start implementation"

## Important Rules

### ✅ Always Use Actual Values

Never use placeholders like `[TODO]`, `[REPLACE THIS]`, `[SPEC_ID]`

### ✅ Generate Sequential IDs

Check existing files to find the next number for each type

### ✅ Include Test Scenarios

Every spec needs at least:
1. Happy path test
2. Error case test

### ✅ Make Criteria Testable

Each acceptance criterion should be verifiable:
- Can you write a test for it?
- Is success/failure clear?
- Is it specific enough?

## Examples

### Example 1: Feature Spec

**User request**: "I want to add email notifications when orders are placed"

**Created spec**: `spec-feature-001.md`
```markdown
---
id: spec-feature-001
type: feature
status: draft
priority: high
created_at: 2025-01-19T10:30:00
updated_at: 2025-01-19T10:30:00
---

# Order Confirmation Email Notifications

## Description
Send automated email notifications to customers when they successfully place an order. The email should include order details (items, quantities, total price), estimated delivery date, and a link to track the order.

## Rationale
Customers need immediate confirmation that their order was received. This reduces support inquiries about order status and provides a professional customer experience. Industry standard for e-commerce platforms.

## Dependencies
- **Requires**: spec-email-001 (Email service integration)
- **Related**: spec-order-003 (Order processing system)

## Risks
- Email delivery failures (use queuing system)
- High volume during peak times (rate limiting needed)

## Success Metrics
- 95% email delivery rate within 30 seconds
- Less than 1% bounce rate
- Customer satisfaction score improvement

## Acceptance Criteria
- [ ] Email sent within 30 seconds of order placement
- [ ] Email contains all order items with prices
- [ ] Email includes estimated delivery date
- [ ] Tracking link works and shows order status
- [ ] Failed emails retry 3 times with exponential backoff
- [ ] Admin dashboard shows email delivery status

## Test Scenarios

### Successful order email
**Given**: User places order successfully
**When**: Order is confirmed in database
**Then**: Email is queued and sent within 30 seconds

### Email delivery failure
**Given**: Email service is temporarily down
**When**: System attempts to send email
**Then**: Email is queued for retry with exponential backoff

### High volume scenario
**Given**: 1000 orders placed simultaneously
**When**: System processes order confirmations
**Then**: All emails delivered within 5 minutes, no failures

## Metadata
estimated_hours: 12
technical_notes: Use SendGrid API, implement queue with Redis
```

### Example 2: Bugfix Spec

**User request**: "Memory leak in background processor needs fixing"

**Created spec**: `spec-bugfix-001.md`
```markdown
---
id: spec-bugfix-001
type: bugfix
status: draft
priority: critical
created_at: 2025-01-19T11:00:00
updated_at: 2025-01-19T11:00:00
---

# Fix Memory Leak in Background Job Processor

## Description
The background job processor accumulates memory over time and doesn't release it after job completion. Memory usage grows from 200MB to 2GB+ over 24 hours, eventually causing OOM crashes. Affects job processing for order fulfillment and email sending.

## Rationale
Critical production issue causing service restarts every 12 hours. Impacts order processing reliability and customer experience. Root cause is database connections not being properly closed after job completion.

## Dependencies
None

## Risks
- Fix might affect job processing throughput
- Need careful testing to avoid breaking existing jobs

## Success Metrics
- Memory usage stable at < 300MB over 72 hours
- No OOM crashes
- Job processing throughput unchanged

## Acceptance Criteria
- [ ] Memory usage remains stable over 72-hour test period
- [ ] Database connections properly closed after each job
- [ ] No memory leaks detected by profiler
- [ ] All existing job types still process correctly
- [ ] Performance benchmarks show no regression

## Test Scenarios

### Memory stability test
**Given**: Background processor running for 72 hours
**When**: 10,000 jobs processed during test period
**Then**: Memory usage remains under 300MB

### Connection cleanup verification
**Given**: Single job completes
**When**: Check database connection pool
**Then**: Connection is returned to pool and not held

## Metadata
estimated_hours: 6
technical_notes: Use context managers for DB connections, add memory profiling
```

### Example 3: Refactor Spec

**User request**: "Authentication logic is spread across 5 files, needs consolidation"

**Created spec**: `spec-refactor-001.md`
```markdown
---
id: spec-refactor-001
type: refactor
status: draft
priority: medium
created_at: 2025-01-19T11:30:00
updated_at: 2025-01-19T11:30:00
---

# Consolidate Authentication Logic

## Description
Authentication logic is currently scattered across 5 different files (api.py, middleware.py, services.py, utils.py, validators.py). This makes it hard to maintain, test, and understand the auth flow. Consolidate into a single AuthService class with clear responsibilities.

## Rationale
Technical debt causing maintenance issues. Recent security update required changes in 5 places. New developer onboarding takes longer due to scattered logic. Consolidation will improve testability and make security audits easier.

## Dependencies
None (existing functionality must continue working)

## Risks
- Regression in auth functionality
- Need comprehensive test coverage before refactoring

## Success Metrics
- Auth logic in single module with < 300 lines
- Test coverage > 90%
- No behavior changes (all existing tests pass)
- Reduced complexity score

## Acceptance Criteria
- [ ] All auth logic moved to single AuthService class
- [ ] Existing functionality unchanged (all tests pass)
- [ ] Test coverage increased to > 90%
- [ ] Documentation updated with new structure
- [ ] Code review approved by security team

## Test Scenarios

### Existing functionality preserved
**Given**: Complete existing test suite
**When**: Refactored code deployed
**Then**: All 127 existing tests pass without modification

### Improved testability
**Given**: New AuthService class
**When**: Write tests for edge cases
**Then**: Can test authentication logic in isolation

## Metadata
estimated_hours: 16
technical_notes: Start with comprehensive test coverage, refactor incrementally
```

## Tips for Best Specs

### Be Specific
- Instead of: "Add authentication"
- Better: "Add email/password authentication with JWT tokens, 24-hour session expiry, and password reset via email"

### Define Success Clearly
- Bad: "System works"
- Good: "User can login in < 2 seconds, sessions persist across browser restarts, invalid credentials show within 500ms"

### Break Down Large Features
If > 5 acceptance criteria, consider splitting:
- `spec-auth-001`: Basic login/logout
- `spec-auth-002`: Password reset
- `spec-auth-003`: OAuth integration

### Use Given/When/Then for Tests
This follows a BDD format that's clear and testable:
```
Given: Initial state
When: Action taken
Then: Expected result
```

---

*This guide provides complete specification writing details. Return to SKILL.md for overview or LIFECYCLE.md for management operations.*
211
skills/optimizing-performance/SKILL.md
Normal file
@@ -0,0 +1,211 @@
---
name: Optimizing Performance
description: Optimize performance with profiling, caching strategies, database query optimization, and bottleneck analysis. Use when improving response times, implementing caching layers, or scaling for high load.
---

# Optimizing Performance

I help you identify and fix performance bottlenecks using language-specific profiling tools, optimization patterns, and best practices.

## When to Use Me

**Performance analysis:**
- "Profile this code for bottlenecks"
- "Analyze performance issues"
- "Why is this slow?"

**Optimization:**
- "Optimize database queries"
- "Improve response time"
- "Reduce memory usage"

**Scaling:**
- "Implement caching strategy"
- "Optimize for high load"
- "Scale this service"

## How I Work - Progressive Loading

I load only the performance guidance relevant to your language:

```yaml
Language Detection:
  "Python project" → Load @languages/PYTHON.md
  "Rust project" → Load @languages/RUST.md
  "JavaScript/Node.js" → Load @languages/JAVASCRIPT.md
  "Go project" → Load @languages/GO.md
  "Any language" → Load @languages/GENERIC.md
```

**Don't load all files!** Start with language detection, then load specific guidance.

## Core Principles

### 1. Measure First
**Never optimize without data.** Profile to find actual bottlenecks, don't guess.

- Establish baseline metrics
- Profile to identify hot paths
- Focus on the 20% of code that takes 80% of time
- Measure improvements after optimization
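The measure-first principle can be sketched with Python's built-in profiler. A minimal illustration; `slow_path` is a hypothetical stand-in for your own hot code:

```python
import cProfile
import io
import pstats

def slow_path(n):
    # Hypothetical hot function: quadratic string building
    s = ""
    for i in range(n):
        s += str(i)
    return s

profiler = cProfile.Profile()
profiler.enable()
slow_path(10_000)
profiler.disable()

# Print the functions where time is actually spent, most expensive first
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The top entries of this report are the hot paths worth optimizing; everything else is usually noise.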

### 2. Performance Budgets
Set clear targets before optimizing:

```yaml
targets:
  api_response: "<200ms (p95)"
  page_load: "<2 seconds"
  database_query: "<50ms (p95)"
  cache_lookup: "<10ms"
```

### 3. Trade-offs
Balance performance against:
- Code readability
- Maintainability
- Development time
- Memory usage

Premature optimization is the root of all evil. Optimize when:
- Profiling shows a clear bottleneck
- A performance requirement is not met
- User experience is degraded

## Quick Wins (Language-Agnostic)

### Database
- Add indexes for frequently queried columns
- Implement connection pooling
- Use batch operations instead of loops
- Cache expensive query results
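The batch-operations point above looks like this with Python's stdlib `sqlite3`; any DB-API driver works the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

rows = [(i, f"user-{i}") for i in range(1000)]

# Bad: one round-trip per row
# for row in rows:
#     conn.execute("INSERT INTO users VALUES (?, ?)", row)

# Good: a single batched statement
conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # → 1000
```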

### Caching
- Implement multi-level caching (L1: in-memory, L2: Redis, L3: database, L4: CDN)
- Define cache invalidation strategy
- Monitor cache hit rates
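A minimal in-process (L1) cache with hit-rate monitoring, using only the standard library:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def expensive_lookup(key: str) -> str:
    # Stand-in for an expensive computation or query
    return key.upper()

for key in ["a", "b", "a", "a"]:
    expensive_lookup(key)

# cache_info() exposes the counters needed to monitor hit rate
info = expensive_lookup.cache_info()
hit_rate = info.hits / (info.hits + info.misses)
print(info.hits, info.misses)  # → 2 2
```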

### Network
- Enable compression for responses
- Use HTTP/2 or HTTP/3
- Implement CDN for static assets
- Configure appropriate timeouts

## Language-Specific Guidance

### Python
**Load:** `@languages/PYTHON.md`

**Quick reference:**
- Profiling: `cProfile`, `py-spy`, `memory_profiler`
- Patterns: Generators, async/await, list comprehensions
- Anti-patterns: String concatenation in loops, GIL contention
- Tools: `pytest-benchmark`, `locust`

### Rust
**Load:** `@languages/RUST.md`

**Quick reference:**
- Profiling: `cargo bench`, `flamegraph`, `perf`
- Patterns: Zero-cost abstractions, iterator chains, preallocated collections
- Anti-patterns: Unnecessary allocations, large enum variants
- Tools: `criterion`, `rayon`, `parking_lot`

### JavaScript/Node.js
**Load:** `@languages/JAVASCRIPT.md`

**Quick reference:**
- Profiling: `clinic.js`, `0x`, Chrome DevTools
- Patterns: Event loop optimization, worker threads, streaming
- Anti-patterns: Blocking event loop, memory leaks, unnecessary re-renders
- Tools: `autocannon`, `react-window`, `p-limit`

### Go
**Load:** `@languages/GO.md`

**Quick reference:**
- Profiling: `pprof`, `go test -bench`, `go tool trace`
- Patterns: Goroutine pools, buffered channels, `sync.Pool`
- Anti-patterns: Unlimited goroutines, defer in loops, lock contention
- Tools: `benchstat`, `sync.Map`, `strings.Builder`

### Generic Patterns
**Load:** `@languages/GENERIC.md`

**When to use:** Database optimization, caching strategies, load balancing, monitoring - applicable to any language.

## Optimization Workflow

### Phase 1: Baseline
1. Define performance requirements
2. Measure current performance
3. Identify user-facing metrics (response time, throughput)

### Phase 2: Profile
1. Use language-specific profiling tools
2. Identify hot paths (where time is spent)
3. Find memory bottlenecks
4. Check for resource leaks

### Phase 3: Optimize
1. Focus on the biggest bottleneck first
2. Apply language-specific optimizations
3. Implement caching where appropriate
4. Optimize database queries

### Phase 4: Verify
1. Re-profile to measure improvements
2. Run performance regression tests
3. Monitor in production
4. Set up alerts for degradation

## Common Bottlenecks

### Database
- Missing indexes
- N+1 query problem
- No connection pooling
- Expensive joins

→ **Load** `@languages/GENERIC.md` for DB optimization

### Memory
- Memory leaks
- Excessive allocations
- Large object graphs
- No pooling

→ **Load** language-specific file for memory management

### Network
- No compression
- Chatty API calls
- Synchronous external calls
- No CDN

→ **Load** `@languages/GENERIC.md` for network optimization

### Concurrency
- Lock contention
- Excessive threading/goroutines
- Blocking operations
- Poor work distribution

→ **Load** language-specific file for concurrency patterns

## Success Criteria

**Optimization complete when:**
- ✅ Performance targets met
- ✅ No regressions in functionality
- ✅ Code remains maintainable
- ✅ Improvements verified with profiling
- ✅ Production metrics show improvement
- ✅ Alerts configured for degradation

## Next Steps

- Use profiling tools to identify bottlenecks
- Load language-specific guidance
- Apply targeted optimizations
- Set up monitoring and alerts

---

*Load language-specific files for detailed profiling tools, optimization patterns, and best practices*
426
skills/optimizing-performance/languages/GENERIC.md
Normal file
@@ -0,0 +1,426 @@
# Generic Performance Optimization

**Load this file when:** Optimizing performance in any language, or when you need language-agnostic patterns

## Universal Principles

### Measure First
- Never optimize without profiling
- Establish baseline metrics before changes
- Focus on bottlenecks, not micro-optimizations
- Apply the 80/20 rule: roughly 80% of time is spent in 20% of the code

### Performance Budgets
```yaml
response_time_targets:
  api_endpoint: "<200ms (p95)"
  page_load: "<2 seconds"
  database_query: "<50ms (p95)"
  cache_lookup: "<10ms"

resource_limits:
  max_memory: "512MB per process"
  max_cpu: "80% sustained"
  max_connections: "100 per instance"
```

## Database Optimization

### Indexing Strategy
```sql
-- Identify slow queries
-- PostgreSQL
SELECT query, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Add indexes for frequently queried columns
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_orders_user_created ON orders(user_id, created_at);

-- Composite indexes for common query patterns
CREATE INDEX idx_search ON products(category, price, created_at);
```

### Query Optimization
```sql
-- Use EXPLAIN to understand query plans
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'user@example.com';

-- Avoid SELECT *
-- Bad
SELECT * FROM users;

-- Good
SELECT id, name, email FROM users;

-- Use LIMIT for pagination
SELECT id, name FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 0;

-- Use EXISTS instead of COUNT for checking existence
-- Bad
SELECT COUNT(*) FROM orders WHERE user_id = 123;

-- Good
SELECT EXISTS(SELECT 1 FROM orders WHERE user_id = 123);
```

### Connection Pooling
```yaml
connection_pool_config:
  min_connections: 5
  max_connections: 20
  connection_timeout: 30s
  idle_timeout: 10m
  max_lifetime: 1h
```
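The settings above map onto a simple pool mechanism. A minimal Python sketch using only the standard library (real drivers and frameworks ship their own pools; this only illustrates the mechanics):

```python
import queue
import sqlite3

class ConnectionPool:
    """Bounded pool: connections are created up front and reused."""

    def __init__(self, factory, max_connections=5, timeout=30.0):
        self._pool = queue.Queue(maxsize=max_connections)
        self._timeout = timeout
        for _ in range(max_connections):
            self._pool.put(factory())

    def acquire(self):
        # Blocks for up to `timeout` seconds if all connections are in use
        return self._pool.get(timeout=self._timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(lambda: sqlite3.connect(":memory:"), max_connections=2)
conn = pool.acquire()
conn.execute("SELECT 1")
pool.release(conn)  # Returned to the pool, not closed
```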

## Caching Strategies

### Multi-Level Caching
```yaml
caching_layers:
  L1_application:
    type: "In-Memory (LRU)"
    size: "100MB"
    ttl: "5 minutes"
    use_case: "Hot data, session data"

  L2_distributed:
    type: "Redis"
    ttl: "1 hour"
    use_case: "Shared data across instances"

  L3_database:
    type: "Query Result Cache"
    ttl: "15 minutes"
    use_case: "Expensive query results"

  L4_cdn:
    type: "CDN"
    ttl: "24 hours"
    use_case: "Static assets, public API responses"
```
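A sketch of the L1/L2 lookup order in Python. The `l2_store` dict is a stand-in for a distributed store such as Redis; in a real deployment you would replace it with a Redis client:

```python
import time

l1_cache = {}   # L1: in-process, short TTL
l2_store = {}   # L2 stand-in for Redis / a distributed cache
L1_TTL = 300    # 5 minutes

def get(key, loader):
    now = time.monotonic()
    # L1 hit?
    entry = l1_cache.get(key)
    if entry and entry[1] > now:
        return entry[0]
    # L2 hit, else fall through to the expensive source (the database)
    if key in l2_store:
        value = l2_store[key]
    else:
        value = loader(key)
        l2_store[key] = value
    l1_cache[key] = (value, now + L1_TTL)  # Populate L1 on the way back
    return value

calls = []
def load_user(key):
    calls.append(key)
    return {"id": key}

get("u1", load_user)
get("u1", load_user)  # Served from L1; the loader is not called again
print(len(calls))  # → 1
```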

### Cache Invalidation Patterns
```yaml
strategies:
  time_based:
    description: "TTL-based expiration"
    use_case: "Data with predictable change patterns"
    example: "Weather data, stock prices"

  event_based:
    description: "Invalidate on data change events"
    use_case: "Real-time consistency required"
    example: "User profile updates"

  write_through:
    description: "Update cache on write"
    use_case: "Strong consistency needed"
    example: "Shopping cart, user sessions"

  lazy_refresh:
    description: "Refresh on cache miss"
    use_case: "Acceptable stale data"
    example: "Analytics dashboards"
```

## Network Optimization

### HTTP/2 and HTTP/3
```yaml
benefits:
  - Multiplexing: Multiple requests over single connection
  - Header compression: Reduced overhead
  - Server push: Proactive resource sending
  - Binary protocol: Faster parsing
```

### Compression
```yaml
compression_config:
  enabled: true
  min_size: "1KB"  # Don't compress tiny responses
  types:
    - "text/html"
    - "text/css"
    - "application/javascript"
    - "application/json"
  level: 6  # Balance speed vs size
```

### Connection Management
```yaml
keep_alive:
  enabled: true
  timeout: "60s"
  max_requests: 100

timeouts:
  connect: "10s"
  read: "30s"
  write: "30s"
  idle: "120s"
```

## Monitoring and Observability

### Key Metrics to Track
```yaml
application_metrics:
  - response_time_p50
  - response_time_p95
  - response_time_p99
  - error_rate
  - throughput_rps

system_metrics:
  - cpu_utilization
  - memory_utilization
  - disk_io
  - network_io

database_metrics:
  - query_execution_time
  - connection_pool_usage
  - slow_query_count
  - cache_hit_rate
```

### Alert Thresholds
```yaml
alerts:
  critical:
    - metric: "error_rate"
      threshold: ">5%"
      duration: "2 minutes"

    - metric: "response_time_p99"
      threshold: ">1000ms"
      duration: "5 minutes"

  warning:
    - metric: "cpu_utilization"
      threshold: ">80%"
      duration: "10 minutes"

    - metric: "memory_utilization"
      threshold: ">85%"
      duration: "5 minutes"
```

## Load Balancing

### Strategies
```yaml
round_robin:
  description: "Distribute requests evenly"
  use_case: "Homogeneous backend servers"

least_connections:
  description: "Route to server with fewest connections"
  use_case: "Varying request processing times"

ip_hash:
  description: "Consistent routing based on client IP"
  use_case: "Session affinity required"

weighted:
  description: "Route based on server capacity"
  use_case: "Heterogeneous server specs"
```

### Health Checks
```yaml
health_check:
  interval: "10s"
  timeout: "5s"
  unhealthy_threshold: 3
  healthy_threshold: 2
  path: "/health"
```

## CDN Configuration

### Caching Rules
```yaml
static_assets:
  pattern: "*.{js,css,png,jpg,svg,woff2}"
  cache_control: "public, max-age=31536000, immutable"

api_responses:
  pattern: "/api/public/*"
  cache_control: "public, max-age=300, s-maxage=600"

html_pages:
  pattern: "*.html"
  cache_control: "public, max-age=60, s-maxage=300"
```

### Geographic Distribution
```yaml
regions:
  - us-east: "Primary"
  - us-west: "Failover"
  - eu-west: "Regional"
  - ap-southeast: "Regional"

routing:
  policy: "latency-based"
  fallback: "round-robin"
```

## Horizontal Scaling Patterns

### Stateless Services
```yaml
principles:
  - No local state storage
  - Session data in external store (Redis, database)
  - Any instance can handle any request
  - Easy to add/remove instances
```

### Message Queues
```yaml
use_cases:
  - Decouple services
  - Handle traffic spikes
  - Async processing
  - Retry logic

patterns:
  work_queue:
    description: "Distribute tasks to workers"
    example: "Image processing, email sending"

  pub_sub:
    description: "Event broadcasting"
    example: "User registration notifications"
```

## Anti-Patterns to Avoid

### N+1 Query Problem
```sql
-- Bad: N+1 queries (1 for users + N for profiles)
SELECT * FROM users;
-- Then for each user:
SELECT * FROM profiles WHERE user_id = ?;

-- Good: Single join query
SELECT u.*, p.*
FROM users u
LEFT JOIN profiles p ON u.id = p.user_id;
```

### Chatty Interfaces
```yaml
bad:
  requests: 100
  description: "100 separate API calls to get data"
  latency: "100 * 50ms = 5000ms"

good:
  requests: 1
  description: "Single batch API call"
  latency: "200ms"
```

### Synchronous External Calls
```yaml
bad:
  pattern: "Sequential blocking calls"
  time: "call1 (500ms) + call2 (500ms) + call3 (500ms) = 1500ms"

good:
  pattern: "Parallel async calls"
  time: "max(call1, call2, call3) = 500ms"
```
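The parallel pattern above, sketched with Python's `asyncio`; the 500 ms external calls are simulated with `asyncio.sleep`:

```python
import asyncio
import time

async def external_call(name):
    await asyncio.sleep(0.5)  # Simulated 500ms external service call
    return name

async def main():
    start = time.monotonic()
    # All three calls run concurrently: total ≈ max(...), not the sum
    results = await asyncio.gather(
        external_call("a"), external_call("b"), external_call("c")
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 1))  # elapsed ≈ 0.5s, not 1.5s
```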

## Performance Testing Strategy

### Load Testing
```yaml
scenarios:
  smoke_test:
    users: 1
    duration: "1 minute"
    purpose: "Verify system works"

  load_test:
    users: "normal_traffic"
    duration: "15 minutes"
    purpose: "Performance under normal load"

  stress_test:
    users: "2x_normal"
    duration: "30 minutes"
    purpose: "Find breaking point"

  spike_test:
    users: "0 → 1000 → 0"
    duration: "10 minutes"
    purpose: "Handle sudden traffic spikes"

  endurance_test:
    users: "normal_traffic"
    duration: "24 hours"
    purpose: "Memory leaks, degradation"
```

### Performance Regression Tests
```yaml
approach:
  - Baseline metrics from production
  - Run automated perf tests in CI
  - Compare against baseline
  - Fail build if regression > threshold

thresholds:
  response_time: "+10%"
  throughput: "-5%"
  error_rate: "+1%"
```

## Checklist

**Initial Assessment:**
- [ ] Identify performance requirements
- [ ] Establish current baseline metrics
- [ ] Profile to find bottlenecks

**Database Optimization:**
- [ ] Add indexes for common queries
- [ ] Implement connection pooling
- [ ] Cache query results
- [ ] Use batch operations

**Caching:**
- [ ] Implement multi-level caching
- [ ] Define cache invalidation strategy
- [ ] Monitor cache hit rates

**Network:**
- [ ] Enable compression
- [ ] Use HTTP/2 or HTTP/3
- [ ] Implement CDN for static assets
- [ ] Configure appropriate timeouts

**Monitoring:**
- [ ] Track key performance metrics
- [ ] Set up alerts for anomalies
- [ ] Implement distributed tracing
- [ ] Create performance dashboards

**Testing:**
- [ ] Run load tests
- [ ] Conduct stress tests
- [ ] Set up performance regression tests
- [ ] Monitor in production

---

*Language-agnostic performance optimization patterns applicable to any technology stack*
433
skills/optimizing-performance/languages/GO.md
Normal file
@@ -0,0 +1,433 @@
# Go Performance Optimization

**Load this file when:** Optimizing performance in Go projects

## Profiling Tools

### Built-in pprof
```bash
# CPU profiling
go test -cpuprofile=cpu.prof -bench=.
go tool pprof cpu.prof

# Memory profiling
go test -memprofile=mem.prof -bench=.
go tool pprof mem.prof

# Web UI for profiles
go tool pprof -http=:8080 cpu.prof

# Goroutine profiling
go tool pprof http://localhost:6060/debug/pprof/goroutine

# Heap profiling
go tool pprof http://localhost:6060/debug/pprof/heap
```

### Benchmarking
```go
// Basic benchmark
func BenchmarkFibonacci(b *testing.B) {
    for i := 0; i < b.N; i++ {
        fibonacci(20)
    }
}

// With sub-benchmarks
func BenchmarkSizes(b *testing.B) {
    sizes := []int{10, 100, 1000}
    for _, size := range sizes {
        b.Run(fmt.Sprintf("size=%d", size), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                process(size)
            }
        })
    }
}

// Reset timer for setup
func BenchmarkWithSetup(b *testing.B) {
    data := setupExpensiveData()
    b.ResetTimer() // Don't count setup time

    for i := 0; i < b.N; i++ {
        process(data)
    }
}
```

### Runtime Metrics
```go
import (
    "fmt"
    "net/http"
    _ "net/http/pprof" // Import for side effects
    "runtime"
)

func init() {
    // Enable profiling endpoint
    go func() {
        http.ListenAndServe("localhost:6060", nil)
    }()
}

// Monitor goroutines
func printStats() {
    fmt.Printf("Goroutines: %d\n", runtime.NumGoroutine())

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("Alloc: %d MB\n", m.Alloc/1024/1024)
    fmt.Printf("TotalAlloc: %d MB\n", m.TotalAlloc/1024/1024)
}
```

## Memory Management

### Avoiding Allocations
```go
// Bad: Allocates on every call
func process(data []byte) []byte {
    result := make([]byte, len(data)) // New allocation
    copy(result, data)
    return result
}

// Good: Reuse buffer
var bufferPool = sync.Pool{
    New: func() interface{} {
        return make([]byte, 1024)
    },
}

func process(data []byte) {
    buf := bufferPool.Get().([]byte)
    defer bufferPool.Put(buf)
    // Process with buf
}
```

### Preallocate Slices
```go
// Bad: Multiple allocations as slice grows
items := []Item{}
for i := 0; i < 1000; i++ {
    items = append(items, Item{i}) // Reallocates when cap exceeded
}

// Good: Single allocation
items := make([]Item, 0, 1000)
for i := 0; i < 1000; i++ {
    items = append(items, Item{i}) // No reallocation
}

// Or if final size is known
items := make([]Item, 1000)
for i := 0; i < 1000; i++ {
    items[i] = Item{i}
}
```

### String vs []byte
```go
// Bad: String concatenation allocates
// (parts is a []string)
var result string
for _, s := range parts {
    result += s // New allocation each time
}

// Good: Use strings.Builder
var builder strings.Builder
builder.Grow(estimatedSize) // Preallocate
for _, s := range parts {
    builder.WriteString(s)
}
result := builder.String()

// For byte operations, work with []byte
data := []byte("hello")
data = append(data, " world"...) // Efficient
```

## Goroutine Optimization

### Worker Pool Pattern
```go
// Bad: Unlimited goroutines
for _, task := range tasks {
    go process(task) // Could spawn millions!
}

// Good: Limited worker pool
func workerPool(tasks <-chan Task, workers int) {
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for task := range tasks {
                process(task)
            }
        }()
    }
    wg.Wait()
}

// Usage
taskChan := make(chan Task, 100)
go workerPool(taskChan, 10) // 10 workers
```

### Channel Patterns
```go
// Buffered channels reduce blocking
ch := make(chan int, 100) // Buffer of 100

// Fan-out pattern for parallel work
func fanOut(in <-chan int, n int) []<-chan int {
    outs := make([]<-chan int, n)
    for i := 0; i < n; i++ {
        out := make(chan int)
        outs[i] = out
        go func() {
            for v := range in {
                out <- process(v)
            }
            close(out)
        }()
    }
    return outs
}

// Fan-in pattern to merge results
func fanIn(channels ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup

    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for v := range c {
                out <- v
            }
        }(ch)
    }

    go func() {
        wg.Wait()
        close(out)
    }()

    return out
}
```

## Data Structure Optimization

### Map Preallocation
```go
// Bad: Map grows as needed
m := make(map[string]int)
for i := 0; i < 10000; i++ {
    m[fmt.Sprint(i)] = i // Reallocates periodically
}

// Good: Preallocate
m := make(map[string]int, 10000)
for i := 0; i < 10000; i++ {
    m[fmt.Sprint(i)] = i // No reallocation
}
```

### Struct Field Alignment
```go
// Bad: Poor alignment (40 bytes due to padding)
type BadLayout struct {
    a bool  // 1 byte + 7 padding
    b int64 // 8 bytes
    c bool  // 1 byte + 7 padding
    d int64 // 8 bytes
    e bool  // 1 byte + 7 padding
}

// Good: Optimal alignment (24 bytes)
type GoodLayout struct {
    b int64 // 8 bytes
    d int64 // 8 bytes
    a bool  // 1 byte
    c bool  // 1 byte
    e bool  // 1 byte + 5 padding
}
```

## I/O Optimization

### Buffered I/O
```go
// Default buffer size (may be too small for large files)
file, _ := os.Open("file.txt")
scanner := bufio.NewScanner(file)

// Good: Buffered with custom size
file, _ := os.Open("file.txt")
reader := bufio.NewReaderSize(file, 64*1024) // 64KB buffer
scanner := bufio.NewScanner(reader)
```

### Connection Pooling
```go
// HTTP client with connection pooling
client := &http.Client{
    Transport: &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 10,
        IdleConnTimeout:     90 * time.Second,
    },
    Timeout: 10 * time.Second,
}

// Database connection pool
db, _ := sql.Open("postgres", dsn)
db.SetMaxOpenConns(25)
db.SetMaxIdleConns(5)
db.SetConnMaxLifetime(5 * time.Minute)
```

## Performance Anti-Patterns

### Unnecessary Interface Conversions
```go
// Bad: Interface conversion in hot path
func process(items []interface{}) {
    for _, item := range items {
        v := item.(MyType) // Type assertion overhead
        use(v)
    }
}

// Good: Use concrete types
func process(items []MyType) {
    for _, item := range items {
        use(item) // Direct access
    }
}
```

### Defer in Loops
```go
// Bad: Defers accumulate in loop
for _, file := range files {
    f, _ := os.Open(file)
    defer f.Close() // All close calls deferred until function returns!
}

// Good: Close immediately or use function
for _, file := range files {
    func() {
        f, _ := os.Open(file)
        defer f.Close() // Deferred to end of this closure
        process(f)
    }()
}
```

### Lock Contention
```go
// Bad: Lock held during expensive operation
mu.Lock()
result := expensiveComputation(data)
cache[key] = result
mu.Unlock()

// Good: Minimize lock time
result := expensiveComputation(data)
mu.Lock()
cache[key] = result
mu.Unlock()

// Better: Use sync.Map for concurrent reads
var cache sync.Map
cache.Store(key, value)
val, ok := cache.Load(key)
```

## Compiler Optimizations

### Escape Analysis
```go
// Bad: Escapes to heap
func makeSlice() *[]int {
    s := make([]int, 1000)
    return &s // Pointer returned, allocates on heap
}

// Good: Stays on stack
func makeSlice() []int {
    s := make([]int, 1000)
    return s // Value returned, can stay on stack
}

// Check with: go build -gcflags='-m'
```

### Inline Functions
```go
// Small functions are inlined automatically
func add(a, b int) int {
    return a + b // Will be inlined
}

// Prevent inlining if needed: //go:noinline
```

## Performance Checklist

**Before Optimizing:**
- [ ] Profile with pprof to identify bottlenecks
- [ ] Write benchmarks for hot paths
- [ ] Measure allocations with `-benchmem`
- [ ] Check for goroutine leaks

**Go-Specific Optimizations:**
- [ ] Preallocate slices and maps with known capacity
- [ ] Use `strings.Builder` for string concatenation
- [ ] Implement worker pools instead of unlimited goroutines
- [ ] Use buffered channels to reduce blocking
- [ ] Reuse buffers with `sync.Pool`
- [ ] Minimize allocations in hot paths
- [ ] Order struct fields by size (largest first)
- [ ] Use concrete types instead of interfaces in hot paths
- [ ] Avoid `defer` in tight loops
- [ ] Use `sync.Map` for concurrent read-heavy maps

**After Optimizing:**
- [ ] Re-profile to verify improvements
- [ ] Compare benchmarks: `benchstat old.txt new.txt`
- [ ] Check memory allocations decreased
- [ ] Monitor goroutine count in production
- [ ] Use `go test -race` to check for race conditions

## Tools and Packages

**Profiling:**
- `pprof` - Built-in profiler
- `go-torch` - Flamegraph generation
- `benchstat` - Compare benchmark results
- `trace` - Execution tracer

**Optimization:**
- `sync.Pool` - Object pooling
- `sync.Map` - Concurrent map
- `strings.Builder` - Efficient string building
- `bufio` - Buffered I/O

**Analysis:**
- `-gcflags='-m'` - Escape analysis
- `go test -race` - Race detector
- `go test -benchmem` - Memory allocations
- `goleak` - Goroutine leak detection

---

*Go-specific performance optimization with goroutines, channels, and profiling*
406
skills/optimizing-performance/languages/JAVASCRIPT.md
Normal file
@@ -0,0 +1,406 @@
# JavaScript/Node.js Performance Optimization

**Load this file when:** Optimizing performance in JavaScript or Node.js projects

## Profiling Tools

### Node.js Built-in Profiler
```bash
# CPU profiling
node --prof app.js
node --prof-process isolate-0x*.log > processed.txt

# Inspect with Chrome DevTools
node --inspect app.js
# Open chrome://inspect

# Heap snapshots (break before user code starts)
node --inspect-brk app.js
# Take heap snapshots in DevTools
```

### Clinic.js Suite
```bash
# Install clinic
npm install -g clinic

# Doctor - Overall health check
clinic doctor -- node app.js

# Flame - Flamegraph profiling
clinic flame -- node app.js

# Bubbleprof - Async operations
clinic bubbleprof -- node app.js

# Heap profiler
clinic heapprofiler -- node app.js
```

### Performance Measurement
```bash
# 0x - Flamegraph generator
npx 0x app.js

# autocannon - HTTP load testing
npx autocannon http://localhost:3000

# lighthouse - Frontend performance
npx lighthouse https://example.com
```
|
||||
|
||||
## V8 Optimization Patterns
|
||||
|
||||
### Hidden Classes and Inline Caches
|
||||
```javascript
|
||||
// Bad: Dynamic property addition breaks hidden class
|
||||
function Point(x, y) {
|
||||
this.x = x;
|
||||
this.y = y;
|
||||
}
|
||||
const p1 = new Point(1, 2);
|
||||
p1.z = 3; // Deoptimizes!
|
||||
|
||||
// Good: Consistent object shape
|
||||
function Point(x, y, z = 0) {
|
||||
this.x = x;
|
||||
this.y = y;
|
||||
this.z = z; // Always present
|
||||
}
|
||||
```
|
||||
|
||||
### Avoid Polymorphism in Hot Paths
|
||||
```javascript
|
||||
// Bad: Type changes break optimization
|
||||
function add(a, b) {
|
||||
return a + b;
|
||||
}
|
||||
add(1, 2); // Optimized for numbers
|
||||
add("a", "b"); // Deoptimized! Now handles strings too
|
||||
|
||||
// Good: Separate functions for different types
|
||||
function addNumbers(a, b) {
|
||||
return a + b; // Always numbers
|
||||
}
|
||||
|
||||
function concatStrings(a, b) {
|
||||
return a + b; // Always strings
|
||||
}
|
||||
```
|
||||
|
||||
### Array Optimization
|
||||
```javascript
|
||||
// Bad: Mixed types in array
|
||||
const mixed = [1, "two", 3, "four"]; // Slow property access
|
||||
|
||||
// Good: Homogeneous arrays
|
||||
const numbers = [1, 2, 3, 4]; // Fast element access
|
||||
const strings = ["one", "two", "three"];
|
||||
|
||||
// Use typed arrays for numeric data
|
||||
const buffer = new Float64Array(1000); // Faster than regular arrays
|
||||
```

## Event Loop Optimization

### Avoid Blocking the Event Loop
```javascript
const fs = require('fs');

// Bad: Synchronous operations block the event loop
const data = fs.readFileSync('large-file.txt');
const result = heavyComputation(data);

// Good: Async operations
const data = await fs.promises.readFile('large-file.txt');
const result = await processAsync(data);

// For CPU-intensive work, use worker threads
const { Worker } = require('worker_threads');
const worker = new Worker('./cpu-intensive.js');
```

### Batch Async Operations
```javascript
// Bad: Sequential async calls
for (const item of items) {
  await processItem(item); // Waits for each
}

// Good: Parallel execution
await Promise.all(items.map(item => processItem(item)));

// Better: Controlled concurrency with p-limit
const pLimit = require('p-limit');
const limit = pLimit(10); // Max 10 concurrent

await Promise.all(
  items.map(item => limit(() => processItem(item)))
);
```

## Memory Management

### Avoid Memory Leaks
```javascript
// Bad: Global variables and closures retain memory
let cache = {}; // Never cleared
function addToCache(key, value) {
  cache[key] = value; // Grows indefinitely
}

// Good: Use WeakMap for object-keyed caching
const weakCache = new WeakMap();
function addToWeakCache(obj, value) {
  weakCache.set(obj, value); // Entries can be garbage collected with their keys
}

// Good: Implement cache eviction
const LRU = require('lru-cache');
const lruCache = new LRU({ max: 500 });
```

### Stream Large Data
```javascript
const fs = require('fs');
const readline = require('readline');

// Bad: Load the entire file into memory
const data = await fs.promises.readFile('large-file.txt');
const processed = data.toString().split('\n').map(processLine);

// Good: Stream processing
const stream = fs.createReadStream('large-file.txt');
const rl = readline.createInterface({ input: stream });

for await (const line of rl) {
  processLine(line); // Process one line at a time
}
```

## Database Query Optimization

### Connection Pooling
```javascript
// Bad: Create a new connection per request
async function query(sql) {
  const conn = await mysql.createConnection(config);
  const result = await conn.query(sql);
  await conn.end();
  return result;
}

// Good: Use a connection pool
const pool = mysql.createPool(config);
async function query(sql) {
  return pool.query(sql); // Reuses connections
}
```

### Batch Database Operations
```javascript
// Bad: Multiple round trips
for (const user of users) {
  await db.insert('users', user);
}

// Good: Single batch insert
await db.batchInsert('users', users, 1000); // Chunks of 1000
```

## HTTP Server Optimization

### Compression
```javascript
const compression = require('compression');
app.use(compression({
  level: 6,       // Balance between speed and compression ratio
  threshold: 1024 // Only compress responses > 1KB
}));
```

### Caching Headers
```javascript
app.get('/static/*', (req, res) => {
  res.setHeader('Cache-Control', 'public, max-age=31536000');
  res.setHeader('ETag', computeETag(file));
  res.sendFile(file);
});
```

### Keep-Alive Connections
```javascript
const http = require('http');
const server = http.createServer(app);
server.keepAliveTimeout = 60000; // Keep idle connections open for 60 seconds
```

## Frontend Performance

### Code Splitting
```javascript
import { lazy } from 'react';

// Dynamic imports for code splitting
const HeavyComponent = lazy(() => import('./HeavyComponent'));

// Route-based code splitting
const routes = [
  {
    path: '/dashboard',
    component: lazy(() => import('./Dashboard'))
  }
];
```

### Memoization
```javascript
// React.memo for expensive components
const ExpensiveComponent = React.memo(({ data }) => {
  return <div>{expensiveRender(data)}</div>;
});

// useMemo for expensive computations (copy before sorting to avoid mutating props)
const sortedData = useMemo(() => {
  return [...data].sort(compare);
}, [data]);

// useCallback for stable function references
const handleClick = useCallback(() => {
  doSomething(id);
}, [id]);
```

### Virtual Scrolling
```javascript
// For large lists, render only the visible items
import { FixedSizeList } from 'react-window';

<FixedSizeList
  height={600}
  itemCount={10000}
  itemSize={50}
  width="100%"
>
  {Row}
</FixedSizeList>
```

## Performance Anti-Patterns

### Unnecessary Re-renders
```javascript
// Bad: Creates a new object on every render
function MyComponent() {
  const style = { color: 'red' }; // New object each render
  return <div style={style}>Text</div>;
}

// Good: Define outside the component or use useMemo
const style = { color: 'red' };
function MyComponent() {
  return <div style={style}>Text</div>;
}
```

### Expensive Operations in Render
```javascript
// Bad: Expensive computation in render
function MyComponent({ items }) {
  const sorted = [...items].sort(); // Sorts on every render!
  return <List data={sorted} />;
}

// Good: Memoize expensive computations
function MyComponent({ items }) {
  const sorted = useMemo(() => [...items].sort(), [items]);
  return <List data={sorted} />;
}
```

## Benchmarking

### Simple Benchmarks
```javascript
const { performance } = require('perf_hooks');

function benchmark(fn, iterations = 1000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  const end = performance.now();
  console.log(`Avg: ${(end - start) / iterations}ms`);
}

benchmark(() => myFunction());
```

### Benchmark.js
```javascript
const Benchmark = require('benchmark');
const suite = new Benchmark.Suite();

suite
  .add('Array#forEach', function() {
    [1, 2, 3].forEach(x => x * 2);
  })
  .add('Array#map', function() {
    [1, 2, 3].map(x => x * 2);
  })
  .on('complete', function() {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  .run();
```

## Performance Checklist

**Before Optimizing:**
- [ ] Profile with Chrome DevTools or clinic.js
- [ ] Identify hot paths and bottlenecks
- [ ] Measure baseline performance

**Node.js Optimizations:**
- [ ] Use worker threads for CPU-intensive tasks
- [ ] Implement connection pooling for databases
- [ ] Enable compression middleware
- [ ] Use streams for large data processing
- [ ] Implement caching (Redis, in-memory)
- [ ] Batch async operations with controlled concurrency
- [ ] Monitor event loop lag

**Frontend Optimizations:**
- [ ] Implement code splitting
- [ ] Use React.memo for expensive components
- [ ] Implement virtual scrolling for large lists
- [ ] Optimize bundle size (tree shaking, minification)
- [ ] Use Web Workers for heavy computations
- [ ] Implement service workers for offline caching
- [ ] Lazy load images and components

**After Optimizing:**
- [ ] Re-profile to verify improvements
- [ ] Check memory usage for leaks
- [ ] Run load tests (autocannon, artillery)
- [ ] Monitor with APM tools

## Tools and Libraries

**Profiling:**
- `clinic.js` - Performance profiling suite
- `0x` - Flamegraph profiler
- `node --inspect` - Chrome DevTools integration
- `autocannon` - HTTP load testing

**Optimization:**
- `p-limit` - Concurrency control
- `lru-cache` - LRU caching
- `compression` - Response compression
- `react-window` - Virtual scrolling
- `workerpool` - Worker thread pools

**Monitoring:**
- `prom-client` - Prometheus metrics
- `newrelic` / `datadog` - APM
- `clinic doctor` - Health diagnostics

---

*JavaScript/Node.js-specific performance optimization with V8 patterns and profiling tools*

326
skills/optimizing-performance/languages/PYTHON.md
Normal file
@@ -0,0 +1,326 @@
# Python Performance Optimization

**Load this file when:** Optimizing performance in Python projects

## Profiling Tools

### Execution Time Profiling
```bash
# cProfile - Built-in profiler
python -m cProfile -o profile.stats script.py
python -m pstats profile.stats

# py-spy - Sampling profiler (no code changes needed)
py-spy record -o profile.svg -- python script.py
py-spy top -- python script.py

# line_profiler - Line-by-line profiling
kernprof -l -v script.py
```

### Memory Profiling
```bash
# memory_profiler - Line-by-line memory usage
python -m memory_profiler script.py

# memray - Modern memory profiler
memray run script.py
memray flamegraph output.bin

# tracemalloc - Built-in memory tracking
# (use in code, see example below)
```

### Benchmarking
```bash
# pytest-benchmark
pytest tests/ --benchmark-only

# timeit - Quick microbenchmarks
python -m timeit "'-'.join(str(n) for n in range(100))"
```
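
The same microbenchmark can also be driven from code with the `timeit` module; a small sketch (the iteration count is arbitrary):

```python
import timeit

# Time the join expression 10,000 times and report the per-call cost
elapsed = timeit.timeit(
    "'-'.join(str(n) for n in range(100))",
    number=10_000,
)
print(f"{elapsed / 10_000 * 1e6:.2f} microseconds per call")
```

Programmatic `timeit` is handy inside scripts or notebooks where shelling out is awkward.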

## Python-Specific Optimization Patterns

### Async/Await Patterns
```python
import asyncio
import aiohttp

# Good: Parallel async operations
async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        return await asyncio.gather(*tasks)

# Bad: Sequential awaits (defeats the purpose)
async def fetch_all_bad(urls):
    results = []
    async with aiohttp.ClientSession() as session:
        for url in urls:
            results.append(await fetch_url(session, url))
    return results
```

### List Comprehensions vs Generators
```python
# Generator (memory efficient for large datasets)
def process_large_file(filename):
    return (process_line(line) for line in open(filename))

# List comprehension (when you need all data in memory)
def process_small_file(filename):
    return [process_line(line) for line in open(filename)]

# Use itertools for complex generators
from itertools import islice, chain
first_10 = list(islice(generate_data(), 10))
```

### Efficient Data Structures
```python
# Use sets for membership testing
# Bad: O(n)
if item in my_list:  # Slow for large lists
    ...

# Good: O(1)
if item in my_set:  # Fast
    ...

# Use deque for queue operations
from collections import deque
queue = deque()
queue.append(item)   # O(1)
queue.popleft()      # O(1), vs list.pop(0) which is O(n)

# Use defaultdict to avoid key checks
from collections import defaultdict
counter = defaultdict(int)
counter[key] += 1  # No need to check if the key exists
```

## GIL (Global Interpreter Lock) Considerations

### CPU-Bound Work
```python
# Use multiprocessing for CPU-bound tasks
from multiprocessing import Pool

def cpu_intensive_task(data):
    # Heavy computation
    return result

with Pool(processes=4) as pool:
    results = pool.map(cpu_intensive_task, data_list)
```

### I/O-Bound Work
```python
# Use asyncio or threading for I/O-bound tasks
import asyncio

async def io_bound_task(url):
    # Network I/O, file I/O
    return result

results = await asyncio.gather(*[io_bound_task(url) for url in urls])
```
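
The threading half of "asyncio or threading" can be sketched with a thread pool; the function name and placeholder workload here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def io_bound_task(url):
    # Real code would perform network or file I/O here; blocking I/O
    # releases the GIL, so threads overlap their waiting time.
    return len(url)  # placeholder work

urls = ["https://example.com/a", "https://example.com/b"]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(io_bound_task, urls))
```

`ThreadPoolExecutor` keeps the code synchronous-looking, which makes it an easy retrofit when a full asyncio rewrite is not worth it.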

## Common Python Anti-Patterns

### String Concatenation
```python
# Bad: O(n²) for n strings
result = ""
for s in strings:
    result += s

# Good: O(n)
result = "".join(strings)
```

### Unnecessary Lambda
```python
# Bad: Extra function call overhead
sorted_items = sorted(items, key=lambda x: x.value)

# Good: Direct attribute access
from operator import attrgetter
sorted_items = sorted(items, key=attrgetter('value'))
```

### Loop-Invariant Code
```python
# Bad: Repeated calculation in the loop
for item in items:
    expensive_result = expensive_function()
    process(item, expensive_result)

# Good: Calculate once, outside the loop
expensive_result = expensive_function()
for item in items:
    process(item, expensive_result)
```

## Performance Measurement

### Tracemalloc for Memory Tracking
```python
import tracemalloc

# Start tracking
tracemalloc.start()

# Your code here
data = [i for i in range(1000000)]

# Get memory usage
current, peak = tracemalloc.get_traced_memory()
print(f"Current: {current / 1024 / 1024:.2f} MB")
print(f"Peak: {peak / 1024 / 1024:.2f} MB")

tracemalloc.stop()
```

### Context Manager for Timing
```python
import time
from contextlib import contextmanager

@contextmanager
def timer(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        # finally ensures the timing is reported even if the body raises
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.4f}s")

# Usage
with timer("Database query"):
    results = db.query(...)
```

## Database Optimization (Python-Specific)

### SQLAlchemy Best Practices
```python
# Bad: N+1 queries
for user in session.query(User).all():
    print(user.profile.bio)  # Separate query for each user

# Good: Eager loading
from sqlalchemy.orm import joinedload

users = session.query(User).options(
    joinedload(User.profile)
).all()

# Good: Batch operations
session.bulk_insert_mappings(User, user_dicts)
session.commit()
```

## Caching Strategies

### Function Caching
```python
from functools import lru_cache, cache

# LRU cache with a size limit
@lru_cache(maxsize=128)
def expensive_computation(n):
    # Heavy computation
    return result

# Unbounded cache (Python 3.9+)
@cache
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Manual cache with expiration
from cachetools import TTLCache
ttl_cache = TTLCache(maxsize=100, ttl=300)  # Entries expire after 5 minutes
```

## Performance Testing

### pytest-benchmark
```python
def test_processing_performance(benchmark):
    # The benchmark fixture handles iterations automatically
    result = benchmark(process_data, large_dataset)
    assert result is not None

# Fine-grained control over rounds and iterations
def test_against_baseline(benchmark):
    benchmark.pedantic(
        process_data,
        args=(dataset,),
        iterations=10,
        rounds=100,
    )
```

### Load Testing with Locust
```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def load_homepage(self):
        self.client.get("/")

    @task(3)  # Weighted 3x more likely than the homepage task
    def load_api(self):
        self.client.get("/api/data")
```

## Performance Checklist

**Before Optimizing:**
- [ ] Profile to identify actual bottlenecks (don't guess!)
- [ ] Measure baseline performance
- [ ] Set performance targets

**Python-Specific Optimizations:**
- [ ] Use generators for large datasets
- [ ] Replace loops with list comprehensions where appropriate
- [ ] Use appropriate data structures (set, deque, defaultdict)
- [ ] Implement caching with @lru_cache or @cache
- [ ] Use async/await for I/O-bound operations
- [ ] Use multiprocessing for CPU-bound operations
- [ ] Avoid string concatenation in loops
- [ ] Minimize attribute lookups in hot loops
- [ ] Use `__slots__` for classes with many instances
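
The `__slots__` item can be illustrated briefly; the classes here are hypothetical examples:

```python
class PointDict:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointSlots:
    # Declaring __slots__ removes the per-instance __dict__,
    # cutting memory use when many instances exist.
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = PointSlots(1, 2)
print(hasattr(p, "__dict__"))               # slotted instances carry no dict
print(hasattr(PointDict(1, 2), "__dict__"))
```

The trade-off: slotted classes cannot gain attributes dynamically, so reserve this for stable, high-volume value objects.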

**After Optimizing:**
- [ ] Re-profile to verify improvements
- [ ] Check memory usage hasn't increased significantly
- [ ] Ensure code readability is maintained
- [ ] Add performance regression tests

## Tools and Libraries

**Profiling:**
- `cProfile` - Built-in execution profiler
- `py-spy` - Sampling profiler without code changes
- `memory_profiler` - Memory usage line-by-line
- `memray` - Modern memory profiler with flamegraphs

**Performance Testing:**
- `pytest-benchmark` - Benchmark tests
- `locust` - Load testing framework
- `hyperfine` - Command-line benchmarking

**Optimization:**
- `numpy` - Vectorized operations for numerical data
- `numba` - JIT compilation for numerical functions
- `cython` - Compile Python to C for speed

---

*Python-specific performance optimization with profiling tools and patterns*

382
skills/optimizing-performance/languages/RUST.md
Normal file
@@ -0,0 +1,382 @@
# Rust Performance Optimization

**Load this file when:** Optimizing performance in Rust projects

## Profiling Tools

### Benchmarking with Criterion
```toml
# Add to Cargo.toml
[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "my_benchmark"
harness = false
```

```bash
# Run benchmarks
cargo bench

# Compare against a saved baseline (flag is passed to the bench binary)
cargo bench -- --baseline master
```

### CPU Profiling
```bash
# perf (Linux)
cargo build --release
perf record --call-graph dwarf ./target/release/myapp
perf report

# Instruments (macOS)
cargo instruments --release --template "Time Profiler"

# cargo-flamegraph
cargo install flamegraph
cargo flamegraph

# samply (cross-platform)
cargo install samply
samply record ./target/release/myapp
```

### Memory Profiling
```bash
# valgrind (memory leaks, cache performance)
cargo build
valgrind --tool=massif ./target/debug/myapp

# dhat (heap profiling)
# Add the dhat crate to the project

# cargo-bloat (binary size analysis)
cargo install cargo-bloat
cargo bloat --release
```

## Zero-Cost Abstractions

### Avoiding Unnecessary Allocations
```rust
// Bad: Takes ownership, forcing callers to give up (or clone) a String
fn process_string(s: String) -> String {
    s.to_uppercase()
}

// Good: Borrows; callers can pass any &str without allocating
fn process_str(s: &str) -> String {
    s.to_uppercase()
}

// Best: In-place modification where possible
fn process_string_mut(s: &mut String) {
    *s = s.to_uppercase();
}
```

### Stack vs Heap Allocation
```rust
// Stack: Fast, size known at compile time
let numbers = [1, 2, 3, 4, 5];

// Heap: Flexible, runtime-sized data
let numbers = vec![1, 2, 3, 4, 5];

// Use Box<[T]> for fixed-size heap data (smaller than Vec)
let numbers: Box<[i32]> = vec![1, 2, 3, 4, 5].into_boxed_slice();
```

### Iterator Chains vs For Loops
```rust
// Good: Zero-cost iterator chains (compile to efficient code)
let sum: i32 = numbers
    .iter()
    .filter(|&&n| n > 0)
    .map(|&n| n * 2)
    .sum();

// Also good: Manual loop (similar performance)
let mut sum = 0;
for &n in numbers.iter() {
    if n > 0 {
        sum += n * 2;
    }
}

// Choose iterators for readability, loops for complex logic
```

## Compilation Optimizations

### Release Profile Tuning
```toml
[profile.release]
opt-level = 3     # Maximum optimization
lto = "fat"       # Link-time optimization
codegen-units = 1 # Better optimization, slower compile
strip = true      # Strip symbols from the binary
panic = "abort"   # Smaller binary, no stack unwinding

[profile.release-with-debug]
inherits = "release"
debug = true      # Keep debug symbols for profiling
```

### Target CPU Features
```bash
# Use native CPU features
RUSTFLAGS="-C target-cpu=native" cargo build --release
```

```toml
# Or in .cargo/config.toml
[build]
rustflags = ["-C", "target-cpu=native"]
```

## Memory Layout Optimization

### Struct Field Ordering
```rust
// With #[repr(C)], fields are laid out in declaration order,
// so ordering matters.

// Bad: Wasted padding (24 bytes)
#[repr(C)]
struct BadLayout {
    a: u8,  // 1 byte + 7 bytes padding
    b: u64, // 8 bytes
    c: u8,  // 1 byte + 7 bytes padding
}

// Good: Minimal padding (16 bytes)
#[repr(C)]
struct GoodLayout {
    b: u64, // 8 bytes
    a: u8,  // 1 byte
    c: u8,  // 1 byte + 6 bytes padding
}

// Note: the default Rust repr is free to reorder fields for you;
// use #[repr(C)] only when a fixed, declaration-order layout is required.
```

### Enum Optimization
```rust
// An enum is as large as its largest variant
enum Large {
    Small(u8),
    Big([u8; 1000]), // The entire enum is 1000+ bytes!
}

// Better: Box large variants
enum Optimized {
    Small(u8),
    Big(Box<[u8; 1000]>), // The enum is now pointer-sized
}
```

## Concurrency Patterns

### Using Rayon for Data Parallelism
```rust
use rayon::prelude::*;

// Sequential
let sum: i32 = data.iter().map(|x| expensive(x)).sum();

// Parallel (automatic work stealing)
let sum: i32 = data.par_iter().map(|x| expensive(x)).sum();
```

### Async Runtime Optimization
```rust
// tokio - For I/O-heavy workloads
#[tokio::main(flavor = "multi_thread", worker_threads = 4)]
async fn main() {
    // Async I/O operations
}

// async-std - Alternative runtime
// Choose based on ecosystem compatibility
```

## Common Rust Performance Patterns

### String Handling
```rust
// Avoid unnecessary clones
// Bad
fn process(s: String) -> String {
    let upper = s.clone().to_uppercase();
    upper
}

// Good
fn process(s: &str) -> String {
    s.to_uppercase()
}

// Use Cow for conditional cloning
use std::borrow::Cow;

fn maybe_uppercase<'a>(s: &'a str, uppercase: bool) -> Cow<'a, str> {
    if uppercase {
        Cow::Owned(s.to_uppercase())
    } else {
        Cow::Borrowed(s)
    }
}
```

### Collection Preallocation
```rust
// Bad: Multiple reallocations as the vector grows
let mut vec = Vec::new();
for i in 0..1000 {
    vec.push(i);
}

// Good: Single allocation up front
let mut vec = Vec::with_capacity(1000);
for i in 0..1000 {
    vec.push(i);
}

// Best: collect() preallocates using the iterator's size_hint
let vec: Vec<_> = (0..1000).collect();
```

### Minimize Clones
```rust
// Bad: Unnecessary clones in a loop
for item in &items {
    let owned = item.clone();
    process(owned);
}

// Good: Borrow when possible
for item in &items {
    process_borrowed(item);
}

// Use Rc/Arc only when shared ownership is actually needed
use std::rc::Rc;
let shared = Rc::new(expensive_data);
let clone1 = Rc::clone(&shared); // Cheap pointer clone
```

## Performance Anti-Patterns

### Unnecessary Dynamic Dispatch
```rust
// Bad: Dynamic dispatch overhead
fn process(items: &[Box<dyn Trait>]) {
    for item in items {
        item.method(); // Virtual call
    }
}

// Good: Static dispatch via generics
fn process<T: Trait>(items: &[T]) {
    for item in items {
        item.method(); // Direct call, can be inlined
    }
}
```

### Lock Contention
```rust
// Bad: Holding the lock during an expensive operation
let data = mutex.lock().unwrap();
let result = expensive_computation(&data);
drop(data);

// Good: Release the lock quickly, then work on a copy
let cloned = {
    let data = mutex.lock().unwrap();
    data.clone()
};
let result = expensive_computation(&cloned);
```

## Benchmarking with Criterion

### Basic Benchmark
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| {
        b.iter(|| fibonacci(black_box(20)))
    });
}

criterion_group!(benches, fibonacci_benchmark);
criterion_main!(benches);
```

### Parameterized Benchmarks
```rust
use criterion::{black_box, BenchmarkId, Criterion};

fn bench_sizes(c: &mut Criterion) {
    let mut group = c.benchmark_group("process");

    for size in [10, 100, 1000, 10000].iter() {
        group.bench_with_input(
            BenchmarkId::from_parameter(size),
            size,
            |b, &size| {
                b.iter(|| process_data(black_box(size)))
            },
        );
    }

    group.finish();
}
```

## Performance Checklist

**Before Optimizing:**
- [ ] Profile with a release build to identify bottlenecks
- [ ] Measure a baseline with criterion benchmarks
- [ ] Use cargo-flamegraph to visualize hot paths

**Rust-Specific Optimizations:**
- [ ] Enable LTO in the release profile
- [ ] Use target-cpu=native for CPU-specific features
- [ ] Preallocate collections with `with_capacity`
- [ ] Prefer borrowing (&T) over owned (T) in APIs
- [ ] Use iterators over manual loops
- [ ] Minimize clones; use Rc/Arc only when needed
- [ ] Order struct fields by size (largest first)
- [ ] Box large enum variants
- [ ] Use rayon for CPU-bound parallelism
- [ ] Avoid unnecessary dynamic dispatch

**After Optimizing:**
- [ ] Re-benchmark to verify improvements
- [ ] Check binary size with cargo-bloat
- [ ] Profile memory with valgrind/dhat
- [ ] Add regression tests with criterion baselines

## Tools and Crates

**Profiling:**
- `criterion` - Statistical benchmarking
- `flamegraph` - Flamegraph generation
- `cargo-instruments` - macOS profiling
- `perf` - Linux performance analysis
- `dhat` - Heap profiling

**Optimization:**
- `rayon` - Data parallelism
- `tokio` / `async-std` - Async runtimes
- `parking_lot` - Faster mutex/rwlock
- `smallvec` - Stack-allocated vectors
- `once_cell` - Lazy static initialization

**Analysis:**
- `cargo-bloat` - Binary size analysis
- `cargo-udeps` - Find unused dependencies
- `twiggy` - Code size profiler

---

*Rust-specific performance optimization with zero-cost abstractions and profiling tools*

995
skills/reviewing-and-shipping/AGENTS.md
Normal file
@@ -0,0 +1,995 @@
# Multi-Agent Review Strategies

This file describes how to coordinate multiple specialized agents for comprehensive code review and quality assurance.

## Agent Overview

### Available Agents for Review

```yaml
workflow-coordinator:
  role: "Pre-review validation and workflow state management"
  use_first: true
  validates:
    - Implementation phase completed
    - All specification tasks done
    - Tests passing before review
    - Ready for review/completion phase
  coordinates: "Transition from implementation to review"
  criticality: MANDATORY

refactorer:
  role: "Code quality and style review"
  specializes:
    - Code readability and clarity
    - SOLID principles compliance
    - Design pattern usage
    - Code smell detection
    - Complexity reduction
  focus: "Making code maintainable and clean"

security:
  role: "Security vulnerability review"
  specializes:
    - Authentication/authorization review
    - Input validation checking
    - Vulnerability scanning
    - Encryption implementation
    - Security best practices
  focus: "Making code secure"

qa:
  role: "Test quality and coverage review"
  specializes:
    - Test coverage analysis
    - Test quality assessment
    - Edge case identification
    - Mock appropriateness
    - Performance testing
  focus: "Making code well-tested"

implementer:
  role: "Documentation and feature completeness"
  specializes:
    - Documentation completeness
    - API documentation review
    - Code comment quality
    - Example code validation
    - Feature implementation gaps
  focus: "Making code documented and complete"

architect:
  role: "Architecture and design review"
  specializes:
    - Component boundary validation
    - Dependency direction checking
    - Abstraction level assessment
    - Scalability evaluation
    - Design pattern application
  focus: "Making code architecturally sound"
  when_needed: "Major changes, new components, refactoring"
```

---

## Agent Selection Rules

### MANDATORY: Workflow Coordinator First

**Always Start with workflow-coordinator:**

```yaml
Pre-Review Protocol:
  1. ALWAYS use the workflow-coordinator agent first
  2. workflow-coordinator validates:
     - Implementation phase is complete
     - All tasks in the specification are done
     - Tests are passing
     - No blocking issues remain
  3. If validation fails:
     - Do NOT proceed to review
     - Report incomplete work to the user
     - Guide the user to complete the implementation
  4. If validation passes:
     - Proceed to multi-agent review

NEVER skip workflow-coordinator validation!
```
|
||||
|
||||
### Task-Based Agent Selection
|
||||
|
||||
**Use this matrix to determine which agents to invoke:**
|
||||
|
||||
```yaml
|
||||
Authentication/Authorization Feature:
|
||||
agents:
|
||||
- security: Security requirements and review (primary)
|
||||
- refactorer: Code quality and structure
|
||||
- qa: Security and integration testing
|
||||
- implementer: Documentation completeness
|
||||
reason: Security-critical feature needs security focus
|
||||
|
||||
API Development:
|
||||
agents:
|
||||
- refactorer: Code structure and patterns (primary)
|
||||
- implementer: API documentation
|
||||
- qa: API testing and validation
|
||||
reason: API quality and documentation critical
|
||||
|
||||
Performance Optimization:
|
||||
agents:
|
||||
- refactorer: Code efficiency review (primary)
|
||||
- qa: Performance testing validation
|
||||
- architect: Architecture implications (if major)
|
||||
reason: Performance changes need quality + testing
|
||||
|
||||
Security Fix:
|
||||
agents:
|
||||
- security: Security fix validation (primary)
|
||||
- qa: Security test coverage
|
||||
- implementer: Security documentation
|
||||
reason: Security fixes need security expert review
|
||||
|
||||
Refactoring:
|
||||
agents:
|
||||
- refactorer: Code quality improvements (primary)
|
||||
- architect: Design pattern compliance (if structural)
|
||||
- qa: No regression validation
|
||||
reason: Refactoring needs quality focus + safety
|
||||
|
||||
Bug Fix:
|
||||
agents:
|
||||
- refactorer: Code quality of fix (primary)
|
||||
- qa: Regression test addition
|
||||
reason: Simple bug fixes need basic review
|
||||
|
||||
Documentation Update:
|
||||
agents:
|
||||
- implementer: Documentation quality (primary)
|
||||
reason: Documentation changes need content review only
|
||||
|
||||
New Feature (Standard):
|
||||
agents:
|
||||
- refactorer: Code quality and structure
|
||||
- security: Security implications
|
||||
- qa: Test coverage and quality
|
||||
- implementer: Documentation completeness
|
||||
reason: Standard features need comprehensive review
|
||||
|
||||
Major System Change:
|
||||
agents:
|
||||
- architect: Architecture validation (primary)
|
||||
- refactorer: Code quality review
|
||||
- security: Security implications
|
||||
- qa: Comprehensive testing
|
||||
- implementer: Documentation update
|
||||
reason: Major changes need all-hands review
|
||||
```
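The matrix above can be sketched as a plain lookup table. The keys, agent names, and default below mirror the matrix but are illustrative assumptions, not a real API:

```python
# Hypothetical lookup table mirroring the selection matrix above.
# Change-type keys and the default are assumptions for illustration.
REVIEW_MATRIX = {
    "auth_feature": ["security", "refactorer", "qa", "implementer"],
    "api_development": ["refactorer", "implementer", "qa"],
    "performance": ["refactorer", "qa"],
    "security_fix": ["security", "qa", "implementer"],
    "refactoring": ["refactorer", "qa"],
    "bug_fix": ["refactorer", "qa"],
    "docs_update": ["implementer"],
    "new_feature": ["refactorer", "security", "qa", "implementer"],
    "major_change": ["architect", "refactorer", "security", "qa", "implementer"],
}


def select_agents(change_type: str) -> list[str]:
    """Return the review agents for a change type, defaulting to a full review."""
    return REVIEW_MATRIX.get(change_type, REVIEW_MATRIX["new_feature"])
```

Falling back to the full new-feature set for unknown change types errs on the side of over-reviewing rather than under-reviewing.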

---

## Agent Coordination Patterns

### Pattern 1: Parallel Review (Standard)

**When to Use:** Most feature reviews where agents can work independently.

```yaml
Parallel Review Pattern:
  Spawn 4 agents simultaneously:
    - refactorer: Review code quality
    - security: Review security
    - qa: Review testing
    - implementer: Review documentation

  Each agent focuses on their domain independently

  Wait for all agents to complete

  Consolidate results into unified review summary

  Advantages:
    - Fast (all reviews happen simultaneously)
    - Comprehensive (all domains covered)
    - Independent (no agent blocking others)

  Time: ~5-10 minutes
```
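The parallel pattern above amounts to fan-out/fan-in. A minimal sketch with `asyncio`, where `run_agent` is a stand-in assumption for whatever mechanism actually spawns an agent:

```python
import asyncio


async def run_agent(name: str, target: str) -> dict:
    """Stand-in for dispatching one review agent; returns its report."""
    await asyncio.sleep(0)  # placeholder for the real review call
    return {"agent": name, "target": target, "findings": []}


async def parallel_review(target: str) -> list[dict]:
    """Fan out all four domain reviews at once, fan in when all finish."""
    agents = ["refactorer", "security", "qa", "implementer"]
    return await asyncio.gather(*(run_agent(a, target) for a in agents))


results = asyncio.run(parallel_review("src/auth/"))
```

Because `asyncio.gather` waits for every task, consolidation only starts once all four reports exist, matching the "wait for all agents to complete" step.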

**Example: New Feature Review**

```yaml
Review Authentication Feature:

  Spawn in Parallel:
    Use the refactorer agent to:
      - Review code structure and organization
      - Check SOLID principles compliance
      - Identify code smells
      - Suggest improvements

    Use the security agent to:
      - Review authentication logic
      - Check password hashing
      - Validate token handling
      - Identify security risks

    Use the qa agent to:
      - Analyze test coverage
      - Check edge case handling
      - Validate test quality
      - Identify missing tests

    Use the implementer agent to:
      - Review API documentation
      - Check code comments
      - Validate examples
      - Identify doc gaps

  Wait for All Completions

  Consolidate:
    - Combine all agent findings
    - Identify common themes
    - Prioritize issues
    - Generate unified review summary
```

### Pattern 2: Sequential Review with Validation

**When to Use:** Critical features where one review informs the next.

```yaml
Sequential Review Pattern:
  Step 1: First agent reviews
    → Wait for completion
    → Analyze findings

  Step 2: Second agent reviews (builds on first)
    → Wait for completion
    → Analyze findings

  Step 3: Third agent reviews (builds on previous)
    → Wait for completion
    → Final analysis

  Advantages:
    - Each review informs the next
    - Can adjust focus based on findings
    - Deeper analysis possible

  Time: ~15-20 minutes
```
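The sequential pattern can be sketched as a fold: each agent receives the accumulated findings of the agents before it. `review_step` is a stand-in assumption for the real agent call:

```python
def review_step(agent: str, context: list[dict]) -> dict:
    """Stand-in for one agent review that sees all earlier reports."""
    return {"agent": agent, "saw_context": len(context), "findings": []}


def sequential_review(agents: list[str]) -> list[dict]:
    """Run agents in order, handing each the reports produced so far."""
    context: list[dict] = []
    for agent in agents:
        # The agent is invoked with the current context, then its report
        # is appended so the next agent can build on it.
        context.append(review_step(agent, context))
    return context


reports = sequential_review(["security", "refactorer", "qa", "implementer"])
```

The explicit `context` list is what distinguishes this from the parallel pattern: later reviews can adjust focus based on earlier findings, at the cost of total wall-clock time.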

**Example: Security-Critical Feature Review**

```yaml
Review Security Feature:

  Step 1: Use the security agent to:
    - Deep security analysis
    - Identify vulnerabilities
    - Define security requirements
    - Assess risk level

  Output: Security requirements document + vulnerability report

  Step 2: Use the refactorer agent to:
    - Review code with security context
    - Check if vulnerabilities addressed
    - Ensure secure coding patterns
    - Validate security requirements met

  Context: Security agent's findings
  Output: Code quality + security compliance report

  Step 3: Use the qa agent to:
    - Review tests with security focus
    - Ensure vulnerabilities tested
    - Check security edge cases
    - Validate security test coverage

  Context: Security requirements + code review
  Output: Test coverage + security testing report

  Step 4: Use the implementer agent to:
    - Document security implementation
    - Document security tests
    - Add security examples
    - Document threat model

  Context: All previous reviews
  Output: Documentation completeness report
```

### Pattern 3: Iterative Review with Fixing

**When to Use:** When issues are expected and fixes are needed during review.

```yaml
Iterative Review Pattern:
  Loop:
    1. Agent reviews and identifies issues
    2. Same agent (or another) fixes issues
    3. Re-validate fixed issues
    4. If new issues: repeat
    5. If clean: proceed to next agent

  Advantages:
    - Issues fixed during review
    - Continuous improvement
    - Final review is clean

  Time: ~20-30 minutes (depends on issues)
```
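The review-fix-revalidate loop above can be sketched as a bounded loop; `find_issues` and `fix_issues` are stand-in assumptions for the agent calls, and the iteration cap guards against a fix cycle that never converges:

```python
def iterative_review(find_issues, fix_issues, max_rounds: int = 5) -> int:
    """Review, fix, and re-validate until no issues remain or the cap is hit.

    Returns the number of fix rounds that were needed.
    """
    rounds = 0
    while rounds < max_rounds:
        issues = find_issues()       # agent reviews and identifies issues
        if not issues:               # clean: proceed to the next agent
            break
        fix_issues(issues)           # same (or another) agent fixes them
        rounds += 1                  # re-validation happens on the next pass
    return rounds


# Simulated backlog: two rounds find issues, the third comes back clean.
backlog = [["long function", "duplicate code"], ["missing docstring"], []]
rounds = iterative_review(lambda: backlog.pop(0), lambda issues: None)
```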

**Example: Refactoring Review with Fixes**

```yaml
Review Refactored Code:

  Iteration 1 - Code Quality:
    Use the refactorer agent to:
      - Review code structure
      - Identify 3 issues:
        • Function too long (85 lines)
        • Duplicate code in 2 places
        • Complex nested conditionals

    Use the refactorer agent to:
      - Break long function into 4 smaller ones
      - Extract duplicate code to shared utility
      - Flatten nested conditionals

    Validate: All issues fixed ✅

  Iteration 2 - Testing:
    Use the qa agent to:
      - Review test coverage
      - Identify 2 issues:
        • New functions not tested
        • Edge case missing

    Use the qa agent to:
      - Add tests for new functions
      - Add edge case test

    Validate: Coverage now 89% ✅

  Iteration 3 - Documentation:
    Use the implementer agent to:
      - Review documentation
      - Identify 1 issue:
        • Refactored functions missing docstrings

    Use the implementer agent to:
      - Add docstrings to all new functions
      - Update examples

    Validate: All documented ✅

  Final: All issues addressed, ready to ship
```

### Pattern 4: Focused Review (Subset)

**When to Use:** Small changes or specific review needs.

```yaml
Focused Review Pattern:
  Select 1-2 agents based on change type:
    - Bug fix → refactorer + qa
    - Docs update → implementer only
    - Security fix → security + qa
    - Performance → refactorer + qa

  Only review relevant aspects

  Skip unnecessary reviews

  Advantages:
    - Fast (minimal agents)
    - Focused (relevant only)
    - Efficient (no wasted effort)

  Time: ~3-5 minutes
```

**Example: Bug Fix Review**

```yaml
Review Bug Fix:

  Use the refactorer agent to:
    - Review fix code quality
    - Check if fix is clean
    - Ensure no new issues introduced
    - Validate fix approach

  Use the qa agent to:
    - Verify regression test added
    - Check test quality
    - Ensure bug scenario covered
    - Validate no other tests broken

  Skip:
    - security (not security-related)
    - implementer (no doc changes)
    - architect (no design changes)

  Result: Fast, focused review in ~5 minutes
```

---

## Review Aspect Coordination

### Code Quality Review (refactorer)

**Review Focus:**

```yaml
Readability:
  - Variable/function names clear
  - Code organization logical
  - Comments appropriate
  - Formatting consistent

Design:
  - DRY principle applied
  - SOLID principles followed
  - Abstractions appropriate
  - Patterns used correctly

Maintainability:
  - Functions focused and small
  - Complexity low
  - Dependencies minimal
  - No code smells

Consistency:
  - Follows codebase conventions
  - Naming patterns consistent
  - Error handling uniform
  - Style guide compliance
```

**Review Output Format:**

```yaml
Code Quality Review by refactorer:

✅ Strengths:
  • Clean separation of concerns in auth module
  • Consistent error handling with custom exceptions
  • Good use of dependency injection pattern
  • Function sizes appropriate (avg 25 lines)

⚠️ Suggestions:
  • Consider extracting UserValidator to separate class
  • Could simplify nested conditionals in authenticate()
  • Opportunity to cache user lookups for performance
  • Some variable names could be more descriptive (e.g., 'data' → 'user_data')

🚨 Required Fixes:
  • None

Complexity Metrics:
  • Average cyclomatic complexity: 3.2 (target: <5) ✅
  • Max function length: 42 lines (target: <50) ✅
  • Duplicate code: 0.8% (target: <2%) ✅
```

### Security Review (security)

**Review Focus:**

```yaml
Authentication:
  - Password storage secure (hashing)
  - Token validation robust
  - Session management safe
  - MFA properly implemented

Authorization:
  - Permission checks present
  - RBAC correctly implemented
  - Resource ownership validated
  - No privilege escalation

Input Validation:
  - All inputs sanitized
  - SQL injection prevented
  - XSS prevented
  - CSRF protection active

Data Protection:
  - Sensitive data encrypted
  - Secure communication (HTTPS)
  - No secrets in code
  - PII handling compliant

Vulnerabilities:
  - No known CVEs in deps
  - No hardcoded credentials
  - No insecure algorithms
  - No information leakage
```

**Review Output Format:**

```yaml
Security Review by security:

✅ Strengths:
  • Password hashing uses bcrypt with cost 12 (recommended)
  • JWT validation includes expiry, signature, and issuer checks
  • Input sanitization comprehensive across all endpoints
  • No hardcoded secrets or credentials found

⚠️ Suggestions:
  • Consider adding rate limiting to login endpoint (prevent brute force)
  • Add logging for failed authentication attempts (security monitoring)
  • Consider implementing password complexity requirements
  • Could add request signing for critical API operations

🚨 Required Fixes:
  • None - all critical security measures in place

Vulnerability Scan:
  • Dependencies: 0 critical, 0 high, 1 low (acceptable)
  • Code: No security vulnerabilities detected
  • Secrets: No hardcoded secrets found
```

### Test Coverage Review (qa)

**Review Focus:**

```yaml
Coverage Metrics:
  - Overall coverage ≥80%
  - Critical paths 100%
  - Edge cases covered
  - Error paths tested

Test Quality:
  - Assertions meaningful
  - Test names descriptive
  - Tests isolated
  - No flaky tests
  - Mocks appropriate

Test Types:
  - Unit tests for logic
  - Integration for flows
  - E2E for critical paths
  - Performance if needed

Test Organization:
  - Clear structure
  - Good fixtures
  - Helper functions
  - Easy to maintain
```

**Review Output Format:**

```yaml
Test Coverage Review by qa:

✅ Strengths:
  • Coverage at 87% (target: 80%) - exceeds requirement ✅
  • Critical auth paths 100% covered
  • Good edge case coverage (token expiry, invalid tokens, etc.)
  • Test names clear and descriptive
  • Tests properly isolated with fixtures

⚠️ Suggestions:
  • Could add tests for token refresh edge cases (concurrent requests)
  • Consider adding load tests for auth endpoints (performance validation)
  • Some assertions could be more specific (e.g., check exact error message)
  • Could add property-based tests for token generation

🚨 Required Fixes:
  • None

Coverage Breakdown:
  • src/auth/jwt.py: 92% (23/25 lines)
  • src/auth/service.py: 85% (34/40 lines)
  • src/auth/validators.py: 100% (15/15 lines)

Test Counts:
  • Unit tests: 38 passed
  • Integration tests: 12 passed
  • Security tests: 8 passed
  • Total: 58 tests, 0 failures
```

### Documentation Review (implementer)

**Review Focus:**

```yaml
API Documentation:
  - All endpoints documented
  - Parameters described
  - Responses documented
  - Examples provided

Code Documentation:
  - Functions have docstrings
  - Complex logic explained
  - Public APIs documented
  - Types annotated

Project Documentation:
  - README up to date
  - Setup instructions clear
  - Architecture documented
  - Examples working

Completeness:
  - No missing docs
  - Accurate and current
  - Easy to understand
  - Maintained with code
```

**Review Output Format:**

```yaml
Documentation Review by implementer:

✅ Strengths:
  • API documentation complete with OpenAPI specs
  • All public functions have clear docstrings
  • README updated with authentication section
  • Examples provided and tested

⚠️ Suggestions:
  • Could add more code examples for token refresh flow
  • Consider adding architecture diagram for auth flow
  • Some docstrings could include example usage
  • Could document error codes more explicitly

🚨 Required Fixes:
  • None

Documentation Coverage:
  • Public functions: 100% (all documented)
  • API endpoints: 100% (all in OpenAPI)
  • README: Up to date ✅
  • Examples: 3 working examples included
```

### Architecture Review (architect)

**When to Invoke:**

```yaml
Trigger Architecture Review:
  - New system components added
  - Major refactoring done
  - Cross-module dependencies changed
  - Database schema modified
  - API contract changes
  - Performance-critical features

Skip for:
  - Small bug fixes
  - Documentation updates
  - Minor refactoring
  - Single-file changes
```

**Review Focus:**

```yaml
Component Boundaries:
  - Clear separation of concerns
  - Dependencies flow correctly
  - No circular dependencies
  - Proper abstraction layers

Scalability:
  - Horizontal scaling supported
  - No obvious bottlenecks
  - Database queries optimized
  - Caching appropriate

Maintainability:
  - Easy to extend
  - Easy to test
  - Low coupling
  - High cohesion

Future-Proofing:
  - Flexible design
  - Easy to modify
  - Minimal technical debt
  - Clear upgrade path
```

**Review Output Format:**

```yaml
Architecture Review by architect:

✅ Strengths:
  • Clean layered architecture maintained
  • Auth module well-isolated from other concerns
  • JWT implementation abstracted (easy to swap if needed)
  • Good use of dependency injection for testability

⚠️ Suggestions:
  • Consider event-driven approach for audit logging (scalability)
  • Could abstract session storage interface (flexibility)
  • May want to add caching layer for user lookups (performance)
  • Consider adding rate limiting at architecture level

🚨 Required Fixes:
  • None

Architecture Health:
  • Coupling: Low ✅
  • Cohesion: High ✅
  • Complexity: Manageable ✅
  • Scalability: Good ✅
  • Technical debt: Low ✅
```

---

## Review Consolidation

### Collecting Agent Reviews

**Consolidation Strategy:**

```yaml
Step 1: Collect All Reviews
  - Wait for all agents to complete
  - Gather all review outputs
  - Organize by agent

Step 2: Identify Common Themes
  - Issues mentioned by multiple agents
  - Conflicting suggestions (rare)
  - Critical vs nice-to-have

Step 3: Prioritize Findings
  - 🚨 Required Fixes (blocking)
  - ⚠️ Suggestions (improvements)
  - ✅ Strengths (positive feedback)

Step 4: Generate Unified Summary
  - Overall assessment
  - Critical issues (if any)
  - Key improvements suggested
  - Ready-to-ship decision
```
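The consolidation steps above can be sketched as a severity-bucketing pass; the field names (`findings`, `severity`, `text`) are illustrative assumptions about the report shape, and shipping is blocked whenever any required fix remains:

```python
def consolidate(reviews: list[dict]) -> dict:
    """Bucket every agent finding by severity and derive a ship decision."""
    buckets = {"required": [], "suggestion": [], "strength": []}
    for review in reviews:
        for finding in review["findings"]:
            # Prefix each finding with its agent so themes are traceable.
            buckets[finding["severity"]].append(
                f"[{review['agent']}] {finding['text']}"
            )
    # Critical issues must be 0 to ship.
    buckets["ready_to_ship"] = not buckets["required"]
    return buckets


summary = consolidate([
    {"agent": "security", "findings": [
        {"severity": "suggestion", "text": "add rate limiting"}]},
    {"agent": "qa", "findings": [
        {"severity": "strength", "text": "87% coverage"}]},
])
```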

### Unified Review Summary Template

```yaml
📊 Multi-Agent Review Summary

Code Quality (refactorer): ✅ EXCELLENT
  Strengths: [Top 3 strengths]
  Suggestions: [Top 2-3 suggestions]

Security (security): ✅ SECURE
  Strengths: [Top 3 strengths]
  Suggestions: [Top 2-3 suggestions]

Testing (qa): ✅ WELL-TESTED
  Strengths: [Coverage metrics + top strengths]
  Suggestions: [Top 2-3 suggestions]

Documentation (implementer): ✅ COMPLETE
  Strengths: [Documentation coverage]
  Suggestions: [Top 2-3 suggestions]

Architecture (architect): ✅ SOLID (if included)
  Strengths: [Architecture assessment]
  Suggestions: [Top 2-3 suggestions]

Overall Assessment: ✅ READY TO SHIP
  Critical Issues: [Count] (must be 0 to ship)
  Suggestions: [Count] (nice-to-have improvements)
  Quality Score: [Excellent/Good/Needs Work]

Recommendation: [Ship / Fix Critical Issues / Consider Suggestions]
```

### Example Consolidated Review

```yaml
📊 Multi-Agent Review Summary

Code Quality (refactorer): ✅ EXCELLENT
  Strengths:
    • Clean architecture with excellent separation of concerns
    • Consistent code style and naming conventions
    • Low complexity (avg 3.2, target <5)

  Suggestions:
    • Consider extracting UserValidator class
    • Simplify nested conditionals in authenticate()

Security (security): ✅ SECURE
  Strengths:
    • Robust bcrypt password hashing (cost 12)
    • Comprehensive JWT validation
    • No hardcoded secrets or credentials

  Suggestions:
    • Add rate limiting to prevent brute force
    • Add security event logging

Testing (qa): ✅ WELL-TESTED
  Strengths:
    • 87% coverage (exceeds 80% target)
    • All critical paths fully tested
    • Good edge case coverage

  Suggestions:
    • Add tests for concurrent token refresh
    • Consider load testing auth endpoints

Documentation (implementer): ✅ COMPLETE
  Strengths:
    • All APIs documented with OpenAPI
    • Clear docstrings on all functions
    • README updated with examples

  Suggestions:
    • Add architecture diagram
    • More code examples for token flow

Overall Assessment: ✅ READY TO SHIP
  Critical Issues: 0
  Suggestions: 8 nice-to-have improvements
  Quality Score: Excellent

Recommendation: SHIP - All quality gates passed. Consider addressing suggestions in future iteration.
```

---

## Agent Communication Best Practices

### Clear Context Handoff

**When Chaining Agents:**

```yaml
Good Context Handoff:
  Use the security agent to review authentication
  → Output: Security review with 3 suggestions

  Use the implementer agent to document security measures
  Context: Security review identified token expiry, hashing, validation
  Task: Document these security features in API docs

Bad Context Handoff:
  Use the security agent to review authentication
  Use the implementer agent to add docs
  Problem: implementer doesn't know what security found
```

### Explicit Review Boundaries

**Define What Each Agent Reviews:**

```yaml
Good Boundary Definition:
  Use the refactorer agent to review code quality:
    - Focus: Code structure, naming, patterns
    - Scope: src/auth/ directory only
    - Exclude: Security aspects (security agent will cover)

Bad Boundary Definition:
  Use the refactorer agent to review the code
  Problem: Unclear scope and focus
```

### Validation After Each Review

**Always Validate Agent Output:**

```yaml
Review Validation:
  After agent completes:
    1. Check review is comprehensive
    2. Verify findings are actionable
    3. Ensure no critical issues missed
    4. Validate suggestions are reasonable

  If issues:
    - Re-prompt agent with clarifications
    - Use different agent for second opinion
    - Escalate to user if uncertain
```

---

## Quality Checkpoint Triggers

### Automatic Agent Invocation

**Based on Code Metrics:**

```yaml
High Complexity Detected:
  If cyclomatic complexity >10:
    → Use the refactorer agent to:
      - Analyze complex functions
      - Suggest simplifications
      - Break into smaller functions

Security Patterns Found:
  If authentication/encryption code:
    → Use the security agent to:
      - Review security implementation
      - Validate secure patterns
      - Check for vulnerabilities

Low Test Coverage:
  If coverage <80%:
    → Use the qa agent to:
      - Identify untested code
      - Suggest test cases
      - Improve coverage

Missing Documentation:
  If docstring coverage <90%:
    → Use the implementer agent to:
      - Identify missing docs
      - Generate docstrings
      - Add examples

Circular Dependencies:
  If circular deps detected:
    → Use the architect agent to:
      - Analyze dependency structure
      - Suggest refactoring
      - Break circular references
```
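The metric thresholds above map directly to agent invocations. A minimal sketch, assuming a flat metrics dict whose key names are illustrative (the thresholds mirror the rules above):

```python
def triggered_agents(metrics: dict) -> list[str]:
    """Map metric breaches to the agents that should investigate them."""
    agents = []
    if metrics.get("max_complexity", 0) > 10:       # high complexity
        agents.append("refactorer")
    if metrics.get("touches_security", False):      # auth/encryption code
        agents.append("security")
    if metrics.get("coverage", 100) < 80:           # low test coverage
        agents.append("qa")
    if metrics.get("docstring_coverage", 100) < 90: # missing documentation
        agents.append("implementer")
    if metrics.get("circular_deps", False):         # circular dependencies
        agents.append("architect")
    return agents


needed = triggered_agents({"max_complexity": 12, "coverage": 72})
```

Defaults are chosen so that an absent metric never fires a trigger; a real implementation would compute these values from linting, coverage, and dependency tooling.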

---

## Multi-Agent Review Best Practices

### DO:

```yaml
✅ Best Practices:
  - ALWAYS use workflow-coordinator first
  - Use parallel reviews for speed when possible
  - Provide clear context to each agent
  - Validate each agent's output
  - Consolidate findings into unified summary
  - Focus agents on their expertise areas
  - Skip unnecessary agents for simple changes
  - Use sequential review for critical features
```

### DON'T:

```yaml
❌ Anti-Patterns:
  - Skip workflow-coordinator validation
  - Use all agents for every review (overkill)
  - Let agents review outside their expertise
  - Forget to consolidate findings
  - Accept reviews without validation
  - Chain agents without clear handoff
  - Run sequential when parallel would work
  - Use parallel when sequential needed
```

---

*Comprehensive multi-agent review strategies for quality assurance and code validation*

skills/reviewing-and-shipping/COMMITS.md (new file, 1049 lines; diff suppressed because it is too large)

skills/reviewing-and-shipping/MODES.md (new file, 869 lines):

# Review Modes - Different Review Strategies

This file describes the different review modes available and when to use each one.

## Mode Overview

```yaml
Available Modes:
  full: Complete 5-phase review pipeline (default)
  quick: Fast review for small changes
  commit-only: Generate commits without PR
  validate-only: Quality checks and fixes only
  pr-only: Create PR from existing commits
  analysis: Deep code quality analysis
  archive-spec: Move completed spec to completed/

Mode Selection:
  - Auto-detect based on context
  - User specifies with flags
  - Optimize for common workflows
```
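Auto-detection can be sketched as a few ordered heuristics; the signals and cutoffs below are illustrative assumptions, not the real detection logic, which would inspect the working tree and branch state:

```python
def detect_mode(files_changed: int, docs_only: bool, have_commits: bool) -> str:
    """Pick a review mode from coarse signals about the change."""
    if docs_only or files_changed <= 3:
        return "quick"      # small or docs-only changes
    if have_commits:
        return "pr-only"    # commits already exist, just ship them
    return "full"           # default: complete 5-phase pipeline
```

An explicit user flag (e.g. `/review --quick`) would override whatever this detection returns.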

---

## Full Review Mode (Default)

### Overview

```yaml
Full Review Mode:
  phases: [Validate, Fix, Commit, Review, Ship]
  time: 15-30 minutes
  coverage: Comprehensive
  output: Complete PR with rich context

When to Use:
  - Completed feature ready to ship
  - Major changes need thorough review
  - Want comprehensive quality validation
  - Need multi-agent review insights
  - Creating important PR

When NOT to Use:
  - Small quick fixes (use quick mode)
  - Just need commits (use commit-only)
  - Already have commits (use pr-only)
  - Just checking quality (use validate-only)
```

### Workflow

```yaml
Phase 1: Comprehensive Validation (🔍)
  - Multi-domain quality checks
  - Security vulnerability scanning
  - Test coverage analysis
  - Documentation completeness
  - Quality gate enforcement

Phase 2: Intelligent Auto-Fixing (⚡)
  - Simple issue direct fixes
  - Complex issue agent delegation
  - Parallel fix execution
  - Validation after fixes

Phase 3: Smart Commit Generation (📝)
  - Change analysis and grouping
  - Commit classification
  - Conventional commit format
  - Specification integration

Phase 4: Multi-Agent Review (🤖)
  - refactorer: Code quality review
  - security: Security review
  - qa: Test coverage review
  - implementer: Documentation review
  - architect: Architecture review (if needed)
  - Consolidated review summary

Phase 5: PR Creation & Shipping (🚀)
  - PR title and description generation
  - Quality metrics inclusion
  - Review insights integration
  - Automation setup
  - Specification archiving
```

### Example Usage

```bash
# Command-based
/review

# Conversation-based
"Review my changes and create a PR"
"Ready to ship this feature"
"Comprehensive review of authentication implementation"
```

### Expected Output

```yaml
Output Components:
  1. Quality Validation Report:
     - All quality gates status
     - Issues found and fixed
     - Metrics (coverage, linting, etc.)

  2. Generated Commits:
     - List of commits created
     - Conventional commit format
     - Specification references

  3. Multi-Agent Review Summary:
     - Code quality insights
     - Security assessment
     - Test coverage analysis
     - Documentation completeness
     - Overall recommendation

  4. PR Details:
     - PR number and URL
     - Title and description preview
     - Automation applied (labels, reviewers)
     - Specification archive status

Time: ~15-30 minutes
```

---

## Quick Review Mode

### Overview

```yaml
Quick Review Mode:
  phases: [Basic Validate, Auto-Fix, Simple Commit, Single Review, Basic PR]
  time: 3-5 minutes
  coverage: Essential checks only
  output: Simple PR with basic context

When to Use:
  - Small changes (1-3 files)
  - Documentation updates
  - Minor bug fixes
  - Quick hotfixes
  - Low-risk changes

When NOT to Use:
  - Major features (use full review)
  - Security changes (use full review)
  - Complex refactoring (use full review)
  - Need detailed analysis (use analysis mode)
```

### Workflow

```yaml
Phase 1: Basic Validation (🔍)
  - Linting check only
  - Quick test run
  - No deep analysis
  - Skip: Security scan, coverage analysis

Phase 2: Auto-Fix Only (⚡)
  - Formatting fixes
  - Linting auto-fixes
  - Skip: Agent delegation
  - Skip: Complex fixes

Phase 3: Simple Commit (📝)
  - One commit for all changes
  - Basic conventional format
  - Skip: Intelligent grouping
  - Skip: Complex classification

Phase 4: Single Agent Review (🤖)
  - Use refactorer agent only
  - Quick code quality check
  - Skip: Security, QA, implementer reviews
  - Skip: Consolidated summary

Phase 5: Basic PR (🚀)
  - Simple title and description
  - Basic quality metrics
  - Skip: Detailed review insights
  - Skip: Complex automation
```

### Example Usage

```bash
# Command-based
/review --quick

# Conversation-based
"Quick review for this small fix"
"Fast review, just need to ship docs"
"Simple review for typo fixes"
```

### Expected Output

```yaml
Output Components:
  1. Basic Validation:
     - Tests: ✅ Passed
     - Linting: ✅ Clean

  2. Single Commit:
     - "fix(api): correct typo in error message"

  3. Quick Review:
     - Code quality: ✅ Good
     - No major issues found

  4. Simple PR:
     - PR #124 created
     - Basic description
     - Ready for merge

Time: ~3-5 minutes
```
|
||||
|
||||
---
|
||||
|
||||
## Commit-Only Mode

### Overview

```yaml
Commit-Only Mode:
  phases: [Basic Validate, Auto-Fix, Smart Commit]
  time: 5-10 minutes
  coverage: Commit generation focused
  output: Organized commits, no PR

When to Use:
  - Want organized commits but not ready for PR
  - Working on a long-running branch
  - Need to commit progress
  - Plan to create PR later
  - Want conventional commits without review

When NOT to Use:
  - Ready to ship (use full review)
  - Need quality validation (use validate-only)
  - Already have commits (no need)
```

### Workflow

```yaml
Phase 1: Basic Validation (🔍)
  - Run linting
  - Run tests
  - Basic quality checks
  - Ensure changes compile/run

Phase 2: Simple Auto-Fixing (⚡)
  - Format code
  - Fix simple linting issues
  - Skip: Complex agent fixes

Phase 3: Smart Commit Generation (📝)
  - Analyze all changes
  - Group related changes
  - Classify by type
  - Generate conventional commits
  - Include specification references

Phases Skipped:
  - Multi-agent review
  - PR creation
```
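
The grouping step in Phase 3 can be sketched as follows. This is a minimal illustration, not the actual implementation: it buckets changed files by top-level directory and derives a conventional-commit subject per bucket (the `CHANGES` list and `TYPE_BY_DIR` map are invented examples; real code would inspect `git status --porcelain` and the diff content):

```python
from collections import defaultdict

# Hypothetical changed files as (status, path) pairs.
CHANGES = [
    ("M", "src/auth/jwt.py"),
    ("A", "tests/auth/test_jwt.py"),
    ("M", "docs/auth.md"),
]

# Top-level directory → conventional-commit type (an assumption;
# real classification would also look at the diff itself).
TYPE_BY_DIR = {"src": "feat", "tests": "test", "docs": "docs"}

def group_commits(changes):
    """Bucket paths by top-level directory, one commit per bucket."""
    groups = defaultdict(list)
    for _status, path in changes:
        groups[path.split("/", 1)[0]].append(path)
    commits = []
    for top, paths in groups.items():
        ctype = TYPE_BY_DIR.get(top, "chore")
        parts = paths[0].split("/")
        # Crude scope guess: second path component, extension stripped.
        scope = (parts[1] if len(parts) > 1 else top).split(".")[0]
        commits.append((f"{ctype}({scope}): update {top} files", paths))
    return commits

for subject, paths in group_commits(CHANGES):
    print(subject, paths)
```

Grouping by directory is only one heuristic; grouping by module import graph or by spec task would follow the same shape.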

### Example Usage

```bash
# Command-based
/review --commit-only

# Conversation-based
"Generate commits from my changes"
"Create organized commits but don't make PR yet"
"I want proper commits but I'm not done with the feature"
```

### Expected Output

```yaml
Output Components:
  1. Validation Status:
     - Tests: ✅ Passed
     - Linting: ✅ Clean (auto-fixed)

  2. Generated Commits:
     - 3 commits created:
       • "feat(auth): implement JWT generation"
       • "test(auth): add JWT generation tests"
       • "docs(auth): document JWT implementation"

  3. Summary:
     - Commits created and pushed
     - No PR created (as requested)
     - Ready to continue work or create PR later

Time: ~5-10 minutes
```

---

## Validate-Only Mode

### Overview

```yaml
Validate-Only Mode:
  phases: [Comprehensive Validate, Auto-Fix]
  time: 5-10 minutes
  coverage: Quality checks and fixes
  output: Validation report with fixes

When to Use:
  - Check code quality before committing
  - Want to fix issues without committing
  - Unsure if ready for review
  - Need quality metrics
  - Want to ensure quality gates pass

When NOT to Use:
  - Ready to commit (use commit-only)
  - Ready to ship (use full review)
  - Just need PR (use pr-only)
```

### Workflow

```yaml
Phase 1: Comprehensive Validation (🔍)
  - Multi-domain quality checks
  - Security vulnerability scanning
  - Test coverage analysis
  - Documentation completeness
  - Build validation

Phase 2: Intelligent Auto-Fixing (⚡)
  - Direct fixes for simple issues
  - Agent delegation for complex issues
  - Parallel fix execution
  - Re-validation after fixes

Phases Skipped:
  - Commit generation
  - Multi-agent review
  - PR creation
```
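
Phase 1's gate-running loop can be sketched like this, assuming hypothetical per-language commands (the real command set is project-specific, and the skill picks it per language):

```python
import subprocess

# Hypothetical quality gates; swap in your project's real commands.
GATES = {
    "lint": ["ruff", "check", "src/"],
    "tests": ["pytest", "-q"],
    "types": ["mypy", "src/"],
}

def run_gates(gates, runner=subprocess.run):
    """Run every gate, collecting pass/fail; never stop at first failure."""
    results = {}
    for name, cmd in gates.items():
        try:
            proc = runner(cmd, capture_output=True)
            results[name] = proc.returncode == 0
        except FileNotFoundError:  # tool not installed
            results[name] = None
    return results

def report(results):
    """Render results as the checkmark style used in this document."""
    icons = {True: "✅", False: "❌", None: "⏭"}
    return {name: icons[ok] for name, ok in results.items()}

# Demo with a stubbed runner so the sketch runs anywhere:
class _Passed:
    returncode = 0

results = run_gates(GATES, runner=lambda cmd, capture_output: _Passed())
print(report(results))
```

Running all gates before reporting is what lets the mode emit a single consolidated validation report instead of failing fast.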

### Example Usage

```bash
# Command-based
/review --validate-only

# Conversation-based
"Check if my code passes quality gates"
"Validate and fix issues but don't commit"
"Make sure my changes are good quality"
```

### Expected Output

```yaml
Output Components:
  1. Initial Validation Report:
     Code Quality: ⚠️ 3 issues
       - 2 formatting issues
       - 1 unused import

     Security: ✅ Clean
       - No vulnerabilities

     Testing: ✅ Passed
       - Coverage: 87%

     Documentation: ⚠️ 1 issue
       - 1 missing docstring

  2. Auto-Fix Results:
     - Formatted 2 files
     - Removed unused import
     - Added missing docstring

  3. Final Validation:
     Code Quality: ✅ Clean
     Security: ✅ Clean
     Testing: ✅ Passed
     Documentation: ✅ Complete

Status: ✅ All quality gates passing
Ready to commit when you're ready

Time: ~5-10 minutes
```

---

## PR-Only Mode

### Overview

```yaml
PR-Only Mode:
  phases: [Multi-Agent Review, PR Creation]
  time: 10-15 minutes
  coverage: Review and PR only
  output: PR with review insights

When to Use:
  - Commits already created manually
  - Just need PR creation
  - Want review insights without re-validation
  - Already validated and fixed issues
  - Ready to ship existing commits

When NOT to Use:
  - No commits yet (use commit-only or full)
  - Need quality validation (use validate-only)
  - Need fixes (use full review)
```

### Workflow

```yaml
Phase 1: Verify Commits (✓)
  - Check commits exist
  - Analyze commit history
  - Extract PR context

Phase 2: Multi-Agent Review (🤖)
  - refactorer: Code quality review
  - security: Security review
  - qa: Test coverage review
  - implementer: Documentation review
  - Consolidated review summary

Phase 3: PR Creation (🚀)
  - Extract title from commits
  - Generate comprehensive description
  - Include review insights
  - Set up automation (labels, reviewers)
  - Archive specification
```

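
Phase 3's title extraction can be sketched as a small function that derives a PR title and bullet-list body from conventional commit subjects; the hardcoded `COMMITS` list stands in for real `git log --reverse --format=%s` output:

```python
# Hypothetical commit subjects, oldest first.
COMMITS = [
    "feat(auth): implement JWT generation",
    "test(auth): add JWT tests",
    "docs(auth): document JWT",
]

def pr_from_commits(subjects):
    """Title from the first feat/fix commit; body lists every commit."""
    title = next(
        (s for s in subjects if s.split("(")[0] in ("feat", "fix")),
        subjects[0],
    )
    body = "## Changes\n" + "\n".join(f"- {s}" for s in subjects)
    return title, body

title, body = pr_from_commits(COMMITS)
print(title)
```

Preferring the feat/fix subject as the title is one convention; the real mode enriches the body further with review insights and spec links.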

Phases Skipped:

```yaml
Phases Skipped:
  - Validation
  - Auto-fixing
  - Commit generation
```

### Example Usage

```bash
# Command-based
/review --pr-only

# Conversation-based
"Create PR from my existing commits"
"I already committed, just need the PR"
"Make a PR with review insights"
```

### Expected Output

```yaml
Output Components:
  1. Commit Analysis:
     - Found 3 commits:
       • "feat(auth): implement JWT generation"
       • "test(auth): add JWT tests"
       • "docs(auth): document JWT"

  2. Multi-Agent Review:
     - Code quality: ✅ Excellent
     - Security: ✅ Secure
     - Testing: ✅ Well-tested
     - Documentation: ✅ Complete

  3. PR Created:
     - PR #125: "feat: JWT Authentication"
     - URL: https://github.com/user/repo/pull/125
     - Labels: enhancement, security
     - Reviewers: @security-team
     - Specification: spec-feature-auth-001 archived

Time: ~10-15 minutes
```

---

## Deep Analysis Mode

### Overview

```yaml
Deep Analysis Mode:
  phases: [Comprehensive Validate, Extended Review]
  time: 20-30 minutes
  coverage: In-depth analysis and metrics
  output: Detailed quality report

When to Use:
  - Need comprehensive quality insights
  - Want to understand technical debt
  - Planning refactoring
  - Assessing code health
  - Before major release

When NOT to Use:
  - Just need quick check (use validate-only)
  - Ready to ship (use full review)
  - Simple changes (use quick mode)
```

### Workflow

```yaml
Phase 1: Comprehensive Validation (🔍)
  - All standard quality checks
  - Plus: Complexity analysis
  - Plus: Technical debt assessment
  - Plus: Performance profiling
  - Plus: Architecture health

Phase 2: Extended Multi-Agent Review (🤖)
  - All agents review (refactorer, security, qa, implementer, architect)
  - Plus: Detailed metrics collection
  - Plus: Historical comparison
  - Plus: Trend analysis
  - Plus: Actionable recommendations

Phase 3: Analysis Report Generation (📊)
  - Code quality trends
  - Security posture
  - Test coverage evolution
  - Documentation completeness
  - Architecture health score
  - Technical debt quantification
  - Refactoring opportunities
  - Performance bottlenecks

Phases Skipped:
  - Commit generation
  - PR creation
```
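
The complexity-analysis step can be approximated with the standard library alone. This rough sketch counts branch points per function as a cyclomatic-complexity proxy; dedicated tools are more thorough:

```python
import ast

# Node types that add a decision branch (a simplification of
# cyclomatic complexity; real tools also count comprehensions etc.).
BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def function_complexity(source):
    """Return {function_name: 1 + branch-point count} for a module."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(
                isinstance(child, BRANCHES) for child in ast.walk(node)
            )
            scores[node.name] = 1 + branches
    return scores

sample = """
def simple():
    return 1

def branchy(x):
    if x > 0:
        for i in range(x):
            x -= 1
    return x
"""
print(function_complexity(sample))  # → {'simple': 1, 'branchy': 3}
```

Averaging these per-function scores across the repository gives the kind of "Complexity: 3.2 avg" figure shown in the expected output below.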

### Example Usage

```bash
# Command-based
/review --analysis

# Conversation-based
"Deep analysis of code quality"
"Comprehensive quality report"
"Assess codebase health"
```

### Expected Output

```yaml
Output Components:
  1. Quality Metrics:
     Code Quality:
       - Overall score: 8.5/10
       - Complexity: 3.2 avg (↓ from 3.8)
       - Duplication: 1.2% (↓ from 2.1%)
       - Maintainability: 85/100

     Security:
       - Security score: 9/10
       - Vulnerabilities: 0 critical, 1 low
       - Auth patterns: Excellent
       - Data protection: Strong

     Testing:
       - Coverage: 87% (↑ from 82%)
       - Test quality: 8/10
       - Edge cases: Well covered
       - Performance: No regressions

     Documentation:
       - Completeness: 92%
       - API docs: 100%
       - Code comments: 88%
       - Examples: 3 provided

  2. Trends:
     - Code quality improving ↑
     - Test coverage growing ↑
     - Complexity decreasing ↓
     - Tech debt reducing ↓

  3. Recommendations:
     Refactoring Opportunities:
       - Extract UserValidator class (medium priority)
       - Simplify authenticate() method (low priority)
       - Consider caching layer (enhancement)

     Performance Optimizations:
       - Add database query caching
       - Optimize token validation path

     Security Hardening:
       - Add rate limiting to auth endpoints
       - Implement request signing

     Technical Debt:
       - Total: ~3 days of work
       - High priority: 1 day
       - Medium: 1.5 days
       - Low: 0.5 days

Time: ~20-30 minutes
```

---

## Specification Archiving Mode

### Overview

```yaml
Specification Archiving Mode:
  phases: [Verify Completion, Move Spec, Generate Summary]
  time: 2-3 minutes
  coverage: Specification management
  output: Archived spec with completion summary

When to Use:
  - Specification work complete
  - All tasks and acceptance criteria met
  - PR merged (or ready to merge)
  - Want to archive completed work
  - Clean up active specifications

When NOT to Use:
  - Specification not complete
  - PR not created yet (use full review)
  - Still working on tasks
```

### Workflow

```yaml
Phase 1: Verify Completion (✓)
  - Check all tasks completed
  - Verify acceptance criteria met
  - Confirm quality gates passed
  - Check PR exists (if applicable)

Phase 2: Move Specification (📁)
  - From: .quaestor/specs/active/<spec-id>.md
  - To: .quaestor/specs/completed/<spec-id>.md
  - Update status → "completed"
  - Add completion_date
  - Link PR URL

Phase 3: Generate Archive Summary (📝)
  - What was delivered
  - Key decisions made
  - Lessons learned
  - Performance metrics
  - Completion evidence
```
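
Phase 2 amounts to a file move plus a frontmatter edit. A minimal sketch, assuming specs carry simple `key: value` frontmatter between `---` markers (the real tool may store metadata differently):

```python
from datetime import date
from pathlib import Path

def archive_spec(spec_id, root=Path(".quaestor/specs"), pr_url=None):
    """Move a spec from active/ to completed/ and update its metadata."""
    src = root / "active" / f"{spec_id}.md"
    dst = root / "completed" / f"{spec_id}.md"
    lines = []
    for line in src.read_text().splitlines():
        if line.startswith("status:"):      # assumes one status: line
            line = "status: completed"
        lines.append(line)
    # Insert completion metadata right after the opening "---".
    lines.insert(1, f"completion_date: {date.today().isoformat()}")
    if pr_url:
        lines.insert(2, f"pr: {pr_url}")
    dst.parent.mkdir(parents=True, exist_ok=True)
    dst.write_text("\n".join(lines) + "\n")
    src.unlink()
    return dst

# Demo against a throwaway directory:
import tempfile
root = Path(tempfile.mkdtemp())
(root / "active").mkdir()
(root / "active" / "spec-001.md").write_text("---\nstatus: active\n---\nBody.\n")
archived = archive_spec("spec-001", root=root, pr_url="https://example.com/pull/1")
print(archived.read_text())
```

The move-then-delete shape keeps the operation easy to audit: the completed copy exists before the active copy is removed.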

### Example Usage

```bash
# Command-based
/review --archive-spec spec-feature-auth-001

# Conversation-based
"Archive completed specification spec-feature-auth-001"
"Move spec-feature-auth-001 to completed"
"Mark authentication spec as complete"
```

### Expected Output

```yaml
Output Components:
  1. Verification:
     ✅ All tasks completed (8/8)
     ✅ Acceptance criteria met (5/5)
     ✅ Quality gates passed
     ✅ PR exists (#123)

  2. Archive Action:
     Moved: spec-feature-auth-001.md
     From: .quaestor/specs/active/
     To: .quaestor/specs/completed/
     Status: completed
     Completion Date: 2025-10-19

  3. Completion Summary:
     Delivered:
       - JWT authentication with refresh tokens
       - Comprehensive test suite (87% coverage)
       - API documentation
       - Security review passed

     Key Decisions:
       - JWT over sessions for scalability
       - bcrypt cost factor 12 for security
       - Refresh token rotation every 7 days

     Lessons Learned:
       - Token expiry edge cases need careful testing
       - Rate limiting should be in initial design

     Metrics:
       - Timeline: 3 days (estimated: 5 days) ✅
       - Quality: All gates passed ✅
       - Tests: 58 tests, 87% coverage ✅
       - Security: 0 vulnerabilities ✅

     Links:
       - PR: #123
       - Commits: abc123, def456, ghi789

Time: ~2-3 minutes
```

---

## Mode Comparison Matrix

```yaml
Feature Comparison:

                      Full  Quick  Commit  Validate  PR   Analysis  Archive
  Validation          ✅    ⚡     ⚡      ✅        ❌   ✅        ✅
  Auto-Fixing         ✅    ⚡     ⚡      ✅        ❌   ❌        ❌
  Commit Generation   ✅    ⚡     ✅      ❌        ❌   ❌        ❌
  Multi-Agent Review  ✅    ⚡     ❌      ❌        ✅   ✅✅      ❌
  PR Creation         ✅    ⚡     ❌      ❌        ✅   ❌        ❌
  Deep Analysis       ❌    ❌     ❌      ❌        ❌   ✅        ❌
  Spec Archiving      ✅    ❌     ❌      ❌        ✅   ❌        ✅

Legend:
  ✅ = Full feature
  ⚡ = Simplified version
  ✅✅ = Extended version
  ❌ = Not included

Time Comparison:

  Mode          Time            Best For
  ────────────  ──────────────  ────────────────────────────
  Full          15-30 min       Complete feature shipping
  Quick         3-5 min         Small changes, hotfixes
  Commit        5-10 min        Progress commits
  Validate      5-10 min        Quality check before commit
  PR            10-15 min       PR from existing commits
  Analysis      20-30 min       Deep quality insights
  Archive       2-3 min         Spec completion tracking
```

---

## Mode Selection Guidelines

### Decision Tree

```yaml
Choose Mode Based on Situation:

  Do you have uncommitted changes?
    No → Do you want a PR?
      Yes → Use: pr-only
      No → Use: analysis (for insights)
    Yes → Are you ready to ship?
      Yes → Use: full (comprehensive review + PR)
      No → Do you want to commit?
        Yes → Use: commit-only (commits without PR)
        No → Do you need a quality check?
          Yes → Use: validate-only (check + fix)
          No → Continue working

  Is this a small change (<5 files)?
    Yes → Use: quick (fast review)
    No → Use: full (comprehensive review)

  Do you need detailed metrics?
    Yes → Use: analysis (deep insights)
    No → Use: appropriate mode above

  Is the specification complete?
    Yes → Use: archive-spec (after PR merged)
    No → Continue implementation
```
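
The first branch of the tree above can be encoded directly; a toy sketch using the mode names from this document:

```python
def choose_mode(uncommitted, want_pr=False, ready_to_ship=False,
                want_commit=False, need_quality_check=False):
    """Mirror the first decision-tree branch from the guidelines."""
    if not uncommitted:
        # Nothing to commit: either ship existing commits or analyze.
        return "pr-only" if want_pr else "analysis"
    if ready_to_ship:
        return "full"
    if want_commit:
        return "commit-only"
    if need_quality_check:
        return "validate-only"
    return "continue working"

print(choose_mode(uncommitted=True, ready_to_ship=True))  # → full
```

Encoding the tree as a function makes the precedence explicit: readiness to ship wins over wanting commits, which wins over a bare quality check.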

### Situational Recommendations

```yaml
Situation → Recommended Mode:

  "I finished the feature and want to ship"
    → full: Complete review, commits, PR

  "Quick typo fix in docs"
    → quick: Fast review and simple PR

  "I want to save progress but not done"
    → commit-only: Organized commits, no PR

  "Is my code good quality?"
    → validate-only: Check and fix issues

  "I already committed, need PR"
    → pr-only: Review and PR creation

  "How healthy is this codebase?"
    → analysis: Comprehensive metrics

  "Feature done, PR merged"
    → archive-spec: Move spec to completed/

  "Working on experimental feature"
    → commit-only: Save progress commits

  "About to start refactoring"
    → analysis: Understand current state

  "Hotfix for production"
    → quick: Fast review and ship
```

---

## Combining Modes

### Sequential Mode Usage

```yaml
Common Workflows:

  Development → Validation → Commit → Review → Ship:
    1. During development: validate-only (check quality)
    2. End of day: commit-only (save progress)
    3. Feature complete: full (review and PR)
    4. After merge: archive-spec (archive spec)

  Before Refactoring → During → After:
    1. Before: analysis (understand current state)
    2. During: validate-only (ensure quality)
    3. After: full (review refactoring + PR)

  Long Feature → Progress → Ship:
    1. Daily: commit-only (save progress)
    2. Weekly: validate-only (quality check)
    3. Done: full (comprehensive review + PR)
    4. Merged: archive-spec (archive spec)
```

---

*Comprehensive review mode documentation with clear guidelines for when to use each mode*

1094 skills/reviewing-and-shipping/PR.md (new file; diff suppressed because it is too large)

250 skills/reviewing-and-shipping/SKILL.md (new file)
@@ -0,0 +1,250 @@
---
name: Reviewing and Shipping
description: Validate quality with multi-agent review, auto-fix issues, generate organized commits, and create PRs with rich context. Use after completing features to ensure quality gates pass and ship confidently.
allowed-tools: [Read, Edit, MultiEdit, Bash, Grep, Glob, TodoWrite, Task]
---

# Reviewing and Shipping

I help you ship code confidently: validate quality, fix issues, generate commits, review with agents, and create pull requests.

## When to Use Me

**Review & validate:**
- "Review my changes"
- "Check if code is ready to ship"
- "Validate quality gates"

**Create pull request:**
- "Create a PR"
- "Ship this feature"
- "Make a pull request for spec-feature-001"

**Generate commits:**
- "Generate commits from my changes"
- "Create organized commits"

## Quick Start

**Most common:** Just completed work and want to ship it
```
"Review and ship this feature"
```

I'll automatically:
1. Validate quality (tests, linting, security)
2. Fix any issues
3. Generate organized commits
4. Review with agents
5. Create PR with rich description

**Just need PR:** Already validated and committed
```
"Create a PR for spec-feature-001"
```

I'll skip validation and just create the PR.

## How I Work - Conditional Workflow

I detect what you need and adapt:

### Mode 1: Full Review & Ship (Default)
**When:** "Review my changes", "Ship this"
**Steps:** Validate → Fix → Commit → Review → PR

**Load:** `@WORKFLOW.md` for the complete 5-phase process

---

### Mode 2: Quick Review
**When:** "Quick review", small changes
**Steps:** Basic validation → Fast commits → Simple PR

**Load:** `@MODES.md` for quick mode details

---

### Mode 3: Create PR Only
**When:** "Create a PR", "Make pull request"
**Steps:** Generate PR description from spec/commits → Submit

**Load:** `@PR.md` for PR creation details

---

### Mode 4: Generate Commits Only
**When:** "Generate commits", "Organize my commits"
**Steps:** Analyze changes → Create atomic commits

**Load:** `@COMMITS.md` for commit strategies

---

### Mode 5: Validate Only
**When:** "Validate my code", "Check quality"
**Steps:** Run quality gates → Report results

**Load:** `@WORKFLOW.md` Phase 1

---

### Mode 6: Deep Analysis
**When:** "Analyze code quality", "Review for issues"
**Steps:** Multi-agent review → Detailed report

**Load:** `@AGENTS.md` for review strategies

---

## Progressive Loading Pattern

**Don't load all files!** Only load what's needed for your workflow:

```yaml
User Intent Detection:
  "review my changes" → Load @WORKFLOW.md (full 5-phase)
  "create a PR" → Load @PR.md (PR creation only)
  "generate commits" → Load @COMMITS.md (commit organization)
  "quick review" → Load @MODES.md (mode selection)
  "validate code" → Load @WORKFLOW.md Phase 1 (validation)
```
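
The intent-detection table above can be sketched as naive keyword matching; the real skill's detection is richer, but the routing shape is the same:

```python
# Keyword → supporting file, lifted from the table above.
# Matching order matters: more specific phrases come first.
ROUTES = [
    ("create a pr", "@PR.md"),
    ("generate commits", "@COMMITS.md"),
    ("quick review", "@MODES.md"),
    ("validate", "@WORKFLOW.md Phase 1"),
    ("review", "@WORKFLOW.md"),
]

def detect_intent(message):
    """Return the first supporting file whose keyword appears."""
    msg = message.lower()
    for keyword, target in ROUTES:
        if keyword in msg:
            return target
    return "@WORKFLOW.md"  # default: full review

print(detect_intent("quick review of this fix"))  # → @MODES.md
```

Ordering "quick review" before the bare "review" keyword is what keeps the specific mode from being shadowed by the general one.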

## The 5-Phase Workflow

**When running full review:**

### Phase 1: Validate 🔍
- Run tests, linting, type checking
- Security scan
- Documentation check

**See @WORKFLOW.md Phase 1 for validation details**

### Phase 2: Auto-Fix ⚡
- Fix simple issues (formatting)
- Delegate complex issues to agents
- Re-validate

**See @AGENTS.md for fix strategies**

### Phase 3: Generate Commits 📝
- Group related changes
- Create atomic commits
- Conventional commit format

**See @COMMITS.md for commit generation**

### Phase 4: Multi-Agent Review 🤖
- Security review
- Code quality review
- Test coverage review

**See @AGENTS.md for review coordination**

### Phase 5: Create PR 🚀
- Generate description from spec/commits
- Include quality report
- Submit to GitHub

**See @PR.md for PR creation**

## Key Features

### Smart Quality Validation
✅ Language-specific validation (Python, Rust, JS, Go)
✅ Multi-domain checks (code, security, tests, docs)
✅ Automatic fixing of common issues
✅ Clear pass/fail reporting

### Intelligent Commit Generation
✅ Groups related changes by module
✅ Atomic commits (one logical change)
✅ Conventional commit format
✅ Links to specifications

### Multi-Agent Review
✅ Parallel agent execution
✅ Domain-specific expertise
✅ Actionable suggestions
✅ Required fix identification

### Rich PR Creation
✅ Spec-driven descriptions
✅ Quality metrics included
✅ Test coverage reported
✅ Links to specifications
✅ Review insights attached

## Common Workflows

### After Implementing a Feature
```
User: "Review and ship spec-feature-001"

Me:
1. Validate: Run tests, linting, security scan
2. Fix: Auto-fix formatting, delegate complex issues
3. Commit: Generate organized commits
4. Review: Multi-agent code review
5. PR: Create comprehensive pull request
```

### Just Need a PR
```
User: "Create PR for spec-feature-001"

Me:
1. Find completed spec
2. Generate PR description
3. Create GitHub PR
4. Report URL
```

### Want to Validate First
```
User: "Validate my code"

Me:
1. Run all quality gates
2. Report results (✅ or ❌)
3. If issues: List them with fix suggestions
4. Ask: "Fix issues and ship?" or "Just report?"
```

## Supporting Files (Load on Demand)

- **@WORKFLOW.md** (1329 lines) - Complete 5-phase process
- **@AGENTS.md** (995 lines) - Multi-agent coordination
- **@MODES.md** (869 lines) - Different workflow modes
- **@COMMITS.md** (1049 lines) - Commit generation strategies
- **@PR.md** (1094 lines) - PR creation with rich context

**Total if all loaded:** 5609 lines
**Typical usage:** 200-1500 lines (only what's needed)

## Success Criteria

**Full workflow complete when:**
- ✅ All quality gates passed
- ✅ Issues fixed or documented
- ✅ Commits properly organized
- ✅ Multi-agent review complete
- ✅ PR created with rich context
- ✅ Spec updated (if applicable)

**PR-only complete when:**
- ✅ Spec found (if spec-driven)
- ✅ PR description generated
- ✅ GitHub PR created
- ✅ URL returned to user

## Next Steps After Using

- PR created → Wait for team review
- Quality issues found → Use implementing-features to fix
- Want to iterate → Make changes, run me again

---

*I handle the entire "code is done, make it shippable" workflow. From validation to PR creation, I ensure quality and create comprehensive documentation for reviewers.*

1329 skills/reviewing-and-shipping/WORKFLOW.md (new file; diff suppressed because it is too large)

182 skills/security-auditing/SKILL.md (new file)
@@ -0,0 +1,182 @@
---
name: Security Auditing
description: Audit security with vulnerability scanning, input validation checks, and auth/authz review against OWASP Top 10. Use when implementing authentication, reviewing security-sensitive code, or conducting security audits.
---

# Security Auditing

## Purpose
Provides security best practices, patterns, and checklists for ensuring secure code implementation.

## When to Use
- Implementing authentication or authorization systems
- Reviewing code for security vulnerabilities
- Validating input/output handling
- Designing secure APIs
- Conducting security audits
- Analyzing data protection requirements

## Security Checklist

### Input Validation
- ✅ Sanitize all external inputs
- ✅ Validate data types and formats
- ✅ Implement whitelist validation where possible
- ✅ Prevent SQL injection via parameterized queries
- ✅ Guard against XSS attacks
- ✅ Validate file uploads (type, size, content)
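
For example, parameterized queries keep user input out of the SQL text entirely. A minimal sqlite3 illustration; the same idea applies to any driver's placeholder syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# UNSAFE: string interpolation lets input rewrite the query, e.g.
#   conn.execute(f"SELECT role FROM users WHERE name = '{user_input}'")

# SAFE: the driver binds the value; it is never parsed as SQL.
user_input = "alice' OR '1'='1"  # classic injection attempt
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] — the malicious string matches no user
```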

### Authentication & Authorization
- ✅ Use strong password hashing (bcrypt, Argon2)
- ✅ Implement proper session management
- ✅ Use secure token generation (JWT with proper signing)
- ✅ Implement token expiration and refresh strategies
- ✅ Apply role-based access control (RBAC)
- ✅ Verify permissions at every access point
- ✅ Use multi-factor authentication for sensitive operations
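
The salted, slow-hash idea behind the bcrypt/Argon2 recommendation, sketched with the standard library's scrypt as a stand-in; in production, prefer a dedicated library such as bcrypt or argon2-cffi:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Salted scrypt hash; the salt is stored alongside the digest."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", stored))  # → True
```

The per-password random salt defeats rainbow tables, and the deliberately expensive hash slows offline brute force.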

### Data Protection
- ✅ Encrypt sensitive data at rest
- ✅ Use TLS/HTTPS for data in transit
- ✅ Implement proper key management
- ✅ Avoid storing sensitive data in logs
- ✅ Implement data retention policies
- ✅ Comply with GDPR/HIPAA requirements if applicable

### API Security
- ✅ Implement rate limiting
- ✅ Use API keys or OAuth for authentication
- ✅ Validate and sanitize all API inputs
- ✅ Implement proper CORS policies
- ✅ Use security headers (CSP, HSTS, X-Frame-Options)
- ✅ Version APIs to manage breaking changes safely
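
The rate-limiting item is commonly implemented as a token bucket; a self-contained sketch (the rate and capacity values are arbitrary examples):

```python
import time

class TokenBucket:
    """Allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity, self.tokens + (now - self.last) * self.rate
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should answer 429 Too Many Requests

bucket = TokenBucket(rate=5, capacity=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, then denied
```

In a real API you would keep one bucket per client key and check it in middleware before the handler runs.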

### Audit Logging
- ✅ Log all authentication attempts
- ✅ Log authorization failures
- ✅ Track sensitive data access
- ✅ Log configuration changes
- ✅ Implement secure log storage
- ✅ Monitor logs for suspicious activity

## Common Vulnerabilities

### OWASP Top 10
1. **Injection**: Use parameterized queries, input validation
2. **Broken Authentication**: Implement secure session management
3. **Sensitive Data Exposure**: Encrypt data, use HTTPS
4. **XML External Entities (XXE)**: Disable XML external entity processing
5. **Broken Access Control**: Verify permissions at every endpoint
6. **Security Misconfiguration**: Follow security hardening guides
7. **Cross-Site Scripting (XSS)**: Sanitize output, use CSP headers
8. **Insecure Deserialization**: Validate serialized data
9. **Using Components with Known Vulnerabilities**: Keep dependencies updated
10. **Insufficient Logging & Monitoring**: Implement comprehensive logging

## Security Patterns

### Secure Configuration
```yaml
security_config:
  session:
    secure: true
    httpOnly: true
    sameSite: "strict"
    maxAge: 3600

  passwords:
    minLength: 12
    requireSpecialChars: true
    hashAlgorithm: "argon2"

  api:
    rateLimit: 100/minute
    corsOrigins: ["https://trusted-domain.com"]
    requireApiKey: true
```

### Authentication Flow
```
1. User submits credentials
2. Validate input format
3. Check against secure hash in database
4. Generate secure session token (JWT)
5. Set secure, httpOnly cookie
6. Return success with minimal user info
7. Log authentication event
```
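
Steps 4-5 hinge on a signed token. A minimal HMAC-signed token sketch using only the standard library; a real deployment would use a maintained JWT library and include expiry claims:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # placeholder; load from a secret store

def sign_token(claims: dict) -> str:
    """payload.signature, with the signature keyed by the server secret."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_token(token: str):
    """Return the claims if the signature checks out, else None."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign_token({"sub": "alice", "role": "admin"})
print(verify_token(token))  # → {'sub': 'alice', 'role': 'admin'}
```

Because only the server knows `SECRET`, a client can read the claims but cannot alter them without invalidating the signature.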

### Authorization Pattern
```
1. Receive request with token
2. Validate token signature and expiration
3. Extract user roles/permissions
4. Check if user has required permission
5. Execute action if authorized
6. Log authorization decision
7. Return 403 if unauthorized
```
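
Steps 3-5 reduce to a permission lookup; a small RBAC sketch (the role table and permission names are invented for illustration):

```python
# Hypothetical role → permission table.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def authorize(roles, required):
    """True if any of the user's roles grants the required permission."""
    granted = set().union(*(ROLE_PERMISSIONS.get(r, set()) for r in roles))
    return required in granted

def handle_request(roles, required, action):
    if not authorize(roles, required):
        return 403, "Forbidden"   # step 7: deny (and log the decision)
    return 200, action()          # step 5: execute if authorized

print(handle_request(["editor"], "delete", lambda: "ok"))  # → (403, 'Forbidden')
```

Checking the permission at the handler boundary, rather than inside the action, mirrors the "verify permissions at every access point" checklist item above.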

## Security Commands

### Dependency Scanning
```bash
# Python
pip-audit

# Node.js
npm audit
npm audit fix

# General
snyk test
```

### Static Analysis
```bash
# Python
bandit -r src/

# Node.js
npm run lint:security
```

### Secrets Detection
```bash
# Detect secrets in code
trufflehog filesystem .
git-secrets --scan

# Scan for API keys
detect-secrets scan
```

## Best Practices

### Code Review Security Checklist
- [ ] All inputs validated and sanitized
- [ ] Outputs properly encoded
- [ ] Authentication required for sensitive operations
- [ ] Authorization checked at every access point
- [ ] Sensitive data encrypted
- [ ] Error messages don't leak information
- [ ] Dependencies up to date
- [ ] Security headers implemented
- [ ] Rate limiting in place
- [ ] Audit logging configured

### Secure Development Workflow
1. **Design Phase**: Threat modeling, security requirements
2. **Development**: Follow secure coding guidelines
3. **Testing**: Security unit tests, penetration testing
4. **Review**: Security-focused code review
5. **Deployment**: Security configuration review
6. **Monitoring**: Active security monitoring and alerts

## Additional Resources
- OWASP Top 10: https://owasp.org/www-project-top-ten/
- CWE Top 25: https://cwe.mitre.org/top25/
- Security Headers: https://securityheaders.com/

---
*Use this skill when implementing security features or conducting security reviews*