Initial commit

**commands/workflows/data-driven-feature.md** (new file, 75 lines)

---
model: claude-opus-4-1
---

Build data-driven features with integrated pipelines and ML capabilities using specialized agents:

[Extended thinking: This workflow orchestrates data scientists, data engineers, backend architects, and AI engineers to build features that leverage data pipelines, analytics, and machine learning. Each agent contributes its expertise to create a complete data-driven solution.]

## Phase 1: Data Analysis and Design

### 1. Data Requirements Analysis
- Use Task tool with subagent_type="data-scientist"
- Prompt: "Analyze data requirements for: $ARGUMENTS. Identify data sources, required transformations, analytics needs, and potential ML opportunities."
- Output: Data analysis report, feature engineering requirements, ML feasibility assessment

### 2. Data Pipeline Architecture
- Use Task tool with subagent_type="data-engineer"
- Prompt: "Design the data pipeline architecture for: $ARGUMENTS. Include ETL/ELT processes, data storage, streaming requirements, and integration with existing systems, based on the data scientist's analysis."
- Output: Pipeline architecture, technology stack, data flow diagrams

## Phase 2: Backend Integration

### 3. API and Service Design
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Design backend services to support the data-driven feature: $ARGUMENTS. Include APIs for data ingestion, analytics endpoints, and ML model serving based on the pipeline architecture."
- Output: Service architecture, API contracts, integration patterns

### 4. Database and Storage Design
- Use Task tool with subagent_type="database-optimizer"
- Prompt: "Design an optimal database schema and storage strategy for: $ARGUMENTS. Consider both transactional and analytical workloads, time-series data, and ML feature stores."
- Output: Database schemas, indexing strategies, storage recommendations

## Phase 3: ML and AI Implementation

### 5. ML Pipeline Development
- Use Task tool with subagent_type="ml-engineer"
- Prompt: "Implement the ML pipeline for: $ARGUMENTS. Include feature engineering, model training, validation, and deployment based on the data scientist's requirements."
- Output: ML pipeline code, model artifacts, deployment strategy

### 6. AI Integration
- Use Task tool with subagent_type="ai-engineer"
- Prompt: "Build AI-powered features for: $ARGUMENTS. Integrate LLMs, implement RAG if needed, and create intelligent automation based on the ML engineer's models."
- Output: AI integration code, prompt engineering, RAG implementation

## Phase 4: Implementation and Optimization

### 7. Data Pipeline Implementation
- Use Task tool with subagent_type="data-engineer"
- Prompt: "Implement production data pipelines for: $ARGUMENTS. Include real-time streaming, batch processing, and data quality monitoring based on all previous designs."
- Output: Pipeline implementation, monitoring setup, data quality checks

### 8. Performance Optimization
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Optimize data processing and model serving performance for: $ARGUMENTS. Focus on query optimization, caching strategies, and model inference speed."
- Output: Performance improvements, caching layers, optimization report

## Phase 5: Testing and Deployment

### 9. Comprehensive Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create test suites for the data pipelines and ML components of: $ARGUMENTS. Include data validation tests, model performance tests, and integration tests."
- Output: Test suites, data quality tests, ML monitoring tests

### 10. Production Deployment
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Deploy the data-driven feature to production: $ARGUMENTS. Include pipeline orchestration, model deployment, monitoring, and rollback strategies."
- Output: Deployment configurations, monitoring dashboards, operational runbooks

## Coordination Notes
- Data flow and requirements cascade from the data scientist to the engineers
- ML models must integrate seamlessly with backend services
- Performance considerations apply to both data processing and model serving
- Maintain data lineage and versioning throughout the pipeline

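The cascade described in the coordination notes is the core mechanic of this workflow: each phase's output is threaded into the next agent's prompt. The Task tool itself is Claude-specific, so the following is only a minimal sketch of that pattern, with `run_agent` as a hypothetical stand-in for a real Task invocation:

```python
# Sketch of the output-cascading pattern used in this workflow: each
# step's result is added to a shared context that later prompts can see.
# run_agent is a hypothetical stand-in for a real Task tool invocation.

def run_agent(subagent_type: str, prompt: str) -> str:
    # Placeholder: a real implementation would dispatch to the Task tool.
    return f"[{subagent_type} output for: {prompt[:40]}...]"

def run_workflow(feature: str) -> dict:
    steps = [
        ("data-scientist", "Analyze data requirements for: {feature}"),
        ("data-engineer", "Design the data pipeline architecture for: {feature}"),
        ("backend-architect", "Design backend services for: {feature}"),
    ]
    context: dict[str, str] = {}
    for subagent, template in steps:
        prompt = template.format(feature=feature)
        if context:  # cascade: surface earlier outputs to later agents
            prompt += " Prior outputs: " + "; ".join(context.values())
        context[subagent] = run_agent(subagent, prompt)
    return context
```

The point of the sketch is only the threading of prior outputs into later prompts; real phases would also pass artifacts (schemas, diagrams) rather than raw strings.
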
Data-driven feature to build: $ARGUMENTS

**commands/workflows/feature-development.md** (new file, 120 lines)

---
model: claude-opus-4-1
allowed-tools: Task, Read, Write, Bash(*), Glob, Grep
argument-hint: <feature-description> [--complexity=<level>] [--learning-focus=<aspect>] [--collaboration=<mode>]
description: Intelligent feature development with multi-expert orchestration and adaptive learning
---

# Intelligent Feature Development Engine

Implement complete features through multi-expert collaboration with adaptive learning, structured dissent, and cognitive harmonics optimization. Transform feature development into a comprehensive learning and building experience that delivers both working functionality and team capability growth.

[Extended thinking: This enhanced workflow integrates the Split Team Framework for comprehensive feature analysis, the Teacher Framework for skill development during implementation, and structured dissent for robust architectural decisions. Each phase includes meta-cognitive reflection and knowledge transfer opportunities.]

## Intelligent Development Framework

### Multi-Expert Team Assembly
**Core Development Specialists:**
- **Feature Architect**: Overall design strategy and system integration
- **Frontend Specialist**: User interface and experience implementation
- **Backend Engineer**: Service logic and data management
- **Quality Assurance**: Testing strategy and validation
- **Performance Optimizer**: Efficiency and scalability considerations
- **Security Analyst**: Protection and compliance requirements

**Learning and Growth Roles:**
- **Adaptive Mentor**: Skill development and knowledge transfer
- **Pattern Recognizer**: Best-practice identification and application
- **Knowledge Bridge**: Cross-domain learning and connection building

**Challenge and Innovation:**
- **Constructive Critic**: Challenges design assumptions and generates alternatives
- **Future-Proofing Visionary**: Advocates for long-term evolution and maintainability

### Development Approach Selection

#### Option A: Collaborative Multi-Expert Development
- Use the `/orchestrate` command for comprehensive team coordination
- Integrate multiple perspectives for robust feature design
- Include structured dissent for design validation
- Emphasize learning and capability building

#### Option B: Enhanced TDD-Driven Development
- Use the `/tdd-cycle` workflow with multi-expert enhancement
- Integrate constructive challenge into test design
- Include adaptive learning for TDD skill development
- Apply meta-cognitive reflection to testing effectiveness

#### Option C: Learning-Focused Development
- Use `/teach_concept` for skill building during implementation
- Use `/adaptive_mentor` for personalized development guidance
- Include `/pattern_discovery` for reusable pattern identification
- Emphasize transferable knowledge and capability growth

### Adaptive Complexity Management
- **Simple Features**: Direct implementation with basic orchestration
- **Moderate Features**: Multi-expert collaboration with structured phases
- **Complex Features**: Comprehensive orchestration with structured dissent
- **Learning Features**: High educational focus with mentoring integration

## Traditional Development Steps

1. **Backend Architecture Design**
   - Use Task tool with subagent_type="backend-architect"
   - Prompt: "Design a RESTful API and data model for: $ARGUMENTS. Include endpoint definitions, database schema, and service boundaries."
   - Save the API design and schema for the next agents

2. **Frontend Implementation**
   - Use Task tool with subagent_type="frontend-developer"
   - Prompt: "Create UI components for: $ARGUMENTS. Use the API design from backend-architect: [include API endpoints and data models from step 1]"
   - Ensure the UI matches the backend API contract

3. **Test Coverage**
   - Use Task tool with subagent_type="test-automator"
   - Prompt: "Write comprehensive tests for: $ARGUMENTS. Cover both backend API endpoints: [from step 1] and frontend components: [from step 2]"
   - Include unit, integration, and e2e tests

4. **Production Deployment**
   - Use Task tool with subagent_type="deployment-engineer"
   - Prompt: "Prepare the production deployment for: $ARGUMENTS. Include CI/CD pipeline, containerization, and monitoring for the implemented feature."
   - Ensure all components from previous steps are deployment-ready

## TDD Development Steps

When using TDD mode, the sequence changes to:

1. **Test-First Backend Design**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Design and write failing tests for the backend API: $ARGUMENTS. Define test cases before implementation."
   - Create a comprehensive test suite for API endpoints

2. **Test-First Frontend Design**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Write failing tests for frontend components: $ARGUMENTS. Include unit and integration tests."
   - Define expected UI behavior through tests

3. **Incremental Implementation**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Implement features to pass the tests for: $ARGUMENTS. Follow strict red-green-refactor cycles."
   - Build features incrementally, guided by tests

4. **Refactoring & Optimization**
   - Use Task tool with subagent_type="tdd-orchestrator"
   - Prompt: "Refactor the implementation while keeping tests green: $ARGUMENTS. Optimize for maintainability."
   - Improve code quality with the test safety net in place

5. **Production Deployment**
   - Use Task tool with subagent_type="deployment-engineer"
   - Prompt: "Deploy the TDD-developed feature: $ARGUMENTS. Verify all tests pass in the CI/CD pipeline."
   - Ensure the test suite runs in the deployment pipeline

## Execution Parameters

- **--tdd**: Enable TDD mode (uses the tdd-orchestrator agent)
- **--strict-tdd**: Enforce strict red-green-refactor cycles
- **--test-coverage-min**: Set the minimum test coverage threshold (default: 80%)
- **--tdd-cycle**: Use the dedicated tdd-cycle workflow for granular control

Aggregate results from all agents and present a unified implementation plan.

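The argument-hint and the execution parameters together imply a small flag grammar for this command. If those arguments were parsed programmatically, the grammar might look like the following sketch (flag names come from this file; the choices and defaults other than 80% are assumptions for illustration):

```python
import argparse

# Sketch of the flag grammar implied by argument-hint and the Execution
# Parameters section. Choices and non-documented defaults are assumptions.
parser = argparse.ArgumentParser(prog="feature-development")
parser.add_argument("feature", help="feature description ($ARGUMENTS)")
parser.add_argument("--complexity",
                    choices=["simple", "moderate", "complex", "learning"],
                    default="moderate")
parser.add_argument("--learning-focus")
parser.add_argument("--collaboration")
parser.add_argument("--tdd", action="store_true",
                    help="use the tdd-orchestrator agent")
parser.add_argument("--strict-tdd", action="store_true")
parser.add_argument("--test-coverage-min", type=int, default=80)
parser.add_argument("--tdd-cycle", action="store_true")

# Example invocation: TDD mode with a raised coverage threshold.
args = parser.parse_args(["user auth", "--tdd", "--test-coverage-min=90"])
```

Unset flags fall back to their defaults, so `--complexity` stays at `moderate` here while TDD mode and the 90% threshold take effect.
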
Feature description: $ARGUMENTS

**commands/workflows/full-review.md** (new file, 80 lines)

---
model: claude-opus-4-1
---

Perform a comprehensive review using multiple specialized agents with explicit Task tool invocations:

[Extended thinking: This workflow performs a thorough multi-perspective review by orchestrating specialized review agents. Each agent examines a different aspect, and the results are consolidated into a unified action plan. Includes TDD compliance verification when enabled.]

## Review Configuration

- **Standard Review**: Traditional comprehensive review (default)
- **TDD-Enhanced Review**: Includes TDD compliance and test-first verification
  - Enable with the **--tdd-review** flag
  - Verifies red-green-refactor cycle adherence
  - Checks test-first implementation patterns

Execute parallel reviews using the Task tool with specialized agents:

## 1. Code Quality Review
- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Review code quality and maintainability for: $ARGUMENTS. Check for code smells, readability, documentation, and adherence to best practices."
- Focus: Clean code principles, SOLID, DRY, naming conventions

## 2. Security Audit
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Perform a security audit on: $ARGUMENTS. Check for vulnerabilities, OWASP compliance, authentication issues, and data protection."
- Focus: Injection risks, authentication, authorization, data encryption

## 3. Architecture Review
- Use Task tool with subagent_type="architect-reviewer"
- Prompt: "Review the architectural design and patterns in: $ARGUMENTS. Evaluate scalability, maintainability, and adherence to architectural principles."
- Focus: Service boundaries, coupling, cohesion, design patterns

## 4. Performance Analysis
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Analyze the performance characteristics of: $ARGUMENTS. Identify bottlenecks, resource usage, and optimization opportunities."
- Focus: Response times, memory usage, database queries, caching

## 5. Test Coverage Assessment
- Use Task tool with subagent_type="test-automator"
- Prompt: "Evaluate test coverage and quality for: $ARGUMENTS. Assess unit tests and integration tests, and identify gaps in test coverage."
- Focus: Coverage metrics, test quality, edge cases, test maintainability

## 6. TDD Compliance Review (when --tdd-review is enabled)
- Use Task tool with subagent_type="tdd-orchestrator"
- Prompt: "Verify TDD compliance for: $ARGUMENTS. Check for test-first development patterns, red-green-refactor cycles, and test-driven design."
- Focus on TDD metrics:
  - **Test-First Verification**: Were tests written before implementation?
  - **Red-Green-Refactor Cycles**: Evidence of proper TDD cycles
  - **Test Coverage Trends**: Coverage growth patterns during development
  - **Test Granularity**: Appropriate test size and scope
  - **Refactoring Evidence**: Code improvements made with a test safety net
  - **Test Quality**: Tests that drive design, not just verify behavior

## Consolidated Report Structure
Compile all feedback into a unified report:
- **Critical Issues** (must fix): Security vulnerabilities, broken functionality, architectural flaws
- **Recommendations** (should fix): Performance bottlenecks, code quality issues, missing tests
- **Suggestions** (nice to have): Refactoring opportunities, documentation improvements
- **Positive Feedback** (what's done well): Good practices to maintain and replicate

### TDD-Specific Metrics (when --tdd-review is enabled)
Additional TDD compliance report section:
- **TDD Adherence Score**: Percentage of code developed using the TDD methodology
- **Test-First Evidence**: Commits showing tests before implementation
- **Cycle Completeness**: Percentage of complete red-green-refactor cycles
- **Test Design Quality**: How well the tests drive the design
- **Coverage Delta Analysis**: Coverage changes correlated with feature additions
- **Refactoring Frequency**: Evidence of continuous improvement
- **Test Execution Time**: Performance of the test suite
- **Test Stability**: Flakiness and reliability metrics

## Review Options

- **--tdd-review**: Enable TDD compliance checking
- **--strict-tdd**: Fail the review if TDD practices were not followed
- **--tdd-metrics**: Generate a detailed TDD metrics report
- **--test-first-only**: Only review code with test-first evidence

Target: $ARGUMENTS

**commands/workflows/full-stack-feature.md** (new file, 63 lines)

---
model: claude-opus-4-1
---

Implement a full-stack feature across multiple platforms with coordinated agent orchestration:

[Extended thinking: This workflow orchestrates a comprehensive feature implementation across backend, frontend, mobile, and API layers. Each agent builds upon the work of previous agents to create a cohesive multi-platform solution.]

## Phase 1: Architecture and API Design

### 1. Backend Architecture
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Design the backend architecture for: $ARGUMENTS. Include service boundaries, data models, and technology recommendations."
- Output: Service architecture, database schema, API structure

### 2. GraphQL API Design (if applicable)
- Use Task tool with subagent_type="graphql-architect"
- Prompt: "Design the GraphQL schema and resolvers for: $ARGUMENTS. Build on the backend architecture from the previous step. Include types, queries, mutations, and subscriptions."
- Output: GraphQL schema, resolver structure, federation strategy

## Phase 2: Implementation

### 3. Frontend Development
- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Implement the web frontend for: $ARGUMENTS. Use the API design from the previous steps. Include responsive UI, state management, and API integration."
- Output: React/Vue/Angular components, state management, API client

### 4. Mobile Development
- Use Task tool with subagent_type="mobile-developer"
- Prompt: "Implement the mobile app features for: $ARGUMENTS. Ensure consistency with the web frontend and use the same API. Include offline support and native integrations."
- Output: React Native/Flutter implementation, offline sync, push notifications

## Phase 3: Quality Assurance

### 5. Comprehensive Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create test suites for: $ARGUMENTS. Cover backend APIs, frontend components, mobile app features, and integration tests across all platforms."
- Output: Unit tests, integration tests, e2e tests, test documentation

### 6. Security Review
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Audit security across all implementations of: $ARGUMENTS. Check API security, frontend vulnerabilities, and mobile app security."
- Output: Security report, remediation steps

## Phase 4: Optimization and Deployment

### 7. Performance Optimization
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Optimize performance across all platforms for: $ARGUMENTS. Focus on API response times, frontend bundle size, and mobile app performance."
- Output: Performance improvements, caching strategies, optimization report

### 8. Deployment Preparation
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Prepare the deployment for all components of: $ARGUMENTS. Include CI/CD pipelines, containerization, and monitoring setup."
- Output: Deployment configurations, monitoring setup, rollout strategy

## Coordination Notes
- Each agent receives the outputs of previous agents
- Maintain consistency across all platforms
- Ensure API contracts are honored by all clients
- Document integration points between components

Feature to implement: $ARGUMENTS


**commands/workflows/git-workflow.md** (new file, 13 lines)

---
model: claude-opus-4-1
---

Complete Git workflow using specialized agents:

1. code-reviewer: Review uncommitted changes
2. test-automator: Ensure tests pass
3. deployment-engineer: Verify deployment readiness
4. Create a commit message following conventions
5. Push and create a PR with a proper description

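"Following conventions" in step 4 usually means a Conventional Commits-style header. A minimal validity check for the header line might look like the following sketch (it covers only the common `type(scope)!: subject` form, not body or footer rules):

```python
import re

# Minimal Conventional Commits header check: type(scope)!: subject.
# Covers only the header line; body/footer rules are out of scope.
CONVENTIONAL = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)"
    r"(\([a-z0-9-]+\))?!?: \S.*$"
)

def is_conventional(header: str) -> bool:
    return CONVENTIONAL.match(header) is not None
```

A check like this can gate the commit step before the PR is created, so malformed headers are caught locally rather than in review.
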
Target branch: $ARGUMENTS

**commands/workflows/improve-agent.md** (new file, 17 lines)

---
model: claude-opus-4-1
---

Improve an existing agent based on recent performance:

1. Analyze recent uses of: $ARGUMENTS
2. Identify patterns in:
   - Failed tasks
   - User corrections
   - Suboptimal outputs
3. Update the agent's prompt with:
   - New examples
   - Clarified instructions
   - Additional constraints
4. Test on recent scenarios
5. Save the improved version


**commands/workflows/incident-response.md** (new file, 85 lines)

---
model: claude-opus-4-1
---

Respond to production incidents with coordinated agent expertise for rapid resolution:

[Extended thinking: This workflow handles production incidents with urgency and precision. Multiple specialized agents work together to identify root causes, implement fixes, and prevent recurrence.]

## Phase 1: Immediate Response

### 1. Incident Assessment
- Use Task tool with subagent_type="incident-responder"
- Prompt: "URGENT: Assess production incident: $ARGUMENTS. Determine severity, impact, and immediate mitigation steps. Time is critical."
- Output: Incident severity, impact assessment, immediate actions

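Severity in step 1 is typically derived from an impact/urgency matrix rather than judged ad hoc. A sketch of one such mapping (the thresholds and labels here are assumptions; adapt them to your own incident policy):

```python
# Assumed impact/urgency -> severity matrix; real policies vary widely.
SEVERITY = {
    ("high", "high"): "SEV1",
    ("high", "low"): "SEV2",
    ("low", "high"): "SEV2",
    ("low", "low"): "SEV3",
}

def assess(users_affected_pct: float, revenue_blocking: bool) -> str:
    """Map raw incident signals to a severity label."""
    impact = "high" if users_affected_pct >= 10 else "low"
    urgency = "high" if revenue_blocking else "low"
    return SEVERITY[(impact, urgency)]
```

Encoding the matrix up front keeps the assessment fast and consistent under pressure, which is exactly when ad hoc judgment is least reliable.
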
### 2. Initial Troubleshooting
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Investigate the production issue: $ARGUMENTS. Check logs, metrics, recent deployments, and system health. Identify potential root causes."
- Output: Initial findings, suspicious patterns, potential causes

## Phase 2: Root Cause Analysis

### 3. Deep Debugging
- Use Task tool with subagent_type="debugger"
- Prompt: "Debug the production issue: $ARGUMENTS using the findings from the initial investigation. Analyze stack traces, reproduce the issue if possible, and identify the exact root cause."
- Output: Root cause identification, reproduction steps, debug analysis

### 4. Performance Analysis (if applicable)
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Analyze the performance aspects of the incident: $ARGUMENTS. Check for resource exhaustion, bottlenecks, or performance degradation."
- Output: Performance metrics, resource analysis, bottleneck identification

### 5. Database Investigation (if applicable)
- Use Task tool with subagent_type="database-optimizer"
- Prompt: "Investigate database-related aspects of the incident: $ARGUMENTS. Check for locks, slow queries, connection issues, or data corruption."
- Output: Database health report, query analysis, data integrity check

## Phase 3: Resolution Implementation

### 6. Fix Development
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Design and implement a fix for the incident: $ARGUMENTS based on the root cause analysis. Ensure the fix is safe for immediate production deployment."
- Output: Fix implementation, safety analysis, rollout strategy

### 7. Emergency Deployment
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Deploy the emergency fix for the incident: $ARGUMENTS. Implement with minimal risk, include a rollback plan, and monitor the deployment closely."
- Output: Deployment execution, rollback procedures, monitoring setup

## Phase 4: Stabilization and Prevention

### 8. System Stabilization
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Stabilize the system after the incident fix: $ARGUMENTS. Monitor system health, clear any backlogs, and ensure full recovery."
- Output: System health report, recovery metrics, stability confirmation

### 9. Security Review (if applicable)
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Review the security implications of the incident: $ARGUMENTS. Check for any security breaches, data exposure, or vulnerabilities exploited."
- Output: Security assessment, breach analysis, hardening recommendations

## Phase 5: Post-Incident Activities

### 10. Monitoring Enhancement
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Enhance monitoring to prevent recurrence of: $ARGUMENTS. Add alerts, improve observability, and set up early warning systems."
- Output: New monitoring rules, alert configurations, observability improvements

### 11. Test Coverage
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create tests to prevent regression of the incident: $ARGUMENTS. Include unit tests, integration tests, and chaos engineering scenarios."
- Output: Test implementations, regression prevention, chaos tests

### 12. Documentation
- Use Task tool with subagent_type="incident-responder"
- Prompt: "Document the incident postmortem for: $ARGUMENTS. Include timeline, root cause, impact, resolution, and lessons learned. No blame; focus on improvement."
- Output: Postmortem document, action items, process improvements

## Coordination Notes
- Speed is critical in the early phases: run agents in parallel where possible
- Communication between agents must be clear and rapid
- All changes must be safe and reversible
- Document everything for the postmortem analysis

Production incident: $ARGUMENTS


**commands/workflows/legacy-modernize.md** (new file, 14 lines)

---
model: claude-opus-4-1
---

Modernize legacy code using expert agents:

1. legacy-modernizer: Analyze and plan the modernization
2. test-automator: Create tests for the legacy code
3. code-reviewer: Review the modernization plan
4. python-pro/golang-pro: Implement the modernization
5. security-auditor: Verify security improvements
6. performance-engineer: Validate performance

Target: $ARGUMENTS


**commands/workflows/ml-pipeline.md** (new file, 47 lines)

---
model: claude-opus-4-1
---

# Machine Learning Pipeline

Design and implement a complete ML pipeline for: $ARGUMENTS

Create a production-ready pipeline including:

1. **Data Ingestion**:
   - Multiple data source connectors
   - Schema validation with Pydantic
   - Data versioning strategy
   - Incremental loading capabilities

2. **Feature Engineering**:
   - Feature transformation pipeline
   - Feature store integration
   - Statistical validation
   - Handling of missing data and outliers

3. **Model Training**:
   - Experiment tracking (MLflow/W&B)
   - Hyperparameter optimization
   - Cross-validation strategy
   - Model versioning

4. **Model Evaluation**:
   - Comprehensive metrics
   - A/B testing framework
   - Bias detection
   - Performance monitoring

5. **Deployment**:
   - Model serving API
   - Batch/stream prediction
   - Model registry
   - Rollback capabilities

6. **Monitoring**:
   - Data drift detection
   - Model performance tracking
   - Alert system
   - Retraining triggers

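Item 1 names Pydantic for schema validation at ingest time. The same idea in a dependency-free sketch (Pydantic would replace these manual checks with declarative field types and richer error reporting; the field names here are hypothetical):

```python
from dataclasses import dataclass

# Dependency-free sketch of ingest-time schema validation. Pydantic,
# named above, replaces these manual coercions with declarative fields.
# The Event fields are a hypothetical example schema.

@dataclass
class Event:
    user_id: int
    amount: float
    currency: str

def validate_row(row: dict) -> Event:
    """Coerce and validate one raw row, raising ValueError on bad data."""
    try:
        event = Event(
            user_id=int(row["user_id"]),
            amount=float(row["amount"]),
            currency=str(row["currency"]),
        )
    except (KeyError, TypeError, ValueError) as exc:
        raise ValueError(f"bad row {row!r}: {exc}") from exc
    if event.amount < 0:
        raise ValueError(f"negative amount in {row!r}")
    return event
```

Rejecting malformed rows at the ingestion boundary keeps downstream feature engineering and training code free of defensive checks.
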
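For item 6, a minimal data-drift signal is a standardized mean shift between a reference window and a live window; production systems use richer tests (PSI, Kolmogorov-Smirnov), but the trigger logic has the same shape. A sketch (the threshold of 3.0 is an assumption):

```python
import statistics

# Minimal drift signal: standardized mean shift between a reference
# window and a live window. The 3.0 threshold is an assumption; real
# pipelines use richer tests (PSI, KS) with the same trigger shape.

def drifted(reference: list[float], live: list[float],
            threshold: float = 3.0) -> bool:
    ref_mean = statistics.fmean(reference)
    ref_sd = statistics.stdev(reference)
    if ref_sd == 0:
        return statistics.fmean(live) != ref_mean
    z = abs(statistics.fmean(live) - ref_mean) / ref_sd
    return z > threshold
```

A `drifted(...)` result of True is what would fire the alert system and, past a stricter threshold, the retraining trigger.
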
Include error handling and logging, and keep the pipeline cloud-agnostic. Use modern tools such as DVC, MLflow, or similar. Ensure reproducibility and scalability.


**commands/workflows/multi-platform.md** (new file, 14 lines)

---
model: claude-opus-4-1
---

Build the same feature across multiple platforms:

Run in parallel:
- frontend-developer: Web implementation
- mobile-developer: Mobile app implementation
- api-documenter: API documentation

Ensure consistency across all platforms.

Feature specification: $ARGUMENTS


**commands/workflows/performance-optimization.md** (new file, 75 lines)

|
||||
---
|
||||
model: claude-opus-4-1
|
||||
---
|
||||
|
||||
Optimize application performance end-to-end using specialized performance and optimization agents:
|
||||
|
||||
[Extended thinking: This workflow coordinates multiple agents to identify and fix performance bottlenecks across the entire stack. From database queries to frontend rendering, each agent contributes their expertise to create a highly optimized application.]
|
||||
|
||||
## Phase 1: Performance Analysis
|
||||
|
||||
### 1. Application Profiling
|
||||
- Use Task tool with subagent_type="performance-engineer"
|
||||
- Prompt: "Profile application performance for: $ARGUMENTS. Identify CPU, memory, and I/O bottlenecks. Include flame graphs, memory profiles, and resource utilization metrics."
|
||||
- Output: Performance profile, bottleneck analysis, optimization priorities
|
||||
|
||||
### 2. Database Performance Analysis
|
||||
- Use Task tool with subagent_type="database-optimizer"
|
||||
- Prompt: "Analyze database performance for: $ARGUMENTS. Review query execution plans, identify slow queries, check indexing, and analyze connection pooling."
|
||||
- Output: Query optimization report, index recommendations, schema improvements
|
||||
|
||||
## Phase 2: Backend Optimization
|
||||
|
||||
### 3. Backend Code Optimization
|
||||
- Use Task tool with subagent_type="performance-engineer"
|
||||
- Prompt: "Optimize backend code for: $ARGUMENTS based on profiling results. Focus on algorithm efficiency, caching strategies, and async operations."
|
||||
- Output: Optimized code, caching implementation, performance improvements
|
||||
|
||||
### 4. API Optimization
|
||||
- Use Task tool with subagent_type="backend-architect"
|
||||
- Prompt: "Optimize API design and implementation for: $ARGUMENTS. Consider pagination, response compression, field filtering, and batch operations."
|
||||
- Output: Optimized API endpoints, GraphQL query optimization, response time improvements

## Phase 3: Frontend Optimization

### 5. Frontend Performance
- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Optimize frontend performance for: $ARGUMENTS. Focus on bundle size, lazy loading, code splitting, and rendering performance. Implement Core Web Vitals improvements."
- Output: Optimized bundles, lazy loading implementation, performance metrics

### 6. Mobile App Optimization
- Use Task tool with subagent_type="mobile-developer"
- Prompt: "Optimize mobile app performance for: $ARGUMENTS. Focus on startup time, memory usage, battery efficiency, and offline performance."
- Output: Optimized mobile code, reduced app size, improved battery life

## Phase 4: Infrastructure Optimization

### 7. Cloud Infrastructure Optimization
- Use Task tool with subagent_type="cloud-architect"
- Prompt: "Optimize cloud infrastructure for: $ARGUMENTS. Review auto-scaling, instance types, CDN usage, and geographic distribution."
- Output: Infrastructure improvements, cost optimization, scaling strategy

### 8. Deployment Optimization
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Optimize deployment and build processes for: $ARGUMENTS. Improve CI/CD performance, implement caching, and optimize container images."
- Output: Faster builds, optimized containers, improved deployment times

## Phase 5: Monitoring and Validation

### 9. Performance Monitoring Setup
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Set up comprehensive performance monitoring for: $ARGUMENTS. Include APM, real user monitoring, and custom performance metrics."
- Output: Monitoring dashboards, alert thresholds, SLO definitions

### 10. Performance Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create performance test suites for: $ARGUMENTS. Include load tests, stress tests, and performance regression tests."
- Output: Performance test suite, benchmark results, regression prevention
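
A performance regression test from this suite might look like the following sketch, which benchmarks a hypothetical handler against an assumed latency budget using the median of repeated timings:

```python
import statistics
import time

def handler(payload):
    # Stand-in for the code path under test.
    return sorted(payload)

def benchmark(fn, payload, runs=50):
    # Median is less sensitive to scheduler noise than the mean,
    # which makes regression thresholds more stable in CI.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(payload)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

BUDGET_SECONDS = 0.05  # hypothetical per-call latency budget
median_latency = benchmark(handler, list(range(1000, 0, -1)))
within_budget = median_latency < BUDGET_SECONDS
```

Failing the build when `within_budget` is false is one simple way to implement the "regression prevention" output above.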

## Coordination Notes
- Performance metrics guide optimization priorities
- Each optimization must be validated with measurements
- Consider trade-offs between different performance aspects
- Document all optimizations and their impact

Performance optimization target: $ARGUMENTS
108
commands/workflows/security-hardening.md
Normal file
@@ -0,0 +1,108 @@

---
model: claude-opus-4-1
allowed-tools: Task, Read, Write, Bash(*), Glob, Grep
argument-hint: <system-or-application> [--threat-model=<category>] [--compliance=<framework>] [--learning=<security-education>]
description: Multi-expert security hardening with threat modeling and adaptive security education
---

# Advanced Security Hardening Engine

Implement comprehensive security measures through multi-expert collaboration with threat modeling, structured dissent, and adaptive security learning. Transform security implementation into a sophisticated, educational process that builds both robust protection and security expertise.

[Extended thinking: Enhanced workflow integrates multi-perspective threat analysis, constructive challenge of security assumptions, adaptive learning for security skill development, and structured dissent to identify security blind spots and strengthen defenses.]

## Phase 1: Multi-Expert Threat Analysis and Security Assessment

### 1. Comprehensive Multi-Perspective Security Analysis
[Extended thinking: Leverage multiple expert perspectives to ensure comprehensive threat identification and risk assessment from different attack vectors and defense viewpoints.]

**Multi-Expert Threat Assessment:**
- Use `/multi_perspective` command with `"$ARGUMENTS security analysis" security --perspectives=6 --integration=comprehensive --depth=systematic`
- **Security Architect**: Overall security design and defense-in-depth strategy
- **Penetration Tester**: Offensive perspective identifying attack vectors and vulnerabilities
- **Compliance Specialist**: Regulatory requirements and audit preparation
- **Infrastructure Security**: Network, server, and deployment security concerns
- **Application Security**: Code-level vulnerabilities and secure development practices
- **Incident Responder**: Monitoring, detection, and response capability assessment

**Threat Model Challenge:**
- Use `/constructive_dissent` command with `"Primary security threats for $ARGUMENTS" --dissent-intensity=rigorous --alternatives=3 --focus=threat-assumptions`
- Challenge assumptions about primary threats and attack vectors
- Generate alternative threat scenarios and attack pathways
- Question whether security focus areas are appropriately prioritized

**Security Learning Integration:**
- Use `/teach_concept` command with `"threat modeling for $ARGUMENTS" intermediate --approach=experiential --pathway=analytical`
- Build understanding of security principles through hands-on threat analysis
- Develop security intuition and pattern-recognition skills
- Create transferable security knowledge for future projects

### 2. Enhanced Architecture Security Design
[Extended thinking: Create robust security architecture through collaborative design with red-team thinking and structured challenge of security assumptions.]

**Collaborative Security Architecture:**
- Use `/orchestrate` command with `"design secure architecture for $ARGUMENTS" complex security-auditor,backend-architect,network-engineer,devops-troubleshooter --mode=dialectical`
- Generate a secure architecture through multi-expert collaboration
- Include threat modeling, defense layers, and security boundaries
- Ensure the architecture supports zero-trust principles and defense-in-depth

**Red Team Architecture Challenge:**
- Use `/guest_expert` command with `"cybersecurity" "How would you attack this $ARGUMENTS architecture?" --expertise-depth=authority --perspective-count=3 --style=adversarial`
- Assume the attacker's perspective to identify architecture weaknesses
- Generate attack scenarios and exploitation pathways
- Validate the architecture against sophisticated threat actors

**Security Assumption Audit:**
- Use `/assumption_audit` command with `"Security architecture assumptions for $ARGUMENTS" --audit-depth=paradigmatic --challenge-method=red-team-analysis`
- Challenge fundamental assumptions about security boundaries and trust models
- Examine assumptions about user behavior, system reliability, and the threat environment
- Generate alternative security paradigms and approaches

## Phase 2: Security Implementation

### 3. Backend Security Hardening
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Implement backend security measures for: $ARGUMENTS. Include authentication, authorization, input validation, and secure data handling based on security audit findings."
- Output: Secure API implementations, auth middleware, validation layers
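
Two of these measures, allow-list input validation and timing-safe token comparison, can be sketched as follows; the username pattern and token values are illustrative assumptions:

```python
import hmac
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(name):
    # Allow-list validation: accept only the expected shape, reject everything else.
    return bool(USERNAME_RE.match(name))

def verify_token(presented, expected):
    # hmac.compare_digest compares in constant time, avoiding the timing
    # side channel of an early-exit string comparison.
    return hmac.compare_digest(presented.encode(), expected.encode())

ok_user = validate_username("alice_01")
bad_user = validate_username("alice; DROP TABLE users")
ok_token = verify_token("s3cret", "s3cret")
bad_token = verify_token("s3cret", "other")
```

Validation layers like this sit in middleware so that every endpoint gets the same checks.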

### 4. Infrastructure Security
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Implement infrastructure security for: $ARGUMENTS. Configure firewalls, secure secrets management, implement least-privilege access, and set up security monitoring."
- Output: Infrastructure security configs, secrets management, monitoring setup

### 5. Frontend Security
- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Implement frontend security measures for: $ARGUMENTS. Include CSP headers, XSS prevention, secure authentication flows, and sensitive data handling."
- Output: Secure frontend code, CSP policies, auth integration

## Phase 3: Compliance and Testing

### 6. Compliance Verification
- Use Task tool with subagent_type="security-auditor"
- Prompt: "Verify compliance with security standards for: $ARGUMENTS. Check OWASP Top 10, GDPR, SOC 2, or other relevant standards. Validate all security implementations."
- Output: Compliance report, remediation requirements

### 7. Security Testing
- Use Task tool with subagent_type="test-automator"
- Prompt: "Create security test suites for: $ARGUMENTS. Include penetration tests, security regression tests, and automated vulnerability scanning."
- Output: Security test suite, penetration test results, CI/CD integration

## Phase 4: Deployment and Monitoring

### 8. Secure Deployment
- Use Task tool with subagent_type="deployment-engineer"
- Prompt: "Implement a secure deployment pipeline for: $ARGUMENTS. Include security gates, vulnerability scanning in CI/CD, and secure configuration management."
- Output: Secure CI/CD pipeline, deployment security checks, rollback procedures

### 9. Security Monitoring Setup
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Set up security monitoring and incident response for: $ARGUMENTS. Include intrusion detection, log analysis, and automated alerting."
- Output: Security monitoring dashboards, alert rules, incident response procedures
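
Automated alerting of this kind ultimately reduces to evaluating metrics against thresholds; the rule names and limits below are hypothetical:

```python
def evaluate_alerts(metrics, rules):
    """Fire an alert when a metric breaches its threshold.
    Rules map metric name -> (comparator, threshold)."""
    fired = []
    for name, (op, threshold) in rules.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported this interval
        breached = value > threshold if op == ">" else value < threshold
        if breached:
            fired.append(name)
    return sorted(fired)

rules = {
    "failed_logins_per_min": (">", 20),  # hypothetical brute-force signal
    "disk_free_percent": ("<", 10),      # hypothetical log-exhaustion signal
}
metrics = {"failed_logins_per_min": 57, "disk_free_percent": 42}
alerts = evaluate_alerts(metrics, rules)
```

A real deployment would express these rules in the monitoring system's own language; the point is that each rule pairs a metric with an explicit, documented threshold.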

## Coordination Notes
- Security findings from each phase inform subsequent implementations
- All agents must prioritize security in their recommendations
- Regular security reviews between phases ensure nothing is missed
- Document all security decisions and trade-offs

Security hardening target: $ARGUMENTS
48
commands/workflows/smart-fix.md
Normal file
@@ -0,0 +1,48 @@

---
model: claude-opus-4-1
---

Intelligently fix the issue using automatic agent selection with explicit Task tool invocations:

[Extended thinking: This workflow analyzes the issue and automatically routes to the most appropriate specialist agent(s). Complex issues may require multiple agents working together.]

First, analyze the issue to categorize it, then use the Task tool with the appropriate agent:

## Analysis Phase
Examine the issue "$ARGUMENTS" to determine the problem domain.

## Agent Selection and Execution

### For Deployment/Infrastructure Issues
If the issue involves deployment failures, infrastructure problems, or DevOps concerns:
- Use Task tool with subagent_type="devops-troubleshooter"
- Prompt: "Debug and fix this deployment/infrastructure issue: $ARGUMENTS"

### For Code Errors and Bugs
If the issue involves application errors, exceptions, or functional bugs:
- Use Task tool with subagent_type="debugger"
- Prompt: "Analyze and fix this code error: $ARGUMENTS. Provide root cause analysis and a solution."

### For Database Performance
If the issue involves slow queries, database bottlenecks, or data access patterns:
- Use Task tool with subagent_type="database-optimizer"
- Prompt: "Optimize database performance for: $ARGUMENTS. Include query analysis, indexing strategies, and schema improvements."

### For Application Performance
If the issue involves slow response times, high resource usage, or performance degradation:
- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Profile and optimize this application performance issue: $ARGUMENTS. Identify bottlenecks and provide optimization strategies."

### For Legacy Code Issues
If the issue involves outdated code, deprecated patterns, or technical debt:
- Use Task tool with subagent_type="legacy-modernizer"
- Prompt: "Modernize and fix this legacy code issue: $ARGUMENTS. Provide a migration path and updated implementation."

## Multi-Domain Coordination
For complex issues spanning multiple domains:
1. Use the primary agent based on the main symptom
2. Use secondary agents for related aspects
3. Coordinate fixes across all affected areas
4. Verify integration between different fixes
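
The selection logic above can be sketched as a keyword router; the keyword lists are illustrative, not an exhaustive taxonomy:

```python
ROUTES = [
    # (keywords, subagent_type) — first match wins.
    (("deploy", "kubernetes", "pipeline", "infra"), "devops-troubleshooter"),
    (("slow query", "index", "database"), "database-optimizer"),
    (("latency", "memory", "cpu", "performance"), "performance-engineer"),
    (("deprecated", "legacy", "migration"), "legacy-modernizer"),
    (("exception", "error", "bug", "crash"), "debugger"),
]

def select_agent(issue):
    text = issue.lower()
    for keywords, agent in ROUTES:
        if any(keyword in text for keyword in keywords):
            return agent
    return "debugger"  # sensible default for uncategorized issues

agent = select_agent("Deploy fails: Kubernetes pod stuck in CrashLoopBackOff")
```

Route order matters: deployment symptoms are checked before the generic error bucket, so a crashing pod goes to the devops-troubleshooter rather than the debugger.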

Issue: $ARGUMENTS
247
commands/workflows/tdd-cycle.md
Normal file
@@ -0,0 +1,247 @@

---
model: claude-opus-4-1
allowed-tools: Task, Read, Write, Bash(*), Grep, Glob
argument-hint: <feature-requirement> [--complexity=<level>] [--learning-mode=<approach>] [--dissent-level=<intensity>]
description: Test-Driven Development with multi-expert orchestration and adaptive learning integration
---

# Advanced TDD Orchestration Engine

Execute comprehensive Test-Driven Development through multi-expert collaboration with structured dissent, adaptive learning, and cognitive harmonics optimization. Transform traditional TDD into an intelligent, self-improving development process that builds both code quality and team understanding.

[Extended thinking: This enhanced workflow integrates the Split Team Framework for multi-perspective analysis, the Teacher Framework for adaptive learning, and structured dissent protocols for robust test design. Each phase includes constructive challenge mechanisms and meta-cognitive reflection for continuous improvement.]

## Configuration

### Multi-Expert Team Configuration
**Core TDD Specialists:**
- **Test Strategist**: Overall test approach and architecture design
- **Quality Guardian**: Test completeness and edge-case coverage advocate
- **Implementation Guide**: Code structure and maintainability focus
- **Performance Analyst**: Testing efficiency and execution speed optimization
- **Usability Advocate**: Developer experience and test readability champion

**Challenge Perspectives:**
- **Constructive Critic**: Questions test assumptions and approaches
- **Pragmatic Realist**: Balances ideal practices with practical constraints
- **Future-Proofing Visionary**: Considers long-term maintainability and evolution

### Adaptive Learning Parameters
- **Novice Mode**: Heavy scaffolding, detailed explanations, step-by-step guidance
- **Intermediate Mode**: Moderate guidance with pattern-recognition development
- **Advanced Mode**: Minimal scaffolding, collaborative peer-level interaction
- **Expert Mode**: Innovation-focused with paradigm challenging

### Quality Thresholds
- **Coverage Standards**: Line coverage 80%, branch coverage 75%, critical path 100%
- **Complexity Limits**: Cyclomatic complexity ≤ 10, method length ≤ 20 lines
- **Architecture Standards**: Class length ≤ 200 lines, duplicate blocks ≤ 3 lines
- **Test Quality**: Fast (<100 ms), isolated, repeatable, self-validating
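
A quality gate enforcing these thresholds can be sketched as a simple check that returns the list of violations; the metric names are illustrative:

```python
THRESHOLDS = {
    "line_coverage": 80.0,
    "branch_coverage": 75.0,
    "critical_path_coverage": 100.0,
    "max_cyclomatic_complexity": 10,
}

def gate(metrics):
    """Return the list of threshold violations; an empty list means the gate passes."""
    violations = []
    if metrics["line_coverage"] < THRESHOLDS["line_coverage"]:
        violations.append("line_coverage")
    if metrics["branch_coverage"] < THRESHOLDS["branch_coverage"]:
        violations.append("branch_coverage")
    if metrics["critical_path_coverage"] < THRESHOLDS["critical_path_coverage"]:
        violations.append("critical_path_coverage")
    if metrics["max_cyclomatic_complexity"] > THRESHOLDS["max_cyclomatic_complexity"]:
        violations.append("max_cyclomatic_complexity")
    return violations

passing = gate({"line_coverage": 91.0, "branch_coverage": 82.5,
                "critical_path_coverage": 100.0, "max_cyclomatic_complexity": 7})
failing = gate({"line_coverage": 64.0, "branch_coverage": 82.5,
                "critical_path_coverage": 100.0, "max_cyclomatic_complexity": 12})
```

In practice the metrics would come from the coverage and linting tools; the gate just makes the thresholds above executable rather than aspirational.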

## Phase 1: Multi-Expert Requirements Analysis and Test Strategy

### 1. Collaborative Requirements Analysis
[Extended thinking: Leverage multi-perspective analysis to ensure comprehensive understanding of requirements from different stakeholder viewpoints, reducing blind spots and improving test coverage.]

**Primary Analysis:**
- Use `/multi_perspective` command with `"$ARGUMENTS requirements analysis" technical --perspectives=5 --integration=comprehensive`
- **Test Strategist**: Overall testing approach and comprehensive coverage strategy
- **Quality Guardian**: Edge cases, error conditions, and boundary-value identification
- **Implementation Guide**: Code structure implications and testability requirements
- **Performance Analyst**: Performance testing needs and execution constraints
- **Usability Advocate**: Developer experience and test maintainability considerations

**Constructive Challenge:**
- Use `/constructive_dissent` command with `"Proposed test strategy for $ARGUMENTS" --dissent-intensity=systematic --alternatives=2`
- Challenge assumptions about what needs testing and how
- Generate alternative testing approaches for comparison
- Question whether requirements are testable as specified

**Adaptive Learning Integration:**
- Use `/teach_concept` command with `"test strategy for $ARGUMENTS" intermediate --approach=socratic` for learning-oriented sessions
- Build understanding of testing principles through guided discovery
- Develop pattern recognition for similar future testing challenges

### 2. Enhanced Test Architecture Design
[Extended thinking: Create robust test architecture through collaborative design with structured disagreement to identify potential weaknesses and improvements.]

**Collaborative Design:**
- Use `/orchestrate` command with `"design test architecture for $ARGUMENTS" moderate test-automator,performance-engineer,architect-review --mode=dialectical`
- Generate test architecture through structured collaboration
- Include fixture design, mock strategy, and test data management
- Ensure the architecture supports TDD principles: fast, isolated, repeatable, self-validating

**Architecture Validation:**
- Use `/assumption_audit` command with `"Test architecture assumptions for $ARGUMENTS" --audit-depth=structural --challenge-method=alternative-generation`
- Challenge fundamental assumptions about test organization and structure
- Generate alternative architectural approaches for comparison
- Validate the architecture against long-term maintainability and scalability needs

## Phase 2: RED - Write Failing Tests

### 3. Write Unit Tests (Failing)
- Use Task tool with subagent_type="test-automator"
- Prompt: "Write FAILING unit tests for: $ARGUMENTS. Tests must fail initially. Include edge cases, error scenarios, and happy paths. DO NOT implement production code."
- Output: Failing unit tests, test documentation
- **CRITICAL**: Verify all tests fail with expected error messages

### 4. Verify Test Failure
- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Verify that all tests for: $ARGUMENTS are failing correctly. Ensure failures are for the right reasons (missing implementation, not test errors). Confirm no false positives."
- Output: Test failure verification report
- **GATE**: Do not proceed until all tests fail appropriately

## Phase 3: GREEN - Make Tests Pass

### 5. Minimal Implementation
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Implement MINIMAL code to make tests pass for: $ARGUMENTS. Focus only on making tests green. Do not add extra features or optimizations. Keep it simple."
- Output: Minimal working implementation
- Constraint: No code beyond what's needed to pass tests

### 6. Verify Test Success
- Use Task tool with subagent_type="test-automator"
- Prompt: "Run all tests for: $ARGUMENTS and verify they pass. Check test coverage metrics. Ensure no tests were accidentally broken."
- Output: Test execution report, coverage metrics
- **GATE**: All tests must pass before proceeding

## Phase 4: REFACTOR - Improve Code Quality

### 7. Code Refactoring
- Use Task tool with subagent_type="code-reviewer"
- Prompt: "Refactor implementation for: $ARGUMENTS while keeping tests green. Apply SOLID principles, remove duplication, improve naming, and optimize performance. Run tests after each refactoring."
- Output: Refactored code, refactoring report
- Constraint: Tests must remain green throughout

### 8. Test Refactoring
- Use Task tool with subagent_type="test-automator"
- Prompt: "Refactor tests for: $ARGUMENTS. Remove test duplication, improve test names, extract common fixtures, and enhance test readability. Ensure tests still provide the same coverage."
- Output: Refactored tests, improved test structure
- Validation: Coverage metrics unchanged or improved

## Phase 5: Integration and System Tests

### 9. Write Integration Tests (Failing First)
- Use Task tool with subagent_type="test-automator"
- Prompt: "Write FAILING integration tests for: $ARGUMENTS. Test component interactions, API contracts, and data flow. Tests must fail initially."
- Output: Failing integration tests
- Validation: Tests fail due to missing integration logic

### 10. Implement Integration
- Use Task tool with subagent_type="backend-architect"
- Prompt: "Implement integration code for: $ARGUMENTS to make integration tests pass. Focus on component interaction and data flow."
- Output: Integration implementation
- Validation: All integration tests pass

## Phase 6: Continuous Improvement Cycle

### 11. Performance and Edge Case Tests
- Use Task tool with subagent_type="test-automator"
- Prompt: "Add performance tests and additional edge case tests for: $ARGUMENTS. Include stress tests, boundary tests, and error recovery tests."
- Output: Extended test suite
- Metric: Increased test coverage and scenario coverage

### 12. Final Code Review
- Use Task tool with subagent_type="architect-review"
- Prompt: "Perform a comprehensive review of: $ARGUMENTS. Verify the TDD process was followed; check code quality, test quality, and coverage. Suggest improvements."
- Output: Review report, improvement suggestions
- Action: Implement critical suggestions while maintaining green tests

## Incremental Development Mode

For test-by-test development:
1. Write ONE failing test
2. Make ONLY that test pass
3. Refactor if needed
4. Repeat for the next test

Use this approach by adding the `--incremental` flag to focus on one test at a time.
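
One incremental cycle might look like the sketch below, using plain asserts and a hypothetical `slugify` feature: the test exists first (RED), then the minimal implementation makes it pass (GREEN):

```python
# RED: the test is written before any implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"

# GREEN: the minimal implementation that makes the test pass.
import re

def slugify(text):
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics
    return text.strip("-")

test_slugify()  # raises AssertionError if the cycle is not yet green
```

The REFACTOR step would then clean up the implementation while rerunning `test_slugify` after every change.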

## Test Suite Mode

For comprehensive test suite development:
1. Write ALL tests for a feature/module (failing)
2. Implement code to pass ALL tests
3. Refactor the entire module
4. Add integration tests

Use this approach by adding the `--suite` flag for batch test development.

## Validation Checkpoints

### RED Phase Validation
- [ ] All tests written before implementation
- [ ] All tests fail with meaningful error messages
- [ ] Test failures are due to missing implementation
- [ ] No test passes accidentally

### GREEN Phase Validation
- [ ] All tests pass
- [ ] No extra code beyond test requirements
- [ ] Coverage meets minimum thresholds
- [ ] No test was modified to make it pass

### REFACTOR Phase Validation
- [ ] All tests still pass after refactoring
- [ ] Code complexity reduced
- [ ] Duplication eliminated
- [ ] Performance improved or maintained
- [ ] Test readability improved

## Coverage Reports

Generate coverage reports after each phase:
- Line coverage
- Branch coverage
- Function coverage
- Statement coverage

## Failure Recovery

If TDD discipline is broken:
1. **STOP** immediately
2. Identify which phase was violated
3. Roll back to the last valid state
4. Resume from the correct phase
5. Document the lesson learned

## TDD Metrics Tracking

Track and report:
- Time in each phase (Red/Green/Refactor)
- Number of test-implementation cycles
- Coverage progression
- Refactoring frequency
- Defect escape rate

## Anti-Patterns to Avoid

- Writing implementation before tests
- Writing tests that already pass
- Skipping the refactor phase
- Writing multiple features without tests
- Modifying tests to make them pass
- Ignoring failing tests
- Writing tests after implementation

## Success Criteria

- 100% of code written test-first
- All tests pass continuously
- Coverage exceeds thresholds
- Code complexity within limits
- Zero defects in covered code
- Clear test documentation
- Fast test execution (< 5 seconds for unit tests)

## Notes

- Enforce strict RED-GREEN-REFACTOR discipline
- Each phase must be completed before moving to the next
- Tests are the specification
- If a test is hard to write, the design needs improvement
- Refactoring is NOT optional
- Keep test execution fast
- Tests should be independent and isolated

TDD implementation for: $ARGUMENTS
1343
commands/workflows/workflow-automate.md
Normal file
File diff suppressed because it is too large