Initial commit

Commit `a3a73d67d7` by Zhongwei Li, 2025-11-30 08:46:50 +08:00
67 changed files with 19703 additions and 0 deletions

`skills/summoner/README.md`:
# Summoner Skill
**Multi-Agent Orchestration for Complex Tasks**
The Summoner skill transforms Claude Code into a sophisticated project orchestrator, breaking down complex tasks into manageable units and coordinating specialized agents to deliver high-quality, production-ready code.
## What is the Summoner?
The Summoner is a meta-skill that excels at:
- **Task Decomposition**: Breaking complex requirements into atomic, well-defined tasks
- **Context Management**: Preserving all necessary context while avoiding bloat
- **Agent Orchestration**: Summoning and coordinating specialized agents
- **Quality Assurance**: Enforcing DRY, CLEAN, and SOLID principles throughout
- **Risk Mitigation**: Preventing assumptions, scope creep, and breaking changes
## When to Use
### ✅ Use Summoner For:
- **Multi-component features** (3+ files/components)
- **Large refactoring projects** (architectural changes)
- **Migration projects** (API versions, frameworks, databases)
- **Complex bug fixes** (multiple related issues)
- **New system implementations** (auth, payments, etc.)
### ❌ Don't Use Summoner For:
- Single file changes
- Simple bug fixes
- Straightforward feature additions
- Routine maintenance
- Quick patches
## How It Works
```
1. Task Analysis
2. Create Mission Control Document (MCD)
3. Decompose into Phases & Tasks
4. For Each Task:
- Summon Specialized Agent
- Provide Bounded Context
- Monitor & Validate
5. Integration & Quality Control
6. Deliver Production-Ready Code
```
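The loop above can be sketched in Python (illustrative only; `run_mission`, `summon_agent`, and the task fields are hypothetical names, not part of the skill's API):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: str
    agent_type: str   # e.g. "Backend Agent"
    context: str      # bounded context for this task only
    done: bool = False

@dataclass
class Phase:
    name: str
    tasks: list = field(default_factory=list)

def run_mission(phases, summon_agent, validate):
    """Walk each phase; summon one agent per task and validate before moving on."""
    for phase in phases:
        for task in phase.tasks:
            output = summon_agent(task.agent_type, task.context)
            if not validate(task, output):
                raise RuntimeError(f"quality gate failed for task {task.task_id}")
            task.done = True
    return all(t.done for p in phases for t in p.tasks)
```

The key property is that validation happens per task, so a failed quality gate halts the mission before later tasks build on bad output.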
## Quick Start
### 1. Activate the Skill
Simply request it in Claude Code:
```
Use the summoner skill to implement user authentication with OAuth2
```
### 2. Or Explicitly Reference
```
I need to refactor our API layer to use GraphQL. This is a complex task that
will touch multiple services. Can you use the Summoner skill to orchestrate this?
```
## Components
### 📄 Templates
- **`mission-control-template.md`**: Master planning document
- **`agent-spec-template.md`**: Agent assignment specifications
- **`quality-gates.md`**: Comprehensive quality checklist
### 🔧 Scripts
- **`init_mission.py`**: Initialize new Mission Control Documents
- **`validate_quality.py`**: Interactive quality gate validation
### 📚 References
All templates and quality standards are in the `References/` directory.
## Directory Structure
```
summoner/
├── SKILL.md # Main skill definition
├── README.md # This file
├── scripts/
│ ├── init_mission.py # MCD initializer
│ └── validate_quality.py # Quality validator
├── References/
│ ├── mission-control-template.md
│ ├── agent-spec-template.md
│ └── quality-gates.md
└── Assets/
└── (reserved for future templates)
```
## Example Workflow
### Scenario: Implement Real-Time Notifications
1. **Activate Summoner**
```
Use the summoner skill to add real-time notifications to our app
using WebSockets. This needs to work across web and mobile clients.
```
2. **Summoner Creates MCD**
- Analyzes requirements
- Creates `mission-real-time-notifications.md`
- Breaks down into phases and tasks
3. **Phase 1: Backend Infrastructure**
- Task 1.1: WebSocket server setup (Backend Agent)
- Task 1.2: Message queue integration (Backend Agent)
- Task 1.3: Authentication middleware (Security Agent)
4. **Phase 2: Client Integration**
- Task 2.1: Web client WebSocket handler (Frontend Agent)
- Task 2.2: Mobile client integration (Mobile Agent)
- Task 2.3: Reconnection logic (Frontend/Mobile Agents)
5. **Phase 3: Testing & Polish**
- Task 3.1: Integration tests (QA Agent)
- Task 3.2: Load testing (Performance Agent)
- Task 3.3: Documentation (Documentation Agent)
6. **Quality Control**
- Validate all quality gates
- Integration testing
- Final review
## Key Features
### 🎯 Context Preservation
Every task in the MCD includes:
- Exact context needed (no more, no less)
- Clear inputs and outputs
- Explicit dependencies
- Validation criteria
### 🛡️ Quality Enforcement
Three levels of quality gates:
- **Task-level**: DRY, testing, documentation
- **Phase-level**: Integration, CLEAN, performance, security
- **Project-level**: SOLID, architecture, production readiness
### 📊 Progress Tracking
Mission Control Document provides:
- Real-time progress updates
- Risk register
- Decision log
- Integration checklist
### 🚫 Zero Slop Policy
The Summoner prevents:
- Assumption-driven development
- Context bloat
- Scope creep
- Breaking changes without migration paths
- Code duplication
- Untested code
## Using the Scripts
### Initialize a Mission
```bash
python .claude/skills/summoner/scripts/init_mission.py "Add User Authentication"
```
Creates `mission-add-user-authentication.md` ready for editing.
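The title-to-filename mapping can be reproduced with a small slug function (a sketch; this assumes the script lower-cases the title and hyphenates word boundaries, so check `init_mission.py` for the authoritative behavior):

```python
import re

def mission_filename(title: str) -> str:
    """Map a mission title to its mission-<slug>.md filename."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"mission-{slug}.md"

# mission_filename("Add User Authentication") -> "mission-add-user-authentication.md"
```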
### Validate Quality
```bash
# Interactive validation
python .claude/skills/summoner/scripts/validate_quality.py --level task --interactive
# Print checklist for manual review
python .claude/skills/summoner/scripts/validate_quality.py --level project
```
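A minimal sketch of what an interactive gate check like this might do (the checklist items and prompts below are illustrative, not the script's actual contents):

```python
CHECKLISTS = {
    "task": ["No duplicated logic", "Unit tests written", "Documentation updated"],
    "project": ["SOLID principles followed", "Production readiness verified"],
}

def validate(level: str, answer=input) -> bool:
    """Prompt through a level's checklist; pass only if every item is confirmed."""
    passed = True
    for item in CHECKLISTS[level]:
        if answer(f"{item}? [y/n] ").strip().lower() != "y":
            print(f"FAIL: {item}")
            passed = False
    return passed
```

Injecting `answer` instead of calling `input` directly keeps the checklist logic testable without a terminal.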
## Quality Standards
### DRY (Don't Repeat Yourself)
- No code duplication
- Shared logic extracted
- Single source of truth for data
### CLEAN Code
- **C**lear: Easy to understand
- **L**imited: Single responsibility
- **E**xpressive: Intent-revealing names
- **A**bstracted: Proper abstraction levels
- **N**eat: Well-organized structure
### SOLID Principles
- **S**ingle Responsibility
- **O**pen/Closed
- **L**iskov Substitution
- **I**nterface Segregation
- **D**ependency Inversion
See `References/quality-gates.md` for complete checklists.
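As a concrete illustration of the DRY standard, duplicated logic gets extracted into a single helper (a hypothetical example, not project code):

```python
# Before: the same normalization was repeated at two call sites.
# def create_user(email): email = email.strip().lower(); ...
# def invite_user(email): email = email.strip().lower(); ...

def normalize_email(email: str) -> str:
    """Single source of truth for email normalization."""
    return email.strip().lower()

def create_user(email: str) -> dict:
    return {"email": normalize_email(email), "status": "created"}

def invite_user(email: str) -> dict:
    return {"email": normalize_email(email), "status": "invited"}
```

If the normalization rule ever changes, it now changes in exactly one place.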
## Best Practices
### 1. Front-Load Planning
Spend time on the MCD before coding. A well-planned mission executes smoothly.
### 2. Bounded Context
Give each agent exactly what they need. Too much context is as bad as too little.
### 3. Validate Early, Validate Often
Run quality gates at task completion, not just at the end.
### 4. Document Decisions
Use the Decision Log in the MCD to record why choices were made.
### 5. Update the MCD
Keep the MCD current as the project evolves. It's a living document.
## Troubleshooting
### Agent Asking for Already-Provided Context
**Problem**: Agent requests information that's in the MCD.
**Solution**: The agent spec wasn't clear enough. Update the agent spec template to explicitly reference the MCD sections.
### Quality Gates Failing
**Problem**: Code doesn't pass quality checks.
**Solution**:
1. Identify which gate failed
2. Create a remediation task
3. Assign to appropriate agent
4. Revalidate after fix
### Scope Creep
**Problem**: Tasks growing beyond original boundaries.
**Solution**:
1. Pause execution
2. Review MCD success criteria
3. Either add new tasks or trim scope
4. Update MCD and proceed
### Integration Issues
**Problem**: Components don't work together.
**Solution**:
1. Review interface definitions in MCD
2. Check if agents followed specs
3. Add integration tests
4. Document the interface contract better
## Examples
See the `examples/` directory in the main ClaudeShack repo for:
- Complete Mission Control Documents
- Real-world orchestration scenarios
- Quality gate validation reports
## Contributing
Ideas for improving the Summoner skill?
- Suggest new templates
- Propose quality gates
- Share success stories
- Report issues
## Version History
- **v1.0** (2025-11-19): Initial release
- Mission Control Document system
- Quality gates framework
- Agent orchestration workflows
- Supporting scripts and templates
## License
Part of the ClaudeShack skill collection. See main repository for licensing.
---
**"Context is precious. Orchestration is power. Quality is non-negotiable."**

`skills/summoner/References/agent-spec-template.md`:
# Agent Specification Template
Use this template when summoning specialized agents to ensure they have exactly what they need - no more, no less.
---
## Agent Specification: [AGENT NAME/ID]
**Created**: [DATE]
**Summoner**: [Who summoned this agent]
**Status**: [Active | Complete | Blocked]
---
## Agent Profile
### Specialization
[What this agent is expert in - e.g., "Frontend React Developer", "Database Optimization Specialist", "Security Auditor"]
### Assigned Tasks
- Task 1.2: [Task Name]
- Task 1.3: [Task Name]
- [List all tasks assigned to this agent]
### Expected Completion
[Date/time or "After Task X.Y completes"]
---
## Context Package
### What This Agent Needs to Know
[Provide ONLY the context necessary for their tasks. Link to full docs rather than duplicating.]
**Project Overview (Brief)**:
[2-3 sentences about the overall project - just enough to understand their role]
**Their Role**:
[1-2 sentences about what they're responsible for in the bigger picture]
**Specific Context**:
```
[The actual detailed context needed for their tasks:
- Relevant architecture decisions
- Tech stack specifics
- Existing patterns to follow
- Constraints to respect
- Examples to reference]
```
### What This Agent Does NOT Need
[Explicitly list what context you're NOT providing to avoid bloat:]
- ❌ [Irrelevant context 1]
- ❌ [Irrelevant context 2]
- ❌ [Information they can look up themselves]
---
## Task Details
### Task [ID]: [Name]
**Objective**:
[Clear statement of what needs to be accomplished]
**Current State**:
```
[What exists now - relevant files, implementations, issues]
File: path/to/file.ts:123
Current implementation: [brief description]
Problem: [what needs to change]
```
**Desired End State**:
```
[What should exist after this task]
- Deliverable 1
- Deliverable 2
- Tests passing
- Documentation updated
```
**Acceptance Criteria**:
- [ ] Criterion 1 (specific, testable)
- [ ] Criterion 2 (specific, testable)
- [ ] All tests pass
- [ ] Quality gates pass
- [ ] Documentation complete
**Constraints**:
- Must: [Things that must be done]
- Must NOT: [Things to avoid]
- Should: [Preferences/best practices]
**Reference Files**:
- `path/to/relevant/file.ts` - [Why this is relevant]
- `path/to/example.ts:45-67` - [What pattern to follow]
- `docs/architecture.md` - [Link to full docs]
---
## Inputs & Dependencies
### Inputs Provided
[What this agent is receiving to start their work:]
- ✅ Input 1: [Description and location]
- ✅ Input 2: [Description and location]
### Dependencies
[What must be complete before this agent can start:]
- Task X.Y: [Name] - Status: [Complete/In Progress]
- Decision Z: [Description] - Status: [Decided/Pending]
### Blockers
[Current blockers if any:]
- ❌ [Blocker description] - Owner: [Who's resolving]
- OR: None - Ready to proceed
---
## Outputs Expected
### Primary Deliverables
1. **[Deliverable 1]**
- Format: [e.g., "Modified file at path/to/file.ts"]
- Requirements: [Specific requirements]
- Validation: [How to verify it's correct]
2. **[Deliverable 2]**
- Format: [...]
- Requirements: [...]
- Validation: [...]
### Secondary Deliverables
- [ ] Tests for new functionality
- [ ] Documentation updates
- [ ] Updated MCD if any changes to plan
- [ ] Quality gate sign-off
### Handoff Protocol
[How to hand off to next agent or back to summoner:]
```
1. Complete all deliverables
2. Run quality gate checklist
3. Document any deviations from plan
4. Update MCD progress tracking
5. Report completion with summary
```
---
## Quality Standards
### Code Quality
- [ ] Follows DRY principle
- [ ] Follows CLEAN code practices
- [ ] Follows SOLID principles (applicable ones)
- [ ] Consistent with project style
- [ ] Properly documented
### Testing
- [ ] Unit tests written
- [ ] Integration tests if applicable
- [ ] All tests passing
- [ ] Edge cases covered
### Security
- [ ] No vulnerabilities introduced
- [ ] Input validation
- [ ] Proper error handling
- [ ] No sensitive data exposed
### Performance
- [ ] Meets performance requirements
- [ ] No unnecessary operations
- [ ] Efficient algorithms
- [ ] Resources properly managed
---
## Communication Protocol
### Status Updates
**Frequency**: [e.g., "After each task completion" or "Daily"]
**Format**: [How to report - e.g., "Comment in MCD"]
**Content**: [What to include - progress, blockers, questions]
### Questions/Clarifications
**How to Ask**: [Process for getting clarifications]
**Response SLA**: [When to expect answers]
**Escalation**: [When and how to escalate]
### Completion Report
When done, provide:
```markdown
## Completion Report: [Agent Name]
### Summary
[1-2 sentences on what was accomplished]
### Deliverables
- ✅ Deliverable 1: [Location/description]
- ✅ Deliverable 2: [Location/description]
### Quality Gates
- ✅ Code Quality: PASS
- ✅ Testing: PASS
- ✅ Documentation: PASS
### Deviations from Plan
- [None] OR
- [Deviation 1 - why it happened - impact]
### Blockers Encountered
- [None] OR
- [Blocker 1 - how it was resolved]
### Recommendations
[Any suggestions for next phases or improvements]
### Next Steps
[What should happen next]
```
---
## Tools & Resources
### Tools Available
- [Tool/Framework 1]: [Purpose]
- [Tool/Framework 2]: [Purpose]
- [Testing framework]: [How to run tests]
- [Linter]: [How to check style]
### Reference Documentation
- [Link to tech docs]
- [Link to internal docs]
- [Link to examples]
- [Link to style guide]
### Example Code
[Paste or link to example code that shows the pattern to follow]
```typescript
// Example of preferred pattern
function examplePattern() {
// This is how we do things in this project
}
```
---
## Emergency Contacts
**Summoner**: [How to reach the summoner]
**Technical Lead**: [If different from summoner]
**Domain Expert**: [For domain-specific questions]
**Blocker Resolution**: [Who to contact if blocked]
---
## Success Indicators
**This agent is succeeding if:**
- Delivering on time
- No out-of-scope work
- Quality gates passing
- No blockers, or blockers are resolved quickly
- Clear communication
**Warning signs:**
- Asking for context that was already provided
- Scope creep
- Quality gate failures
- Long periods of silence
- Assumptions not validated
---
## Agent Activation
**Summoning Command**:
```
Using the Task tool with subagent_type="general-purpose":
"You are a [SPECIALIZATION] agent. Your mission is to [OBJECTIVE].
Context: [PROVIDE CONTEXT PACKAGE]
Your tasks:
[LIST TASKS WITH DETAILS]
Deliverables expected:
[LIST DELIVERABLES]
Quality standards:
[REFERENCE QUALITY GATES]
Report back when complete with a completion report."
```
**Estimated Duration**: [Time estimate]
**Complexity**: [Low/Medium/High]
**Priority**: [P0/P1/P2/P3]
---
## Notes
[Any additional notes, special considerations, or context that doesn't fit elsewhere]
---
**Template Version**: 1.0
**Last Updated**: [Date]

`skills/summoner/References/mission-control-template.md`:
# Mission Control: [TASK NAME]
**Created**: [DATE]
**Status**: [Planning | In Progress | Integration | Complete]
**Summoner**: [Agent/User Name]
---
## Executive Summary
[Provide a concise 1-2 paragraph overview of the entire initiative. Include:
- What is being built/changed
- Why it's important
- High-level approach
- Expected impact]
---
## Success Criteria
Define what "done" looks like:
- [ ] **Criterion 1**: [Specific, measurable success indicator]
- [ ] **Criterion 2**: [Specific, measurable success indicator]
- [ ] **Criterion 3**: [Specific, measurable success indicator]
- [ ] **All tests passing**: Unit, integration, and e2e tests pass
- [ ] **Documentation complete**: All changes documented
- [ ] **Quality gates passed**: DRY, CLEAN, SOLID principles followed
---
## Context & Constraints
### Technical Context
**Current Architecture:**
[Brief description of relevant architecture, tech stack, patterns in use]
**Relevant Existing Implementations:**
- `path/to/file.ts:123` - [What this does and why it's relevant]
- `path/to/other/file.ts:456` - [What this does and why it's relevant]
**Technology Stack:**
- [Framework/Library 1]
- [Framework/Library 2]
- [Database/Store]
- [Other relevant tech]
### Business Context
**User Impact:**
[How this affects end users]
**Priority:**
[High/Medium/Low and why]
**Stakeholders:**
[Who cares about this and why]
### Constraints
**Performance:**
- [Specific performance requirements]
**Compatibility:**
- [Browser support, API versions, etc.]
**Security:**
- [Security considerations and requirements]
**Timeline:**
- [Any time constraints]
**Other:**
- [Any other constraints or limitations]
---
## Task Index
### Phase 1: [PHASE NAME - e.g., "Foundation & Setup"]
#### Task 1.1: [Specific Task Name]
**Agent Type**: [e.g., Backend Engineer, Frontend Specialist, DevOps]
**Responsibility**:
[Clear, bounded description of what this agent is responsible for. Use active voice.]
**Context Needed**:
```
[ONLY the specific context this agent needs. Reference sections above or external docs.
DO NOT duplicate large amounts of text - point to it instead.]
```
**Inputs**:
- [What must exist before this task can start]
- [Files, data, decisions, or outputs from other tasks]
**Outputs**:
- [ ] [Specific deliverable 1]
- [ ] [Specific deliverable 2]
- [ ] [Tests for this component]
**Validation Criteria**:
```
How to verify this task is complete and correct:
- [ ] Validation point 1
- [ ] Validation point 2
- [ ] Tests pass
- [ ] Code review checklist items
```
**Dependencies**:
- None (for first task) OR
- Requires: Task X.Y to be complete
- Blocked by: [What's blocking this if anything]
**Estimated Complexity**: [Low/Medium/High]
---
#### Task 1.2: [Next Task Name]
[Repeat structure above]
---
### Phase 2: [NEXT PHASE NAME]
[Continue with tasks for next phase]
---
## Quality Gates
### Code Quality Standards
- [ ] **DRY (Don't Repeat Yourself)**
- No duplicated logic or code blocks
- Shared functionality extracted into reusable utilities
- Configuration centralized
- [ ] **CLEAN Code**
- Meaningful variable and function names
- Functions do one thing well
- Comments explain WHY, not WHAT
- Consistent formatting and style
- [ ] **SOLID Principles**
- Single Responsibility: Each module/class has one reason to change
- Open/Closed: Open for extension, closed for modification
- Liskov Substitution: Subtypes are substitutable for base types
- Interface Segregation: No client forced to depend on unused methods
- Dependency Inversion: Depend on abstractions, not concretions
- [ ] **Security**
- No injection vulnerabilities (SQL, XSS, Command, etc.)
- Proper authentication and authorization
- Sensitive data properly handled
- Dependencies checked for vulnerabilities
- [ ] **Performance**
- Meets stated performance requirements
- No unnecessary computations or renders
- Efficient algorithms and data structures
- Proper resource cleanup
### Process Quality Standards
- [ ] **Testing**
- Unit tests for all new functions/components
- Integration tests for component interactions
- E2E tests for critical user paths
- Edge cases covered
- All tests passing
- [ ] **Documentation**
- Public APIs documented
- Complex logic explained
- README updated if needed
- Migration guide if breaking changes
- [ ] **Integration**
- No breaking changes (or explicitly documented with migration path)
- Backwards compatible where possible
- All integrations tested
- Dependencies updated
- [ ] **Code Review**
- Self-review completed
- Peer review if applicable
- All review comments addressed
---
## Agent Roster
### [Agent Role/Name 1]
**Specialization**: [What domain expertise this agent brings]
**Assigned Tasks**:
- Task 1.1
- Task 2.3
**Context Provided**:
- Section: [Reference to MCD sections this agent needs]
- Files: [Key files this agent will work with]
- External Docs: [Any external documentation needed]
**Communication Protocol**:
- Reports to: [Who/what]
- Updates: [When and how to provide status updates]
- Blockers: [How to escalate blockers]
---
### [Agent Role/Name 2]
[Repeat structure above]
---
## Risk Register
| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| [Risk description] | Low/Med/High | Low/Med/High | [How we're mitigating this] |
| [Risk description] | Low/Med/High | Low/Med/High | [How we're mitigating this] |
---
## Progress Tracking
### Phase 1: [PHASE NAME]
- [x] Task 1.1: [Name] - ✅ Complete
- [ ] Task 1.2: [Name] - 🔄 In Progress
- [ ] Task 1.3: [Name] - ⏸️ Blocked by X
- [ ] Task 1.4: [Name] - ⏳ Pending
### Phase 2: [PHASE NAME]
- [ ] Task 2.1: [Name] - ⏳ Pending
---
## Decision Log
| Date | Decision | Rationale | Impact |
|------|----------|-----------|--------|
| [DATE] | [What was decided] | [Why this decision] | [What this affects] |
---
## Integration Checklist
Final integration before marking complete:
- [ ] All tasks completed and validated
- [ ] All tests passing (unit, integration, e2e)
- [ ] No breaking changes or migration guide provided
- [ ] Performance benchmarks met
- [ ] Security review passed
- [ ] Documentation complete
- [ ] Quality gates all green
- [ ] Stakeholder acceptance (if applicable)
---
## Lessons Learned
[To be filled at completion - what went well, what could improve for next time]
---
## References
- [Link to relevant docs]
- [Link to design docs]
- [Link to related issues/PRs]

`skills/summoner/References/quality-gates.md`:
# Quality Gates Checklist
This document provides detailed quality gates for validating work at task, phase, and project levels.
---
## Task-Level Quality Gates
Run these checks after completing each individual task:
### ✅ Functional Requirements
- [ ] All specified outputs delivered
- [ ] Functionality works as described
- [ ] Edge cases handled
- [ ] Error cases handled gracefully
- [ ] No regression in existing functionality
### ✅ Code Quality
- [ ] Code is readable and self-documenting
- [ ] Variable/function names are meaningful
- [ ] No magic numbers or strings
- [ ] No commented-out code (unless explicitly documented why)
- [ ] Consistent code style with project
### ✅ DRY (Don't Repeat Yourself)
- [ ] No duplicated logic
- [ ] Shared functionality extracted to utilities
- [ ] Constants defined once, referenced everywhere
- [ ] No copy-paste code blocks
### ✅ Testing
- [ ] Unit tests written for new code
- [ ] Tests cover happy path
- [ ] Tests cover edge cases
- [ ] Tests cover error conditions
- [ ] All tests pass
- [ ] Test names clearly describe what they test
### ✅ Documentation
- [ ] Complex logic has explanatory comments
- [ ] Public APIs documented (JSDoc, docstrings, etc.)
- [ ] README updated if user-facing changes
- [ ] Breaking changes documented
---
## Phase-Level Quality Gates
Run these checks after completing a phase (group of related tasks):
### ✅ Integration
- [ ] All components integrate correctly
- [ ] Data flows between components as expected
- [ ] No integration bugs
- [ ] APIs between components are clean
- [ ] Interfaces are well-defined
### ✅ CLEAN Principles
- [ ] **C**lear: Code is easy to understand
- [ ] **L**imited: Functions/methods have single responsibility
- [ ] **E**xpressive: Naming reveals intent
- [ ] **A**bstracted: Proper level of abstraction
- [ ] **N**eat: Organized, well-structured code
### ✅ Performance
- [ ] No obvious performance issues
- [ ] Efficient algorithms used
- [ ] No unnecessary computations
- [ ] Resources properly managed (memory, connections, etc.)
- [ ] Meets stated performance requirements
### ✅ Security
- [ ] No injection vulnerabilities (SQL, XSS, Command, etc.)
- [ ] Input validation in place
- [ ] Output encoding where needed
- [ ] Authentication/authorization checked
- [ ] Sensitive data not logged or exposed
- [ ] Dependencies have no known vulnerabilities
---
## Project-Level Quality Gates
Run these checks before marking the entire project complete:
### ✅ SOLID Principles
#### Single Responsibility Principle
- [ ] Each class/module has one reason to change
- [ ] Each function does one thing well
- [ ] No god objects or god functions
- [ ] Responsibilities clearly separated
#### Open/Closed Principle
- [ ] Open for extension (can add new behavior)
- [ ] Closed for modification (don't change existing code)
- [ ] Use abstractions (interfaces, base classes) for extension points
- [ ] Configuration over hardcoding
#### Liskov Substitution Principle
- [ ] Subtypes can replace base types without breaking
- [ ] Derived classes don't strengthen preconditions
- [ ] Derived classes don't weaken postconditions
- [ ] Inheritance is an "is-a" relationship, not "has-a"
#### Interface Segregation Principle
- [ ] Interfaces are focused and cohesive
- [ ] No client forced to depend on methods it doesn't use
- [ ] Many small interfaces > one large interface
- [ ] Clients see only methods they need
#### Dependency Inversion Principle
- [ ] High-level modules don't depend on low-level modules
- [ ] Both depend on abstractions
- [ ] Abstractions don't depend on details
- [ ] Details depend on abstractions
- [ ] Dependencies injected, not hardcoded
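A minimal Python sketch of the last point (hypothetical classes; the high-level service receives its store through injection rather than constructing a concrete one):

```python
from abc import ABC, abstractmethod

class MessageStore(ABC):
    """Abstraction both the service and the concrete store depend on."""
    @abstractmethod
    def save(self, message: str) -> None: ...

class InMemoryStore(MessageStore):
    """Low-level detail: one possible implementation of the abstraction."""
    def __init__(self):
        self.messages = []
    def save(self, message: str) -> None:
        self.messages.append(message)

class NotificationService:
    """High-level module: depends on MessageStore, never on a concrete store."""
    def __init__(self, store: MessageStore):
        self.store = store  # injected, not hardcoded
    def notify(self, message: str) -> None:
        self.store.save(message)
```

Swapping `InMemoryStore` for a database-backed store requires no change to `NotificationService`.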
### ✅ Architecture Quality
- [ ] Architecture supports future growth
- [ ] Clear separation of concerns
- [ ] Proper layering (presentation, business logic, data)
- [ ] No architectural violations
- [ ] Design patterns used appropriately
### ✅ Testing Coverage
- [ ] Unit test coverage meets threshold (e.g., 80%)
- [ ] Integration tests for key workflows
- [ ] E2E tests for critical user paths
- [ ] All tests passing consistently
- [ ] No flaky tests
### ✅ Documentation Completeness
- [ ] README is current and accurate
- [ ] API documentation complete
- [ ] Architecture documented
- [ ] Setup/installation instructions clear
- [ ] Troubleshooting guide if applicable
- [ ] Inline documentation for complex code
### ✅ Production Readiness
- [ ] No breaking changes (or migration guide provided)
- [ ] Error handling comprehensive
- [ ] Logging appropriate (not too much, not too little)
- [ ] Monitoring/observability in place
- [ ] Configuration externalized
- [ ] Secrets/credentials properly managed
### ✅ User Impact
- [ ] User-facing features work as expected
- [ ] UX is intuitive
- [ ] Error messages are helpful
- [ ] Performance is acceptable to users
- [ ] Accessibility considerations addressed (if applicable)
---
## Quality Gate Severity Levels
When a quality gate fails, assess severity:
### 🔴 Critical (MUST FIX)
- Security vulnerabilities
- Data loss or corruption
- Breaking changes without migration path
- Production crashes or errors
- Major performance degradation
### 🟡 Warning (SHOULD FIX)
- SOLID principle violations
- Missing tests for complex logic
- Poor performance (but not critical)
- Missing documentation
- Code duplication
### 🟢 Info (NICE TO FIX)
- Minor style inconsistencies
- Optimization opportunities
- Refactoring suggestions
- Documentation enhancements
---
## Remediation Process
When quality gates fail:
1. **Document the Issue**
- What gate failed
- Severity level
- Impact assessment
2. **Create Remediation Task**
- Add to task index
- Assign to appropriate agent
- Provide context and acceptance criteria
3. **Revalidate**
- After fix, re-run quality gate
- Ensure no new issues introduced
- Update MCD with results
4. **Learn**
- Why did this get through?
- How to prevent in future?
- Update checklist if needed
---
## Automated Checks
Where possible, automate quality gates:
### Recommended Tools
**Linting:**
- ESLint (JavaScript/TypeScript)
- Pylint/Flake8 (Python)
- RuboCop (Ruby)
- Clippy (Rust)
**Testing:**
- Jest, Vitest (JavaScript)
- pytest (Python)
- RSpec (Ruby)
- cargo test (Rust)
**Security:**
- npm audit, yarn audit
- Snyk
- OWASP Dependency-Check
- Trivy
**Coverage:**
- Istanbul/nyc (JavaScript)
- Coverage.py (Python)
- SimpleCov (Ruby)
- Tarpaulin (Rust)
**Type Checking:**
- TypeScript
- mypy (Python)
- Sorbet (Ruby)
---
## Sign-Off Template
```markdown
## Quality Gate Sign-Off
**Task/Phase/Project**: [Name]
**Date**: [Date]
**Reviewed By**: [Agent/Person]
### Results
- ✅ Functional Requirements: PASS
- ✅ Code Quality: PASS
- ✅ DRY: PASS
- ✅ Testing: PASS
- ✅ Documentation: PASS
- ✅ SOLID Principles: PASS
- ✅ Security: PASS
- ✅ Performance: PASS
### Issues Found
- [None] OR
- [Issue 1 - Severity - Status]
- [Issue 2 - Severity - Status]
### Recommendation
- [ ] Approved - Ready to proceed
- [ ] Approved with conditions - [List conditions]
- [ ] Rejected - [List blockers]
### Notes
[Any additional notes or observations]
```
---
**Remember: Quality is not negotiable. It's faster to build it right than to fix it later.**

`skills/summoner/SKILL.md`:
---
name: summoner
description: Multi-agent orchestration skill for complex tasks requiring coordination, decomposition, and quality control. Use for large implementations, refactoring projects, multi-component features, or work requiring multiple specialized agents. Excels at preventing context bloat and ensuring SOLID principles. Integrates with oracle, guardian, and wizard.
allowed-tools: Read, Write, Edit, Glob, Grep, Task, Bash
---
# Summoner: Multi-Agent Orchestration Skill
You are now operating as the **Summoner**, a meta-orchestrator designed to handle complex, multi-faceted tasks through intelligent decomposition and specialized agent coordination.
## Core Responsibilities
### 1. Task Analysis & Decomposition
When given a complex task:
1. **Analyze Scope**: Understand the full scope, requirements, constraints, and success criteria
2. **Identify Dependencies**: Map out technical and logical dependencies between components
3. **Decompose Atomically**: Break down into highly specific, atomic tasks that can be independently validated
4. **Preserve Context**: Ensure each subtask has all necessary context without duplication
### 2. Mission Control Document Creation
Create a **Mission Control Document** (MCD) as a markdown file that serves as the single source of truth:
**Structure:**
```markdown
# Mission Control: [Task Name]
## Executive Summary
[1-2 paragraph overview of the entire initiative]
## Success Criteria
- [ ] Criterion 1
- [ ] Criterion 2
...
## Context & Constraints
### Technical Context
[Relevant tech stack, architecture patterns, existing implementations]
### Business Context
[Why this matters, user impact, priority]
### Constraints
[Performance requirements, compatibility, security, etc.]
## Task Index
### Phase 1: [Phase Name]
#### Task 1.1: [Specific Task Name]
- **Agent Type**: [e.g., Backend Developer, Frontend Specialist, QA Engineer]
- **Responsibility**: [Clear, bounded responsibility]
- **Context**: [Specific context needed for THIS task only]
- **Inputs**: [What this task needs to start]
- **Outputs**: [What this task must produce]
- **Validation**: [How to verify success]
- **Dependencies**: [What must be completed first]
[Repeat for each task...]
## Quality Gates
### Code Quality
- [ ] DRY: No code duplication
- [ ] CLEAN: Readable, maintainable code
- [ ] SOLID: Proper abstractions and separation of concerns
- [ ] Security: No vulnerabilities introduced
- [ ] Performance: Meets performance requirements
### Process Quality
- [ ] All tests pass
- [ ] Documentation updated
- [ ] No breaking changes (or explicitly documented)
- [ ] Code reviewed for best practices
## Agent Roster
### [Agent Name/Role]
- **Specialization**: [What they're expert in]
- **Assigned Tasks**: [Task IDs]
- **Context Provided**: [References to MCD sections]
```
### 3. Agent Summoning & Coordination
For each task or group of related tasks:
1. **Summon Specialized Agent**: Use the Task tool to create an agent with specific expertise
2. **Provide Bounded Context**: Give ONLY the context needed for their specific tasks
3. **Clear Handoff Protocol**: Define what success looks like and how to hand off to next agent
4. **Quality Validation**: Review output against quality gates before proceeding
### 4. Quality Control & Integration
After each phase:
1. **Validate Outputs**: Check against quality gates and success criteria
2. **Integration Check**: Ensure components work together correctly
3. **Context Sync**: Update MCD with any learnings or changes
4. **Risk Assessment**: Identify any blockers or risks that emerged
## Operating Principles
### Minimize Context Bloat
- **Progressive Disclosure**: Load only what's needed, when it's needed
- **Reference by Location**: Point to existing documentation rather than duplicating
- **Summarize vs. Copy**: Summarize large contexts; provide full details only when necessary
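The "Summarize vs. Copy" rule can be sketched as a small packing helper; the 2000-character threshold and dict shape are assumptions for illustration, not prescribed limits:

```python
# Sketch of progressive disclosure: inline small documents, reference large ones.
# The threshold and field names are illustrative assumptions.
def pack_context(name, text, limit=2000):
    if len(text) <= limit:
        return {"name": name, "content": text}  # small enough: copy it in
    return {
        "name": name,
        "reference": f"see {name}",             # point to the source instead
        "summary": text[:200] + "...",          # short orientation summary
    }
```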
### Eliminate Assumptions
- **Explicit Over Implicit**: Make all assumptions explicit in the MCD
- **Validation Points**: Build in checkpoints to validate assumptions
- **Question Everything**: Challenge vague requirements before decomposition
### Enforce Quality
- **Definition of Done**: Each task has clear completion criteria
- **No Slop**: Reject outputs that don't meet quality standards
- **Continuous Review**: Quality checks at task, phase, and project levels
## Workflow
```
1. Receive Complex Task
2. Create Mission Control Document
3. For Each Phase:
a. For Each Task:
- Summon Specialized Agent
- Provide Bounded Context
- Monitor Execution
- Validate Output
b. Phase Integration Check
c. Update MCD
4. Final Integration & Validation
5. Deliverable + Updated Documentation
```
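The loop above can be sketched in Python; `summon_agent` and `validate_output` are caller-supplied stand-ins for the Task tool and the quality gates, not real APIs:

```python
# Minimal orchestration sketch of the workflow above. The mission dict mirrors
# the MCD structure; the two callables are hypothetical stand-ins.
def orchestrate(mission, summon_agent, validate_output):
    completed = []
    for phase in mission["phases"]:
        for task in phase["tasks"]:
            agent = summon_agent(task["agent_type"], task["context"])
            output = agent(task)                    # monitor execution
            if not validate_output(output, task):   # validate against quality gates
                raise RuntimeError(f"Quality gate failed: {task['id']}")
            completed.append(task["id"])
        # phase integration check and MCD update would happen here
    return completed
```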
## Summoner vs Guardian vs Wizard
### Summoner (YOU - Task Orchestration)
**Purpose**: Coordinate multiple agents for complex, multi-component tasks
**When to Use**:
- Large feature spanning 3+ components
- Multi-phase refactoring projects
- Complex research requiring multiple specialized agents
- Migration projects with many dependencies
- Coordinating documentation research (with Wizard)
**Key Traits**:
- **Proactive**: Plans ahead, orchestrates workflows
- **Multi-Agent**: Coordinates multiple specialists
- **Mission Control**: Creates MCD as single source of truth
- **Parallel Work**: Can run agents in parallel when dependencies allow
**Example**: "Build REST API with auth, rate limiting, caching, and WebSocket support" → Summoner decomposes into 5 subtasks, assigns to specialized agents, coordinates execution
### Guardian (Quality Gates)
**Purpose**: Monitor session health, detect issues, review code automatically
**When to Use**:
- Automatic code review (when 50+ lines written)
- Detecting repeated errors (same error 3+ times)
- Session health monitoring (context bloat, file churn)
- Security/performance audits (using templates)
**Key Traits**:
- **Reactive**: Triggers based on thresholds
- **Single-Agent**: Spawns one focused Haiku reviewer
- **Minimal Context**: Only passes relevant code + Oracle patterns
- **Validation**: Cross-checks suggestions against Oracle
**Example**: You write 60 lines of auth code → Guardian automatically triggers security review → Presents suggestions with confidence scores
### Wizard (Documentation Maintenance)
**Purpose**: Keep documentation accurate, up-to-date, and comprehensive
**When to Use**:
- Updating README for new features
- Generating skill documentation
- Validating documentation accuracy
- Syncing docs across files
**Key Traits**:
- **Research-First**: Uses Oracle + conversation history + code analysis
- **No Hallucinations**: Facts only, with references
- **Uses Both**: Summoner for research coordination, Guardian for doc review
- **Accuracy Focused**: Verifies all claims against code
**Example**: "Document the Guardian skill" → Wizard uses Summoner to coordinate research agents → Generates comprehensive docs → Guardian validates accuracy
### When to Use Which
**Use Summoner When:**
- ✅ Task has 3+ distinct components
- ✅ Need to coordinate multiple specialists
- ✅ Complex research requiring different expertise
- ✅ Multi-phase execution with dependencies
- ✅ Wizard needs comprehensive research coordination
**Use Guardian When:**
- ✅ Need automatic quality checks
- ✅ Code review for security/performance
- ✅ Session is degrading (errors, churn, corrections)
- ✅ Validating Wizard's documentation against code
**Use Wizard When:**
- ✅ Documentation needs updating
- ✅ New feature needs documenting
- ✅ Need to verify documentation accuracy
- ✅ Cross-referencing docs with code
**Use Together:**
```
User: "Comprehensively document the Guardian skill"
Wizard: "This is complex research - using Summoner"
Summoner creates Mission Control Document with tasks:
Task 1: Analyze all Guardian scripts
Task 2: Search Oracle for Guardian patterns
Task 3: Search conversation history for Guardian design
Summoner coordinates 3 research agents in parallel
Summoner synthesizes findings into structured data
Wizard generates comprehensive documentation with references
Guardian reviews documentation for accuracy and quality
Wizard applies Guardian's suggestions
Final accurate, comprehensive documentation
```
## When to Use This Skill
**Ideal For:**
- Features touching 3+ components/systems
- Large refactoring efforts
- Migration projects
- Complex bug fixes spanning multiple related issues
- New architectural implementations
- Comprehensive research coordination (for Wizard)
- Any task complex enough that the benefits of coordination outweigh its overhead
**Not Needed For:**
- Single-file changes
- Straightforward bug fixes
- Simple feature additions
- Routine maintenance
- Simple code reviews (use Guardian)
- Simple documentation updates (use Wizard directly)
## Templates & Scripts
- **MCD Template**: See `References/mission-control-template.md`
- **Quality Checklist**: See `References/quality-gates.md`
- **Agent Specification**: See `References/agent-spec-template.md`
## Success Indicators
**You're succeeding when:**
- No agent needs to ask for context that should have been provided
- Each agent completes tasks without scope creep
- Integration is smooth with minimal rework
- Quality gates pass on first check
- No "surprise" requirements emerge late
**Warning signs:**
- Agents making assumptions not in MCD
- Repeated context requests
- Integration failures
- Quality gate failures
- Scope creep within tasks
## Remember
> "The context window is a public good. Use it wisely."
Your job is not to do the work yourself, but to **orchestrate specialists** who do their best work when given:
1. Clear, bounded responsibilities
2. Precise context (no more, no less)
3. Explicit success criteria
4. Trust to execute within their domain
---
**Summoner activated. Ready to orchestrate excellence.**


@@ -0,0 +1,89 @@
#!/usr/bin/env python3
"""
Mission Control Document Initializer
This script creates a new Mission Control Document from the template
with proper naming and initial metadata.
Usage:
python init_mission.py "Task Name" [output_dir]
Example:
python init_mission.py "Add User Authentication"
python init_mission.py "Refactor API Layer" ./missions
"""
import sys
import os
from datetime import datetime
from pathlib import Path
def slugify(text):
    """Convert text to a safe filename slug.

    Example: slugify("Add User Authentication") -> "add-user-authentication"
    """
    # Keep alphanumerics, spaces, hyphens, underscores; drop everything else
    slug = text.lower()
    slug = ''.join(c if c.isalnum() or c in ' -_' else '' for c in slug)
    # Collapse whitespace runs into single hyphens
    slug = '-'.join(slug.split())
    return slug
def create_mission_control(task_name, output_dir='.'):
    """Create a new Mission Control Document."""
    # Load template
    template_path = Path(__file__).parent.parent / 'References' / 'mission-control-template.md'
    if not template_path.exists():
        print(f"[ERROR] Error: Template not found at {template_path}")
        return False
    with open(template_path, 'r') as f:
        template = f.read()
    # Replace placeholders
    date_str = datetime.now().strftime('%Y-%m-%d')
    content = template.replace('[TASK NAME]', task_name)
    content = content.replace('[DATE]', date_str)
    content = content.replace('[Planning | In Progress | Integration | Complete]', 'Planning')
    # Create output filename
    slug = slugify(task_name)
    filename = f"mission-{slug}.md"
    output_path = Path(output_dir) / filename
    # Ensure output directory exists
    output_path.parent.mkdir(parents=True, exist_ok=True)
    # Write file
    with open(output_path, 'w') as f:
        f.write(content)
    print(f"[OK] Mission Control Document created: {output_path}")
    print("\nNext steps:")
    print(f"  1. Open {output_path}")
    print("  2. Fill in Executive Summary and Context")
    print("  3. Define Success Criteria")
    print("  4. Break down into Tasks")
    print("  5. Summon agents and begin orchestration!")
    return True
def main():
    if len(sys.argv) < 2:
        print("Usage: python init_mission.py \"Task Name\" [output_dir]")
        print("\nExample:")
        print("  python init_mission.py \"Add User Authentication\"")
        print("  python init_mission.py \"Refactor API Layer\" ./missions")
        sys.exit(1)
    task_name = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else '.'
    success = create_mission_control(task_name, output_dir)
    sys.exit(0 if success else 1)

if __name__ == '__main__':
    main()


@@ -0,0 +1,303 @@
#!/usr/bin/env python3
"""
Quality Gates Validator
This script helps validate code against quality gates defined in the
quality-gates.md reference document.
Usage:
python validate_quality.py [--level task|phase|project] [--interactive]
Example:
python validate_quality.py --level task --interactive
python validate_quality.py --level project
"""
import sys
import argparse
from enum import Enum
class Level(Enum):
    TASK = 'task'
    PHASE = 'phase'
    PROJECT = 'project'
class Severity(Enum):
    CRITICAL = 'critical'
    WARNING = 'warning'
    INFO = 'info'
# Quality Gate Definitions
TASK_GATES = {
    'Functional Requirements': [
        'All specified outputs delivered',
        'Functionality works as described',
        'Edge cases handled',
        'Error cases handled gracefully',
        'No regression in existing functionality'
    ],
    'Code Quality': [
        'Code is readable and self-documenting',
        'Variable/function names are meaningful',
        'No magic numbers or strings',
        'No commented-out code',
        'Consistent code style with project'
    ],
    'DRY': [
        'No duplicated logic',
        'Shared functionality extracted to utilities',
        'Constants defined once',
        'No copy-paste code blocks'
    ],
    'Testing': [
        'Unit tests written for new code',
        'Tests cover happy path',
        'Tests cover edge cases',
        'Tests cover error conditions',
        'All tests pass'
    ],
    'Documentation': [
        'Complex logic has explanatory comments',
        'Public APIs documented',
        'README updated if needed',
        'Breaking changes documented'
    ]
}
PHASE_GATES = {
    'Integration': [
        'All components integrate correctly',
        'Data flows between components as expected',
        'No integration bugs',
        'APIs between components are clean',
        'Interfaces are well-defined'
    ],
    'CLEAN Principles': [
        'Clear: Code is easy to understand',
        'Limited: Functions have single responsibility',
        'Expressive: Naming reveals intent',
        'Abstracted: Proper level of abstraction',
        'Neat: Organized, well-structured code'
    ],
    'Performance': [
        'No obvious performance issues',
        'Efficient algorithms used',
        'No unnecessary computations',
        'Resources properly managed',
        'Meets stated performance requirements'
    ],
    'Security': [
        'No injection vulnerabilities',
        'Input validation in place',
        'Output encoding where needed',
        'Authentication/authorization checked',
        'Sensitive data not logged or exposed',
        'Dependencies have no known vulnerabilities'
    ]
}
PROJECT_GATES = {
    'SOLID - Single Responsibility': [
        'Each class/module has one reason to change',
        'Each function does one thing well',
        'No god objects or god functions',
        'Responsibilities clearly separated'
    ],
    'SOLID - Open/Closed': [
        'Open for extension (can add new behavior)',
        'Closed for modification',
        'Use abstractions for extension points',
        'Configuration over hardcoding'
    ],
    'SOLID - Liskov Substitution': [
        'Subtypes can replace base types without breaking',
        'Derived classes don\'t weaken preconditions',
        'Inheritance is "is-a" relationship'
    ],
    'SOLID - Interface Segregation': [
        'Interfaces are focused and cohesive',
        'No client forced to depend on unused methods',
        'Many small interfaces over one large interface'
    ],
    'SOLID - Dependency Inversion': [
        'High-level modules don\'t depend on low-level modules',
        'Both depend on abstractions',
        'Dependencies injected, not hardcoded'
    ],
    'Testing Coverage': [
        'Unit test coverage meets threshold',
        'Integration tests for key workflows',
        'E2E tests for critical user paths',
        'All tests passing consistently',
        'No flaky tests'
    ],
    'Documentation Completeness': [
        'README is current and accurate',
        'API documentation complete',
        'Architecture documented',
        'Setup instructions clear',
        'Troubleshooting guide available'
    ],
    'Production Readiness': [
        'No breaking changes or migration guide provided',
        'Error handling comprehensive',
        'Logging appropriate',
        'Configuration externalized',
        'Secrets properly managed'
    ]
}
def get_gates_for_level(level):
    """Get quality gates for specified level."""
    if level == Level.TASK:
        return TASK_GATES
    elif level == Level.PHASE:
        return {**TASK_GATES, **PHASE_GATES}
    elif level == Level.PROJECT:
        return {**TASK_GATES, **PHASE_GATES, **PROJECT_GATES}
def interactive_validation(gates):
    """Run interactive validation."""
    results = {}
    total_checks = sum(len(checks) for checks in gates.values())
    current = 0
    print("\n" + "="*70)
    print("[SEARCH] Quality Gates Validation")
    print("="*70)
    print(f"\nTotal checks: {total_checks}")
    print("\nFor each check, respond: y (yes/pass), n (no/fail), s (skip)\n")
    for category, checks in gates.items():
        print(f"\n{'-'*70}")
        print(f" {category}")
        print(f"{'-'*70}")
        category_results = []
        for check in checks:
            current += 1
            while True:
                response = input(f" [{current}/{total_checks}] {check}? [y/n/s]: ").lower().strip()
                if response in ['y', 'yes']:
                    print(" [OK] Pass")
                    category_results.append(('pass', check))
                    break
                elif response in ['n', 'no']:
                    print(" [ERROR] Fail")
                    category_results.append(('fail', check))
                    break
                elif response in ['s', 'skip']:
                    print(" Skip")
                    category_results.append(('skip', check))
                    break
                else:
                    print(" Invalid input. Use y/n/s")
        results[category] = category_results
    return results
def print_summary(results, level):
    """Print validation summary."""
    total_pass = sum(1 for cat in results.values() for status, _ in cat if status == 'pass')
    total_fail = sum(1 for cat in results.values() for status, _ in cat if status == 'fail')
    total_skip = sum(1 for cat in results.values() for status, _ in cat if status == 'skip')
    total_checks = total_pass + total_fail + total_skip
    print("\n" + "="*70)
    print("[INFO] VALIDATION SUMMARY")
    print("="*70)
    print(f"\nLevel: {level.value.upper()}")
    print("\nResults:")
    print(f" [OK] Passed: {total_pass}/{total_checks}")
    print(f" [ERROR] Failed: {total_fail}/{total_checks}")
    print(f" Skipped: {total_skip}/{total_checks}")
    if total_fail > 0:
        print("\nFAILED CHECKS:")
        for category, checks in results.items():
            failed = [(status, check) for status, check in checks if status == 'fail']
            if failed:
                print(f"\n {category}:")
                for _, check in failed:
                    print(f" [ERROR] {check}")
    print("\n" + "="*70)
    if total_fail == 0 and total_skip == 0:
        print("ALL QUALITY GATES PASSED!")
        print("="*70)
        return True
    elif total_fail == 0:
        print(f"[WARNING] All checked gates passed, but {total_skip} checks were skipped.")
        print("="*70)
        return True
    else:
        print(f"[ERROR] {total_fail} QUALITY GATES FAILED")
        print("[TOOL] Please address failed checks before proceeding.")
        print("="*70)
        return False
def main():
    parser = argparse.ArgumentParser(
        description='Validate code against quality gates',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  python validate_quality.py --level task --interactive
  python validate_quality.py --level project
  python validate_quality.py --level phase --interactive
"""
    )
    parser.add_argument(
        '--level',
        type=str,
        choices=['task', 'phase', 'project'],
        default='task',
        help='Validation level (default: task)'
    )
    parser.add_argument(
        '--interactive',
        action='store_true',
        help='Run interactive validation'
    )
    args = parser.parse_args()
    level = Level(args.level)
    gates = get_gates_for_level(level)
    if args.interactive:
        results = interactive_validation(gates)
        success = print_summary(results, level)
        sys.exit(0 if success else 1)
    else:
        # Non-interactive mode - just print checklist
        print(f"\n{'='*70}")
        print(f"Quality Gates Checklist - {level.value.upper()} Level")
        print(f"{'='*70}\n")
        for category, checks in gates.items():
            print(f"\n{category}:")
            for check in checks:
                print(f" [ ] {check}")
        print(f"\n{'='*70}")
        print("Run with --interactive flag for guided validation")
        print(f"{'='*70}\n")

if __name__ == '__main__':
    main()