Initial commit

Zhongwei Li
2025-11-29 18:24:12 +08:00
commit cb95332d64
14 changed files with 1105 additions and 0 deletions

commands/analyze.md
---
model: claude-sonnet-4-0
allowed-tools: Task
argument-hint: <problem-or-question> [complexity-level] [perspective-count]
description: Multi-persona analysis using split-team framework with cognitive harmonics and productive disagreement
---
# Multi-Persona Analysis Command
Orchestrate sophisticated multi-perspective analysis of problems using the split-team framework. Assemble optimal persona teams, facilitate cognitive harmonics, and synthesize insights through productive disagreement.
## How It Works
This command invokes the persona-coordinator agent to:
1. Analyze your problem and determine required perspectives
2. Assemble an optimal team of 3-7 persona agents
3. Orchestrate divergent analysis from each perspective
4. Facilitate productive disagreement and assumption challenging
5. Synthesize insights into coherent, actionable recommendations
## Arguments
**$1 (Required)**: Problem statement or question to analyze
**$2 (Optional)**: Complexity level
- `simple`: 3-4 personas, straightforward analysis
- `moderate`: 5-6 personas, balanced trade-offs (default)
- `complex`: 7+ personas, multifaceted challenges
**$3 (Optional)**: Number of personas (3-10)
- Overrides complexity-based team size
- Must be between 3 and 10
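The complexity-to-team-size resolution described above could be sketched roughly as follows. This is a hypothetical helper for illustration only; `team_size_for` and the exact default counts are assumptions, not part of the command itself:

```bash
# Hypothetical sketch: resolve $2 (complexity) and $3 (persona count) to a team size.
# An explicit count overrides complexity, but must stay within 3-10.
team_size_for() {
  complexity="${1:-moderate}"   # $2: defaults to moderate
  override="$2"                 # $3: explicit persona count, if given

  if [ -n "$override" ]; then
    [ "$override" -ge 3 ] && [ "$override" -le 10 ] && { echo "$override"; return; }
    echo "error: persona count must be between 3 and 10" >&2
    return 1
  fi

  case "$complexity" in
    simple)   echo 4 ;;   # 3-4 personas
    moderate) echo 6 ;;   # 5-6 personas
    complex)  echo 7 ;;   # 7+ personas
    *) echo "error: unknown complexity '$complexity'" >&2; return 1 ;;
  esac
}

team_size_for complex       # prints 7
team_size_for moderate 6    # prints 6 (explicit override wins)
```

The override-then-fallback order mirrors the rule stated above: `$3`, when present, takes precedence over the complexity level.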
## Examples
### Simple Analysis
```bash
/analyze "Should we use REST or GraphQL for our API?" simple
```
Assembles: Analytical Thinker + Pragmatic Realist + Systems Architect
### Moderate Analysis (Default)
```bash
/analyze "Design an authentication system for our platform"
```
Assembles: Systems Architect + Risk Analyst + User Advocate + Pragmatic Realist + Constructive Critic
### Complex Analysis
```bash
/analyze "Should we migrate from monolith to microservices?" complex
```
Assembles: Systems Architect + Risk Analyst + Pragmatic Realist + Creative Innovator + Constructive Critic + User Advocate + Analytical Thinker
### Custom Team Size
```bash
/analyze "Evaluate our tech stack choices" moderate 6
```
Assembles: 6 most relevant personas for the problem
## Use Cases
**Architecture Decisions**
- Technology selection
- System design choices
- Migration strategies
- Scaling approaches
**Product Strategy**
- Feature prioritization
- User experience design
- Market positioning
- Competitive analysis
**Technical Challenges**
- Performance optimization
- Security hardening
- Debugging complex issues
- Code architecture review
**Strategic Planning**
- Long-term technology roadmap
- Resource allocation
- Risk assessment
- Innovation opportunities
## What You Get
1. **Diverse Perspectives**: Each persona contributes unique insights through their specialized lens
2. **Productive Disagreement**: Constructive challenge of assumptions and alternatives
3. **Cognitive Harmonics**: Emergent insights from persona interactions
4. **Synthesis**: Coherent integration of perspectives with clear recommendations
5. **Trade-off Clarity**: Explicit acknowledgment of competing concerns and balanced choices
## Split-Team Framework Principles
**Voice Differentiation**: Each persona maintains unique vocabulary, questions, and analytical approach
**Cognitive Harmonics**: Multiple perspectives create constructive interference for emergent insights
**Productive Disagreement**: Systematic challenge strengthens solutions and prevents groupthink
**Integration Synthesis**: Coordinator weaves perspectives into coherent, actionable guidance
## Tips for Best Results
1. **Be Specific**: Provide context and constraints in your problem statement
2. **State Goals**: Mention success criteria or what you're optimizing for
3. **Match Complexity**: Use simple for straightforward questions, complex for critical decisions
4. **Trust the Process**: Persona disagreement is valuable, not problematic
5. **Implementation Focus**: Include practical constraints for realistic recommendations
## Example Session
```bash
/analyze "We need to choose between PostgreSQL and MongoDB for user data storage. We have 10M users, need strong consistency, but want flexibility for future features." moderate
```
**Result**: Assembles 5 personas who examine the decision through data and performance (Analytical Thinker), implementation reality (Pragmatic Realist), system architecture (Systems Architect), risk factors (Risk Analyst), and assumption challenges (Constructive Critic). Synthesis provides a clear recommendation with rationale and acknowledged trade-offs.
Invoke the persona-coordinator agent with: $ARGUMENTS

commands/debate.md
---
model: claude-sonnet-4-0
allowed-tools: Task
argument-hint: <proposition-or-claim> [intensity-level]
description: Structured debate with opposing personas examining a proposition from multiple angles through productive disagreement
---
# Multi-Persona Debate Command
Orchestrate structured debate around a proposition or claim, featuring opposing perspectives that challenge assumptions and explore alternatives through productive disagreement.
## How It Works
This command creates a debate-focused analysis where:
1. Multiple personas examine the proposition critically
2. Constructive critics challenge the mainstream view
3. Alternative approaches are systematically generated
4. Assumptions are tested through first-principles thinking
5. Synthesis reveals robust insights from creative tension
## Arguments
**$1 (Required)**: Proposition, claim, or approach to debate
**$2 (Optional)**: Intensity of critical scrutiny
- `balanced`: Respectful challenge with alternatives (default)
- `rigorous`: Systematic assumption testing
- `maximum`: Aggressive first-principles questioning
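A minimal sketch of how the optional intensity argument might be normalized, with `balanced` as the default exactly as listed above. The `intensity_for` helper is hypothetical, shown only to make the defaulting behavior concrete:

```bash
# Hypothetical sketch: validate $2 (intensity), falling back to balanced
intensity_for() {
  level="${1:-balanced}"
  case "$level" in
    balanced|rigorous|maximum) echo "$level" ;;
    *) echo "error: intensity must be balanced, rigorous, or maximum" >&2; return 1 ;;
  esac
}

intensity_for            # prints balanced
intensity_for rigorous   # prints rigorous
```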
## Examples
### Balanced Debate
```bash
/debate "We should adopt microservices architecture"
```
Personas explore benefits, challenges, alternatives, and contexts where the proposition holds or fails.
### Rigorous Scrutiny
```bash
/debate "TypeScript provides better developer experience than JavaScript" rigorous
```
Deep assumption testing, edge case exploration, systematic challenge of premises.
### Maximum Challenge
```bash
/debate "Code review improves code quality" maximum
```
First-principles questioning, counterintuitive alternatives, paradigm-level examination.
## Use Cases
**Technology Decisions**
- Debate framework choices
- Challenge architectural assumptions
- Evaluate tool adoption proposals
**Best Practices**
- Question conventional wisdom
- Test methodology assumptions
- Explore alternative approaches
**Strategic Direction**
- Challenge product roadmap decisions
- Debate market positioning
- Question resource allocation
**Process & Culture**
- Evaluate team practices
- Challenge organizational assumptions
- Explore alternative workflows
## Debate Structure
### Phase 1: Proposition Framing
- Establish the claim or approach being examined
- Clarify context and underlying assumptions
### Phase 2: Supportive Analysis
- Personas (Systems Architect, Analytical Thinker) examine merits
- Identify contexts where proposition holds
- Document supporting evidence and reasoning
### Phase 3: Critical Challenge
- Constructive Critic and Risk Analyst systematically challenge
- Test assumptions through first-principles thinking
- Generate alternative approaches
### Phase 4: Alternative Exploration
- Creative Innovator proposes unconventional alternatives
- Pragmatic Realist assesses practical viability
- Explore edge cases and boundary conditions
### Phase 5: Synthesis
- Coordinator integrates insights from debate
- Clarify contexts where proposition works vs. fails
- Provide nuanced recommendations
## What You Get
1. **Assumption Audit**: Systematic identification of hidden premises
2. **Alternative Approaches**: Multiple options beyond the original proposition
3. **Context Clarity**: Understanding when the proposition holds or fails
4. **Robust Insights**: Solutions strengthened through critical examination
5. **Nuanced Recommendations**: Avoiding false dichotomies and oversimplification
## Split-Team Principles in Debate
**Productive Disagreement**: Constructive challenge strengthens understanding
**First-Principles Thinking**: Break assumptions down to fundamental truths
**Alternative Generation**: Explore options beyond binary choices
**Evidence-Based Challenge**: Ground disagreement in logic and data
## Tips for Effective Debates
1. **Frame Clearly**: State the proposition precisely
2. **Provide Context**: Include relevant constraints and goals
3. **Embrace Challenge**: Dissent reveals blind spots
4. **Seek Nuance**: Avoid forcing binary yes/no conclusions
5. **Value Alternatives**: Often the best solution emerges from synthesis
## Example Session
```bash
/debate "We should prioritize velocity over code quality to meet market deadlines" rigorous
```
**Result**: Personas systematically challenge this false dichotomy, explore hidden assumptions (quality vs. velocity trade-off, technical debt impact), generate alternatives (quality-enabling speed, incremental quality), and synthesize nuanced guidance about when to optimize for speed vs. when quality accelerates delivery.
## Debate Output Format
```markdown
## Debate: [Proposition]
### Proposition Framing
[Clear statement and context]
### Supporting Analysis
- [Supportive persona perspectives]
- [Contexts where proposition holds]
### Critical Challenge
- [Systematic assumption testing]
- [First-principles questioning]
- [Alternative approaches]
### Creative Alternatives
- [Unconventional options]
- [Edge case exploration]
### Synthesis & Recommendations
[Nuanced guidance integrating debate insights]
[Context-dependent recommendations]
[Acknowledged trade-offs]
```
Invoke the persona-coordinator agent with debate mode: $ARGUMENTS

commands/evaluate.md
---
model: claude-sonnet-4-0
allowed-tools: Task
argument-hint: <solution-or-proposal> [focus-areas]
description: Multi-persona evaluation of solutions or proposals through diverse expert lenses with comprehensive assessment
---
# Multi-Persona Evaluation Command
Evaluate solutions, proposals, or designs through multiple expert perspectives, providing comprehensive assessment of strengths, weaknesses, risks, and opportunities.
## How It Works
This command creates evaluation-focused analysis where:
1. Multiple personas assess the solution from their expertise
2. Each perspective applies distinct evaluation criteria
3. Strengths and weaknesses are systematically identified
4. Risks and opportunities are comprehensively assessed
5. Synthesis provides balanced evaluation with improvement recommendations
## Arguments
**$1 (Required)**: Solution, proposal, or design to evaluate
**$2 (Optional)**: Focus areas for evaluation (comma-separated)
- Examples: `security,performance`, `usability,scalability`, `cost,maintainability`
- If not specified, uses comprehensive multi-dimensional assessment
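The comma-separated focus list above could be expanded as in this rough sketch; `focus_areas` is an illustrative helper, not part of the command, and the `comprehensive` fallback mirrors the default described above:

```bash
# Hypothetical sketch: split $2 into one focus dimension per line;
# an empty argument falls back to comprehensive assessment.
focus_areas() {
  if [ -z "$1" ]; then
    echo "comprehensive"
    return
  fi
  # Translate commas to newlines and drop any empty fields
  printf '%s\n' "$1" | tr ',' '\n' | sed '/^$/d'
}

focus_areas "security,performance"   # prints security and performance on separate lines
focus_areas ""                       # prints comprehensive
```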
## Examples
### General Evaluation
```bash
/evaluate "We'll use Redis for caching, PostgreSQL for data, and Next.js on the frontend"
```
Comprehensive assessment from all relevant perspectives.
### Focused Evaluation
```bash
/evaluate "Authentication via JWT tokens stored in localStorage" security,usability
```
Security and user experience focused assessment.
### Architecture Evaluation
```bash
/evaluate "Microservices with event-driven communication via Kafka" scalability,complexity,reliability
```
Focused on scale, complexity, and reliability dimensions.
## Use Cases
**Architecture Review**
- Evaluate system designs
- Assess technology choices
- Review integration patterns
**Feature Assessment**
- Evaluate product proposals
- Assess user experience designs
- Review feature implementation approaches
**Security Review**
- Evaluate authentication mechanisms
- Assess security architectures
- Review access control designs
**Performance Evaluation**
- Assess optimization approaches
- Evaluate scaling strategies
- Review caching designs
## Evaluation Dimensions
### Technical Assessment
- **Systems Architect**: Structural soundness, scalability, integration
- **Analytical Thinker**: Performance metrics, efficiency, measurability
### Risk & Feasibility
- **Risk Analyst**: Failure modes, vulnerabilities, contingencies
- **Pragmatic Realist**: Implementation feasibility, resource requirements
### Innovation & Alternatives
- **Creative Innovator**: Opportunities for improvement, alternative approaches
- **Constructive Critic**: Assumption testing, overlooked considerations
### User & Experience
- **User Advocate**: User experience, accessibility, human impact
## Evaluation Framework
### Phase 1: Solution Understanding
- Coordinator establishes what's being evaluated
- Clarifies context, goals, and constraints
### Phase 2: Multi-Dimensional Assessment
Each persona evaluates through their lens:
- **Strengths**: What works well from this perspective
- **Weaknesses**: Gaps, limitations, concerns
- **Risks**: What could go wrong
- **Opportunities**: How to improve or enhance
### Phase 3: Comprehensive Synthesis
- Integrate assessments across dimensions
- Identify critical issues requiring attention
- Highlight notable strengths
- Recommend improvements
- Provide overall assessment
## What You Get
1. **Comprehensive Coverage**: Assessment from technical, user, risk, and innovation perspectives
2. **Balanced View**: Both strengths and weaknesses identified
3. **Risk Identification**: Potential problems surfaced early
4. **Improvement Opportunities**: Actionable suggestions for enhancement
5. **Overall Assessment**: Clear recommendation on viability
## Output Format
```markdown
## Multi-Persona Evaluation: [Solution]
### Solution Overview
[What's being evaluated and context]
### Dimensional Assessments
#### Systems Architect Evaluation
**Strengths**: [Positive aspects]
**Concerns**: [Architectural issues]
**Recommendations**: [Improvements]
#### Risk Analyst Assessment
**Strengths**: [Risk-mitigation strengths]
**Vulnerabilities**: [Risk factors]
**Recommendations**: [Risk mitigation strategies]
[Continue for each persona]
### Synthesis & Overall Assessment
**Critical Strengths**: [Top 3 positive aspects]
**Key Concerns**: [Top 3 issues requiring attention]
**Risk Summary**: [High/Medium/Low with key factors]
**Improvement Recommendations**:
1. [Priority improvement 1]
2. [Priority improvement 2]
3. [Priority improvement 3]
**Overall Assessment**: [Clear evaluation with context]
```
## Tips for Best Evaluations
1. **Provide Context**: Include goals, constraints, alternatives considered
2. **Be Specific**: Detailed proposals get detailed assessments
3. **State Priorities**: Mention what's most important (speed vs. quality, etc.)
4. **Include Criteria**: Use second argument for focused evaluation
5. **Expect Balance**: Every good solution has trade-offs; perfect solutions don't exist
## Example Session
```bash
/evaluate "We'll implement real-time features using WebSockets with Redis pub/sub for scaling across servers. Fallback to long-polling for older browsers." scalability,reliability,complexity
```
**Result**: Systems Architect assesses scalability approach, Risk Analyst evaluates reliability and failure modes, Pragmatic Realist considers implementation complexity, Constructive Critic challenges assumptions about scaling needs. Synthesis provides balanced assessment with improvement suggestions like connection management, monitoring strategies, and fallback testing.
Invoke the persona-coordinator agent with evaluation mode: $ARGUMENTS