Initial commit

Zhongwei Li
2025-11-30 08:29:00 +08:00
commit 42b99b32d2
10 changed files with 3821 additions and 0 deletions

## 🚨 CRITICAL GUIDELINES
### Windows File Path Requirements
**MANDATORY: Always Use Backslashes on Windows for File Paths**
When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).
**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`
This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems
### Documentation Guidelines
**NEVER create new documentation files unless explicitly requested by the user.**
- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation
---
# Agent Skills Integration (2025)
## Overview
Integration patterns between context-master and Agent Skills for autonomous context management in Claude Code (2025).
## Core Pattern: Context-Aware Agent Skills
### What Are Agent Skills?
Context-efficient knowledge packages that:
- Activate automatically based on context
- Provide specialized guidance
- Stay lean (avoid context bloat)
- Delegate heavy lifting to subagents
### Context Master + Agent Skills Synergy
**Context Master provides:**
- Planning frameworks for multi-file projects
- Thinking delegation architecture
- Context optimization strategies
- Session management patterns
**Agent Skills provide:**
- Domain-specific knowledge
- Automated activation triggers
- Custom tool integration
- Team-wide consistency
## Pattern 1: Context-First Agent Skill
```markdown
# My Custom Agent Skill
## Activation Triggers
- User mentions "create [N]+ files"
- Request involves "architecture"
- Task needs "planning"
## Instructions
### Step 1: Context Check
Before proceeding, ask:
- Are we working with a multi-file project? (YES → use context-master)
- Is thinking delegation needed? (YES → delegate)
### Step 2: Leverage Context Master
- Use /plan-project command for architecture
- Use thinking delegation for deep analysis
- Reference context-master patterns
### Step 3: Your Domain Work
- Implement using domain expertise
- Verify structure using /verify-structure
- Document decisions in DECISIONS.md
```
## Pattern 2: Autonomous Context Delegation
Instead of doing the analysis in the Agent Skill's context:
**Bad (fills Agent Skill context):**
```
"Let me think deeply about the architecture..."
[5K tokens of thinking in Agent Skill context]
```
**Good (preserves Agent Skill context):**
```
"This needs deep analysis. Let me delegate:"
/agent deep-analyzer "Ultrathink about [architecture]"
[Deep analysis happens in isolated agent context]
[Returns summary to Agent Skill - clean]
```
## Pattern 3: Project-Specific Context Strategy
**In your Agent Skill:**
```
## When This Skill Activates
1. Check if CLAUDE.md exists
2. If yes: Load context strategy from CLAUDE.md
3. If no: Use default context-master patterns
## Recommended CLAUDE.md Strategy for This Skill
Include in your project's CLAUDE.md:
```yaml
ContextStrategy:
- Use subagents for: [domain-specific searches]
- Keep in main for: [your domain decisions]
- Compact when: [context grows beyond X]
- Clear before: [major phase transitions]
```
## Pattern 4: Team Consistency
### Create Standard Agent Skill Template
```markdown
# Team Agent Skill Template
## Activation
Activates for: [domain work]
## Context Management
Before doing any analysis:
1. Reference /plan-project for multi-file work
2. Use thinking delegation for complex decisions
3. Document findings in [DOMAIN]_FINDINGS.md
4. Leave main context clean for other agents
## Integration Points
- Works with: context-master, plugin-master
- Delegates to: deep_analyzer for critical choices
- Outputs to: Structured documents, not context
```
## Pattern 5: Cascading Deep Analysis
**For complex domains requiring multiple analyses:**
```
User Request → Triggers Your Agent Skill
Agent Skill identifies sub-questions:
Q1: Frontend implications?
Q2: Backend implications?
Q3: Data implications?
Q4: Integrated recommendation?
Delegates each:
/agent frontend-deep-analyzer "Ultrathink Q1"
/agent backend-deep-analyzer "Ultrathink Q2"
/agent data-deep-analyzer "Ultrathink Q3"
/agent synthesis-analyzer "Ultrathink Q4"
Receives 4 summaries (~200 tokens each)
Agent Skill synthesizes in clean context
Returns integrated recommendation to main
```
**Context used in main:** ~1,200 tokens (4 summaries + synthesis)
**vs Traditional:** 20K+ tokens (all thinking in main)
**Efficiency:** 16-17x
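The cascade arithmetic above can be written out explicitly. This is a sketch of the token accounting only: the 200-token summaries and 20K traditional baseline come from the figures above, while the 400-token synthesis cost is inferred from the stated ~1,200 total.

```python
# Token accounting for cascading deep analysis. SUMMARY_TOKENS and
# TRADITIONAL_TOKENS come from the figures in this document; the
# synthesis cost is inferred from the stated ~1,200-token total.
SUMMARY_TOKENS = 200        # each delegated analysis returns ~200 tokens
SYNTHESIS_TOKENS = 400      # synthesis step in the Agent Skill's context
TRADITIONAL_TOKENS = 20_000 # all thinking done directly in main context

def cascade_cost(questions: int) -> int:
    """Tokens landing in main context: one summary per question + synthesis."""
    return questions * SUMMARY_TOKENS + SYNTHESIS_TOKENS
```

With four sub-questions this yields 1,200 tokens in main, and 20,000 / 1,200 ≈ 16.7, matching the 16-17x figure.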
## Pattern 6: Progressive Context Loading
Avoid loading all project context upfront:
```
// In your Agent Skill:
Step 1: Minimal context
- Load just CLAUDE.md
- Understand strategy
Step 2: Selective context
- Load only relevant files (use subagent search)
- Get summaries, not full content
Step 3: Deep dive only if needed
- Load full context only for specific modules
- Use the progressive disclosure pattern
```
## Implementation Checklist
- [ ] Agent Skill documentation mentions context-master
- [ ] Activation triggers align with planning needs
- [ ] Uses /plan-project for multi-file work
- [ ] Delegates deep analysis to subagents
- [ ] Documents decisions outside of context
- [ ] CLAUDE.md includes skill-specific strategies
- [ ] Team training covers thinking delegation
- [ ] Hooks configured for auto-management
## Advanced: Agent Skill + Plugin Creation Workflow
For creating domain-specific plugins:
```
1. User wants new plugin for domain X
2. Agent Skill → plugin-master integration:
/agent plugin-architect "Design plugin for X"
3. plugin-architect:
- Thinks about structure
- Considers context implications
- References context-master patterns
4. Returns design
5. User/Agent Skill creates plugin
6. New plugin includes context-master references
```
## Real-World Example: Frontend Agent Skill
```markdown
# Frontend Agent Skill
## When This Activates
- User: "Create a React component..."
- User: "Build a multi-page website..."
- User: "Design component architecture..."
## Instructions
### Multi-File Check
If creating 3+ files:
1. "/plan-project - Think about component structure"
2. Wait for analysis
3. Implement files in recommended order
4. "/verify-structure - Check component references"
### Complex Decisions
If component architecture is complex:
1. "/agent frontend-analyzer - Think about patterns"
2. Receive analysis
3. Design components in main context (clean)
### Documentation
1. Save component decisions to COMPONENT_DECISIONS.md
2. Leave main context for next task
3. Reference document as needed
```
## Measuring Success
**Good indicators:**
- Main context stays under 50K tokens for complex work
- Multiple features/analyses per session without degradation
- Clear decision logs without context bloat
- Smooth team collaboration
**Warning signs:**
- Main context consistently >80K
- Responses getting less focused
- Need to restart sessions more often
- Team members report context issues

---
# Context Management Strategies
Detailed workflows and strategies for managing context efficiently in Claude Code.
## Understanding Context Usage
**Context window size:**
- Standard: 200K tokens (~150K words)
- Extended (API): 1M tokens (~750K words)
**What consumes context:**
- Conversation history (messages back and forth)
- File contents loaded into context
- Tool call results (bash output, test results, etc.)
- CLAUDE.md configuration
- Extended thinking blocks
**Context awareness:** Claude Sonnet 4.5 tracks remaining context and reports it with each tool call.
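For back-of-envelope budgeting, a rough heuristic is ~4 characters per token for English text. This is a rule of thumb, not an exact tokenizer, so treat the numbers as estimates:

```python
# Rough context-budget estimator. The ~4 chars/token ratio is a common
# approximation for English text, not an exact tokenizer.
def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token heuristic."""
    return max(1, len(text) // 4)

def budget_remaining(used_tokens: int, window: int = 200_000) -> float:
    """Fraction of a standard 200K context window still available."""
    return max(0.0, 1.0 - used_tokens / window)
```

For example, ~400K characters of conversation history estimates to ~100K tokens, i.e. half of a standard window.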
## Core Commands
### `/clear` - Reset Context
**When to use:**
- Between major features or tasks
- After completing a self-contained piece of work
- When switching between different parts of the codebase
- If you notice Claude getting distracted or referencing old context
**Example workflow:**
```
1. Complete feature A
2. Run tests and commit
3. /clear
4. Start feature B with fresh context
```
### `/compact` - Compress Context
**When to use:**
- Before starting complex multi-step work
- When approaching context limits (~80% full)
- To preserve key decisions while clearing clutter
**What it does:** Summarizes conversation history while retaining key information.
**Example workflow:**
```
1. Long research and planning session
2. /compact "Summarize architecture decisions and open TODOs"
3. Continue with implementation
```
### `/continue` - Resume Session
**When to use:**
- Returning to previous work
- After a break
- To pick up where you left off
**Combines well with:**
```
claude --continue # Resume last session in current project
```
## Strategy 1: Task Isolation
**Goal:** Keep each task in its own context bubble.
**Workflow:**
```
1. Start task → /clear (if needed)
2. Use subagents for research/analysis
3. Main context focuses on implementation
4. Complete and test
5. /clear before next task
```
**When to use:**
- Multiple independent features
- Bug fixes that don't require historical context
- Refactoring isolated modules
**Benefits:**
- Each task starts with clean slate
- No cross-contamination between tasks
- More predictable context usage
---
## Strategy 2: Progressive Context Management
**Goal:** Build up context deliberately, clearing non-essential information.
**Workflow:**
```
1. Research phase
- Subagents search and analyze
- Main context reviews summaries
2. Planning phase
- "think hard" to create plan
- Save plan to document/issue
3. /compact "Keep architecture decisions and plan"
4. Implementation phase
- Reference plan document
- Focus on current file/module
5. Testing phase
- Subagent runs tests
- Main context addresses failures
6. /clear before next feature
```
**When to use:**
- Large features requiring multiple steps
- Complex refactoring
- Projects with extensive research phase
**Benefits:**
- Intentional context building
- Clear phase transitions
- Preserved key decisions
---
## Strategy 3: Parallel Workstreams
**Goal:** Work on multiple aspects simultaneously using subagents.
**Workflow:**
```
1. Main context: High-level orchestration
2. Subagent A: Frontend work
3. Subagent B: Backend work
4. Subagent C: Test execution
5. Main context: Integration and coordination
```
**When to use:**
- Full-stack features
- Multi-component changes
- When different aspects are independent
**Benefits:**
- Efficient use of subagent isolation
- Parallel progress
- Main context stays focused on coordination
---
## Strategy 4: Test-Driven Context Management
**Goal:** Keep context focused on current test/implementation cycle.
**Workflow:**
```
1. Write test in main context
2. /agent test-runner "run new test"
3. Implement feature to pass test
4. /agent test-runner "run test suite"
5. If fail → fix in main context
6. If pass → commit and /clear
```
**When to use:**
- TDD workflows
- Bug fixes with test coverage
- API endpoint development
**Benefits:**
- Tight feedback loop
- Context stays focused on current test
- Test output doesn't clutter main context
---
## Strategy 5: Documentation-First Development
**Goal:** Use CLAUDE.md as persistent memory across sessions.
**Setup:**
```markdown
# CLAUDE.md
## Current Focus
Sprint goal: User authentication system
## Recent Decisions
- Using JWT with refresh tokens
- PostgreSQL for user storage
- Redis for session management
## Next Tasks
- [ ] Implement token refresh endpoint
- [ ] Add rate limiting
- [ ] Write integration tests
## Architecture Notes
[Key decisions that inform all work]
```
**Workflow:**
```
1. CLAUDE.md provides persistent context
2. Each session references current focus
3. Update CLAUDE.md with new decisions
4. /clear frequently - CLAUDE.md persists
```
**When to use:**
- Multi-day projects
- Team collaboration (CLAUDE.md in git)
- Complex projects needing persistent memory
**Benefits:**
- Survives /clear commands
- Shared team knowledge
- Consistent across sessions
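Since CLAUDE.md acts as persistent memory, it can also be maintained programmatically. A hypothetical helper (not part of Claude Code) that appends an entry to the "## Recent Decisions" section of the template above might look like:

```python
# Hypothetical helper: append a decision to the "## Recent Decisions"
# section of a CLAUDE.md shaped like the template above.
def add_decision(claude_md: str, decision: str) -> str:
    """Insert '- <decision>' at the end of the Recent Decisions section."""
    out, in_section = [], False
    for line in claude_md.splitlines():
        if in_section and line.startswith("## "):
            out.append(f"- {decision}")   # close the section with the new entry
            in_section = False
        if line.strip() == "## Recent Decisions":
            in_section = True
        out.append(line)
    if in_section:                        # section was last in the file
        out.append(f"- {decision}")
    return "\n".join(out)
```

Keeping updates mechanical like this makes the file safe to edit from scripts or hooks without disturbing other sections.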
---
## Scenario-Based Strategies
### Scenario: Large Refactoring
**Challenge:** Need broad codebase understanding but context fills quickly.
**Strategy:**
```
1. Subagent: "Map all files using old pattern"
2. Review map, create refactoring plan
3. Save plan to REFACTOR.md
4. /clear
5. For each file:
a. Load file
b. Refactor based on plan
c. Test
d. /clear before next file
```
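The per-file loop above can be expanded mechanically. This dry-run planner does not invoke Claude Code; the strings mirror the workflow steps, not a real CLI:

```python
# Dry-run planner for the large-refactoring workflow: expands the plan
# into ordered session steps. Command strings are illustrative only.
def refactor_steps(files: list[str]) -> list[str]:
    steps = ['subagent: "Map all files using old pattern"',
             "save plan to REFACTOR.md",
             "/clear"]
    for f in files:
        steps += [f"load {f}",
                  f"refactor {f} per REFACTOR.md",
                  f"test {f}",
                  "/clear"]              # fresh context before the next file
    return steps
```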
---
### Scenario: Bug Investigation
**Challenge:** Unknown cause, need to search widely but track findings.
**Strategy:**
```
1. Create BUG_NOTES.md to track findings
2. Subagent: "Search logs for error X"
3. Document findings in BUG_NOTES.md
4. Subagent: "Analyze code paths that could cause X"
5. Document in BUG_NOTES.md
6. /compact "Keep bug theory and evidence"
7. Implement fix
8. /agent test-runner "verify fix"
```
---
### Scenario: New Feature with Unknown Patterns
**Challenge:** Need to research existing patterns without cluttering context.
**Strategy:**
```
1. Subagent: "Find similar features in codebase"
2. Subagent: "Extract common patterns from those features"
3. Main context reviews patterns
4. "think about best approach for new feature"
5. Create implementation plan
6. /clear
7. Implement based on plan
8. Reference plan doc if needed
```
---
### Scenario: Multi-File Feature
**Challenge:** Changes span many files, hard to keep all in context.
**Strategy:**
```
1. Create FEATURE.md with:
- Overall design
- File change checklist
- Cross-file dependencies
2. For each file:
a. Load just that file
b. Reference FEATURE.md for context
c. Make changes
d. Test
e. /compact if context getting full
3. Final integration test
4. /clear and move to next feature
```
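A FEATURE.md of the shape described above is easy to generate up front. This sketch produces the skeleton (section names taken from the strategy; the function itself is hypothetical):

```python
# Generate the FEATURE.md skeleton described in the multi-file strategy:
# overall design, one checklist entry per file, cross-file dependencies.
def feature_md(design: str, files: list[str], deps: list[str]) -> str:
    lines = ["# FEATURE.md",
             "## Overall design", design,
             "## File change checklist"]
    lines += [f"- [ ] {f}" for f in files]
    lines += ["## Cross-file dependencies"]
    lines += [f"- {d}" for d in deps]
    return "\n".join(lines)
```

Each checklist entry then corresponds to one "load just that file" iteration of the loop.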
---
## Advanced Techniques
### Technique: Context Checkpoints
**Save key state to files before clearing:**
```
1. Long planning session
2. "Create PLAN.md with our architecture decisions"
3. /clear
4. Reference PLAN.md during implementation
```
### Technique: Layered Context Loading
**Load information progressively as needed:**
```
1. Start with just current file
2. If need more context: "show me the caller"
3. If need more: "show me the config"
4. Don't load everything upfront
```
### Technique: Subagent Summarization
**Use subagents to create digestible summaries:**
```
Subagent: "Analyze all 50 test files and create a summary:
- Total coverage percentage
- Files with <50% coverage
- Most complex tests"
Then work from the summary, not the raw test files.
```
### Technique: Incremental /compact
**Compress context multiple times in long sessions:**
```
1. Research phase → /compact "Keep research findings"
2. Planning phase → /compact "Keep findings and plan"
3. Implementation → /compact "Keep plan and decisions"
```
## Monitoring Context Health
**Signs context is getting cluttered:**
- Claude references old, irrelevant information
- Responses become less focused
- Performance seems to degrade
- You're >80% through the context budget
**Remedies:**
1. `/compact` for quick compression
2. `/clear` for fresh start
3. Move key info to files before clearing
4. Use subagents more aggressively
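The warning signs and remedies above amount to a simple policy. The ~80% threshold comes from this document's guidance; the 90% cutoff for a full reset is an assumption, not a product limit:

```python
# Policy sketch: pick a remedy from current context usage. The 80%
# threshold follows this document's guidance; 90% is an assumed cutoff.
def suggest_remedy(used: int, window: int = 200_000) -> str:
    frac = used / window
    if frac > 0.9:
        return "/clear"     # near the limit: move key info to files, reset
    if frac > 0.8:
        return "/compact"   # cluttered: compress, keep key decisions
    return "continue"       # healthy: no action needed
```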
## Best Practices Summary
1. **Use /clear liberally** between tasks
2. **Front-load subagent usage** for research
3. **Document decisions** in CLAUDE.md or files
4. **Load files progressively** as needed
5. **Test in subagents** to keep output isolated
6. **/compact before** complex multi-step work
7. **Think first** to plan before implementing
8. **Reference plans** instead of keeping full context
9. **Batch similar operations** in single subagent
10. **Monitor context usage** and respond proactively

---
# Subagent Patterns
Common patterns and best practices for using subagents in Claude Code, with emphasis on thinking delegation for context efficiency.
## The Thinking Delegation Paradigm
**Core insight:** Subagents have isolated context windows. When subagents use extended thinking, that reasoning happens in THEIR context, not the main session's context.
**This enables:**
- Deep analysis (5K+ thinking tokens)
- Main context receives summaries (~200 tokens)
- 23x context efficiency while maintaining analytical rigor
- Sustainable long sessions with multiple complex analyses
**The architecture:**
```
Main Session: Makes decisions, stays focused
↓ delegates with thinking trigger
Subagent: Uses extended thinking in isolation (5K tokens)
↑ returns summary
Main Session: Receives actionable conclusion (200 tokens)
```
**Context math:**
- Traditional: 7K tokens per analysis in main context
- With delegation: 300 tokens per analysis in main context
- Efficiency gain: 23x
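The context math above, as arithmetic. The per-analysis figures (7K tokens of thinking, ~300 tokens kept in main with delegation) are the ones stated in this document:

```python
# Tokens kept in the main session with and without thinking delegation,
# using the per-analysis figures from this document.
def main_context_cost(analyses: int, thinking: int = 7_000,
                      summary: int = 300) -> tuple[int, int, float]:
    """Return (traditional cost, delegated cost, efficiency gain)."""
    traditional = analyses * thinking    # all reasoning lands in main context
    delegated = analyses * summary       # main context only sees summaries
    return traditional, delegated, traditional / delegated
```

One analysis gives 7,000 vs 300 tokens, i.e. roughly the 23x gain quoted above.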
## Thinking-Enabled Subagent Types
### Deep Analyzer (Type: deep_analyzer)
**Purpose:** Complex decisions requiring extensive analysis
**What it does:**
- ALWAYS uses "ultrathink" for analysis
- Evaluates multiple approaches
- Considers tradeoffs and implications
- Returns well-reasoned recommendations
**When to use:**
- Architecture decisions
- Technology evaluations
- Design pattern selection
- Performance optimization strategies
- Security assessments
- Refactoring approach planning
**Example usage:**
```
/agent architecture-advisor "Should we use microservices or modular monolith
for a 10M user e-commerce platform with 8 developers?"
[Subagent thinks deeply in isolation - 5K tokens]
[Returns to main: ~200 token summary with recommendation]
```
### Pattern Researcher (Type: researcher)
**Purpose:** Research with analytical thinking
**What it does:**
- Searches documentation/code
- Uses "think hard" for multi-source analysis
- Synthesizes insights with reasoning
- Returns analysis, not just data
**When to use:**
- API pattern research
- Best practice discovery
- Technology comparison
- Design pattern evaluation
**Example usage:**
```
/agent pattern-researcher "Research authentication patterns in our codebase
and think hard about which approach fits our scale requirements"
[Subagent searches + analyzes - 3K tokens thinking]
[Returns: Summary of patterns with reasoned recommendation]
```
### Code Analyzer (Type: analyzer)
**Purpose:** Architectural insights and deep code analysis
**What it does:**
- Analyzes code structure
- Uses "think harder" for architecture
- Identifies implications and opportunities
- Returns actionable insights
**When to use:**
- Architecture assessment
- Technical debt identification
- Performance bottleneck analysis
- Refactoring opportunity discovery
**Example usage:**
```
/agent code-analyzer "Think deeply about our authentication system's
architecture and identify improvement opportunities"
[Subagent analyzes + thinks - 4K tokens]
[Returns: Key findings with prioritized recommendations]
```
### Test Analyzer (Type: tester)
**Purpose:** Test execution with failure analysis
**What it does:**
- Runs test suites
- Uses "think hard" when tests fail
- Analyzes root causes
- Returns actionable diagnostics
**When to use:**
- Test suite execution
- Failure diagnosis
- Regression analysis
- Coverage assessment
**Example usage:**
```
/agent test-analyzer "Run the auth test suite and if failures occur,
think hard about root causes"
[Subagent runs tests, analyzes failures - 2K tokens thinking]
[Returns: Test status + root cause analysis if needed]
```
## Core Principles
**Subagents have isolated context windows** - They only send relevant information back to the main orchestrator, not their full context. This makes them ideal for tasks that generate lots of intermediary results.
**When to use subagents:**
- Searching through large codebases
- Analyzing multiple files for patterns
- Research tasks with extensive documentation
- Running tests or builds
- Any investigation that doesn't need full project context
**When NOT to use subagents:**
- Quick single-file edits
- Simple queries that need immediate response
- Tasks requiring full project context for decision-making
## Common Patterns
### Pattern 1: Research → Plan → Implement
**Main Context:**
```
1. "Use a subagent to search our codebase for similar authentication implementations"
2. [Review subagent findings]
3. "think about the best approach based on those examples"
4. [Implement in main context]
```
**Why it works:** Research generates lots of search results that would clutter main context. Main agent only sees the summary.
---
### Pattern 2: Parallel Investigation
**Main Context:**
```
"Spin up three subagents:
1. One to analyze our error handling patterns
2. One to check test coverage
3. One to review documentation
Report back with key findings from each."
```
**Why it works:** Each subagent has its own context window. They can work in parallel without interfering with each other.
---
### Pattern 3: Test-Driven Workflow
**Main Context:**
```
1. Write tests in main context
2. "Use a subagent to run the test suite and report results"
3. [Implement fixes based on failures]
4. "Subagent: run tests again"
5. [Repeat until passing]
```
**Why it works:** Test output can be verbose. Subagent filters it down to pass/fail status and specific failures.
---
### Pattern 4: Build Verification
**Main Context:**
```
1. Make code changes
2. "Subagent: run the build and verify it succeeds"
3. [If build fails, review error]
4. [Fix and repeat]
```
**Why it works:** Build logs are long. Subagent only reports success/failure and relevant errors.
---
### Pattern 5: Multi-File Analysis
**Main Context:**
```
"Use a subagent to:
1. Find all files using the old API
2. Analyze migration complexity
3. Return list of files and complexity assessment"
[Review findings]
"Create a migration plan based on that analysis"
```
**Why it works:** File searching and analysis stays in subagent. Main context gets clean summary for planning.
## Usage Syntax
### Starting a Subagent
```
/agent <agent-name> <task-description>
```
or in natural language:
```
"Use a subagent to [task]"
"Spin up a subagent for [task]"
"Delegate [task] to a subagent"
```
### Pre-configured vs Ad-hoc
**Pre-configured agents** (stored in `.claude/agents/`):
```
/agent test-runner run the full test suite
```
**Ad-hoc agents** (created on the fly):
```
"Use a subagent to search the codebase for error handling patterns"
```
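Since pre-configured agents live as Markdown files under `.claude/agents/`, discovering them is just a directory listing. This helper is hypothetical (not part of Claude Code), assuming one `.md` file per agent:

```python
# Hypothetical discovery helper: list pre-configured agents by scanning
# .claude/agents/ for Markdown files, one agent per *.md file.
from pathlib import Path

def list_agents(project_root: str) -> list[str]:
    agents_dir = Path(project_root) / ".claude" / "agents"
    if not agents_dir.is_dir():
        return []                        # no agents configured
    return sorted(p.stem for p in agents_dir.glob("*.md"))
```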
## Example Subagent Configurations
### Research Subagent
**File:** `.claude/agents/researcher.md`
```markdown
# Researcher
Documentation and code search specialist
## Instructions
Search through documentation and code efficiently. Return only the most
relevant information with specific file paths and line numbers.
Summarize findings concisely.
## Allowed Tools
- read
- search
- web_search
## Autonomy Level
Medium - Take standard search actions autonomously
```
### Test Runner Subagent
**File:** `.claude/agents/test-runner.md`
```markdown
# Test Runner
Automated test execution and reporting
## Instructions
Execute test suites and report results clearly. Focus on:
- Pass/fail status
- Specific failing tests
- Error messages and stack traces
- Coverage metrics if available
## Allowed Tools
- bash
- read
## Autonomy Level
High - Execute tests fully autonomously
```
### Code Analyzer Subagent
**File:** `.claude/agents/analyzer.md`
```markdown
# Analyzer
Code analysis and pattern detection
## Instructions
Analyze code structure and identify:
- Duplicate patterns
- Complexity hotspots
- Dependency relationships
- Potential issues
Provide actionable insights with specific locations.
## Allowed Tools
- read
- search
- bash
## Autonomy Level
Medium - Analyze autonomously, ask before making suggestions
```
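The three configuration files above share one shape: an H1 name, a one-line description, then H2 sections. A minimal parser for that shape might look like this; it mirrors the examples here, and the real on-disk format may differ:

```python
# Minimal parser for the agent configuration format shown above:
# '# Name', optional description line(s), then '## Section' bodies.
def parse_agent_config(text: str) -> dict:
    name, description, sections, current = None, [], {}, None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif line.startswith("# "):
            name = line[2:].strip()
        elif line.strip():
            # body lines before any section belong to the description
            (sections[current] if current else description).append(line.strip())
    return {"name": name, "description": " ".join(description),
            "sections": sections}
```

A validator could then check that `Allowed Tools` and `Autonomy Level` are present before registering the agent.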
## Anti-Patterns
### ❌ Using Subagents for Everything
**Bad:**
```
"Use a subagent to edit this single file"
```
**Why:** Overhead of subagent isn't worth it for simple tasks.
**Good:**
```
"Edit this file to add the new function"
```
---
### ❌ Not Providing Clear Task Scope
**Bad:**
```
"Use a subagent to look at the code"
```
**Why:** Too vague. Subagent doesn't know what to focus on.
**Good:**
```
"Use a subagent to search for all database query patterns and assess
which ones are vulnerable to SQL injection"
```
---
### ❌ Expecting Full Context Transfer
**Bad:**
```
Main: [Long discussion about architecture]
Then: "Subagent: implement that plan we just discussed"
```
**Why:** Subagent doesn't have access to your conversation history.
**Good:**
```
"Subagent: implement the authentication module with:
- JWT tokens
- Refresh token rotation
- Rate limiting
Based on our existing user service patterns."
```
## Performance Tips
1. **Front-load research** - Use subagents early for research, then implement in main context
2. **Batch similar tasks** - One subagent for all file searches, not separate subagents per file
3. **Clear instructions** - Be specific about what the subagent should return
4. **Iterate in main context** - Use main context for back-and-forth refinement
5. **Trust the summary** - Don't ask subagent to return full documents
## Advanced: Chaining Subagents
**Scenario:** Complex analysis requiring multiple specialized agents
```
1. "Subagent: search for all API endpoints and list them"
2. [Review list]
3. "Subagent: for each endpoint in that list, check test coverage"
4. [Review coverage report]
5. "Subagent: analyze the untested endpoints and estimate testing effort"
```
**Why chaining works:** Each subagent builds on the previous results without cluttering the main context with intermediary data.
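The chain above can be modeled as a pipeline in which each stage stands in for one subagent call and only its summary output flows to the next stage. The stub agents below are illustrative, not a real API:

```python
# Chained subagents as a pipeline: each stage receives only the previous
# stage's summary, never its full working context.
from typing import Callable

def chain(stages: list[Callable[[str], str]], task: str) -> str:
    result = task
    for stage in stages:
        result = stage(result)   # only the summary is passed forward
    return result

# Stubs mirroring the endpoint-coverage chain above:
def find_endpoints(_: str) -> str:
    return "endpoints: /login, /users"

def check_coverage(summary: str) -> str:
    return summary + " | untested: /users"

def estimate_effort(summary: str) -> str:
    return summary + " | effort estimate: 2 days"
```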
---
## Thinking Delegation Patterns
### Pattern 1: Deep Decision Analysis
**Problem:** Need to make complex architectural decision
**Traditional approach (main context):**
```
"Think deeply about microservices vs monolith"
[5K tokens of thinking in main context]
```
**Thinking delegation approach:**
```
/agent deep-analyzer "Ultrathink about microservices vs monolith
for 10M user platform, 8 dev team, considering deployment, maintenance,
scaling, and team velocity"
[Subagent's isolated context: 6K tokens of thinking]
[Main receives: 250 token summary + recommendation]
```
**Context saved:** ~4,750 tokens (~95%)
---
### Pattern 2: Research → Think → Recommend
**Problem:** Need to research options and provide reasoned recommendation
**Workflow:**
```
Step 1: Research phase
/agent pattern-researcher "Research state management libraries
and think hard about tradeoffs"
[Subagent searches + analyzes in isolation]
[Returns: Options with pros/cons]
Step 2: Decision phase
/agent deep-analyzer "Based on these options, ultrathink and
recommend best fit for our use case"
[Subagent thinks deeply in isolation]
[Returns: Recommendation with rationale]
Step 3: Implementation
[Main context implements based on recommendation]
```
**Why it works:** Research and analysis isolated, implementation focused
---
### Pattern 3: Iterative Analysis Refinement
**Problem:** Need to analyze multiple aspects without context accumulation
**Workflow:**
```
Round 1: /agent analyzer "Think about performance implications"
[Returns summary to main]
Round 2: /agent analyzer "Think about security implications"
[Returns summary to main]
Round 3: /agent deep-analyzer "Synthesize performance and security
analyses, recommend approach"
[Returns final recommendation to main]
Main context: Make decision with 3 concise summaries (~600 tokens total)
```
**vs Traditional:**
```
"Think about performance" [3K tokens in main]
"Think about security" [3K tokens in main]
"Synthesize" [needs both analyses in context]
Total: 6K+ tokens
```
**Context efficiency:** 10x improvement
---
### Pattern 4: Parallel Deep Analysis
**Problem:** Multiple independent analyses needed
**Workflow:**
```
/agent analyzer-1 "Think deeply about database options"
/agent analyzer-2 "Think deeply about caching strategies"
/agent analyzer-3 "Think deeply about API design patterns"
[Each analyzes in parallel, isolated contexts]
[Each returns summary]
/agent deep-analyzer "Synthesize these analyses into coherent architecture"
[Returns integrated recommendation]
```
**Why it works:** Multiple deep analyses happen without accumulating in main context
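The fan-out shape can be sketched with threads standing in for isolated subagent contexts: each analyzer runs independently and only a short summary returns for synthesis. The `analyze` stub is a placeholder, not a real agent call:

```python
# Parallel deep analysis as a fan-out: each analyzer runs independently
# (a thread stands in for an isolated subagent context) and synthesis
# works from summaries only. analyze() is a stub, not a real API.
from concurrent.futures import ThreadPoolExecutor

def analyze(topic: str) -> str:
    # Placeholder for: /agent <analyzer> "Think deeply about <topic>"
    return f"summary({topic})"

def parallel_analysis(topics: list[str]) -> str:
    with ThreadPoolExecutor() as pool:
        summaries = list(pool.map(analyze, topics))  # preserves topic order
    # Synthesis sees only the summaries, never the full reasoning
    return " + ".join(summaries)
```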
---
### Pattern 5: Test-Driven Development with Thinking
**Problem:** TDD cycle fills context with test output and debugging analysis
**Traditional TDD:**
```
Write test → Run test (verbose output) → Debug (thinking in main) → Fix → Repeat
[Context fills with test output + debugging thinking]
```
**Thinking delegation TDD:**
```
1. Write test in main context (focused)
2. /agent test-analyzer "Run test, if failure think hard about root cause"
3. [Subagent runs + analyzes in isolation]
4. [Returns: Status + root cause analysis if needed]
5. Fix based on analysis in main context
6. /agent test-analyzer "Verify fix"
7. Repeat until passing
```
**Why it works:** Test output and failure analysis isolated, main context stays implementation-focused
---
### Pattern 6: Refactoring with Deep Assessment
**Problem:** Large refactoring needs strategy without filling main context
**Workflow:**
```
Step 1: Assessment
/agent analyzer "Think deeply about refactoring scope, risks,
and approach for legacy auth system"
[Subagent analyzes codebase + thinks in isolation - 4K tokens]
[Returns: Risk assessment + strategy - 300 tokens]
Step 2: Planning
Create REFACTOR.md in main context based on strategy
Step 3: Execution
/clear
For each module:
- Refactor based on plan
- /agent test-analyzer "verify changes"
- Commit
- /clear
```
**Why it works:** Deep analysis happens once (isolated), execution follows clean plan
---
### Pattern 7: Compound Decision Making
**Problem:** Multi-layer decision with dependencies
**Workflow:**
```
Layer 1: Foundation decision
/agent deep-analyzer "Ultrathink: Relational vs NoSQL for our use case"
[Returns: Relational recommended]
Layer 2: Specific technology
/agent deep-analyzer "Given relational choice, ultrathink:
PostgreSQL vs MySQL vs MariaDB"
[Returns: PostgreSQL recommended with reasoning]
Layer 3: Architecture details
/agent deep-analyzer "Given PostgreSQL, ultrathink: Replication
strategy for our scale"
[Returns: Streaming replication recommended]
Main context: Has 3 clear decisions (~600 tokens total)
```
**vs Traditional:** All thinking would accumulate in main context (12K+ tokens)
---
## Advanced Thinking Patterns
### Meta-Pattern: Thinking Chain
For extremely complex decisions requiring multiple analytical lenses:
```
1. /agent deep-analyzer "Analyze from business perspective"
2. /agent deep-analyzer "Analyze from technical perspective"
3. /agent deep-analyzer "Analyze from security perspective"
4. /agent deep-analyzer "Analyze from cost perspective"
5. /agent deep-analyzer "Synthesize all perspectives and recommend"
Main context receives: 5 concise analyses → integrated recommendation
Total in main: ~1K tokens
vs Traditional: 25K+ tokens of accumulated thinking
```
### Meta-Pattern: Thinking Cascade
When decision depends on answering prior questions:
```
Q1: /agent deep-analyzer "Should we build or buy?"
[Returns: Build recommended because...]
Q2: /agent deep-analyzer "Given building, which framework?"
[Returns: React recommended because...]
Q3: /agent deep-analyzer "Given React, which state management?"
[Returns: Zustand recommended because...]
Each analysis builds on the previous conclusion, not the previous reasoning
```
---