Initial commit

Author: Zhongwei Li
Date: 2025-11-29 18:17:35 +08:00
Commit: 1c7d065a98
11 changed files with 2005 additions and 0 deletions

commands/analyze.md (new file, 56 lines)

@@ -0,0 +1,56 @@
# Analyze File with Ollama
Analyze the specified file using the ollama agent pipeline.
**Usage:** `/analyze <file_path> [focus_area]`
**Examples:**
- `/analyze src/auth.py security` - Security analysis
- `/analyze README.md` - General analysis
- `/analyze implementation-plan.md architecture` - Architecture focus
---
You are an intelligent task router for ollama-based analysis.
**Task:** Analyze the file at path: $1
**Focus Area:** $2
**Your Process:**
1. **Check File:**
- Use Read tool to verify file exists
- Get file size and estimate tokens
- Determine if chunking needed
2. **Select Strategy** (see the sizing sketch after this list):
- Small files (< 20KB): Direct ollama-prompt
- Large files (>= 20KB): Use ollama-chunked-analyzer approach
- Complex analysis: Consider multi-perspective analysis
3. **Invoke Agent:**
Use the Task tool to invoke the ollama-task-router agent:
- Pass the file path: $1
- Pass the focus area: $2
- Let the agent handle model selection and execution
4. **Agent Will:**
- Select appropriate model (kimi-k2-thinking, deepseek, qwen3-vl)
- Route to chunked analyzer if file is large
- Execute analysis with ollama-prompt
- Return synthesized results
5. **Your Role:**
- Receive agent's analysis report
- Present findings to user concisely
- Highlight critical issues
- Provide actionable recommendations
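To make the size-based routing in steps 1-2 concrete, here is a minimal shell sketch. It assumes the rough 4-characters-per-token heuristic; the 20KB threshold and the agent names are the ones used by this command.
```bash
# Minimal sketch of the size check in steps 1-2 (the ~4 characters-per-token
# estimate is a rule of thumb; the threshold mirrors the strategy above).
FILE="$1"
if [ ! -f "$FILE" ]; then
  echo "File not found: $FILE" >&2
  exit 1
fi
SIZE=$(wc -c < "$FILE" | tr -d ' ')
EST_TOKENS=$((SIZE / 4))
echo "Size: ${SIZE} bytes (~${EST_TOKENS} tokens)"
if [ "$SIZE" -lt 20480 ]; then
  echo "Strategy: direct ollama-prompt"
else
  echo "Strategy: ollama-chunked-analyzer (chunking needed)"
fi
```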
**Focus Areas:**
- security: Vulnerabilities, attack vectors, security best practices
- architecture: Design patterns, scalability, maintainability
- performance: Bottlenecks, optimization opportunities
- quality: Code quality, best practices, refactoring needs
- general: Comprehensive overview
**Remember:** This delegates to ollama to save your context budget!

commands/architect.md (new file, 101 lines)

@@ -0,0 +1,101 @@
# Architecture Analysis with Ollama
Analyze system architecture, design patterns, and structural decisions.
**Usage:** `/architect <file_or_directory> [aspect]`
**Aspects:**
- `patterns`: Design patterns and architectural patterns
- `scalability`: Scalability analysis
- `security`: Security architecture
- `dependencies`: Dependency analysis
- `all`: Comprehensive architecture review (default)
**Examples:**
- `/architect src/` - Full architecture analysis
- `/architect docs/architecture.md patterns` - Pattern analysis
- `/architect src/api/ scalability` - Scalability review
---
You are performing architecture analysis by orchestrating ollama agents.
**Target:** $1
**Aspect:** ${2:-all}
**Your Process:**
1. **Understand Scope:**
- Read architecture documentation if available
- Identify key components and modules (via Glob/Grep)
- Map dependencies and relationships
2. **Invoke Appropriate Agent** (a dispatch sketch follows at the end of this command):
**Specific Aspect Analysis:**
Use ollama-task-router agent with focused prompt:
- Target: $1
- Aspect: ${2:-all}
- Request specific analysis (patterns, scalability, security, dependencies)
**Comprehensive Analysis (aspect=all):**
Use ollama-parallel-orchestrator agent:
- Perspectives: architecture, security, scalability, maintainability
- Target: $1
- Multi-angle deep analysis
3. **Analysis Framework (for agent to apply):**
**Structure:**
- Separation of Concerns: Are responsibilities clearly separated?
- Modularity: Are modules cohesive and loosely coupled?
- Layering: Is there clear layering (presentation, business, data)?
- Abstraction: Are abstractions at appropriate levels?
**Quality Attributes:**
- Scalability: Can system handle growth?
- Maintainability: Is code easy to modify?
- Testability: Can components be tested independently?
- Security: Are security principles followed?
- Performance: Are performance requirements met?
**Design Principles:**
- SOLID principles
- DRY (Don't Repeat Yourself)
- YAGNI (You Aren't Gonna Need It)
- KISS (Keep It Simple)
4. **Your Role:**
- Invoke appropriate agent based on aspect
- Receive architectural analysis
- Format findings for user
- Highlight key insights and recommendations
5. **Report Format:**
```
## Architecture Analysis
**Target:** $1
**Aspect:** ${2:-all}
### Architecture Overview
- High-level structure
- Key components
- Design patterns identified
### Strengths
- What's working well
- Good architectural decisions
### Concerns
- Architectural issues
- Anti-patterns found
- Technical debt
### Recommendations
- Specific improvements
- Refactoring suggestions
- Pattern applications
```
**Remember:** Delegate deep architectural analysis to agents. You focus on presenting clear, actionable insights.
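As a rough illustration of the dispatch in step 2, here is a minimal shell sketch of how the aspect argument could select between the single-agent and parallel paths. The agent names and perspective list come from this command; the actual routing is performed via the Task tool, so the shell form is only illustrative.
```bash
# Sketch only: the real routing happens through the Task tool, not a shell script.
TARGET="$1"
ASPECT="${2:-all}"
case "$ASPECT" in
  all)
    echo "Invoke ollama-parallel-orchestrator on $TARGET"
    echo "Perspectives: architecture, security, scalability, maintainability"
    ;;
  patterns|scalability|security|dependencies)
    echo "Invoke ollama-task-router on $TARGET with focused aspect: $ASPECT"
    ;;
  *)
    echo "Unknown aspect '$ASPECT'; falling back to comprehensive review" >&2
    echo "Invoke ollama-parallel-orchestrator on $TARGET"
    ;;
esac
```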

commands/deep-analyze.md (new file, 163 lines)

@@ -0,0 +1,163 @@
# Deep Multi-Perspective Analysis
Comprehensive analysis using parallel orchestrator with multiple perspectives.
**Usage:** `/deep-analyze <file> [perspectives]`
**Perspectives:** (comma-separated, max 4)
- `security`: Security vulnerabilities and threat modeling
- `architecture`: Design patterns and structural analysis
- `implementation`: Code quality and best practices
- `testing`: Test coverage and validation strategies
- `performance`: Bottlenecks and optimization opportunities
**Examples:**
- `/deep-analyze implementation-plan.md` - Auto-select perspectives
- `/deep-analyze src/auth.py security,testing` - Focus on security and testing
- `/deep-analyze architecture.md architecture,scalability` - Architecture focused
---
You are performing deep multi-perspective analysis using the parallel orchestrator agent.
**Target:** $1
**Perspectives:** ${2:-auto}
**Your Process:**
1. **Validate Target:**
- Verify file/directory exists (use Read/Glob tools)
- Check size and estimate tokens
- Ensure suitable for deep analysis (not trivial files)
2. **Determine Perspectives** (a selection sketch follows this process list):
**Auto-Select (when $2 is empty or `auto`):**
Based on file type:
- Code files (.py, .js, .ts, etc.): security, quality, testing
- Architecture docs: architecture, scalability, security
- Implementation plans: security, architecture, implementation
- API specs: security, architecture, performance
**User-Specified:**
Parse comma-separated list from $2
Validate 2-4 perspectives
3. **Invoke Parallel Orchestrator Agent:**
Use Task tool to invoke ollama-parallel-orchestrator:
- Target file: $1
- Perspectives: Parsed list (2-4 perspectives)
- Agent will:
* Decompose into parallel analyses
* Execute concurrently
* Track sessions
* Synthesize results
4. **Perspectives Explained:**
**Security:**
- Vulnerabilities and attack vectors
- Threat modeling
- Authentication/authorization
- Input validation
- Secrets management
**Architecture:**
- Design patterns
- Structural organization
- Separation of concerns
- Modularity and coupling
- Scalability considerations
**Implementation:**
- Code quality and readability
- Best practices adherence
- Error handling
- Edge case coverage
- Refactoring opportunities
**Testing:**
- Test coverage assessment
- Testing strategy
- Edge cases and corner cases
- Integration points
- Test quality
**Performance:**
- Bottleneck identification
- Algorithm efficiency
- Resource utilization
- Caching opportunities
- Optimization recommendations
5. **Your Role:**
- Invoke ollama-parallel-orchestrator agent via Task tool
- Receive comprehensive synthesized analysis
- Format report for user
- Highlight critical findings
- Present prioritized recommendations
6. **Expected Report Format (from agent):**
```
# Deep Analysis Report
**Target:** $1
**Perspectives:** [list]
**Orchestration ID:** [id]
## Executive Summary
[High-level summary across all perspectives]
## Critical Findings
### Security Critical
- [Issues requiring immediate attention]
### Architecture Critical
- [Structural issues with major impact]
### Implementation Critical
- [Code quality issues needing urgent fix]
## Analysis by Perspective
[Detailed findings from each perspective]
## Cross-Perspective Insights
[Common themes and patterns]
## Prioritized Recommendations
1. [Highest priority]
2. [Second priority]
...
## Next Steps
[Actionable items]
```
7. **Session Tracking:**
- Agent saves results to `~/.claude/orchestrations/[id].json` (see the listing sketch at the end of this command)
- Session includes all perspective analyses
- Synthesis strategy applied
- Full audit trail maintained
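A minimal sketch of the perspective selection in step 2, under the assumptions above: selection keys off the file extension or name (the file-name patterns themselves are illustrative), the mappings follow the auto-select list, and the 2-4 bound is the validation rule from this command.
```bash
# Sketch of step 2: auto-select perspectives by file type, or validate a user-supplied list.
TARGET="$1"
REQUESTED="${2:-auto}"
if [ "$REQUESTED" = "auto" ]; then
  case "$TARGET" in
    *.py|*.js|*.ts) PERSPECTIVES="security,quality,testing" ;;            # code files
    *architecture*) PERSPECTIVES="architecture,scalability,security" ;;   # architecture docs
    *plan*)         PERSPECTIVES="security,architecture,implementation" ;; # implementation plans
    *)              PERSPECTIVES="security,architecture,implementation" ;; # default
  esac
else
  PERSPECTIVES="$REQUESTED"
fi
# Enforce the 2-4 perspective bound before invoking the orchestrator.
COUNT=$(echo "$PERSPECTIVES" | tr ',' '\n' | wc -l | tr -d ' ')
if [ "$COUNT" -lt 2 ] || [ "$COUNT" -gt 4 ]; then
  echo "Expected 2-4 perspectives, got $COUNT: $PERSPECTIVES" >&2
  exit 1
fi
echo "Invoke ollama-parallel-orchestrator on $TARGET with: $PERSPECTIVES"
```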
**When to Use Deep Analysis:**
- Comprehensive code reviews
- Architecture decision making
- Security audits
- Pre-production validation
- Complex refactoring planning
- Technical debt assessment
**When NOT to Use:**
- Simple file reviews (use `/analyze` instead)
- Quick checks (use `/review quick`)
- Small files < 100 lines
- Trivial changes
**Token Efficiency:**
- Deep analysis delegates to ollama-parallel-orchestrator
- Saves ~70% of Claude's context
- Enables multiple comprehensive analyses per session
- Parallel execution faster than sequential
**Remember:** This invokes the most comprehensive analysis. The parallel orchestrator handles all complexity. You just present the synthesized results clearly.
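Because the orchestrator records each run under `~/.claude/orchestrations/` (step 7), past sessions can be reviewed by listing those files. This is only a convenience sketch and assumes nothing about the JSON schema inside them.
```bash
# List recent orchestration sessions, newest first; the file contents are not parsed here.
ls -lt ~/.claude/orchestrations/*.json 2>/dev/null | head -n 10
```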

commands/models.md (new file, 181 lines)

@@ -0,0 +1,181 @@
# Manage Ollama Models
Discover, list, and manage ollama models for the agent pipeline.
**Usage:** `/models [action] [target]`
**Actions:**
- `discover`: Scan and register all installed ollama models
- `list`: Show all registered models and capabilities
- `check <model>`: Verify specific model availability
- `defaults`: Show default models for each task type
**Examples:**
- `/models discover` - Scan for new models
- `/models list` - Show all models
- `/models check kimi-k2-thinking:cloud` - Check if model available
- `/models defaults` - Show default selections
---
You are managing the ollama model registry.
**Action:** ${1:-list}
**Target:** $2
**Your Process:**
1. **Execute Action:**
**Discover:**
```bash
# Scan ollama and update registry
~/.claude/scripts/discover-models.sh
# Show results
cat ~/.claude/model-capabilities.json | python3 -c "
import json, sys
data = json.load(sys.stdin)
print(f'Discovered {len(data[\"models\"])} models:')
for model, info in data['models'].items():
caps = ', '.join(set(info['capabilities']))
print(f' - {model}: {caps}')
"
```
**List:**
```bash
# Show all models with capabilities
python3 -c "
import json
from pathlib import Path
registry_file = Path.home() / '.claude' / 'model-capabilities.json'
with open(registry_file, 'r', encoding='utf-8') as f:
data = json.load(f)
print('## Registered Models\n')
for model, info in sorted(data['models'].items()):
caps = ', '.join(set(info['capabilities']))
family = info.get('family', 'unknown')
context = info.get('context_window', 'unknown')
cost = info.get('cost', 'unknown')
print(f'### {model}')
print(f' - Family: {family}')
print(f' - Capabilities: {caps}')
if isinstance(context, int):
print(f' - Context: {context:,} tokens')
else:
print(f' - Context: {context}')
print(f' - Cost: {cost}')
print()
"
```
**Check:**
```bash
# Check if specific model is available
~/.claude/scripts/check-model.sh $2
```
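The `check` action relies on `~/.claude/scripts/check-model.sh`, which is not included in this commit. Here is a minimal sketch of what such a check might do, using `ollama list` (the script body and exit codes are assumptions):
```bash
# Hypothetical check-model.sh sketch: succeed if the named model appears in `ollama list`.
MODEL="$1"
if [ -z "$MODEL" ]; then
  echo "Usage: check-model.sh <model-name>" >&2
  exit 2
fi
if ollama list | awk 'NR > 1 {print $1}' | grep -Fxq "$MODEL"; then
  echo "Model available: $MODEL"
else
  echo "Model not found: $MODEL (try: ollama pull $MODEL)" >&2
  exit 1
fi
```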
**Defaults:**
```bash
# Show default model selections
python3 -c "
import json
from pathlib import Path
registry_file = Path.home() / '.claude' / 'model-capabilities.json'
with open(registry_file, 'r', encoding='utf-8') as f:
data = json.load(f)
print('## Default Models by Task\n')
defaults = data.get('user_defaults', {})
for task, model in sorted(defaults.items()):
print(f'- **{task}**: {model}')
print('\n## Task Preferences with Fallbacks\n')
prefs = data.get('task_preferences', {})
for task, config in sorted(prefs.items()):
if config.get('preferred'):
print(f'### {task}')
print(f' Preferred: {config[\"preferred\"][0]}')
if config.get('fallback'):
fallbacks = config['fallback'][:3]
print(f' Fallbacks: {\" -> \".join(fallbacks)}')
print()
"
```
2. **Model Capability Reference:**
**Vision Models:**
- qwen3-vl:235b-instruct-cloud (best vision, 262K context)
- qwen3:1.7b (lightweight, has vision)
**Code Models:**
- kimi-k2-thinking:cloud (reasoning + code, 262K context)
- deepseek-v3.1:671b-cloud (strong code, 163K context)
- qwen2.5-coder:3b (lightweight coder)
**Reasoning Models:**
- kimi-k2-thinking:cloud (explicit thinking)
- deepseek-v3.1:671b-cloud (strong reasoning)
**General Purpose:**
- All models have general capability
- Prefer larger models for complex tasks
3. **Registry Location:**
- File: `~/.claude/model-capabilities.json`
- Contains: Models, capabilities, defaults, task preferences
- Auto-updated: By discover-models.sh
4. **Capability Taxonomy:**
- `vision`: Image analysis, OCR, screenshots
- `code`: Code review, refactoring, security
- `reasoning`: Multi-step logic, complex analysis
- `general`: General purpose tasks
**Common Operations:**
```bash
# After installing new ollama model
/models discover
# Before using specific model
/models check deepseek-v3.1:671b-cloud
# See what's available
/models list
# Check your defaults
/models defaults
```
**Registry Structure:**
```json
{
"models": {
"model-name": {
"capabilities": ["code", "reasoning"],
"context_window": 128000,
"family": "deepseek",
"cost": "cloud"
}
},
"user_defaults": {
"code": "kimi-k2-thinking:cloud",
"vision": "qwen3-vl:235b-instruct-cloud"
},
"task_preferences": {
"code": {
"preferred": ["kimi-k2-thinking:cloud"],
"fallback": ["deepseek-v3.1:671b-cloud", ...]
}
}
}
```
**Remember:** Keep your model registry up to date for best agent performance!
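As an example of how the registry can drive model selection, here is a small sketch that resolves a task capability to a registered model, walking `user_defaults`, then `preferred`, then `fallback` in order. The field names match the registry structure shown above; the resolution order itself is an assumption.
```bash
# Resolve a model for a given capability from the registry (structure as shown above).
TASK="code"   # capability to resolve, e.g. code, vision, reasoning, general
python3 -c "
import json, sys
from pathlib import Path
task = sys.argv[1]
data = json.loads((Path.home() / '.claude' / 'model-capabilities.json').read_text())
# Candidate order: explicit user default, then preferred, then fallback lists.
candidates = []
if task in data.get('user_defaults', {}):
    candidates.append(data['user_defaults'][task])
prefs = data.get('task_preferences', {}).get(task, {})
candidates += prefs.get('preferred', []) + prefs.get('fallback', [])
# Keep only models that are actually registered.
registered = data.get('models', {})
for model in candidates:
    if model in registered:
        print(model)
        break
else:
    print(f'No registered model found for task: {task}', file=sys.stderr)
    sys.exit(1)
" "$TASK"
```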

commands/review.md (new file, 93 lines)

@@ -0,0 +1,93 @@
# Code Review with Ollama
Perform comprehensive code review using ollama agents.
**Usage:** `/review <file_or_directory> [strictness]`
**Strictness Levels:**
- `quick`: Fast review, major issues only
- `standard`: Balanced review (default)
- `thorough`: Deep analysis with security, quality, and architecture
**Examples:**
- `/review src/auth.py` - Standard review of auth module
- `/review src/api/ thorough` - Deep review of API directory
- `/review main.py quick` - Quick check
---
You are performing a code review by orchestrating ollama agents.
**Target:** $1
**Strictness:** ${2:-standard}
**Your Process:**
1. **Determine Scope** (see the scope sketch at the end of this command):
- Single file: Direct analysis via ollama-task-router
- Directory: Review key files (main entry points, complex modules)
- Large codebase: Focus on changed files or critical paths
2. **Select Review Strategy:**
**Quick Review:**
Invoke ollama-task-router agent:
- Request: Quick code review focusing on critical bugs and security
- Target: $1
- Agent handles model selection and execution
**Standard Review:**
Invoke ollama-task-router agent:
- Request: Standard code review
- Checklist: Security, quality, bugs, performance, best practices
- Target: $1
**Thorough Review:**
Invoke ollama-parallel-orchestrator agent:
- Perspectives: security, quality, architecture, testing
- Target: $1
- Multi-angle comprehensive analysis
3. **Review Checklist (for agent to cover):**
- Security: Injection, XSS, auth issues, secrets in code
- Quality: Naming, structure, complexity, duplication
- Bugs: Logic errors, edge cases, error handling
- Performance: Inefficient algorithms, memory leaks
- Best Practices: Language idioms, design patterns
- Testing: Test coverage, test quality
4. **Your Role:**
- Invoke appropriate agent based on strictness level
- Receive agent's analysis
- Format results for user
- Prioritize findings by severity
5. **Report Format:**
```
## Code Review Summary
**File/Directory:** $1
**Strictness:** ${2:-standard}
### Critical Issues (Fix Immediately)
- [From agent analysis]
### Major Issues (Fix Soon)
- [From agent analysis]
### Minor Issues (Consider Fixing)
- [From agent analysis]
### Positive Observations
- [From agent analysis]
### Recommendations
- [Actionable items]
```
6. **Priority Levels:**
- CRITICAL: Security vulnerabilities, data loss risks
- MAJOR: Bugs, performance issues, maintainability problems
- MINOR: Style issues, minor optimizations
**Remember:** Agents handle the heavy analysis. You orchestrate and present results clearly.
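To make the scope determination in step 1 concrete, here is a minimal sketch that distinguishes a single file from a directory and, inside a git repository, narrows a directory target to recently changed files. The git commands are standard; the fallbacks are illustrative.
```bash
# Sketch of step 1: decide review scope before delegating to an agent.
TARGET="$1"
if [ -f "$TARGET" ]; then
  echo "Scope: single file -> ollama-task-router on $TARGET"
elif [ -d "$TARGET" ]; then
  # Inside a git repository, prefer recently changed files; otherwise fall back to a full listing.
  if git -C "$TARGET" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    echo "Scope: changed files under $TARGET"
    git -C "$TARGET" diff --name-only HEAD~1 2>/dev/null || git -C "$TARGET" ls-files
  else
    echo "Scope: directory review of $TARGET (focus on entry points and complex modules)"
  fi
else
  echo "Target not found: $TARGET" >&2
  exit 1
fi
```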