Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 09:05:29 +08:00
commit 91fde12a8b
51 changed files with 11738 additions and 0 deletions

## Dependency Analysis
Analyzes your project's dependencies and checks architecture health.
### Usage
```bash
/dependency-analysis [options]
```
### Options
- `--visual`: Visually display dependencies
- `--circular`: Detect only circular dependencies
- `--depth <number>`: Specify analysis depth (default: 3)
- `--focus <path>`: Focus on specific module/directory
### Basic Examples
```bash
# Analyze dependencies for entire project
/dependency-analysis
# Detect circular dependencies
/dependency-analysis --circular
# Detailed analysis of specific module
/dependency-analysis --focus src/core --depth 5
```
### What Gets Analyzed
#### 1. Dependency Matrix
Shows how modules connect to each other:
- Direct dependencies
- Indirect dependencies
- Dependency depth
- Fan-in/fan-out
#### 2. Architecture Violations
- Layer violations (when lower layers depend on upper ones)
- Circular dependencies
- Excessive coupling (too many connections)
- Orphaned modules
#### 3. Clean Architecture Check
- Is the domain layer independent?
- Is infrastructure properly separated?
- Do use case dependencies flow correctly?
- Are interfaces being used properly?
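The layer checks above can be approximated with plain `grep` before reaching for a dedicated tool. A minimal, self-contained sketch (the paths and file contents are hypothetical, chosen to match the examples in this document):

```shell
# Hypothetical fixture: a domain module that leaks a dependency on infra
mkdir -p /tmp/depcheck/src/domain /tmp/depcheck/src/infra
cat > /tmp/depcheck/src/domain/user.js <<'EOF'
import { db } from '../infra/database.js'; // layer violation
EOF

# Flag any domain file that imports from the infrastructure layer
violations=$(grep -rl "infra/" /tmp/depcheck/src/domain | wc -l)
echo "domain->infra violations: $violations"
```

A real check would also cover `src/api` imports and transitive edges, which is where tools like `madge` take over.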
### Output Example
```text
Dependency Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Metrics Overview
├─ Total modules: 42
├─ Average dependencies: 3.2
├─ Maximum dependency depth: 5
└─ Circular dependencies: 2 detected
⚠️ Architecture Violations
├─ [HIGH] src/domain/user.js → src/infra/database.js
│ └─ Domain layer directly depends on infrastructure layer
├─ [MED] src/api/auth.js ⟲ src/services/user.js
│ └─ Circular dependency detected
└─ [LOW] src/utils/helper.js → 12 modules
└─ Excessive fan-out
✅ Recommended Actions
1. Introduce UserRepository interface
2. Redesign authentication service responsibilities
3. Split helper functions by functionality
📈 Dependency Graph
[Visual dependency diagram displayed in ASCII art]
```
### Advanced Usage Examples
```bash
# Automatic CI/CD checks
/dependency-analysis --circular --fail-on-violation
# Check against architecture rules
/dependency-analysis --rules .architecture-rules.yml
# See how dependencies changed
/dependency-analysis --compare HEAD~10
```
### Configuration File Example (.dependency-analysis.yml)
```yaml
rules:
  - name: "Domain Independence"
    source: "src/domain/**"
    forbidden: ["src/infra/**", "src/api/**"]
  - name: "API Layer Dependencies"
    source: "src/api/**"
    allowed: ["src/domain/**", "src/application/**"]
    forbidden: ["src/infra/**"]
thresholds:
  max_dependencies: 8
  max_depth: 4
  coupling_threshold: 0.7
ignore:
  - "**/test/**"
  - "**/mocks/**"
```
### Tools We Use
- `madge`: Shows JavaScript/TypeScript dependencies visually
- `dependency-cruiser`: Validates dependencies against configurable rules
- `nx`: Manages monorepo dependencies
- `plato`: Analyzes complexity and dependencies together
### Collaboration with Claude
```bash
# Check dependencies with package.json
cat package.json
/dependency-analysis
"Find dependency issues in this project"
# Deep dive into a specific module
ls -la src/core/
/dependency-analysis --focus src/core
"Check the core module's dependencies in detail"
# Compare design vs reality
cat docs/architecture.md
/dependency-analysis --visual
"Does our implementation match the architecture docs?"
```
### Notes
- **Run from**: Project root directory
- **Be patient**: Large projects take time to analyze
- **Act fast**: Fix circular dependencies as soon as you find them
### Best Practices
1. **Check weekly**: Keep an eye on dependency health
2. **Write rules down**: Put architecture rules in config files
3. **Small steps**: Fix things gradually, not all at once
4. **Track trends**: Watch how complexity changes over time
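For point 4, even a one-line CSV appended per run is enough to see trends over time. A hypothetical sketch (the file name and metric values are illustrative, taken from the report example above):

```shell
# Append one row of dependency metrics per analysis run
trend=/tmp/deps-trend.csv
[ -f "$trend" ] || echo "date,modules,avg_deps,circular" > "$trend"
echo "$(date +%F),42,3.2,2" >> "$trend"   # values from this run's report
tail -n 1 "$trend"
```

Plotting the `circular` column over a few weeks makes regressions obvious before they compound.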

## Analyze Performance
Analyzes application performance from a user experience perspective and quantifies experience improvements from optimizations. Calculates UX scores based on Core Web Vitals and proposes prioritized optimization strategies.
### UX Performance Score
```text
User Experience Score: B+ (78/100)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⏱️ Core Web Vitals
├─ LCP (Loading): 2.3s [Good] Target<2.5s ✅
├─ FID (Interaction): 95ms [Good] Target<100ms ✅
├─ CLS (Visual Stability): 0.08 [Good] Target<0.1 ✅
├─ FCP (First Paint): 1.8s [Good] Target ≤1.8s ✅
├─ TTFB (Server): 450ms [Needs Work] Target<200ms ⚠️
└─ TTI (Interactive): 3.5s [Needs Work] Target<3.8s ⚠️
📊 Perceived Speed
├─ Initial Load: 2.3s [Industry avg: 3.0s]
├─ Page Navigation: 1.1s [Industry avg: 1.5s]
├─ Search Results: 0.8s [Industry avg: 1.2s]
├─ Form Submission: 1.5s [Industry avg: 2.0s]
└─ Image Loading: Lazy loading implemented ✅
😊 User Satisfaction Prediction
├─ Bounce Rate: 12% (Industry avg: 20%)
├─ Completion Rate: 78% (Target: 85%)
├─ NPS Score: +24 (Industry avg: +15)
└─ Return Rate: 65% (Target: 70%)
📊 User Experience Impact
├─ 0.5s faster display → -7% bounce rate
├─ 5% bounce reduction → +15% session length
├─ Search improvement → +15% time on site
└─ Overall UX improvement: +25%
🎯 Expected Improvement Effects (Priority Order)
├─ [P0] TTFB improvement (CDN) → LCP -0.3s = +15% perceived speed
├─ [P1] JS bundle optimization → TTI -0.8s = -20% interactive time
├─ [P2] Image optimization (WebP) → -40% transfer = -25% load time
└─ [P3] Cache strategy → 50% faster repeat visits
```
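The report's pass/fail marks can be folded into a single number. A sketch under an assumed scoring rule (equal weight per Core Web Vital; this document does not prescribe a formula, and the thresholds are the targets from the report above):

```shell
# Score each Core Web Vital as pass (1) or fail (0) against its target,
# then average to a 0-100 score. Metric values are from the example report.
score=$(awk 'BEGIN {
  pass = 0; total = 6
  if (2.3  < 2.5)   pass++   # LCP (s)
  if (95   < 100)   pass++   # FID (ms)
  if (0.08 < 0.1)   pass++   # CLS
  if (1.8  <= 1.8)  pass++   # FCP (s)
  if (450  < 200)   pass++   # TTFB (ms) - the one failing metric
  if (3500 < 3800)  pass++   # TTI (ms)
  printf "%d", pass * 100 / total
}')
echo "CWV pass score: $score/100"
```

A weighted variant (e.g. LCP counting double) is a small change to the same structure.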
### Usage
```bash
# Comprehensive UX score analysis
find . -name "*.js" -o -name "*.ts" | xargs wc -l | sort -rn | head -10
"Calculate UX performance score and evaluate Core Web Vitals"
# Performance bottleneck detection
grep -r "for.*await\|forEach.*await" . --include="*.js"
"Detect async processing bottlenecks and analyze UX impact"
# User experience impact analysis
grep -r "addEventListener\|setInterval" . --include="*.js" | grep -v "removeEventListener\|clearInterval"
"Analyze performance impact on user experience"
```
### Basic Examples
```bash
# Bundle size and load time
npm ls --depth=0 && find ./public -name "*.js" -o -name "*.css" | xargs ls -lh
"Identify bundle size and asset optimization improvements"
# Database performance
grep -r "SELECT\|findAll\|query" . --include="*.js" | head -20
"Analyze database query optimization points"
# Dependency performance impact
npm outdated && npm audit
"Evaluate performance impact of outdated dependencies"
```
### Analysis Perspectives
#### 1. Code-Level Problems
- **O(n²) Algorithms**: Detect inefficient array operations
- **Synchronous I/O**: Identify blocking processes
- **Redundant Processing**: Remove unnecessary calculations or requests
- **Memory Leaks**: Manage event listeners and timers
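Several of these problems are greppable. A self-contained illustration using a hypothetical sample file (the patterns mirror the grep commands earlier in this document):

```shell
# Hypothetical file exhibiting two code-level problems
cat > /tmp/slow.js <<'EOF'
const fs = require('fs');
items.forEach(async (i) => { await save(i); }); // serializes async work
const data = fs.readFileSync('big.json');       // blocks the event loop
EOF

# Count lines matching await-in-loop or synchronous I/O patterns
hits=$(grep -cE "forEach.*await|readFileSync" /tmp/slow.js)
echo "potential bottlenecks: $hits"
```

Grep only surfaces candidates; each hit still needs a human (or profiler) to confirm real UX impact.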
#### 2. Architecture-Level Problems
- **N+1 Queries**: Database access patterns
- **Missing Cache**: Repeated calculations or API calls
- **Bundle Size**: Unnecessary libraries or code splitting
- **Resource Management**: Connection pools and thread usage
#### 3. Technical Debt Impact
- **Legacy Code**: Performance degradation from old implementations
- **Design Issues**: High coupling from poor responsibility distribution
- **Insufficient Testing**: Missing performance regression detection
- **Monitoring Gaps**: Early problem detection system
### Performance Improvement ROI Matrix
```text
Improvement ROI = (Time Savings + Quality Improvement) ÷ Implementation Effort
```
| Priority | UX Impact | Implementation Difficulty | Time Savings | Example | Effort | Effect |
| ------------------------------- | --------- | ------------------------- | ------------ | ------------------- | ------ | ------------- |
| **[P0] Implement Now** | High | Low | > 50% | CDN implementation | 8h | Response -60% |
| **[P1] Early Implementation** | High | Medium | 20-50% | Image optimization | 16h | Load -30% |
| **[P2] Planned Implementation** | Low | High | 10-20% | Code splitting | 40h | Initial -15% |
| **[P3] Hold/Monitor** | Low | Low | < 10% | Minor optimizations | 20h | Partial -5% |
#### Priority Criteria
- **P0 (Immediate)**: High UX impact × Low difficulty = Maximum ROI
- **P1 (Early)**: High UX impact × Medium difficulty = High ROI
- **P2 (Planned)**: Low UX impact × High difficulty = Medium ROI
- **P3 (Hold)**: Low UX impact × Low difficulty = Low ROI
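The ROI formula plugs in directly. A sketch with illustrative numbers (the candidates and percentages are assumptions, loosely based on the matrix above):

```shell
# ROI = (time savings + quality improvement) / implementation effort
awk 'BEGIN {
  # candidate, time_savings(%), quality(%), effort(h)
  print "CDN",    (60 + 15) / 8    # P0-style: high impact, low effort
  print "Images", (30 + 10) / 16   # P1-style: high impact, medium effort
}' > /tmp/roi.txt
cat /tmp/roi.txt
```

Sorting candidates by this ratio reproduces the P0-P3 ordering without debating each item individually.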
### Performance Metrics and UX Improvement Correlation
| Metric | Improvement | Perceived Speed | User Satisfaction | Implementation Effort |
| -------------------------- | ----------- | --------------- | -------------------- | --------------------- |
| **LCP (Loading)** | -0.5s | +30% | -7% bounce rate | 16h |
| **FID (Interaction)** | -50ms | +15% | -20% stress | 8h |
| **CLS (Visual Stability)** | -0.05 | +10% | -50% misclicks | 4h |
| **TTFB (Server)** | -200ms | +25% | +40% perceived speed | 24h |
| **TTI (Interactive)** | -1.0s | +35% | +15% completion rate | 32h |
| **Bundle Size** | -30% | +20% | +25% first visit | 16h |
### Measurement and Tools
#### Node.js / JavaScript
```bash
# Profiling
node --prof app.js
clinic doctor -- node app.js
# Bundle analysis
npx webpack-bundle-analyzer
lighthouse --chrome-flags="--headless"
```
#### Database
```sql
-- Query analysis (PostgreSQL)
EXPLAIN ANALYZE SELECT ...
-- Slow query log status (MySQL)
SHOW VARIABLES LIKE 'slow_query_log%';
```
#### Frontend
```bash
# React performance
grep -r "useMemo\|useCallback" . --include="*.jsx"
# Resource analysis
find ./src -name "*.png" -o -name "*.jpg" | xargs ls -lh
```
### Continuous Improvement
- **Regular Audits**: Run weekly performance tests
- **Metrics Collection**: Track response times and memory usage
- **Alert Setup**: Automatic notifications for threshold violations
- **Team Sharing**: Document improvement cases and anti-patterns

commands/check-fact.md
## Check Fact
Verifies if a statement is true by checking your project's code and documentation.
### Usage
```bash
# Basic usage
/check-fact "The Flutter app uses Riverpod"
# Check multiple facts at once
/check-fact "This project uses GraphQL and manages routing with auto_route"
# Check technical details
/check-fact "JWT is used for authentication, and Firebase Auth is not used"
```
### How It Works
1. **Where I Look (in order)**
- The actual code (most trustworthy)
- README.md and docs/ folder
- Config files (package.json, pubspec.yaml, etc.)
- Issues and PR discussions
2. **What You'll See**
- `✅ Correct` - Statement matches the code exactly
- `❌ Incorrect` - Statement is wrong
- `⚠️ Partially correct` - Some parts are right, some aren't
- `❓ Cannot determine` - Not enough info to check
3. **Proof I Provide**
- File name and line number
- Relevant code snippets
- Matching documentation
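Step 1's config-file check can be as simple as grepping a manifest. A hypothetical sketch for the Riverpod example (the pubspec contents are invented for illustration):

```shell
# Hypothetical manifest standing in for the project's real pubspec.yaml
cat > /tmp/pubspec.yaml <<'EOF'
dependencies:
  flutter_riverpod: ^2.5.1
EOF

# Verify the claim "the Flutter app uses Riverpod" against the manifest
if grep -q "riverpod" /tmp/pubspec.yaml; then
  verdict="correct"
else
  verdict="incorrect"
fi
echo "Riverpod claim: $verdict"
```

Evidence from actual imports in the code would rank above this manifest check, per the source order above.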
### Report Format
```text
## Fact Check Results
### What You Asked
"[Your statement]"
### Verdict
[✅/❌/⚠️/❓] [True/False/Partial/Unknown]
### Evidence
- **File**: `path/to/file.dart:123`
- **Code**: [The actual code]
- **Note**: [Why this proves it]
### Details
[If wrong, here's what's actually true]
[If partial, here's what's missing]
[If unknown, here's what I'd need to check]
```
### Basic Examples
```bash
# Check the tech stack
/check-fact "This app is built with Flutter + Riverpod + GraphQL"
# Check if a feature exists
/check-fact "Dark mode is implemented and can be switched from user settings"
# Check architecture choices
/check-fact "All state management is done with Riverpod, BLoC is not used"
# Check security setup
/check-fact "Authentication tokens are encrypted and stored in secure storage"
```
### Collaboration with Claude
```bash
# Check dependencies
ls -la && find . -name "pubspec.yaml" -exec cat {} \;
/check-fact "The main dependencies used in this project are..."
# Check how something is built
grep -r "authentication" . --include="*.dart"
/check-fact "Authentication is custom built, not using third-party auth"
# Check if docs match reality
cat README.md
/check-fact "Everything in the README is actually implemented"
```
### When to Use This
- Writing specs: Make sure your descriptions are accurate
- Taking over a project: Check if you understand it correctly
- Client updates: Verify what's actually built
- Blog posts: Fact-check your technical content
- Presentations: Confirm project details before presenting
### Important
- Code beats docs: If they disagree, the code is right
- Old docs happen: Implementation is what matters
- No guessing: If I can't verify it, I'll say so
- Security matters: Extra careful with security-related facts

## GitHub CI Monitoring
Monitors GitHub Actions CI status and tracks until completion.
### Usage
```bash
# Check CI status
gh pr checks
```
### Basic Examples
```bash
# Check CI after creating PR
gh pr create --title "Add new feature" --body "Description"
gh pr checks
```
### Collaboration with Claude
```bash
# Flow from CI check to correction
gh pr checks
"Analyze CI check results and suggest fixes if there are failures"
# Recheck after correction
git push origin feature-branch
gh pr checks
"Check CI results after correction to confirm no issues"
```
### Example Execution Results
```text
All checks were successful
0 cancelled, 0 failing, 8 successful, 3 skipped, and 0 pending checks
NAME DESCRIPTION ELAPSED URL
○ Build/test (pull_request) 5m20s https://github.com/user/repo/actions/runs/123456789
○ Build/lint (pull_request) 2m15s https://github.com/user/repo/actions/runs/123456789
○ Security/scan (pull_request) 1m30s https://github.com/user/repo/actions/runs/123456789
○ Type Check (pull_request) 45s https://github.com/user/repo/actions/runs/123456789
○ Commit Messages (pull_request) 12s https://github.com/user/repo/actions/runs/123456789
- Deploy Preview (pull_request) https://github.com/user/repo/actions/runs/123456789
- Visual Test (pull_request) https://github.com/user/repo/actions/runs/123456789
```
### Notes
- Check details when failed
- Wait for all checks to complete before merging
- Re-run `gh pr checks` as needed
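Instead of re-running `gh pr checks` by hand, a small polling loop works; recent versions of `gh` also offer `gh pr checks --watch`. A generic sketch with a placeholder function standing in for the real command:

```shell
# "check_cmd" is a stand-in for `gh pr checks`; here it is simulated so
# the sketch is self-contained.
check_cmd() { [ -f /tmp/ci-done ]; }
touch /tmp/ci-done                 # simulate CI finishing immediately

attempts=0
until check_cmd || [ "$attempts" -ge 10 ]; do
  attempts=$((attempts + 1))
  sleep 1                          # back off between polls
done
check_cmd && echo "checks passed after $attempts retries"
```

In practice, replace `check_cmd` with the real `gh pr checks` call and widen the sleep interval for long-running pipelines.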

commands/check-prompt.md
## Check Prompt
A comprehensive collection of best practices for evaluating and improving the quality of prompts for AI Agents. It systematizes knowledge gained from actual prompt improvement processes, covering all important aspects such as ambiguity elimination, information integration, enforcement enhancement, tracking systems, and continuous improvement.
### Usage
```bash
# Check the quality of a prompt file
cat your-prompt.md
/check-prompt
"Check the quality of this prompt and suggest improvements"
```
### Options
- None: Analyze current file or selected text
- `--category <name>`: Check only specific category (structure/execution/restrictions/quality/roles/improvement)
- `--score`: Calculate quality score only
- `--fix`: Automatically suggest fixes for detected issues
- `--deep`: Deep analysis mode (focus on ambiguity, information dispersion, and enforcement)
### Basic Examples
```bash
# Evaluate overall prompt quality
cat devin/playbooks/code-review.md
/check-prompt
"Evaluate this prompt across 6 categories and suggest improvements"
# Deep analysis mode
/check-prompt --deep
"Focus on checking ambiguity, information dispersion, and lack of enforcement to suggest fundamental improvements"
# Check specific category
/check-prompt --category structure
"Check this prompt from the perspective of structure and clarity"
# Detect and fix ambiguous expressions
/check-prompt --fix
"Detect ambiguous expressions and suggest corrections for clarity"
```
---
## Core Design Principles
### Principle 1: Completely Eliminate Room for Interpretation
- **Absolutely Prohibited**: "In principle", "Recommended", "If possible", "Depending on the situation", "Use your judgment"
- **Must Use**: "Always", "Absolutely", "Strictly observe", "Without exception", "Mandatory"
- **Exception Conditions**: Strictly limited by numbers ("Only under the following 3 conditions", "Except in these 2 cases")
### Principle 2: Strategic Integration of Information
- Completely integrate related important information into one section
- Summarize the overall picture in an execution checklist
- Thoroughly eliminate circular references and dispersion
### Principle 3: Building Gradual Enforcement
- Clear hierarchy of 🔴 (Execution stop level) → 🟡 (Quality important) → 🟢 (Recommended items)
- Gradual upgrade from recommended to mandatory level
- Explicit indication of impact and countermeasures for violations
### Principle 4: Ensuring Traceability
- All execution results can be recorded and verified
- Technically prevent false reporting
- Objective criteria for success/failure judgment
### Principle 5: Feedback-Driven Improvement
- Learn from actual failure cases
- Continuous effectiveness verification
- Automatic detection of new patterns
---
## 📋 Comprehensive Check Items
### 1. 📐 Structure and Clarity (Weight: 25 points)
#### 1.1 Priority Indication of Instructions (8 points)
- [ ] 🔴🟡🟢 priorities are clearly indicated for all important instructions
- [ ] Conditions for execution stop level are specifically and clearly defined
- [ ] Criteria for each priority level are objective and verifiable
- [ ] Priority hierarchy is consistently applied
#### 1.2 Complete Elimination of Ambiguous Expressions (9 points)
- [ ] **Fatal ambiguous expressions**: 0 instances of "In principle", "Recommended", "If possible"
- [ ] **Use of mandatory expressions**: Appropriate use of "Always", "Absolutely", "Strictly observe", "Without exception"
- [ ] **Numerical limitation of exception conditions**: Clear boundaries like "Only 3 conditions"
- [ ] **Elimination of judgment room**: Use only expressions that cannot be interpreted in multiple ways
- [ ] **Elimination of gray zones**: Clear judgment criteria for all situations
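The banned-phrase checks above lend themselves to automation. A minimal sketch (the phrase list is drawn from this checklist; the sample file is hypothetical):

```shell
# Hypothetical prompt file containing two ambiguous expressions
cat > /tmp/prompt.md <<'EOF'
In principle, post comments inline. If possible, add a summary.
EOF

# Count occurrences of the fatally ambiguous phrases
banned="In principle|If possible|Recommended|Depending on the situation"
count=$(grep -oE "$banned" /tmp/prompt.md | wc -l)
echo "ambiguous expressions: $count"
```

A zero count is the target for item 1.2's first checkbox; anything above zero warrants a `--fix` pass.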
#### 1.3 Strategic Integration of Information (8 points)
- [ ] Multiple location dispersion of important information is completely eliminated
- [ ] Related instructions are logically integrated into one section
- [ ] The overall picture is completely summarized in the execution checklist
- [ ] There are no circular references or infinite loops
### 2. 🎯 Executability (Weight: 20 points)
#### 2.1 Completeness of Specific Procedures (7 points)
- [ ] All command examples are actually executable and verified
- [ ] Environment variables, prerequisites, and dependencies are clearly stated without omissions
- [ ] Error handling methods are specific and executable
- [ ] The order of procedures is logical and necessary
#### 2.2 Ensuring Verifiability (7 points)
- [ ] Success/failure of execution results can be objectively determined
- [ ] Output examples, log formats, and expected values are specifically shown
- [ ] Testing methods and verification procedures can be implemented
- [ ] Checkpoints for confirming intermediate results are appropriately placed
#### 2.3 Automation Adaptability (6 points)
- [ ] Format that allows easy scripting and CI/CD integration
- [ ] Clear separation between human judgment and AI execution points
- [ ] Support for batch processing and parallel execution
### 3. 🚫 Clarification of Prohibited Items (Weight: 15 points)
#### 3.1 Systematization of Absolute Prohibitions (8 points)
- [ ] Complete list of operations that must not be performed
- [ ] Explicit indication of impact level (minor/major/fatal) for each prohibited item violation
- [ ] Specific presentation of alternatives and avoidance methods
- [ ] Explanation of technical basis for prohibited items
#### 3.2 Strict Limitation of Exception Conditions (7 points)
- [ ] Conditions allowing exceptions are specific and limited (numerical specification)
- [ ] Objective judgment criteria such as "Completely duplicate", "Explicitly stated"
- [ ] Clear boundaries without leaving gray zones
- [ ] Explicit indication of additional conditions and constraints when applying exceptions
### 4. 📊 Quality Assurance Mechanisms (Weight: 20 points)
#### 4.1 Completeness of Tracking System (8 points)
- [ ] Automatic recording and statistics collection function for all execution results
- [ ] Verification function to technically prevent false reporting
- [ ] Real-time monitoring and alert functions
- [ ] Audit log tampering prevention function
#### 4.2 Enforcement of Template Compliance (7 points)
- [ ] Clear definition and checking function for mandatory elements
- [ ] Technical restrictions on areas prohibited from customization
- [ ] Automated checkpoints for compliance confirmation
- [ ] Automatic correction and warning functions when violations occur
#### 4.3 Comprehensiveness of Error Handling (5 points)
- [ ] Complete cataloging of expected error patterns
- [ ] Step-by-step handling process for errors
- [ ] Technical prevention of reporting failures as successes
### 5. 🎭 Clarification of Roles and Responsibilities (Weight: 10 points)
#### 5.1 AI Agent's Authority Scope (5 points)
- [ ] Clear boundaries between executable and prohibited operations
- [ ] Specific scope and constraints of judgment authority
- [ ] Clear separation of operations requiring human confirmation
#### 5.2 Unification of Classification System (5 points)
- [ ] Clarity, uniqueness, and exclusivity of classification definitions
- [ ] Explicit explanations to prevent misunderstanding of importance between classifications
- [ ] Specific usage examples and decision flowcharts for each classification
### 6. 🔄 Continuous Improvement (Weight: 10 points)
#### 6.1 Automation of Feedback Collection (5 points)
- [ ] Automatic extraction of improvement points from execution logs
- [ ] Machine learning-based analysis of failure patterns
- [ ] Automatic update mechanism for best practices
#### 6.2 Implementing Learning Functions (5 points)
- [ ] Automatic detection and classification of new patterns
- [ ] Continuous monitoring of effectiveness of existing rules
- [ ] Automatic suggestions for gradual improvements
---
## 🚨 Fatal Problem Patterns (Immediate Correction Required)
### ❌ Level 1: Fatal Ambiguity (Execution Stop Level)
- **Instructions with multiple interpretations**: "Use your judgment", "Depending on the situation", "In principle"
- **Ambiguous exception conditions**: "In special cases", "As needed"
- **Subjective judgment criteria**: "Appropriately", "Sufficiently", "As much as possible"
- **Undefined important concepts**: "Standard", "General", "Basic"
### ❌ Level 2: Structural Defects (Quality Important Level)
- **Information dispersion**: Important related information scattered in 3 or more locations
- **Circular references**: Infinite loops of section A→B→C→A
- **Contradictory instructions**: Contradictory instructions in different sections
- **Unclear execution order**: Procedures with unclear dependencies
### ❌ Level 3: Quality Degradation (Recommended Improvement Level)
- **Non-verifiability**: Unclear criteria for success/failure judgment
- **Difficulty in automation**: Design dependent on human subjective judgment
- **Difficulty in maintenance**: Structure where impact range during updates cannot be predicted
- **Difficulty in learning**: Complexity that takes time for newcomers to understand
---
## 🎯 Proven Improvement Methods
### ✅ Gradual Enhancement Approach
1. **Current situation analysis**: Classification, prioritization, and impact assessment of problems
2. **Fatal problem priority**: Top priority on complete resolution of Level 1 problems
3. **Gradual implementation**: Implement in verifiable units without making all changes at once
4. **Effect measurement**: Quantitative comparison before and after improvement
5. **Continuous monitoring**: Confirmation of sustainability of improvement effects
### ✅ Practical Methods for Ambiguity Elimination
```markdown
# ❌ Before Improvement (Ambiguous)
"Comments should be, in principle, written as inline comments at the corresponding change points on GitHub"
# ✅ After Improvement (Clear)
"Comments must be written as inline comments at the corresponding change points on GitHub. Exceptions are only the 3 conditions defined in section 3.3"
```
### ✅ Practical Methods for Information Integration
```markdown
# ❌ Before Improvement (Dispersed)
Section 2.1: "Use mandatory 6 sections"
Section 3.5: "📊 Comprehensive evaluation, 📋 Comments..."
Section 4.2: "Prohibition of section deletion"
# ✅ After Improvement (Integrated)
Execution Checklist:
□ 10. Post summary comment (using mandatory 6 sections)
🔴 Mandatory 6 sections: 1) 📊 Comprehensive evaluation 2) 📋 Classification of comments 3) ⚠️ Main concerns 4) ✅ Evaluable points 5) 🎯 Conclusion 6) 🤖 Self-evaluation of AI review quality
❌ Absolutely prohibited: Section deletion, addition, name change
```
### ✅ Implementation Patterns for Tracking Systems
```bash
# Strict tracking of execution results
POSTED_COMMENTS=0
FAILED_COMMENTS=0
TOTAL_COMMENTS=0

# Record the result of each operation (run the operation just before this check)
if [ $? -eq 0 ]; then
  echo "✅ Success: $OPERATION" >> /tmp/execution_log.txt
  POSTED_COMMENTS=$((POSTED_COMMENTS + 1))
else
  echo "❌ Failure: $OPERATION" >> /tmp/execution_log.txt
  FAILED_COMMENTS=$((FAILED_COMMENTS + 1))
fi

# Prevent false reporting
if [ "$POSTED_COMMENTS" -ne "$REPORTED_COMMENTS" ]; then
  echo "🚨 Warning: Mismatch between reported and actual posted comments"
  exit 1
fi
```
---
## 📈 Quality Score Calculation (Improved Version)
### Comprehensive Score Calculation
```text
Basic score = Σ(category score × weight) / 100
Fatal problem penalties:
- Level 1 problem: -20 points per case
- Level 2 problem: -10 points per case
- Level 3 problem: -5 points per case
Bonus elements:
- Automation support: +5 points
- Learning function implementation: +5 points
- Proven improvement cases: +5 points
Final score = Basic score + Bonus - Penalties
```
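The same formula as arithmetic, with illustrative inputs (the category score, problem counts, and bonus are hypothetical):

```shell
# Final score = basic score + bonus - penalties, per the formula above
final=$(awk 'BEGIN {
  basic   = 85                    # weighted category sum / 100
  penalty = 1*20 + 0*10 + 2*5    # one Level 1 problem, two Level 3 problems
  bonus   = 5                     # automation support
  print basic + bonus - penalty
}')
echo "final score: $final"
```

Note how a single Level 1 problem outweighs all bonuses, which is the intent of the penalty weighting.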
### Quality Level Assessment
```text
95-100 points: World's highest standard (Recommended as industry standard)
90-94 points: Excellent (Production ready)
80-89 points: Good (Ready for operation with minor improvements)
70-79 points: Average (Needs improvement)
60-69 points: Needs improvement (Requires significant correction)
50-59 points: Needs major correction (Requires fundamental review)
49 points or below: Prohibited from use (Requires complete redesign)
```
---
## 🔧 Practical Improvement Process
### Phase 1: Diagnosis/Analysis (1-2 days)
1. **Understanding overall structure**: Visualization of section composition, information flow, and dependencies
2. **Ambiguity detection**: Extraction of all expressions with room for interpretation
3. **Information dispersion analysis**: Mapping of scattered patterns of related information
4. **Enforcement evaluation**: Evaluation of recommended/mandatory classification and effectiveness
5. **Traceability confirmation**: Evaluation of execution result recording and verification functions
### Phase 2: Prioritization/Planning (Half a day)
1. **Fatality classification**: Problem classification into Levels 1-3 and impact assessment
2. **Improvement order determination**: Optimal order considering interdependencies
3. **Resource allocation**: Optimization of balance between improvement effects and costs
4. **Risk assessment**: Prediction of side effects and compatibility issues during improvement
### Phase 3: Gradual Implementation (2-5 days)
1. **Level 1 problem resolution**: Complete elimination of fatal ambiguities
2. **Information integration implementation**: Strategic aggregation of dispersed information
3. **Enforcement enhancement**: Gradual upgrade from recommended to mandatory
4. **Tracking system implementation**: Automatic recording and verification functions for execution results
5. **Template enhancement**: Clarification of mandatory elements and enforcement of compliance
### Phase 4: Verification/Adjustment (1-2 days)
1. **Function testing**: Operation confirmation of all changes
2. **Integration testing**: Confirmation of system-wide consistency
3. **Performance testing**: Confirmation of execution efficiency and response
4. **Usability testing**: Verification in actual usage scenarios
### Phase 5: Operation/Monitoring (Continuous)
1. **Effect measurement**: Quantitative comparison before and after improvement
2. **Continuous monitoring**: Early detection of quality degradation
3. **Feedback collection**: Extraction of problems in actual operation
4. **Continuous optimization**: Continuous improvement cycle
---
## 📊 Actual Improvement Cases (Detailed Version)
### Case Study: Quality Improvement of Large-Scale Prompts
#### Situation Before Improvement
```bash
Quality score: 70/100 points
- Ambiguous expressions: 15 detected
- Information dispersion: Important information scattered in 6 locations
- Lack of enforcement: 80% of expressions at recommended level
- Tracking function: No execution result recording
- Error handling: Unclear countermeasures for failures
```
#### Implemented Improvements
```bash
# 1. Ambiguity elimination (2 days)
- "In principle" → "Exceptions are only the 3 conditions in section 3.3"
- "Recommended" → "Mandatory" (for importance level 2 and above)
- "As appropriate" → Explicit indication of specific judgment criteria
# 2. Information integration (1 day)
- Dispersed mandatory 6-section information → Integrated into execution checklist
- Related prohibited items → Aggregated into one section
- Eliminated circular references → Linear information flow
# 3. Tracking system implementation (1 day)
- Automatic log recording of execution results
- Verification function to prevent false reporting
- Real-time statistics display
# 4. Error handling enhancement (Half a day)
- Complete cataloging of expected error patterns
- Documentation of step-by-step handling processes
- Implementation of automatic recovery functions
```
#### Results After Improvement
```bash
Quality score: 90/100 points (+20 points improvement)
- Ambiguous expressions: 0 (completely eliminated)
- Information integration: Important information aggregated into 3 locations
- Enforcement: 95% of expressions at mandatory level
- Tracking function: Fully automated
- Error handling: 90% of problems solved automatically
Actual improvement effects:
- Assessment errors: 85% reduction
- Execution time: 40% reduction
- Error occurrence rate: 70% reduction
- User satisfaction: 95% improvement
```
### Lessons/Best Practices
#### Success Factors
1. **Gradual approach**: Implement in verifiable units without making all changes at once
2. **Data-driven**: Improve based on measured data rather than subjective judgment
3. **Continuous monitoring**: Regularly confirm the sustainability of improvement effects
4. **Feedback-oriented**: Actively collect opinions from actual users
#### Failure Avoidance Measures
1. **Excessive perfectionism**: Start operation once reaching 90 points, aim for 100 points through continuous improvement
2. **Dangers of batch changes**: Always implement large-scale changes gradually
3. **Backward compatibility**: Minimize impact on existing workflows
4. **Insufficient documentation**: Record and share all changes in detail
---
### Collaboration with Claude
```bash
# Quality check combined with prompt file
cat your-prompt.md
/check-prompt
"Evaluate the quality of this prompt and suggest improvements"
# Comparison of multiple prompt files
cat prompt-v1.md && echo "---" && cat prompt-v2.md
/check-prompt
"Compare the two versions and analyze improved points and remaining issues"
# Analysis combined with actual error logs
cat execution-errors.log
/check-prompt --deep
"Identify potential prompt issues that may have caused this error"
```
### Notes
- **Prerequisite**: Prompt files are recommended to be written in Markdown format
- **Limitation**: For large-scale prompts (10,000 lines or more), it is recommended to analyze in parts
- **Recommendation**: Regularly check prompt quality and continuously improve
---
_This checklist is a complete version of knowledge proven in actual prompt improvement projects and continues to evolve._

commands/commit-message.md
## Commit Message
Generates commit messages from staged changes (git diff --staged). This command only creates messages and copies them to your clipboard—it doesn't run any git commands.
### Usage
```bash
/commit-message [options]
```
### Options
- `--format <format>` : Choose message format (conventional, gitmoji, angular)
- `--lang <language>` : Set language explicitly (e.g., `en`, `ja`)
- `--breaking` : Include breaking change detection
### Basic Examples
```bash
# Generate message from staged changes (language auto-detected)
# The top suggestion is automatically copied to your clipboard
/commit-message
# Specify language explicitly
/commit-message --lang ja
/commit-message --lang en
# Include breaking change detection
/commit-message --breaking
```
### Prerequisites
**Important**: This command only works with staged changes. Run `git add` first to stage your changes.
```bash
# If nothing is staged, you'll see:
$ /commit-message
No staged changes found. Please run git add first.
```
### Automatic Clipboard Feature
The top suggestion gets copied to your clipboard as a complete command: `git commit -m "message"`. Just paste and run it in your terminal.
**Implementation Notes**:
- Run `pbcopy` in a separate process from the message output
- Use `printf` instead of `echo` to avoid unwanted newlines
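As a rough sketch of why `printf` matters here: `echo` appends a trailing newline, which would otherwise end up in the clipboard along with the command. The variable names below are illustrative, not part of the command's implementation:

```shell
# Compare byte counts: echo appends a trailing newline, printf does not
msg='feat: add login form'
echo_bytes=$(echo "git commit -m \"$msg\"" | wc -c)
printf_bytes=$(printf 'git commit -m "%s"' "$msg" | wc -c)
echo "echo: $echo_bytes bytes, printf: $printf_bytes bytes"
```

On macOS, piping into `pbcopy` instead of `wc -c` shows the same one-byte difference in what gets pasted.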
### Automatic Project Convention Detection
**Important**: If project-specific conventions exist, they take priority.
#### 1. CommitLint Configuration Check
Automatically detects settings from the following files:
- `commitlint.config.js`
- `commitlint.config.mjs`
- `commitlint.config.cjs`
- `commitlint.config.ts`
- `.commitlintrc.js`
- `.commitlintrc.json`
- `.commitlintrc.yml`
- `.commitlintrc.yaml`
- `package.json` with `commitlint` section
```bash
# Search for configuration files
find . -name "commitlint.config.*" -o -name ".commitlintrc.*" | head -1
```
#### 2. Custom Type Detection
Example of project-specific types:
```javascript
// commitlint.config.mjs
export default {
extends: ["@commitlint/config-conventional"],
rules: {
"type-enum": [
2,
"always",
[
"feat",
"fix",
"docs",
"style",
"refactor",
"test",
"chore",
"wip", // work in progress
"hotfix", // urgent fix
"release", // release
"deps", // dependency update
"config", // configuration change
],
],
},
};
```
#### 3. Detecting Language Settings
```javascript
// When project uses Japanese messages
export default {
rules: {
"subject-case": [0], // Disabled for Japanese support
"subject-max-length": [2, "always", 72], // Adjusted character limit for Japanese
},
};
```
#### 4. Existing Commit History Analysis
```bash
# Learn patterns from recent commits
git log --oneline -50 --pretty=format:"%s"
# Type usage statistics
git log --oneline -100 --pretty=format:"%s" | \
grep -oE '^[a-z]+(\([^)]+\))?' | \
sort | uniq -c | sort -nr
```
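To see what that type-statistics pipeline produces, here is the same extraction run against a fixed sample instead of `git log` (the sample subjects are made up for illustration):

```shell
# Same extraction pipeline, fed with sample commit subjects instead of git log
subjects='feat: add login form
fix: resolve token refresh bug
feat: add logout endpoint'
top_type=$(printf '%s\n' "$subjects" |
  grep -oE '^[a-z]+(\([^)]+\))?' |
  sort | uniq -c | sort -nr | head -1)
echo "$top_type"   # most frequent type with its count, e.g. "2 feat"
```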
### Automatic Language Detection
Automatically switches between Japanese and English based on:
1. Language settings in the **CommitLint configuration**
2. Automatic detection from **git log analysis**
3. Language settings in **project files**
4. Comment and string analysis in **changed files**
The default is English; messages are generated in Japanese when the project is detected as a Japanese project.
### Message Format
#### Conventional Commits (Default)
```text
<type>: <description>
```
**Important**: Always generates single-line commit messages. Does not generate multi-line messages.
**Note**: Project-specific conventions take priority if they exist.
### Standard Types
**Required Types**:
- `feat`: New feature (user-visible feature addition)
- `fix`: Bug fix
**Optional Types**:
- `build`: Build system or external dependency changes
- `chore`: Other changes (no release impact)
- `ci`: CI configuration files and scripts changes
- `docs`: Documentation only changes
- `style`: Changes that don't affect code meaning (whitespace, formatting, semicolons, etc.)
- `refactor`: Code changes without bug fixes or feature additions
- `perf`: Performance improvements
- `test`: Adding or fixing tests
### Output Example (English Project)
```bash
$ /commit-message
📝 Commit Message Suggestions
━━━━━━━━━━━━━━━━━━━━━━━━━
✨ Main Candidate:
feat: implement JWT-based authentication system
📋 Alternatives:
1. feat: add user authentication with JWT tokens
2. fix: resolve token validation error in auth middleware
3. refactor: extract auth logic into separate module
`git commit -m "feat: implement JWT-based authentication system"` copied to clipboard
```
**Implementation Example (Fixed)**:
```bash
# Copy commit command to clipboard first (no newline)
printf 'git commit -m "%s"' "$COMMIT_MESSAGE" | pbcopy
# Then display message
cat << EOF
📝 Commit Message Suggestions
━━━━━━━━━━━━━━━━━━━━━━━━━
✨ Main Candidate:
$COMMIT_MESSAGE
📋 Alternatives:
1. ...
2. ...
3. ...
✅ \`git commit -m "$COMMIT_MESSAGE"\` copied to clipboard
EOF
```
### Output Example (Japanese Project)
```bash
$ /commit-message
📝 Commit Message Suggestions
━━━━━━━━━━━━━━━━━━━━━━━━━
✨ Main Candidate:
feat: JWT authentication system implemented
📋 Alternatives:
1. feat: add user authentication with JWT tokens
2. fix: resolve token validation error in auth middleware
3. refactor: extract auth logic into a separate module
`git commit -m "feat: JWT authentication system implemented"` copied to clipboard
```
### Operation Overview
1. **Analysis**: Analyze content of `git diff --staged`
2. **Generation**: Generate appropriate commit message
3. **Copy**: Automatically copy main candidate to clipboard
**Note**: This command does not execute git add or git commit. It only generates commit messages and copies to clipboard.
### Smart Features
#### 1. Automatic Change Classification (Staged Files Only)
- New file addition → `feat`
- Error fix patterns → `fix`
- Test files only → `test`
- Configuration file changes → `chore`
- README/docs updates → `docs`
#### 2. Automatic Project Convention Detection
- `.gitmessage` file
- Conventions in `CONTRIBUTING.md`
- Past commit history patterns
#### 3. Language Detection Details (Staged Changes Only)
```text
# Detection criteria (priority order)
1. Detect language from git diff --staged content
2. Comment analysis of staged files
3. Language analysis of git log --oneline -20
4. Project main language settings
```
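One way criterion 1 could be implemented is a byte-level check for non-ASCII characters in the staged diff. This is a hypothetical sketch, not the command's actual implementation, and it simplifies by treating any non-ASCII content as a Japanese signal:

```shell
# Hypothetical detector: non-ASCII bytes in the input suggest a Japanese project
detect_lang() {
  if LC_ALL=C grep -q '[^ -~]'; then
    echo ja
  else
    echo en
  fi
}
# Usage: git diff --staged | detect_lang
printf 'fix: レビュー指摘を修正\n' | detect_lang        # prints "ja"
printf 'fix: address review comments\n' | detect_lang   # prints "en"
```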
#### 4. Staging Analysis Details
Information used for analysis (read-only):
- `git diff --staged --name-only` - Changed file list
- `git diff --staged` - Actual change content
- `git status --porcelain` - File status
### Breaking Change Detection
For breaking API changes, add `!` after the type and a `BREAKING CHANGE:` footer:
```text
feat!: change user API response format
BREAKING CHANGE: user response now includes additional metadata
```
Or use a scope:
```text
feat(api)!: change authentication flow
```
### Best Practices
1. **Match the project**: Follow the language of existing commits
2. **Conciseness**: Keep messages clear within 50 characters
3. **Consistency**: Don't mix languages within a commit history
4. **OSS**: English is recommended for open source
5. **Single line**: Always a single-line commit message (put detailed explanations in the PR)
### Common Patterns
```text
feat: add user registration endpoint
fix: resolve memory leak in cache manager
docs: update API documentation
```
### Integration with Claude
```bash
# Use with staged changes
git add -p # Interactive staging
/commit-message
"Generate optimal commit message"
# Stage and analyze specific files
git add src/auth/*.js
/commit-message --lang en
"Generate message for authentication changes"
# Breaking Change detection and handling
git add -A
/commit-message --breaking
"Mark appropriately if there are breaking changes"
```
### Important Notes
- **Prerequisite**: Changes must be staged with `git add` beforehand
- **Limitation**: Unstaged changes are not analyzed
- **Recommendation**: Check existing project commit conventions first

commands/context7.md Normal file
@@ -0,0 +1,50 @@
## Context7
Searches technical documentation using MCP's Context7.
### Usage
```bash
# Format for requesting Claude
"Search for [search keyword] using context7"
```
### Basic Examples
```bash
# Research React hooks
"Search for React hooks using context7"
# Search for error solutions
"Search for TypeScript type errors using context7"
```
### Collaboration with Claude
```bash
# Request technical research
"Search for information about Rust's ownership system using context7 and explain it for beginners"
# Request error solution
"Search for common causes and solutions for Python's ImportError using context7"
# Confirm best practices
"Search for best practices for React performance optimization using context7"
```
### Detailed Examples
```bash
# Research from multiple perspectives
"Search for GraphQL using context7 from the following perspectives:
1. Basic concepts and differences from REST API
2. Implementation methods and best practices
3. Common issues and solutions"
# Research specific versions or environments
"Search for new features in Next.js 14 using context7, focusing on how to use App Router"
```
### Notes
If information cannot be found with Context7, Claude will automatically suggest other methods such as web search.

commands/design-patterns.md Normal file
@@ -0,0 +1,186 @@
## Design Patterns
Suggests design patterns for your code and checks if it follows SOLID principles.
### Usage
```bash
/design-patterns [analysis_target] [options]
```
### Options
- `--suggest`: Suggest applicable patterns (default)
- `--analyze`: Analyze existing pattern usage
- `--refactor`: Generate refactoring proposals
- `--solid`: Check compliance with SOLID principles
- `--anti-patterns`: Detect anti-patterns
### Basic Examples
```bash
# Analyze patterns for entire project
/design-patterns
# Suggest patterns for specific file
/design-patterns src/services/user.js --suggest
# Check SOLID principles
/design-patterns --solid
# Detect anti-patterns
/design-patterns --anti-patterns
```
### Pattern Categories
#### 1. Creational Patterns
- **Factory Pattern**: Abstracts object creation
- **Builder Pattern**: Step-by-step construction of complex objects
- **Singleton Pattern**: Ensures only one instance exists
- **Prototype Pattern**: Creates object clones
#### 2. Structural Patterns
- **Adapter Pattern**: Converts interfaces
- **Decorator Pattern**: Dynamically adds functionality
- **Facade Pattern**: Simplifies complex subsystems
- **Proxy Pattern**: Controls access to objects
#### 3. Behavioral Patterns
- **Observer Pattern**: Implements event notifications
- **Strategy Pattern**: Switches algorithms
- **Command Pattern**: Encapsulates operations
- **Iterator Pattern**: Traverses collections
### SOLID Principles We Check
```text
S - Single Responsibility (one class, one job)
O - Open/Closed (open for extension, closed for modification)
L - Liskov Substitution (subtypes should be replaceable)
I - Interface Segregation (don't force unused methods)
D - Dependency Inversion (depend on abstractions, not details)
```
### Output Example
```text
Design Pattern Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Currently Used Patterns
├─ Observer Pattern: EventEmitter (12 instances)
├─ Factory Pattern: UserFactory (3 instances)
├─ Singleton Pattern: DatabaseConnection (1 instance)
└─ Strategy Pattern: PaymentProcessor (5 instances)
Recommended Patterns
├─ [HIGH] Repository Pattern
│ └─ Where: src/models/*.js
│ └─ Why: Separate data access from business logic
│ └─ Example:
│ class UserRepository {
│ async findById(id) { ... }
│ async save(user) { ... }
│ }
├─ [MED] Command Pattern
│ └─ Where: src/api/handlers/*.js
│ └─ Why: Standardize how requests are handled
└─ [LOW] Decorator Pattern
└─ Where: src/middleware/*.js
└─ Why: Better way to combine features
SOLID Violations Found
├─ [S] UserService: Does too much (authentication AND authorization)
├─ [O] PaymentGateway: Must change code to add payment types
├─ [D] EmailService: Depends on specific classes, not interfaces
└─ [I] IDataStore: Has methods nobody uses
How to Fix
1. Split UserService into AuthService and AuthorizationService
2. Add a PaymentStrategy interface for new payment types
3. Create an EmailService interface
4. Break up IDataStore into smaller interfaces
```
### Advanced Usage Examples
```bash
# See what happens if you use a pattern
/design-patterns --impact-analysis Repository
# Get example code for a pattern
/design-patterns --generate Factory --for src/models/Product.js
# Find patterns that work well together
/design-patterns --combine --context "API with caching"
# Check your architecture
/design-patterns --architecture MVC
```
### Example: Before and After
#### Before (Problem Code)
```javascript
class OrderService {
processOrder(order, paymentType) {
if (paymentType === "credit") {
// Credit card processing
} else if (paymentType === "paypal") {
// PayPal processing
}
// Other payment methods...
}
}
```
#### After (Applying Strategy Pattern)
```javascript
// Strategy interface
class PaymentStrategy {
process(amount) {
throw new Error("Must implement process method");
}
}
// Concrete strategies
class CreditCardPayment extends PaymentStrategy {
process(amount) {
/* Implementation */
}
}
// Context
class OrderService {
constructor(paymentStrategy) {
this.paymentStrategy = paymentStrategy;
}
processOrder(order) {
this.paymentStrategy.process(order.total);
}
}
```
### Anti-Patterns We Find
- **God Object**: Classes that do everything
- **Spaghetti Code**: Tangled mess of control flow
- **Copy-Paste Programming**: Same code everywhere
- **Magic Numbers**: Random numbers with no explanation
- **Callback Hell**: Callbacks inside callbacks inside callbacks
### Best Practices
1. **Go slow**: Add patterns one at a time
2. **Need first**: Only use patterns to solve real problems
3. **Talk it out**: Get team buy-in before big changes
4. **Write it down**: Document why you chose each pattern

commands/explain-code.md Normal file
@@ -0,0 +1,79 @@
## Code Explain
Explains how code works in detail.
### Usage
```bash
# Show a file and ask for explanation
cat <file>
"Explain how this code works"
```
### Basic Examples
```bash
# Understand Rust ownership
cat main.rs
"Explain the ownership and lifetimes in this Rust code"
# Explain an algorithm
grep -A 50 "quicksort" sort.rs
"How does this sorting work? What's its time complexity?"
# Explain design patterns
cat factory.rs
"What design pattern is this? What are the benefits?"
```
### Collaboration with Claude
```bash
# Beginner-friendly explanation
cat complex_function.py
"Explain this code line by line for someone new to programming"
# Performance check
cat algorithm.rs
"Find performance problems and how to fix them"
# Visual explanation
cat state_machine.js
"Show me the flow with ASCII diagrams"
# Security check
cat auth_handler.go
"What security issues do you see?"
```
### Detailed Examples
```bash
# Complex logic breakdown
cat recursive_parser.rs
"Break down this recursive parser:
1. How does it flow?
2. What does each function do?
3. How are edge cases handled?
4. What could be better?"
# Async code explanation
cat async_handler.ts
"Explain this async code:
1. How do the Promises flow?
2. How are errors handled?
3. What runs in parallel?
4. Could this deadlock?"
# Architecture overview
ls -la src/ && cat src/main.rs src/lib.rs
"Explain how this project is structured"
```
### What You'll Get
Not just what the code does, but also:
- Why it's written that way
- What benefits it provides
- What problems might come up

commands/fix-error.md Normal file
@@ -0,0 +1,311 @@
## Error Fix
Analyzes error messages to identify root causes, predict resolution time, and suggest proven fixes. Learns patterns from similar errors to provide immediate solutions.
### Usage
```bash
/fix-error [options]
```
### Options
- None: Standard error analysis
- `--deep`: Deep dive including dependencies and environment
- `--preventive`: Focus on preventing future occurrences
- `--quick`: Quick fixes only
### Basic Examples
```bash
# Standard error analysis
npm run build 2>&1
/fix-error
"Analyze this build error and suggest fixes"
# Deep analysis mode
python app.py 2>&1
/fix-error --deep
"Find the root cause, including environment issues"
# Quick fixes only
cargo test 2>&1
/fix-error --quick
"Just give me a quick fix"
# Prevention-focused
./app 2>&1 | tail -50
/fix-error --preventive
"Fix this and help me prevent it next time"
```
### Collaboration with Claude
```bash
# Analyze error logs
cat error.log
/fix-error
"What's causing this error and how do I fix it?"
# Resolve test failures
npm test 2>&1
/fix-error --quick
"These tests are failing - need a quick fix"
# Analyze stack traces
python script.py 2>&1
/fix-error --deep
"Dig into this stack trace and check for environment issues"
# Handle multiple errors
grep -E "ERROR|WARN" app.log | tail -20
/fix-error
"Sort these by priority and tell me how to fix each one"
```
### Error Resolution Time Prediction
```text
🚀 Immediate Fix (< 5 minutes)
├─ Typos, missing imports
├─ Environment variables not set
├─ Undefined variable references
└─ Predicted time: 2-5 minutes
⚡ Quick Fix (< 30 minutes)
├─ Dependency version conflicts
├─ Configuration file errors
├─ Type mismatches
└─ Predicted time: 10-30 minutes
🔧 Investigation Required (< 2 hours)
├─ Complex logic errors
├─ Async processing race conditions
├─ API integration issues
└─ Predicted time: 30 minutes-2 hours
🔬 Deep Analysis (Half day or more)
├─ Architecture-related issues
├─ Multi-system integration problems
├─ Performance degradation
└─ Predicted time: 4 hours-several days
```
### Similar Error Pattern Database
```text
Common Errors and Immediate Solutions
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 "Cannot read property 'X' of undefined/null" (Frequency: Extremely High)
├─ Primary cause: Insufficient null checks on objects
├─ Resolution time: 5-10 minutes
└─ Solution: Add Optional chaining (?.) or null checks
📊 "ECONNREFUSED" / "ENOTFOUND" (Frequency: High)
├─ Primary cause: Service not running or URL misconfiguration
├─ Resolution time: 5-15 minutes
└─ Solution: Check service startup, environment variables
📊 "Module not found" / "Cannot resolve" (Frequency: High)
├─ Primary cause: Package not installed, incorrect path
├─ Resolution time: 2-5 minutes
└─ Solution: Run npm install, check relative paths
📊 "Unexpected token" / "SyntaxError" (Frequency: Medium)
├─ Primary cause: Bracket/quote mismatch, reserved word usage
├─ Resolution time: 2-10 minutes
└─ Solution: Check syntax highlighting, run linter
📊 "CORS policy" / "Access-Control-Allow-Origin" (Frequency: Medium)
├─ Primary cause: Insufficient CORS configuration on server
├─ Resolution time: 15-30 minutes
└─ Solution: Configure server CORS, setup proxy
📊 "Maximum call stack size exceeded" (Frequency: Low)
├─ Primary cause: Infinite loops/recursion, circular references
├─ Resolution time: 30 minutes-2 hours
└─ Solution: Check recursion termination conditions, resolve circular references
```
### Error Analysis Priority Matrix
| Priority | Icon | Impact Range | Resolution Difficulty | Response Deadline | Description |
| ----------------- | ------------------- | ------------ | --------------------- | ---------------------- | ------------------------------------------------ |
| **Critical** | 🔴 Emergency | Wide | Low | Start within 15 min | System-wide outage, data loss risk |
| **High Priority** | 🟠 Early Response | Wide | High | Start within 1 hour | Major feature outage, many users affected |
| **Medium** | 🟡 Planned Response | Narrow | High | Address same day | Partial feature limitation, workaround available |
| **Low** | 🟢 Monitor | Narrow | Low | Next maintenance cycle | Minor bugs, minimal UX impact |
### Analysis Process
#### Phase 1: Error Information Collection
```text
🔴 Must have:
- Full error message
- Stack trace
- Steps to reproduce
🟡 Should have:
- Environment details (OS, versions, dependencies)
- Recent changes (git log, commits)
- Related logs
🟢 Nice to have:
- System resources
- Network state
- External services
```
#### Phase 2: Root Cause Analysis
1. **Identify symptoms**
- Exact error message
- When and how it happens
- What's affected
2. **Find root causes**
- Use 5 Whys analysis
- Check dependencies
- Compare environments
3. **Test your theory**
- Create minimal repro
- Isolate the issue
- Confirm the cause
#### Phase 3: Solution Implementation
```text
🔴 Quick fix (hotfix):
- Stop the bleeding
- Apply workarounds
- Get ready to deploy
🟡 Root cause fix:
- Fix the actual problem
- Add tests
- Update docs
🟢 Prevent future issues:
- Better error handling
- Add monitoring
- Improve CI/CD
```
### Output Example
```text
🚨 Error Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📍 Error Overview
├─ Type: [Compilation/Runtime/Logical/Environmental]
├─ Urgency: 🔴 High / 🟡 Medium / 🟢 Low
├─ Impact Scope: [Feature name/Component]
└─ Reproducibility: [100% / Intermittent / Specific conditions]
🔍 Root Cause
├─ Direct Cause: [Specific cause]
├─ Background Factors: [Environment/Configuration/Dependencies]
└─ Trigger: [Occurrence conditions]
💡 Solutions
🔴 Immediate response:
1. [Specific fix command/code]
2. [Temporary workaround]
🟡 Fundamental solution:
1. [Essential fix method]
2. [Necessary refactoring]
🟢 Preventive measures:
1. [Error handling improvement]
2. [Add tests]
3. [Monitoring setup]
📝 Verification Procedure
1. [Method to confirm after applying fix]
2. [Test execution command]
3. [Operation check items]
```
### Analysis Methods by Error Type
#### Compilation/Build Errors
```text
# TypeScript type errors
Must check (high):
- tsconfig.json settings
- Presence of type definition files (.d.ts)
- Accuracy of import statements
# Rust lifetime errors
Must check (high):
- Ownership movement
- Reference validity periods
- Mutability conflicts
```
#### Runtime Errors
```text
# Null/Undefined references
Must check (high):
- Insufficient optional chaining
- Initialization timing
- Waiting for async processing completion
# Memory-related errors
Must check (high):
- Heap dump acquisition
- GC log analysis
- Circular reference detection
```
#### Dependency Errors
```text
# Version conflicts
Must check (high):
- Lock file consistency
- Peer dependencies requirements
- Transitive dependencies
# Module resolution errors
Must check (high):
- NODE_PATH settings
- Path alias configuration
- Symbolic links
```
### Notes
- **Absolutely prohibited**: Making judgments based only on part of an error message, applying Stack Overflow solutions without verification
- **Exception conditions**: Temporary workarounds are only allowed under these 3 conditions:
1. Emergency response in production environment (root solution required within 24 hours)
2. External service failures (alternative means while waiting for recovery)
3. Known framework bugs (waiting for fixed version release)
- **Recommendation**: Prioritize identifying root causes and avoid superficial fixes
### Best Practices
1. **Complete information collection**: Check error messages from beginning to end
2. **Reproducibility confirmation**: Prioritize creating minimal reproduction code
3. **Step-by-step approach**: Start with small fixes and verify
4. **Documentation**: Record the solution process for knowledge sharing
#### Common Pitfalls
- **Symptom treatment**: Superficial fixes that miss root causes
- **Overgeneralization**: Widely applying solutions for specific cases
- **Omitted verification**: Not checking side effects after fixes
- **Knowledge individualization**: Not documenting solution methods
### Related Commands
- `/design-patterns`: Analyze code structure issues and suggest patterns
- `/tech-debt`: Analyze root causes of errors from a technical debt perspective
- `/analyzer`: For cases requiring deeper root cause analysis

commands/multi-role.md Normal file
@@ -0,0 +1,314 @@
## Multi Role
A command that analyzes the same target in parallel with multiple roles and generates an integrated report.
### Usage
```bash
/multi-role <role1>,<role2> [--agent|-a] [analysis_target]
/multi-role <role1>,<role2>,<role3> [--agent|-a] [analysis_target]
```
### Available Roles
#### Specialized Analysis Roles
- `security`: Security audit expert
- `performance`: Performance optimization expert
- `analyzer`: Root cause analysis expert
- `frontend`: Frontend and UI/UX expert
- `mobile`: Mobile development expert
- `backend`: Backend and server-side expert
#### Development Support Roles
- `reviewer`: Code review expert
- `architect`: System architect
- `qa`: Test engineer
**Important**:
- Place the `--agent` option immediately after specifying roles
- Write your message after `--agent`
- Correct example: `/multi-role qa,architect --agent Evaluate the plan`
- Incorrect example: `/multi-role qa,architect Evaluate the plan --agent`
### Options
- `--agent` or `-a`: Execute each role as a sub-agent in parallel (recommended for large-scale analysis)
- When using this option, if role descriptions include proactive delegation phrases (like "use PROACTIVELY"), more aggressive automatic delegation becomes enabled
### Basic Examples
```bash
# Dual analysis of security and performance (normal)
/multi-role security,performance
"Evaluate this API endpoint"
# Parallel analysis of large-scale system (sub-agents)
/multi-role security,performance --agent
"Comprehensively analyze system security and performance"
# Integrated analysis of frontend, mobile, and performance
/multi-role frontend,mobile,performance
"Consider optimization proposals for this screen"
# Multifaceted evaluation of architecture design (sub-agents)
/multi-role architect,security,performance --agent
"Evaluate microservices design"
```
### Analysis Process
### Phase 1: Parallel Analysis
Each role independently analyzes the same target
- Perform evaluation from specialized perspective
- Make judgments based on role-specific criteria
- Generate independent recommendations
### Phase 2: Integrated Analysis
Structure and integrate results
- Organize evaluation results from each role
- Identify overlaps and contradictions
- Clarify complementary relationships
### Phase 3: Integrated Report
Generate final recommendations
- Prioritized action plan
- Explicit trade-offs
- Implementation roadmap
### Output Format Examples
### For 2-role Analysis
```text
Multi-role Analysis: Security + Performance
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Analysis Target: API endpoint /api/users
Security Analysis Results:
Authentication: JWT verification properly implemented
Authorization: Role-based access control incomplete
Encryption: API keys logged in plain text
Evaluation Score: 65/100
Importance: High (due to sensitive data access)
Performance Analysis Results:
Response Time: Average 180ms (within target of 200ms)
Database Queries: N+1 problem detected
Caching: Redis cache not implemented
Evaluation Score: 70/100
Importance: Medium (currently within acceptable range)
Interrelated Analysis:
Synergistic Opportunities:
- Consider encryption when implementing Redis cache
- Improve logging for both security and performance gains
Trade-off Points:
- Authorization check strengthening ↔ Impact on response time
- Log encryption ↔ Reduced debugging efficiency
Integrated Priorities:
Critical: Fix API key plain text output
High: Resolve N+1 queries
Medium: Implement Redis cache + encryption
Low: Refine authorization control
Implementation Roadmap:
Week 1: Implement API key masking
Week 2: Database query optimization
Weeks 3-4: Cache layer design and implementation
Month 2: Progressive strengthening of authorization control
```
### For 3-role Analysis
```text
Multi-role Analysis: Frontend + Mobile + Performance
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Analysis Target: User Profile Screen
Frontend Analysis Results:
Usability: Intuitive layout
Accessibility: 85% WCAG 2.1 compliance
Responsive: Issues with tablet display
Mobile Analysis Results:
Touch Targets: 44pt+ ensured
One-handed Operation: Important buttons placed at top
Offline Support: Not implemented
Performance Analysis Results:
Initial Display: LCP 2.1s (good)
Image Optimization: WebP not supported
Lazy Loading: Not implemented
Integrated Recommendations:
1. Mobile optimization (one-handed operation + offline support)
2. Image optimization (WebP + lazy loading)
3. Tablet UI improvements
Priority: Mobile > Performance > Frontend
Implementation Period: 3-4 weeks
```
### Effective Combination Patterns
### Security-focused
```bash
/multi-role security,architect
"Authentication system design"
/multi-role security,frontend
"Login screen security"
/multi-role security,mobile
"Mobile app data protection"
```
### Performance-focused
```bash
/multi-role performance,architect
"Scalability design"
/multi-role performance,frontend
"Web page speed optimization"
/multi-role performance,mobile
"App performance optimization"
```
### User Experience-focused
```bash
/multi-role frontend,mobile
"Cross-platform UI"
/multi-role frontend,performance
"Balance between UX and performance"
/multi-role mobile,performance
"Mobile UX optimization"
```
### Comprehensive Analysis
```bash
/multi-role architect,security,performance
"Overall system evaluation"
/multi-role frontend,mobile,performance
"Comprehensive user experience evaluation"
/multi-role security,performance,mobile
"Comprehensive mobile app diagnosis"
```
### Collaboration with Claude
```bash
# Combine with file analysis
cat src/components/UserProfile.tsx
/multi-role frontend,mobile
"Evaluate this component from multiple perspectives"
# Evaluate design documents
cat architecture-design.md
/multi-role architect,security,performance
"Evaluate this design across multiple specialties"
# Error analysis
cat performance-issues.log
/multi-role performance,analyzer
"Analyze performance issues from multiple angles"
```
### Choosing between multi-role and role-debate
### When to use multi-role
- You want independent evaluations from each specialty
- You want to create an integrated improvement plan
- You want to organize contradictions and overlaps
- You want to determine implementation priorities
### When to use role-debate
- There are trade-offs between specialties
- Opinions might differ on technology selection
- You want to decide design policies through discussion
- You want to hear debates from different perspectives
### Sub-agent Parallel Execution (--agent)
Using the `--agent` option executes each role as an independent sub-agent in parallel.
#### Promoting Automatic Delegation
If role file descriptions include phrases like these, more proactive automatic delegation is enabled when using `--agent`:
- "use PROACTIVELY"
- "MUST BE USED"
- Other emphasis expressions
#### Execution Flow
```text
Normal execution:
Role 1 → Role 2 → Role 3 → Integration
(Sequential execution, approx. 3-5 minutes)
--agent execution:
Role 1 ─┐
Role 2 ─┼→ Integration
Role 3 ─┘
(Parallel execution, approx. 1-2 minutes)
```
#### Effective Usage Examples
```bash
# Comprehensive evaluation of large-scale system
/multi-role architect,security,performance,qa --agent
"Comprehensive evaluation of new system"
# Detailed analysis from multiple perspectives
/multi-role frontend,mobile,performance --agent
"Full screen UX optimization analysis"
```
#### Performance Comparison
| Number of Roles | Normal Execution | --agent Execution | Reduction Rate |
| --------------- | ---------------- | ----------------- | -------------- |
| 2 roles | 2-3 minutes | 1 minute | 50% |
| 3 roles | 3-5 minutes | 1-2 minutes | 60% |
| 4 roles | 5-8 minutes | 2-3 minutes | 65% |
### Notes
- Executing 3 or more roles simultaneously results in longer output
- Complex analyses may take longer to execute
- If conflicting recommendations arise, consider using role-debate
- Final judgments should be made by the user with reference to integrated results
- **When using --agent**: Consumes more resources but is efficient for large-scale analyses
### Role Configuration Details
- Detailed settings, domain expertise, and discussion traits for each role are defined in `.claude/agents/roles/`
- Includes Evidence-First practices and cognitive bias countermeasures
- Role-specific trigger phrases automatically enable the specialized mode

commands/plan.md
## Plan
Helps you plan before coding. Creates detailed strategies to make development smoother.
### Usage
```bash
# Request Plan Mode from Claude
"Create an implementation plan for [implementation content]"
```
### Basic Examples
```bash
# Implementation plan for new feature
"Create an implementation plan for user authentication functionality"
# System design plan
"Create an implementation plan for microservice splitting"
# Refactoring plan
"Create a refactoring plan for legacy code"
```
### Collaboration with Claude
```bash
# Complex feature implementation
"Create an implementation plan for chat functionality, including WebSocket, real-time notifications, and history management"
# Database design
"Create a database design plan for an e-commerce site, including product, order, and user management"
# API design
"Create an implementation plan for GraphQL API, including authentication, caching, and rate limiting"
# Infrastructure design
"Create an implementation plan for Dockerization, including development environment, production environment, and CI/CD"
```
### How Plan Mode Works
**Automatic Start**
- Starts automatically when you describe what to build
- Or just say "Create an implementation plan"
**What You Get**
- Clear requirements (user stories, success criteria)
- Design docs (architecture, data model, UI)
- Implementation steps (tasks, tracking, quality checks)
- Risk analysis and solutions
**Getting Your Approval**
- I'll show you the plan using `exit_plan_mode`
- **Important**: I always wait for your explicit OK
- I won't code without your approval
- You can request changes anytime
- TodoWrite tracking starts after you approve
### Detailed Examples
```bash
# Complex system implementation
"Create an implementation plan for an online payment system, including Stripe integration, security, and error handling"
# Frontend implementation
"Create an implementation plan for a React dashboard, including state management, component design, and testing"
# Backend implementation
"Create an implementation plan for a RESTful API, including authentication, validation, and logging"
# DevOps implementation
"Create an implementation plan for a CI/CD pipeline, including test automation, deployment, and monitoring"
```
### 3-Phase Workflow
#### Phase 1: Requirements
- **User Stories**: What are we building and why?
- **Success Criteria**: How do we know it's done?
- **Constraints**: What limits do we have?
- **Priority**: What's must-have vs nice-to-have?
#### Phase 2: Design
- **Architecture**: How will the system work?
- **Data Model**: Database schema and APIs
- **UI/UX**: Screen layouts and user flow
- **Risks**: What could go wrong and how to prevent it
#### Phase 3: Implementation
- **Task Breakdown**: Split into manageable chunks
- **Progress Tracking**: TodoWrite manages status
- **Quality Checks**: Testing and verification plan
- **Your Approval**: Show plan and wait for your OK
### Notes
**When to Use This**
- Best for complex projects
- Skip for simple fixes
- Great for 3+ step tasks or new features
**Technical Notes**
- Don't rely on `exit_plan_mode` return values
- Only your explicit approval counts
- Works differently than CLI plan mode
**Important Rules**
- Never start coding before you approve
- Always wait for your response
- Offer alternatives if something fails
### Execution Example
```bash
# Usage example
"Create an implementation plan for a user management system"
# What happens:
# 1. Plan Mode starts
# 2. Analyze requirements and pick tech
# 3. Structure the implementation
# 4. Show you the plan
# 5. Start coding after you approve
```

commands/pr-auto-update.md
## PR Auto Update
## Overview
Automatically updates Pull Request descriptions and labels by analyzing Git changes and generating appropriate content.
## Usage
```bash
/pr-auto-update [options] [PR number]
```
### Options
- `--pr <number>`: Specify target PR number (automatically detected from current branch if omitted)
- `--description-only`: Update only the description (keep labels unchanged)
- `--labels-only`: Update only labels (keep description unchanged)
- `--dry-run`: Show generated content without making actual updates
- `--lang <language>`: Specify the output language (e.g., `en`)
### Basic Examples
```bash
# Auto-update PR for current branch
/pr-auto-update
# Update specific PR
/pr-auto-update --pr 1234
# Update description only
/pr-auto-update --description-only
# Check with dry-run
/pr-auto-update --dry-run
```
## Feature Details
### 1. PR Auto Detection
Automatically detects the corresponding PR from the current branch:
```bash
# Search PR from branch
gh pr list --head $(git branch --show-current) --json number,title,url
```
### 2. Change Analysis
Collects and analyzes the following information:
- **File changes**: Added, deleted, or modified files
- **Code analysis**: Changes to imports, function definitions, class definitions
- **Tests**: Presence and content of test files
- **Documentation**: Updates to README, docs
- **Configuration**: Changes to package.json, pubspec.yaml, configuration files
- **CI/CD**: Changes to GitHub Actions, workflows
### 3. Automatic Description Generation
#### Template Processing Priority
1. **Existing PR description**: Already-written content is followed exactly
2. **Project template**: Gets structure from `.github/PULL_REQUEST_TEMPLATE.md`
3. **Default template**: Fallback when above don't exist
#### Rules for Preserving Existing Content
**Important**: Do not modify existing content
- Keep existing sections
- Only complete empty sections
- Keep functional comments (like Copilot review rules)
#### Using Project Templates
```bash
# Parse structure of .github/PULL_REQUEST_TEMPLATE.md
parse_template_structure() {
local template_file="$1"
if [ -f "$template_file" ]; then
# Extract section structure
grep -E '^##|^###' "$template_file"
# Identify comment placeholders
grep -E '<!--.*-->' "$template_file"
# Completely follow existing template structure
cat "$template_file"
fi
}
```
### 4. Automatic Label Setting
#### Label Retrieval Mechanism
**Priority**:
1. **`.github/labels.yml`**: Get from project-specific label definitions
2. **GitHub API**: Get existing labels with `gh api repos/{OWNER}/{REPO}/labels --jq '.[].name'`
#### Automatic Determination Rules
**File Pattern Based**:
- Documentation: `*.md`, `README`, `docs/` → labels containing `documentation|docs|doc`
- Tests: `test`, `spec` → labels containing `test|testing`
- CI/CD: `.github/`, `*.yml`, `Dockerfile` → labels containing `ci|build|infra|ops`
- Dependencies: `package.json`, `pubspec.yaml`, `requirements.txt` → labels containing `dependencies|deps`
**Change Content Based**:
- Bug fixes: `fix|bug|error|crash|correction` → labels containing `bug|fix`
- New features: `feat|feature|add|implement|new-feature|implementation` → labels containing `feature|enhancement|feat`
- Refactoring: `refactor|clean|restructure` → labels containing `refactor|cleanup|clean`
- Performance: `performance|perf|optimize|optimization` → labels containing `performance|perf`
- Security: `security|secure|vulnerability` → labels containing `security`
#### Constraints
- **Maximum 3**: Upper limit on automatically selected labels
- **Existing labels only**: Creating new labels is prohibited
- **Partial match**: Determined by whether keywords are contained in label names
#### Actual Usage Examples
**When `.github/labels.yml` exists**:
```bash
# Auto-retrieve from label definitions
grep "^- name:" .github/labels.yml | sed "s/^- name: '\?\([^']*\)'\?/\1/"
# Example: Use project-specific label system
```
**When retrieving from GitHub API**:
```bash
# Get list of existing labels
gh api repos/{OWNER}/{REPO}/labels --jq '.[].name'
# Example: Use standard labels like bug, enhancement, documentation
```
### 5. Execution Flow
```bash
#!/bin/bash
# 1. PR Detection & Retrieval
detect_pr() {
if [ -n "$PR_NUMBER" ]; then
echo "$PR_NUMBER"
else
gh pr list --head $(git branch --show-current) --json number --jq '.[0].number'
fi
}
# 2. Change Analysis
analyze_changes() {
local pr_number=$1
# Get file changes
gh pr diff $pr_number --name-only
# Content analysis
gh pr diff $pr_number | head -1000
}
# 3. Description Generation
generate_description() {
local pr_number=$1
local changes=$2
# Get current PR description
local current_body=$(gh pr view "$pr_number" --json body --jq '.body')
# Use existing content if available
if [ -n "$current_body" ]; then
echo "$current_body"
else
# Generate new from template
local template_file=".github/PULL_REQUEST_TEMPLATE.md"
if [ -f "$template_file" ]; then
generate_from_template "$(cat "$template_file")" "$changes"
else
generate_from_template "" "$changes"
fi
fi
}
# Generate from template
generate_from_template() {
local template="$1"
local changes="$2"
if [ -n "$template" ]; then
# Use template as-is (preserve HTML comments)
echo "$template"
else
# Generate in default format
echo "## What does this change?"
echo ""
echo "$changes"
fi
}
# 4. Label Determination
determine_labels() {
local changes=$1
local file_list=$2
local pr_number=$3
# Get available labels
local available_labels=()
if [ -f ".github/labels.yml" ]; then
# Extract label names from labels.yml
available_labels=($(grep "^- name:" .github/labels.yml | sed "s/^- name: '\?\([^']*\)'\?/\1/"))
else
# Get labels from GitHub API
local repo_info=$(gh repo view --json owner,name)
local owner=$(echo "$repo_info" | jq -r .owner.login)
local repo=$(echo "$repo_info" | jq -r .name)
available_labels=($(gh api "repos/$owner/$repo/labels" --jq '.[].name'))
fi
local suggested_labels=()
# Generic pattern matching
analyze_change_patterns "$file_list" "$changes" available_labels suggested_labels
# Limit to maximum 3
echo "${suggested_labels[@]:0:3}"
}
# Determine labels from change patterns
analyze_change_patterns() {
local file_list="$1"
local changes="$2"
local -n available_ref=$3
local -n suggested_ref=$4
# File type determination
if echo "$file_list" | grep -q "\.md$\|README\|docs/"; then
add_matching_label "documentation\|docs\|doc" available_ref suggested_ref
fi
if echo "$file_list" | grep -q "test\|spec"; then
add_matching_label "test\|testing" available_ref suggested_ref
fi
# Change content determination
if echo "$changes" | grep -iq "fix\|bug\|error\|crash\|correction"; then
add_matching_label "bug\|fix" available_ref suggested_ref
fi
if echo "$changes" | grep -iq "feat\|feature\|add\|implement\|new-feature\|implementation"; then
add_matching_label "feature\|enhancement\|feat" available_ref suggested_ref
fi
}
# Add matching label
add_matching_label() {
local pattern="$1"
local -n available_ref=$2
local -n suggested_ref=$3
# Skip if already have 3 labels
if [ ${#suggested_ref[@]} -ge 3 ]; then
return
fi
# Add first label matching pattern
for available_label in "${available_ref[@]}"; do
if echo "$available_label" | grep -iq "$pattern"; then
# Check for duplicates
local already_exists=false
for existing in "${suggested_ref[@]}"; do
if [ "$existing" = "$available_label" ]; then
already_exists=true
break
fi
done
if [ "$already_exists" = false ]; then
suggested_ref+=("$available_label")
return
fi
fi
done
}
# Keep old function for compatibility
find_and_add_label() {
add_matching_label "$@"
}
# 5. PR Update
update_pr() {
local pr_number=$1
local description="$2"
local labels="$3"
if [ "$DRY_RUN" = "true" ]; then
echo "=== DRY RUN ==="
echo "Description:"
echo "$description"
echo "Labels: $labels"
else
# Get repository information
local repo_info=$(gh repo view --json owner,name)
local owner=$(echo "$repo_info" | jq -r .owner.login)
local repo=$(echo "$repo_info" | jq -r .name)
# Update body using the GitHub API; --field handles JSON escaping
# properly and preserves HTML comments
gh api \
--method PATCH \
"/repos/$owner/$repo/pulls/$pr_number" \
--field body="$description"
# Labels can be handled with regular gh command
if [ -n "$labels" ]; then
# --add-label expects a comma-separated list
gh pr edit "$pr_number" --add-label "$(echo "$labels" | tr ' ' ',')"
fi
fi
}
```
## Configuration File (Future Extension)
`~/.claude/pr-auto-update.config`:
```json
{
"language": "en",
"max_labels": 3
}
```
## Common Patterns
### Flutter Projects
```markdown
## What does this change?
Implemented {feature name}. Solves user {issue}.
### Main Changes
- **UI Implementation**: Created new {screen name}
- **State Management**: Added Riverpod providers
- **API Integration**: Implemented GraphQL queries & mutations
- **Testing**: Added widget tests & unit tests
### Technical Specifications
- **Architecture**: {pattern used}
- **Dependencies**: {newly added packages}
- **Performance**: {optimization details}
```
### Node.js Projects
```markdown
## What does this change?
Implemented {API name} endpoint. Supports {use case}.
### Main Changes
- **API Implementation**: Created new {endpoint}
- **Validation**: Added request validation logic
- **Database**: Implemented operations for {table name}
- **Testing**: Added integration & unit tests
### Security
- **Authentication**: JWT token validation
- **Authorization**: Role-based access control
- **Input Validation**: SQL injection protection
```
### CI/CD Improvements
```markdown
## What does this change?
Improved GitHub Actions workflow. Achieves {effect}.
### Improvements
- **Performance**: Reduced build time by {time}
- **Reliability**: Enhanced error handling
- **Security**: Improved secret management
### Technical Details
- **Parallelization**: Run {job name} in parallel
- **Caching**: Optimized caching strategy for {cache target}
- **Monitoring**: Added monitoring for {metrics}
```
## Important Notes
1. **Complete Preservation of Existing Content**:
- Do not change even a single character of already written content
- Only complete empty comment sections and placeholders
- Respect content intentionally written by users
2. **Template Priority**:
- Existing PR description > `.github/PULL_REQUEST_TEMPLATE.md` > Default
- Completely follow project-specific template structure
3. **Label Constraints**:
- Use `.github/labels.yml` preferentially if it exists
- Get existing labels from GitHub API if it doesn't exist
- Creating new labels is prohibited
- Maximum 3 labels auto-selected
4. **Safe Updates**:
- Recommend pre-confirmation with `--dry-run`
- Show warning for changes containing sensitive information
- Save original description as backup
5. **Consistency Maintenance**:
- Match existing PR style of the project
- Maintain language consistency (Japanese/English)
- Inherit labeling conventions
## Troubleshooting
### Common Issues
1. **PR not found**: Check branch name and PR association
2. **Permission error**: Check GitHub CLI authentication status
3. **Cannot set labels**: Check repository permissions
4. **HTML comments get escaped**: GitHub CLI (`gh pr edit`) converts `<!-- -->` to `&lt;!-- --&gt;` by design
### GitHub CLI HTML Comment Escaping Issue
**Important**: GitHub CLI (`gh pr edit`) automatically escapes HTML comments. Also, shell redirect processing may introduce invalid strings like `EOF < /dev/null`.
#### Fundamental Solutions
1. **Use GitHub API --field option**: Use `--field` for proper escape processing
2. **Simplify shell processing**: Avoid complex redirects and pipe processing
3. **Simplify template processing**: Eliminate HTML comment removal processing and preserve completely
4. **Proper JSON escaping**: Handle special characters correctly
### Debug Options
```bash
# Detailed log output (to be added during implementation)
/pr-auto-update --verbose
```

commands/pr-create.md
## PR Create
Creates Pull Requests automatically by analyzing your Git changes for a smoother workflow.
### Usage
```bash
# Auto-create PR from your changes
git add . && git commit -m "feat: Implement user authentication"
"Create a Draft PR with the right description and labels"
# Keep your existing template
cp .github/PULL_REQUEST_TEMPLATE.md pr_body.md
"Fill in the blanks but keep the template structure intact"
# Mark as ready when done
gh pr ready
"Switch to Ready for Review after checking quality"
```
### Basic Examples
```bash
# 1. Create branch and commit
git checkout main && git pull
git checkout -b feat-user-profile
git add . && git commit -m "feat: Implement user profile feature"
git push -u origin feat-user-profile
# 2. Create PR
"Please create a PR:
1. Check what changed with git diff --cached
2. Use the PR template from .github/PULL_REQUEST_TEMPLATE.md
3. Pick up to 3 labels that match the changes
4. Create it as a Draft (keep HTML comments)"
# 3. Make it ready after CI passes
"Once CI is green, mark the PR as Ready for Review"
```
### Execution Steps
#### 1. Create Branch
```bash
# Branch naming: {type}-{subject}
git checkout main
git pull
git checkout -b feat-user-authentication
# Confirm you're on the right branch
git branch --show-current
```
#### 2. Commit
```bash
# Stage your changes
git add .
# Commit with a clear message
git commit -m "feat: Implement user authentication API"
```
#### 3. Push to Remote
```bash
# First push (sets upstream)
git push -u origin feat-user-authentication
# Later pushes
git push
```
#### 4. Create Draft PR with Automatic Analysis
**Step 1: Analyze Changes**
```bash
# See what files changed
git diff --cached --name-only
# Review the actual changes (first 1000 lines)
git diff --cached | head -1000
```
**Step 2: Auto-generate Description**
```bash
# Template priority:
# 1. Keep existing PR description as-is
# 2. Use .github/PULL_REQUEST_TEMPLATE.md
# 3. Fall back to default template
cp .github/PULL_REQUEST_TEMPLATE.md pr_body.md
# Fill empty sections only - don't touch HTML comments or separators
```
**Step 3: Auto-select Labels**
```bash
# Get available labels (non-interactive)
"Retrieve available labels from .github/labels.yml or GitHub repository and automatically select appropriate labels based on changes"
# Auto-selection by pattern matching (max 3)
# - Documentation: *.md, docs/ → documentation|docs
# - Tests: test, spec → test|testing
# - Bug fixes: fix|bug → bug|fix
# - New features: feat|feature → feature|enhancement
```
**Step 4: Create PR (Method A: GitHub API, Preserve HTML Comments)**
```bash
# Create PR
"Create a Draft PR with the following information:
- Title: Auto-generated from commit message
- Description: Properly filled using .github/PULL_REQUEST_TEMPLATE.md
- Labels: Auto-selected from changes (max 3)
- Base branch: main
- Preserve all HTML comments"
```
**Method B: GitHub MCP (Fallback)**
```javascript
// Create PR while preserving HTML comments
mcp_github_create_pull_request({
owner: "organization",
repo: "repository",
base: "main",
head: "feat-user-authentication",
title: "feat: Implement user authentication",
body: prBodyContent, // Full content including HTML comments
draft: true,
maintainer_can_modify: true,
});
```
### Auto Label Selection System
#### Determining from File Patterns
- **Documentation**: `*.md`, `README`, `docs/` → `documentation|docs|doc`
- **Tests**: `test`, `spec` → `test|testing`
- **CI/CD**: `.github/`, `*.yml`, `Dockerfile` → `ci|build|infra|ops`
- **Dependencies**: `package.json`, `pubspec.yaml` → `dependencies|deps`
#### Determining from Content
- **Bug fixes**: `fix|bug|error|crash|repair` → `bug|fix`
- **New features**: `feat|feature|add|implement|new-feature|implementation` → `feature|enhancement|feat`
- **Refactoring**: `refactor|clean|restructure` → `refactor|cleanup|clean`
- **Performance**: `performance|perf|optimize` → `performance|perf`
- **Security**: `security|secure` → `security`
#### Constraints
- **Max 3 labels**: Upper limit for automatic selection
- **Existing labels only**: Prohibited from creating new labels
- **Partial match**: Determined by keyword inclusion in label names
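As a minimal, self-contained sketch of this selection logic (the label set and changed files below are hypothetical; a real run would pull them from `.github/labels.yml` or the GitHub API):

```bash
# Hypothetical inputs; in practice fetch these from the repository
available_labels="bug enhancement documentation ci dependencies"
changed_files="src/auth.js docs/setup.md package.json"

selected=""
count=0
for file in $changed_files; do
    # Map file patterns to a label keyword
    case "$file" in
        *.md|docs/*|README*) pattern="documentation" ;;
        package.json|pubspec.yaml) pattern="dependencies" ;;
        .github/*|*.yml|Dockerfile) pattern="ci" ;;
        *) continue ;;
    esac
    for label in $available_labels; do
        # Skip labels already selected
        case " $selected " in *" $label "*) continue ;; esac
        if [ "$count" -lt 3 ] && echo "$label" | grep -q "$pattern"; then
            selected="$selected $label"
            count=$((count + 1))
        fi
    done
done
echo "${selected# }"   # prints: documentation dependencies
```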
### Project Guidelines
#### Basic Approach
1. **Always start as Draft**: All PRs must be created in Draft state
2. **Gradual quality improvement**: Phase 1 (Basic implementation) → Phase 2 (Add tests) → Phase 3 (Update documentation)
3. **Appropriate labels**: Always add up to 3 labels
4. **Use templates**: Always use `.github/PULL_REQUEST_TEMPLATE.md`
5. **Japanese spacing**: Always add a half-width space between Japanese text and alphanumeric characters
#### Branch Naming Convention
```text
{type}-{subject}
Examples:
- feat-user-profile
- fix-login-error
- refactor-api-client
```
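A tiny sketch of deriving a branch name from a type and subject (the values here are illustrative):

```bash
type="feat"
subject="User Profile"
# Lowercase the subject and replace spaces with hyphens
branch="${type}-$(echo "$subject" | tr 'A-Z' 'a-z' | tr ' ' '-')"
echo "$branch"   # feat-user-profile
```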
#### Commit Messages
```text
{type}: {description}
Examples:
- feat: Implement user authentication API
- fix: Correct login error
- docs: Update README
```
### Template Processing System
#### Processing Priority
1. **Existing PR description**: Keep everything that's already written
2. **Project template**: Use `.github/PULL_REQUEST_TEMPLATE.md`
3. **Default template**: Use this if nothing else exists
#### Existing Content Preservation Rules
- **Don't touch existing content**: Leave what's already there alone
- **Fill in the blanks only**: Add content where it's missing
- **Keep functional comments**: Like `<!-- Copilot review rule -->`
- **Keep HTML comments**: All `<!-- ... -->` stay as-is
- **Keep separators**: Things like `---` stay put
#### Handling HTML Comment Preservation
**Heads up**: GitHub CLI (`gh pr edit`) escapes HTML comments, and shell processing can mess things up with strings like `EOF < /dev/null`.
**How to fix this**:
1. **Use GitHub API's --field option**: This handles escaping properly
2. **Keep it simple**: Skip complex pipes and redirects
3. **Don't remove anything**: Keep all HTML comments and templates intact
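A hedged sketch of the safe path (the template body is hypothetical, and `OWNER`/`REPO`/`PR_NUMBER` are placeholders you would fill in):

```bash
# Hypothetical template body with a functional HTML comment
cat > pr_body.md <<'EOF'
## What does this change?
<!-- Copilot review rule: summarize in two lines -->
Implements user authentication.
EOF

# The quoted heredoc keeps HTML comments byte-for-byte intact
grep -c '<!--' pr_body.md   # 1

# Update via the API instead of `gh pr edit` (requires gh auth):
# gh api --method PATCH "/repos/$OWNER/$REPO/pulls/$PR_NUMBER" \
#     --field body="$(cat pr_body.md)"
```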
### Review Comment Responses
```bash
# Commit your fixes
git add .
git commit -m "fix: Address review feedback"
git push
```
### Notes
#### Importance of HTML Comment Preservation
- **GitHub CLI issue**: `gh pr edit` escapes HTML comments and can break things
- **The fix**: Use GitHub API's `--field` option for proper handling
- **Keep everything**: Don't remove HTML comments - keep the whole template
#### Automation Constraints
- **No new labels**: Can only use labels from `.github/labels.yml`
- **3 labels max**: That's the limit for auto-selection
- **Hands off manual content**: Never change what someone wrote
#### Step-by-Step Quality
- **Start with Draft**: Every PR begins as a draft
- **Check CI**: Run `gh pr checks` to see the status
- **Mark as ready**: Use `gh pr ready` when quality looks good
- **Follow the template**: Stick to your project's structure

commands/pr-feedback.md
## PR Feedback
Efficiently handle Pull Request review comments and achieve root cause resolution using a 3-stage error analysis approach.
### Usage
```bash
# Retrieve and analyze review comments
gh pr view --comments
"Classify review comments by priority and create an action plan"
# Detailed analysis of error-related comments
gh pr checks
"Analyze CI errors using a 3-stage approach to identify root causes"
# Quality confirmation after fixes
npm test && npm run lint
"Fixes are complete - please check regression tests and code quality"
```
### Basic Examples
```bash
# Classify comments
gh pr view 123 --comments | head -20
"Classify review comments into must/imo/nits/q categories and determine response order"
# Collect error information
npm run build 2>&1 | tee error.log
"Identify the root cause of build errors and suggest appropriate fixes"
# Verify fix implementation
git diff HEAD~1
"Evaluate whether this fix appropriately addresses the review comments"
```
### Comment Classification System
```text
🔴 must: Required fixes
├─ Security issues
├─ Functional bugs
├─ Design principle violations
└─ Convention violations
🟡 imo: Improvement suggestions
├─ Better implementation methods
├─ Performance improvements
├─ Readability enhancements
└─ Refactoring proposals
🟢 nits: Minor points
├─ Typo fixes
├─ Indentation adjustments
├─ Comment additions
└─ Naming refinements
🔵 q: Questions/confirmations
├─ Implementation intent verification
├─ Specification clarification
├─ Design decision background
└─ Alternative solution consideration
```
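Assuming reviewers prefix their comments with these category keywords, a quick tally can be sketched like this (the sample comments are hypothetical; a real run would feed in `gh pr view 123 --comments`):

```bash
# Hypothetical review comments, one per line
comments='must: fix SQL injection in the search query
imo: extract this block into a helper
nits: typo in variable name
q: why polling instead of webhooks?'

# Count comments per category
printf '%s\n' "$comments" | grep -oE '^(must|imo|nits|q)' | sort | uniq -c
```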
### 3-Stage Error Analysis Approach
#### Stage 1: Information Collection
**Required actions**
- Full error message capture
- Stack trace review
- Reproduction condition identification
**Recommended actions**
- Environment information collection
- Recent change history
- Related logs review
#### Stage 2: Root Cause Analysis
- 5 Whys analysis application
- Dependency tracking
- Environment difference checking
- Minimal reproduction code creation
#### Stage 3: Solution Implementation
- Immediate response (hotfix)
- Root cause resolution (essential fix)
- Preventive measures (recurrence prevention)
### Response Flow
1. **Comment analysis**: Classification by priority
2. **Fix plan**: Determining response order
3. **Phased fixes**: Critical → High → Medium → Low
4. **Quality confirmation**: Testing, linting, building
5. **Progress report**: Description of specific fixes
### Post-Fix Verification
```bash
# Basic checks
npm test
npm run lint
npm run build
# Regression tests
npm run test:e2e
# Code quality
npm run test:coverage
```
### Reply Templates
**Fix completion report**
```markdown
@reviewer Thank you for your feedback.
Fixes are complete:
- [Specific fix details]
- [Test results]
- [Verification method]
```
**Technical decision explanation**
```markdown
Implementation background: [Reason]
Considered alternatives: [Options and decision rationale]
Adopted solution benefits: [Advantages]
```
### Notes
- **Priority adherence**: Address in order of Critical → High → Medium → Low
- **Test first**: Confirm regression tests before making fixes
- **Clear reporting**: Describe fix details and verification methods specifically
- **Constructive dialogue**: Polite communication based on technical grounds

commands/pr-issue.md
## Issue List
Displays a prioritized list of open issues in the current repository.
### Usage
```bash
# Request from Claude
"Show a prioritized list of open issues"
```
### Basic Examples
```bash
# Get repository information
gh repo view --json nameWithOwner | jq -r '.nameWithOwner'
# Get open issue information and request from Claude
gh issue list --state open --json number,title,author,createdAt,updatedAt,labels,assignees,comments --limit 30
"Organize the above issues by priority, including a 2-line summary for each issue. Generate URLs using the repository name obtained above"
```
### Display Format
```text
Open Issues List (by Priority)
### High Priority
#number Title [labels] | Author | Time since opened | Comment count | Assignee
├─ Summary line 1
└─ Summary line 2
https://github.com/owner/repo/issues/number
### Medium Priority
(Similar format)
### Low Priority
(Similar format)
```
### Priority Assessment Criteria
**High Priority**
- Issues with `bug` label
- Issues with `critical` or `urgent` labels
- Issues with `security` label
**Medium Priority**
- Issues with `enhancement` label
- Issues with `feature` label
- Issues with assignees
**Low Priority**
- Issues with `documentation` label
- Issues with `good first issue` label
- Issues with `wontfix` or `duplicate` labels
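The label-based part of these criteria can be sketched as a small shell function (a simplification: the assignee-based promotion to Medium is omitted, and the label lists here are assumptions):

```bash
# Map a comma-separated label list to a priority bucket
issue_priority() {
    case ",$1," in
        *,bug,*|*,critical,*|*,urgent,*|*,security,*) echo "High" ;;
        *,enhancement,*|*,feature,*) echo "Medium" ;;
        *) echo "Low" ;;
    esac
}

issue_priority "ui,bug"          # High
issue_priority "feature"         # Medium
issue_priority "documentation"   # Low
```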
### Label Filtering
```bash
# Get only issues with specific label
gh issue list --state open --label "bug" --json number,title,author,createdAt,labels,comments --limit 30
# Filter with multiple labels (AND condition)
gh issue list --state open --label "bug,high-priority" --json number,title,author,createdAt,labels,comments --limit 30
```
### Notes
- Requires GitHub CLI (`gh`)
- Only displays issues in open state
- Shows maximum 30 issues
- Elapsed time is from when the issue was opened
- Issue URLs are automatically generated from the actual repository name

commands/pr-list.md
## PR List
Displays a prioritized list of open PRs in the current repository.
### Usage
```bash
# Request from Claude
"Show a prioritized list of open PRs"
```
### Basic Examples
```bash
# Get repository information
gh repo view --json nameWithOwner | jq -r '.nameWithOwner'
# Get open PR information and request from Claude
gh pr list --state open --draft=false --json number,title,author,createdAt,additions,deletions,reviews --limit 30
"Organize the above PRs by priority, including a 2-line summary for each PR. Generate URLs using the repository name obtained above"
```
### Display Format
```text
Open PRs List (by Priority)
### High Priority
#number Title [Draft/DNM] | Author | Time since opened | Approved count | +additions/-deletions
├─ Summary line 1
└─ Summary line 2
https://github.com/owner/repo/pull/number
### Medium Priority
(Similar format)
### Low Priority
(Similar format)
```
### Priority Assessment Criteria
**High Priority**
- `fix:` Bug fixes
- `release:` Release work
**Medium Priority**
- `feat:` New features
- `update:` Feature improvements
- Other regular PRs
**Low Priority**
- PRs containing DO NOT MERGE
- PRs with `test:`, `build:`, or `perf:` prefixes
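The title-based part of this bucketing can be sketched as a shell function (a simplification, with hypothetical sample titles; draft state is already filtered out by `--draft=false`):

```bash
# Map a PR title to a priority bucket
pr_priority() {
    case "$1" in
        *"DO NOT MERGE"*) echo "Low" ;;
        fix:*|release:*) echo "High" ;;
        test:*|build:*|perf:*) echo "Low" ;;
        *) echo "Medium" ;;
    esac
}

pr_priority "fix: resolve login crash"   # High
pr_priority "feat: add profile page"     # Medium
```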
### Notes
- Requires GitHub CLI (`gh`)
- Only displays PRs in open state (Drafts are excluded)
- Shows maximum 30 PRs
- Elapsed time is from when the PR was opened
- PR URLs are automatically generated from the actual repository name

commands/pr-review.md
## PR Review
Ensure code quality and architectural soundness through systematic Pull Request reviews.
### Usage
```bash
# Comprehensive PR review
gh pr view 123 --comments
"Systematically review this PR and provide feedback from code quality, security, and architecture perspectives"
# Security-focused review
gh pr diff 123
"Focus on reviewing security risks and vulnerabilities"
# Architecture perspective review
gh pr checkout 123 && find . -name "*.js" | head -10
"Evaluate the architecture from the perspectives of layer separation, dependencies, and SOLID principles"
```
### Basic Examples
```bash
# Quantitative code quality assessment
find . -name "*.js" -exec wc -l {} + | sort -rn | head -5
"Evaluate code complexity, function size, and duplication, and point out improvements"
# Security vulnerability check
grep -r "password\|secret\|token" . --include="*.js" | head -10
"Check for risks of sensitive information leakage, hardcoding, and authentication bypass"
# Architecture violation detection
grep -r "import.*from.*\.\./\.\." . --include="*.js"
"Evaluate layer violations, circular dependencies, and coupling issues"
```
### Comment Classification System
```text
🔴 critical.must: Critical issues
├─ Security vulnerabilities
├─ Data integrity problems
└─ System failure risks
🟡 high.imo: High-priority improvements
├─ Risk of malfunction
├─ Performance issues
└─ Significant decrease in maintainability
🟢 medium.imo: Medium-priority improvements
├─ Readability enhancement
├─ Code structure improvement
└─ Test quality improvement
🟢 low.nits: Minor points
├─ Style unification
├─ Typo fixes
└─ Comment additions
🔵 info.q: Questions/information
├─ Implementation intent confirmation
├─ Design decision background
└─ Best practices sharing
```
### Review Perspectives
#### 1. Code Correctness
- **Logic errors**: Boundary values, null checks, exception handling
- **Data integrity**: Type safety, validation
- **Error handling**: Completeness, appropriate processing
#### 2. Security
- **Authentication/authorization**: Appropriate checks, permission management
- **Input validation**: SQL injection, XSS countermeasures
- **Sensitive information**: Logging restrictions, encryption
#### 3. Performance
- **Algorithms**: Time complexity, memory efficiency
- **Database**: N+1 queries, index optimization
- **Resources**: Memory leaks, cache utilization
#### 4. Architecture
- **Layer separation**: Dependency direction, appropriate separation
- **Coupling**: Tight coupling, interface utilization
- **SOLID principles**: Single responsibility, open-closed, dependency inversion
### Review Flow
1. **Pre-check**: PR information, change diff, related issues
2. **Systematic checks**: Security → Correctness → Performance → Architecture
3. **Constructive feedback**: Specific improvement suggestions and code examples
4. **Follow-up**: Fix confirmation, CI status, final approval
### Comment Templates
#### Security Issues Template
**Format:**
- Priority: `critical.must.`
- Issue: Clear description of the problem
- Code example: Proposed fix
- Rationale: Why this is necessary
**Example:**
```text
critical.must. Password is stored in plaintext
Proposed fix:
const bcrypt = require('bcrypt');
const hashedPassword = await bcrypt.hash(password, 12);
Hashing is required to prevent security risks.
```
#### Performance Improvement Template
**Format:**
- Priority: `high.imo.`
- Issue: Explain performance impact
- Code example: Proposed improvement
- Effect: Describe expected improvement
**Example:**
```text
high.imo. N+1 query problem occurs
Improvement: Eager Loading
const users = await User.findAll({ include: [Post] });
This can significantly reduce the number of queries.
```
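To make the effect concrete, here is a toy, self-contained illustration of the same pattern using an in-memory store that counts queries. The data and helper names are invented for the example; `User.findAll({ include: [Post] })` is the real Sequelize equivalent of the batched path.

```javascript
let queryCount = 0;
const db = {
  users: [{ id: 1 }, { id: 2 }, { id: 3 }],
  posts: [{ userId: 1, title: "a" }, { userId: 2, title: "b" }],
  findUsers() { queryCount++; return this.users; },
  findPostsByUser(id) { queryCount++; return this.posts.filter(p => p.userId === id); },
  findPostsByUsers(ids) { queryCount++; return this.posts.filter(p => ids.includes(p.userId)); },
};

// N+1: one query for the users, then one more per user
queryCount = 0;
db.findUsers().forEach(u => db.findPostsByUser(u.id));
console.log(queryCount); // 4 (1 + 3)

// Eager loading: one query for the users, one batched query for all posts
queryCount = 0;
const users = db.findUsers();
db.findPostsByUsers(users.map(u => u.id));
console.log(queryCount); // 2
```

The query count stays at 2 regardless of how many users there are, which is where the "significantly reduce" claim in the template comes from.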
#### Architecture Violation Template
**Format:**
- Priority: `high.must.`
- Issue: Point out architectural principle violation
- Recommendation: Specific improvement method
**Example:**
```text
high.must. Layer violation occurred
The domain layer directly depends on the infrastructure layer.
Please introduce an interface following the dependency inversion principle.
```
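A minimal sketch of the recommended fix, assuming a plain Node.js codebase: the domain service depends only on a repository contract, and the infrastructure implementation is injected, so the dependency arrow points inward. All class names here are illustrative.

```javascript
// Domain layer: depends only on the repository contract, not on any database.
class UserService {
  constructor(userRepository) { this.userRepository = userRepository; }
  rename(id, name) {
    const user = this.userRepository.find(id);
    user.name = name;
    this.userRepository.save(user);
    return user;
  }
}

// Infrastructure layer: implements the contract (a real one would wrap a DB).
class InMemoryUserRepository {
  constructor() { this.rows = new Map(); }
  find(id) { return { ...this.rows.get(id) }; }
  save(user) { this.rows.set(user.id, { ...user }); }
}

const repo = new InMemoryUserRepository();
repo.save({ id: 1, name: "Ada" });
const service = new UserService(repo); // dependency injected, easily mocked in tests
console.log(service.rename(1, "Grace").name); // "Grace"
```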
### Notes
- **Constructive tone**: Collaborative rather than aggressive communication
- **Specific suggestions**: Provide solutions along with pointing out problems
- **Prioritization**: Address in order of Critical → High → Medium → Low
- **Continuous improvement**: Document review results in a knowledge base

commands/refactor.md
## Refactor
Performs safe, step-by-step code refactoring with quantitative scoring against the SOLID principles. Visualizes technical debt and clarifies improvement priorities.
### Usage
```bash
# Identify complex code and create refactoring plan
find . -name "*.js" -exec wc -l {} + | sort -rn | head -10
"Refactor large files to reduce complexity"
# Detect and consolidate duplicate code
grep -r "function processUser" . --include="*.js"
"Consolidate duplicate functions using Extract Method"
# Evaluate SOLID principles violations
grep -r "class.*Service" . --include="*.js" | head -10
"Assess whether these classes follow Single Responsibility Principle"
```
### Basic Examples
```bash
# Find long methods
awk '/function/ { start = NR } /^}/ { if (NR - start > 50) print FILENAME ":" start }' src/*.js
"Split methods over 50 lines using Extract Method"
# Find complex conditions
grep -r "if.*if.*if" . --include="*.js"
"Improve nested conditions using Strategy pattern"
# Find code smells
grep -r "TODO\|FIXME\|HACK" . --exclude-dir=node_modules
"Resolve technical debt in comments"
```
### Refactoring Techniques
#### Extract Method (Split Big Functions)
```javascript
// Before: Long method
function processOrder(order) {
// 50 lines of complex processing
}
// After: Separation of responsibilities
function processOrder(order) {
validateOrder(order);
calculateTotal(order);
saveOrder(order);
}
```
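A runnable version of the sketch above, with illustrative bodies for the extracted helpers (the order shape is an assumption made for the example):

```javascript
function validateOrder(order) {
  if (!Array.isArray(order.items) || order.items.length === 0) {
    throw new Error("Order must contain at least one item");
  }
}

function calculateTotal(order) {
  order.total = order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

function saveOrder(order, store) {
  store.push(order); // stand-in for a database write
}

function processOrder(order, store = []) {
  validateOrder(order);
  calculateTotal(order);
  saveOrder(order, store);
  return order;
}

console.log(processOrder({ items: [{ price: 10, qty: 2 }, { price: 5, qty: 1 }] }).total); // 25
```

Each helper now has a single, testable responsibility, and `processOrder` reads as a summary of the workflow.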
#### Replace Conditional with Polymorphism (Remove Switch/If Chains)
```javascript
// Before: switch statement
function getPrice(user) {
switch (user.type) {
case "premium":
return basePrice * 0.8;
case "regular":
return basePrice;
}
}
// After: Strategy pattern
class PremiumPricing {
calculate(basePrice) {
return basePrice * 0.8;
}
}
```
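A fuller, runnable sketch of the strategy replacement: each pricing rule becomes its own class and a lookup table replaces the switch, so adding a new user type no longer requires touching `getPrice`.

```javascript
class PremiumPricing { calculate(basePrice) { return basePrice * 0.8; } }
class RegularPricing { calculate(basePrice) { return basePrice; } }

const pricingStrategies = {
  premium: new PremiumPricing(),
  regular: new RegularPricing(),
};

function getPrice(user, basePrice) {
  const strategy = pricingStrategies[user.type];
  if (!strategy) throw new Error(`Unknown user type: ${user.type}`);
  return strategy.calculate(basePrice);
}

console.log(getPrice({ type: "premium" }, 100)); // 80
console.log(getPrice({ type: "regular" }, 100)); // 100
```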
### SOLID Principles Scoring (0-100 points)
#### Evaluation Criteria and Scoring
```text
S - Single Responsibility (20 points)
├─ Number of responsibilities: 1 (20pts) | 2 (15pts) | 3 (10pts) | 4+ (5pts)
├─ Method count: <7 (+5pts) | 7-15 (+3pts) | >15 (0pts)
├─ Clear reasons for change: Clear (+5pts) | Unclear (0pts)
└─ Example score: UserService (auth + data processing) = 10 points
O - Open/Closed (20 points)
├─ Extension points: Strategy/Template Method (20pts) | Inheritance only (10pts) | None (5pts)
├─ Existing code changes for new features: Not needed (+5pts) | Minimal (+3pts) | Required (0pts)
├─ Interface usage: Appropriate (+5pts) | Partial (+3pts) | None (0pts)
└─ Example score: PaymentProcessor (Strategy) = 20 points
L - Liskov Substitution (20 points)
├─ Derived class contract compliance: Complete (20pts) | Partial (10pts) | Violated (0pts)
├─ Precondition strengthening: None (+5pts) | Present (-5pts)
├─ Postcondition weakening: None (+5pts) | Present (-5pts)
└─ Example score: Square extends Rectangle = 0 points (violated)
I - Interface Segregation (20 points)
├─ Interface size: 1-3 methods (20pts) | 4-7 (15pts) | 8+ (5pts)
├─ Unused method implementations: None (+5pts) | 1-2 (+2pts) | 3+ (0pts)
├─ Role clarity: Single role (+5pts) | Multiple roles (0pts)
└─ Example score: Readable/Writable separation = 20 points
D - Dependency Inversion (20 points)
├─ Dependency direction: Abstractions only (20pts) | Mixed (10pts) | Concretions only (5pts)
├─ DI usage: Constructor Injection (+5pts) | Setter (+3pts) | None (0pts)
├─ Testability: Mockable (+5pts) | Difficult (0pts)
└─ Example score: Repository Pattern = 20 points
Total Score = S + O + L + I + D
├─ 90-100 points: Excellent (SOLID compliant)
├─ 70-89 points: Good (Minor improvements needed)
├─ 50-69 points: Fair (Refactoring recommended)
├─ 30-49 points: Poor (Major improvements required)
└─ 0-29 points: Critical (Design overhaul required)
```
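The rating bands above can be sketched as a small helper; the cutoffs follow the table, and the function name is illustrative:

```javascript
// scores: { S, O, L, I, D }, each 0-20 per the criteria above
function rateSolidScore(scores) {
  const total = scores.S + scores.O + scores.L + scores.I + scores.D;
  if (total >= 90) return { total, rating: "Excellent" };
  if (total >= 70) return { total, rating: "Good" };
  if (total >= 50) return { total, rating: "Fair" };
  if (total >= 30) return { total, rating: "Poor" };
  return { total, rating: "Critical" };
}

console.log(rateSolidScore({ S: 10, O: 5, L: 15, I: 10, D: 10 }));
// total 50, rating "Fair" - matches the UserService worked example later in this document
```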
### Technical Debt Quantification
#### Debt Calculation Formula
```text
Technical Debt (time) = Complexity Score × Impact Range × Fix Difficulty
Complexity Score:
├─ Cyclomatic complexity: 1-5 (low) | 6-10 (med) | 11-20 (high) | 21+ (critical)
├─ Cognitive complexity: Nesting depth × conditional branches
├─ Lines of code: <50 (1pt) | 50-200 (2pts) | 200-500 (3pts) | 500+ (5pts)
└─ Duplication rate: 0-10% (1pt) | 10-30% (2pts) | 30-50% (3pts) | 50%+ (5pts)
Impact Range:
├─ Dependent modules: Direct dependencies + Indirect × 0.5
├─ Usage frequency: API calls/day
├─ Business importance: Critical (×3) | High (×2) | Medium (×1) | Low (×0.5)
└─ Team knowledge: 1 person knows (×3) | 2-3 (×2) | 4+ (×1)
Fix Difficulty:
├─ Test coverage: 0% (×3) | <50% (×2) | 50-80% (×1.5) | >80% (×1)
├─ Documentation: None (×2) | Insufficient (×1.5) | Adequate (×1)
├─ Dependencies: Tightly coupled (×3) | Moderate (×2) | Loosely coupled (×1)
└─ Change risk: Breaking change (×3) | Backward compatibility (×2) | Safe (×1)
Cost Conversion:
├─ Time cost: Debt time × Developer hourly rate
├─ Opportunity cost: New feature delay days × Daily revenue impact
├─ Quality cost: Bug probability × Fix cost × Frequency
└─ Total cost: Time + Opportunity + Quality costs
```
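The core formula can be sketched directly; the factor values in the example are taken from the UserService evaluation later in this document, and the hourly rate is an assumption:

```javascript
// Technical Debt (time) = Complexity Score × Impact Range × Fix Difficulty
function technicalDebtHours(complexityScore, impactRange, fixDifficulty) {
  return complexityScore * impactRange * fixDifficulty;
}

// Cost conversion: time cost plus opportunity and quality costs
function totalDebtCost(debtHours, hourlyRate, opportunityCost = 0, qualityCost = 0) {
  return debtHours * hourlyRate + opportunityCost + qualityCost;
}

console.log(technicalDebtHours(15, 8, 2)); // 240 hours
console.log(totalDebtCost(240, 100)); // 24000 - time cost only, at an assumed $100/h
```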
#### Priority Matrix
| Priority | Impact | Fix Cost | Time Savings | Investment ROI | Response Deadline |
| -------------------------- | ------ | -------- | ------------ | --------------------- | ----------------- |
| **Critical (Immediate)** | High | Low | > 5x | Invest 1h → Save 5h+ | Immediately |
| **Important (Planned)** | High | High | 2-5x | Invest 1h → Save 2-5h | Within 1 month |
| **Watch (Monitor)** | Low | High | 1-2x | Invest 1h → Save 1-2h | Within 3 months |
| **Acceptable (Tolerable)** | Low | Low | < 1x | Investment = Savings | No action needed |
### Refactoring Process
1. **Current Analysis and Measurement**
- Measure complexity (cyclomatic & cognitive)
- Calculate SOLID score (0-100 points)
- Quantify technical debt (time/cost)
- Create priority matrix
2. **Step-by-Step Execution**
- Small steps (15-30 minute increments)
- Run tests after each change
- Frequent commits
- Continuous SOLID score measurement
3. **Quality Verification**
- Maintain test coverage
- Measure performance
- Verify technical debt reduction
- Code review
### Common Code Smells and Debt Scores
| Code Smell | Detection Criteria | Debt Score | Improvement Method |
| ----------------------- | -------------------------------- | -------------- | ------------------------ |
| **God Object** | Responsibilities >3, Methods >20 | High (15-20h) | Extract Class, Apply SRP |
| **Long Method** | Lines >50, Complexity >10 | Medium (5-10h) | Extract Method |
| **Duplicate Code** | Duplication rate >30% | High (10-15h) | Extract Method/Class |
| **Large Class** | Lines >300, Methods >15 | High (10-20h) | Extract Class |
| **Long Parameter List** | Parameters >4 | Low (2-5h) | Parameter Object |
| **Feature Envy** | Other class references >5 | Medium (5-10h) | Move Method |
| **Data Clumps** | Repeated argument groups | Low (3-5h) | Extract Class |
| **Primitive Obsession** | Excessive primitive type usage | Medium (5-8h) | Replace with Object |
| **Switch Statements** | Cases >5 | Medium (5-10h) | Strategy Pattern |
| **Shotgun Surgery** | Change impact areas >3 | High (10-15h) | Move Method/Field |
### Practical Example: SOLID Score Evaluation
```javascript
// Evaluation target: UserService class
class UserService {
constructor(db, cache, logger, emailService) { // 4 dependencies
this.db = db;
this.cache = cache;
this.logger = logger;
this.emailService = emailService;
}
// Responsibility 1: Authentication
authenticate(username, password) { /* ... */ }
refreshToken(token) { /* ... */ }
// Responsibility 2: User management
createUser(data) { /* ... */ }
updateUser(id, data) { /* ... */ }
deleteUser(id) { /* ... */ }
// Responsibility 3: Notifications
sendWelcomeEmail(user) { /* ... */ }
sendPasswordReset(email) { /* ... */ }
}
// SOLID Score Evaluation Result
// S: 10 points (3 responsibilities: auth, CRUD, notifications)
// O: 5 points (No extension points, direct implementation)
// L: 15 points (No inheritance, not applicable)
// I: 10 points (Interfaces not segregated)
// D: 10 points (Depends on concrete classes)
// Total: 50 points (Fair - Refactoring recommended)
// Technical Debt
// Complexity: 15 (7 methods, 3 responsibilities)
// Impact Range: 8 (Authentication used across all features)
// Fix Difficulty: 2 (Tests exist, documentation lacking)
// Debt Time: 15 × 8 × 2 = 240 hours
// Priority: Critical (Auth system requires immediate attention)
```
### Improved Implementation Example
```typescript
// After applying SOLID principles (Score: 90 points)
// S: Single Responsibility (20 points)
class AuthenticationService {
authenticate(credentials) { /* ... */ }
refreshToken(token) { /* ... */ }
}
// O: Open/Closed (20 points)
class UserRepository {
constructor(storage) { // Strategy Pattern
this.storage = storage;
}
save(user) { return this.storage.save(user); }
}
// I: Interface Segregation (20 points)
interface Readable {
find(id);
findAll();
}
interface Writable {
save(entity);
delete(id);
}
// D: Dependency Inversion (20 points)
class UserService {
constructor(
private auth: IAuthService,
private repo: IUserRepository,
private notifier: INotificationService
) {}
}
// Debt reduction: 240 hours → 20 hours (92% reduction)
```
### Automation Support
```bash
# SOLID score measurement
npx solid-analyzer src/ --output report.json
# Complexity analysis
npx complexity-report src/ --format json
sonar-scanner -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info
# Technical debt visualization
npx code-debt-analyzer --config .debt.yml
# Code formatting
npm run lint:fix
prettier --write src/
# Test execution and coverage
npm test -- --coverage
npm run test:mutation # Mutation testing
```
### Important Rules
- **No functional changes**: Don't alter external behavior
- **Test first**: Add tests before refactoring
- **Step-by-step approach**: No large changes at once
- **Continuous verification**: Run tests at each step

commands/role-debate.md
## Role Debate
A command that has roles with different expertise debate a topic and weigh trade-offs in order to arrive at the best workable solution.
### Usage
```bash
/role-debate <Role 1>,<Role 2> [Topic]
/role-debate <Role 1>,<Role 2>,<Role 3> [Topic]
```
### Basic Examples
```bash
# Security vs Performance trade-off
/role-debate security,performance
"JWT Token Expiry Setting"
# Usability vs Security balance
/role-debate frontend,security
"2-Factor Authentication UX Optimization"
# Technology selection discussion
/role-debate architect,mobile
"React Native vs Flutter Selection"
# Three-party debate
/role-debate architect,security,performance
"Pros and Cons of Microservices"
```
### Basic Principles of Debate
#### Constructive Debate Guidelines
- **Mutual Respect**: Respect the expertise and perspectives of other roles
- **Fact-Based**: Debate based on data and evidence, not emotional reactions
- **Solution-Oriented**: Aim for better solutions rather than criticizing for criticism's sake
- **Implementation-Focused**: Consider feasibility rather than idealism
#### Quality Requirements for Arguments
- **Official Documentation**: Reference standards, guidelines, and official documentation
- **Empirical Cases**: Specific citations of success or failure cases
- **Quantitative Evaluation**: Comparisons using numbers and metrics whenever possible
- **Time-Series Consideration**: Evaluation of short-term, medium-term, and long-term impacts
#### Debate Ethics
- **Honesty**: Acknowledge the limits of your expertise
- **Openness**: Flexibility toward new information and perspectives
- **Transparency**: Explicitly state judgment grounds and assumptions
- **Accountability**: Mention implementation risks of proposals
### Debate Process
### Phase 1: Initial Position Statement
Each role independently expresses opinions from their professional perspective
- Presentation of recommendations
- Explicit citation of standards and documents as grounds
- Explanation of anticipated risks and issues
- Definition of success metrics
### Phase 2: Mutual Discussion & Rebuttal
Cross-discussion between roles
- Constructive rebuttal of other roles' proposals
- Identification of overlooked perspectives
- Clarification of trade-offs
- Presentation of alternatives
### Phase 3: Search for Compromises
Exploration of implementable solutions
- Evaluation of the importance of each perspective
- Consideration of win-win solutions
- Step-by-step implementation approach
- Consideration of risk mitigation measures
### Phase 4: Integrated Conclusion
Determining final recommendations
- Agreed-upon solution
- Implementation roadmap
- Success metrics and measurement methods
- Future review points
### Output Format Examples
### For 2-Role Debate
```text
Role Debate: Security vs Performance
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Topic: JWT Token Expiry Setting
Security Role's Argument:
"15-minute short expiry recommended"
Grounds:
- Compliance with OWASP JWT Security Cheat Sheet
- Minimizing damage window in case of token leakage
- Limiting attacker available time
Concerns:
- Long expiry exponentially increases attack risk
- Short expiry mandatory for financial compliance
Success Metrics:
- Security incident rate < 0.1%
- Average attack detection time < 5 minutes
Performance Role's Rebuttal:
"2-hour expiry recommended"
Grounds:
- Reference to Google OAuth 2.0 Best Practices
- Avoiding increased server load from frequent reauthentication
- Minimizing user experience disruption
Concerns:
- 15-minute reauthentication increases API load by 8x
- Frequent disconnections in mobile environments
Success Metrics:
- Maintain API response time < 200ms
- Server CPU usage < 60%
Mutual Discussion:
Security → Performance:
"Business losses from security breaches are greater than server load.
Example: Equifax breach cost $700 million"
Performance → Security:
"Both can be achieved with refresh token mechanism.
Background updates ensure security without compromising UX"
Security → Performance:
"Refresh tokens are also attack targets. Proper implementation is prerequisite"
Performance → Security:
"Propose phased approach. 30 minutes for normal operations, 15 minutes for sensitive operations"
Compromise Search:
Common Understanding:
- Need to balance user experience and security
- Flexible response based on risk level
- Practical consideration of implementation and operational costs
Win-Win Elements:
- Utilization of refresh token mechanism
- Phased introduction of risk-based authentication
- Complementation with auto-logout function
Integrated Conclusion:
"30-minute expiry + refresh token + risk-based authentication"
Implementation Details:
1. Access token: 30-minute expiry
2. Refresh token: 7-day expiry
3. High-risk operations: Enforce reauthentication every 15 minutes
4. Auto-logout after 30 minutes of inactivity
Phased Implementation:
Weeks 1-2: Basic 30-minute token implementation
Weeks 3-4: Add refresh token mechanism
Month 2: Introduce risk-based authentication
Success Metrics:
- Security: Incident rate < 0.1%
- Performance: API load increase < 20%
- UX: User satisfaction > 85%
Future Review:
- After 3 months: Evaluate actual attack patterns and load conditions
- After 6 months: Consider migration to more sophisticated risk-based authentication
```
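The integrated conclusion above can be sketched as a small policy check. The durations follow the agreed values; the helper names and session shape are illustrative, not from any specific auth library.

```javascript
const MINUTE = 60 * 1000;
const POLICY = {
  accessTokenTtl: 30 * MINUTE,          // access token: 30-minute expiry
  refreshTokenTtl: 7 * 24 * 60 * MINUTE, // refresh token: 7-day expiry
  highRiskReauthTtl: 15 * MINUTE,       // high-risk operations: 15-minute window
};

function isExpired(issuedAt, ttl, now = Date.now()) {
  return now - issuedAt >= ttl;
}

function needsReauth(session, { highRisk = false, now = Date.now() } = {}) {
  const ttl = highRisk ? POLICY.highRiskReauthTtl : POLICY.accessTokenTtl;
  return isExpired(session.lastAuthAt, ttl, now);
}

const session = { lastAuthAt: Date.now() - 20 * MINUTE };
console.log(needsReauth(session)); // false - within the 30-minute window
console.log(needsReauth(session, { highRisk: true })); // true - past the 15-minute window
```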
### For 3-Role Debate
```text
Role Debate: Architect vs Security vs Performance
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Topic: Pros and Cons of Microservices
Architect Role's Argument:
"Phased microservices recommended"
Grounds: Clear domain boundaries, independent deployment, freedom in technology selection
Security Role's Concerns:
"Security complexity of inter-service communication"
Grounds: Management costs of API Gateway, mTLS, distributed authentication
Performance Role's Concerns:
"Latency increase due to network communication"
Grounds: N+1 problem from internal API calls, distributed transactions
Three-Party Discussion:
Architect → Security: "Can be controlled through centralized API Gateway management"
Security → Architect: "Risk of single point of failure"
Performance → Architect: "Service division granularity is important"
...(discussion continues)
Integrated Conclusion:
"Domain-driven design for phased division + security-first design"
```
### Effective Debate Patterns
### Technology Selection
```bash
/role-debate architect,performance
"Database Selection: PostgreSQL vs MongoDB"
/role-debate frontend,mobile
"UI Framework: React vs Vue"
/role-debate security,architect
"Authentication Method: JWT vs Session Cookie"
```
### Design Decisions
```bash
/role-debate security,frontend
"User Authentication UX Design"
/role-debate performance,mobile
"Data Synchronization Strategy Optimization"
/role-debate architect,qa
"Test Strategy and Architecture Design"
```
### Trade-off Issues
```bash
/role-debate security,performance
"Encryption Level vs Processing Speed"
/role-debate frontend,performance
"Rich UI vs Page Loading Speed"
/role-debate mobile,security
"Convenience vs Data Protection Level"
```
### Role-Specific Debate Characteristics
#### 🔒 Security Role
```yaml
debate_stance:
- Conservative approach (risk minimization)
- Compliance-focused (cautious about deviations from standards)
- Worst-case scenario assumption (attacker perspective)
- Long-term impact focus (security as technical debt)
typical_issues:
- "Security vs Convenience" trade-offs
- "Mandatory compliance requirements"
- "Attack cost vs Defense cost comparison"
- "Thorough privacy protection"
evidence_sources:
- OWASP guidelines
- NIST frameworks
- Industry standards (ISO 27001, SOC 2)
- Actual attack cases and statistics
debate_strengths:
- Precision in risk assessment
- Knowledge of regulatory requirements
- Understanding of attack methods
potential_biases:
- Excessive conservatism (inhibiting innovation)
- Insufficient UX consideration
- Downplaying implementation costs
```
#### ⚡ Performance Role
```yaml
debate_stance:
- Data-driven decisions (measurement-based)
- Efficiency-focused (optimizing cost-effectiveness)
- User experience priority (perceived speed focus)
- Continuous improvement (phased optimization)
typical_issues:
- "Performance vs Security"
- "Optimization cost vs effectiveness ROI"
- "Current vs future scalability"
- "User experience vs system efficiency"
evidence_sources:
- Core Web Vitals metrics
- Benchmark results and statistics
- Impact data on user behavior
- Industry performance standards
debate_strengths:
- Quantitative evaluation ability
- Bottleneck identification
- Knowledge of optimization techniques
potential_biases:
- Downplaying security
- Insufficient maintainability consideration
- Premature optimization
```
#### 🏗️ Architect Role
```yaml
debate_stance:
- Long-term perspective (consideration for system evolution)
- Balance pursuit (overall optimization)
- Phased changes (risk management)
- Standard compliance (preference for proven patterns)
typical_issues:
- "Short-term efficiency vs long-term maintainability"
- "Technical debt vs development speed"
- "Microservices vs monolith"
- "New technology adoption vs stability"
evidence_sources:
- Architecture pattern collections
- Design principles (SOLID, DDD)
- Large-scale system cases
- Technology evolution trends
debate_strengths:
- Overall perspective ability
- Knowledge of design patterns
- Prediction of long-term impacts
potential_biases:
- Excessive generalization
- Conservatism toward new technologies
- Insufficient understanding of implementation details
```
#### 🎨 Frontend Role
```yaml
debate_stance:
- User-centered design (UX first priority)
- Inclusive approach (diversity consideration)
- Intuitiveness focus (minimizing learning costs)
- Accessibility standards (WCAG compliance)
typical_issues:
- "Usability vs Security"
- "Design consistency vs platform optimization"
- "Functionality vs simplicity"
- "Performance vs rich experience"
evidence_sources:
- UX research and usability test results
- Accessibility guidelines
- Design system standards
- User behavior data
debate_strengths:
- Representation of user perspective
- Knowledge of design principles
- Accessibility requirements
potential_biases:
- Insufficient understanding of technical constraints
- Downplaying security requirements
- Underestimation of performance impact
```
#### 📱 Mobile Role
```yaml
debate_stance:
- Platform specialization (considering iOS/Android differences)
- Context adaptation (on-the-go, one-handed operation)
- Resource constraints (battery, memory, communication)
- Store compliance (review guidelines)
typical_issues:
- "Native vs cross-platform"
- "Offline support vs real-time synchronization"
- "Battery efficiency vs functionality"
- "Platform unification vs optimization"
evidence_sources:
- iOS HIG / Android Material Design
- App Store / Google Play guidelines
- Mobile UX research
- Device performance statistics
debate_strengths:
- Understanding of mobile-specific constraints
- Knowledge of platform differences
- Touch interface design
potential_biases:
- Insufficient understanding of web platform
- Downplaying server-side constraints
- Insufficient consideration for desktop environment
```
#### 🔍 Analyzer Role
```yaml
debate_stance:
- Evidence-focused (data-first)
- Hypothesis verification (scientific approach)
- Structural thinking (system thinking)
- Bias elimination (objectivity pursuit)
typical_issues:
- "Correlation vs causation"
- "Symptomatic treatment vs root solution"
- "Distinction between hypothesis and fact"
- "Short-term symptoms vs structural problems"
evidence_sources:
- Measured data and log analysis
- Statistical methods and analysis results
- System thinking theory
- Cognitive bias research
debate_strengths:
- Logical analysis ability
- Objectivity in evidence evaluation
- Discovery of structural problems
potential_biases:
- Analysis paralysis (insufficient action)
- Perfectionism (downplaying practicality)
- Data absolutism
```
### Debate Progression Templates
#### Phase 1: Position Statement Template
```text
[Role Name]'s Recommendation:
"[Specific proposal]"
Grounds:
- [Reference to official documents/standards]
- [Empirical cases/data]
- [Professional field principles]
Expected Effects:
- [Short-term effects]
- [Medium to long-term effects]
Concerns/Risks:
- [Implementation risks]
- [Operational risks]
- [Impacts on other fields]
Success Metrics:
- [Measurable metric 1]
- [Measurable metric 2]
```
#### Phase 2: Rebuttal Template
```text
Rebuttal to [Target Role]:
"[Specific rebuttal to target proposal]"
Rebuttal Grounds:
- [Overlooked perspectives]
- [Contradictory evidence/cases]
- [Concerns from professional field]
Alternative Proposal:
"[Improved proposal]"
Compromise Points:
- [Acceptable conditions]
- [Possibility of phased implementation]
```
#### Phase 3: Integrated Solution Template
```text
Integrated Solution:
"[Final proposal considering all roles' concerns]"
Considerations for Each Role:
- [Security]: [How security requirements are met]
- [Performance]: [How performance requirements are met]
- [Others]: [How other requirements are met]
Implementation Roadmap:
- Phase 1 (Immediate): [Urgent response items]
- Phase 2 (Short-term): [Basic implementation]
- Phase 3 (Medium-term): [Complete implementation]
Success Metrics & Measurement Methods:
- [Integrated success metrics]
- [Measurement methods/frequency]
- [Review timing]
```
### Debate Quality Checklist
#### Evidence Quality
- [ ] References to official documents/standards
- [ ] Specific cases/data presented
- [ ] Distinction between speculation and fact
- [ ] Sources explicitly stated
#### Debate Constructiveness
- [ ] Accurate understanding of opponent's proposals
- [ ] Logical rather than emotional rebuttal
- [ ] Alternatives also presented
- [ ] Exploration of win-win possibilities
#### Implementation Feasibility
- [ ] Technical feasibility considered
- [ ] Implementation costs/duration estimated
- [ ] Phased implementation possibility considered
- [ ] Risk mitigation measures presented
#### Integration
- [ ] Impacts on other fields considered
- [ ] Pursuit of overall optimization
- [ ] Long-term perspective included
- [ ] Measurable success metrics set
### Collaboration with Claude
```bash
# Debate based on design documents
cat system-design.md
/role-debate architect,security
"Discuss security issues in this design"
# Solution debate based on problems
cat performance-issues.md
/role-debate performance,architect
"Discuss fundamental solutions to performance issues"
# Technology selection debate based on requirements
/role-debate mobile,frontend
"Discuss unified UI strategy for iOS, Android, and Web"
```
### Notes
- Debates may take time (longer for complex topics)
- With 3+ roles, discussions may diverge
- Final decisions should be made by users referencing debate results
- For urgent issues, consider single role or multi-role first

commands/role-help.md
## Role Help
A selection guide and help system when you're unsure which role to use.
### Usage
```bash
/role-help # General role selection guide
/role-help <situation/problem> # Recommended roles for specific situations
/role-help compare <Role 1>,<Role 2> # Compare roles
```
### Basic Examples
```bash
# General guidance
/role-help
→ List of available roles and their characteristics
# Situation-specific recommendation
/role-help "Concerned about API security"
→ Recommendation and usage of security role
# Role comparison
/role-help compare frontend,mobile
→ Differences and appropriate usage between frontend and mobile roles
```
### Situation-Based Role Selection Guide
### 🔒 Security-Related
```text
Use security role for:
✅ Implementation of login/authentication functions
✅ Security vulnerability checks for APIs
✅ Data encryption and privacy protection
✅ Security compliance verification
✅ Penetration testing
Usage: /role security
```
### 🏗️ Architecture & Design
```text
Use architect role for:
✅ Evaluation of overall system design
✅ Microservices vs monolith decisions
✅ Database design and technology selection
✅ Scalability and extensibility considerations
✅ Technical debt assessment and improvement planning
Usage: /role architect
```
### ⚡ Performance Issues
```text
Use performance role for:
✅ Slow applications
✅ Database query optimization
✅ Web page loading speed improvement
✅ Memory and CPU usage optimization
✅ Scaling and load countermeasures
Usage: /role performance
```
### 🔍 Problem Root Cause Investigation
```text
Use analyzer role for:
✅ Root cause analysis of bugs and errors
✅ Investigation of system failures
✅ Structural analysis of complex problems
✅ Data analysis and statistical research
✅ Understanding why problems occur
Usage: /role analyzer
```
### 🎨 Frontend & UI/UX
```text
Use frontend role for:
✅ User interface improvements
✅ Accessibility compliance
✅ Responsive design
✅ Usability and ease of use enhancement
✅ General web frontend technologies
Usage: /role frontend
```
### 📱 Mobile App Development
```text
Use mobile role for:
✅ iOS and Android app optimization
✅ Mobile-specific UX design
✅ Touch interface optimization
✅ Offline support and synchronization functions
✅ App Store and Google Play compliance
Usage: /role mobile
```
### 👀 Code Review & Quality
```text
Use reviewer role for:
✅ Code quality checks
✅ Readability and maintainability evaluation
✅ Coding convention verification
✅ Refactoring proposals
✅ PR and commit reviews
Usage: /role reviewer
```
### 🧪 Testing & Quality Assurance
```text
Use qa role for:
✅ Test strategy planning
✅ Test coverage evaluation
✅ Automated test implementation guidelines
✅ Bug prevention and quality improvement measures
✅ Test automation in CI/CD
Usage: /role qa
```
### When Multiple Roles Are Needed
### 🔄 multi-role (Parallel Analysis)
```text
Use multi-role for:
✅ Evaluation from multiple professional perspectives
✅ Creating integrated improvement plans
✅ Comparing evaluations from different fields
✅ Organizing contradictions and overlaps
Example: /multi-role security,performance
```
### 🗣️ role-debate (Discussion)
```text
Use role-debate for:
✅ Trade-offs between specialized fields
✅ Divided opinions on technology selection
✅ Making design decisions through discussion
✅ Hearing debates from different perspectives
Example: /role-debate security,performance
```
### 🤖 smart-review (Automatic Proposal)
```text
Use smart-review for:
✅ Uncertainty about which role to use
✅ Wanting to know the optimal approach for current situation
✅ Choosing from multiple options
✅ Beginner indecision
Example: /smart-review
```
### Role Comparison Table
### Security Category
| Role | Main Use | Strengths | Weaknesses |
| -------- | ---------------------------------------- | -------------------------------------- | ------------------------------------ |
| security | Vulnerability and attack countermeasures | Threat analysis, authentication design | UX, performance |
| analyzer | Root cause analysis | Logical analysis, evidence collection | Preventive measures, future planning |
### Design Category
| Role | Main Use | Strengths | Weaknesses |
| --------- | ------------- | ------------------------------------------- | --------------------------------------------- |
| architect | System design | Long-term perspective, overall optimization | Detailed implementation, short-term solutions |
| reviewer | Code quality | Implementation level, maintainability | Business requirements, UX |
### Performance Category
| Role | Main Use | Strengths | Weaknesses |
| ----------- | ---------------------------------- | -------------------------------------- | -------------------- |
| performance | Speed improvement and optimization | Measurement, bottleneck identification | Security, UX |
| qa | Quality assurance | Testing, automation | Design, architecture |
### User Experience Category
| Role | Main Use | Strengths | Weaknesses |
| -------- | --------- | ---------------------- | ---------------- |
| frontend | Web UI/UX | Browser, accessibility | Server-side, DB |
| mobile | Mobile UX | Touch, offline support | Server-side, Web |
### Decision Flowchart When Unsure
```text
What is the nature of the problem?
├─ Security-related → security
├─ Performance issues → performance
├─ Bug/failure investigation → analyzer
├─ UI/UX improvement → frontend or mobile
├─ Design/architecture → architect
├─ Code quality → reviewer
├─ Testing-related → qa
└─ Complex/composite → smart-review for proposal
Spans multiple fields?
├─ Want integrated analysis → multi-role
├─ Discussion/trade-offs → role-debate
└─ Unsure → smart-review
```
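As a rough illustration, the routing in the flowchart above can be sketched as a keyword lookup. The helper below (`suggest_role`) is hypothetical and far cruder than the real commands; it only mirrors the branching shown:

```python
# Hypothetical sketch of the decision flowchart above: map problem
# keywords to a suggested role. The real commands perform much richer
# analysis; this only illustrates the routing logic.
ROLE_KEYWORDS = {
    "security": ["vulnerability", "attack", "auth"],
    "performance": ["slow", "latency", "bottleneck"],
    "analyzer": ["bug", "failure", "root cause"],
    "frontend": ["ui", "ux", "browser"],
    "architect": ["design", "architecture"],
    "reviewer": ["code quality", "readability"],
    "qa": ["test", "coverage"],
}

def suggest_role(problem: str) -> str:
    problem = problem.lower()
    hits = [role for role, words in ROLE_KEYWORDS.items()
            if any(w in problem for w in words)]
    if len(hits) > 1:
        # Spans multiple fields -> integrated analysis
        return "multi-role " + ",".join(hits)
    # Unsure -> fall back to smart-review, as the flowchart recommends
    return hits[0] if hits else "smart-review"
```

For example, a problem description mentioning both slowness and authentication would route to `multi-role security,performance`.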
### Frequently Asked Questions
### Q: What's the difference between frontend and mobile roles?
```text
A:
frontend: Web browser-focused, HTML/CSS/JavaScript
mobile: Mobile app-focused, iOS/Android native, React Native, etc.
For issues related to both, multi-role frontend,mobile is recommended
```
### Q: How to choose between security and analyzer roles?
```text
A:
security: Prevention of attacks and threats, security design
analyzer: Analysis of causes of existing problems, investigation
For security incident investigations, use multi-role security,analyzer
```
### Q: What's the difference between architect and performance roles?
```text
A:
architect: Long-term design of entire systems, scalability
performance: Specific speed and efficiency improvements
For performance design of large-scale systems, use multi-role architect,performance
```
### Collaboration with Claude
```bash
# Combined with situation description
/role-help
"React app page loading is slow, receiving complaints from users"
# Combined with file content
cat problem-description.md
/role-help
"Recommend the most suitable role for this problem"
# When unsure between specific options
/role-help compare security,performance
"Which role is appropriate for JWT token expiration issues?"
```
### Notes
- For complex problems, combining multiple roles is more effective
- For urgent matters, use single role for quick response
- When unsure, it's recommended to use smart-review for automatic proposals
- The final decision should be made by the user considering the nature of the problem

commands/role.md
## Role
Switch to a specific role to perform specialized analysis or work.
### Usage
```bash
/role <role_name> [--agent|-a]
```
### Options
- `--agent` or `-a`: Execute as a sub-agent (recommended for large-scale analysis)
- When using this option, if the role description includes proactive delegation phrases (such as "use PROACTIVELY"), more proactive automatic delegation will be enabled
### Available Roles
#### Specialized Analysis Roles (Evidence-First Integrated)
- `security`: Security audit expert (OWASP Top 10, threat modeling, Zero Trust principles, CVE matching)
- `performance`: Performance optimization expert (Core Web Vitals, RAIL model, phased optimization, ROI analysis)
- `analyzer`: Root cause analysis expert (5 Whys, systems thinking, hypothesis-driven, cognitive bias countermeasures)
- `frontend`: Frontend and UI/UX expert (WCAG 2.1, design systems, user-centered design)
- `mobile`: Mobile development expert (iOS HIG, Android Material Design, cross-platform strategy)
- `backend`: Backend and server-side expert (RESTful design, scalability, database optimization)
#### Development Support Roles
- `reviewer`: Code review expert (readability, maintainability, performance, refactoring proposals)
- `architect`: System architect (Evidence-First design, MECE analysis, evolutionary architecture)
- `qa`: Test engineer (test coverage, E2E/integration/unit strategy, automation proposals)
### Basic Examples
```bash
# Switch to security audit mode (normal)
/role security
"Check the security vulnerabilities of this project"
# Run security audit as a sub-agent (large-scale analysis)
/role security --agent
"Perform a security audit of the entire project"
# Switch to code review mode
/role reviewer
"Review recent changes and point out improvements"
# Switch to performance optimization mode
/role performance
"Analyze the bottlenecks of the application"
# Switch to root cause analysis mode
/role analyzer
"Investigate the root cause of this failure"
# Switch to frontend specialist mode
/role frontend
"Evaluate UI/UX improvements"
# Switch to mobile development specialist mode
/role mobile
"Evaluate mobile optimization of this app"
# Return to normal mode
/role default
"Return to normal Claude"
```
### Collaboration with Claude
```bash
# Security-specific analysis
/role security
cat app.js
"Analyze potential security risks in this code in detail"
# Architecture evaluation
/role architect
ls -la src/
"Present problems and improvements for the current structure"
# Test strategy planning
/role qa
"Propose the optimal test strategy for this project"
```
### Detailed Examples
```bash
# Analysis with multiple roles
/role security
"First check from a security perspective"
git diff HEAD~1
/role reviewer
"Next review general code quality"
/role architect
"Finally evaluate from an architectural perspective"
# Role-specific output format
/role security
Security Analysis Results
━━━━━━━━━━━━━━━━━━━━━
Vulnerability: SQL Injection
Severity: High
Location: db.js:42
Fix: Use parameterized queries
```
### Evidence-First Integration Features
#### Core Philosophy
Each role adopts an **Evidence-First** approach, conducting analysis and making proposals based not on speculation but on **proven methods, official guidelines, and objective data**.
#### Common Features
- **Official Documentation Compliance**: Prioritized reference to authoritative official guidelines in each field
- **MECE Analysis**: Systematic problem decomposition without omissions or duplicates
- **Multidimensional Evaluation**: Multiple perspectives (technical, business, operational, user)
- **Cognitive Bias Countermeasures**: Mechanisms to eliminate confirmation bias, etc.
- **Discussion Characteristics**: Role-specific professional discussion stances
### Details of Specialized Analysis Roles
#### security (Security Audit Expert)
**Evidence-Based Security Audit**
- Systematic evaluation according to OWASP Top 10, Testing Guide, and SAMM
- Known vulnerability checks through CVE and NVD database matching
- Threat modeling using STRIDE, Attack Tree, and PASTA
- Design evaluation based on Zero Trust principles and least privilege
**Professional Report Format**
```text
Evidence-Based Security Audit Results
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
OWASP Top 10 Compliance: XX% / CVE Matching: Completed
Threat Modeling: STRIDE Analysis Completed
```
#### performance (Performance Optimization Expert)
**Evidence-First Performance Optimization**
- Compliance with Core Web Vitals (LCP, FID, CLS) and RAIL model
- Implementation of Google PageSpeed Insights recommendations
- Phased optimization process (measurement → analysis → prioritization → implementation)
- Quantitative evaluation of ROI through analysis
**Professional Report Format**
```text
Evidence-First Performance Analysis
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Core Web Vitals: LCP[XXXms] FID[XXXms] CLS[X.XX]
Performance Budget: XX% / ROI Analysis: XX% Improvement Prediction
```
#### analyzer (Root Cause Analysis Expert)
**Evidence-First Root Cause Analysis**
- Extended 5 Whys method (including examination of counter-evidence)
- Structural analysis through systems thinking (Peter Senge principles)
- Cognitive bias countermeasures (elimination of confirmation bias, anchoring, etc.)
- Thorough hypothesis-driven analysis (parallel verification of multiple hypotheses)
**Professional Report Format**
```text
Evidence-First Root Cause Analysis
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Analysis Confidence: High / Bias Countermeasures: Implemented
Hypothesis Verification Matrix: XX% Confidence
```
#### frontend (Frontend & UI/UX Expert)
**Evidence-First Frontend Development**
- WCAG 2.1 accessibility compliance
- Material Design and iOS HIG official guidelines compliance
- User-centered design and design system standard application
- Verification through A/B testing and user behavior analysis
### Details of Development Support Roles
#### reviewer (Code Review Expert)
- Multidimensional evaluation of readability, maintainability, and performance
- Coding convention compliance checks and refactoring proposals
- Cross-cutting confirmation of security and accessibility
#### architect (System Architect)
- Evidence-First design principles and MECE analysis for phased thinking
- Evolutionary architecture and multi-perspective evaluation (technical, business, operational, user)
- Reference to official architecture patterns and best practices
#### qa (Test Engineer)
- Test coverage analysis and E2E/integration/unit test strategies
- Test automation proposals and quality metrics design
#### mobile (Mobile Development Expert)
- iOS HIG and Android Material Design official guidelines compliance
- Cross-platform strategy and Touch-First design
- Store review guidelines and mobile-specific UX optimization
#### backend (Backend and Server-Side Expert)
- RESTful/GraphQL API design, domain-driven design, clean architecture
- Scalability, fault tolerance, performance optimization
- Database optimization, caching strategies, reliability improvements
### Role-Specific Discussion Characteristics
Each role has unique discussion stances, evidence sources, and strengths according to their specialized field.
#### security Role Discussion Characteristics
- **Stance**: Conservative approach, risk minimization priority, worst-case scenario assumption
- **Evidence**: OWASP guidelines, NIST frameworks, actual attack cases
- **Strengths**: Precision in risk assessment, deep knowledge of regulatory requirements, comprehensive understanding of attack methods
- **Caution**: Excessive conservatism, insufficient UX consideration, downplaying implementation costs
#### performance Role Discussion Characteristics
- **Stance**: Data-driven decisions, efficiency focus, user experience priority, continuous improvement
- **Evidence**: Core Web Vitals, benchmark results, user behavior data, industry standards
- **Strengths**: Quantitative evaluation ability, precision in bottleneck identification, ROI analysis
- **Caution**: Downplaying security, insufficient maintainability consideration, overemphasis on measurement
#### analyzer Role Discussion Characteristics
- **Stance**: Evidence-focused, hypothesis verification, structural thinking, bias elimination
- **Evidence**: Measured data, statistical methods, systems thinking theory, cognitive bias research
- **Strengths**: Logical analysis ability, objectivity in evidence evaluation, ability to discover structural problems
- **Caution**: Analysis paralysis, perfectionism, data absolutism, excessive skepticism
#### frontend Role Discussion Characteristics
- **Stance**: User-centered, accessibility-focused, design principle compliance, experience value priority
- **Evidence**: UX research, accessibility standards, design systems, usability testing
- **Strengths**: User perspective, design principles, accessibility, experience design
- **Caution**: Downplaying technical constraints, insufficient performance consideration, implementation complexity
### Effects of Multi-Role Collaboration
Combining roles with different discussion characteristics enables balanced analysis:
#### Typical Collaboration Patterns
- **security + frontend**: Balance between security and usability
- **performance + security**: Balance between speed and safety
- **analyzer + architect**: Integration of problem analysis and structural design
- **reviewer + qa**: Coordination of code quality and test strategy
## Advanced Role Features
### Intelligent Role Selection
- `/smart-review`: Automatic role proposal through project analysis
- `/role-help`: Optimal role selection guide according to the situation
### Multi-Role Collaboration
- `/multi-role <Role 1>,<Role 2>`: Simultaneous analysis by multiple roles
- `/role-debate <Role 1>,<Role 2>`: Discussion between roles
### Usage Examples
#### Automatic Role Proposal
```bash
/smart-review
→ Analyze current situation and propose optimal role
/smart-review src/auth/
→ Recommend security role based on authentication-related files
```
#### Multiple Role Analysis
```bash
/multi-role security,performance
"Evaluate this API from multiple perspectives"
→ Integrated analysis from both security and performance perspectives
/role-debate frontend,security
"Discuss the UX of 2-factor authentication"
→ Discussion from usability and security perspectives
```
#### When Unsure About Role Selection
```bash
/role-help "API is slow and security is also a concern"
→ Propose appropriate approach (multi-role or debate)
/role-help compare frontend,mobile
→ Differences and appropriate usage between frontend and mobile roles
```
## Notes
### About Role Behavior
- When switching roles, Claude's **behavior, priorities, analysis methods, and report formats** become specialized
- Each role prioritizes applying official guidelines and proven methods through an **Evidence-First approach**
- Return to normal mode with `default` (role specialization is removed)
- Roles are only effective within the current session
### Effective Usage Methods
- **Simple problems**: Sufficient specialized analysis with a single role
- **Complex problems**: Multi-perspective analysis with multi-role or role-debate is effective
- **When unsure**: Use smart-review or role-help
- **Continuous improvement**: Even with the same role, analysis accuracy improves with new evidence and methods
### Sub-Agent Function (--agent Option)
For large-scale analysis or independent specialized processing, you can run a role as a sub-agent using the `--agent` option.
#### Benefits
- **Independent context**: Does not interfere with main conversation
- **Parallel execution**: Multiple analyses can be performed simultaneously
- **Specialized expertise**: Deeper analysis and detailed reports
- **Promotion of automatic delegation**: When role descriptions include "use PROACTIVELY" or "MUST BE USED", more proactive automatic delegation is enabled
#### Recommended Usage Scenarios
```bash
# Security: OWASP full-item check, CVE matching
/role security --agent
"Security audit of entire codebase"
# Analyst: Root cause analysis of large logs
/role analyzer --agent
"Analyze error logs from the past week"
# Reviewer: Detailed review of large PR
/role reviewer --agent
"Review 1000-line changes in PR #500"
```
#### Normal Role vs Sub-Agent
| Situation | Recommendation | Command |
| -------------------- | -------------- | ------------------------ |
| Simple confirmation | Normal role | `/role security` |
| Large-scale analysis | Sub-agent | `/role security --agent` |
| Interactive work | Normal role | `/role frontend` |
| Independent audit | Sub-agent | `/role qa --agent` |
### Role Configuration Details
- Detailed settings, expertise, and discussion characteristics of each role are defined in the `.claude/agents/roles/` directory
- Includes Evidence-First methods and cognitive bias countermeasures
- Specialized mode is automatically enabled with role-specific trigger phrases
- Actual role files consist of over 200 lines of professional content

commands/screenshot.md
## Screenshot
Capture screenshots on macOS and analyze the images.
### Usage
```bash
/screenshot [options]
```
### Options
- None: Claude asks which capture mode to use
- `--window`: Capture a specific window
- `--full`: Capture the entire screen
- `--crop`: Select a region to capture
### Basic Examples
```bash
# Capture and analyze a window
/screenshot --window
"Analyze the captured screen"
# Select a region and analyze
/screenshot --crop
"Explain the content of the selected region"
# Capture full screen and analyze
/screenshot --full
"Analyze the overall screen composition"
```
### Collaboration with Claude
```bash
# No specific problem - situation analysis
/screenshot --crop
(Claude will automatically analyze screen content, explaining elements and composition)
# UI/UX problem analysis
/screenshot --window
"Propose problems and improvements for this UI"
# Error analysis
/screenshot --window
"Tell me the cause and solution for this error message"
# Design review
/screenshot --full
"Evaluate this design from a UX perspective"
# Code analysis
/screenshot --crop
"Point out problems in this code"
# Data visualization analysis
/screenshot --crop
"Analyze trends visible in this graph"
```
### Detailed Examples
```bash
# Analysis from multiple perspectives
/screenshot --window
"Analyze this screen regarding:
1. UI consistency
2. Accessibility issues
3. Improvement proposals"
# Multiple captures for comparative analysis
/screenshot --window
# (Save before image)
# Make changes
/screenshot --window
# (Save after image)
"Compare before and after images, analyzing changes and improvement effects"
# Focus on specific elements
/screenshot --crop
"Evaluate whether the selected button design harmonizes with other elements"
```
### Prohibited Items
- **Never claim a screenshot was captured when none was taken**
- **Never attempt to analyze image files that do not exist**
- **The `/screenshot` command itself does not capture screenshots; the user runs `screencapture`**
### Notes
- If no option is specified, please present the following choices:
```
"How would you like to capture the screenshot?
1. Select window (--window) → screencapture -W
2. Full screen (--full) → screencapture -x
3. Select region (--crop) → screencapture -i"
```
- Start image analysis after the user has executed the screencapture command
- Specifying specific problems or perspectives enables more focused analysis
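The option-to-flag mapping in the choices above can be sketched as follows. This helper is hypothetical; `screencapture` exists only on macOS, and per the notes the user, not Claude, executes the command:

```python
# Hypothetical sketch: map /screenshot options to the macOS
# screencapture flags listed above. This only builds the argv;
# the user is expected to run the resulting command themselves.
FLAGS = {"--window": "-W", "--full": "-x", "--crop": "-i"}

def capture_cmd(option: str, outfile: str = "screen.png") -> list[str]:
    if option not in FLAGS:
        raise ValueError(f"unknown option: {option}")
    return ["screencapture", FLAGS[option], outfile]
```

For instance, `capture_cmd("--crop")` yields the argv for an interactive region capture.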

commands/search-gemini.md
## Gemini Web Search
Execute web searches via Gemini CLI to obtain the latest information.
### Usage
```bash
# Web search via Gemini CLI (required)
gemini --prompt "WebSearch: <search_query>"
```
### Basic Examples
```bash
# Using Gemini CLI
gemini --prompt "WebSearch: React 19 new features"
gemini --prompt "WebSearch: TypeError Cannot read property of undefined solution"
```
### Collaboration with Claude
```bash
# Document search and summarization
gemini --prompt "WebSearch: Next.js 14 App Router official documentation"
"Summarize the search results and explain the main features"
# Error investigation
cat error.log
gemini --prompt "WebSearch: [error_message] solution"
"Propose the most appropriate solution from the search results"
# Technology comparison
gemini --prompt "WebSearch: Rust vs Go performance benchmark 2024"
"Summarize the performance differences from the search results"
```
### Detailed Examples
```bash
# Information gathering from multiple sources
gemini --prompt "WebSearch: GraphQL best practices 2024 multiple sources"
"Summarize information from multiple reliable sources in the search results"
# Investigating changes over time
gemini --prompt "WebSearch: JavaScript ES2015 ES2016 ES2017 ES2018 ES2019 ES2020 ES2021 ES2022 ES2023 ES2024 features"
"Summarize the main changes in each version in chronological order"
# Search limited to specific domain
gemini --prompt "WebSearch: site:github.com Rust WebAssembly projects stars:>1000"
"List the top 10 projects by number of stars"
# Latest security information
gemini --prompt "WebSearch: CVE-2024 Node.js vulnerabilities"
"Summarize the impact and countermeasures of found vulnerabilities"
```
### Prohibited Items
- **Never use Claude's built-in WebSearch tool**
- When a web search is needed, always use `gemini --prompt "WebSearch: ..."`
### Important Notes
- **When Gemini CLI is available, always use `gemini --prompt "WebSearch: ..."`**
- Web search results are not always the latest
- It is recommended to verify important information with official documentation or reliable sources

commands/semantic-commit.md (diff omitted: file too large)
## Sequential Thinking
Solve complex problems step-by-step through a dynamic, iterative thinking process. This flexible approach allows for course corrections and revisions during the thinking process.
### Usage
```bash
# Request Claude to think sequentially
"Analyze [task] using sequential-thinking"
```
### Basic Examples
```bash
# Algorithm design
"Design an efficient caching strategy using sequential-thinking"
# Problem solving
"Solve database performance issues using sequential-thinking"
# Design review
"Examine real-time notification system design using sequential-thinking"
```
### Collaboration with Claude
```bash
# Complex implementation strategy
"Examine authentication system implementation strategy using sequential-thinking. Consider OAuth2, JWT, and session management"
# Bug cause analysis
"Analyze memory leak causes using sequential-thinking. Include code review and profiling results"
# Refactoring strategy
cat src/complex_module.js
"Develop a refactoring strategy for this module using sequential-thinking"
# Technology selection
"Analyze front-end framework selection using sequential-thinking. Consider project requirements and constraints"
```
### Thinking Process
1. **Initial Analysis** - Basic understanding and decomposition of the problem
2. **Hypothesis Generation** - Formulate hypotheses for solutions
3. **Verification and Revision** - Verify hypotheses and revise as needed
4. **Branching and Exploration** - Explore multiple solution paths
5. **Integration and Conclusion** - Derive optimal solution
### Features
- **Dynamic Adjustment** - Ability to change direction during thinking
- **Hypothesis Testing** - Cycle of forming and testing hypotheses
- **Branching Thinking** - Simultaneously explore multiple thought paths
- **Gradual Refinement** - Step-by-step refinement of solutions
- **Flexibility** - Policy changes based on new information
### Detailed Examples
```bash
# Complex system design
"Examine e-commerce site microservice design using sequential-thinking. Include order processing, inventory management, and payment integration"
# Security design
"Examine API security design using sequential-thinking. Include authentication, authorization, rate limiting, and audit logging"
# Performance optimization
"Examine large-scale data processing optimization using sequential-thinking. Consider memory usage, processing speed, and scalability"
# Dependency management
"Examine monorepo dependency management strategy using sequential-thinking. Include build time, deployment, and test execution"
```
### Notes
Sequential-thinking is ideal for complex problems that require deepening thought in stages. For simple questions or those with clear answers, use normal question format.
### Execution Example
```bash
# Usage example
"Examine GraphQL schema design using sequential-thinking"
# Expected behavior
# 1. Initial analysis: Analyze basic requirements for GraphQL schema
# 2. Hypothesis generation: Examine multiple design patterns
# 3. Verification: Verify advantages and disadvantages of each pattern
# 4. Branching: Explore new approaches as needed
# 5. Integration: Propose optimal schema design
```

commands/show-plan.md
## Show Plan
Display the plan currently being executed, or already executed, in this session.
### Usage
```bash
/show-plan
```
### Basic Examples
```bash
# Check current plan
/show-plan
"Display the plan being executed"
# When no plan exists
/show-plan
"There is no plan in the current session"
```
### Features
- Detect plans created with exit_plan_mode
- Search for headings containing keywords like implementation plan, implementation details, plan
- Format and display plan contents
- Clearly notify when no plan exists
### Collaboration with Claude
```bash
# Check plan during implementation
"What was I implementing?"
/show-plan
# When executing multiple tasks
"Let me check the current plan again"
/show-plan
# Review after plan execution
"Show me the plan I executed earlier"
/show-plan
```
### Detection Patterns
Based on the format of plans generated by exit_plan_mode, the following patterns are detected:
- Headings starting with `##` (including Plan, Planning, Strategy)
- `### Changes`
- `### Implementation Details`
- `### Implementation Plan`
- Numbered headings like `### 1.`
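The detection patterns listed above roughly correspond to a heading regex like the following. This is illustrative only; the command's actual matching logic is internal:

```python
import re

# Illustrative sketch of the heading detection described above.
# Mirrors only the documented patterns for plans generated by
# exit_plan_mode; the real matching may differ.
PLAN_HEADING = re.compile(
    r"^#{2,3}\s+("                    # '##' or '###' headings
    r".*(Plan|Planning|Strategy).*"   # e.g. 'Implementation Plan'
    r"|Changes"
    r"|Implementation Details"
    r"|\d+\..*"                       # numbered headings like '### 1.'
    r")\s*$"
)

def is_plan_heading(line: str) -> bool:
    return bool(PLAN_HEADING.match(line))
```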
### Notes
- Only displays plans in the current session (does not include past sessions)
- Displays the latest plan with priority

commands/smart-review.md
## Smart Review
A command that analyzes the current situation and automatically suggests the optimal role and approach.
### Usage
```bash
/smart-review # Analyze current directory
/smart-review <file/directory> # Analyze specific target
```
### Automatic Analysis Logic
### Analysis by File Extension
- `package.json`, `*.tsx`, `*.jsx`, `*.css`, `*.scss` → **frontend**
- `Dockerfile`, `docker-compose.yml`, `*.yaml` → **architect**
- `*.test.js`, `*.spec.ts`, `test/`, `__tests__/` → **qa**
- `*.rs`, `Cargo.toml`, `performance/` → **performance**
### Security-related File Detection
- `auth.js`, `security.yml`, `.env`, `config/auth/` → **security**
- `login.tsx`, `signup.js`, `jwt.js` → **security + frontend**
- `api/auth/`, `middleware/auth/` → **security + architect**
### Complex Analysis Patterns
- `mobile/` + `*.swift`, `*.kt`, `react-native/` → **mobile**
- `webpack.config.js`, `vite.config.js`, `large-dataset/` → **performance**
- `components/` + `responsive.css` → **frontend + mobile**
- `api/` + `auth/` → **security + architect**
### Error/Problem Analysis
- Stack traces, `error.log`, `crash.log` → **analyzer**
- `memory leak`, `high CPU`, `slow query` → **performance + analyzer**
- `SQL injection`, `XSS`, `CSRF` → **security + analyzer**
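The extension-based part of the detection above can be sketched as a pattern table. This helper is hypothetical and covers only the simplest rules; the real `/smart-review` analysis also considers directory context and file contents:

```python
from fnmatch import fnmatch

# Hypothetical sketch of the file-pattern -> role mapping documented
# above (extension-based rules only; combined and content-based
# detection is omitted for brevity).
PATTERN_ROLES = [
    (["package.json", "*.tsx", "*.jsx", "*.css", "*.scss"], "frontend"),
    (["Dockerfile", "docker-compose.yml", "*.yaml"], "architect"),
    (["*.test.js", "*.spec.ts"], "qa"),
    (["*.rs", "Cargo.toml"], "performance"),
    (["auth.js", "security.yml", ".env"], "security"),
]

def roles_for(filename: str) -> list[str]:
    return [role for patterns, role in PATTERN_ROLES
            if any(fnmatch(filename, p) for p in patterns)]
```

An empty result would correspond to falling back to broader project analysis.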
### Suggestion Patterns
### Single Role Suggestion
```bash
$ /smart-review src/auth/login.js
"Authentication file detected"
"Analysis with security role recommended"
"Execute? [y]es / [n]o / [m]ore options"
```
### Multiple Role Suggestion
```bash
$ /smart-review src/mobile/components/
"📱🎨 Mobile + Frontend elements detected"
"Recommended approaches:"
"[1] mobile role alone"
"[2] frontend role alone"
"[3] multi-role mobile,frontend"
"[4] role-debate mobile,frontend"
```
### Suggestions for Problem Analysis
```bash
$ /smart-review error.log
"⚠️ Error log detected"
"Starting root cause analysis with analyzer role"
"[Auto-execute] /role analyzer"
$ /smart-review slow-api.log
"🐌 Performance issue detected"
"Recommended: [1]/role performance [2]/role-debate performance,analyzer"
```
### Suggestions for Complex Design Decisions
```bash
$ /smart-review architecture-design.md
"🏗️🔒⚡ Architecture + Security + Performance elements detected"
"For complex design decisions, debate format recommended"
"[Recommended] /role-debate architect,security,performance"
"[Alternative] /multi-role architect,security,performance"
```
### Suggestion Logic Details
### Priority Assessment
1. **Security** - Authentication, authorization, and encryption are top priorities
2. **Critical Errors** - System outages and data loss are urgent
3. **Architecture** - Large-scale changes and technology selection require careful consideration
4. **Performance** - Directly impacts user experience
5. **Frontend/Mobile** - UI/UX improvements
6. **QA** - Quality assurance and testing
### Conditions for Recommending Debate
- When 3 or more roles are involved
- When there's a trade-off between security and performance
- When significant architectural changes are involved
- When both mobile and web are affected
### Basic Examples
```bash
# Analyze current directory
/smart-review
"Suggest the optimal role and approach"
# Analyze specific file
/smart-review src/auth/login.js
"Suggest the best review method for this file"
# Analyze error log
/smart-review error.log
"Suggest the best approach to resolve this error"
```
### Practical Examples
### Project-wide Analysis
```bash
$ /smart-review
"📊 Analyzing project..."
"React + TypeScript project detected"
"Authentication functionality + API + mobile support confirmed"
""
"💡 Recommended workflow:"
"1. Check authentication with security"
"2. Evaluate UI/UX with frontend"
"3. Confirm mobile optimization with mobile"
"4. Review overall design with architect"
""
"Auto-execute? [y]es / [s]elect role / [c]ustom"
```
### Specific Problem Analysis
```bash
$ /smart-review "How to set JWT expiration time"
"🤔 Technical design decision detected"
"This issue requires multiple expert perspectives"
""
"Recommended approach:"
"/role-debate security,performance,frontend"
"Reason: Balance between security, performance, and UX is important"
```
### Collaboration with Claude
```bash
# Analysis combined with file content
cat src/auth/middleware.js
/smart-review
"Analyze this file from a security perspective"
# Analysis combined with errors
npm run build 2>&1 | tee build-error.log
/smart-review build-error.log
"Suggest ways to resolve build errors"
# Design consultation
/smart-review
"Discuss whether to choose React Native or Progressive Web App"
```
### Notes
- Suggestions are for reference only. The final decision is up to the user
- Debate format (role-debate) is recommended for complex issues
- Single role is often sufficient for simple problems
- Security-related matters should always be confirmed with a specialized role

commands/spec.md
## Spec
**"Give structure before writing code"** - Fully compliant with Kiro's spec-driven development
Unlike traditional code-generation tools, this implements Kiro's specification-driven development, which focuses on bringing structure to development chaos. From minimal requirement input, it progressively develops detailed, product-manager-level specifications into implementable designs, ensuring consistent quality from **prototype to production**.
### Usage
```bash
# Request Spec Mode from Claude (minimal requirement input)
"Create a spec for [feature description]"
# Kiro's step-by-step development:
# 1. Simple requirements → Automatic generation of detailed user stories
# 2. Structured requirement descriptions using EARS notation
# 3. Refinement of specifications through step-by-step dialogue
# 4. Generation of 3 independent files:
# - requirements.md: Requirement definitions using EARS notation
# - design.md: Design including Mermaid diagrams and TypeScript interfaces
# - tasks.md: Implementation plan with automatic application of best practices
```
### Proven Results (Kiro Track Record)
**Secure File Sharing App in 2 Days**
```bash
"Create a spec for a file sharing system (with encryption)"
→ Production-level encrypted file sharing application completed in 2 days
→ Automatic application of security best practices
→ No additional prompts needed
```
**Game Development in One Night (For Beginners)**
```bash
"Create a spec for a 2D puzzle game"
→ Open source developer with no game development experience
→ Game completed in one night
→ Kiro handles implementation logic, allowing developers to focus on creativity
```
**Weekend Prototype→Production**
```bash
"Create a spec for an EC site product management system"
→ Concept to working prototype in one weekend
→ Consistent quality from prototype to production
→ Structured approach through spec-driven development
```
### Basic Examples
```bash
# Create spec for new feature (minimal input)
"Product review system
- Star rating functionality
- Comment posting
- Image upload"
# Create spec for system feature
"User authentication
- OAuth support
- Multi-factor authentication"
# Create spec for API feature
"Payment system API
- Stripe integration
- Security-focused"
```
### Collaboration with Claude
```bash
# Complex feature spec
"Create a spec for chat functionality including WebSocket, real-time notifications, and history management"
# Database integration feature spec
"Create a spec for e-commerce site inventory management, including product addition, inventory updates, and alert functionality"
# Frontend feature spec
"Create a spec for a React dashboard including graph display, filtering, and export functionality"
# Backend feature spec
"Create a spec for a RESTful API including authentication, validation, and logging"
```
### Features of Spec Mode
**Step-by-Step Dialogue Workflow**
- Fully reproduces Kiro's original value of step-by-step discussion
- Refines specifications collaboratively with users at each phase
- Generates files through question resolution, option discussion, and approval processes
**Three Phases of Interactive Development**
- **Phase 1**: Requirements Discovery → Discussion → Approval → `requirements.md` generation
- **Phase 2**: Design Exploration → Discussion → Approval → `design.md` generation
- **Phase 3**: Implementation Planning → Discussion → Approval → `tasks.md` generation
**Dynamic Specification Development**
- Resolves requirement ambiguities through dialogue
- Presents and discusses technical options
- Explains trade-offs and confirms decisions
### Detailed Examples
```bash
# Complex system feature
"Create a spec for an online payment system including Stripe integration, security, and error handling"
# Frontend feature
"Create a spec for a React component including state management, property design, and testing"
# Backend feature
"Create a spec for a GraphQL API including schema design, resolvers, and authentication"
# DevOps feature
"Create a spec for a CI/CD pipeline including test automation, deployment, and monitoring"
```
### Phase-by-Phase Dialogue Workflow
#### Phase 1: Requirements Discovery & Discussion
**1. Automatic Expansion from Minimal Input (Kiro Style)**
```text
User: "User management system
- Login functionality
- Profile editing
- Administrator permissions"
Claude: "I'll generate detailed user stories from your requirements.
Is this understanding correct?
Automatically generated user stories:
- User registration and authentication (OAuth compatible)
- Profile management (avatar, settings)
- Permission management (RBAC)
- Password reset functionality
- Security audit logs
I've added some details you didn't mention. Anything else we need?"
```
**2. Structured Requirements Description Using EARS Notation**
```text
Claude: "Let me structure this using EARS notation:
Here are the requirements in EARS format:
WHEN a user attempts to log in
GIVEN they enter the correct email address and password
THEN the system shall confirm authentication
AND redirect to the dashboard on success
WHILE a user is on the profile editing screen
THE SYSTEM SHALL perform real-time validation
AND display a preview of changes
WHERE a feature requires administrator privileges
IS access attempted
THE SYSTEM SHALL check the permission level
AND deny access if it is insufficient"
```
**3. Automatic Completion of Edge Cases and Security Requirements**
```text
Claude: "Following best practices, I've also thought of these:
Security requirements (auto-added):
- Brute force protection
- Session management and timeouts
- CSRF and XSS protection
- Password strength rules
Edge cases (auto-detected):
- Multiple login handling
- Account lockout
- Data consistency during migration
- High traffic handling
Do these look good?"
```
**4. Phase 1 Completion and Approval**
```text
Claude: "The Requirements Phase is complete.
May I generate requirements.md?"
→ "Proceed" → requirements.md generation
```
#### Phase 2: Design Exploration & Discussion
**1. Codebase Analysis + Automatic Design Proposal (Kiro Style)**
```text
Claude: "After analyzing your codebase, here's my recommended architecture:
Codebase analysis results:
- Authentication: JWT + Express.js (v4.18.x)
- Database: PostgreSQL (v14+)
- Frontend: React + TypeScript (v18+)
- Existing patterns: Repository pattern, service layer separation
Kiro recommended architecture:
- Clean Architecture compliant
- Domain-Driven Design elements
- Partial application of CQRS pattern
- Automated security best practices
Shall we proceed with this design?"
```
**2. Visual Design with Mermaid Diagrams**
```text
Claude: "I've automatically generated a system design using Mermaid diagrams:
Automatically generated architecture diagram:
graph TB
Client[React Client] --> API[Express API Gateway]
API --> Auth[Auth Service]
API --> User[User Service]
API --> Profile[Profile Service]
Auth --> AuthDB[(Auth Database)]
User --> UserDB[(User Database)]
Profile --> ProfileDB[(Profile Database)]
Auth --> JWT[JWT Service]
Auth --> OAuth[OAuth Provider]
Shall I also generate a data flow diagram?"
```
**3. Automatic Generation of TypeScript Interfaces**
```text
Claude: "I'll automatically generate TypeScript interfaces based on the design:
Automatically generated TypeScript interfaces:
// Domain entities
interface User {
readonly id: UserId;
readonly email: Email;
readonly profile: UserProfile;
readonly auth: AuthSettings;
readonly permissions: Permission[];
readonly createdAt: Date;
readonly updatedAt: Date;
}
interface UserProfile {
readonly displayName: string;
readonly avatar: AvatarUrl;
readonly preferences: UserPreferences;
}
interface AuthSettings {
readonly passwordHash: PasswordHash;
readonly mfaEnabled: boolean;
readonly lastLoginAt: Date;
readonly failedAttempts: number;
}
// Use cases
interface UserRepository {
findById(id: UserId): Promise<User | null>;
findByEmail(email: Email): Promise<User | null>;
save(user: User): Promise<void>;
}
interface AuthenticationService {
authenticate(credentials: LoginCredentials): Promise<AuthResult>;
refreshToken(token: RefreshToken): Promise<AccessToken>;
}
Shall we proceed with this interface design?"
```
**4. Phase 2 Completion and Approval**
```text
Claude: "The Design Phase is complete.
May I generate design.md?"
→ "Proceed" → design.md generation
```
#### Phase 3: Implementation Planning & Discussion
**1. Automatic Application of Best Practices (Kiro Style)**
```text
Claude: "I've generated an implementation plan that automatically applies Kiro's best practices:
Automatically generated implementation strategy:
1. Security-first development
- Automatic incorporation of OWASP Top 10 countermeasures
- Automatic implementation of encryption and hashing
- Automatic generation of security tests
2. Enforcement of clean architecture
- Automatic implementation of layer separation
- Automatic application of dependency inversion
- Enforcement of testable design
3. Performance optimization
- Automatic implementation of caching strategy
- Database optimization
- Utilization of asynchronous processing
Shall we proceed with this approach?"
```
**2. Automatic Setting of Quality Checkpoints**
```text
Claude: "I've automatically set quality checkpoints for each implementation phase:
Automatic quality assurance process:
Phase 3a: Backend implementation
- [ ] Security scan (SAST/DAST)
- [ ] API specification test (OpenAPI compliant)
- [ ] Performance test (load/response)
- [ ] Vulnerability scan (dependencies/CVE)
Phase 3b: Frontend implementation
- [ ] Accessibility test (WCAG 2.1 AA)
- [ ] Browser compatibility test
- [ ] Responsive design verification
- [ ] Security header confirmation
Phase 3c: Integration/Deployment
- [ ] E2E test suite
- [ ] CI/CD pipeline setup
- [ ] Monitoring/logging setup
- [ ] Production environment security audit
All will be executed automatically. Are there any additional quality standards?"
```
**3. Automatic Analysis of Dependencies and Risk Mitigation**
```text
Claude: "I've automatically analyzed the implementation order and risk mitigation measures:
Optimized implementation order:
Week 1: Infrastructure and security foundation
- Database design and schema creation
- Authentication foundation (JWT + session management)
- Security middleware implementation
- Basic API endpoints
Week 2: Core functionality implementation
- User management functionality
- Profile management functionality
- Permission management system
- Validation and error handling
Week 3: Advanced features and optimization
- Multi-factor authentication implementation
- Audit log functionality
- Performance optimization
- Frontend integration
Automatically detected risk mitigation measures:
- Multi-factor authentication: Gradual rollout (SMS→app authentication)
- Session management: Redis cluster configuration
- High traffic: Rate limiting + CDN utilization
- Data consistency: Enhanced transaction management
Is this plan acceptable?"
```
**4. Phase 3 Completion and Approval**
```text
Claude: "The Implementation Planning Phase is complete.
May I generate tasks.md?"
→ "Proceed" → tasks.md generation
```
### Kiro-Specific Features
**EARS Notation (Easy Approach to Requirements Syntax)**
```text
# Kiro Standard EARS Notation Patterns
WHEN [situation/trigger]
GIVEN [precondition]
THEN [system behavior]
AND [additional behavior]
WHILE [state/process]
THE SYSTEM SHALL [mandatory behavior]
AND [related behavior]
WHERE [function/component]
IS [condition/state]
THE SYSTEM SHALL [corresponding behavior]
```
**Automatic Generation Features**
- **Mermaid diagrams**: Automatic generation of architecture and data flow diagrams
- **TypeScript interfaces**: Automatic creation of type definitions based on design
- **Best practices**: Automatic incorporation of security and performance measures
- **Quality checkpoints**: Automatic setting of phase-specific quality standards
**Hooks Integration**
- Automatic quality checks on file save
- Automatic application of code standards
- Automatic execution of security scans
- Automatic verification of OWASP Top 10 countermeasures
**Prototype→Production Quality Assurance**
- Consistent design through structured approach
- Enforcement of security-first development
- Automatic application of scalable architecture
- Integration of continuous quality management
### Notes
**Scope of Application**
- Spec Mode is optimized for feature implementation
- Use normal implementation format for simple fixes or small changes
- Recommended for new feature development or complex feature modifications
**Quality Assurance**
- Clarification of completion criteria at each stage
- Design review before implementation
- Comprehensive quality standards including testing and accessibility
**Operational Notes**
- Resolve requirement ambiguities before design phase
- Generate implementation tasks after design completion
- Emphasize approval process at each stage
### Trigger Phrases and Controls
#### Step-by-Step Workflow Control
**Start Triggers**
- "Create a spec for [feature name]"
- "I want to develop [feature name] using spec-driven development"
- "Design [feature name] from specifications"
**Phase Progress Control**
- **"Proceed"**: Complete current phase, generate file, move to next phase
- **"Revise"**: Adjust or improve content within current phase
- **"Restart"**: Restart current phase from beginning
- **"Explain in detail"**: Provide more detailed explanations or options
- **"Skip"**: Skip current phase and move to next (not recommended)
**File Generation Timing**
```text
Phase 1 completion → "Proceed" → requirements.md generation
Phase 2 completion → "Proceed" → design.md generation
Phase 3 completion → "Proceed" → tasks.md generation
```
### Execution Example (Step-by-Step Flow)
```bash
# Usage example
User: "Create a spec for a user management system"
# Phase 1: Requirements Discovery
Claude: [Begins requirement confirmation and discussion]
User: [Responds, discusses, makes revisions]
Claude: "The Requirements Phase is complete. May I proceed?"
User: "Proceed"
→ requirements.md generation
# Phase 2: Design Exploration
Claude: [Begins design proposal and discussion]
User: [Discusses technology selection and architecture]
Claude: "The Design Phase is complete. May I proceed?"
User: "Proceed"
→ design.md generation
# Phase 3: Implementation Planning
Claude: [Begins implementation plan discussion]
User: [Discusses priorities, risks, and effort]
Claude: "The Implementation Phase is complete. May I proceed?"
User: "Proceed"
→ tasks.md generation
# Completion
Claude: "Spec-driven development preparation is complete. You can begin implementation."
```
### Differences from /plan
| Feature | /plan | /spec |
| ---------------------- | ------------------------------ | ------------------------------------------------------------------------ |
| Target | General implementation plan | Feature specification-driven development |
| Output format | Single plan document | 3 independent files (requirements.md, design.md, tasks.md) |
| Requirement definition | Basic requirement organization | Detailed acceptance criteria using EARS notation |
| Design | Technology selection focused | Codebase analysis-based |
| Implementation | General task decomposition | Dependency-aware sequence |
| Quality assurance | Basic test strategy | Comprehensive quality requirements (testing, accessibility, performance) |
| Synchronization | Static plan | Dynamic spec updates |
### Recommended Use Cases
**Recommended for spec use**
- New feature development
- Complex feature modifications
- API design
- Database design
- UI/UX implementation
**Recommended for plan use**
- System-wide design
- Infrastructure construction
- Refactoring
- Technology selection
- Architecture changes

## AI Writing Check
Detects mechanical patterns in AI-generated text and suggests revisions toward more natural Japanese.
### Usage
```bash
/ai-writing-check [options]
```
### Options
- None: Analyze current file or selected text
- `--file <path>`: Analyze specific file
- `--dir <path>`: Batch analyze files in directory
- `--severity <level>`: Detection level (all/high/medium)
- `--fix`: Automatically fix detected patterns
### Basic Examples
```bash
# Check AI writing style in file
cat README.md
/ai-writing-check
"Check this document for AI writing style and suggest improvements"
# Analyze specific file
/ai-writing-check --file docs/guide.md
"Detect AI-like expressions and suggest corrections to natural expressions"
# Scan entire project
/ai-writing-check --dir . --severity high
"Report only critical AI writing issues in the project"
```
### Detected Patterns
#### 1. Mechanical List Format Patterns
```markdown
Examples detected:
- **Important**: This is an important item
- Completed item (with checkmark emoji)
- Hot topic (with fire emoji)
- Ready to start (with rocket emoji)
Improved examples:
- Important item: This is an important item
- Completed item
- Notable topic
- Ready to start
```
#### 2. Exaggerated/Hype Expressions
```markdown
Examples detected:
Revolutionary technology will change the industry.
This completely solves the problem.
Works like magic.
Improved examples:
Effective technology brings change to the industry.
Solves many problems.
Works smoothly.
```
#### 3. Mechanical Emphasis Patterns
```markdown
Examples detected:
**Idea**: New proposal (with lightbulb emoji)
**Caution**: Important warning (with warning emoji)
Improved examples:
Idea: New proposal
Note: Important warning
```
#### 4. Redundant Technical Writing
```markdown
Examples detected:
First, we will perform the setup.
It is possible for you to use this tool.
Performance is greatly improved.
Improved examples:
First, perform setup.
You can use this tool.
Performance improves by 30%.
```
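The pattern categories above can be sketched as simple rule-based matching. This is a minimal illustration only — the real command follows the textlint-rule-preset-ai-writing rule set, and the rule names and regular expressions below are hypothetical:

```typescript
// Hypothetical sketch of rule-based AI-writing detection.
// Rule names and patterns are illustrative; the actual rules come
// from textlint-rule-preset-ai-writing and are more nuanced.
interface Finding {
  line: number; // 1-based line number
  rule: string; // which illustrative rule matched
  text: string; // the offending line
}

const rules: { name: string; pattern: RegExp }[] = [
  // Exaggerated/hype expressions
  { name: "hype-expression", pattern: /\b(revolutionary|like magic|completely solves)\b/i },
  // Mechanical emphasis such as "**Important**: ..."
  { name: "mechanical-emphasis", pattern: /^\s*\*\*(Important|Idea|Caution)\*\*:/ },
];

function detect(markdown: string): Finding[] {
  const findings: Finding[] = [];
  markdown.split("\n").forEach((text, i) => {
    for (const { name, pattern } of rules) {
      if (pattern.test(text)) findings.push({ line: i + 1, rule: name, text });
    }
  });
  return findings;
}
```

For example, `detect("Works like magic.")` reports one `hype-expression` finding on line 1. Real detection also weighs context, which is one reason to treat results as suggestions rather than mandates.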
### Collaboration with Claude
```bash
# Analyze entire document for AI writing style
cat article.md
/ai-writing-check
"Analyze and suggest improvements from these perspectives:
1. Detection of mechanical expressions
2. Suggestions for correction to natural Japanese
3. Priority-based improvement list"
# Focus on specific patterns
/ai-writing-check --file blog.md
"Pay special attention to exaggerated and redundant expressions and suggest improvements"
# Batch check multiple files
find . -name "*.md" -type f
/ai-writing-check --dir docs/
"Analyze AI writing style throughout the documentation and create a summary"
```
### Detailed Examples
```bash
# Compare before and after improvement
/ai-writing-check --file draft.md
"Detect AI-like expressions and present them in the following format:
- Problem areas (with line numbers)
- Type of problem and reason
- Specific improvement suggestions
- Effect of improvement"
# Auto-fix mode
/ai-writing-check --file report.md --fix
"Automatically fix detected patterns and report results"
# Project AI writing style report
/ai-writing-check --dir . --severity all
"Analyze AI writing style throughout the project and provide:
1. Statistical information (detection count by pattern)
2. Top 5 most problematic files
3. Improvement priority matrix
4. Step-by-step improvement plan"
```
### Advanced Usage Examples
```bash
# Apply custom rules
/ai-writing-check --file spec.md
"Check technical specifications with these additional criteria:
- Ambiguous expressions (appropriate, as needed)
- Lack of specificity (fast → specific numbers)
- Inconsistent terminology usage"
# Check for CI/CD integration
/ai-writing-check --dir docs/ --severity high
"Output results in GitHub Actions executable format:
- Number of errors and filenames
- Line numbers requiring correction
- Exit code configuration"
# Style guide compliance check
/ai-writing-check --file manual.md
"Additional checks based on company style guide:
- Honorific usage (unification of desu/masu form)
- Appropriate use of technical terms
- Consideration for readers"
```
### Notes
- AI writing style determination varies by context, so treat suggestions as reference
- Adjust criteria according to document type (technical documents, blogs, manuals, etc.)
- You don't need to accept all suggestions; select appropriate ones
- The `--fix` option automatically corrects detected patterns
### Command Execution Behavior
When you run the `/ai-writing-check` command, Claude performs the following processes:
1. **Pattern Detection**: Detects AI-like patterns from specified files or text
2. **Specific Correction Suggestions**: Presents correction suggestions with line numbers for each issue
3. **--fix Mode**: Automatically fixes detected patterns and displays a summary of results
4. **Report Generation**: Provides detection count, improvement priority, and comparison before/after correction
Claude reads the actual file contents and performs analysis based on the rules of textlint-rule-preset-ai-writing.
### Reference
This command is created with reference to the [textlint-rule-preset-ai-writing](https://github.com/textlint-ja/textlint-rule-preset-ai-writing) rule set. It is a textlint rule preset for detecting mechanical patterns in AI-generated text and promoting more natural expressions.

## Task
Launches a smart agent to handle complex searches and investigations. Great for large-scale work without eating up context.
### Usage
```bash
# Request Task from Claude
"Investigate [task] using Task"
```
### What Task Does
**Works Independently**
- Combines multiple tools automatically
- Gathers and analyzes step by step
- Puts results together in clear reports
**Saves Context**
- Uses less memory than manual searching
- Searches lots of files efficiently
- Pulls data from outside sources
**Ensures Quality**
- Checks if sources are reliable
- Verifies from different angles
- Fills in missing pieces
### Basic Examples
```bash
# Complex codebase investigation
"Investigate which files implement this feature using Task"
# Large-scale file search
"Identify configuration file inconsistencies using Task"
# External information collection
"Investigate the latest AI technology trends using Task"
```
### Collaboration with Claude
```bash
# Complex problem analysis
"Analyze the cause of memory leaks using Task, including profiling results and logs"
# Dependency investigation
"Investigate vulnerabilities of this npm package using Task"
# Competitor analysis
"Investigate API specifications of competing services using Task"
# Architecture analysis
"Analyze dependencies of this microservice using Task"
```
### Task vs Other Commands
#### When to Use What
| Command | Main Use Case | Execution Method | Information Collection |
| ------------------- | ------------------------------- | --------------------- | -------------------------- |
| **Task** | Investigation, analysis, search | Autonomous execution | Multiple sources |
| ultrathink | Deep thinking, judgment | Structured thinking | Existing knowledge-focused |
| sequential-thinking | Problem-solving, design | Step-by-step thinking | As needed |
| plan | Implementation planning | Approval process | Requirement analysis |
#### Quick Decision Guide
```text
Need to gather info?
├─ Yes → From many places or lots of files?
│ ├─ Yes → **Use Task**
│ └─ No → Just ask normally
└─ No → Need deep thinking?
├─ Yes → Use ultrathink/sequential-thinking
└─ No → Just ask normally
```
### When Task Works Best
**Great For**
- Exploring complex codebases (dependencies, architecture)
- Searching many files (patterns, configs)
- Gathering external info (tech trends, libraries)
- Combining data from multiple places (logs, metrics)
- Repetitive investigations (audits, debt checks)
- Big searches that would eat too much context
**Not Great For**
- Simple questions answerable from existing knowledge
- Quick one-time tasks
- Things needing back-and-forth discussion
- Design decisions (use plan or thinking commands instead)
### Detailed Examples by Category
#### System Analysis and Investigation
```bash
# Complex system analysis
"Identify bottlenecks in the e-commerce site using Task, investigating database, API, and frontend"
# Architecture analysis
"Analyze dependencies of this microservice using Task, including API communication and data flow"
# Technical debt investigation
"Analyze technical debt in legacy code using Task, including refactoring priorities"
```
#### Security and Compliance
```bash
# Security audit
"Investigate vulnerabilities in this application using Task, based on OWASP Top 10"
# License investigation
"Investigate license issues in project dependencies using Task"
# Configuration file audit
"Identify security configuration inconsistencies using Task, including environment differences"
```
#### Performance and Optimization
```bash
# Performance analysis
"Identify heavy queries in the application using Task, including execution plans and optimization proposals"
# Resource usage investigation
"Investigate causes of memory leaks using Task, including profiling results and code analysis"
# Bundle size analysis
"Investigate frontend bundle size issues using Task, including optimization suggestions"
```
#### External Information Collection
```bash
# Technology trend investigation
"Investigate 2024 JavaScript framework trends using Task"
# Competitor analysis
"Investigate API specifications of competing services using Task, including feature comparison table"
# Library evaluation
"Compare state management libraries using Task, including performance and learning costs"
```
### Execution Flow and Quality Assurance
#### Task Execution Flow
```text
1. Initial Analysis
├─ Decomposition of task and identification of investigation scope
├─ Selection of necessary tools and information sources
└─ Development of execution plan
2. Information Collection
├─ File search and code analysis
├─ Collection of external information
└─ Data structuring
3. Analysis and Integration
├─ Relevance analysis of collected information
├─ Identification of patterns and issues
└─ Verification of hypotheses
4. Reporting and Proposal
├─ Structuring of results
├─ Creation of improvement proposals
└─ Presentation of next actions
```
#### Quality Assurance
- **Reliability check of information sources**: Fact confirmation from multiple sources
- **Completeness check**: Verification of no gaps in investigation targets
- **Consistency verification**: Confirmation of consistency in conflicting information
- **Practicality evaluation**: Assessment of feasibility and effectiveness of proposals
### Error Handling and Constraints
#### Common Constraints
- **External API usage limits**: Rate limits and authentication errors
- **Large file processing limits**: Memory and timeout constraints
- **Access permission issues**: Restrictions on file and directory access
#### Error Handling
- **Partial result reporting**: Analysis with only obtainable information
- **Alternative proposals**: Suggestion of alternative investigation methods under constraints
- **Stepwise execution**: Division of large-scale tasks for execution
### Notes
- Task is optimal for complex, autonomous investigation and analysis tasks
- For simple questions or when immediate answers are needed, use normal question format
- Treat investigation results as reference information and always verify important decisions
- When collecting external information, pay attention to the freshness and accuracy of information
### Execution Example
```bash
# Usage example
"Investigate issues in GraphQL schema using Task"
# Expected behavior
# 1. Dedicated agent starts
# 2. Search for GraphQL-related files
# 3. Analyze schema definitions
# 4. Compare with best practices
# 5. Identify issues and propose improvements
# 6. Create structured report
```

## Tech Debt
Quantitatively analyzes a project's technical debt and visualizes health scores and their impact on development efficiency. Tracks improvement progress through trend analysis and produces prioritized improvement plans with calculated time costs.
### Usage
```bash
# Analyze project structure for technical debt
ls -la
"Analyze technical debt in this project and create improvement plan"
```
### Project Health Dashboard
```text
Project Health Score: 72/100
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Category-wise Scores
├─ Dependency freshness: ████████░░ 82% (Up-to-date: 45/55)
├─ Documentation completeness: ███░░░░░░░ 35% (Missing README, API docs)
├─ Test coverage: ██████░░░░ 65% (Target: 80%)
├─ Security: ███████░░░ 78% (Vulnerabilities: 2 Medium)
├─ Architecture: ██████░░░░ 60% (Circular dependencies: 3 locations)
└─ Code quality: ███████░░░ 70% (High complexity: 12 files)
📈 Trends (Past 30 days)
├─ Overall score: 68 → 72 (+4) ↗️
├─ Improvements: 12 items ✅
├─ New debt: 3 items ⚠️
├─ Resolved debt: 8 items 🎉
└─ Improvement rate: +0.13/day
⏱️ Time Impact of Debt
├─ Development speed reduction: -20% (New features take 1.25x normal time)
├─ Bug fix time increase: +15% (Average fix time 2h → 2.3h)
├─ Code review overhead: +30% (Increased understanding time due to complexity)
├─ Onboarding delay: +50% (Time for new members to understand)
└─ Cumulative delay: 40 hours/week equivalent
🎯 Expected Benefits from Improvement (Time-based)
├─ Immediate effect: +10% development speed (After 1 week)
├─ Short-term effect: -25% bug rate (After 1 month)
├─ Medium-term effect: +30% development speed (After 3 months)
├─ Long-term effect: -50% maintenance time (After 6 months)
└─ ROI: Invest 40 hours → Recover 120 hours (3 months)
```
### Basic Examples
```bash
# Detailed health score analysis
find . -name "*.js" -o -name "*.ts" | xargs wc -l | sort -rn | head -10
"Calculate project health score and evaluate by category"
# TODO/FIXME debt impact analysis
grep -r "TODO\|FIXME\|HACK\|XXX\|WORKAROUND" . --exclude-dir=node_modules --exclude-dir=.git
"Evaluate these TODOs by debt impact (time × importance)"
# Dependency health check
ls -la | grep -E "package.json|Cargo.toml|pubspec.yaml|go.mod|requirements.txt"
"Calculate dependency freshness score and analyze update risks and benefits"
```
### Collaboration with Claude
```bash
# Comprehensive technical debt analysis
ls -la && find . -name "*.md" -maxdepth 2 -exec head -20 {} \;
"Analyze technical debt in this project from the following perspectives:
1. Code quality (complexity, duplication, maintainability)
2. Dependency health
3. Security risks
4. Performance issues
5. Test coverage gaps"
# Architecture debt analysis
find . -type d -name "src" -o -name "lib" -o -name "app" | head -10 | xargs ls -la
"Identify architecture-level technical debt and propose refactoring plan"
# Prioritized improvement plan
"Evaluate technical debt using the following criteria and present in table format:
- Impact level (High/Medium/Low)
- Fix cost (time)
- Technical risk (system failure, data loss potential)
- Time savings from improvement
- Recommended implementation timing"
```
### Detailed Examples
```bash
# Auto-detect project type and analyze
find . -maxdepth 2 -type f \( -name "package.json" -o -name "Cargo.toml" -o -name "pubspec.yaml" -o -name "go.mod" -o -name "pom.xml" \)
"Based on detected project type, analyze:
1. Language/framework-specific technical debt
2. Deviations from best practices
3. Modernization opportunities
4. Step-by-step improvement strategy"
# Code quality metrics analysis
find . -type f | grep -E "\.(js|ts|py|rs|go|dart|kt|swift|java)$" | wc -l
"Analyze project code quality and present these metrics:
- Functions with high cyclomatic complexity
- Duplicate code detection
- Overly long files/functions
- Missing proper error handling"
# Security debt detection
grep -r "password\|secret\|key\|token" . --exclude-dir=.git --exclude-dir=node_modules | grep -v ".env.example"
"Detect security-related technical debt and propose fix priorities and countermeasures"
# Test coverage gap analysis
find . -type f \( -name "*test*" -o -name "*spec*" \) | wc -l && find . -type f -name "*.md" | xargs grep -l "test"
"Analyze technical debt in test coverage and propose testing strategy"
```
### Debt Priority Matrix
```text
Priority = (Impact × Frequency) ÷ Fix Cost
```
| Priority | Development Impact | Fix Cost | Time Savings | Investment Efficiency | Response Deadline |
| ------------------------ | ------------------ | -------- | ------------ | --------------------- | ----------------- |
| **[P0] Fix Immediately** | High | Low | > 5x | Invest 1h → Save 5h+ | Immediately |
| **[P1] This Week** | High | Medium | 2-5x | Invest 1h → Save 2-5h | Within 1 week |
| **[P2] This Month** | Low | High | 1-2x | Invest 1h → Save 1-2h | Within 1 month |
| **[P3] This Quarter** | Low | Low | < 1x | Investment = Savings | Within 3 months |
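The priority formula can be made concrete with a small helper. This is a minimal sketch, assuming impact and frequency are scored 1-5 and fix cost is in hours; the tier thresholds are illustrative, not part of the command:

```typescript
// Priority = (Impact × Frequency) ÷ Fix Cost
// impact, frequency: 1 (low) .. 5 (high); fixCostHours > 0
function priorityScore(impact: number, frequency: number, fixCostHours: number): number {
  return (impact * frequency) / fixCostHours;
}

// Illustrative bucketing into the P0-P3 tiers above (thresholds assumed).
function priorityTier(score: number): string {
  if (score >= 5) return "P0"; // fix immediately
  if (score >= 2) return "P1"; // this week
  if (score >= 1) return "P2"; // this month
  return "P3";                 // this quarter
}
```

A high-impact (5), high-frequency (5) issue with a 2-hour fix scores 12.5 and lands in P0; the same issue with a 40-hour fix scores 0.625 and drops to P3.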
### Debt Type Evaluation Criteria
| Debt Type | Detection Method | Development Impact | Fix Time |
| ---------------------- | ------------------------------------- | ------------------------------------------------- | -------- |
| **Architecture Debt** | Circular dependencies, tight coupling | Large impact scope on changes, testing difficulty | 40-80h |
| **Security Debt** | CVE scans, OWASP | Vulnerability risks, compliance issues | 8-40h |
| **Performance Debt** | N+1 queries, memory leaks | Increased response time, resource consumption | 16-40h |
| **Test Debt** | Coverage < 60% | Delayed bug detection, unstable quality | 20-60h |
| **Documentation Debt** | Missing README, API docs | Increased onboarding time | 8-24h |
| **Dependency Debt** | 2+ years without updates | Security risks, compatibility issues | 4-16h |
| **Code Quality Debt** | Complexity > 10 | Increased understanding/modification time | 2-8h |
### Technical Debt Impact Calculation
```text
Impact = Σ(Weight of each factor × Measured value)
📊 Measurable Impact Indicators:
├─ Development Speed Impact
│ ├─ Code understanding time: +X% (proportional to complexity)
│ ├─ Change impact scope: Y files (measured by coupling)
│ └─ Test execution time: Z minutes (CI/CD pipeline)
├─ Quality Impact
│ ├─ Bug occurrence rate: +25% per complexity score of 10
│ ├─ Review time: Lines of code × Complexity coefficient
│ └─ Test gap risk: High risk when coverage < 60%
└─ Team Efficiency Impact
├─ Onboarding time: +100% when documentation lacking
├─ Knowledge silos: Caution when single contributor rate >80%
└─ Code duplication fix locations: Duplication rate × Change frequency
```
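The weighted-sum formula above can be sketched directly. The factor names and weights in the example are placeholders, not calibrated values:

```typescript
// Impact = Σ(weight of each factor × measured value)
interface ImpactFactor {
  name: string;   // e.g. "code understanding time" (placeholder)
  weight: number; // relative importance of this factor (assumed)
  value: number;  // measured value in the factor's own units
}

function debtImpact(factors: ImpactFactor[]): number {
  return factors.reduce((sum, f) => sum + f.weight * f.value, 0);
}
```

For instance, `debtImpact([{ name: "understanding-time", weight: 0.5, value: 8 }, { name: "review-time", weight: 0.3, value: 5 }])` yields a combined score of 5.5 under these assumed weights.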
### Time-based ROI Calculation
```text
ROI = (Time Saved - Investment Time) ÷ Investment Time × 100
Example: Resolving circular dependencies
├─ Investment time: 16 hours (refactoring)
├─ Monthly savings:
│ ├─ Compilation time: -10 min/day × 20 days = 200 min
│ ├─ Debug time: -2 hours/week × 4 weeks = 8 hours
│ └─ New feature development: -30% time reduction = 12 hours
├─ Monthly time savings: 23.3 hours
└─ 3-month ROI: (70 - 16) ÷ 16 × 100 = 337%
```
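The arithmetic above is easy to reproduce. A minimal sketch of the same calculation (hours only; monetary conversion is out of scope here):

```typescript
// ROI = (Time Saved − Investment Time) ÷ Investment Time × 100
function roiPercent(investHours: number, savedHours: number): number {
  return ((savedHours - investHours) / investHours) * 100;
}

// The circular-dependency example: 16h invested,
// monthly savings = 200 min compile + 8h debug + 12h feature time ≈ 23.3h
const monthlySavings = 200 / 60 + 8 + 12;
const threeMonthRoi = roiPercent(16, monthlySavings * 3); // ≈ 337.5%
```

`threeMonthRoi` comes out to roughly 337.5, matching the ~337% figure in the example.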
### Notes
- Automatically detects project language and framework for tailored analysis
- Health score evaluated on 0-100 scale: 70+ healthy, 50 or below needs improvement
- Calculates specific time costs and improvement benefits to support data-driven decision making
- For monetary conversion, specify team average hourly rate or project-specific coefficients separately

# Token Efficiency Mode
Reduces the context used by AI responses by 30-50% through compressed output.
## Overview
Token Efficiency Mode leverages visual symbols and abbreviation systems to compress Claude's responses.
**Generated code quality and content remain unchanged**. Only the explanation method changes.
## Usage
```bash
# Enable mode
"Respond in Token Efficiency Mode"
"--uc mode"
"Concise mode"
```
## How It Works
### 1. Symbol System
#### Logic & Flow
| Symbol | Meaning | Example |
| ------ | ---------------- | ------------------------------- |
| → | leads to, causes | `auth.js:45 → 🛡️ security risk` |
| ⇒ | converts to | `input ⇒ validated_output` |
| ← | rollback, revert | `migration ← rollback` |
| ⇄ | bidirectional | `sync ⇄ remote` |
| & | and, combine | `🛡️ security & ⚡ performance` |
| \| | or, separator | `react\|vue\|angular` |
| : | define, specify | `scope: file\|module` |
| » | then, sequence | `build » test » deploy` |
| ∴ | therefore | `tests ❌ ∴ code broken` |
| ∵ | because | `slow ∵ O(n²) algorithm` |
#### Status & Progress
| Symbol | Meaning | Usage |
| ------ | ----------------- | ----------------------- |
| ✅ | complete, success | Task completed normally |
| ❌ | failed, error | Immediate action needed |
| ⚠️ | warning | Review recommended |
| 🔄 | in progress | Currently active |
| ⏳ | pending | Scheduled for later |
| 🚨 | urgent, critical | High priority |
#### Technical Domains
| Symbol | Domain | Usage |
| ------ | ------------- | --------------------- |
| ⚡ | Performance | Speed, optimization |
| 🔍 | Analysis | Search, investigation |
| 🔧 | Configuration | Setup, tools |
| 🛡️ | Security | Protection, safety |
| 📦 | Deployment | Package, bundle |
| 🎨 | Design | UI, frontend |
| 🏗️ | Architecture | System structure |
| 🗄️ | Database | Data persistence |
| ⚙️ | Backend | Server processing |
| 🧪 | Testing | Quality assurance |
### 2. Abbreviation System
#### System & Architecture
- `cfg` → configuration
- `impl` → implementation
- `arch` → architecture
- `perf` → performance
- `ops` → operations
- `env` → environment
#### Development Process
- `req` → requirements
- `deps` → dependencies
- `val` → validation
- `auth` → authentication
- `docs` → documentation
- `std` → standards
#### Quality & Analysis
- `qual` → quality
- `sec` → security
- `err` → error
- `rec` → recovery
- `sev` → severity
- `opt` → optimization
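As a rough sketch, the abbreviation tables above can be applied mechanically. The mapping covers only the entries listed, and naive substring replacement is enough for illustration:

```python
# Long form -> abbreviation, taken from the tables above.
ABBREVIATIONS = {
    "configuration": "cfg", "implementation": "impl", "architecture": "arch",
    "performance": "perf", "operations": "ops", "environment": "env",
    "requirements": "req", "dependencies": "deps", "validation": "val",
    "authentication": "auth", "documentation": "docs", "standards": "std",
    "quality": "qual", "security": "sec", "error": "err",
    "recovery": "rec", "severity": "sev", "optimization": "opt",
}

def compress(text: str) -> str:
    """Replace each known long form with its abbreviation."""
    for long_form, short in ABBREVIATIONS.items():
        text = text.replace(long_form, short)
    return text

print(compress("security validation error in authentication"))
# sec val err in auth
```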
## Example Comparisons
### Example 1: Error Report
**Normal Mode (91 chars)**
```text
Security vulnerability found in the user validation function at line 45 of the auth system.
```
**Token Efficient (39 chars)**
```text
auth.js:45 → 🛡️ sec vuln in user val()
```
### Example 2: Build Status
**Normal Mode (90 chars)**
```text
Build process completed successfully. Tests are currently running, followed by deployment.
```
**Token Efficient (35 chars)**
```text
build ✅ » test 🔄 » deploy ⏳
```
### Example 3: Performance Analysis
**Normal Mode (80 chars)**
```text
Performance analysis revealed slow processing due to O(n²) algorithm complexity.
```
**Token Efficient (42 chars)**
```text
⚡ perf: slow ∵ O(n²) → optimize to O(n)
```
## Use Cases
### ✅ Effective Scenarios
- **Long debugging sessions**: Efficiently maintaining history
- **Large code reviews**: Concise analysis of many files
- **CI/CD monitoring**: Real-time status updates
- **Project progress reports**: Overview of multiple task states
- **Error tracking**: Visual representation of problem chains
### ❌ Scenarios to Avoid
- Explanations for beginners
- Detailed documentation creation
- Initial requirements definition
- Communication with non-technical stakeholders
## Implementation Examples
### Debugging Session
```text
[14:23] breakpoint → vars: {user: null, token: expired}
[14:24] step → auth.validate() ❌
[14:25] check → token.exp < Date.now() ∴ expired
[14:26] fix → refresh() → ✅
[14:27] continue → main flow 🔄
```
### File Analysis Results
```text
/src/auth/: 🛡️ issues × 3
/src/api/: ⚡ bottleneck in handler()
/src/db/: ✅ clean
/src/utils/: ⚠️ deprecated methods
/tests/: 🧪 coverage 78%
```
### Project Status
```text
Frontend: 🎨 ✅ 100%
Backend: ⚙️ 🔄 75%
Database: 🗄️ ✅ migrated
Tests: 🧪 ⚠️ 68% (target: 80%)
Deploy: 📦 ⏳ scheduled
Security: 🛡️ 🚨 1 critical
```
## Configuration Options
```bash
# Compression levels
--uc    # Ultra Compressed: maximum compression
--mc    # Moderate Compressed: medium compression
--lc    # Light Compressed: light compression

# Domain-specific
--dev   # Development-focused compression
--ops   # Operations-focused compression
--sec   # Security-focused compression
```
## Benefits
1. **Context saving**: 30-50% token reduction
2. **Visual understanding**: Intuitive grasp through symbols
3. **Information density**: More information in the same space
4. **History retention**: Maintain longer conversation history
5. **Pattern recognition**: Easier problem detection through visual patterns
## Notes
- This mode only changes **AI response style**
- **Code quality** remains unchanged
- Can switch with "explain in normal mode" as needed
- Normal mode recommended for beginners and non-technical users
## Command Examples
```bash
# Enable
"Token Efficient Mode on"
"Respond concisely"
"Analyze with --uc"
# Disable
"Return to normal mode"
"Explain in detail"
"Token Efficient Mode off"
```
## Implementation Impact
| Item | Impact |
| ----------------------- | ------------------- |
| Generated code quality | No change ✅ |
| Implementation accuracy | No change ✅ |
| Functionality | No change ✅ |
| AI explanation method | Compressed 🔄 |
| Context usage | 30-50% reduction ⚡ |
---
💡 **Pro Tip**: For long work sessions, start with normal mode to build understanding, then switch to Token Efficient Mode to optimize efficiency and context retention.

---

<!-- commands/ultrathink.md -->
## Ultrathink
Execute a step-by-step, structured thinking process for complex tasks and important decisions.
### Usage
```bash
# Request deep thinking from Claude
"Analyze [task] using ultrathink"
```
### Basic Examples
```bash
# Examine architecture design
"Analyze whether to choose microservices or monolith using ultrathink"
# Analyze technology selection
"Analyze whether Rust or TypeScript is suitable for this project using ultrathink"
# Deep dive into problem solving
"Analyze the causes of poor application performance and improvement methods using ultrathink"
```
### Collaboration with Claude
```bash
# Business decisions
"Prioritize new features using ultrathink. Consider user value, development cost, and technical risk"
# System design
"Design an authentication system using ultrathink. Consider security, scalability, and maintainability"
# Trade-off analysis
"Analyze the choice between GraphQL vs REST API using ultrathink. Based on project requirements"
# Refactoring strategy
cat src/legacy_code.js
"Develop a refactoring strategy for this legacy code using ultrathink"
```
### Thinking Process
1. **Problem Decomposition** - Break down tasks into components
2. **MECE Analysis** - Organize without gaps or overlaps
3. **Multiple Perspective Review** - Analyze from technical, business, and user perspectives
4. **Interactive Confirmation** - Confirm with users at important decision points
5. **Evidence-Based Proposal** - Conclusions based on data and logic
### Detailed Examples
```bash
# Resolve complex technical debt
"Develop a strategy to modernize a 10-year legacy system using ultrathink. Include phased migration, risks, and ROI"
# Organizational challenges
"Develop a scaling strategy for the development team using ultrathink. Assume expansion from 5 to 20 people"
# Database migration
"Analyze migrating from PostgreSQL to DynamoDB using ultrathink. Consider cost, performance, and operational aspects"
```
### Notes
Ultrathink is ideal for tasks that require deep thinking over time. For simple questions or immediate answers, use the normal question format.

---

<!-- commands/update-dart-doc.md -->
## Update Dart Doc
Systematically manages DartDoc comments in Dart files and maintains high-quality Japanese documentation.
### Usage
```bash
# Perform new additions and updates simultaneously
"Add DartDoc comments to classes without them and update comments that don't meet standards"
# Check changed files in PR
"Check if there are Claude markers in the DartDoc of files changed in PR #4308"
# Maintain documentation for specific directories
"Add DartDoc to Widget classes under packages/app/lib/ui/screen/"
# Execute without markers
/update-dart-doc --marker false
"Improve DartDoc in existing project (without Claude markers)"
```
### Options
- `--marker <true|false>` : Whether to add Claude markers (default: true)
### Basic Examples
```bash
# 1. Analyze target files
find . -name "*.dart" -not -path "*/.*" | grep -v "_test.dart" | grep -v "_vrt.dart"
"Identify classes with insufficient DartDoc (0 lines or less than 30 characters)"
# 2. Add documentation
"Add DartDoc comments containing required elements to the identified classes"
# 3. Check markers
"Ensure all added/updated DartDoc have Claude markers"
```
### Execution Procedure
#### 1. Priority of Target Elements
1. 🔴 **Highest priority**: Elements without DartDoc comments (0 comment lines)
2. 🟡 **Next priority**: Elements not meeting standards (less than 30 characters or missing required elements)
3. 🟢 **Verification target**: Existing comments without Claude markers
**Target elements**:
- Classes (all class definitions)
- Enums
- Extensions
- Important functions (top-level functions, optional)
#### 2. DartDoc Writing Rules
**Basic structure**:
```dart
/// {Element summary} (30-60 characters, required)
///
/// {Detailed description} (must include role, usage context, and notes, 50-200 characters)
///
/// Generated by Claude 🤖
@annotation // Do not change existing annotations
class ClassName {
```
**Text style**:
- Polite language (desu/masu form): "displays", "is a class that manages"
- Use Japanese punctuation: 「。」「、」
- Add half-width space between Japanese and alphanumeric characters
- Use English for technical terms: "Authentication state"
- Keep each line within 80 characters
#### 3. Writing Examples by Class Category
**State management class (Riverpod)**:
```dart
/// State that manages the disabled state of horizontal swipe gestures.
///
/// Used when horizontal swipes need to be disabled during specific screens or operations,
/// such as during carousel displays or specific inputs.
///
/// Generated by Claude 🤖
@Riverpod(keepAlive: true, dependencies: [])
class HorizontalDragGestureIgnoreState extends _$HorizontalDragGestureIgnoreState {
```
**Widget class**:
```dart
/// Widget that displays a user profile.
///
/// Vertically arranges avatar image, username, and status information,
/// and navigates to the profile detail screen when tapped.
///
/// Generated by Claude 🤖
class UserProfileWidget extends HookConsumerWidget {
```
#### 4. Rules for Preserving Existing Content
1. **If existing comment meets standards**: Keep as is (do not add new comment)
- Standards: 30+ characters and includes required elements (summary, details, marker)
2. **If existing comment does not meet standards**: Completely replace (no duplication)
3. **If no existing comment**: Add new comment
**Important information to preserve**:
- URLs and links: References starting with `See also:`
- TODO comments: In the format `TODO(user_name):`
- Notes: Warnings like `Note:` or `Warning:`
- Usage examples: Code starting with `Example:` or `Usage:`
- Technical constraints: Descriptions of performance or limitations
### Claude Marker Management
```bash
# Marker format
/// Generated by Claude 🤖
# Check markers in PR changed files
gh pr diff 4308 --name-only | grep "\.dart$" | xargs grep -l "Generated by Claude"
"Add markers to files that don't have them"
```
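The marker check can also be scripted. A minimal Python sketch — the marker string comes from this command, while the directory-walking helper is an assumption:

```python
from pathlib import Path

MARKER = "Generated by Claude 🤖"

def has_marker(source: str) -> bool:
    """Return True if the Claude marker appears in the file text."""
    return MARKER in source

def files_missing_marker(root: str) -> list[str]:
    """List .dart files under root whose contents lack the marker."""
    return [str(p) for p in Path(root).rglob("*.dart")
            if not has_marker(p.read_text(encoding="utf-8"))]

print(has_marker("/// Widget description.\n///\n/// Generated by Claude 🤖\n"))  # True
```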
### Quality Check List
- **Character count**: Strictly adhere to 30-60 characters for summaries, 50-200 for details
- **Required elements**: Always include 3 elements - summary, detailed explanation, and Claude marker
- **Completeness**: Describe role, usage context, and notes
- **Consistency**: Unify style with polite language (desu/masu form)
- **Format**: Add half-width space between Japanese and English
- **Accuracy**: Analyze implementation and only include fact-based descriptions
- **Structure**: Preserve annotations, place comments above
- **Length**: Keep each line within 80 characters
- **Marker**: Always add marker for changes by Claude
### Notes
**🔴 Absolute prohibitions**:
- ❌ Code changes other than documentation comments
- ❌ Speculation about implementation details (only factual descriptions)
- ❌ Unnatural mixing of English and Japanese
- ❌ Deletion or modification of existing annotations
- ❌ Duplication with existing comments
- ❌ Comments under character count standards in test files (`*_test.dart`)
- ❌ Comments under character count standards in VRT files (`*_vrt.dart`)
**Static analysis and commit**:
```bash
# Record execution results
ADDED_COMMENTS=0
UPDATED_COMMENTS=0
ERRORS=0
# Check after changes
melos analyze
if [ $? -ne 0 ]; then
echo "🔴 Error: Static analysis failed"
exit 1
fi
# Output execution summary
echo "📊 Execution results:"
echo "- Added comments: $ADDED_COMMENTS"
echo "- Updated comments: $UPDATED_COMMENTS"
echo "- Errors: $ERRORS"
# Example commit
git commit -m "docs: Add and update DartDoc comments
- Add DartDoc to classes, enums, and extensions that don't meet standards
- Update comments under 30 characters to meet standards
- Uniformly add Claude markers
Execution results:
- Added: $ADDED_COMMENTS
- Updated: $UPDATED_COMMENTS
Generated by Claude 🤖"
```
### Execution Success Criteria
1. **Complete success**: When all of the following are met
- `melos analyze` PASSED
- 0 errors
- All added/updated comments meet standards
2. **Partial success**: When
- Fewer than 5 errors
- 90% or more of all comments meet standards
3. **Failure**: When
- `melos analyze` FAILED
- 5 or more errors

---
## Update Doc String
Systematically manage multilingual docstrings/comments and maintain high-quality documentation.
### Usage
```bash
# Run with automatic language detection
"Please add docstrings to classes and functions without them, and update comments that don't meet standards"
# Run with specified language
/update-doc-string --lang python
"Please update docstrings in Python files to comply with PEP 257"
# Maintain documentation for specific directories
"Please add JSDoc to functions under src/components/"
```
### Options
- `--lang <language>` : Documentation language (default: en)
- `--style <style>` : Specify documentation style (has language-specific defaults)
- `--marker <true|false>` : Whether to add Claude markers (default: true)
### Basic Examples
```bash
# 1. Analyze target files (programming language is auto-detected)
find . -type f \( -name "*.py" -o -name "*.js" -o -name "*.ts" -o -name "*.dart" -o -name "*.go" -o -name "*.rs" \) | grep -v test
"Please identify elements with insufficient docstrings (0 comment lines or fewer than 30 characters)"
# 2. Add documentation (uses English by default)
"Please add docstrings containing language-specific required elements to the identified elements"
# → Uses English for all documentation
# 3. Add documentation (explicitly specify English)
/update-doc-string --lang en
"Add docstrings with required elements to the identified elements"
# 4. Check markers
"Please confirm that all added/updated docstrings have Claude markers"
```
### Execution Steps
#### 1. Priority of Target Elements
1. 🔴 **Highest Priority**: Elements without docstrings/comments (0 comment lines)
2. 🟡 **Next Priority**: Elements not meeting standards (fewer than 30 characters or missing required elements)
3. 🟢 **Verification Target**: Existing comments without Claude markers
**Target Elements (Common Across Languages)**:
- Class definitions
- Functions/methods
- Modules (Python, Go)
- Enums
- Interfaces (TypeScript, Go)
#### 2. Language-Specific Documentation Rules
**Python (PEP 257)**:
```python
def calculate_total(items: List[Item]) -> float:
    """Calculate the total amount for a list of items. (30-60 characters)

    Multiplies the price and quantity of each item and returns
    the total with tax. Returns 0.0 for empty lists. (50-200 characters)

    Args:
        items: List of items to calculate

    Returns:
        Total amount with tax

    Generated by Claude 🤖
    """
```
**JavaScript/TypeScript (JSDoc)**:
```javascript
/**
* Component that displays a user profile. (30-60 characters)
*
* Displays avatar image, username, and status information,
* and navigates to the profile detail screen when clicked. (50-200 characters)
*
* @param {Object} props - Component properties
* @param {string} props.userId - User ID
* @param {boolean} [props.showStatus=true] - Status display flag
* @returns {JSX.Element} Rendered component
*
* @generated by Claude 🤖
*/
const UserProfile = ({ userId, showStatus = true }) => {
```
**Go**:
```go
// CalculateTotal calculates the total amount for a list of items. (30-60 characters)
//
// Multiplies the price and quantity of each item and returns
// the total with tax. Returns 0.0 for empty slices. (50-200 characters)
//
// Generated by Claude 🤖
func CalculateTotal(items []Item) float64 {
```
**Rust**:
```rust
/// Calculate the total amount for a list of items. (30-60 characters)
///
/// Multiplies the price and quantity of each item and returns
/// the total with tax. Returns 0.0 for empty vectors. (50-200 characters)
///
/// Generated by Claude 🤖
pub fn calculate_total(items: &[Item]) -> f64 {
```
**Dart (DartDoc)**:
```dart
/// Widget that displays a user profile. (30-60 characters)
///
/// Vertically arranges avatar image, username, and status information,
/// and navigates to the profile detail screen when tapped. (50-200 characters)
///
/// Generated by Claude 🤖
class UserProfileWidget extends StatelessWidget {
```
#### 3. Existing Content Retention Rules
1. **If existing comments meet standards**: Keep as-is (do not add new ones)
- Standards: At least 30 characters and includes required elements (summary, details, marker)
2. **If existing comments do not meet standards**: Completely replace (no duplication)
3. **If no existing comments**: Add new comments
**Important Information to Retain**:
- URLs and links: `See also:`, `@see`, `References:` etc.
- TODO comments: `TODO:`, `FIXME:`, `XXX:` format
- Notes: `Note:`, `Warning:`, `Important:` etc.
- Examples: `Example:`, `Usage:`, `# Examples` etc.
- Existing parameter and return value descriptions
### Language-Specific Settings
```yaml
# Language-specific default settings
languages:
python:
style: "google" # google, numpy, sphinx
indent: 4
quotes: '"""
javascript:
style: "jsdoc"
indent: 2
prefix: "/**"
suffix: "*/"
typescript:
style: "tsdoc"
indent: 2
prefix: "/**"
suffix: "*/"
go:
style: "godoc"
indent: 0
prefix: "//"
rust:
style: "rustdoc"
indent: 0
prefix: "///"
dart:
style: "dartdoc"
indent: 0
prefix: "///"
```
### Quality Checklist
- **Character Count**: Strictly adhere to 30-60 characters for summaries, 50-200 for details
- **Required Elements**: Always include summary, detailed description, and Claude marker
- **Completeness**: Describe role, usage context, and notes
- **Language Conventions**: Comply with official style guides for each language
- **Exceptions**: Explain errors and exceptions (when applicable)
- **Accuracy**: Analyze implementation and only include fact-based descriptions
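The character-count standards above can be checked mechanically. A sketch — the thresholds are the ones stated in the checklist, while the function itself is illustrative:

```python
def check_docstring(summary: str, details: str, has_marker: bool) -> list[str]:
    """Return a list of violations of the documented standards."""
    problems = []
    if not 30 <= len(summary) <= 60:
        problems.append(f"summary length {len(summary)} outside 30-60")
    if not 50 <= len(details) <= 200:
        problems.append(f"details length {len(details)} outside 50-200")
    if not has_marker:
        problems.append("missing Claude marker")
    return problems

print(check_docstring("Calculate the total amount for a list of items.",
                      "Multiplies the price and quantity of each item and "
                      "returns the total with tax.", True))  # []
```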
### Notes
**🔴 Strict Prohibitions**:
- ❌ Code changes other than documentation comments
- ❌ Speculation about implementation details (only facts)
- ❌ Formats that violate language conventions
- ❌ Deletion or modification of existing type annotations
- ❌ Duplication with existing comments
- ❌ Comments below character count standards in test files
**Execution and Verification**:
```bash
# Record execution results
ADDED_COMMENTS=0
UPDATED_COMMENTS=0
ERRORS=0
# Auto-detect language from existing comments
# If Japanese characters (hiragana, katakana, kanji) are detected, use ja; otherwise use en
# (requires GNU grep for -P; plain grep -E does not understand \u escapes)
DOC_LANGUAGE="en" # Default
if grep -rqP '[\x{3040}-\x{309F}\x{30A0}-\x{30FF}\x{4E00}-\x{9FAF}]' \
  --include="*.py" --include="*.js" --include="*.ts" \
  --include="*.dart" --include="*.go" --include="*.rs" . 2>/dev/null; then
  DOC_LANGUAGE="ja"
fi
# Auto-detect programming language and perform static analysis
# (compgen -G tests whether any file matches the glob; [ -f "*.py" ] would test
# for a literal file named "*.py")
if compgen -G "*.py" > /dev/null; then
  pylint --disable=all --enable=missing-docstring .
elif compgen -G "*.js" > /dev/null || compgen -G "*.ts" > /dev/null; then
  eslint . --rule 'jsdoc/require-jsdoc: error'
elif compgen -G "*.go" > /dev/null; then
  golint ./...
elif compgen -G "*.rs" > /dev/null; then
  cargo doc --no-deps
elif compgen -G "*.dart" > /dev/null; then
  melos analyze
fi
if [ $? -ne 0 ]; then
echo "🔴 Error: Static analysis failed"
exit 1
fi
# Output execution summary
echo "📊 Execution Results:"
echo "- Documentation Language: $DOC_LANGUAGE"
echo "- Added Comments: $ADDED_COMMENTS"
echo "- Updated Comments: $UPDATED_COMMENTS"
echo "- Errors: $ERRORS"
```
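The same Japanese-character detection can be written in Python, where the Unicode ranges are explicit (same ranges as the grep above):

```python
import re

# Hiragana, katakana, and common CJK ideograph ranges.
JAPANESE = re.compile(r"[\u3040-\u309F\u30A0-\u30FF\u4E00-\u9FAF]")

def detect_doc_language(text: str) -> str:
    """Return 'ja' if the text contains Japanese characters, else 'en'."""
    return "ja" if JAPANESE.search(text) else "en"

print(detect_doc_language("Calculate the total"))  # en
print(detect_doc_language("合計を計算する"))        # ja
```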
### Success Criteria
1. **Completion Criteria**: Success when all of the following are met:
- Language-specific static analysis PASSED
- Error count is 0
- All added/updated comments meet standards
2. **Partial Success**: In the following cases:
- Error count is less than 5
- More than 90% meet standards
3. **Failure**: In the following cases:
- Static analysis FAILED
- Error count is 5 or more
### Integration with Claude
```bash
# Analyze entire project (auto language detection)
find . -type f \( -name "*.py" -o -name "*.js" -o -name "*.ts" \)
/update-doc-string
"Update this project's docstrings following language-specific best practices"
# → If existing comments contain Japanese, run with ja; otherwise, run with en
# Explicitly run with English documentation
/update-doc-string --lang en
"Update docstrings following language-specific best practices"
# Explicitly run with Japanese documentation
/update-doc-string --lang ja
"Update this project's docstrings following language-specific best practices"
# Run without markers (auto language detection)
/update-doc-string --marker false
"Improve existing docstrings without adding Claude markers"
# English documentation, no markers
/update-doc-string --lang en --marker false
"Improve existing docstrings without adding Claude markers"
```

---
## Flutter Dependencies Update
Safely update dependencies in your Flutter project.
### Usage
```bash
# Check dependency status and request Claude's help
flutter pub deps --style=compact
"Please update the dependencies in pubspec.yaml to their latest versions"
```
### Basic Examples
```bash
# Check current dependencies
cat pubspec.yaml
"Analyze this Flutter project's dependencies and tell me which packages can be updated"
# Check before upgrading
flutter pub upgrade --dry-run
"Check if there are any breaking changes in this planned upgrade"
```
### Integration with Claude
```bash
# Comprehensive dependency update
cat pubspec.yaml
"Analyze Flutter dependencies and perform the following:
1. Research the latest version of each package
2. Check for breaking changes
3. Evaluate risk level (safe, caution, dangerous)
4. Suggest necessary code changes
5. Generate updated pubspec.yaml"
# Safe, gradual update
flutter pub outdated
"Update only packages that can be safely updated, avoiding major version upgrades"
# Impact analysis for specific package update
"Tell me the impact and necessary changes when updating provider to the latest version"
```
### Detailed Examples
```bash
# Detailed analysis including release notes
cat pubspec.yaml && flutter pub outdated
"Analyze dependencies and provide the following for each package in table format:
1. Current → Latest version
2. Risk evaluation (safe, caution, dangerous)
3. Main changes (from CHANGELOG)
4. Required code fixes"
# Null Safety migration analysis
cat pubspec.yaml
"Identify packages not compatible with Null Safety and create a migration plan"
```
### Risk Criteria
```text
Safe (🟢):
- Patch version upgrade (1.2.3 → 1.2.4)
- Bug fixes only
- Backward compatibility guaranteed
Caution (🟡):
- Minor version upgrade (1.2.3 → 1.3.0)
- New features added
- Deprecation warnings
Dangerous (🔴):
- Major version upgrade (1.2.3 → 2.0.0)
- Breaking changes
- API removals or modifications
```
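The criteria above amount to a semantic-versioning comparison. A minimal sketch, assuming plain MAJOR.MINOR.PATCH version strings (pre-1.0 packages, where minor bumps may also be breaking, are ignored here):

```python
def classify_upgrade(current: str, latest: str) -> str:
    """Classify an upgrade as safe/caution/dangerous per the criteria above."""
    cur = [int(x) for x in current.split(".")]
    new = [int(x) for x in latest.split(".")]
    if new[0] != cur[0]:
        return "dangerous"  # major bump: breaking changes possible
    if new[1] != cur[1]:
        return "caution"    # minor bump: new features, deprecations
    return "safe"           # patch bump: bug fixes only

print(classify_upgrade("1.2.3", "1.2.4"))  # safe
print(classify_upgrade("1.2.3", "1.3.0"))  # caution
print(classify_upgrade("1.2.3", "2.0.0"))  # dangerous
```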
### Execution of Update
```bash
# Create backups
cp pubspec.yaml pubspec.yaml.backup
cp pubspec.lock pubspec.lock.backup
# Execute update
flutter pub upgrade
# Verify after update
flutter analyze
flutter test
flutter pub deps --style=compact
```
### Notes
Always verify functionality after updates. If issues occur, restore with:
```bash
cp pubspec.yaml.backup pubspec.yaml
cp pubspec.lock.backup pubspec.lock
flutter pub get
```

---
## Node Dependencies Update
Safely update dependencies in your Node.js project.
### Usage
```bash
# Check dependency status and request Claude's help
npm outdated
"Please update the dependencies in package.json to their latest versions"
```
### Basic Examples
```bash
# Check current dependencies
cat package.json
"Analyze this Node.js project's dependencies and tell me which packages can be updated"
# Check list of updatable packages
npm outdated
"Analyze the risk level of updating these packages"
```
### Integration with Claude
```bash
# Comprehensive dependency update
cat package.json
"Analyze Node.js dependencies and perform the following:
1. Research the latest version of each package
2. Check for breaking changes
3. Evaluate risk level (safe, caution, dangerous)
4. Suggest necessary code changes
5. Generate updated package.json"
# Safe, gradual update
npm outdated
"Update only packages that can be safely updated, avoiding major version upgrades"
# Impact analysis for specific package update
"Tell me the impact and necessary changes when updating express to the latest version"
```
### Detailed Examples
```bash
# Detailed analysis including release notes
cat package.json && npm outdated
"Analyze dependencies and provide the following for each package in table format:
1. Current → Latest version
2. Risk evaluation (safe, caution, dangerous)
3. Main changes (from CHANGELOG)
4. Required code fixes"
# TypeScript project with type definitions consideration
cat package.json tsconfig.json
"Update dependencies including TypeScript type definitions and create an update plan that avoids type errors"
```
### Risk Criteria
```text
Safe (🟢):
- Patch version upgrade (1.2.3 → 1.2.4)
- Bug fixes only
- Backward compatibility guaranteed
Caution (🟡):
- Minor version upgrade (1.2.3 → 1.3.0)
- New features added
- Deprecation warnings
Dangerous (🔴):
- Major version upgrade (1.2.3 → 2.0.0)
- Breaking changes
- API removals or modifications
```
### Execution of Update
```bash
# Create backups
cp package.json package.json.backup
cp package-lock.json package-lock.json.backup
# Execute update
npm update
# Verify after update
npm test
npm run build
npm audit
```
### Notes
Always verify functionality after updates. If issues occur, restore with:
```bash
cp package.json.backup package.json
cp package-lock.json.backup package-lock.json
npm install
```

---
## Rust Dependencies Update
Safely update dependencies in your Rust project.
### Usage
```bash
# Check dependency status and request Claude's help
cargo tree
"Please update the dependencies in Cargo.toml to their latest versions"
```
### Basic Examples
```bash
# Check current dependencies
cat Cargo.toml
"Analyze this Rust project's dependencies and tell me which crates can be updated"
# Check list of updatable crates
cargo update --dry-run
"Analyze the risk level of updating these crates"
```
### Integration with Claude
```bash
# Comprehensive dependency update
cat Cargo.toml
"Analyze Rust dependencies and perform the following:
1. Research the latest version of each crate
2. Check for breaking changes
3. Evaluate risk level (safe, caution, dangerous)
4. Suggest necessary code changes
5. Generate updated Cargo.toml"
# Safe, gradual update
cargo tree
"Update only crates that can be safely updated, avoiding major version upgrades"
# Impact analysis for specific crate update
"Tell me the impact and necessary changes when updating tokio to the latest version"
```
### Detailed Examples
```bash
# Detailed analysis including release notes
cat Cargo.toml && cargo tree
"Analyze dependencies and provide the following for each crate in table format:
1. Current → Latest version
2. Risk evaluation (safe, caution, dangerous)
3. Main changes (from CHANGELOG)
4. Trait bound changes
5. Required code fixes"
# Async runtime migration analysis
cat Cargo.toml src/main.rs
"Present all necessary changes for migrating from async-std to tokio or upgrading tokio to a new major version"
```
### Risk Criteria
```text
Safe (🟢):
- Patch version upgrade (0.1.2 → 0.1.3)
- Bug fixes only
- Backward compatibility guaranteed
Caution (🟡):
- Minor version upgrade (0.1.0 → 0.2.0)
- New features added
- Deprecation warnings
Dangerous (🔴):
- Major version upgrade (0.x.y → 1.0.0, 1.x.y → 2.0.0)
- Breaking changes
- API removals or modifications
- Trait bound changes
```
### Execution of Update
```bash
# Create backups
cp Cargo.toml Cargo.toml.backup
cp Cargo.lock Cargo.lock.backup
# Execute update
cargo update
# Verify after update
cargo check
cargo test
cargo clippy
```
### Notes
Always verify functionality after updates. If issues occur, restore with:
```bash
cp Cargo.toml.backup Cargo.toml
cp Cargo.lock.backup Cargo.lock
cargo build
```