Initial commit

Zhongwei Li
2025-11-30 08:47:23 +08:00
commit eb42d25f7b
51 changed files with 9966 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,15 @@
{
"name": "claude-vibe",
"description": "A comprehensive Claude Code plugin with intelligent context management, vibe coding support, and context window optimization through MCP/agent selection",
"version": "0.3.0",
"author": {
"name": "physics91",
"url": "https://github.com/physics91"
},
"skills": [
"./skills"
],
"commands": [
"./commands"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# claude-vibe
A comprehensive Claude Code plugin with intelligent context management, vibe coding support, and context window optimization through MCP/agent selection

commands/a.md Normal file

@@ -0,0 +1,24 @@
---
description: "[Express] Analyze code - alias for /analyze-code"
allowed-tools: Read, Glob, Grep
---
# Express Analyze (/a)
Quick alias for code analysis. Uses the same functionality as `/analyze-code`.
## Usage
- `/a` - Analyze current project
- `/a <file-path>` - Analyze specific file
- `/a security` - Focus on security analysis
- `/a perf` - Focus on performance analysis
$ARGUMENTS
Analyze for:
1. Code quality and maintainability
2. Performance bottlenecks
3. Security vulnerabilities
4. Best practice violations
Output findings with severity levels: CRITICAL, HIGH, MEDIUM, LOW.

commands/context-setup.md Normal file

@@ -0,0 +1,75 @@
---
description: Configure and optimize project context (MCP servers, agent selection)
allowed-tools: Read, Glob, Grep, Write, AskUserQuestion
---
# Context Setup
Interactively configure project context settings.
**[Activate Skill: context-manager]**
## Tasks
### 1. Project Analysis
Detect project type by checking:
- `package.json` - Node.js/JavaScript/TypeScript
- `requirements.txt` / `pyproject.toml` - Python
- `go.mod` - Go
- `Cargo.toml` - Rust
- `Dockerfile` / `docker-compose.yml` - DevOps
- `*.tf` / `Chart.yaml` - Infrastructure
- Directory structure (src/, app/, internal/, etc.)
### 2. Preset Recommendations
Recommend presets based on detected project type:
- **web-dev**: React, Vue, Next.js (web frontend)
- **python-web**: FastAPI, Django, Flask (Python web)
- **go-backend**: Gin, Echo, Fiber, Chi (Go backend)
- **rust-systems**: Actix-web, Axum (Rust systems)
- **devops**: Docker, Kubernetes, Terraform (DevOps)
- **api-dev**: Express, NestJS (backend API)
- **data-science**: Pandas, TensorFlow (data/ML)
- **full-stack**: Web + API + Database (full stack)
- **minimal**: Core tools only (maximum token savings)
### 3. Interactive Selection
Using AskUserQuestion:
1. Select preset or Custom
2. If Custom: Select individual MCP servers
3. If Custom: Select agent categories
### 4. Generate Config Files
- `.claude/context-profile.json`: Project settings
- `.claude/.mcp.json`: MCP server settings
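The plugin defines the exact schema when it writes these files; as an illustrative sketch only (field names assumed, not authoritative), a generated `.claude/context-profile.json` could look like:
```json
{
  "preset": "web-dev",
  "detectedType": "React + TypeScript (Next.js)",
  "mcpServers": ["github", "playwright", "brave-search"],
  "agents": ["react-expert", "vue-expert", "css-expert", "nodejs-expert", "frontend-optimizer"],
  "estimatedTokenSavings": 28000
}
```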
### 5. Restart Instructions
Guide user to restart Claude Code for MCP settings to take effect.
## Output Example
```
Analyzing project...
Detected: React + TypeScript (Next.js)
Recommended preset: Web Development
Expected token savings: ~28,000 tokens (14%)
[Preset selection question]
---
Setup complete!
Active MCP servers:
- github, playwright, brave-search
Active agents:
- react-expert, vue-expert, css-expert, nodejs-expert, frontend-optimizer
Generated files:
- .claude/context-profile.json
- .claude/.mcp.json
Restart Claude Code to apply MCP settings.
```

commands/context-status.md Normal file

@@ -0,0 +1,62 @@
---
description: Display current project context configuration status
allowed-tools: Read, Glob
---
# Context Status
Check current project context configuration.
## Checks
### 1. Project Profile
Read `.claude/context-profile.json` for current settings.
### 2. MCP Configuration
Read `.claude/.mcp.json` for active MCP servers.
### 3. Compare with Global Settings
Compare with `~/.claude/claude_code_config.json` to show active/inactive status.
## Output Format
**With profile:**
```
## Current Context Status
**Profile**: [profile name]
**Project**: [project path]
**Last updated**: [date]
### MCP Servers (N/M active)
[x] github - GitHub integration
[x] playwright - Browser automation
[x] brave-search - Web search
[ ] filesystem - Filesystem (inactive)
[ ] openrouter - AI routing (inactive)
### Active Agents
react-expert, vue-expert, css-expert, nodejs-expert, frontend-optimizer
### Inactive Agents
ios-expert, android-expert, flutter-expert, ml-engineer, ...
### Token Savings
Expected: ~28,000 tokens (14%)
---
Change settings: /context-setup
```
**Without profile:**
```
## Context Status
No context profile configured for this project.
**Detected project type**: [detected type]
**Recommended preset**: [preset]
**Expected token savings**: ~[N] tokens
Run `/context-setup` to optimize context.
```

commands/cs.md Normal file

@@ -0,0 +1,26 @@
---
description: "[Express] Context setup - alias for /context-setup"
allowed-tools: Read, Write, Glob, Grep, AskUserQuestion
---
# Express Context Setup (/cs)
Quick alias for context configuration. Uses the same functionality as `/context-setup`.
## Usage
- `/cs` - Interactive context setup wizard
- `/cs <preset>` - Apply specific preset (minimal, web-dev, python-web, etc.)
- `/cs auto` - Auto-detect and configure
$ARGUMENTS
Available presets:
- `minimal` - Core tools only (~45k tokens saved)
- `web-dev` - React/Vue/Next.js frontend
- `python-web` - FastAPI/Django/Flask
- `go-backend` - Gin/Echo/Fiber
- `rust-systems` - Actix-web/Axum
- `devops` - Docker/K8s/Terraform
- `api-dev` - Backend microservices
- `data-science` - ML/AI projects
- `full-stack` - Combined stack

commands/d.md Normal file

@@ -0,0 +1,25 @@
---
description: "[Express] Debug mode - alias for /debug"
allowed-tools: Read, Write, Glob
---
# Express Debug (/d)
Quick alias for debug mode toggle. Uses the same functionality as `/debug`.
## Usage
- `/d` - Toggle debug mode
- `/d on` - Enable debug mode
- `/d off` - Disable debug mode
- `/d status` - Show current debug status
$ARGUMENTS
When debug mode is enabled:
- Show skill selection reasoning
- Display context token counts
- Log MCP server calls
- Show performance metrics
- Verbose hook execution logs
Debug logs are stored in `~/.claude/claude-vibe/debug.log`.

commands/debug.md Normal file

@@ -0,0 +1,119 @@
---
description: Toggle debug mode for verbose skill and context information
allowed-tools: Read, Write
---
# Debug Mode
Enable or disable verbose debugging output for troubleshooting.
## Usage
```
/debug [on|off|status]
```
## Parameters
- `on`: Enable debug mode
- `off`: Disable debug mode
- `status`: Show current debug status (default)
## What Debug Mode Shows
When enabled, debug mode provides additional output:
### 1. Skill Selection Reasoning
```
[DEBUG] Skill Selection:
- Analyzing prompt: "Review my API endpoint"
- Candidate skills: fastapi-reviewer (0.85), go-api-reviewer (0.32), api-documenter (0.28)
- Selected: fastapi-reviewer (highest match)
- Reason: Found "fastapi" in requirements.txt, prompt mentions "API"
```
### 2. Context Token Usage
```
[DEBUG] Context Tokens:
- System prompt: 4,200 tokens
- Skill content: 2,100 tokens
- MCP tools: 8,500 tokens
- Conversation: 12,300 tokens
- Available: 172,900 tokens
- Usage: 13.5%
```
### 3. MCP Server Calls
```
[DEBUG] MCP Call:
- Server: github
- Tool: search_code
- Query: "def create_user"
- Duration: 234ms
- Result: 3 matches
```
### 4. File Operations
```
[DEBUG] File Read:
- Path: src/api/users.py
- Size: 2.4KB
- Lines: 89
- Tokens: ~450
```
## Output Format
**Status Check:**
```
## Debug Mode Status
**Current**: OFF
When enabled, shows:
- Skill selection reasoning
- Context token breakdown
- MCP server call details
- File operation metrics
- Performance timing
To enable: /debug on
```
**When Enabled:**
```
## Debug Mode
**Status**: ON ✓
Debug information will appear in [DEBUG] blocks.
Example output:
[DEBUG] Skill Selection: python-reviewer (confidence: 0.92)
[DEBUG] Tokens: 15,200 / 200,000 (7.6%)
To disable: /debug off
```
## Configuration
Debug settings can be persisted in `.claude/settings.json`:
```json
{
"debug": {
"enabled": false,
"showSkillSelection": true,
"showTokenUsage": true,
"showMCPCalls": true,
"showFileOps": false
}
}
```
## Notes
- Debug mode adds overhead to responses
- Useful for understanding skill behavior
- Recommended to disable in normal usage
- Does not persist across sessions by default

commands/e.md Normal file

@@ -0,0 +1,25 @@
---
description: "[Express] Explain code - alias for /explain-code"
allowed-tools: Read, Glob, Grep
---
# Express Explain (/e)
Quick alias for code explanation. Uses the same functionality as `/explain-code`.
## Usage
- `/e` - Explain current file/function
- `/e <file-path>` - Explain specific file
- `/e <function-name>` - Explain specific function
- `/e architecture` - Explain overall architecture
$ARGUMENTS
Provide clear explanation including:
1. Purpose and functionality
2. How it works (step by step)
3. Key algorithms or patterns used
4. Dependencies and side effects
5. Usage examples
Adjust explanation depth based on code complexity.

commands/f.md Normal file

@@ -0,0 +1,23 @@
---
description: "[Express] Fix issues - quick bug fix command"
allowed-tools: Read, Write, Edit, Glob, Grep, Bash
---
# Express Fix (/f)
Quick command to fix identified issues, errors, or bugs.
## Usage
- `/f` - Fix the last mentioned issue
- `/f <error-message>` - Fix specific error
- `/f <file-path>` - Fix issues in specific file
$ARGUMENTS
Fix workflow:
1. Identify the root cause
2. Implement the minimal fix
3. Verify the fix doesn't break existing functionality
4. Suggest any related issues that should be addressed
Focus on minimal, targeted fixes rather than refactoring.

commands/init-agents.md Normal file

@@ -0,0 +1,60 @@
---
description: Initialize AGENTS.md with codebase documentation
allowed-tools: Read, Glob, Grep, Write, Bash, LS
---
# Initialize AGENTS.md
**Check if AGENTS.md exists before proceeding.**
- If it exists: Inform the user and ask whether they want to update it manually
- If it does not exist: Create a comprehensive AGENTS.md in the project root
## Required Sections
### 1. Project Overview
- Brief project description
- Main technologies and frameworks
- Project structure overview
### 2. Build & Run Commands
- Install dependencies
- Build project
- Run project
- Run tests
### 3. Code Style & Conventions
- Naming conventions (files, functions, variables, classes)
- Code formatting rules
- Import/export patterns
- Error handling patterns
### 4. Architecture Guidelines
- Directory structure explanation
- Key design patterns
- Module organization
### 5. Agent Instructions
- Key directives for AI agents
- Important constraints (MUST, NEVER, ALWAYS)
- Project-specific rules
### 6. Progress Management (AGENTS_PROGRESS.md)
- Create AGENTS_PROGRESS.md for tracking work progress
- Keep only the 5 most recent items (including completed ones)
- Format: `[status] Task description (timestamp)`
- Statuses: `[ ]` Pending, `[~]` In Progress, `[x]` Completed
- Clean up old items to prevent document bloat
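A hypothetical AGENTS_PROGRESS.md following this format might look like:
```
[x] Scaffold project structure (2025-11-28 09:10)
[x] Add user authentication module (2025-11-29 14:32)
[~] Write integration tests for the API layer (2025-11-30 08:15)
[ ] Document deployment workflow
```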
## Analysis Steps
1. Scan config files: package.json, Cargo.toml, pyproject.toml, go.mod, pom.xml
2. Check README.md, CONTRIBUTING.md for context
3. Analyze directory structure
4. Check .editorconfig, .prettierrc, eslint config
5. Identify test frameworks and patterns
6. Look for CI/CD configurations
## Output
Create AGENTS.md in project root with all gathered information, formatted clearly and concisely.

commands/init.md Normal file

@@ -0,0 +1,28 @@
---
description: "[Express] Initialize AGENTS.md - alias for /init-agents"
allowed-tools: Read, Write, Glob, Grep
---
# Express Init (/init)
Quick alias for AGENTS.md initialization. Uses the same functionality as `/init-agents`.
## Usage
- `/init` - Initialize AGENTS.md for current project
- `/init minimal` - Create minimal AGENTS.md
- `/init full` - Create comprehensive AGENTS.md
$ARGUMENTS
Initialize workflow:
1. Detect project type and structure
2. Analyze existing codebase patterns
3. Generate appropriate AGENTS.md content
4. Include relevant context for Claude Code
AGENTS.md sections:
- Project overview
- Tech stack
- Key conventions
- Important files
- Testing approach

commands/r.md Normal file

@@ -0,0 +1,24 @@
---
description: "[Express] Review PR - alias for /review-pr"
allowed-tools: Read, Glob, Grep, Task
---
# Express Review (/r)
Quick alias for PR/code review. Uses the same functionality as `/review-pr`.
## Usage
- `/r` - Review current changes
- `/r <pr-number>` - Review specific PR
- `/r <file-path>` - Review specific file
$ARGUMENTS
Review the code thoroughly for:
1. Logic errors and bugs
2. Security vulnerabilities
3. Performance issues
4. Code style and best practices
5. Test coverage gaps
Provide actionable feedback with specific line references.

commands/rf.md Normal file

@@ -0,0 +1,25 @@
---
description: "[Express] Refactor code - alias for /refactor-code"
allowed-tools: Read, Write, Edit, Glob, Grep
---
# Express Refactor (/rf)
Quick alias for code refactoring. Uses the same functionality as `/refactor-code`.
## Usage
- `/rf` - Suggest refactoring for current code
- `/rf <file-path>` - Refactor specific file
- `/rf extract` - Extract methods/functions
- `/rf simplify` - Simplify complex logic
$ARGUMENTS
Refactoring focus:
1. Improve code readability
2. Reduce complexity
3. Remove duplication (DRY)
4. Improve naming
5. Apply design patterns where appropriate
Maintain existing behavior - no functional changes unless requested.

commands/skill-log.md Normal file

@@ -0,0 +1,88 @@
---
description: View skill activation history and logs
allowed-tools: Read, Glob
---
# Skill Log
View the history of skill activations in the current session.
## Usage
```
/skill-log [options]
```
## Options
- `--tail <n>`: Show last N entries (default: 10)
- `--filter <skill>`: Filter by skill name
- `--verbose`: Show detailed activation context
## Workflow
### Step 1: Check Log Location
Look for skill activation logs in:
- Session memory (current conversation)
- `.claude/logs/skill-activations.json` (if persistent logging enabled)
### Step 2: Parse Log Entries
Each entry contains:
- Timestamp
- Activated skill name
- Trigger reason
- User prompt (truncated)
- Token usage estimate
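Assuming JSON entries in `.claude/logs/skill-activations.json`, a single entry could look like the following sketch (field names are illustrative, not a fixed schema):
```json
{
  "timestamp": "2025-11-30T14:32:15Z",
  "skill": "fastapi-reviewer",
  "trigger": "User prompt contained \"FastAPI\"",
  "prompt": "Review my FastAPI code...",
  "estimatedTokens": 2100
}
```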
### Step 3: Display Results
**Output Format:**
````
## Skill Activation Log
### Recent Activations (Last 10)
| Time | Skill | Trigger | Tokens |
|------|-------|---------|--------|
| 14:32:15 | fastapi-reviewer | "Review my FastAPI code" | ~2.1k |
| 14:28:03 | python-reviewer | "Check this Python file" | ~1.8k |
| 14:15:22 | security-scanner | "Is this code secure?" | ~2.5k |
### Session Summary
- **Total Activations**: 12
- **Most Used**: python-reviewer (4x)
- **Est. Tokens Used**: ~24,000
### Activation Details (--verbose)
**14:32:15 - fastapi-reviewer**
```
Trigger: User prompt contained "FastAPI"
Detection: Found fastapi in requirements.txt
Context: Reviewing routers/users.py
Token estimate: 2,100 tokens
```
````
## Examples
### View Last 5 Activations
```
/skill-log --tail 5
```
### Filter by Skill
```
/skill-log --filter security-scanner
```
### Verbose Output
```
/skill-log --verbose
```
## Notes
- Log is session-scoped by default
- Enable persistent logging in settings for cross-session history
- Useful for understanding token usage patterns
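If persistent logging is exposed through `.claude/settings.json`, enabling it might look like this sketch (key names are hypothetical):
```json
{
  "skillLog": {
    "persistent": true,
    "path": ".claude/logs/skill-activations.json"
  }
}
```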

commands/skill-test.md Normal file

@@ -0,0 +1,84 @@
---
description: Test skill detection and activation logic
allowed-tools: Read, Glob, Grep
---
# Skill Test
Test a skill's detection logic against the current project or sample input.
## Usage
```
/skill-test [skill-name]
```
## Parameters
- `skill-name` (optional): Specific skill to test. If omitted, tests all skills.
## Workflow
### Step 1: Load Skill Definition
Read the specified skill's SKILL.md file from `skills/<skill-name>/SKILL.md`.
### Step 2: Parse Detection Rules
Extract detection criteria from the skill:
- Project detection patterns (files, dependencies)
- WHEN conditions
- WHEN NOT conditions
### Step 3: Test Against Current Project
Check the current project for:
- File patterns (Glob)
- Dependencies (package.json, requirements.txt, etc.)
- Code patterns (Grep)
### Step 4: Report Results
**Output Format:**
```
## Skill Test Results
### [skill-name]
**Detection Status**: ✓ Would Activate / ✗ Would Not Activate
**Matched Criteria:**
- [x] File detected: package.json
- [x] Dependency found: react
- [x] Pattern matched: src/components/**
**Unmatched Criteria:**
- [ ] File not found: next.config.js
- [ ] Dependency missing: next
**WHEN Conditions:**
- "React project review" → Matches current context
**WHEN NOT Conditions:**
- "Next.js specific" → Not triggered (no Next.js detected)
**Confidence Score**: 85%
**Recommendation**: This skill would activate for general React review.
```
## Examples
### Test Specific Skill
```
/skill-test fastapi-reviewer
```
### Test All Skills
```
/skill-test
```
The output shows which skills would activate for the current project.
## Notes
- This is a dry-run test; no actual skill activation occurs
- Useful for debugging custom skills
- Helps understand skill selection logic

commands/st.md Normal file

@@ -0,0 +1,22 @@
---
description: "[Express] Skill test - alias for /skill-test"
allowed-tools: Read, Glob, Grep
---
# Express Skill Test (/st)
Quick alias for skill testing. Uses the same functionality as `/skill-test`.
## Usage
- `/st` - Test skill detection for current context
- `/st <skill-name>` - Test specific skill
- `/st <file-path>` - Test which skills match file
$ARGUMENTS
Test workflow:
1. Analyze input/context
2. Run skill detection logic
3. Show matched skills and confidence scores
4. Display generated prompts (dry run)
5. Validate SKILL.md syntax if testing specific skill

commands/t.md Normal file

@@ -0,0 +1,23 @@
---
description: "[Express] Generate tests - alias for /generate-tests"
allowed-tools: Read, Write, Glob, Grep
---
# Express Test (/t)
Quick alias for test generation. Uses the same functionality as `/generate-tests`.
## Usage
- `/t` - Generate tests for recent changes
- `/t <file-path>` - Generate tests for specific file
- `/t <function-name>` - Generate tests for specific function
$ARGUMENTS
Generate comprehensive tests including:
1. Unit tests for individual functions
2. Edge cases and boundary conditions
3. Error handling scenarios
4. Integration tests where appropriate
Use the project's existing test framework and conventions.

commands/validate-skill.md Normal file

@@ -0,0 +1,167 @@
---
description: Validate custom skill definition for correctness
allowed-tools: Read, Glob
---
# Validate Skill
Validate a custom skill's SKILL.md file for correct structure and format.
## Usage
```
/validate-skill <skill-name|path>
```
## Parameters
- `skill-name`: Name of skill in skills/ directory
- `path`: Direct path to SKILL.md file
## Validation Checks
### 1. YAML Frontmatter
- [ ] Has valid YAML frontmatter (between `---` markers)
- [ ] Contains `name` field
- [ ] Contains `description` field
- [ ] Description uses WHEN/WHAT/WHEN NOT format
### 2. Required Sections
- [ ] Has `# [Skill Name] Skill` heading
- [ ] Has `## Purpose` section
- [ ] Has `## When to Use` section
- [ ] Has `## Workflow` section
### 3. Description Format
- [ ] WHEN: Describes trigger conditions
- [ ] WHAT: Lists capabilities
- [ ] WHEN NOT: Specifies exclusions
### 4. Content Quality
- [ ] Has code examples
- [ ] Has detection rules table
- [ ] Has response template
- [ ] Has integration section
## Output Format
**Valid Skill:**
```
## Skill Validation: my-custom-skill
**Status**: ✓ VALID
### Frontmatter
- [x] Valid YAML syntax
- [x] name: my-custom-skill
- [x] description: WHEN/WHAT/WHEN NOT format ✓
### Required Sections
- [x] Purpose
- [x] When to Use
- [x] Workflow
- [x] Detection Rules
### Content Quality
- [x] Code examples present
- [x] Response template defined
- [x] Integration references
**Result**: Skill is ready for use.
```
**Invalid Skill:**
````
## Skill Validation: my-custom-skill
**Status**: ✗ INVALID
### Errors (3)
1. **Missing WHEN NOT in description**
Line 3: description field should include "WHEN NOT:" clause
```yaml
description: |
WHEN: ...
WHAT: ...
WHEN NOT: ... # Missing!
```
2. **Missing Workflow section**
Expected `## Workflow` heading not found
3. **No code examples**
Skill should include code examples in detection rules
### Warnings (1)
1. **No Response Template**
Consider adding a response template for consistent output
**Result**: Fix 3 errors before using this skill.
````
## Example Validations
### Validate by Name
```
/validate-skill fastapi-reviewer
```
### Validate by Path
```
/validate-skill ./my-skills/custom-reviewer/SKILL.md
```
### Validate All Skills
```
/validate-skill --all
```
## Creating Valid Skills
### Minimum Valid Structure
````markdown
---
name: my-skill
description: |
WHEN: Trigger conditions
WHAT: Capabilities
WHEN NOT: Exclusions
---
# My Skill
## Purpose
Brief description of what this skill does.
## When to Use
- Condition 1
- Condition 2
## Workflow
### Step 1: Analyze
Description of first step.
## Detection Rules
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Issue | Fix | HIGH |
## Response Template
```
## Results
...
```
## Integration
- Related skill 1
- Related skill 2
````
## Notes
- Run validation before deploying custom skills
- Fix all errors; warnings are optional
- Use `--verbose` for detailed syntax checking

plugin.lock.json Normal file

@@ -0,0 +1,233 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:physics91/claude-vibe:",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "e572ae623e0a24bba9132ac2252daed355261347",
"treeHash": "8b9998120095f2044ac4a38bec9adcf4c2c82f3751adc993d2f5a98210bb94a7",
"generatedAt": "2025-11-28T10:27:37.445217Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "claude-vibe",
"description": "A comprehensive Claude Code plugin with intelligent context management, vibe coding support, and context window optimization through MCP/agent selection",
"version": "0.3.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "552a138aa3895332a3ee84d3acfbcebc281aeca100d4fb59ef44aea72ffb127b"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "943e4ad8e1207d0bd12b78d2175b67848678b45698b0f4626ccd0fb6e6070cbe"
},
{
"path": "commands/validate-skill.md",
"sha256": "fdb114510f0f5c2aad4a0f15c68f4be988664d36a831917565a7a06728c473e5"
},
{
"path": "commands/debug.md",
"sha256": "7b3c853fd6b8b923b63e1cd286220ae265455429e1458493f27d150049e1c677"
},
{
"path": "commands/f.md",
"sha256": "8db8be39a5cd0d28e47d6e3556971f83de673540a446eefab721cbca7ebb572d"
},
{
"path": "commands/cs.md",
"sha256": "82d1a9d2a3f566ae6ae21319b5c6e8518860a390a485f1a6db44f7251e4e4b4b"
},
{
"path": "commands/skill-test.md",
"sha256": "2120653002dedb1f8faf65539ac8ddb9ed887f66a8b17af19a90f964cc1d1695"
},
{
"path": "commands/context-status.md",
"sha256": "98a7c6a0dc8e741272b08c5be0c351e0f89678e1bc952943083dc2bb6330856c"
},
{
"path": "commands/t.md",
"sha256": "956f323a9c60491c6d8ba068451c1d311ad4bd8bb0220a832d96dceb153b3479"
},
{
"path": "commands/context-setup.md",
"sha256": "c85cf6e7e445d0228e7d4df332572389e1b6b8c71620fe4f5e0cf5e0a55d9087"
},
{
"path": "commands/init.md",
"sha256": "def879c71438f6668ba90901635612aff5e48bb5965ceb554c11e033151bb6b2"
},
{
"path": "commands/r.md",
"sha256": "9f61671662fce0834a8b25d83fcbc046e18eaf91aa904acbc4ebdd52a8089b4f"
},
{
"path": "commands/rf.md",
"sha256": "9a7bc1334e60c0de59c03d6d83d545ad78d31ba91edac2e9a7ebd39803f62164"
},
{
"path": "commands/d.md",
"sha256": "36f029ca0b1161f1f8118666ce5c859bf5df7d19b41f068715652c66fff63449"
},
{
"path": "commands/init-agents.md",
"sha256": "d1203e478b4f94c3a28e40b559ddee0bcbc322e89eaeb8732340ac01a986a712"
},
{
"path": "commands/a.md",
"sha256": "c25896909594c4356b7d980f4c310c3c3e045156fe9af686d301a41e0954728e"
},
{
"path": "commands/st.md",
"sha256": "90c2df816fde5648569c7b8acd31ad92fcd564a6881bb53f5eef8e741be138c9"
},
{
"path": "commands/e.md",
"sha256": "0f742a0c509c6bfbfe2315aac2381f0742e47d3a9810fdf59140714867dbc38d"
},
{
"path": "commands/skill-log.md",
"sha256": "054903aa8e57e806cf63f0fefa0d0b60b60b80eaa42a5c1d5ebd60f281fdabc8"
},
{
"path": "skills/context-manager/SKILL.md",
"sha256": "0dc633ffe56a78120b008fa15824c2f3f6f66c424fcd21368d15e2cb68f5b823"
},
{
"path": "skills/django-reviewer/SKILL.md",
"sha256": "ca2cd86ea64db22b4c13757995cce199f64ff317fb2a1b4ea249942d1e71b596"
},
{
"path": "skills/kotlin-multiplatform-reviewer/SKILL.md",
"sha256": "a9a442022e3baf651c826e1fde1fe8d1b9d9cab7b9f4770abc64b8b1f808708f"
},
{
"path": "skills/test-generator/SKILL.md",
"sha256": "5e012f3b482b860abef87add79db5bc9c5bde8fcbe1c4d39c9eae8496e708fc2"
},
{
"path": "skills/infra-security-reviewer/SKILL.md",
"sha256": "9e3ff3d3d834b31af14a0460aa4187a6866f9afbd3c92ab79f086f47d44720d8"
},
{
"path": "skills/k8s-reviewer/SKILL.md",
"sha256": "cf266fa8494891c409a3102684e838748cb4cbc83530553ade4963272699b755"
},
{
"path": "skills/docker-reviewer/SKILL.md",
"sha256": "c9a1cee3cc35ffd6ea26f1b780b2c152ae1b23b8a9e83983b5d53c9f7ce5fbcf"
},
{
"path": "skills/schema-reviewer/SKILL.md",
"sha256": "77a63b55500281cccd38fc00101df4fa3cde4ed17293ee42e8a8511631a55b27"
},
{
"path": "skills/kotlin-spring-reviewer/SKILL.md",
"sha256": "ef77cbb45a0fe8e3e619718358934e0f52c4f461e3ecf3b7393f8edc3fd636cc"
},
{
"path": "skills/terraform-reviewer/SKILL.md",
"sha256": "07a4916fef6efeac5ac21df6fd6e9a77fba3af0f5d07e9716eb2b83efbc19814"
},
{
"path": "skills/api-documenter/SKILL.md",
"sha256": "6cffc4c8fffb738b256cc46a6350037d5745f8f856186ca4af98c53f04adc907"
},
{
"path": "skills/sql-optimizer/SKILL.md",
"sha256": "c6e9c549743361d05bf1b3840efbfc574b98c72884d6e1292d25f412125309d1"
},
{
"path": "skills/python-data-reviewer/SKILL.md",
"sha256": "095909f592a6ed616217c9b2026a606961793f593d899b8ec0735f91123e74f8"
},
{
"path": "skills/orm-reviewer/SKILL.md",
"sha256": "570bec8565086ce2d7b68d45f45c62744225512e24699524dadc4ef1b37cc2f9"
},
{
"path": "skills/fastapi-reviewer/SKILL.md",
"sha256": "c57736f9a6ce9bea4f9ccac3efff5da6b8776bbba9305395f42ed8284e1a56bb"
},
{
"path": "skills/prompt-clarifier/SKILL.md",
"sha256": "7c1a6cd7adf3dc5deee3eb70d8756342bc7658594701004567d6a4e5bf66de47"
},
{
"path": "skills/perf-analyzer/SKILL.md",
"sha256": "429c5ddac6a77b7d1d66a84ec21730921a46bf4fc6e521376861f24fec9a695d"
},
{
"path": "skills/go-reviewer/SKILL.md",
"sha256": "d4ee2b448eab0cd143d701ecc2eb19b3c295881dc1170253f9358df832944374"
},
{
"path": "skills/rust-api-reviewer/SKILL.md",
"sha256": "4078f297a29ee5eb8284e4ab42dcf7f82692953eec3de2347b06e24fb3b632bd"
},
{
"path": "skills/coverage-analyzer/SKILL.md",
"sha256": "ecc07b4d70d17c5c4fe157deca8db0ef44d3ab3ae0d749fb7ea6bb4e6105fed9"
},
{
"path": "skills/nextjs-reviewer/SKILL.md",
"sha256": "0f6038c0f6de0a319d3b71c4c0def381e41269908666e7ec689171906dcea20f"
},
{
"path": "skills/go-api-reviewer/SKILL.md",
"sha256": "fde7df49d161b0aa946078a720d58d081a7c19a16fca26e5ba2d932a6a09c788"
},
{
"path": "skills/flask-reviewer/SKILL.md",
"sha256": "e0b882d9947b14b867c0e31aa909903aa5331c89141854ccf5222703055543b9"
},
{
"path": "skills/readme-generator/SKILL.md",
"sha256": "e9bb66012c8034a9681dfbf38725b7135827fb12687fafc2a58473c576a16924"
},
{
"path": "skills/rust-reviewer/SKILL.md",
"sha256": "29ac9619e766b35e338770a559592a9feb922f0dfb89e25daab654382ef740cf"
},
{
"path": "skills/code-reviewer/SKILL.md",
"sha256": "4141239b34911ea6616713d8c6ded403c7914de369b37c249fe4c6cf3c92a800"
},
{
"path": "skills/kotlin-android-reviewer/SKILL.md",
"sha256": "0f3c9f65f02e980f9f5f0dcc2abe438585c4c41c803b12ec5aebfb0670076760"
},
{
"path": "skills/python-reviewer/SKILL.md",
"sha256": "556738447c47d6e61fe714fcab1167a500671a57cb14ca9844cf44d9a189fe1e"
},
{
"path": "skills/security-scanner/SKILL.md",
"sha256": "aba47398cf880ef97f6eaf1ef718c5279ff9ffca1ac7ea940b4d4c30b8cca3bc"
},
{
"path": "skills/migration-checker/SKILL.md",
"sha256": "da9cb0cb47cac5229547c9a417fbe366a2e1f876ef6831cd8a9ab20db69d908b"
},
{
"path": "skills/ci-cd-reviewer/SKILL.md",
"sha256": "4e0469ef68cdf870536210fa0229fd96f20cbd64fbc5843dd98e392668e44125"
}
],
"dirSha256": "8b9998120095f2044ac4a38bec9adcf4c2c82f3751adc993d2f5a98210bb94a7"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/api-documenter/SKILL.md Normal file

@@ -0,0 +1,281 @@
---
name: api-documenter
description: |
WHEN: API documentation, JSDoc/TSDoc comments, Props documentation, Storybook story writing
WHAT: Function/class/type JSDoc + React Props table + Markdown API docs + Storybook stories
WHEN NOT: README writing → readme-generator, Code explanation → code-reviewer
---
# API Documenter Skill
## Purpose
Automatically generates API documentation for code including functions, components, and types with JSDoc/TSDoc comments.
## When to Use
- "API docs", "jsdoc", "documentation" requests
- Component Props documentation needed
- Adding comments to functions/classes
- Storybook story generation
## Workflow
### Step 1: Select Documentation Target
**AskUserQuestion:**
```
"What code to document?"
Options:
- Specific file/function
- Undocumented export functions
- React component Props
- All public APIs
```
### Step 2: Select Documentation Type
**AskUserQuestion:**
```
"What format to generate?"
Options:
- JSDoc/TSDoc comments
- Markdown API docs
- Storybook stories
- All of the above
multiSelect: true
```
## Documentation Templates
### JSDoc/TSDoc Comments
**Function:**
```typescript
/**
* Formats user data for display.
*
* @param user - User object to format
* @param options - Formatting options
* @param options.locale - Locale setting (default: 'en-US')
* @param options.includeAge - Include age (default: false)
*
* @returns Formatted user string
*
* @example
* ```typescript
* const formatted = formatUser({ name: 'John', age: 30 })
* // Returns: 'John'
*
* const withAge = formatUser({ name: 'John', age: 30 }, { includeAge: true })
* // Returns: 'John (30)'
* ```
*
* @throws {ValidationError} When user object is invalid
* @see {@link User} User type definition
* @since 1.0.0
*/
export function formatUser(user: User, options?: FormatOptions): string
```
**Interface:**
```typescript
/**
* User information interface
*
* @interface User
* @property {string} id - Unique identifier (UUID)
* @property {string} name - User name
* @property {string} email - Email address
* @property {number} [age] - Age (optional)
* @property {UserRole} role - User role
*/
interface User {
id: string
name: string
email: string
age?: number
role: UserRole
}
```
**Class:**
```typescript
/**
* API client for REST communication
*
* @class ApiClient
* @example
* ```typescript
* const client = new ApiClient({ baseUrl: 'https://api.example.com' })
* const users = await client.get<User[]>('/users')
* ```
*/
class ApiClient {
/**
* Creates ApiClient instance
* @param config - Client configuration
*/
constructor(config: ApiClientConfig) {}
/**
* Performs GET request
* @template T - Response type
* @param endpoint - API endpoint
* @returns Response data
*/
async get<T>(endpoint: string): Promise<T> {}
}
```
### React Component Documentation
**Props Interface:**
```typescript
/**
* Button component Props
*/
interface ButtonProps {
/** Button content */
children: React.ReactNode
/** Style variant @default 'primary' */
variant?: 'primary' | 'secondary' | 'outline' | 'ghost'
/** Button size @default 'medium' */
size?: 'small' | 'medium' | 'large'
/** Disabled state @default false */
disabled?: boolean
/** Loading state @default false */
loading?: boolean
/** Click event handler */
onClick?: (event: React.MouseEvent<HTMLButtonElement>) => void
}
/**
* Button component with various styles and sizes
*
* @component
* @example
* ```tsx
* <Button onClick={handleClick}>Click me</Button>
* <Button variant="secondary" size="large">Large Button</Button>
* <Button loading disabled>Processing...</Button>
* ```
*/
export function Button({ children, variant = 'primary', ...props }: ButtonProps)
```
### Markdown API Docs
```markdown
## formatUser
Formats user data for display.
### Signature
\`\`\`typescript
function formatUser(user: User, options?: FormatOptions): string
\`\`\`
### Parameters
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `user` | `User` | Yes | - | User object to format |
| `options.locale` | `string` | No | `'en-US'` | Locale setting |
### Returns
`string` - Formatted user string
### Example
\`\`\`typescript
const formatted = formatUser({ name: 'John', age: 30 })
\`\`\`
```
### Storybook Stories
```typescript
import type { Meta, StoryObj } from '@storybook/react'
import { Button } from './Button'
const meta: Meta<typeof Button> = {
title: 'Components/Button',
component: Button,
tags: ['autodocs'],
argTypes: {
variant: {
control: 'select',
options: ['primary', 'secondary', 'outline', 'ghost'],
},
size: {
control: 'select',
options: ['small', 'medium', 'large'],
},
},
}
export default meta
type Story = StoryObj<typeof Button>
/** Default button style */
export const Default: Story = {
args: { children: 'Button', variant: 'primary' },
}
/** Primary variant for main actions */
export const Primary: Story = {
args: { children: 'Primary', variant: 'primary' },
}
/** Various sizes */
export const Sizes: Story = {
render: () => (
<div style={{ display: 'flex', gap: '1rem' }}>
<Button size="small">Small</Button>
<Button size="medium">Medium</Button>
<Button size="large">Large</Button>
</div>
),
}
```
## Response Template
```
## API Documentation Generated
**Target**: src/components/Button.tsx
### JSDoc Comments
- ButtonProps interface: 7 properties documented
- Button component: Fully documented
### Markdown Docs
- File: docs/components/Button.md
- Sections: Props, Usage, Accessibility
### Storybook
- File: src/components/Button.stories.tsx
- Stories: 6 (Default, Primary, Secondary, Sizes, Loading, Disabled)
### Statistics
| Item | Count |
|------|-------|
| Documented Props | 7 |
| Code Examples | 5 |
| Stories | 6 |
```
## Best Practices
1. **Consistent Style**: Same documentation style across project
2. **Include Examples**: Usage examples for all public APIs
3. **Type Accuracy**: Match TypeScript types with documentation
4. **Keep Updated**: Update docs when code changes
5. **Accessibility Info**: Include a11y information for components
## Integration
- `readme-generator` skill: README API section
- `/explain-code` command: Code understanding
- `code-reviewer` skill: Documentation quality check
## Notes
- Merges or overwrites if existing docs present
- Auto-infers from TypeScript types
- Excludes @internal tagged code

skills/ci-cd-reviewer/SKILL.md Normal file

@@ -0,0 +1,389 @@
---
name: ci-cd-reviewer
description: |
WHEN: CI/CD pipeline review, GitHub Actions, GitLab CI, Jenkins, build optimization
WHAT: Pipeline structure + Job optimization + Security scanning + Caching strategy + Deployment patterns
WHEN NOT: Kubernetes → k8s-reviewer, Terraform → terraform-reviewer
---
# CI/CD Reviewer Skill
## Purpose
Reviews CI/CD pipelines for structure, security, optimization, and best practices.
## When to Use
- GitHub Actions workflow review
- GitLab CI pipeline review
- Jenkins pipeline review
- Build optimization
- Deployment strategy review
## Project Detection
- `.github/workflows/*.yml`
- `.gitlab-ci.yml`
- `Jenkinsfile`
- `azure-pipelines.yml`
- `.circleci/config.yml`
## Workflow
### Step 1: Analyze Project
```
**Platform**: GitHub Actions
**Triggers**: push, pull_request
**Jobs**: build, test, deploy
**Environments**: staging, production
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full CI/CD review (recommended)
- Pipeline structure
- Security and secrets
- Caching and optimization
- Deployment strategy
multiSelect: true
```
## Detection Rules
### GitHub Actions
#### Workflow Structure
| Check | Recommendation | Severity |
|-------|----------------|----------|
| All jobs sequential | Parallelize independent jobs | MEDIUM |
| No job dependencies | Add needs for proper order | HIGH |
| Duplicate steps | Extract to composite action | MEDIUM |
| No concurrency control | Add concurrency group | MEDIUM |
```yaml
# BAD: Sequential, no optimization
name: CI
on: push
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: npm install
- run: npm run build
- run: npm test
- run: npm run lint
- run: docker build .
- run: docker push
# GOOD: Parallel jobs with dependencies
name: CI
on:
push:
branches: [main]
pull_request:
branches: [main]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- run: npm run lint
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- run: npm test
build:
needs: [lint, test] # Run after lint and test pass
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- run: npm ci
- run: npm run build
- uses: actions/upload-artifact@v4
with:
name: build
path: dist/
deploy:
needs: build
if: github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
environment: production
steps:
- uses: actions/download-artifact@v4
with:
name: build
- run: ./deploy.sh
```
#### Security
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Secrets in plain text | Use secrets context | CRITICAL |
| No permissions defined | Add explicit permissions | HIGH |
| Third-party actions unpinned | Pin to SHA | HIGH |
| No environment protection | Use environments | MEDIUM |
```yaml
# BAD: Security issues
name: Deploy
on: push
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- run: |
curl -X POST https://api.example.com \
-H "Authorization: Bearer ${{ secrets.API_KEY }}"
- uses: some-org/some-action@main # Unpinned!
# GOOD: Secure workflow
name: Deploy
on:
push:
branches: [main]
permissions:
contents: read
id-token: write # For OIDC
jobs:
deploy:
runs-on: ubuntu-latest
environment: production # Requires approval
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502 # v4.0.2
with:
role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
aws-region: us-east-1
- name: Deploy
run: ./deploy.sh
env:
DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```
#### Caching
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No dependency caching | Add cache action | HIGH |
| No Docker layer cache | Use buildx cache | MEDIUM |
| Cache key not specific | Include lockfile hash | MEDIUM |
```yaml
# GOOD: Comprehensive caching
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# Node.js caching
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
# Or manual cache
- uses: actions/cache@v4
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
# Docker layer caching
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
with:
context: .
push: true
tags: myapp:latest
cache-from: type=gha
cache-to: type=gha,mode=max
```
### GitLab CI
```yaml
# .gitlab-ci.yml
stages:
- lint
- test
- build
- deploy
variables:
NODE_VERSION: "20"
.node_template: &node_template
image: node:${NODE_VERSION}-alpine
cache:
key:
files:
- package-lock.json
paths:
- node_modules/
before_script:
- npm ci --cache .npm --prefer-offline
lint:
<<: *node_template
stage: lint
script:
- npm run lint
test:
<<: *node_template
stage: test
script:
- npm test
coverage: '/Coverage: \d+\.\d+%/'
artifacts:
reports:
junit: junit.xml
coverage_report:
coverage_format: cobertura
path: coverage/cobertura-coverage.xml
build:
<<: *node_template
stage: build
script:
- npm run build
artifacts:
paths:
- dist/
expire_in: 1 week
deploy_staging:
stage: deploy
environment:
name: staging
url: https://staging.example.com
script:
- ./deploy.sh staging
only:
- main
deploy_production:
stage: deploy
environment:
name: production
url: https://example.com
script:
- ./deploy.sh production
when: manual
only:
- main
```
### Deployment Strategies
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Direct to production | Add staging environment | HIGH |
| No rollback plan | Add rollback mechanism | HIGH |
| No health checks | Add post-deploy verification | MEDIUM |
```yaml
# Blue-Green / Canary with GitHub Actions
deploy:
runs-on: ubuntu-latest
environment: production
steps:
- name: Deploy canary (10%)
run: |
kubectl set image deployment/app app=myapp:${{ github.sha }}
kubectl rollout status deployment/app --timeout=5m
- name: Verify canary
run: |
sleep 60
./verify-deployment.sh
- name: Promote to 100%
if: success()
run: kubectl scale deployment/app --replicas=10
- name: Rollback on failure
if: failure()
run: kubectl rollout undo deployment/app
```
## Response Template
```
## CI/CD Review Results
**Project**: [name]
**Platform**: GitHub Actions
**Jobs**: 4 | **Workflows**: 2
### Pipeline Structure
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | ci.yml | All jobs run sequentially |
### Security
| Status | File | Issue |
|--------|------|-------|
| HIGH | deploy.yml | Actions not pinned to SHA |
### Caching
| Status | File | Issue |
|--------|------|-------|
| HIGH | ci.yml | No dependency caching |
### Deployment
| Status | File | Issue |
|--------|------|-------|
| HIGH | deploy.yml | No staging environment |
### Recommended Actions
1. [ ] Parallelize lint and test jobs
2. [ ] Pin all actions to commit SHA
3. [ ] Add npm caching
4. [ ] Add staging deployment step
```
## Best Practices
1. **Structure**: Parallel jobs, proper dependencies
2. **Security**: Pin actions, use OIDC, minimal permissions
3. **Caching**: Cache dependencies and Docker layers
4. **Deployment**: Staging → Production with approvals
5. **Monitoring**: Add status checks and notifications
## Integration
- `docker-reviewer`: Container build steps
- `k8s-reviewer`: Kubernetes deployments
- `security-scanner`: SAST/DAST in pipeline

skills/code-reviewer/SKILL.md Normal file

@@ -0,0 +1,110 @@
---
name: code-reviewer
description: |
WHEN: Code review, quality check, code smell detection, refactoring suggestions
WHAT: Complexity analysis + code smell list + severity-based issues + improvement suggestions
WHEN NOT: Next.js specific → nextjs-reviewer, Security → security-scanner, Performance → perf-analyzer
---
# Code Reviewer Skill
## Purpose
Analyzes code quality, detects code smells, and suggests improvements.
## When to Use
- Code review requests
- Code quality, code smell mentions
- Post-implementation review
- Pre-merge PR review
## Workflow
### Step 1: Review Scope
**AskUserQuestion:**
```
"What code should I review?"
Options:
- Current changes (git diff)
- Specific file/folder
- Full project scan
- Recent commits
```
### Step 2: Review Focus
**AskUserQuestion:**
```
"What should I focus on?"
Options:
- Full quality check (recommended)
- Bugs/Logic errors
- Code style/Readability
- Performance issues
- Security vulnerabilities
multiSelect: true
```
### Step 3: Analysis
- **Complexity**: Cyclomatic, Cognitive
- **Duplication**: DRY violations
- **Naming**: Variable/function naming quality
- **Structure**: Function length, nesting depth, parameter count
## Detection Rules
### Code Smells
| Smell | Threshold | Severity |
|-------|-----------|----------|
| Long Function | > 50 lines | MEDIUM |
| Deep Nesting | > 3 levels | HIGH |
| Magic Numbers | Hardcoded numbers | LOW |
| Long Parameter List | > 4 params | MEDIUM |
| God Object | > 20 methods | HIGH |
| Duplicate Code | > 10 lines | HIGH |
### Naming Conventions
| Type | Pattern |
|------|---------|
| Function | camelCase, verb prefix |
| Variable | camelCase, noun |
| Constant | UPPER_SNAKE_CASE |
| Class | PascalCase |
| File | kebab-case or PascalCase |
## Response Template
```
## Code Review Results
**Target**: [path]
### CRITICAL (Fix immediately)
- **[Issue]** `file:line`
- Problem: [description]
- Solution: [suggestion]
### HIGH | MEDIUM | LOW
- ...
### Positive Patterns
- [Well-written code mentions]
### Summary
- Total issues: X
- Critical: X | High: X | Medium: X | Low: X
```
## Best Practices
1. Provide specific, actionable feedback with solutions
2. Group by severity: Critical > High > Medium > Low
3. Include positive feedback
4. Provide copy-paste ready code fixes
5. Respect project conventions
## Integration
- `/analyze-code` command
- `security-scanner` skill
- `perf-analyzer` skill
## Notes
- Reviews are suggestions; final decisions are the developer's
- Maintain consistency with existing project conventions
- Use with automated linters (ESLint, Prettier)

skills/context-manager/SKILL.md Normal file

@@ -0,0 +1,201 @@
---
name: context-manager
description: |
WHEN: Context setup, token optimization, MCP/agent configuration requests
WHAT: Project analysis + Preset recommendation + MCP server selection + Agent activation + Config generation
WHEN NOT: Status check only → /context-status command
---
# Context Manager Skill
## Purpose
Optimizes Claude Code's context window by interactively selecting MCP servers, Custom Agents, and Slash Commands per project and generating configuration files.
## When to Use
Activate when:
1. User requests "context setup", "context management", "token optimization"
2. `/context-setup` command is executed
3. Starting work on a new project (auto-detect and recommend)
## Workflow
### Step 1: Analyze Project
Analyze project root files to detect project type.
Analysis targets:
- `package.json` (Node.js/JavaScript)
- `requirements.txt`, `pyproject.toml` (Python)
- `go.mod` (Go)
- `Cargo.toml` (Rust)
- `pom.xml`, `build.gradle` (Java)
### Step 2: Recommend Preset
Recommend preset based on detected project type.
**AskUserQuestion:**
```
"Select a context preset for your project"
Options:
- Web Development - React/Vue/Next.js (~28k saved)
- Python Web - FastAPI/Django/Flask (~30k saved)
- Go Backend - Gin/Echo/Fiber (~28k saved)
- Rust Systems - Actix/Axum (~32k saved)
- DevOps - Docker/K8s/Terraform (~25k saved)
- Data Science - ML/AI/Data (~30k saved)
- Full Stack - Web + API + DB (~20k saved)
- Minimal - Core tools only (~45k saved)
- Custom - Select manually
```
### Step 3: Custom Configuration
If Custom is selected, choose by category.
**MCP Server Selection:**
```
AskUserQuestion: "Select MCP servers to enable"
Options:
- GitHub (PR/issue management) - ~8k tokens
- Playwright (browser automation) - ~12k tokens
- Brave Search (web search) - ~3k tokens
- Select all
multiSelect: true
```
**Agent Category Selection:**
```
AskUserQuestion: "Which agent categories do you need?"
Options:
- Frontend (React, Vue, Next.js, CSS)
- Python (FastAPI, Django, Flask, Data)
- Go/Rust (Go API, Rust Systems)
- Backend (API, DB, Security)
- DevOps (Docker, K8s, Terraform, CI/CD)
- Database (SQL, ORM, Migrations)
- Mobile (iOS, Android, Flutter)
- AI/ML (ML, DL, NLP)
multiSelect: true
```
### Step 4: Generate Config Files
Create based on selections:
1. **`.claude/context-profile.json`**: Project context settings
2. **`.claude/.mcp.json`**: MCP server settings (selected servers only)
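A generated `.claude/.mcp.json` containing only the selected servers might look roughly like the sketch below (the server commands shown are illustrative, not prescribed by this skill):
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"]
    }
  }
}
```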
### Step 5: Restart Instructions
MCP settings apply after session restart.
```
Setup complete!
Generated files:
- .claude/context-profile.json
- .claude/.mcp.json
Active MCP servers: github, playwright, brave-search
Active agents: react-expert, css-expert, nodejs-expert
Estimated token savings: ~28,000 tokens (14%)
Restart Claude Code to apply MCP settings.
```
## Response Templates
### Project Analysis Result
```
Project analyzed.
**Detected project type**: Web Development (React + TypeScript)
**Confidence**: 85%
**Recommended preset**: web-dev
**Estimated token savings**: ~28,000 tokens
[Present preset selection with AskUserQuestion]
```
### Setup Complete
```
Context setup complete!
**Profile**: Web Development
**Project**: G:\ai-dev\my-project
### Active MCP Servers (3/6)
- github
- playwright
- brave-search
### Active Agents (5/55)
- react-expert
- css-expert
- nodejs-expert
- frontend-optimizer
- ui-ux-designer
### Generated Files
- .claude/context-profile.json
- .claude/.mcp.json
### Estimated Token Savings
~28,000 tokens (14% saved)
Restart Claude Code to apply MCP settings.
To change settings, run `/context-setup`.
```
### Current Status (/context-status)
```
## Current Context Status
**Profile**: Web Development
**Project**: G:\ai-dev\my-project
### MCP Servers (3/6 active)
[x] github
[x] playwright
[x] brave-search
[ ] filesystem (inactive)
[ ] openrouter (inactive)
[ ] context7 (inactive)
### Agents (5/55 active)
[x] react-expert, css-expert, nodejs-expert, frontend-optimizer, ui-ux-designer
### Estimated Token Savings
~28,000 tokens
```
## Best Practices
1. **Analyze first**: Analyze project files before asking questions
2. **Provide recommendations**: Recommend appropriate preset based on analysis
3. **Keep options concise**: Keep AskUserQuestion options to 4 or fewer
4. **Show token savings**: Clearly communicate impact of each choice
5. **Restart notice**: Always mention restart required for MCP changes
## Integration
This skill integrates with:
- `SessionStart` hook: Check project profile existence and notify
- `/context-setup` command: Activate this skill
- `/context-status` command: Display current status
## Available Presets
| Preset | Description | MCP Servers | Token Savings |
|--------|-------------|-------------|---------------|
| minimal | Core tools only | None | ~45k |
| web-dev | React/Vue/Next.js | github, playwright, brave-search | ~28k |
| python-web | FastAPI/Django/Flask | github, brave-search, context7 | ~30k |
| go-backend | Gin/Echo/Fiber/Chi | github, brave-search, context7 | ~28k |
| rust-systems | Actix-web/Axum | github, brave-search, context7 | ~32k |
| devops | Docker/K8s/Terraform | github, brave-search, context7 | ~25k |
| api-dev | Backend API | github, brave-search, context7 | ~25k |
| data-science | ML/AI/Data | brave-search, context7, github | ~30k |
| full-stack | Web + API + DB | github, playwright, brave-search, context7 | ~20k |
## Notes
- Project `.claude/.mcp.json` takes precedence over global settings
- Agent enable/disable is handled via system prompt instructions
- Config files can be committed to git for team sharing

skills/coverage-analyzer/SKILL.md Normal file

@@ -0,0 +1,207 @@
---
name: coverage-analyzer
description: |
WHEN: Coverage analysis, finding untested files, test prioritization, coverage gap identification
WHAT: Line/Branch/Function coverage + untested file list + priority by importance + improvement roadmap
WHEN NOT: Test generation → test-generator, Test quality → code-reviewer
---
# Coverage Analyzer Skill
## Purpose
Analyzes project test coverage and identifies areas lacking tests, recommending which code to test first by priority.
## When to Use
- Coverage analysis requests
- Finding untested files
- Coverage improvement guidance
- Verifying critical code test status
## Workflow
### Step 1: Check Coverage Status
```
Analyzing test environment...
**Test Runner**: Jest
**Test Files**: 45
**Coverage Report**: coverage/lcov-report/
**Current Coverage**: 68%
```
### Step 2: Select Analysis Type
**AskUserQuestion:**
```
"What coverage analysis do you need?"
Options:
- Overall coverage status
- Find untested files
- Low coverage files
- Critical code test status
- Coverage improvement priorities
```
## Analysis Types
### Untested File Detection
```
Targets:
- src/**/*.ts
- src/**/*.tsx
- lib/**/*.ts
Excluded:
- *.d.ts (type definitions)
- *.test.ts (test files)
- index.ts (barrel files)
- *.config.ts (config files)
```
**Results Format:**
```
## Untested Files (15)
### High Priority (Business Logic)
| File | Lines | Complexity | Recommendation |
|------|-------|------------|----------------|
| src/utils/payment.ts | 120 | HIGH | Needs immediate tests |
| src/services/auth.ts | 85 | HIGH | Needs immediate tests |
### Medium Priority (Utilities)
| File | Lines | Complexity | Recommendation |
|------|-------|------------|----------------|
| src/utils/format.ts | 45 | MEDIUM | Tests recommended |
### Low Priority (Simple Code)
| File | Lines | Complexity | Recommendation |
|------|-------|------------|----------------|
| src/constants/index.ts | 20 | LOW | Optional |
```
### Coverage Metrics
| Metric | Description | Target |
|--------|-------------|--------|
| Line Coverage | Executed lines ratio | > 80% |
| Branch Coverage | Executed branches ratio | > 75% |
| Function Coverage | Executed functions ratio | > 85% |
| Statement Coverage | Executed statements ratio | > 80% |
**Analysis Results:**
```
### Overall Status
| Metric | Current | Target | Status |
|--------|---------|--------|--------|
| Lines | 68% | 80% | WARNING |
| Branches | 55% | 75% | CRITICAL |
| Functions | 72% | 85% | WARNING |
### Low Coverage Files (< 50%)
| File | Lines | Branches | Impact |
|------|-------|----------|--------|
| src/services/payment.ts | 23% | 10% | HIGH |
| src/components/Form.tsx | 45% | 30% | MEDIUM |
```
### Critical Code Analysis
**Priority Criteria:**
| Criteria | Description | Weight |
|----------|-------------|--------|
| Business Logic | Payment, auth, data processing | HIGH |
| Usage Frequency | Import count | MEDIUM |
| Complexity | Cyclomatic Complexity | MEDIUM |
| Change Frequency | git log based | LOW |
```
### CRITICAL (Must Test)
| File | Importance | Coverage | Recommendation |
|------|------------|----------|----------------|
| src/services/payment.ts | CRITICAL | 23% | Add tests immediately |
| src/utils/auth.ts | CRITICAL | 0% | Create test file |
### HIGH (Should Test)
| File | Importance | Coverage | Recommendation |
|------|------------|----------|----------------|
| src/hooks/useCart.ts | HIGH | 45% | Add edge cases |
```
### Improvement Priorities
```
## Coverage Improvement Priorities
### Priority 1: High Importance + Low Coverage
Needs immediate tests
| File | Current | Expected Gain | Effort |
|------|---------|---------------|--------|
| src/services/payment.ts | 23% | +57% → 80% | 3 hours |
| src/utils/auth.ts | 0% | +80% → 80% | 2 hours |
### Priority 2: High Usage Frequency
Frequently used utilities
| File | Current | Imports | Effort |
|------|---------|---------|--------|
| src/utils/format.ts | 45% | 32 | 1 hour |
| src/hooks/useApi.ts | 60% | 28 | 1 hour |
### Priority 3: High Complexity
Bug-prone code
| File | Current | Complexity | Effort |
|------|---------|------------|--------|
| src/components/Form.tsx | 45% | 15 | 2 hours |
### Expected Results
Current: 68%
After Priority 1: 75%
After Priority 2: 80%
After Priority 3: 85%
```
## Response Template
```
## Test Coverage Analysis Results
**Project**: [name]
### Overall Status
| Metric | Current | Target | Gap |
|--------|---------|--------|-----|
| Lines | 68% | 80% | -12% |
| Branches | 55% | 75% | -20% |
| Functions | 72% | 85% | -13% |
### Untested Critical Files (5)
1. `src/services/payment.ts` - Payment logic (CRITICAL)
2. `src/utils/auth.ts` - Auth utilities (HIGH)
3. `src/hooks/useCart.ts` - Cart hook (HIGH)
### Low Coverage Files (< 50%)
1. `src/components/CheckoutForm.tsx` - 45%
2. `src/services/api.ts` - 38%
### Recommended Actions
1. [ ] Add payment.ts unit tests (expected +5% overall)
2. [ ] Create auth.ts test file (expected +3% overall)
3. [ ] Improve CheckoutForm component tests
### Next Steps
Use `test-generator` skill to auto-generate tests for these files.
```
## Best Practices
1. **Meaningful Coverage**: Focus on critical code, not just numbers
2. **Gradual Improvement**: Don't aim for 100% at once
3. **Branch Coverage**: More important than line coverage
4. **Test Quality**: Avoid meaningless tests just for coverage
5. **CI Integration**: Fail build on coverage decrease
## Integration
- `test-generator` skill: Auto-generate missing tests
- `code-reviewer` skill: Test quality review
- `/generate-tests` command: Execute test generation
## Notes
- Uses coverage report files (lcov, coverage.json) if available
- Estimates based on file analysis if no report
- 100% coverage is not the goal, focus on critical code
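When a Jest report is available, the skill can read `coverage/coverage-summary.json` (produced by the `json-summary` reporter); an abbreviated excerpt of that format, with numbers matching the examples above, looks roughly like:
```json
{
  "total": {
    "lines": { "total": 1200, "covered": 816, "pct": 68 },
    "branches": { "total": 400, "covered": 220, "pct": 55 },
    "functions": { "total": 250, "covered": 180, "pct": 72 }
  },
  "src/services/payment.ts": {
    "lines": { "total": 120, "covered": 28, "pct": 23.3 }
  }
}
```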

skills/django-reviewer/SKILL.md Normal file

@@ -0,0 +1,268 @@
---
name: django-reviewer
description: |
WHEN: Django project review, ORM queries, views/templates, admin customization
WHAT: ORM optimization + View patterns + Template security + Admin config + Migration safety
WHEN NOT: FastAPI → fastapi-reviewer, Flask → flask-reviewer, DRF API only → consider api-expert
---
# Django Reviewer Skill
## Purpose
Reviews Django projects for ORM usage, view patterns, security, and best practices.
## When to Use
- Django project code review
- ORM query optimization
- View/template review
- Admin customization review
- Migration safety check
## Project Detection
- `django` in requirements.txt/pyproject.toml
- `manage.py` in project root
- `settings.py` with INSTALLED_APPS
- `urls.py`, `views.py`, `models.py` files
## Workflow
### Step 1: Analyze Project
```
**Django**: 4.2+ / 5.0+
**Database**: PostgreSQL/MySQL/SQLite
**Template Engine**: Django Templates/Jinja2
**REST**: Django REST Framework
**Admin**: Django Admin / Unfold
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Django review (recommended)
- ORM and query optimization
- Views and URL patterns
- Templates and security
- Admin customization
multiSelect: true
```
## Detection Rules
### ORM Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| N+1 query | Use select_related/prefetch_related | CRITICAL |
| .all() in template | Prefetch in view | HIGH |
| .get() without exception handling | Use .filter().first() or get_or_none | LOW |
| Multiple .save() | Use bulk_update/bulk_create | MEDIUM |
```python
# BAD: N+1 query
def get_orders(request):
orders = Order.objects.all()
# Each order.customer triggers a query!
return render(request, "orders.html", {"orders": orders})
# GOOD: select_related for ForeignKey
def get_orders(request):
orders = Order.objects.select_related("customer").all()
return render(request, "orders.html", {"orders": orders})
# GOOD: prefetch_related for ManyToMany/reverse FK
def get_orders(request):
orders = Order.objects.prefetch_related("items").all()
return render(request, "orders.html", {"orders": orders})
# BAD: Multiple saves
for item in items:
item.status = "processed"
item.save()
# GOOD: Bulk update
Item.objects.filter(id__in=item_ids).update(status="processed")
```
### View Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Function view for CRUD | Use Class-Based Views | LOW |
| No permission check | Add @login_required or PermissionMixin | HIGH |
| Business logic in view | Move to service/manager | MEDIUM |
| No pagination | Add Paginator | MEDIUM |
```python
# BAD: Logic in view
def create_order(request):
if request.method == "POST":
# Too much logic here
user = request.user
cart = Cart.objects.get(user=user)
order = Order.objects.create(user=user, total=cart.total)
for item in cart.items.all():
OrderItem.objects.create(order=order, product=item.product)
cart.items.all().delete()
send_confirmation_email(user, order)
return redirect("order_detail", order.id)
# GOOD: Service layer
# services/order_service.py
class OrderService:
@staticmethod
def create_from_cart(user: User) -> Order:
cart = Cart.objects.get(user=user)
order = Order.objects.create(user=user, total=cart.total)
OrderItem.objects.bulk_create([
OrderItem(order=order, product=item.product)
for item in cart.items.all()
])
cart.items.all().delete()
send_confirmation_email.delay(user.id, order.id) # Celery task
return order
# views.py
class CreateOrderView(LoginRequiredMixin, View):
def post(self, request):
order = OrderService.create_from_cart(request.user)
return redirect("order_detail", order.id)
```
### Template Security
| Check | Recommendation | Severity |
|-------|----------------|----------|
| \|safe filter misuse | Only use for trusted content | CRITICAL |
| {% autoescape off %} | Avoid, use mark_safe in Python | HIGH |
| User input in JS | Use json_script filter | HIGH |
| No CSRF token | Add {% csrf_token %} | CRITICAL |
```html
<!-- BAD: XSS vulnerability -->
<div>{{ user_content|safe }}</div>
<!-- GOOD: Auto-escaped (default) -->
<div>{{ user_content }}</div>
<!-- BAD: User data in JS -->
<script>
var data = "{{ user_data }}"; // XSS risk!
</script>
<!-- GOOD: json_script filter -->
{{ user_data|json_script:"user-data" }}
<script>
const data = JSON.parse(
document.getElementById("user-data").textContent
);
</script>
```
### Model Design
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No indexes on filter fields | Add db_index=True | HIGH |
| CharField without max_length | Always specify max_length | MEDIUM |
| No __str__ method | Add for admin display | LOW |
| No Meta ordering | Add default ordering | LOW |
```python
# GOOD: Well-designed model
class Order(models.Model):
user = models.ForeignKey(
User,
on_delete=models.CASCADE,
related_name="orders",
)
status = models.CharField(
max_length=20,
choices=OrderStatus.choices,
default=OrderStatus.PENDING,
db_index=True, # Frequently filtered
)
created_at = models.DateTimeField(auto_now_add=True, db_index=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
ordering = ["-created_at"]
indexes = [
models.Index(fields=["user", "status"]),
]
def __str__(self):
return f"Order {self.id} - {self.user.email}"
```
### Migration Safety
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Non-nullable field without default | Add default or null=True first | CRITICAL |
| Large table ALTER | Use RunSQL with concurrent index | HIGH |
| Data migration in schema migration | Separate into data migration | MEDIUM |
```python
# BAD: Adding non-nullable field
class Migration(migrations.Migration):
operations = [
migrations.AddField(
model_name="order",
name="tracking_number",
field=models.CharField(max_length=100), # Will fail!
),
]
# GOOD: Safe migration strategy
# Step 1: Add nullable field
migrations.AddField(
model_name="order",
name="tracking_number",
field=models.CharField(max_length=100, null=True),
),
# Step 2: Data migration to populate
# Step 3: Make non-nullable with default
```
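A minimal sketch of steps 2 and 3 from the comments above; the app label, migration names, and placeholder value are assumptions for illustration:

```python
# Step 2: data migration to backfill existing rows (its own migration file)
from django.db import migrations

def backfill_tracking_numbers(apps, schema_editor):
    Order = apps.get_model("orders", "Order")
    # Placeholder value; real logic would derive it from business data
    Order.objects.filter(tracking_number__isnull=True).update(tracking_number="UNKNOWN")

class Migration(migrations.Migration):
    dependencies = [("orders", "0016_order_tracking_number")]
    operations = [
        migrations.RunPython(backfill_tracking_numbers, migrations.RunPython.noop),
    ]

# Step 3: in a later migration, drop null=True (add a default if needed)
# migrations.AlterField(
#     model_name="order",
#     name="tracking_number",
#     field=models.CharField(max_length=100, default="UNKNOWN"),
# )
```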
## Response Template
```
## Django Code Review Results
**Project**: [name]
**Django**: 5.0 | **DB**: PostgreSQL | **DRF**: 3.14
### ORM Optimization
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | views.py:45 | N+1 query - missing select_related |
### View Patterns
| Status | File | Issue |
|--------|------|-------|
| HIGH | views.py:23 | Missing LoginRequiredMixin |
### Template Security
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | templates/user.html:12 | Unsafe \|safe filter usage |
### Migrations
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | 0015_add_field.py | Non-nullable field without default |
### Recommended Actions
1. [ ] Add select_related to order queries
2. [ ] Add permission mixins to views
3. [ ] Remove |safe from user content
4. [ ] Fix migration for tracking_number field
```
## Best Practices
1. **ORM**: Always prefetch related objects
2. **Views**: Use CBV, service layer for logic
3. **Security**: CSRF, XSS prevention, permissions
4. **Migrations**: Zero-downtime strategies
5. **Testing**: Factory Boy, pytest-django
## Integration
- `python-reviewer`: General Python patterns
- `security-scanner`: Django security audit
- `sql-optimizer`: Query analysis

View File

@@ -0,0 +1,317 @@
---
name: docker-reviewer
description: |
WHEN: Dockerfile review, multi-stage builds, layer optimization, docker-compose
WHAT: Image optimization + Layer caching + Security scanning + Compose best practices + Build efficiency
WHEN NOT: Kubernetes → k8s-reviewer, Terraform → terraform-reviewer
---
# Docker Reviewer Skill
## Purpose
Reviews Dockerfiles and docker-compose configurations for optimization, security, and best practices.
## When to Use
- Dockerfile code review
- Docker image optimization
- docker-compose.yml review
- Container security audit
- Build time optimization
## Project Detection
- `Dockerfile` in project
- `docker-compose.yml` or `docker-compose.yaml`
- `.dockerignore` file
- `Dockerfile.*` variants
## Workflow
### Step 1: Analyze Project
```
**Base Image**: node:20-alpine
**Build Type**: Multi-stage
**Compose**: v3.8
**Registry**: Docker Hub / ECR / GCR
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Docker review (recommended)
- Dockerfile optimization
- Layer caching strategy
- Security hardening
- docker-compose review
multiSelect: true
```
## Detection Rules
### Image Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Large base image | Use alpine/slim/distroless | HIGH |
| No multi-stage build | Add build stage | MEDIUM |
| Too many layers | Combine RUN commands | MEDIUM |
| Installing dev deps | Separate build/runtime | HIGH |
```dockerfile
# BAD: Large image with dev dependencies
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["node", "dist/index.js"]
# Result: ~1GB image
# GOOD: Multi-stage with alpine
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER node
CMD ["node", "dist/index.js"]
# Result: ~150MB image
```
### Layer Caching
| Check | Recommendation | Severity |
|-------|----------------|----------|
| COPY . before install | Copy package files first | HIGH |
| No .dockerignore | Add .dockerignore | MEDIUM |
| Changing files early | Order by change frequency | MEDIUM |
```dockerfile
# BAD: Cache invalidation on every code change
FROM node:20-alpine
WORKDIR /app
COPY . . # Invalidates cache on ANY change
RUN npm install # Always reinstalls
# GOOD: Leverage layer caching
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./ # Only invalidates on package change
RUN npm ci # Cached if packages unchanged
COPY . . # Code changes don't affect npm cache
RUN npm run build
```
```gitignore
# .dockerignore
node_modules
.git
.gitignore
*.md
.env*
dist
coverage
.nyc_output
```
### Security
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Running as root | Add USER directive | CRITICAL |
| Latest tag | Pin specific version | HIGH |
| Secrets in build | Use build secrets | CRITICAL |
| No health check | Add HEALTHCHECK | MEDIUM |
```dockerfile
# BAD: Security issues
FROM node:latest # Unpinned version
WORKDIR /app
COPY . .
ENV API_KEY=secret123 # Secret in image!
RUN npm install
CMD ["node", "index.js"] # Running as root
# GOOD: Secure Dockerfile
FROM node:20.10-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:20.10-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -g 1001 appgroup && \
adduser -u 1001 -G appgroup -s /bin/sh -D appuser
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --chown=appuser:appgroup . .
USER appuser
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:3000/health || exit 1
EXPOSE 3000
CMD ["node", "index.js"]
```
### Build Secrets (Docker BuildKit)
```dockerfile
# syntax=docker/dockerfile:1.4
FROM node:20-alpine
WORKDIR /app
# Mount secret during build (not stored in layer)
RUN --mount=type=secret,id=npm_token \
NPM_TOKEN=$(cat /run/secrets/npm_token) \
npm ci
# Build command:
# DOCKER_BUILDKIT=1 docker build --secret id=npm_token,src=.npmrc .
```
### RUN Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Multiple RUN for cleanup | Combine in single RUN | MEDIUM |
| No cleanup after install | Remove cache in same layer | MEDIUM |
```dockerfile
# BAD: Multiple layers, cache not cleaned
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean
# GOOD: Single layer with cleanup
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
```
### Docker Compose
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No resource limits | Add deploy.resources | HIGH |
| No health checks | Add healthcheck | MEDIUM |
| Hardcoded config | Use environment variables | MEDIUM |
| No restart policy | Add restart: unless-stopped | MEDIUM |
```yaml
# BAD: Minimal compose
version: '3.8'
services:
app:
build: .
ports:
- "3000:3000"
db:
image: postgres
environment:
POSTGRES_PASSWORD: password123
# GOOD: Production-ready compose
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
environment:
- NODE_ENV=production
- DATABASE_URL=postgresql://user:${DB_PASSWORD}@db:5432/app
depends_on:
db:
condition: service_healthy
restart: unless-stopped
deploy:
resources:
limits:
cpus: '1'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
db:
image: postgres:15-alpine
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: app
volumes:
- postgres_data:/var/lib/postgresql/data
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user -d app"]
interval: 10s
timeout: 5s
retries: 5
volumes:
postgres_data:
```
## Response Template
```
## Docker Review Results
**Project**: [name]
**Base Image**: node:20-alpine
**Build**: Multi-stage | **Compose**: v3.8
### Image Optimization
| Status | File | Issue |
|--------|------|-------|
| HIGH | Dockerfile | Using node:latest (~1GB) |
### Layer Caching
| Status | File | Issue |
|--------|------|-------|
| HIGH | Dockerfile:5 | COPY . before npm install |
### Security
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | Dockerfile | Running as root user |
### Compose
| Status | File | Issue |
|--------|------|-------|
| HIGH | docker-compose.yml | No resource limits |
### Recommended Actions
1. [ ] Switch to node:20-alpine base image
2. [ ] Add multi-stage build
3. [ ] Add USER directive for non-root
4. [ ] Add resource limits in compose
```
## Best Practices
1. **Base Image**: Use alpine/slim/distroless
2. **Multi-stage**: Separate build and runtime
3. **Caching**: Order by change frequency
4. **Security**: Non-root, pinned versions, no secrets
5. **Compose**: Health checks, resource limits
## Integration
- `k8s-reviewer`: Kubernetes deployments
- `security-scanner`: Container security
- `ci-cd-reviewer`: Build pipelines

View File

@@ -0,0 +1,258 @@
---
name: fastapi-reviewer
description: |
WHEN: FastAPI project review, Pydantic models, async endpoints, dependency injection
WHAT: Pydantic validation + Dependency injection + Async patterns + OpenAPI docs + Security
WHEN NOT: Django → django-reviewer, Flask → flask-reviewer, General Python → python-reviewer
---
# FastAPI Reviewer Skill
## Purpose
Reviews FastAPI projects for API design, Pydantic usage, async patterns, and security.
## When to Use
- FastAPI project code review
- Pydantic model review
- API endpoint design review
- Async/await pattern check
- Dependency injection review
## Project Detection
- `fastapi` in requirements.txt/pyproject.toml
- `from fastapi import` imports
- `main.py` with FastAPI() app
- `routers/` directory structure
## Workflow
### Step 1: Analyze Project
```
**FastAPI**: 0.100+
**Pydantic**: v2
**Database**: SQLAlchemy/Tortoise/Prisma
**Auth**: OAuth2/JWT
**Docs**: OpenAPI auto-generated
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full FastAPI review (recommended)
- Pydantic models and validation
- Dependency injection patterns
- Async/await usage
- Security and authentication
multiSelect: true
```
## Detection Rules
### Pydantic Models
| Check | Recommendation | Severity |
|-------|----------------|----------|
| dict instead of model | Use Pydantic BaseModel | MEDIUM |
| Missing Field validation | Add Field constraints | MEDIUM |
| No Config class | Add model_config | LOW |
| Mutable default in Field | Use default_factory | HIGH |
```python
# BAD: Plain dict response
@app.get("/user")
async def get_user() -> dict:
return {"name": "John", "age": 30}
# GOOD: Pydantic model
class UserResponse(BaseModel):
name: str = Field(..., min_length=1, max_length=100)
age: int = Field(..., ge=0, le=150)
model_config = ConfigDict(from_attributes=True)
@app.get("/user")
async def get_user() -> UserResponse:
return UserResponse(name="John", age=30)
# BAD: Mutable default
class Config(BaseModel):
items: list[str] = [] # Shared across instances!
# GOOD: default_factory
class Config(BaseModel):
items: list[str] = Field(default_factory=list)
```
### Dependency Injection
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Repeated code in endpoints | Extract to Depends() | MEDIUM |
| Global state access | Use dependency injection | HIGH |
| No cleanup in deps | Use yield for cleanup | MEDIUM |
| Hardcoded dependencies | Use Depends for testability | MEDIUM |
```python
# BAD: Repeated DB session code
@app.get("/users")
async def get_users():
db = SessionLocal()
try:
return db.query(User).all()
finally:
db.close()
# GOOD: Dependency injection
async def get_db() -> AsyncGenerator[AsyncSession, None]:
async with async_session() as session:
yield session
@app.get("/users")
async def get_users(db: AsyncSession = Depends(get_db)):
result = await db.execute(select(User))
return result.scalars().all()
```
### Async Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| sync def for I/O | Use async def | HIGH |
| Blocking call in async | Use run_in_executor | CRITICAL |
| No async DB driver | Use asyncpg/aiosqlite | HIGH |
| sync file I/O | Use aiofiles | MEDIUM |
```python
# BAD: Blocking call in async
@app.get("/data")
async def get_data():
response = requests.get(url) # Blocks event loop!
return response.json()
# GOOD: Async HTTP client
@app.get("/data")
async def get_data():
async with httpx.AsyncClient() as client:
response = await client.get(url)
return response.json()
# BAD: Sync file read
@app.get("/file")
async def read_file():
with open("data.txt") as f:
return f.read()
# GOOD: Async file read
@app.get("/file")
async def read_file():
async with aiofiles.open("data.txt") as f:
return await f.read()
```
### API Design
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No response_model | Add response_model param | MEDIUM |
| Missing status codes | Add responses param | LOW |
| No tags | Add tags for grouping | LOW |
| Inconsistent naming | Use RESTful conventions | MEDIUM |
```python
# BAD: Minimal endpoint
@app.post("/user")
async def create(data: dict):
return {"id": 1}
# GOOD: Full specification
@app.post(
"/users",
response_model=UserResponse,
status_code=status.HTTP_201_CREATED,
responses={
409: {"model": ErrorResponse, "description": "User exists"},
},
tags=["users"],
summary="Create a new user",
)
async def create_user(
user: UserCreate,
db: AsyncSession = Depends(get_db),
) -> UserResponse:
"""Create a new user with the provided details."""
return await user_service.create(db, user)
```
### Security
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No auth on endpoints | Add Depends(get_current_user) | CRITICAL |
| Secrets in code | Use environment variables | CRITICAL |
| No rate limiting | Add slowapi/fastapi-limiter | HIGH |
| Missing CORS config | Configure CORSMiddleware | HIGH |
```python
# Security setup
from fastapi.security import OAuth2PasswordBearer
from fastapi.middleware.cors import CORSMiddleware
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
async def get_current_user(
token: str = Depends(oauth2_scheme),
db: AsyncSession = Depends(get_db),
) -> User:
user = await verify_token(token, db)
if not user:
raise HTTPException(status_code=401, detail="Invalid token")
return user
# Protected endpoint
@app.get("/me")
async def get_me(user: User = Depends(get_current_user)):
return user
```
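The table above also flags missing CORS configuration; a minimal sketch, with the allowed origin as a placeholder:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Restrict origins explicitly; avoid "*" when allow_credentials=True
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.com"],  # placeholder origin
    allow_credentials=True,
    allow_methods=["GET", "POST", "PUT", "DELETE"],
    allow_headers=["Authorization", "Content-Type"],
)
```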
## Response Template
```
## FastAPI Code Review Results
**Project**: [name]
**FastAPI**: 0.109 | **Pydantic**: v2 | **DB**: SQLAlchemy async
### Pydantic Models
| Status | File | Issue |
|--------|------|-------|
| HIGH | schemas.py:15 | Mutable default in Field |
### Dependency Injection
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | routers/users.py | Repeated DB session code |
### Async Patterns
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | services/external.py:34 | Blocking requests.get() call |
### Security
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | main.py | No CORS configuration |
### Recommended Actions
1. [ ] Replace blocking HTTP calls with httpx async
2. [ ] Add CORS middleware configuration
3. [ ] Extract repeated code to dependencies
4. [ ] Add response_model to all endpoints
```
## Best Practices
1. **Pydantic v2**: Use model_config, Field validators
2. **Async Everything**: DB, HTTP, file I/O
3. **Dependencies**: Extract common logic
4. **Security**: OAuth2, CORS, rate limiting
5. **Documentation**: OpenAPI auto-docs with examples
## Integration
- `python-reviewer`: General Python patterns
- `security-scanner`: API security audit
- `api-documenter`: OpenAPI enhancement

View File

@@ -0,0 +1,286 @@
---
name: flask-reviewer
description: |
WHEN: Flask project review, Blueprint structure, extensions, request handling
WHAT: Blueprint organization + Extension patterns + Request/response handling + Configuration + Testing
WHEN NOT: FastAPI → fastapi-reviewer, Django → django-reviewer, General Python → python-reviewer
---
# Flask Reviewer Skill
## Purpose
Reviews Flask projects for application structure, extension usage, and best practices.
## When to Use
- Flask project code review
- Blueprint structure review
- Extension configuration review
- Request handling patterns
- Flask API design
## Project Detection
- `flask` in requirements.txt/pyproject.toml
- `from flask import Flask` imports
- `app.py` or `__init__.py` with Flask()
- `blueprints/` or `routes/` directory
## Workflow
### Step 1: Analyze Project
```
**Flask**: 3.0+
**Extensions**: Flask-SQLAlchemy, Flask-Login, Flask-WTF
**API**: Flask-RESTful / Flask-RESTX
**Database**: SQLAlchemy
**Template**: Jinja2
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Flask review (recommended)
- Application structure
- Blueprint organization
- Extension configuration
- Security and validation
multiSelect: true
```
## Detection Rules
### Application Factory
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Global app instance | Use application factory | HIGH |
| Config in code | Use config classes | MEDIUM |
| No extension init | Use init_app pattern | MEDIUM |
| Circular imports | Use factory + blueprints | HIGH |
```python
# BAD: Global app instance
# app.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
db = SQLAlchemy(app) # Tight coupling
# GOOD: Application factory
# app/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
def create_app(config_name="default"):
app = Flask(__name__)
app.config.from_object(config[config_name])
db.init_app(app)
from app.routes import main_bp, api_bp
app.register_blueprint(main_bp)
app.register_blueprint(api_bp, url_prefix="/api")
return app
```
### Blueprint Organization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| All routes in one file | Split into blueprints | MEDIUM |
| No URL prefix | Add url_prefix to blueprints | LOW |
| Mixed concerns | Separate by domain | MEDIUM |
| No __init__.py exports | Export blueprint properly | LOW |
```python
# GOOD: Blueprint structure
# app/routes/users.py
from flask import Blueprint, request, jsonify
users_bp = Blueprint("users", __name__, url_prefix="/users")
@users_bp.route("/", methods=["GET"])
def list_users():
return jsonify(users=User.query.all())
@users_bp.route("/<int:user_id>", methods=["GET"])
def get_user(user_id):
user = User.query.get_or_404(user_id)
return jsonify(user=user.to_dict())
# app/routes/__init__.py
from app.routes.users import users_bp
from app.routes.products import products_bp
__all__ = ["users_bp", "products_bp"]
```
### Request Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No input validation | Use marshmallow/pydantic | HIGH |
| request.json without check | Handle None case | MEDIUM |
| No error handlers | Add @app.errorhandler | MEDIUM |
| Sync blocking calls | Consider async or Celery | MEDIUM |
```python
# BAD: No validation
@app.route("/user", methods=["POST"])
def create_user():
data = request.json # Could be None!
user = User(name=data["name"]) # KeyError risk
return jsonify(user.to_dict())
# GOOD: With validation (marshmallow)
from marshmallow import Schema, fields, validate
class UserSchema(Schema):
name = fields.Str(required=True, validate=validate.Length(min=1, max=100))
email = fields.Email(required=True)
user_schema = UserSchema()
@app.route("/user", methods=["POST"])
def create_user():
data = request.get_json()
if not data:
return jsonify(error="No JSON data"), 400
errors = user_schema.validate(data)
if errors:
return jsonify(errors=errors), 400
user = User(**user_schema.load(data))
db.session.add(user)
db.session.commit()
return jsonify(user=user.to_dict()), 201
```
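For the "sync blocking calls" row above, a minimal sketch of offloading slow work to Celery; the broker URL and task body are placeholders:

```python
# app/tasks.py
import os
from celery import Celery

celery = Celery(
    __name__,
    broker=os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379/0"),
)

@celery.task
def send_welcome_email(user_id: int) -> None:
    # Placeholder: load the user and send the email here
    ...

# In the view, enqueue instead of blocking the request:
# send_welcome_email.delay(user.id)
```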
### Configuration
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Secrets in code | Use environment variables | CRITICAL |
| No config classes | Use config hierarchy | MEDIUM |
| DEBUG=True in prod | Environment-based config | CRITICAL |
| No instance folder | Use instance config | LOW |
```python
# config.py
import os
class Config:
SECRET_KEY = os.environ.get("SECRET_KEY") or "dev-key-change-me"
SQLALCHEMY_TRACK_MODIFICATIONS = False
class DevelopmentConfig(Config):
DEBUG = True
SQLALCHEMY_DATABASE_URI = os.environ.get("DEV_DATABASE_URL") or \
"sqlite:///dev.db"
class ProductionConfig(Config):
DEBUG = False
SQLALCHEMY_DATABASE_URI = os.environ.get("DATABASE_URL")
@classmethod
def init_app(cls, app):
# Production-specific initialization
import logging
from logging.handlers import RotatingFileHandler
handler = RotatingFileHandler("app.log", maxBytes=10240, backupCount=10)
handler.setLevel(logging.INFO)
app.logger.addHandler(handler)
config = {
"development": DevelopmentConfig,
"production": ProductionConfig,
"default": DevelopmentConfig,
}
```
### Error Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No custom error pages | Add error handlers | MEDIUM |
| Exception details in response | Hide in production | HIGH |
| No logging | Add structured logging | MEDIUM |
```python
# app/errors.py
from flask import jsonify, render_template, request
def register_error_handlers(app):
@app.errorhandler(400)
def bad_request(error):
if request_wants_json():
return jsonify(error="Bad request"), 400
return render_template("errors/400.html"), 400
@app.errorhandler(404)
def not_found(error):
if request_wants_json():
return jsonify(error="Not found"), 404
return render_template("errors/404.html"), 404
@app.errorhandler(500)
def internal_error(error):
db.session.rollback()
app.logger.error(f"Internal error: {error}")
if request_wants_json():
return jsonify(error="Internal server error"), 500
return render_template("errors/500.html"), 500
def request_wants_json():
return request.accept_mimetypes.best_match(
["application/json", "text/html"]
) == "application/json"
```
## Response Template
```
## Flask Code Review Results
**Project**: [name]
**Flask**: 3.0 | **SQLAlchemy**: 2.0 | **Extensions**: Login, WTF
### Application Structure
| Status | File | Issue |
|--------|------|-------|
| HIGH | app.py | Global app instance - use factory |
### Blueprint Organization
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | routes.py | 50+ routes - split into blueprints |
### Request Handling
| Status | File | Issue |
|--------|------|-------|
| HIGH | views.py:34 | No input validation on POST |
### Configuration
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | config.py | SECRET_KEY hardcoded |
### Recommended Actions
1. [ ] Implement application factory pattern
2. [ ] Split routes into domain blueprints
3. [ ] Add marshmallow validation schemas
4. [ ] Move secrets to environment variables
```
## Best Practices
1. **Factory Pattern**: Always use create_app()
2. **Blueprints**: Organize by domain/feature
3. **Validation**: marshmallow or pydantic
4. **Config**: Environment-based hierarchy
5. **Extensions**: Use init_app pattern
## Integration
- `python-reviewer`: General Python patterns
- `security-scanner`: Flask security audit
- `api-documenter`: API documentation

View File

@@ -0,0 +1,356 @@
---
name: go-api-reviewer
description: |
WHEN: Go API review with Gin/Echo/Fiber/Chi, router patterns, middleware, request handling
WHAT: Router organization + Middleware patterns + Request validation + Error responses + OpenAPI
WHEN NOT: General Go → go-reviewer, Rust API → rust-api-reviewer
---
# Go API Reviewer Skill
## Purpose
Reviews Go API projects using Gin, Echo, Fiber, or Chi for routing, middleware, and API patterns.
## When to Use
- Go REST API code review
- Gin/Echo/Fiber/Chi project review
- Middleware implementation review
- API request/response handling
- API documentation review
## Project Detection
- `github.com/gin-gonic/gin` import
- `github.com/labstack/echo` import
- `github.com/gofiber/fiber` import
- `github.com/go-chi/chi` import
- `handlers/`, `routes/`, `middleware/` directories
## Workflow
### Step 1: Analyze Project
```
**Framework**: Gin v1.9+
**Router**: Group-based routing
**Middleware**: Auth, CORS, Logger, Recovery
**Validation**: go-playground/validator
**Docs**: Swagger/OpenAPI
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full API review (recommended)
- Router and handler patterns
- Middleware implementation
- Request validation
- Error handling and responses
multiSelect: true
```
## Detection Rules
### Router Organization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| All routes in main | Use router groups | MEDIUM |
| No versioning | Add /api/v1 prefix | MEDIUM |
| Inconsistent naming | Follow REST conventions | LOW |
| No route grouping | Group by resource | MEDIUM |
```go
// BAD: All routes in main.go
func main() {
r := gin.Default()
r.GET("/users", getUsers)
r.POST("/users", createUser)
r.GET("/users/:id", getUser)
r.GET("/products", getProducts)
// ... 50 more routes
}
// GOOD: Organized route groups (Gin)
func SetupRouter() *gin.Engine {
r := gin.Default()
api := r.Group("/api/v1")
{
users := api.Group("/users")
{
users.GET("", listUsers)
users.POST("", createUser)
users.GET("/:id", getUser)
users.PUT("/:id", updateUser)
users.DELETE("/:id", deleteUser)
}
products := api.Group("/products")
{
products.GET("", listProducts)
products.GET("/:id", getProduct)
}
}
return r
}
// GOOD: Separate route files
// routes/users.go
func RegisterUserRoutes(rg *gin.RouterGroup) {
users := rg.Group("/users")
h := NewUserHandler()
users.GET("", h.List)
users.POST("", h.Create)
users.GET("/:id", h.Get)
}
```
### Middleware Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Auth in handler | Extract to middleware | HIGH |
| No recovery middleware | Add panic recovery | HIGH |
| No request ID | Add request ID middleware | MEDIUM |
| Middleware order wrong | Order: Logger → Recovery → Auth | MEDIUM |
```go
// GOOD: Middleware stack (Gin)
func SetupMiddleware(r *gin.Engine) {
// Order matters!
r.Use(gin.Logger())
r.Use(gin.Recovery())
r.Use(RequestIDMiddleware())
r.Use(CORSMiddleware())
}
// GOOD: Auth middleware
func AuthMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
token := c.GetHeader("Authorization")
if token == "" {
c.AbortWithStatusJSON(401, gin.H{"error": "unauthorized"})
return
}
claims, err := ValidateToken(token)
if err != nil {
c.AbortWithStatusJSON(401, gin.H{"error": "invalid token"})
return
}
c.Set("user_id", claims.UserID)
c.Next()
}
}
// Usage
api := r.Group("/api/v1")
api.Use(AuthMiddleware())
// GOOD: Request ID middleware
func RequestIDMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
requestID := c.GetHeader("X-Request-ID")
if requestID == "" {
requestID = uuid.New().String()
}
c.Set("request_id", requestID)
c.Header("X-Request-ID", requestID)
c.Next()
}
}
```
### Request Validation
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Manual validation | Use validator tags | MEDIUM |
| No binding errors | Return validation errors | HIGH |
| No request DTOs | Define request structs | MEDIUM |
| Missing required fields | Add binding:"required" | HIGH |
```go
// GOOD: Request struct with validation
type CreateUserRequest struct {
Name string `json:"name" binding:"required,min=1,max=100"`
Email string `json:"email" binding:"required,email"`
Age int `json:"age" binding:"gte=0,lte=150"`
Role string `json:"role" binding:"oneof=admin user guest"`
Password string `json:"password" binding:"required,min=8"`
}
// GOOD: Handler with validation
func (h *UserHandler) Create(c *gin.Context) {
var req CreateUserRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(400, gin.H{
"error": "validation_error",
"details": formatValidationErrors(err),
})
return
}
user, err := h.service.Create(c.Request.Context(), &req)
if err != nil {
handleError(c, err)
return
}
c.JSON(201, user)
}
// GOOD: Format validation errors
func formatValidationErrors(err error) map[string]string {
errors := make(map[string]string)
var ve validator.ValidationErrors
if errors.As(err, &ve) {
for _, e := range ve {
field := strings.ToLower(e.Field())
errors[field] = getErrorMessage(e)
}
}
return errors
}
```
### Error Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Inconsistent error format | Use standard error response | HIGH |
| Internal errors exposed | Hide implementation details | HIGH |
| No error codes | Add error codes | MEDIUM |
| HTTP status inconsistent | Follow REST conventions | MEDIUM |
```go
// GOOD: Standard error response
type ErrorResponse struct {
Error string `json:"error"`
Code string `json:"code,omitempty"`
Details map[string]string `json:"details,omitempty"`
}
// GOOD: Custom errors
var (
    ErrNotFound     = &AppError{Code: "NOT_FOUND", Status: 404, Message: "resource not found"}
    ErrUnauthorized = &AppError{Code: "UNAUTHORIZED", Status: 401, Message: "unauthorized"}
    ErrConflict     = &AppError{Code: "CONFLICT", Status: 409, Message: "resource already exists"}
)
type AppError struct {
Code string
Status int
Message string
}
func (e *AppError) Error() string {
return e.Message
}
// GOOD: Error handler
func handleError(c *gin.Context, err error) {
var appErr *AppError
if errors.As(err, &appErr) {
c.JSON(appErr.Status, ErrorResponse{
Error: appErr.Message,
Code: appErr.Code,
})
return
}
// Log internal error, return generic message
log.Printf("internal error: %v", err)
c.JSON(500, ErrorResponse{
Error: "Internal server error",
Code: "INTERNAL_ERROR",
})
}
```
### Framework-Specific (Echo)
```go
// Echo example
func SetupEcho() *echo.Echo {
e := echo.New()
e.Use(middleware.Logger())
e.Use(middleware.Recover())
e.Use(middleware.RequestID())
e.Use(middleware.CORS())
api := e.Group("/api/v1")
api.Use(AuthMiddleware)
RegisterUserRoutes(api)
return e
}
// Echo handler
func (h *UserHandler) Create(c echo.Context) error {
var req CreateUserRequest
if err := c.Bind(&req); err != nil {
return echo.NewHTTPError(400, "invalid request")
}
if err := c.Validate(&req); err != nil {
return err
}
user, err := h.service.Create(c.Request().Context(), &req)
if err != nil {
return err
}
return c.JSON(201, user)
}
```
## Response Template
```
## Go API Code Review Results
**Project**: [name]
**Framework**: Gin 1.9 | **Go**: 1.22
### Router Organization
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | main.go | 40+ routes in single file |
### Middleware
| Status | File | Issue |
|--------|------|-------|
| HIGH | handlers/user.go | Auth check in handler, not middleware |
### Validation
| Status | File | Issue |
|--------|------|-------|
| HIGH | handlers/user.go:34 | No request validation |
### Error Handling
| Status | File | Issue |
|--------|------|-------|
| HIGH | handlers/product.go | Inconsistent error response format |
### Recommended Actions
1. [ ] Split routes into separate files by resource
2. [ ] Extract auth logic to middleware
3. [ ] Add request struct validation
4. [ ] Implement standard error response format
```
## Best Practices
1. **Router**: Group by resource, version API
2. **Middleware**: Proper order, reusable
3. **Validation**: Use validator tags
4. **Errors**: Standard format, hide internals
5. **Docs**: Generate OpenAPI from code
## Integration
- `go-reviewer`: General Go patterns
- `security-scanner`: API security
- `api-documenter`: OpenAPI documentation

322
skills/go-reviewer/SKILL.md Normal file
View File

@@ -0,0 +1,322 @@
---
name: go-reviewer
description: |
WHEN: Go project review, error handling, goroutines, interfaces, testing
WHAT: Error handling patterns + Concurrency safety + Interface design + Testing + Idiomatic Go
WHEN NOT: Go API frameworks → go-api-reviewer, Rust → rust-reviewer
---
# Go Reviewer Skill
## Purpose
Reviews Go code for idiomatic patterns, error handling, concurrency, and best practices.
## When to Use
- Go project code review
- Error handling review
- Goroutine/channel review
- Interface design review
- Go testing patterns
## Project Detection
- `go.mod` in project root
- `.go` files
- `cmd/`, `internal/`, `pkg/` structure
- `*_test.go` test files
## Workflow
### Step 1: Analyze Project
```
**Go Version**: 1.21+
**Module**: github.com/org/project
**Structure**: Standard Go layout
**Testing**: go test / testify
**Linter**: golangci-lint
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Go review (recommended)
- Error handling patterns
- Concurrency and goroutines
- Interface design
- Testing and benchmarks
multiSelect: true
```
## Detection Rules
### Error Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Ignored error | Always handle errors | CRITICAL |
| err != nil only | Add context with fmt.Errorf | MEDIUM |
| Panic for errors | Return error instead | HIGH |
| No error wrapping | Use %w for wrapping | MEDIUM |
```go
// BAD: Ignored error
data, _ := ioutil.ReadFile("config.json")
// GOOD: Handle error
data, err := os.ReadFile("config.json")
if err != nil {
return fmt.Errorf("reading config: %w", err)
}
// BAD: No context
if err != nil {
return err
}
// GOOD: Add context
if err != nil {
return fmt.Errorf("failed to process user %d: %w", userID, err)
}
// BAD: Panic for recoverable error
func GetUser(id int) *User {
user, err := db.FindUser(id)
if err != nil {
panic(err) // Don't panic!
}
return user
}
// GOOD: Return error
func GetUser(id int) (*User, error) {
user, err := db.FindUser(id)
if err != nil {
return nil, fmt.Errorf("getting user %d: %w", id, err)
}
return user, nil
}
```
### Concurrency
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Data race potential | Use mutex or channels | CRITICAL |
| Goroutine leak | Ensure goroutines exit | HIGH |
| Unbuffered channel deadlock | Use buffered or select | HIGH |
| No context cancellation | Pass context.Context | MEDIUM |
```go
// BAD: Data race
type Counter struct {
count int
}
func (c *Counter) Increment() {
c.count++ // Race condition!
}
// GOOD: Mutex protection
type Counter struct {
mu sync.Mutex
count int
}
func (c *Counter) Increment() {
c.mu.Lock()
defer c.mu.Unlock()
c.count++
}
// BAD: Goroutine leak
func process(items []Item) {
for _, item := range items {
go processItem(item) // No way to wait or cancel!
}
}
// GOOD: WaitGroup and context
func process(ctx context.Context, items []Item) error {
g, ctx := errgroup.WithContext(ctx)
for _, item := range items {
item := item // Capture loop variable
g.Go(func() error {
return processItem(ctx, item)
})
}
return g.Wait()
}
// BAD: No timeout
resp, err := client.Do(req)
// GOOD: With context timeout
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
req = req.WithContext(ctx)
resp, err := client.Do(req)
```
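For the "unbuffered channel deadlock" row above, a minimal sketch of a send that cannot block forever (timeout value is illustrative; assumes "fmt" and "time" are imported):

```go
// Prefer a buffered channel, or a select with a timeout, over a bare send
func publish(results chan<- int, value int) error {
    select {
    case results <- value:
        return nil
    case <-time.After(5 * time.Second):
        return fmt.Errorf("timed out publishing result")
    }
}

// Usage:
// results := make(chan int, 1) // buffer of 1 lets a lone send complete
// err := publish(results, 42)
```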
### Interface Design
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Large interface | Keep interfaces small | MEDIUM |
| Interface in implementation | Define at consumer | MEDIUM |
| Concrete type in signature | Accept interfaces | MEDIUM |
| No interface for testing | Add interface for mocking | MEDIUM |
```go
// BAD: Large interface
type UserService interface {
GetUser(id int) (*User, error)
CreateUser(u *User) error
UpdateUser(u *User) error
DeleteUser(id int) error
ListUsers() ([]*User, error)
SearchUsers(query string) ([]*User, error)
// ... 20 more methods
}
// GOOD: Small, focused interfaces
type UserGetter interface {
GetUser(ctx context.Context, id int) (*User, error)
}
type UserCreator interface {
CreateUser(ctx context.Context, u *User) error
}
// Consumer defines the interface it needs
type UserHandler struct {
getter UserGetter // Only what it needs
}
// BAD: Concrete type dependency
func ProcessFile(f *os.File) error {
// Hard to test
}
// GOOD: Interface dependency
func ProcessFile(r io.Reader) error {
// Easy to test with strings.Reader, bytes.Buffer, etc.
}
```
### Code Organization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No package structure | Use cmd/internal/pkg | MEDIUM |
| Exported unnecessary | Keep internal private | LOW |
| Package name mismatch | Match directory name | LOW |
| Circular import | Restructure packages | HIGH |
```
// GOOD: Standard Go project layout
project/
├── cmd/
│ └── server/
│ └── main.go
├── internal/ # Private packages
│ ├── service/
│ │ └── user.go
│ └── repository/
│ └── user.go
├── pkg/ # Public packages
│ └── client/
│ └── client.go
├── go.mod
└── go.sum
```
### Testing
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No table-driven tests | Use test tables | MEDIUM |
| No test helpers | Extract common setup | LOW |
| No benchmarks | Add for hot paths | LOW |
| Mocking concrete types | Use interfaces | MEDIUM |
```go
// GOOD: Table-driven test
func TestParseSize(t *testing.T) {
tests := []struct {
name string
input string
want int64
wantErr bool
}{
{"bytes", "100", 100, false},
{"kilobytes", "1KB", 1024, false},
{"megabytes", "1MB", 1048576, false},
{"invalid", "abc", 0, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ParseSize(tt.input)
if (err != nil) != tt.wantErr {
t.Errorf("ParseSize() error = %v, wantErr %v", err, tt.wantErr)
return
}
if got != tt.want {
t.Errorf("ParseSize() = %v, want %v", got, tt.want)
}
})
}
}
// GOOD: Benchmark
func BenchmarkParseSize(b *testing.B) {
for i := 0; i < b.N; i++ {
ParseSize("1MB")
}
}
```
## Response Template
```
## Go Code Review Results
**Project**: [name]
**Go**: 1.22 | **Linter**: golangci-lint
### Error Handling
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | service.go:45 | Ignored error from db.Query |
### Concurrency
| Status | File | Issue |
|--------|------|-------|
| HIGH | worker.go:23 | Potential goroutine leak |
### Interface Design
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | handler.go:12 | Concrete type in function signature |
### Testing
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | service_test.go | No table-driven tests |
### Recommended Actions
1. [ ] Handle all returned errors
2. [ ] Add context cancellation to goroutines
3. [ ] Define interfaces at consumer side
4. [ ] Convert tests to table-driven format
```
## Best Practices
1. **Errors**: Always handle, wrap with context
2. **Concurrency**: Use context, errgroup, proper sync
3. **Interfaces**: Small, defined at consumer
4. **Testing**: Table-driven, interfaces for mocking
5. **Linting**: Use golangci-lint
## Integration
- `go-api-reviewer`: API framework patterns
- `security-scanner`: Go security audit
- `perf-analyzer`: Go performance profiling

View File

@@ -0,0 +1,428 @@
---
name: infra-security-reviewer
description: |
WHEN: Infrastructure security audit, secrets management, network policies, compliance checks
WHAT: Secrets scanning + Network policies + IAM/RBAC audit + Compliance validation + Security hardening
WHEN NOT: Application security → security-scanner, Docker only → docker-reviewer
---
# Infrastructure Security Reviewer Skill
## Purpose
Reviews infrastructure configurations for security, compliance, and best practices.
## When to Use
- Infrastructure security audit
- Secrets management review
- Network policy review
- IAM/RBAC audit
- Compliance check (SOC2, HIPAA, PCI)
## Project Detection
- Terraform files with IAM/security resources
- Kubernetes NetworkPolicy, RBAC
- AWS/GCP/Azure security configs
- `.env` files, secret references
## Workflow
### Step 1: Analyze Project
```
**Cloud**: AWS/GCP/Azure
**IaC**: Terraform/Pulumi
**Secrets**: Vault/AWS Secrets Manager
**Compliance**: SOC2/HIPAA/PCI
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full security review (recommended)
- Secrets management
- Network security
- IAM and access control
- Compliance validation
multiSelect: true
```
## Detection Rules
### Secrets Management
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Hardcoded secrets | Use secret manager | CRITICAL |
| Secrets in env files | Use vault/KMS | CRITICAL |
| No secret rotation | Enable auto-rotation | HIGH |
| Secrets in logs | Mask in logging | CRITICAL |
```hcl
# BAD: Hardcoded secrets
resource "aws_db_instance" "main" {
password = "SuperSecret123!" # CRITICAL!
}
variable "api_key" {
default = "sk-1234567890" # CRITICAL!
}
# GOOD: Using AWS Secrets Manager
data "aws_secretsmanager_secret_version" "db_password" {
secret_id = aws_secretsmanager_secret.db_password.id
}
resource "aws_db_instance" "main" {
password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
# GOOD: Secret rotation
resource "aws_secretsmanager_secret_rotation" "db_password" {
secret_id = aws_secretsmanager_secret.db_password.id
rotation_lambda_arn = aws_lambda_function.rotation.arn
rotation_rules {
automatically_after_days = 30
}
}
```
```yaml
# Kubernetes - BAD: Secret in plain text
apiVersion: v1
kind: Secret
metadata:
name: db-secret
type: Opaque
data:
password: cGFzc3dvcmQxMjM= # Just base64, not encrypted!
# GOOD: External Secrets Operator
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: db-secret
spec:
refreshInterval: 1h
secretStoreRef:
name: aws-secrets-manager
kind: ClusterSecretStore
target:
name: db-secret
data:
- secretKey: password
remoteRef:
key: prod/db/password
```
### Network Security
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Public subnet for DB | Use private subnet | CRITICAL |
| No network policies | Add K8s NetworkPolicy | HIGH |
| Open security groups | Restrict to needed ports | CRITICAL |
| No VPC flow logs | Enable flow logs | MEDIUM |
```hcl
# AWS - GOOD: Network segmentation
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_subnet" "private" {
count = 3
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
tags = {
Name = "private-${count.index + 1}"
Type = "private"
}
}
resource "aws_flow_log" "main" {
vpc_id = aws_vpc.main.id
traffic_type = "ALL"
log_destination = aws_cloudwatch_log_group.flow_logs.arn
iam_role_arn = aws_iam_role.flow_logs.arn
}
# Security group - least privilege
resource "aws_security_group" "app" {
name = "app-sg"
description = "Application security group"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTPS from ALB"
from_port = 443
to_port = 443
protocol = "tcp"
security_groups = [aws_security_group.alb.id]
}
egress {
description = "HTTPS to internet"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
```
```yaml
# Kubernetes NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: app-network-policy
namespace: production
spec:
podSelector:
matchLabels:
app: myapp
policyTypes:
- Ingress
- Egress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: ingress-nginx
- podSelector:
matchLabels:
app: ingress-nginx
ports:
- protocol: TCP
port: 8080
egress:
- to:
- namespaceSelector:
matchLabels:
name: production
- podSelector:
matchLabels:
app: postgres
ports:
- protocol: TCP
port: 5432
- to: # Allow DNS
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
```
### IAM / Access Control
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Wildcard permissions | Use specific resources | CRITICAL |
| No MFA requirement | Require MFA | HIGH |
| Long-lived credentials | Use OIDC/roles | HIGH |
| Over-permissive roles | Apply least privilege | HIGH |
```hcl
# BAD: Overly permissive
resource "aws_iam_policy" "bad" {
policy = jsonencode({
Statement = [{
Effect = "Allow"
Action = "*"
Resource = "*"
}]
})
}
# GOOD: Least privilege
resource "aws_iam_policy" "app" {
name = "app-policy"
description = "Application specific permissions"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "S3ReadWrite"
Effect = "Allow"
Action = [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
]
Resource = [
"${aws_s3_bucket.app.arn}/*"
]
},
{
Sid = "SecretsRead"
Effect = "Allow"
Action = [
"secretsmanager:GetSecretValue"
]
Resource = [
aws_secretsmanager_secret.app.arn
]
Condition = {
StringEquals = {
"aws:ResourceTag/Environment" = var.environment
}
}
}
]
})
}
# GOOD: OIDC for GitHub Actions
resource "aws_iam_openid_connect_provider" "github" {
url = "https://token.actions.githubusercontent.com"
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.github.certificates[0].sha1_fingerprint]
}
resource "aws_iam_role" "github_actions" {
name = "github-actions-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = {
Federated = aws_iam_openid_connect_provider.github.arn
}
Action = "sts:AssumeRoleWithWebIdentity"
Condition = {
StringEquals = {
"token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
}
StringLike = {
"token.actions.githubusercontent.com:sub" = "repo:myorg/myrepo:*"
}
}
}]
})
}
```
### Encryption
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Unencrypted storage | Enable encryption at rest | HIGH |
| No TLS | Enforce TLS 1.2+ | HIGH |
| Default KMS key | Use customer managed key | MEDIUM |
```hcl
# GOOD: Encryption at rest
resource "aws_s3_bucket_server_side_encryption_configuration" "app" {
bucket = aws_s3_bucket.app.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
kms_master_key_id = aws_kms_key.app.arn
}
bucket_key_enabled = true
}
}
resource "aws_rds_cluster" "main" {
storage_encrypted = true
kms_key_id = aws_kms_key.rds.arn
}
# GOOD: Enforce TLS
resource "aws_lb_listener" "https" {
load_balancer_arn = aws_lb.main.arn
port = "443"
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-TLS13-1-2-2021-06"
certificate_arn = aws_acm_certificate.main.arn
}
```
### Compliance Checklist
```
## SOC2 / HIPAA / PCI Compliance
### Access Control
[ ] MFA enforced for all users
[ ] Least privilege IAM policies
[ ] Regular access reviews
[ ] Service accounts with minimal permissions
### Data Protection
[ ] Encryption at rest (S3, RDS, EBS)
[ ] Encryption in transit (TLS 1.2+)
[ ] Customer managed KMS keys
[ ] Key rotation enabled
### Network Security
[ ] VPC with private subnets
[ ] Security groups with least privilege
[ ] Network segmentation
[ ] VPC flow logs enabled
### Logging & Monitoring
[ ] CloudTrail enabled
[ ] GuardDuty enabled
[ ] Config rules for compliance
[ ] Alerting on security events
### Secrets Management
[ ] No hardcoded secrets
[ ] Secrets in Secrets Manager/Vault
[ ] Automatic rotation enabled
[ ] Audit logging for secret access
```
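A minimal Terraform sketch for the "Logging & Monitoring" items above; the trail name and the referenced audit-log bucket are assumptions:

```hcl
# GOOD: Baseline audit logging and threat detection
resource "aws_cloudtrail" "main" {
  name                          = "org-trail"
  s3_bucket_name                = aws_s3_bucket.audit_logs.id # pre-existing audit bucket
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true
}

resource "aws_guardduty_detector" "main" {
  enable = true
}
```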
## Response Template
```
## Infrastructure Security Review Results
**Project**: [name]
**Cloud**: AWS | **IaC**: Terraform
### Secrets Management
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | rds.tf:15 | Hardcoded database password |
### Network Security
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | sg.tf:23 | Security group allows 0.0.0.0/0 |
### IAM
| Status | File | Issue |
|--------|------|-------|
| HIGH | iam.tf:45 | Wildcard permissions in policy |
### Encryption
| Status | File | Issue |
|--------|------|-------|
| HIGH | s3.tf:12 | S3 bucket not encrypted |
### Recommended Actions
1. [ ] Move secrets to AWS Secrets Manager
2. [ ] Restrict security group ingress rules
3. [ ] Apply least privilege to IAM policies
4. [ ] Enable encryption for all storage
```
## Best Practices
1. **Secrets**: Never hardcode, use secret managers
2. **Network**: Private subnets, strict security groups
3. **IAM**: Least privilege, no wildcards
4. **Encryption**: At rest and in transit
5. **Audit**: Enable logging everywhere
## Integration
- `terraform-reviewer`: IaC review
- `k8s-reviewer`: Kubernetes security
- `security-scanner`: Application security

View File

@@ -0,0 +1,363 @@
---
name: k8s-reviewer
description: |
WHEN: Kubernetes manifest review, Helm charts, resource limits, probes, RBAC
WHAT: Resource configuration + Health probes + Security context + RBAC policies + Helm best practices
WHEN NOT: Docker only → docker-reviewer, Terraform → terraform-reviewer
---
# Kubernetes Reviewer Skill
## Purpose
Reviews Kubernetes manifests and Helm charts for resource configuration, security, and best practices.
## When to Use
- Kubernetes YAML review
- Helm chart review
- Pod security review
- Resource limits check
- RBAC configuration review
## Project Detection
- `*.yaml` in k8s/, manifests/, deploy/
- `Chart.yaml` (Helm)
- `kustomization.yaml`
- `deployment.yaml`, `service.yaml`
## Workflow
### Step 1: Analyze Project
```
**Manifest Type**: Deployment, Service, Ingress
**Helm**: Chart v3
**Namespace**: production
**Cluster**: EKS/GKE/AKS
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full K8s review (recommended)
- Resource limits and requests
- Health probes configuration
- Security context and RBAC
- Helm chart structure
multiSelect: true
```
## Detection Rules
### Resource Limits
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No resource limits | Add limits and requests | CRITICAL |
| Limits = Requests | Set different values | MEDIUM |
| Too high limits | Right-size based on usage | MEDIUM |
| No LimitRange | Add namespace LimitRange | MEDIUM |
```yaml
# BAD: No resource management
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: app
image: myapp:latest
# No resources defined!
# GOOD: Proper resource management
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: production
spec:
replicas: 3
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: app
image: myapp:v1.2.3
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "512Mi"
```
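For the "No LimitRange" row above, a minimal namespace-level default; the values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: production
spec:
  limits:
    - type: Container
      default:          # applied as limits when a container sets none
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:   # applied as requests when a container sets none
        cpu: "100m"
        memory: "128Mi"
```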
### Health Probes
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No liveness probe | Add liveness check | HIGH |
| No readiness probe | Add readiness check | HIGH |
| No startup probe | Add for slow-starting apps | MEDIUM |
| Same liveness/readiness | Differentiate purposes | MEDIUM |
```yaml
# GOOD: Complete probe configuration
spec:
containers:
- name: app
image: myapp:v1.2.3
ports:
- containerPort: 8080
# Startup probe - for slow starting containers
startupProbe:
httpGet:
path: /health/startup
port: 8080
failureThreshold: 30
periodSeconds: 10
# Liveness probe - restart if unhealthy
livenessProbe:
httpGet:
path: /health/live
port: 8080
initialDelaySeconds: 0
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
# Readiness probe - remove from service if not ready
readinessProbe:
httpGet:
path: /health/ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
```
### Security Context
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Running as root | Set runAsNonRoot: true | CRITICAL |
| No security context | Add pod/container security | HIGH |
| Privileged container | Remove privileged: true | CRITICAL |
| Writable root filesystem | Set readOnlyRootFilesystem | HIGH |
```yaml
# BAD: No security constraints
spec:
containers:
- name: app
image: myapp:latest
# No security context!
# GOOD: Secure configuration
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
containers:
- name: app
image: myapp:v1.2.3
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
volumeMounts:
- name: tmp
mountPath: /tmp
- name: cache
mountPath: /app/cache
volumes:
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
```
### Image Policy
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Using :latest tag | Pin specific version | HIGH |
| No image pull policy | Set imagePullPolicy | MEDIUM |
| Public registry | Use private registry | MEDIUM |
```yaml
# BAD
image: myapp:latest
# or
image: myapp # Implies :latest
# GOOD
image: gcr.io/myproject/myapp:v1.2.3
imagePullPolicy: IfNotPresent
```
### Pod Disruption Budget
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No PDB for critical apps | Add PodDisruptionBudget | HIGH |
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: myapp-pdb
spec:
minAvailable: 2 # or maxUnavailable: 1
selector:
matchLabels:
app: myapp
```
### RBAC
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Using default SA | Create dedicated ServiceAccount | HIGH |
| Cluster-wide permissions | Use namespaced Role | HIGH |
| Wildcard permissions | Specify explicit resources | CRITICAL |
```yaml
# BAD: Overly permissive
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
# GOOD: Least privilege
apiVersion: v1
kind: ServiceAccount
metadata:
name: myapp-sa
namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: myapp-role
namespace: production
rules:
- apiGroups: [""]
resources: ["configmaps", "secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: myapp-rolebinding
namespace: production
subjects:
- kind: ServiceAccount
name: myapp-sa
namespace: production
roleRef:
kind: Role
name: myapp-role
apiGroup: rbac.authorization.k8s.io
```
### Helm Charts
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Hardcoded values | Use values.yaml | MEDIUM |
| No default values | Provide sensible defaults | MEDIUM |
| No NOTES.txt | Add post-install notes | LOW |
```yaml
# values.yaml
replicaCount: 3
image:
repository: myapp
tag: "v1.2.3"
pullPolicy: IfNotPresent
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
# templates/deployment.yaml
spec:
replicas: {{ .Values.replicaCount }}
template:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
resources:
{{- toYaml .Values.resources | nindent 12 }}
```
## Response Template
```
## Kubernetes Review Results
**Project**: [name]
**Type**: Deployment, Service, Ingress
**Namespace**: production
### Resource Limits
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | deployment.yaml | No resource limits defined |
### Health Probes
| Status | File | Issue |
|--------|------|-------|
| HIGH | deployment.yaml | Missing readiness probe |
### Security
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | deployment.yaml | Running as root |
### RBAC
| Status | File | Issue |
|--------|------|-------|
| HIGH | rbac.yaml | Wildcard permissions |
### Recommended Actions
1. [ ] Add resource requests and limits
2. [ ] Configure liveness and readiness probes
3. [ ] Add security context with non-root user
4. [ ] Apply least privilege RBAC
```
## Best Practices
1. **Resources**: Always set requests and limits
2. **Probes**: All three probe types for production
3. **Security**: Non-root, read-only fs, drop capabilities
4. **RBAC**: Least privilege, namespaced roles
5. **Images**: Pin versions, use private registry
## Integration
- `docker-reviewer`: Container image review
- `terraform-reviewer`: Infrastructure as code
- `infra-security-reviewer`: Cluster security

View File

@@ -0,0 +1,237 @@
---
name: kotlin-android-reviewer
description: |
WHEN: Android Kotlin code review, Jetpack Compose patterns, Coroutines/Flow checks, ViewModel structure analysis
WHAT: Compose best practices + Coroutines patterns + State management + Memory leak detection + Performance optimization
WHEN NOT: KMP shared code → kotlin-multiplatform-reviewer, Backend → kotlin-spring-reviewer
---
# Kotlin Android Reviewer Skill
## Purpose
Reviews Android Kotlin code for Jetpack Compose, Coroutines, Flow, and ViewModel best practices.
## When to Use
- Android Kotlin project code review
- "Compose pattern", "Coroutines", "Flow", "ViewModel" mentions
- Android performance, memory leak inspection
- Projects with Android plugin in `build.gradle.kts`
## Project Detection
- `com.android.application` or `com.android.library` in build.gradle
- `AndroidManifest.xml` exists
- `src/main/java` or `src/main/kotlin` directories
## Workflow
### Step 1: Analyze Project
```
**Kotlin**: 1.9.x
**Compose**: 1.5.x
**minSdk**: 24
**targetSdk**: 34
**Architecture**: MVVM + Clean Architecture
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Android pattern check (recommended)
- Jetpack Compose UI patterns
- Coroutines/Flow usage
- ViewModel/State management
- Memory leaks/Performance
multiSelect: true
```
## Detection Rules
### Jetpack Compose Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Side-effect in Composable | Use LaunchedEffect/SideEffect | HIGH |
| Object creation without remember | Use remember { } | HIGH |
| Missing State hoisting | Hoist state to parent | MEDIUM |
| Missing derivedStateOf | Use for derived state | LOW |
| LazyColumn without key | Add key parameter | HIGH |
```kotlin
// BAD: Object creation without remember
@Composable
fun MyScreen() {
val list = mutableListOf<String>() // New every recomposition
}
// GOOD: Use remember
@Composable
fun MyScreen() {
val list = remember { mutableListOf<String>() }
}
// BAD: Direct suspend call in Composable
@Composable
fun MyScreen(viewModel: MyViewModel) {
viewModel.loadData() // Side-effect!
}
// GOOD: Use LaunchedEffect
@Composable
fun MyScreen(viewModel: MyViewModel) {
LaunchedEffect(Unit) {
viewModel.loadData()
}
}
```
### Coroutines Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| GlobalScope usage | Use viewModelScope/lifecycleScope | CRITICAL |
| Missing Dispatcher | Specify Dispatchers.IO/Default | MEDIUM |
| Missing exception handling | try-catch or CoroutineExceptionHandler | HIGH |
| runBlocking abuse | Convert to suspend function | HIGH |
```kotlin
// BAD: GlobalScope
GlobalScope.launch {
repository.fetchData()
}
// GOOD: viewModelScope
viewModelScope.launch {
repository.fetchData()
}
// BAD: Network on Main
viewModelScope.launch {
val result = api.getData() // NetworkOnMainThreadException
}
// GOOD: IO Dispatcher
viewModelScope.launch {
val result = withContext(Dispatchers.IO) {
api.getData()
}
}
```
### Flow Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Direct collect in Composable | Use collectAsState | HIGH |
| SharedFlow without replay | Set appropriate replay value | MEDIUM |
| Nullable StateFlow initial | Provide meaningful initial value | MEDIUM |
```kotlin
// BAD: Direct collect in Composable
@Composable
fun MyScreen(viewModel: MyViewModel) {
var data by remember { mutableStateOf<Data?>(null) }
LaunchedEffect(Unit) {
viewModel.dataFlow.collect { data = it }
}
}
// GOOD: collectAsState
@Composable
fun MyScreen(viewModel: MyViewModel) {
val data by viewModel.dataFlow.collectAsState()
}
// BAD: Nullable StateFlow initial
private val _state = MutableStateFlow<UiState?>(null)
// GOOD: Sealed class with clear initial state
private val _state = MutableStateFlow<UiState>(UiState.Loading)
```
### ViewModel Patterns
| Check | Issue | Severity |
|-------|-------|----------|
| Direct Context reference | Memory leak risk | CRITICAL |
| View reference | Memory leak risk | CRITICAL |
| Missing SavedStateHandle | Process death handling | MEDIUM |
| Bidirectional data flow | Use UiState + Event pattern | MEDIUM |
```kotlin
// BAD: Activity Context reference
class MyViewModel(private val context: Context) : ViewModel()
// GOOD: Application Context injected via Hilt
@HiltViewModel
class MyViewModel @Inject constructor(
    @ApplicationContext private val context: Context
) : ViewModel()
// BAD: Bidirectional binding
class MyViewModel : ViewModel() {
var name = MutableLiveData<String>()
}
// GOOD: Unidirectional + sealed class
class MyViewModel : ViewModel() {
private val _uiState = MutableStateFlow(UiState())
val uiState: StateFlow<UiState> = _uiState.asStateFlow()
fun onNameChanged(name: String) {
_uiState.update { it.copy(name = name) }
}
}
```
### Memory Leak Detection
| Check | Problem | Solution |
|-------|---------|----------|
| Inner class with outer reference | Activity leak | WeakReference or static |
| Unremoved Listener | Memory leak | Remove in onDestroy |
| Uncancelled Coroutine Job | Job leak | Structured concurrency |
| Unreleased Bitmap | OOM risk | recycle() or Coil/Glide |
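Where the table flags unremoved listeners and uncancelled jobs, the fix is to tie registration and cancellation to the lifecycle. A minimal sketch, with the class name and sensor choice purely illustrative:
```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.launch

class SensorActivity : AppCompatActivity() {
    private lateinit var sensorManager: SensorManager
    // Keep one listener instance so the exact same object can be unregistered later
    private val listener = object : SensorEventListener {
        override fun onSensorChanged(event: SensorEvent?) { /* update UI state */ }
        override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {}
    }

    override fun onStart() {
        super.onStart()
        sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        sensorManager.registerListener(
            listener,
            sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
            SensorManager.SENSOR_DELAY_NORMAL
        )
        // lifecycleScope jobs are cancelled automatically, so no Job leaks on finish()
        lifecycleScope.launch { /* collect flows, poll, etc. */ }
    }

    override fun onStop() {
        // Unregistering prevents the system sensor service from holding this Activity
        sensorManager.unregisterListener(listener)
        super.onStop()
    }
}
```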
## Response Template
```
## Android Kotlin Code Review Results
**Project**: [name]
**Kotlin**: 1.9.x | **Compose**: 1.5.x
**Files Analyzed**: X
### Jetpack Compose
| Status | File | Issue |
|--------|------|-------|
| HIGH | ui/HomeScreen.kt | Object creation without remember (line 45) |
| MEDIUM | ui/ProfileScreen.kt | State hoisting recommended |
### Coroutines/Flow
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | data/Repository.kt | GlobalScope usage (line 23) |
| HIGH | viewmodel/MainViewModel.kt | Missing exception handling |
### ViewModel/State
| Status | File | Issue |
|--------|------|-------|
| HIGH | viewmodel/DetailViewModel.kt | Activity Context reference |
### Recommended Actions
1. [ ] GlobalScope → viewModelScope
2. [ ] Add remember { }
3. [ ] Apply UiState sealed class pattern
```
## Best Practices
1. **Compose**: Prefer Stateless Composable, State hoisting
2. **Coroutines**: Structured concurrency, appropriate Dispatcher
3. **Flow**: Distinguish Hot/Cold Flow, StateFlow for UI state
4. **ViewModel**: Unidirectional data flow, avoid Context
5. **Testing**: Turbine for Flow, Compose test rules
## Integration
- `code-reviewer` skill: General Kotlin code quality
- `kotlin-multiplatform-reviewer` skill: KMP shared code
- `test-generator` skill: Android test generation
## Notes
- Based on Compose 1.0+
- Kotlin 1.9+, Coroutines 1.7+ recommended
- Supports Hilt/Dagger DI patterns

View File

@@ -0,0 +1,276 @@
---
name: kotlin-multiplatform-reviewer
description: |
WHEN: Kotlin Multiplatform (KMP) project review, expect/actual patterns, shared module structure, iOS interop
WHAT: Module structure analysis + expect/actual validation + platform separation + iOS/Android interop + dependency management
WHEN NOT: Android UI → kotlin-android-reviewer, Server → kotlin-spring-reviewer
---
# Kotlin Multiplatform Reviewer Skill
## Purpose
Reviews Kotlin Multiplatform (KMP) project structure and patterns including shared code design, expect/actual mechanism, and iOS interop.
## When to Use
- KMP project code review
- "expect/actual", "shared module", "commonMain", "multiplatform" mentions
- iOS/Android code sharing design review
- Projects with `kotlin("multiplatform")` plugin
## Project Detection
- `kotlin("multiplatform")` plugin in `build.gradle.kts`
- `src/commonMain`, `src/androidMain`, `src/iosMain` directories
- `shared` or `common` module exists
## Workflow
### Step 1: Analyze Structure
```
**Kotlin**: 2.0.x
**Targets**: Android, iOS (arm64, simulatorArm64)
**Shared Module**: shared
**Source Sets**:
- commonMain (shared code)
- androidMain (Android specific)
- iosMain (iOS specific)
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full KMP pattern check (recommended)
- Module structure/dependencies
- expect/actual implementation
- Platform code separation
- iOS interop (Swift/ObjC)
multiSelect: true
```
## Detection Rules
### Module Structure
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Bloated shared module | Split by layer | MEDIUM |
| Circular dependencies | Unidirectional deps | HIGH |
| Platform code in commonMain | Move to androidMain/iosMain | HIGH |
| Missing test module | Add commonTest | MEDIUM |
**Recommended Structure:**
```
project/
├── shared/
│ └── src/
│ ├── commonMain/kotlin/ # Shared business logic
│ ├── commonTest/kotlin/ # Shared tests
│ ├── androidMain/kotlin/ # Android specific
│ ├── iosMain/kotlin/ # iOS specific
│ └── iosTest/kotlin/
├── androidApp/ # Android app
└── iosApp/ # iOS app (Xcode)
```
### expect/actual Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| actual without expect | expect declaration required | CRITICAL |
| Missing actual impl | Provide actual for all targets | CRITICAL |
| Excessive expect/actual | Consider interface + DI | MEDIUM |
| Direct platform API in actual | Add abstraction layer | MEDIUM |
```kotlin
// commonMain - expect declaration
expect class Platform() {
val name: String
fun getDeviceId(): String
}
// androidMain - actual implementation
actual class Platform actual constructor() {
actual val name: String = "Android ${Build.VERSION.SDK_INT}"
actual fun getDeviceId(): String = Settings.Secure.getString(
        appContext.contentResolver, // appContext: application Context exposed from Application.onCreate() (not shown)
Settings.Secure.ANDROID_ID
)
}
// iosMain - actual implementation
actual class Platform actual constructor() {
actual val name: String = UIDevice.currentDevice.systemName()
actual fun getDeviceId(): String = UIDevice.currentDevice
.identifierForVendor?.UUIDString ?: ""
}
```
**BAD: expect/actual overuse**
```kotlin
// BAD: expect/actual for simple values
expect val platformName: String
actual val platformName: String = "Android"
// GOOD: interface + DI
interface PlatformInfo {
val name: String
}
// androidMain
class AndroidPlatformInfo : PlatformInfo {
override val name = "Android"
}
```
### Platform Separation
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Platform import in commonMain | Move to platform source set | CRITICAL |
| Java class in commonMain | expect/actual or pure Kotlin | HIGH |
| UIKit/Android SDK in common | Separate to platform source set | CRITICAL |
```kotlin
// BAD: Android import in commonMain
// commonMain/kotlin/Repository.kt
import android.content.Context // Compile error!
// GOOD: expect/actual separation
// commonMain
expect class DataStore {
fun save(key: String, value: String)
fun get(key: String): String?
}
// androidMain
actual class DataStore(private val context: Context) {
private val prefs = context.getSharedPreferences("app", Context.MODE_PRIVATE)
actual fun save(key: String, value: String) {
prefs.edit().putString(key, value).apply()
}
actual fun get(key: String): String? = prefs.getString(key, null)
}
// iosMain
actual class DataStore {
actual fun save(key: String, value: String) {
NSUserDefaults.standardUserDefaults.setObject(value, key)
}
actual fun get(key: String): String? =
NSUserDefaults.standardUserDefaults.stringForKey(key)
}
```
### iOS Interop
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Missing @ObjCName | Swift-friendly naming | LOW |
| Sealed class iOS exposure | Use enum or @ObjCName | MEDIUM |
| Direct Flow exposure to iOS | Provide wrapper function | HIGH |
| suspend function iOS call | Provide completion handler wrapper | HIGH |
```kotlin
// BAD: Direct suspend function exposure
class Repository {
suspend fun fetchData(): Data // Hard to call from iOS
}
// GOOD: iOS wrapper provided
class Repository {
suspend fun fetchData(): Data
// iOS completion handler wrapper
fun fetchDataAsync(completion: (Data?, Error?) -> Unit) {
MainScope().launch {
try {
val data = fetchData()
completion(data, null)
} catch (e: Exception) {
completion(null, e)
}
}
}
}
```
**Flow iOS Exposure:**
```kotlin
// BAD: Direct Flow exposure
val dataFlow: Flow<Data>
// GOOD: iOS wrapper
fun observeData(onEach: (Data) -> Unit): Closeable {
val job = MainScope().launch {
dataFlow.collect { onEach(it) }
}
return object : Closeable {
override fun close() { job.cancel() }
}
}
```
### Dependency Management
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Platform library in commonMain | Use multiplatform library | HIGH |
| Version mismatch | Use Version Catalog | MEDIUM |
| Unused dependencies | Remove unused | LOW |
**Multiplatform Library Recommendations:**
| Purpose | Library |
|---------|---------|
| HTTP | Ktor Client |
| Serialization | Kotlinx Serialization |
| Async | Kotlinx Coroutines |
| DI | Koin, Kodein |
| Date/Time | Kotlinx Datetime |
| Settings | Multiplatform Settings |
| Logging | Napier, Kermit |
| DB | SQLDelight |
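A minimal `shared/build.gradle.kts` sketch of keeping these multiplatform libraries in commonMain while engine/platform artifacts stay in their own source sets; it assumes the Kotlin 2.0+ source-set accessors, and the versions and coordinates are illustrative, not pinned by this plugin:
```kotlin
plugins {
    kotlin("multiplatform")
    kotlin("plugin.serialization")
}
kotlin {
    androidTarget()
    iosArm64()
    iosSimulatorArm64()
    sourceSets {
        commonMain.dependencies {
            implementation("io.ktor:ktor-client-core:2.3.7")                          // HTTP
            implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.2")  // Serialization
            implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.8.0")     // Async
            implementation("org.jetbrains.kotlinx:kotlinx-datetime:0.5.0")            // Date/Time
        }
        androidMain.dependencies {
            implementation("io.ktor:ktor-client-okhttp:2.3.7") // Android engine stays out of commonMain
        }
        iosMain.dependencies {
            implementation("io.ktor:ktor-client-darwin:2.3.7") // iOS engine stays out of commonMain
        }
        commonTest.dependencies {
            implementation(kotlin("test"))
        }
    }
}
```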
## Response Template
```
## KMP Project Review Results
**Project**: [name]
**Kotlin**: 2.0.x
**Targets**: Android, iOS (arm64, simulatorArm64)
### Module Structure
| Status | Item | Issue |
|--------|------|-------|
| OK | Source set separation | commonMain/androidMain/iosMain correct |
| MEDIUM | Tests | Add commonTest recommended |
### expect/actual
| Status | File | Issue |
|--------|------|-------|
| OK | Platform.kt | expect/actual correctly implemented |
| HIGH | DataStore.kt | Missing iosMain actual implementation |
### iOS Interop
| Status | Item | Issue |
|--------|------|-------|
| HIGH | Repository.kt | suspend function needs iOS wrapper |
| MEDIUM | UiState.kt | Add @ObjCName to sealed class |
### Recommended Actions
1. [ ] Add DataStore iosMain actual implementation
2. [ ] Add completion handler wrapper to fetchData()
3. [ ] Add commonTest source set
```
## Best Practices
1. **Share Scope**: Business logic > Data layer > UI (optional)
2. **expect/actual**: Minimize usage, prefer interface + DI
3. **iOS Interop**: Use SKIE library or manual wrappers
4. **Testing**: Test shared logic in commonTest
5. **Dependencies**: Prefer multiplatform libraries
## Integration
- `kotlin-android-reviewer` skill: Android specific code
- `kotlin-spring-reviewer` skill: Server shared code
- `code-reviewer` skill: General code quality
## Notes
- Based on Kotlin 2.0+
- KMP 1.9.20+ recommended (Stable)
- Compose Multiplatform requires separate review

View File

@@ -0,0 +1,355 @@
---
name: kotlin-spring-reviewer
description: |
WHEN: Spring Boot + Kotlin, Ktor backend review, coroutine-based server, WebFlux/R2DBC pattern checks
WHAT: Spring Kotlin idioms + Coroutines integration + WebFlux patterns + Data class usage + Test strategies
WHEN NOT: Android → kotlin-android-reviewer, KMP shared code → kotlin-multiplatform-reviewer
---
# Kotlin Spring Reviewer Skill
## Purpose
Reviews Spring Boot + Kotlin and Ktor backend projects for Kotlin idioms, Coroutines integration, WebFlux, and data class best practices.
## When to Use
- Spring Boot + Kotlin project code review
- Ktor server project review
- "WebFlux", "R2DBC", "Coroutines server" mentions
- Projects with `spring-boot` or `ktor` in `build.gradle.kts`
## Project Detection
- `org.springframework.boot` plugin in `build.gradle.kts`
- `io.ktor` dependency in `build.gradle.kts`
- `application.yml` or `application.properties`
- `Application.kt` main class
## Workflow
### Step 1: Analyze Project
```
**Framework**: Spring Boot 3.2.x
**Kotlin**: 1.9.x
**Build Tool**: Gradle (Kotlin DSL)
**Dependencies**:
- Spring WebFlux (reactive)
- Spring Data R2DBC
- Kotlinx Coroutines
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Kotlin Spring pattern check (recommended)
- Kotlin idiom usage
- Coroutines/WebFlux integration
- Data class/DTO design
- Test strategies
multiSelect: true
```
## Detection Rules
### Kotlin Idioms
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Java-style getter/setter | Use Kotlin property | LOW |
| if-based null check | Use ?.let, ?:, avoid !! | MEDIUM |
| if-else chain | Use when expression | LOW |
| Missing extension functions | Utility → extension function | LOW |
| Missing scope functions | Use apply, let, run, also | LOW |
```kotlin
// BAD: Java style
class User {
private var name: String? = null
fun getName(): String? = name
fun setName(name: String?) { this.name = name }
}
// GOOD: Kotlin property
class User {
var name: String? = null
}
// BAD: Java-style null check
fun process(user: User?) {
if (user != null) {
if (user.name != null) {
println(user.name)
}
}
}
// GOOD: Kotlin null-safe operators
fun process(user: User?) {
user?.name?.let { println(it) }
}
// BAD: if-else chain
fun getStatus(code: Int): String {
if (code == 200) return "OK"
else if (code == 404) return "Not Found"
else return "Unknown"
}
// GOOD: when expression
fun getStatus(code: Int): String = when (code) {
200 -> "OK"
404 -> "Not Found"
else -> "Unknown"
}
```
### Spring + Kotlin Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| @Autowired field injection | Constructor injection | HIGH |
| lateinit var abuse | Constructor injection or lazy | MEDIUM |
| Missing open class | Use all-open plugin | HIGH |
| data class @Entity | Use regular class | HIGH |
```kotlin
// BAD: Field injection
@Service
class UserService {
@Autowired
private lateinit var userRepository: UserRepository
}
// GOOD: Constructor injection (Kotlin default)
@Service
class UserService(
private val userRepository: UserRepository
)
// BAD: data class as JPA Entity
@Entity
data class User(
@Id val id: Long,
val name: String
) // equals/hashCode issues
// GOOD: Regular class with explicit equals/hashCode
@Entity
class User(
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
var id: Long? = null,
var name: String
) {
override fun equals(other: Any?): Boolean {
if (this === other) return true
if (other !is User) return false
return id != null && id == other.id
}
override fun hashCode(): Int = javaClass.hashCode()
}
```
**Gradle Plugin Check:**
```kotlin
// build.gradle.kts
plugins {
kotlin("plugin.spring") // all-open for Spring
kotlin("plugin.jpa") // no-arg for JPA entities
}
```
### Coroutines Integration
| Check | Recommendation | Severity |
|-------|----------------|----------|
| runBlocking in controller | Use suspend function | CRITICAL |
| GlobalScope in server | Use structured concurrency | CRITICAL |
| Missing Dispatcher | Specify IO/Default | MEDIUM |
| Missing exception handling | Use CoroutineExceptionHandler | HIGH |
```kotlin
// BAD: runBlocking in controller
@GetMapping("/users")
fun getUsers(): List<User> = runBlocking {
userService.getUsers()
}
// GOOD: suspend function (WebFlux/Coroutines)
@GetMapping("/users")
suspend fun getUsers(): List<User> =
userService.getUsers()
// BAD: GlobalScope in service
@Service
class UserService {
fun processAsync() {
GlobalScope.launch {
// Dangerous: Not cancelled on app shutdown
}
}
}
// GOOD: Structured concurrency
@Service
class UserService(
private val applicationScope: CoroutineScope
) {
fun processAsync() = applicationScope.launch {
// Properly cancelled on app shutdown
}
}
```
### WebFlux + Coroutines
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Direct Mono/Flux usage | Convert to suspend/Flow | MEDIUM |
| awaitSingle abuse | Use coRouter DSL | LOW |
| Blocking call | Use Dispatchers.IO | CRITICAL |
```kotlin
// OK: Direct Mono/Flux
@GetMapping("/user/{id}")
fun getUser(@PathVariable id: Long): Mono<User> =
userRepository.findById(id)
// BETTER: Kotlin Coroutines
@GetMapping("/user/{id}")
suspend fun getUser(@PathVariable id: Long): User? =
userRepository.findById(id).awaitSingleOrNull()
// BEST: coRouter DSL (functional endpoints)
@Configuration
class RouterConfig {
@Bean
fun routes(handler: UserHandler) = coRouter {
"/api/users".nest {
GET("", handler::getAll)
GET("/{id}", handler::getById)
POST("", handler::create)
}
}
}
class UserHandler(private val service: UserService) {
suspend fun getAll(request: ServerRequest): ServerResponse =
ServerResponse.ok().bodyAndAwait(service.getAll())
}
```
### Ktor Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Excessive routing nesting | Split into modules | MEDIUM |
| No DI | Use Koin/Kodein | MEDIUM |
| Missing error handling | Use StatusPages plugin | HIGH |
| Missing serialization | Use ContentNegotiation | HIGH |
```kotlin
// BAD: All routes in one file
fun Application.module() {
routing {
get("/users") { /* ... */ }
get("/users/{id}") { /* ... */ }
get("/products") { /* ... */ }
// ... 100 more
}
}
// GOOD: Split into modules
fun Application.module() {
configureRouting()
configureSerialization()
configureDI()
}
fun Application.configureRouting() {
routing {
userRoutes()
productRoutes()
}
}
fun Route.userRoutes() {
route("/users") {
get { /* ... */ }
get("/{id}") { /* ... */ }
post { /* ... */ }
}
}
```
### Data Class Design
| Check | Recommendation | Severity |
|-------|----------------|----------|
| var in DTO | Use val (immutable) | MEDIUM |
| Excessive nullable | Use defaults or required | LOW |
| Missing validation | Use @field:Valid, init {} | MEDIUM |
```kotlin
// BAD: Mutable DTO
data class CreateUserRequest(
var name: String?,
var email: String?
)
// GOOD: Immutable + validation
data class CreateUserRequest(
@field:NotBlank
val name: String,
@field:Email
val email: String
) {
init {
require(name.length <= 100) { "Name too long" }
}
}
```
## Response Template
```
## Kotlin Spring Code Review Results
**Project**: [name]
**Spring Boot**: 3.2.x | **Kotlin**: 1.9.x
**Stack**: WebFlux + R2DBC + Coroutines
### Kotlin Idioms
| Status | File | Issue |
|--------|------|-------|
| LOW | UserService.kt | Java-style null check → ?.let recommended |
### Spring Patterns
| Status | File | Issue |
|--------|------|-------|
| HIGH | ProductService.kt | @Autowired field injection → constructor injection |
| HIGH | User.kt | data class @Entity → regular class |
### Coroutines
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | ReportService.kt | runBlocking in controller |
| HIGH | BatchJob.kt | GlobalScope usage |
### Recommended Actions
1. [ ] Verify kotlin-spring, kotlin-jpa plugins
2. [ ] runBlocking → suspend function conversion
3. [ ] GlobalScope → applicationScope injection
4. [ ] data class Entity → regular class change
```
## Best Practices
1. **Constructor Injection**: Use default constructor instead of @Autowired
2. **Immutability**: val, data class (except Entity)
3. **Coroutines**: suspend functions, structured concurrency
4. **Kotlin DSL**: coRouter, bean { }
5. **Testing**: MockK, Kotest, @SpringBootTest
## Integration
- `code-reviewer` skill: General code quality
- `kotlin-multiplatform-reviewer` skill: KMP server sharing
- `security-scanner` skill: API security checks
## Notes
- Based on Spring Boot 3.x + Kotlin 1.9+
- WebFlux/R2DBC reactive stack support
- Ktor 2.x support

View File

@@ -0,0 +1,365 @@
---
name: migration-checker
description: |
WHEN: Database migration review, backwards compatibility, rollback safety, data integrity
WHAT: Migration safety + Rollback validation + Downtime analysis + Data preservation + Lock prevention
WHEN NOT: Query optimization → sql-optimizer, Schema design → schema-reviewer
---
# Migration Checker Skill
## Purpose
Reviews database migrations for safety, backwards compatibility, rollback capability, and zero-downtime deployment.
## When to Use
- Database migration review
- Schema change safety check
- Rollback plan validation
- Zero-downtime deployment check
- Data migration review
## Project Detection
- `migrations/` directory
- Prisma migrations
- Alembic (Python)
- TypeORM migrations
- Rails migrations
- Flyway/Liquibase
## Workflow
### Step 1: Analyze Migration
```
**Type**: Schema change / Data migration
**Risk Level**: High (adds non-nullable column)
**Tables Affected**: users, orders
**Estimated Rows**: 1M+
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full migration review (recommended)
- Backwards compatibility
- Rollback safety
- Lock and downtime analysis
- Data integrity
multiSelect: true
```
## Detection Rules
### Dangerous Operations
| Operation | Risk | Mitigation |
|-----------|------|------------|
| Add NOT NULL without default | CRITICAL | Add default or multi-step |
| Drop column | HIGH | Ensure no code references |
| Rename column | HIGH | Use multi-step migration |
| Change column type | HIGH | Check data compatibility |
| Add unique constraint | HIGH | Verify no duplicates first |
| Drop table | CRITICAL | Ensure no references |
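Most of these rows get a dedicated example below; the unique-constraint case does not, so here is a PostgreSQL-flavored sketch of the safe sequence (table and column names are illustrative):
```sql
-- Step 1: Verify there are no duplicates before adding the constraint
SELECT email, COUNT(*)
FROM users
GROUP BY email
HAVING COUNT(*) > 1;
-- Step 2: Build the unique index without blocking writes
-- (CONCURRENTLY cannot run inside a transaction)
CREATE UNIQUE INDEX CONCURRENTLY idx_users_email_unique ON users(email);
-- Step 3: Attach the constraint using the prebuilt index (no extra table scan)
ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE USING INDEX idx_users_email_unique;
-- Rollback
ALTER TABLE users DROP CONSTRAINT users_email_unique;
```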
### Adding Non-Nullable Column
```sql
-- BAD: Direct NOT NULL column (will fail on existing rows)
ALTER TABLE users ADD COLUMN phone VARCHAR(20) NOT NULL;
-- ERROR: column "phone" contains null values
-- GOOD: Multi-step migration
-- Step 1: Add nullable column
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
-- Step 2: Backfill data (in batches for large tables)
UPDATE users SET phone = 'unknown' WHERE phone IS NULL;
-- Step 3: Add NOT NULL constraint
ALTER TABLE users ALTER COLUMN phone SET NOT NULL;
```
### Renaming Columns (Zero-Downtime)
```sql
-- BAD: Direct rename (breaks running code)
ALTER TABLE users RENAME COLUMN name TO full_name;
-- Old code still uses "name"!
-- GOOD: Multi-step migration
-- Migration 1: Add new column
ALTER TABLE users ADD COLUMN full_name VARCHAR(200);
-- Migration 2: Sync data (application writes to both)
UPDATE users SET full_name = name WHERE full_name IS NULL;
-- Deploy: Update code to read from full_name
-- Migration 3: Drop old column (after code deployment)
ALTER TABLE users DROP COLUMN name;
```
### Index Operations
```sql
-- BAD: Regular CREATE INDEX (blocks writes)
CREATE INDEX idx_orders_user_id ON orders(user_id);
-- Locks table for duration!
-- GOOD: Concurrent index (PostgreSQL)
CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders(user_id);
-- Doesn't block writes, but takes longer
-- Note: CONCURRENTLY cannot be in transaction
-- Must be separate migration without transaction wrapper
```
### Data Migrations
```python
# BAD: Load all data into memory
def migrate():
users = User.objects.all() # 1M rows in memory!
for user in users:
user.email = user.email.lower()
user.save()
# GOOD: Batch processing
def migrate():
batch_size = 1000
total = User.objects.count()
for offset in range(0, total, batch_size):
users = User.objects.all()[offset:offset + batch_size]
for user in users:
user.email = user.email.lower()
User.objects.bulk_update(users, ['email'])
# BETTER: Raw SQL batches for large tables (re-run until 0 rows are updated)
def migrate():
with connection.cursor() as cursor:
cursor.execute("""
UPDATE users
SET email = LOWER(email)
WHERE id IN (
SELECT id FROM users
WHERE email != LOWER(email)
LIMIT 10000
)
""")
```
### Prisma Migration Safety
```typescript
// schema.prisma
// BAD: Direct non-nullable addition
model User {
id Int @id @default(autoincrement())
email String
phone String // New non-nullable field - will fail!
}
// GOOD: Add with default first
model User {
id Int @id @default(autoincrement())
email String
phone String @default("") // Safe to add
}
// Or nullable first, then make required
model User {
id Int @id @default(autoincrement())
email String
phone String? // Step 1: nullable
}
```
### TypeORM Migration
```typescript
// BAD: Unsafe migration
export class AddUserPhone1234567890 implements MigrationInterface {
async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`ALTER TABLE users ADD phone VARCHAR(20) NOT NULL`
);
}
async down(queryRunner: QueryRunner): Promise<void> {
// No rollback!
}
}
// GOOD: Safe migration with rollback
export class AddUserPhone1234567890 implements MigrationInterface {
async up(queryRunner: QueryRunner): Promise<void> {
// Step 1: Add nullable column
await queryRunner.query(
`ALTER TABLE users ADD phone VARCHAR(20)`
);
// Step 2: Backfill
await queryRunner.query(
`UPDATE users SET phone = '' WHERE phone IS NULL`
);
// Step 3: Add constraint
await queryRunner.query(
`ALTER TABLE users ALTER COLUMN phone SET NOT NULL`
);
}
async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`ALTER TABLE users DROP COLUMN phone`
);
}
}
```
### Django Migrations
```python
# BAD: RunPython without reverse
class Migration(migrations.Migration):
operations = [
migrations.RunPython(forwards_func), # No reverse!
]
# GOOD: With reverse operation
class Migration(migrations.Migration):
operations = [
migrations.RunPython(
forwards_func,
reverse_code=backwards_func
),
]
# BAD: Atomic migration for large data change
class Migration(migrations.Migration):
operations = [
migrations.RunPython(update_millions_of_rows),
]
# GOOD: Non-atomic for large changes
class Migration(migrations.Migration):
atomic = False
operations = [
migrations.RunPython(update_in_batches),
]
```
### Rollback Checklist
```sql
-- Verify rollback for each operation
-- ADD COLUMN → DROP COLUMN
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
-- Rollback: ALTER TABLE users DROP COLUMN phone;
-- ADD CONSTRAINT → DROP CONSTRAINT
ALTER TABLE orders ADD CONSTRAINT fk_user FOREIGN KEY (user_id) REFERENCES users(id);
-- Rollback: ALTER TABLE orders DROP CONSTRAINT fk_user;
-- CREATE INDEX → DROP INDEX
CREATE INDEX idx_email ON users(email);
-- Rollback: DROP INDEX idx_email;
-- RENAME COLUMN → Complex (need old column back)
-- Better to use add/copy/drop pattern
```
### Lock Analysis
```sql
-- PostgreSQL: Check what operations lock tables
-- https://www.postgresql.org/docs/current/explicit-locking.html
-- ACCESS EXCLUSIVE (blocks everything):
-- - DROP TABLE
-- - ALTER TABLE ... ADD COLUMN with DEFAULT (< PG 11)
-- - ALTER TABLE ... ALTER COLUMN TYPE
-- SHARE UPDATE EXCLUSIVE (blocks DDL, allows DML):
-- - CREATE INDEX CONCURRENTLY
-- - VACUUM
-- Check for blocking locks before migration
-- (simplified; the PostgreSQL wiki version adds the full lock-matching join conditions)
SELECT blocked_locks.pid          AS blocked_pid,
       blocked_activity.usename   AS blocked_user,
       blocking_locks.pid         AS blocking_pid,
       blocking_activity.usename  AS blocking_user,
       blocked_activity.query     AS blocked_statement
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
    ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
    ON blocking_locks.locktype = blocked_locks.locktype
   AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
    ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted
  AND blocking_locks.granted;
```
## Response Template
```
## Migration Review Results
**Migration**: 20240115_add_user_phone
**Type**: Schema change
**Risk Level**: HIGH
### Safety Analysis
| Status | Issue | Impact |
|--------|-------|--------|
| CRITICAL | Non-nullable column without default | Migration will fail |
| HIGH | No rollback operation defined | Cannot revert |
| MEDIUM | Missing concurrent index | Table lock during creation |
### Lock Analysis
| Operation | Lock Type | Duration |
|-----------|-----------|----------|
| ADD COLUMN | ACCESS EXCLUSIVE | Brief |
| CREATE INDEX | ACCESS EXCLUSIVE | Long (use CONCURRENTLY) |
### Rollback Plan
- [ ] Add rollback script
- [ ] Test rollback in staging
- [ ] Document manual rollback steps
### Recommended Changes
```sql
-- Step 1: Add nullable column
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
-- Step 2: Backfill data
UPDATE users SET phone = '' WHERE phone IS NULL;
-- Step 3: Add NOT NULL
ALTER TABLE users ALTER COLUMN phone SET NOT NULL;
-- Rollback
ALTER TABLE users DROP COLUMN phone;
```
### Pre-Migration Checklist
- [ ] Backup database
- [ ] Test in staging with production data
- [ ] Schedule during low-traffic period
- [ ] Notify team of potential downtime
- [ ] Prepare rollback script
```
## Best Practices
1. **Multi-Step**: Break dangerous operations into steps
2. **Rollback**: Always define down/reverse migration
3. **Concurrent**: Use CONCURRENTLY for indexes
4. **Batching**: Process large data in chunks
5. **Testing**: Test with production-like data volume
## Integration
- `schema-reviewer`: Schema design review
- `sql-optimizer`: Query performance
- `ci-cd-reviewer`: Migration in pipelines

View File

@@ -0,0 +1,158 @@
---
name: nextjs-reviewer
description: |
WHEN: Next.js project review, App Router patterns, Server/Client Component separation, data fetching
WHAT: Router pattern analysis + SC/CC separation + next/image·font optimization + Server Actions review
WHEN NOT: General code quality → code-reviewer, Bundle performance → perf-analyzer
---
# Next.js Reviewer Skill
## Purpose
Reviews Next.js specific patterns: App Router, Server Components, data fetching, and optimizations.
## When to Use
- Next.js project code review
- App Router, Server Component mentions
- Next.js optimization requests
- Projects with `next.config.js` or `next.config.mjs`
## Project Detection
- `next.config.js` or `next.config.mjs`
- `next` dependency in `package.json`
- `app/` directory (App Router)
- `pages/` directory (Pages Router)
## Workflow
### Step 1: Analyze Configuration
```
**Next.js**: 14.x
**Router**: App Router
**TypeScript**: Enabled
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Next.js pattern check (recommended)
- Server/Client Component separation
- Data fetching patterns
- Image/Font optimization
- Routing and layouts
multiSelect: true
```
## Detection Rules
### Server vs Client Components
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Unnecessary 'use client' | Remove if SC possible | MEDIUM |
| async in Client Component | Move to SC | HIGH |
| useState/useEffect | Check 'use client' need | INFO |
```typescript
// BAD: Unnecessary 'use client'
'use client'
export default function StaticPage() {
return <div>Static content</div>
}
// GOOD: Keep as Server Component
export default function StaticPage() {
return <div>Static content</div>
}
```
### App Router Files
| File | Purpose | Check |
|------|---------|-------|
| `page.tsx` | Route page | Default export |
| `layout.tsx` | Layout | children prop |
| `loading.tsx` | Loading UI | Suspense alternative |
| `error.tsx` | Error handling | 'use client' required |
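For the error.tsx row, a minimal sketch of the required shape (copy and message are illustrative):
```typescript
// app/products/error.tsx - error boundaries must be Client Components
'use client'

export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string }
  reset: () => void
}) {
  return (
    <div>
      <p>Something went wrong loading products.</p>
      {/* reset() re-renders the segment and retries */}
      <button onClick={() => reset()}>Try again</button>
    </div>
  )
}
```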
### Data Fetching
| Pattern | Recommendation |
|---------|----------------|
| SC fetch | Recommended (auto caching) |
| Route Handler | API endpoints |
| Client SWR/React Query | Real-time data |
```typescript
// GOOD: Direct fetch in Server Component
async function ProductPage({ params }) {
  // fetch needs an absolute URL on the server; parse the Response before use
  const res = await fetch(`https://api.example.com/products/${params.id}`)
  const product = await res.json()
  return <Product data={product} />
}
// WARNING: useEffect fetch in Client
'use client'
function ProductPage({ params }) {
useEffect(() => { fetch(...) }, [])
// Recommend: Move to Server Component
}
```
### Image Optimization
| Check | Issue | Severity |
|-------|-------|----------|
| `<img>` tag | Use `next/image` | HIGH |
| Missing width/height | Layout shift | MEDIUM |
| Missing priority | LCP image needs it | MEDIUM |
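A minimal sketch of the preferred pattern behind those rows (path and dimensions are illustrative):
```typescript
import Image from 'next/image'

export default function Hero() {
  return (
    <Image
      src="/hero.jpg"
      alt="Product hero"
      width={1200}   // explicit dimensions prevent layout shift
      height={600}
      priority       // mark the LCP image so it is preloaded
    />
  )
}
```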
### Server Actions
| Check | Description | Severity |
|-------|-------------|----------|
| 'use server' location | Top of function/file | HIGH |
| Input validation | Use zod | HIGH |
| Error handling | try-catch | MEDIUM |
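Putting those three rows together, a rough sketch of a validated Server Action; the schema, field names, and the commented-out database call are illustrative:
```typescript
// app/actions.ts
'use server'

import { z } from 'zod'

const CreateUserSchema = z.object({
  name: z.string().min(1).max(100),
  email: z.string().email(),
})

export async function createUser(formData: FormData) {
  // Validate untrusted input before touching the database
  const parsed = CreateUserSchema.safeParse({
    name: formData.get('name'),
    email: formData.get('email'),
  })
  if (!parsed.success) {
    return { ok: false, errors: parsed.error.flatten().fieldErrors }
  }
  try {
    // await db.user.create({ data: parsed.data })
    return { ok: true }
  } catch (e) {
    return { ok: false, errors: { _form: ['Failed to create user'] } }
  }
}
```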
## Response Template
```
## Next.js Review Results
**Project**: [name]
**Next.js**: 14.x (App Router)
### Server/Client Components
| Status | File | Issue |
|--------|------|-------|
| WARNING | app/products/page.tsx | Unnecessary 'use client' |
### Data Fetching
| Status | Location | Recommendation |
|--------|----------|----------------|
| WARNING | app/dashboard/page.tsx | Move useEffect fetch to SC |
### Optimization
| Area | Status | Details |
|------|--------|---------|
| Images | WARNING | 3 files using <img> |
| Fonts | OK | Using next/font |
### Actions
1. [ ] Remove 'use client' from `app/products/page.tsx`
2. [ ] Move fetch to Server Component
3. [ ] Use next/image in `components/Banner.tsx`
```
## Best Practices
1. **Server First**: Keep SC when possible
2. **Colocation**: Fetch data where needed
3. **Streaming**: Use loading.tsx and Suspense
4. **Caching**: Explicit fetch caching strategy
5. **Metadata**: Use generateMetadata
## Integration
- `code-reviewer` skill
- `perf-analyzer` skill
- `security-scanner` skill
## Notes
- Based on Next.js 13+ App Router
- Pages Router has different rules
- Review next.config.js settings too

View File

@@ -0,0 +1,372 @@
---
name: orm-reviewer
description: |
WHEN: ORM code review, Prisma/TypeORM/SQLAlchemy/GORM patterns, lazy loading, transactions
WHAT: Query efficiency + Lazy/eager loading + Transaction handling + N+1 prevention + Model design
WHEN NOT: Raw SQL → sql-optimizer, Schema design → schema-reviewer
---
# ORM Reviewer Skill
## Purpose
Reviews ORM code for query efficiency, loading strategies, transactions, and best practices.
## When to Use
- ORM code review
- Prisma/TypeORM/SQLAlchemy patterns
- N+1 query detection
- Transaction handling review
- Model relationship design
## Project Detection
- `schema.prisma` (Prisma)
- `@Entity` decorators (TypeORM)
- `models.py` (Django/SQLAlchemy)
- GORM struct tags
## Workflow
### Step 1: Analyze ORM
```
**ORM**: Prisma / TypeORM / SQLAlchemy
**Database**: PostgreSQL
**Models**: 15
**Relationships**: HasMany, BelongsTo, ManyToMany
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full ORM review (recommended)
- Query efficiency
- Loading strategies (N+1)
- Transaction handling
- Model design
multiSelect: true
```
## Detection Rules
### Prisma Patterns
#### N+1 Prevention
```typescript
// BAD: N+1 queries
const users = await prisma.user.findMany();
for (const user of users) {
const orders = await prisma.order.findMany({
where: { userId: user.id }
});
// N+1 queries!
}
// GOOD: Include related data
const users = await prisma.user.findMany({
include: {
orders: true
}
});
// GOOD: Selective include
const users = await prisma.user.findMany({
include: {
orders: {
where: { status: 'pending' },
take: 5,
orderBy: { createdAt: 'desc' }
}
}
});
```
#### Select Optimization
```typescript
// BAD: Fetching all fields
const users = await prisma.user.findMany();
// GOOD: Select only needed fields
const users = await prisma.user.findMany({
select: {
id: true,
name: true,
email: true
}
});
// GOOD: Combining select with relations
const orders = await prisma.order.findMany({
select: {
id: true,
total: true,
user: {
select: {
name: true,
email: true
}
}
}
});
```
#### Transactions
```typescript
// BAD: No transaction for related operations
await prisma.order.create({ data: orderData });
await prisma.orderItem.createMany({ data: items });
await prisma.inventory.updateMany({ ... });
// If inventory update fails, order is orphaned!
// GOOD: Interactive transaction
const result = await prisma.$transaction(async (tx) => {
const order = await tx.order.create({ data: orderData });
await tx.orderItem.createMany({
data: items.map(item => ({ ...item, orderId: order.id }))
});
await tx.inventory.updateMany({
where: { productId: { in: items.map(i => i.productId) } },
data: { quantity: { decrement: 1 } }
});
return order;
});
```
### TypeORM Patterns
#### Eager vs Lazy Loading
```typescript
// BAD: Lazy loading causing N+1
@Entity()
class User {
@OneToMany(() => Order, order => order.user)
orders: Order[]; // Lazy by default
}
// Triggers N+1:
const users = await userRepo.find();
for (const user of users) {
console.log(user.orders); // Each access = query!
}
// GOOD: Eager loading with relations
const users = await userRepo.find({
relations: ['orders']
});
// GOOD: QueryBuilder for complex queries
const users = await userRepo
.createQueryBuilder('user')
.leftJoinAndSelect('user.orders', 'order')
.where('order.status = :status', { status: 'pending' })
.getMany();
```
#### Query Optimization
```typescript
// BAD: Multiple queries
const user = await userRepo.findOne({ where: { id } });
const orders = await orderRepo.find({ where: { userId: id } });
const reviews = await reviewRepo.find({ where: { userId: id } });
// GOOD: Single query with joins
const user = await userRepo.findOne({
where: { id },
relations: ['orders', 'reviews']
});
// GOOD: Select specific columns
const users = await userRepo.find({
select: ['id', 'name', 'email'],
where: { status: 'active' }
});
```
### SQLAlchemy Patterns
#### Loading Strategies
```python
# BAD: Lazy loading N+1
users = session.query(User).all()
for user in users:
print(user.orders) # N queries!
# GOOD: Eager loading
from sqlalchemy.orm import joinedload, selectinload
# For one-to-many: selectinload (2 queries)
users = session.query(User).options(
selectinload(User.orders)
).all()
# For many-to-one: joinedload (1 query)
orders = session.query(Order).options(
joinedload(Order.user)
).all()
# Combined
users = session.query(User).options(
selectinload(User.orders).selectinload(Order.items),
joinedload(User.profile)
).all()
```
#### Session Management
```python
# BAD: Long-lived session
session = Session()
# ... many operations over time
session.close() # Objects may be stale
# GOOD: Context manager
with Session() as session:
user = session.query(User).first()
# Session auto-closes
# GOOD: Scoped session in web apps
from sqlalchemy.orm import scoped_session
Session = scoped_session(sessionmaker(bind=engine))
# In request handler
def handle_request():
try:
# Use Session()
Session.commit()
except:
Session.rollback()
raise
finally:
Session.remove() # Clean up for this thread
```
### Django ORM Patterns
#### N+1 Prevention
```python
# BAD: N+1 queries
orders = Order.objects.all()
for order in orders:
print(order.user.name) # N queries!
# GOOD: select_related for ForeignKey
orders = Order.objects.select_related('user').all()
# GOOD: prefetch_related for reverse FK / M2M
users = User.objects.prefetch_related('orders').all()
# GOOD: Combined with Prefetch for filtering
from django.db.models import Prefetch
users = User.objects.prefetch_related(
Prefetch(
'orders',
queryset=Order.objects.filter(status='pending')
)
).all()
```
#### Query Optimization
```python
# BAD: .count() after .all()
count = len(Order.objects.all()) # Loads all objects!
# GOOD: Database count
count = Order.objects.count()
# BAD: Multiple queries
user = User.objects.get(id=user_id)
order_count = Order.objects.filter(user=user).count()
# GOOD: Annotation
from django.db.models import Count
user = User.objects.annotate(
order_count=Count('orders')
).get(id=user_id)
# values_list() / values() fetch only the needed columns
emails = User.objects.values_list('email', flat=True)
```
### GORM (Go) Patterns
```go
// BAD: N+1 queries
var users []User
db.Find(&users)
for _, user := range users {
var orders []Order
db.Where("user_id = ?", user.ID).Find(&orders)
}
// GOOD: Preload
var users []User
db.Preload("Orders").Find(&users)
// GOOD: Preload with conditions
db.Preload("Orders", "status = ?", "pending").Find(&users)
// GOOD: Nested preload
db.Preload("Orders.Items").Find(&users)
// Select specific columns
var users []User
db.Select("id", "name", "email").Find(&users)
```
## Response Template
```
## ORM Code Review Results
**ORM**: Prisma
**Database**: PostgreSQL
**Models**: 12
### N+1 Queries
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | userService.ts:45 | Loop with findMany causing N+1 |
### Loading Strategy
| Status | File | Issue |
|--------|------|-------|
| HIGH | orderController.ts:23 | Missing include for user relation |
### Transactions
| Status | File | Issue |
|--------|------|-------|
| HIGH | checkoutService.ts:67 | Related operations without transaction |
### Query Efficiency
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | reportService.ts:12 | Selecting all columns |
### Recommended Fixes
```typescript
// Fix N+1 in userService.ts
const users = await prisma.user.findMany({
include: { orders: true }
});
// Add transaction in checkoutService.ts
await prisma.$transaction(async (tx) => {
// ... atomic operations
});
```
```
## Best Practices
1. **Loading**: Eager load known relations
2. **N+1**: Always include/preload in lists
3. **Select**: Only needed columns
4. **Transactions**: Wrap related operations
5. **Batching**: Use createMany, bulk operations
## Integration
- `sql-optimizer`: Raw query optimization
- `schema-reviewer`: Model design
- `perf-analyzer`: Application performance

View File

@@ -0,0 +1,223 @@
---
name: perf-analyzer
description: |
WHEN: Performance analysis, bundle size optimization, rendering, Core Web Vitals, code splitting
WHAT: Bundle analysis + large dependency detection + re-render issues + useMemo/useCallback suggestions + LCP/FID/CLS improvements
WHEN NOT: Code quality → code-reviewer, Security → security-scanner
---
# Performance Analyzer Skill
## Purpose
Analyzes frontend application performance and suggests optimizations for bundle size, rendering, and images.
## When to Use
- Performance analysis requests
- "Slow", "long loading" mentions
- Bundle size, rendering, Core Web Vitals questions
- Pre-production performance review
## Workflow
### Step 1: Select Analysis Areas
**AskUserQuestion:**
```
"Which areas to analyze?"
Options:
- Full performance analysis (recommended)
- Bundle size analysis
- Rendering performance (re-renders)
- Image/asset optimization
- Code splitting opportunities
multiSelect: true
```
## Analysis Areas
### Bundle Size
| Item | Threshold | Severity |
|------|-----------|----------|
| Total bundle | > 500KB | HIGH |
| Initial JS | > 200KB | HIGH |
| Single chunk | > 100KB | MEDIUM |
| Unused code | Tree-shaking failures | MEDIUM |
**Large Dependencies:**
```typescript
// WARNING: Large libraries
import _ from 'lodash' // 71KB
import moment from 'moment' // 280KB
// BETTER: Lightweight alternatives
import debounce from 'lodash/debounce' // 2KB
import { format } from 'date-fns' // Only needed functions
```
**Tree-shaking Issues:**
```typescript
// BAD: Full import (no tree-shaking)
import * as utils from './utils'
// GOOD: Named import
import { specificFunction } from './utils'
```
### Rendering Performance
**Unnecessary Re-renders:**
```typescript
// WARNING: New object/array every render
function Component() {
return <Child style={{ color: 'red' }} /> // New object each time
}
// BETTER: External definition
const style = { color: 'red' }
function Component() {
return <Child style={style} />
}
```
**Expensive Computations:**
```typescript
// WARNING: Computed every render
function Component({ data }) {
const processed = data.map(item => expensiveOp(item))
return <List items={processed} />
}
// BETTER: useMemo caching
const processed = useMemo(
() => data.map(item => expensiveOp(item)),
[data]
)
```
**Callback Optimization:**
```typescript
// WARNING: New function every render
<Child onClick={() => handleClick()} />
// BETTER: useCallback
const handleClick = useCallback(() => { /* ... */ }, [])
<Child onClick={handleClick} />
```
### Image Optimization
| Issue | Problem | Solution |
|-------|---------|----------|
| Large image | > 200KB | Compress or WebP |
| Unoptimized format | PNG/JPG | WebP/AVIF |
| Missing lazy load | Offscreen images | loading="lazy" |
| Fixed size | Non-responsive | srcset/sizes |
```typescript
// BAD: No optimization
<img src="/hero.jpg" alt="Hero" />
// GOOD: next/image
<Image
src="/hero.jpg"
alt="Hero"
width={1200}
height={600}
priority // LCP image
placeholder="blur"
/>
```
### Code Splitting
```typescript
// WARNING: Unnecessary initial load
import HeavyComponent from './HeavyComponent'
// BETTER: Load on demand
const HeavyComponent = dynamic(() => import('./HeavyComponent'), {
loading: () => <Skeleton />
})
```
### Core Web Vitals
**LCP (Largest Contentful Paint)**
| Grade | Time | Improvements |
|-------|------|--------------|
| Good | < 2.5s | - |
| Needs Work | 2.5-4s | Image optimization, server response |
| Poor | > 4s | CDN, caching, code splitting |
**FID (First Input Delay)**
| Grade | Time | Improvements |
|-------|------|--------------|
| Good | < 100ms | - |
| Needs Work | 100-300ms | Reduce main thread blocking |
| Poor | > 300ms | Split long tasks, Web Workers |
**CLS (Cumulative Layout Shift)**
| Grade | Score | Improvements |
|-------|-------|--------------|
| Good | < 0.1 | - |
| Needs Work | 0.1-0.25 | Specify image/font dimensions |
| Poor | > 0.25 | Reserve space for dynamic content |
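To see which band a real page lands in, a rough field-measurement sketch using the browser's PerformanceObserver (the web-vitals package gives the same numbers with less code); the reporting callback is illustrative:
```typescript
// Rough field measurement of LCP and CLS; run in the browser, not during SSR
export function observeWebVitals(report: (name: string, value: number) => void) {
  // LCP: the last candidate entry before user input is the final value
  new PerformanceObserver((list) => {
    const entries = list.getEntries()
    const last = entries[entries.length - 1]
    if (last) report('LCP', last.startTime)
  }).observe({ type: 'largest-contentful-paint', buffered: true })

  // CLS: sum layout shifts that were not caused by recent user input
  let cls = 0
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries() as any[]) {
      if (!entry.hadRecentInput) cls += entry.value
    }
    report('CLS', cls)
  }).observe({ type: 'layout-shift', buffered: true })
}
```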
## Response Template
```
## Performance Analysis Results
**Project**: [name]
### Bundle Size
| Item | Size | Status |
|------|------|--------|
| Total bundle | 650KB | WARNING |
| Initial JS | 180KB | OK |
**Large Dependencies:**
| Package | Size | Alternative |
|---------|------|-------------|
| moment | 280KB | date-fns (7KB) |
| lodash | 71KB | lodash-es + individual imports |
### Rendering Performance
| Component | Issue | Recommendation |
|-----------|-------|----------------|
| ProductList | Unnecessary re-renders | Add useMemo |
### Image Optimization
| Image | Size | Recommendation |
|-------|------|----------------|
| hero.jpg | 450KB | Convert to WebP, use next/image |
### Code Splitting Opportunities
| Component | Size | Recommendation |
|-----------|------|----------------|
| Dashboard | 85KB | dynamic import |
### Priority Actions
1. [ ] moment → date-fns migration (-273KB)
2. [ ] Add useMemo to ProductList
3. [ ] Convert hero.jpg to WebP
4. [ ] Dynamic import Dashboard
### Expected Improvements
- Initial bundle: 650KB → ~350KB (-46%)
- LCP: Expected improvement
- TTI: Expected improvement
```
## Best Practices
1. **Measure First**: Always measure before optimizing
2. **Incremental**: Apply one change at a time
3. **Trade-offs**: Avoid over-optimization
4. **Real Device Testing**: Test on low-end devices
5. **Continuous Monitoring**: Prevent performance regression
## Integration
- `code-reviewer` skill
- `nextjs-reviewer` skill
- `/analyze-code` command
## Notes
- Static analysis based, runtime performance may differ
- Use with Lighthouse for actual measurements
- Analyze production builds

View File

@@ -0,0 +1,153 @@
---
name: prompt-clarifier
description: |
WHEN: Ambiguous prompts, vague requirements, missing context, unclear instructions
WHAT: Ambiguity detection + AskUserQuestion clarification + Interactive option selection
WHEN NOT: Clear detailed instructions → proceed directly
---
# Prompt Clarifier Skill
## Purpose
Detects ambiguous prompts and asks clarification questions using AskUserQuestion with interactive selections.
## When to Use
Activate when:
1. Prompt seems ambiguous or lacks necessary details
2. User wants to create/build something without specifying technical details
3. Vague instructions like "fix this", "optimize", or "improve" without context
4. Excessive pronouns ("this", "that", "it") without clear references
## Detection Criteria
Consider prompt ambiguous if it:
- Is very short (< 5 words) and lacks context
- Mentions project type without specifying:
- Technology stack
- Main features
- Project scope
- Contains optimization requests without specifying aspect:
- Performance/speed
- Memory usage
- Code readability
- Bundle size
- References code/files without paths
- Uses vague verbs without specifying target
## Workflow
### Step 1: Acknowledge
Briefly acknowledge what the user is asking for.
### Step 2: Use AskUserQuestion
Present targeted questions with multiple-choice options.
**Example Question Patterns:**
For missing tech stack:
```
AskUserQuestion: "Which tech stack would you like to use?"
Options:
- React + TypeScript + Node.js
- Vue + JavaScript + Express
- Vanilla JS + Python + FastAPI
- I'll specify manually
```
For missing features:
```
AskUserQuestion: "Select main features (multiple selection)"
Options:
- User auth/login
- Database CRUD
- File upload
- Real-time (WebSocket)
- Payment system
- Admin dashboard
multiSelect: true
```
For vague optimization:
```
AskUserQuestion: "What type of optimization do you need?"
Options:
- Execution speed/performance
- Memory usage reduction
- Bundle size reduction
- Code readability
```
For project scope:
```
AskUserQuestion: "What's the project scale?"
Options:
- Simple prototype/MVP
- Medium-scale project
- Large enterprise-level
- Not sure (recommend for me)
```
### Step 3: Provide Guidance
After receiving answers, proceed with enhanced context.
## Response Template
When detecting ambiguity:
1. **Acknowledge**: "Got it. I'll help you with [request]."
2. **Explain**: "To provide better results, I have a few questions."
3. **Ask**: Use AskUserQuestion with relevant options
4. **Proceed**: Start implementation with full context
## Examples
### Example 1: Vague Web Project
**User**: "Build me a website"
**Response**:
```
Got it. I'll help you build a website.
For better results, I have a few questions.
[Use AskUserQuestion]
1. Tech stack selection
2. Main features selection
3. Project scope
[After answers, proceed with implementation]
```
### Example 2: Vague Optimization
**User**: "Optimize this code"
**Response**:
```
I'll help optimize your code.
First, let me understand what type of optimization you need.
[Use AskUserQuestion for optimization type]
[Ask for file path if not specified]
[Proceed with targeted optimization]
```
## Best Practices
1. **Keep questions focused**: Ask only what's necessary
2. **Provide sensible defaults**: Include common choices
3. **Allow custom input**: Include "Other" option
4. **Be conversational**: Don't make it feel like a form
5. **Group related questions**: Ask related questions together
6. **Proceed efficiently**: Once you have enough context, start working
## Integration with Hook
Works with UserPromptSubmit hook. When you see:
```
<!-- VIBE CODING ASSISTANT: PROMPT CLARIFICATION NEEDED -->
```
Automatically activate this skill and use AskUserQuestion.
## Notes
- Enhances vibe coding by ensuring sufficient context
- Interactive selections make it easy to provide details
- Don't ask unnecessary questions if prompt is clear

View File

@@ -0,0 +1,313 @@
---
name: python-data-reviewer
description: |
WHEN: Pandas/NumPy code review, data processing, vectorization, memory optimization
WHAT: Vectorization patterns + Memory efficiency + Data validation + Performance optimization + Best practices
WHEN NOT: Web framework → fastapi/django/flask-reviewer, General Python → python-reviewer
---
# Python Data Reviewer Skill
## Purpose
Reviews data science code for Pandas/NumPy efficiency, memory usage, and best practices.
## When to Use
- Pandas/NumPy code review
- Data processing optimization
- Memory efficiency check
- "Why is my data code slow?"
- ETL pipeline review
## Project Detection
- `pandas`, `numpy` in requirements.txt
- `.ipynb` Jupyter notebooks
- `data/`, `notebooks/` directories
- DataFrame operations in code
## Workflow
### Step 1: Analyze Project
```
**Pandas**: 2.0+
**NumPy**: 1.24+
**Other**: polars, dask, vaex
**Visualization**: matplotlib, seaborn, plotly
**ML**: scikit-learn, xgboost
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full data code review (recommended)
- Vectorization and performance
- Memory optimization
- Data validation
- Code organization
multiSelect: true
```
## Detection Rules
### Vectorization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| iterrows() loop | Use vectorized operations | CRITICAL |
| apply() with simple func | Use built-in vectorized | HIGH |
| Manual loop over array | Use NumPy broadcasting | HIGH |
| List comprehension on Series | Use .map() or vectorize | MEDIUM |
```python
# BAD: iterrows (extremely slow)
for idx, row in df.iterrows():
df.loc[idx, "total"] = row["price"] * row["quantity"]
# GOOD: Vectorized operation
df["total"] = df["price"] * df["quantity"]
# BAD: apply with simple operation
df["upper_name"] = df["name"].apply(lambda x: x.upper())
# GOOD: Built-in string method
df["upper_name"] = df["name"].str.upper()
# BAD: apply with condition
df["status"] = df["score"].apply(lambda x: "pass" if x >= 60 else "fail")
# GOOD: np.where or np.select
df["status"] = np.where(df["score"] >= 60, "pass", "fail")
# Multiple conditions
conditions = [
df["score"] >= 90,
df["score"] >= 60,
df["score"] < 60,
]
choices = ["A", "B", "F"]
df["grade"] = np.select(conditions, choices)
```
### Memory Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| int64 for small ints | Use int8/int16/int32 | MEDIUM |
| object dtype for categories | Use category dtype | HIGH |
| Loading full file | Use chunks or usecols | HIGH |
| Keeping unused columns | Drop early | MEDIUM |
```python
# BAD: Default dtypes waste memory
df = pd.read_csv("large_file.csv") # All int64, object
# GOOD: Specify dtypes
dtype_map = {
"id": "int32",
"age": "int8",
"status": "category",
"price": "float32",
}
df = pd.read_csv("large_file.csv", dtype=dtype_map)
# GOOD: Load only needed columns
df = pd.read_csv(
"large_file.csv",
usecols=["id", "name", "price"],
dtype=dtype_map,
)
# GOOD: Process in chunks
chunks = pd.read_csv("huge_file.csv", chunksize=100_000)
result = pd.concat([process_chunk(chunk) for chunk in chunks])
# Memory check
print(df.info(memory_usage="deep"))
# Convert object to category if low cardinality
for col in df.select_dtypes(include=["object"]).columns:
if df[col].nunique() / len(df) < 0.5: # < 50% unique
df[col] = df[col].astype("category")
```
### Data Validation
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No null check | Validate nulls early | HIGH |
| No dtype validation | Assert expected types | MEDIUM |
| No range validation | Check value bounds | MEDIUM |
| Silent data issues | Raise or log warnings | HIGH |
```python
# GOOD: Data validation function
def validate_dataframe(df: pd.DataFrame) -> pd.DataFrame:
"""Validate and clean input DataFrame."""
# Required columns
required_cols = ["id", "name", "price", "quantity"]
missing = set(required_cols) - set(df.columns)
if missing:
raise ValueError(f"Missing columns: {missing}")
# Null checks
null_counts = df[required_cols].isnull().sum()
if null_counts.any():
logger.warning(f"Null values found:\n{null_counts[null_counts > 0]}")
# Type validation
assert df["id"].dtype in ["int32", "int64"], "id must be integer"
assert df["price"].dtype in ["float32", "float64"], "price must be float"
# Range validation
invalid_prices = df[df["price"] < 0]
if len(invalid_prices) > 0:
logger.warning(f"Found {len(invalid_prices)} negative prices")
df = df[df["price"] >= 0]
# Duplicate check
duplicates = df.duplicated(subset=["id"])
if duplicates.any():
logger.warning(f"Removing {duplicates.sum()} duplicates")
df = df.drop_duplicates(subset=["id"])
return df
```
### NumPy Patterns
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Python loop over array | Use broadcasting | CRITICAL |
| np.append in loop | Pre-allocate array | HIGH |
| Repeated array creation | Reuse arrays | MEDIUM |
| Copy when not needed | Use views | MEDIUM |
```python
# BAD: Python loop
result = []
for i in range(len(arr)):
result.append(arr[i] * 2 + 1)
result = np.array(result)
# GOOD: Vectorized
result = arr * 2 + 1
# BAD: np.append in loop (creates new array each time)
result = np.array([])
for x in data:
result = np.append(result, process(x))
# GOOD: Pre-allocate
result = np.empty(len(data))
for i, x in enumerate(data):
result[i] = process(x)
# BETTER: Vectorize if possible
result = np.vectorize(process)(data)
# BEST: Pure NumPy operation
result = np.sqrt(data) + np.log(data)
# BAD: Redundant copy (boolean indexing already returns a new array)
subset = arr[arr > 0].copy()  # the extra copy() duplicates the data again
# GOOD: Boolean indexing alone is enough
subset = arr[arr > 0]  # already a new array, not a view
# NOTE: Basic slicing (e.g. arr[2:10]) does return a view when one is needed
```
### Pandas Best Practices
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Chained indexing | Use .loc/.iloc | HIGH |
| inplace=True | Assign result instead | MEDIUM |
| reset_index() abuse | Keep index when useful | LOW |
| df.append (deprecated) | Use pd.concat | HIGH |
```python
# BAD: Chained indexing (unpredictable)
df[df["price"] > 100]["status"] = "premium" # May not work!
# GOOD: Use .loc
df.loc[df["price"] > 100, "status"] = "premium"
# BAD: df.append (deprecated)
result = pd.DataFrame()
for chunk in chunks:
result = result.append(chunk)
# GOOD: pd.concat
result = pd.concat(chunks, ignore_index=True)
# BAD: Multiple operations creating copies
df = df.dropna()
df = df.reset_index(drop=True)
df = df.sort_values("date")
# GOOD: Method chaining
df = (
df
.dropna()
.reset_index(drop=True)
.sort_values("date")
)
```
## Response Template
```
## Python Data Code Review Results
**Project**: [name]
**Pandas**: 2.1 | **NumPy**: 1.26 | **Size**: ~1M rows
### Vectorization
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | process.py:45 | iterrows() loop - use vectorized ops |
### Memory
| Status | File | Issue |
|--------|------|-------|
| HIGH | load.py:23 | object dtype for 'status' - use category |
### Data Validation
| Status | File | Issue |
|--------|------|-------|
| HIGH | etl.py:67 | No null value handling |
### NumPy
| Status | File | Issue |
|--------|------|-------|
| HIGH | calc.py:12 | np.append in loop - pre-allocate |
### Recommended Actions
1. [ ] Replace iterrows with vectorized operations
2. [ ] Convert low-cardinality columns to category
3. [ ] Add data validation before processing
4. [ ] Pre-allocate NumPy arrays in loops
```
## Performance Tips
```python
# Profile memory usage
df.info(memory_usage="deep")
# Profile time
%timeit df.apply(func) # Jupyter magic
%timeit df["col"].map(func)
# Use query() for complex filters
df.query("price > 100 and category == 'A'")
# Use eval() for complex expressions
df.eval("profit = revenue - cost")
```
## Best Practices
1. **Vectorize**: Avoid loops, use broadcasting
2. **Dtypes**: Use smallest sufficient type
3. **Validate**: Check data quality early
4. **Profile**: Measure before optimizing
5. **Chunk**: Process large files incrementally
## Integration
- `python-reviewer`: General Python patterns
- `perf-analyzer`: Performance profiling
- `coverage-analyzer`: Test coverage for data code

View File

@@ -0,0 +1,220 @@
---
name: python-reviewer
description: |
WHEN: General Python code review, PEP8 compliance, type hints, Pythonic patterns
WHAT: PEP8/style check + Type hint validation + Pythonic idioms + Error handling + Documentation
WHEN NOT: FastAPI → fastapi-reviewer, Django → django-reviewer, Data science → python-data-reviewer
---
# Python Reviewer Skill
## Purpose
Reviews Python code for style, idioms, type safety, and best practices.
## When to Use
- Python code review requests
- PEP8 compliance check
- Type hint review
- "Is this Pythonic?" questions
- General Python project review
## Project Detection
- `requirements.txt`, `pyproject.toml`, `setup.py`, `setup.cfg`
- `.py` files in project
- `__init__.py` module structure
## Workflow
### Step 1: Analyze Project
```
**Python Version**: 3.11+
**Package Manager**: pip/poetry/uv
**Type Checking**: mypy/pyright
**Linter**: ruff/flake8/pylint
**Formatter**: black/ruff
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Python review (recommended)
- PEP8/Style compliance
- Type hints and safety
- Error handling patterns
- Performance and idioms
multiSelect: true
```
## Detection Rules
### PEP8 & Style
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Line > 88 chars | Break line or refactor | LOW |
| Missing docstring | Add module/function docstring | MEDIUM |
| Import order wrong | Use isort or ruff | LOW |
| Inconsistent naming | snake_case for functions/vars | MEDIUM |
```python
# BAD: Inconsistent naming
def getUserName(userId):
pass
# GOOD: PEP8 naming
def get_user_name(user_id: int) -> str:
pass
```
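The missing-docstring check has no example above; a Google-style docstring (one of the styles suggested under Best Practices below) might look like this, with the function body elided:
```python
def get_user_name(user_id: int) -> str:
    """Return the display name for a user.

    Args:
        user_id: Primary key of the user record.

    Returns:
        The user's display name.

    Raises:
        KeyError: If no user with ``user_id`` exists.
    """
    ...
```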
### Type Hints
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Missing return type | Add -> ReturnType | MEDIUM |
| Any type overuse | Use specific types | MEDIUM |
| Optional without None check | Add None handling | HIGH |
| Missing generic types | Use list[T], dict[K,V] | LOW |
```python
# BAD: No type hints
def process(data):
return data.get("name")
# GOOD: Full type hints
def process(data: dict[str, Any]) -> str | None:
return data.get("name")
```
### Pythonic Idioms
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Manual loop for list | Use list comprehension | LOW |
| if x == True | Use if x | LOW |
| Manual dict iteration | Use .items(), .keys(), .values() | LOW |
| try/except pass | Handle or log exception | HIGH |
| Mutable default arg | Use None default | CRITICAL |
```python
# BAD: Mutable default argument
def append_to(item, target=[]):
target.append(item)
return target
# GOOD: None default
def append_to(item, target: list | None = None) -> list:
if target is None:
target = []
target.append(item)
return target
# BAD: Manual loop
result = []
for x in items:
if x > 0:
result.append(x * 2)
# GOOD: List comprehension
result = [x * 2 for x in items if x > 0]
```
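The table also flags manual dict iteration; a small sketch of the idiomatic form:
```python
prices = {"apple": 1.5, "banana": 0.5}

# BAD: Manual key lookup inside the loop
for key in prices:
    print(key, prices[key])

# GOOD: Unpack key/value pairs with .items()
for name, price in prices.items():
    print(name, price)
```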
### Error Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Bare except | Catch specific exceptions | HIGH |
| except Exception | Be more specific | MEDIUM |
| No logging in except | Add logging | MEDIUM |
| Missing finally | Add cleanup if needed | LOW |
```python
# BAD: Bare except
try:
process()
except:
pass
# GOOD: Specific exception with logging
try:
process()
except ValueError as e:
logger.error(f"Invalid value: {e}")
raise
except IOError as e:
logger.warning(f"IO error: {e}")
return None
```
### Modern Python (3.10+)
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Union[X, Y] | Use X \| Y | LOW |
| Optional[X] | Use X \| None | LOW |
| Dict, List from typing | Use dict, list builtin | LOW |
| No match statement | Consider match for complex branching | LOW |
```python
# OLD: typing imports
from typing import Optional, Union, List, Dict
def func(x: Optional[int]) -> Union[str, None]:
pass
# MODERN: Built-in syntax (3.10+)
def func(x: int | None) -> str | None:
pass
# Match statement (3.10+)
match status:
case 200:
return "OK"
case 404:
return "Not Found"
case _:
return "Unknown"
```
## Response Template
```
## Python Code Review Results
**Project**: [name]
**Python**: 3.11 | **Tools**: ruff, mypy, pytest
### Style & PEP8
| Status | File | Issue |
|--------|------|-------|
| LOW | utils.py:45 | Line exceeds 88 characters |
### Type Hints
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | service.py:23 | Missing return type annotation |
### Pythonic Idioms
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | models.py:12 | Mutable default argument |
### Error Handling
| Status | File | Issue |
|--------|------|-------|
| HIGH | api.py:67 | Bare except clause |
### Recommended Actions
1. [ ] Fix mutable default arguments
2. [ ] Add specific exception handling
3. [ ] Add type hints to public functions
4. [ ] Run ruff --fix for style issues
```
## Best Practices
1. **Type Hints**: Use for all public APIs
2. **Docstrings**: Google or NumPy style
3. **Error Handling**: Specific exceptions, always log
4. **Testing**: pytest with fixtures
5. **Tooling**: ruff (lint+format), mypy (types)
## Integration
- `fastapi-reviewer`: FastAPI specific patterns
- `django-reviewer`: Django specific patterns
- `python-data-reviewer`: Pandas/NumPy patterns
- `security-scanner`: Python security checks

View File

@@ -0,0 +1,197 @@
---
name: readme-generator
description: |
WHEN: README generation/update, project documentation, installation/usage/contribution guides
WHAT: Project analysis + sectioned README templates + badges + environment variable docs
WHEN NOT: API docs → api-documenter, Code comments → api-documenter
---
# README Generator Skill
## Purpose
Analyzes project structure to generate or update README.md with installation, usage, API docs, and more.
## When to Use
- README generation requests
- New project needs README
- Existing README update needed
- Installation, usage documentation requests
## Workflow
### Step 1: Analyze Project
```
**Project**: my-awesome-app
**Type**: Next.js Web Application
**Language**: TypeScript
**Package Manager**: npm
**Dependencies**: React, Next.js, Tailwind CSS
**Scripts**: dev, build, start, test, lint
```
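Purely as an illustration of what Step 1 gathers (the skill itself uses the Read/Glob tools; the heuristics and field names below are assumptions), a Node.js project summary could be derived like this:
```python
import json
from pathlib import Path

def detect_node_project(root: str = ".") -> dict:
    """Rough sketch: summarize a Node.js project from its package.json."""
    pkg_path = Path(root) / "package.json"
    if not pkg_path.exists():
        return {}
    pkg = json.loads(pkg_path.read_text())
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    framework = next((f for f in ("next", "react", "vue") if f in deps), "unknown")
    return {
        "name": pkg.get("name"),
        "framework": framework,
        "scripts": sorted(pkg.get("scripts", {})),
    }
```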
### Step 2: Check Existing README
```
README Status:
- Exists: [Yes/No]
- Current sections: [list]
- Last modified: [date]
```
### Step 3: Select Sections
**AskUserQuestion:**
```
"Select README sections"
Options:
- Project intro/overview
- Installation
- Usage/Getting started
- Environment variables
- API documentation
- Contributing guide
- License
multiSelect: true
```
### Step 4: Select Style
**AskUserQuestion:**
```
"Select README style"
Options:
- Concise (essentials only)
- Detailed (screenshots/GIFs)
- Technical (API-focused)
- Open source (badges, contributing)
```
## README Templates
### Basic Structure
```markdown
# Project Name
![License](https://img.shields.io/badge/license-MIT-blue.svg)
Brief description (1-2 sentences)
## Features
- Feature 1
- Feature 2
## Installation
\`\`\`bash
git clone https://github.com/username/project.git
cd project
npm install
\`\`\`
## Usage
\`\`\`bash
npm run dev # Development
npm run build # Production build
\`\`\`
## Environment Variables
Create `.env.local`:
\`\`\`env
DATABASE_URL=your_database_url
NEXT_PUBLIC_API_URL=your_api_url
\`\`\`
| Variable | Required | Description |
|----------|----------|-------------|
| `DATABASE_URL` | Yes | Database connection string |
## Tech Stack
- **Framework**: Next.js 14
- **Language**: TypeScript
## Project Structure
\`\`\`
├── app/ # Next.js App Router
├── components/ # React components
├── lib/ # Utilities
└── public/ # Static assets
\`\`\`
## Contributing
1. Fork the repository
2. Create feature branch
3. Commit changes
4. Push to branch
5. Open Pull Request
## License
MIT License
```
### Open Source Template
```markdown
# Project Name
[![npm](https://badge.fury.io/js/package.svg)](https://www.npmjs.com/package/package)
[![CI](https://github.com/user/repo/workflows/CI/badge.svg)](https://github.com/user/repo/actions)
[![Coverage](https://codecov.io/gh/user/repo/branch/main/graph/badge.svg)](https://codecov.io/gh/user/repo)
> Compelling project description
## Installation
\`\`\`bash
npm install package-name
\`\`\`
## Usage
\`\`\`typescript
import { feature } from 'package-name'
const result = feature({ option: 'value' })
\`\`\`
## API Reference
### `feature(options)`
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `option` | `string` | `'default'` | Description |
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md)
## License
MIT - see [LICENSE](LICENSE)
```
## Response Template
```
## README Generated
**File**: README.md
**Style**: Detailed
### Included Sections
- [x] Project intro
- [x] Installation
- [x] Usage
- [x] Environment variables
- [x] Tech stack
- [x] Contributing
- [x] License
### Recommendations
- [ ] Add screenshots/demo GIF
- [ ] Detail API documentation
- [ ] Create CONTRIBUTING.md
```
## Best Practices
1. **Concise**: Quick access to key info
2. **Structured**: Clear section separation
3. **Examples**: Copy-paste ready code
4. **Current**: Keep versions updated
5. **Visual**: Badges, screenshots for readability
## Integration
- `api-documenter` skill: API section details
- `/explain-code` command: Project structure understanding
## Notes
- Preserves existing README style when updating
- Excludes sensitive info (.env values)
- Project structure based on actual analysis

View File

@@ -0,0 +1,392 @@
---
name: rust-api-reviewer
description: |
WHEN: Rust API review with Actix-web/Axum/Rocket, async patterns, extractors, middleware
WHAT: Async handlers + Extractors + State management + Error responses + Tower middleware
WHEN NOT: General Rust → rust-reviewer, Go API → go-api-reviewer
---
# Rust API Reviewer Skill
## Purpose
Reviews Rust web API projects using Actix-web, Axum, or Rocket for async patterns and API design.
## When to Use
- Rust REST API code review
- Actix-web/Axum/Rocket project review
- Async handler patterns
- Middleware and extractors
- State management review
## Project Detection
- `actix-web` in Cargo.toml
- `axum` in Cargo.toml
- `rocket` in Cargo.toml
- `handlers/`, `routes/` directories
- `tower` middleware usage
## Workflow
### Step 1: Analyze Project
```
**Framework**: Axum 0.7+
**Runtime**: Tokio
**Database**: SQLx / SeaORM / Diesel
**Validation**: validator crate
**Auth**: JWT / OAuth2
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full API review (recommended)
- Async handlers and patterns
- Extractors and validation
- State and dependency injection
- Error handling and responses
multiSelect: true
```
## Detection Rules
### Axum Patterns
#### Handler Organization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| All routes in main | Use Router::nest | MEDIUM |
| No route grouping | Group by resource | MEDIUM |
| Missing method routing | Use method_router | LOW |
```rust
// BAD: All routes in main
#[tokio::main]
async fn main() {
let app = Router::new()
.route("/users", get(list_users).post(create_user))
.route("/users/:id", get(get_user).put(update_user))
.route("/products", get(list_products))
// ... 50 more routes
.with_state(state);
}
// GOOD: Organized with nest
fn user_routes() -> Router<AppState> {
Router::new()
.route("/", get(list_users).post(create_user))
.route("/:id", get(get_user).put(update_user).delete(delete_user))
}
fn product_routes() -> Router<AppState> {
Router::new()
.route("/", get(list_products).post(create_product))
.route("/:id", get(get_product))
}
fn api_routes() -> Router<AppState> {
Router::new()
.nest("/users", user_routes())
.nest("/products", product_routes())
}
#[tokio::main]
async fn main() {
let state = AppState::new().await;
let app = Router::new()
.nest("/api/v1", api_routes())
.layer(TraceLayer::new_for_http())
.with_state(state);
// ...
}
```
#### Extractors
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Manual JSON parsing | Use Json extractor | HIGH |
| No validation | Use ValidatedJson | HIGH |
| Order of extractors | Body extractors last | HIGH |
```rust
// BAD: Manual parsing
async fn create_user(body: String) -> Result<Json<User>, AppError> {
let req: CreateUserRequest = serde_json::from_str(&body)?;
// ...
}
// GOOD: Json extractor
async fn create_user(
Json(req): Json<CreateUserRequest>,
) -> Result<Json<User>, AppError> {
// req is already deserialized
}
// GOOD: With validation (using validator crate)
#[derive(Debug, Deserialize, Validate)]
struct CreateUserRequest {
#[validate(length(min = 1, max = 100))]
name: String,
#[validate(email)]
email: String,
}
async fn create_user(
State(state): State<AppState>,
ValidatedJson(req): ValidatedJson<CreateUserRequest>,
) -> Result<Json<User>, AppError> {
// req is validated
}
// IMPORTANT: Extractor order matters!
// Body-consuming extractors must be last
async fn handler(
State(state): State<AppState>, // First: doesn't consume
Path(id): Path<i32>, // Second: from URL
Query(params): Query<Params>, // Third: from query
Json(body): Json<Request>, // LAST: consumes body
) -> Response {
// ...
}
```
#### State Management
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Global static state | Use State extractor | HIGH |
| No connection pool | Use shared pool | HIGH |
| Mutex in async | Use tokio::sync::Mutex | CRITICAL |
```rust
// BAD: Global static
static DB: OnceCell<Database> = OnceCell::new();
async fn get_user(Path(id): Path<i32>) -> Json<User> {
let db = DB.get().unwrap(); // Bad pattern
// ...
}
// GOOD: State extractor
#[derive(Clone)]
struct AppState {
db: PgPool,
cache: Arc<Cache>,
config: Arc<Config>,
}
async fn get_user(
State(state): State<AppState>,
Path(id): Path<i32>,
) -> Result<Json<User>, AppError> {
let user = sqlx::query_as!(User, "SELECT * FROM users WHERE id = $1", id)
.fetch_one(&state.db)
.await?;
Ok(Json(user))
}
// BAD: std::sync::Mutex in async
struct AppState {
cache: std::sync::Mutex<HashMap<String, Value>>, // Blocks!
}
// GOOD: tokio Mutex or RwLock (behind Arc so AppState stays Clone)
#[derive(Clone)]
struct AppState {
    cache: Arc<tokio::sync::RwLock<HashMap<String, Value>>>,
}
async fn get_cached(
    State(state): State<AppState>,
    Path(key): Path<String>,
) -> Result<Json<Value>, AppError> {
    let cache = state.cache.read().await;
    cache.get(&key).cloned().map(Json).ok_or(AppError::NotFound)
}
```
### Error Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Panic in handler | Return Result | CRITICAL |
| No IntoResponse impl | Implement for errors | HIGH |
| Leaking internal errors | Map to HTTP errors | HIGH |
```rust
// GOOD: Custom error type with IntoResponse
#[derive(Debug)]
enum AppError {
NotFound,
Unauthorized,
Validation(String),
Internal(anyhow::Error),
}
impl IntoResponse for AppError {
fn into_response(self) -> Response {
let (status, message) = match self {
AppError::NotFound => (StatusCode::NOT_FOUND, "Not found"),
AppError::Unauthorized => (StatusCode::UNAUTHORIZED, "Unauthorized"),
AppError::Validation(msg) => {
return (StatusCode::BAD_REQUEST, Json(json!({
"error": "validation_error",
"message": msg,
}))).into_response();
}
AppError::Internal(err) => {
tracing::error!("Internal error: {:?}", err);
(StatusCode::INTERNAL_SERVER_ERROR, "Internal server error")
}
};
(status, Json(json!({ "error": message }))).into_response()
}
}
// Implement From for automatic conversion
impl From<sqlx::Error> for AppError {
fn from(err: sqlx::Error) -> Self {
match err {
sqlx::Error::RowNotFound => AppError::NotFound,
_ => AppError::Internal(err.into()),
}
}
}
```
### Middleware (Tower)
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No tracing | Add TraceLayer | MEDIUM |
| No timeout | Add TimeoutLayer | MEDIUM |
| Auth in handler | Use middleware | MEDIUM |
```rust
use tower::ServiceBuilder;
use tower_http::{
trace::TraceLayer,
timeout::TimeoutLayer,
cors::CorsLayer,
};
fn app(state: AppState) -> Router {
Router::new()
.nest("/api", api_routes())
.layer(
ServiceBuilder::new()
.layer(TraceLayer::new_for_http())
.layer(TimeoutLayer::new(Duration::from_secs(30)))
.layer(CorsLayer::permissive())
)
.with_state(state)
}
// GOOD: Auth middleware
async fn auth_middleware(
State(state): State<AppState>,
mut req: Request,
next: Next,
) -> Result<Response, AppError> {
let token = req
.headers()
.get(AUTHORIZATION)
.and_then(|v| v.to_str().ok())
.and_then(|v| v.strip_prefix("Bearer "))
.ok_or(AppError::Unauthorized)?;
let claims = validate_token(token, &state.jwt_secret)?;
req.extensions_mut().insert(claims);
Ok(next.run(req).await)
}
// Apply to routes
let protected = Router::new()
.route("/me", get(get_me))
.route_layer(middleware::from_fn_with_state(state.clone(), auth_middleware));
```
### Actix-web Patterns
```rust
// Actix-web equivalent
use actix_web::{web, App, HttpServer, HttpResponse};
#[actix_web::main]
async fn main() -> std::io::Result<()> {
let pool = create_pool().await;
HttpServer::new(move || {
App::new()
.app_data(web::Data::new(pool.clone()))
.service(
web::scope("/api/v1")
.service(
web::scope("/users")
.route("", web::get().to(list_users))
.route("", web::post().to(create_user))
.route("/{id}", web::get().to(get_user))
)
)
.wrap(actix_web::middleware::Logger::default())
})
.bind("127.0.0.1:8080")?
.run()
.await
}
// Actix handler
async fn create_user(
pool: web::Data<PgPool>,
req: web::Json<CreateUserRequest>,
) -> Result<HttpResponse, actix_web::Error> {
let user = User::create(&pool, req.into_inner()).await?;
Ok(HttpResponse::Created().json(user))
}
```
## Response Template
```
## Rust API Code Review Results
**Project**: [name]
**Framework**: Axum 0.7 | **Runtime**: Tokio
### Handler Organization
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | main.rs | 30+ routes without nesting |
### Extractors
| Status | File | Issue |
|--------|------|-------|
| HIGH | handlers/user.rs:23 | Manual JSON parsing |
### State Management
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | state.rs | std::sync::Mutex in async context |
### Error Handling
| Status | File | Issue |
|--------|------|-------|
| HIGH | handlers/product.rs | No IntoResponse for errors |
### Recommended Actions
1. [ ] Organize routes with Router::nest
2. [ ] Use Json extractor with validation
3. [ ] Replace std Mutex with tokio Mutex
4. [ ] Implement IntoResponse for AppError
```
## Best Practices
1. **Extractors**: Use typed extractors, body last
2. **State**: Clone-friendly, async-safe
3. **Errors**: Custom type with IntoResponse
4. **Middleware**: Tower layers for cross-cutting
5. **Tracing**: Use tracing crate for observability
## Integration
- `rust-reviewer`: General Rust patterns
- `security-scanner`: API security audit
- `perf-analyzer`: Async performance

View File

@@ -0,0 +1,380 @@
---
name: rust-reviewer
description: |
WHEN: Rust project review, ownership/borrowing, error handling, unsafe code, performance
WHAT: Ownership patterns + Lifetime analysis + Error handling (Result/Option) + Unsafe audit + Idiomatic Rust
WHEN NOT: Rust API → rust-api-reviewer, Go → go-reviewer
---
# Rust Reviewer Skill
## Purpose
Reviews Rust code for ownership, lifetimes, error handling, safety, and idiomatic patterns.
## When to Use
- Rust project code review
- Ownership/borrowing review
- Lifetime annotation review
- Unsafe code audit
- Error handling patterns
## Project Detection
- `Cargo.toml` in project root
- `.rs` files
- `src/lib.rs` or `src/main.rs`
- `tests/` directory
## Workflow
### Step 1: Analyze Project
```
**Rust Edition**: 2021
**Type**: Library / Binary
**Dependencies**: Key crates
**Testing**: cargo test
**Linter**: clippy
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Rust review (recommended)
- Ownership and borrowing
- Lifetimes and references
- Error handling (Result/Option)
- Unsafe code audit
multiSelect: true
```
## Detection Rules
### Ownership & Borrowing
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Unnecessary clone() | Borrow instead | MEDIUM |
| &String parameter | Use &str | MEDIUM |
| Vec ownership transfer | Consider slice | MEDIUM |
| Excessive Rc/Arc | Restructure ownership | MEDIUM |
```rust
// BAD: Unnecessary clone
fn process(data: Vec<String>) {
for item in data.clone().iter() { // Clone not needed!
println!("{}", item);
}
}
// GOOD: Borrow
fn process(data: &[String]) {
for item in data {
println!("{}", item);
}
}
// BAD: &String parameter
fn greet(name: &String) {
println!("Hello, {}", name);
}
// GOOD: &str parameter (more flexible)
fn greet(name: &str) {
println!("Hello, {}", name);
}
// BAD: Taking ownership unnecessarily
fn sum(numbers: Vec<i32>) -> i32 {
numbers.iter().sum()
}
// GOOD: Borrow slice
fn sum(numbers: &[i32]) -> i32 {
numbers.iter().sum()
}
```
### Lifetimes
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Missing lifetime annotation | Add explicit lifetime | HIGH |
| 'static overuse | Use specific lifetime | MEDIUM |
| Lifetime in struct | Consider ownership | MEDIUM |
| Complex lifetime bounds | Simplify if possible | LOW |
```rust
// BAD: Missing lifetime
struct Parser {
input: &str, // Error: missing lifetime
}
// GOOD: Explicit lifetime
struct Parser<'a> {
input: &'a str,
}
impl<'a> Parser<'a> {
fn new(input: &'a str) -> Self {
Parser { input }
}
fn parse(&self) -> Result<Ast<'a>, Error> {
// ...
}
}
// BAD: 'static when not needed
fn process(data: &'static str) {
// Requires static lifetime!
}
// GOOD: Generic lifetime
fn process(data: &str) {
// Works with any lifetime
}
// Consider: Ownership instead of lifetime
struct Config {
name: String, // Owned, no lifetime needed
}
```
### Error Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| unwrap() in library | Return Result | HIGH |
| expect() without message | Add descriptive message | MEDIUM |
| panic! for recoverable | Return Err | HIGH |
| No custom error type | Define error enum | MEDIUM |
```rust
// BAD: unwrap in library code
pub fn parse_config(path: &str) -> Config {
let content = fs::read_to_string(path).unwrap(); // Will panic!
serde_json::from_str(&content).unwrap()
}
// GOOD: Return Result
pub fn parse_config(path: &str) -> Result<Config, ConfigError> {
let content = fs::read_to_string(path)
.map_err(|e| ConfigError::IoError(e))?;
serde_json::from_str(&content)
.map_err(|e| ConfigError::ParseError(e))
}
// GOOD: Custom error type
#[derive(Debug, thiserror::Error)]
pub enum ConfigError {
#[error("failed to read config file: {0}")]
IoError(#[from] std::io::Error),
#[error("failed to parse config: {0}")]
ParseError(#[from] serde_json::Error),
#[error("missing required field: {0}")]
MissingField(String),
}
// BAD: expect without message
let value = map.get("key").expect("failed");
// GOOD: Descriptive expect
let value = map.get("key")
.expect("config must contain 'key' field");
```
### Option Handling
| Check | Recommendation | Severity |
|-------|----------------|----------|
| if let Some + else | Use map_or/unwrap_or | LOW |
| Nested Options | Use and_then/flatten | MEDIUM |
| match for single case | Use if let | LOW |
```rust
// BAD: Verbose Option handling
let result = match value {
Some(v) => v.to_string(),
None => "default".to_string(),
};
// GOOD: Using combinators
let result = value
.map(|v| v.to_string())
.unwrap_or_else(|| "default".to_string());
// Or even simpler
let result = value.map_or("default".to_string(), |v| v.to_string());
// BAD: Nested Options
let result: Option<Option<i32>> = Some(Some(42));
let value = match result {
Some(Some(v)) => v,
_ => 0,
};
// GOOD: Flatten
let value = result.flatten().unwrap_or(0);
// Using and_then for chaining
let value = get_user(id)
.and_then(|user| user.profile)
.and_then(|profile| profile.avatar)
.map(|avatar| avatar.url);
```
### Unsafe Code
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Unnecessary unsafe | Remove if safe alternative | HIGH |
| No safety comment | Document invariants | HIGH |
| Raw pointer dereference | Validate pointer | CRITICAL |
| Transmute usage | Use safe cast if possible | CRITICAL |
```rust
// BAD: Unsafe without documentation
unsafe fn get_value(ptr: *const i32) -> i32 {
*ptr
}
// GOOD: Documented unsafe
/// Gets the value at the given pointer.
///
/// # Safety
///
/// - `ptr` must be valid and properly aligned
/// - `ptr` must point to initialized memory
/// - The memory must not be mutated while this reference exists
unsafe fn get_value(ptr: *const i32) -> i32 {
debug_assert!(!ptr.is_null(), "ptr must not be null");
*ptr
}
// BAD: Transmute for casting
let bytes: [u8; 4] = unsafe { std::mem::transmute(value) };
// GOOD: Safe alternative
let bytes = value.to_ne_bytes();
// Encapsulating unsafe in safe API
pub struct Buffer {
ptr: *mut u8,
len: usize,
}
impl Buffer {
/// Creates a new buffer. Safe because we control allocation.
pub fn new(size: usize) -> Self {
let ptr = unsafe { alloc(Layout::array::<u8>(size).unwrap()) };
Self { ptr, len: size }
}
/// Safe accessor - bounds checked
pub fn get(&self, index: usize) -> Option<u8> {
if index < self.len {
// SAFETY: index is bounds-checked above
Some(unsafe { *self.ptr.add(index) })
} else {
None
}
}
}
```
### Idiomatic Rust
| Check | Recommendation | Severity |
|-------|----------------|----------|
| C-style loop | Use iterators | LOW |
| Manual index tracking | Use enumerate() | LOW |
| Return in match | Return match expression | LOW |
| Redundant type annotation | Use inference | LOW |
```rust
// BAD: C-style loop
let mut i = 0;
while i < items.len() {
process(&items[i]);
i += 1;
}
// GOOD: Iterator
for item in &items {
process(item);
}
// With index
for (i, item) in items.iter().enumerate() {
println!("{}: {}", i, item);
}
// BAD: Redundant type
let numbers: Vec<i32> = vec![1, 2, 3];
let sum: i32 = numbers.iter().sum();
// GOOD: Type inference
let numbers = vec![1, 2, 3];
let sum: i32 = numbers.iter().sum(); // Only needed here
// BAD: Return in match arms
fn classify(n: i32) -> &'static str {
match n {
0 => { return "zero"; }
_ if n > 0 => { return "positive"; }
_ => { return "negative"; }
}
}
// GOOD: Match as expression
fn classify(n: i32) -> &'static str {
match n {
0 => "zero",
_ if n > 0 => "positive",
_ => "negative",
}
}
```
## Response Template
```
## Rust Code Review Results
**Project**: [name]
**Rust Edition**: 2021 | **Clippy**: Enabled
### Ownership & Borrowing
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | parser.rs:45 | Unnecessary clone() |
### Lifetimes
| Status | File | Issue |
|--------|------|-------|
| HIGH | cache.rs:23 | Missing lifetime on struct field |
### Error Handling
| Status | File | Issue |
|--------|------|-------|
| HIGH | lib.rs:67 | unwrap() in public function |
### Unsafe
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | ffi.rs:12 | Unsafe block without safety comment |
### Recommended Actions
1. [ ] Replace clone() with borrowing
2. [ ] Add lifetime annotations to Cache struct
3. [ ] Return Result instead of panicking
4. [ ] Document all unsafe blocks with safety invariants
```
## Best Practices
1. **Ownership**: Borrow when possible, clone when needed
2. **Lifetimes**: Explicit > implicit, owned > referenced
3. **Errors**: thiserror for libs, anyhow for apps
4. **Unsafe**: Minimize, document, encapsulate in safe API
5. **Clippy**: Run with `cargo clippy -- -W clippy::pedantic`
## Integration
- `rust-api-reviewer`: Web framework patterns
- `security-scanner`: Rust security audit
- `perf-analyzer`: Performance profiling

View File

@@ -0,0 +1,336 @@
---
name: schema-reviewer
description: |
WHEN: Database schema review, table design, normalization, constraints, index planning
WHAT: Normalization analysis + Constraint validation + Index strategy + Data types + Relationship design
WHEN NOT: Query optimization → sql-optimizer, ORM code → orm-reviewer
---
# Schema Reviewer Skill
## Purpose
Reviews database schema design for normalization, constraints, indexes, and best practices.
## When to Use
- Database schema review
- Table design review
- Normalization check
- Index planning
- Constraint validation
## Project Detection
- Schema files (`.sql`, `schema.prisma`)
- Migration files
- Entity definitions
- Database documentation
## Workflow
### Step 1: Analyze Schema
```
**Database**: PostgreSQL/MySQL
**Tables**: 15
**Relationships**: 1:N, N:M
**Normalization**: 3NF
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full schema review (recommended)
- Normalization and design
- Constraints and integrity
- Index strategy
- Data type optimization
multiSelect: true
```
## Detection Rules
### Normalization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Repeating groups | Move to separate table | HIGH |
| Partial dependency | Apply 2NF | MEDIUM |
| Transitive dependency | Apply 3NF | MEDIUM |
| Over-normalization | Consider denormalization for reads | LOW |
```sql
-- BAD: 1NF violation (repeating groups)
CREATE TABLE orders (
id INT PRIMARY KEY,
product1_id INT,
product1_qty INT,
product2_id INT,
product2_qty INT,
product3_id INT,
product3_qty INT
);
-- GOOD: Normalized with separate table
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
user_id INT NOT NULL REFERENCES users(id),
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE order_items (
id SERIAL PRIMARY KEY,
order_id INT NOT NULL REFERENCES orders(id),
product_id INT NOT NULL REFERENCES products(id),
quantity INT NOT NULL CHECK (quantity > 0),
price DECIMAL(10,2) NOT NULL
);
-- BAD: 2NF violation (partial dependency)
CREATE TABLE order_items (
order_id INT,
product_id INT,
product_name VARCHAR(100), -- Depends only on product_id!
quantity INT,
PRIMARY KEY (order_id, product_id)
);
-- GOOD: Product name in products table only
CREATE TABLE order_items (
order_id INT REFERENCES orders(id),
product_id INT REFERENCES products(id),
quantity INT NOT NULL,
PRIMARY KEY (order_id, product_id)
);
```
### Constraints
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Missing PRIMARY KEY | Add primary key | CRITICAL |
| Missing FOREIGN KEY | Add for relationships | HIGH |
| No NOT NULL | Add where appropriate | MEDIUM |
| No CHECK constraints | Validate data at DB level | MEDIUM |
| Missing UNIQUE | Add for natural keys | HIGH |
```sql
-- BAD: No constraints
CREATE TABLE users (
id INT,
email VARCHAR(255),
status VARCHAR(20)
);
-- GOOD: Proper constraints
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email VARCHAR(255) NOT NULL UNIQUE,
status VARCHAR(20) NOT NULL DEFAULT 'active'
CHECK (status IN ('active', 'inactive', 'banned')),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- With proper foreign keys
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
user_id INT NOT NULL REFERENCES users(id)
ON DELETE RESTRICT -- Prevent user deletion with orders
ON UPDATE CASCADE,
status VARCHAR(20) NOT NULL DEFAULT 'pending'
CHECK (status IN ('pending', 'processing', 'shipped', 'delivered', 'cancelled')),
total DECIMAL(12,2) NOT NULL CHECK (total >= 0)
);
```
### Data Types
| Check | Recommendation | Severity |
|-------|----------------|----------|
| VARCHAR for all strings | Use appropriate types | MEDIUM |
| INT for monetary values | Use DECIMAL | HIGH |
| FLOAT for money | Use DECIMAL | CRITICAL |
| TEXT without limit | Consider VARCHAR with limit | LOW |
| Missing timezone | Use TIMESTAMPTZ | HIGH |
```sql
-- BAD: Poor data type choices
CREATE TABLE products (
id INT,
price FLOAT, -- Precision issues!
quantity VARCHAR(10), -- Should be INT
created_at TIMESTAMP, -- No timezone!
description TEXT -- Unlimited
);
-- GOOD: Appropriate data types
CREATE TABLE products (
id SERIAL PRIMARY KEY,
sku VARCHAR(50) NOT NULL UNIQUE,
name VARCHAR(200) NOT NULL,
price DECIMAL(10,2) NOT NULL CHECK (price >= 0),
quantity INT NOT NULL DEFAULT 0 CHECK (quantity >= 0),
description TEXT, -- OK for long text
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Use ENUMs for fixed values (PostgreSQL)
CREATE TYPE order_status AS ENUM (
'pending', 'processing', 'shipped', 'delivered', 'cancelled'
);
CREATE TABLE orders (
status order_status NOT NULL DEFAULT 'pending'
);
```
### Index Strategy
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No index on FK | Add index | HIGH |
| No index on filter columns | Add index | HIGH |
| Too many single-column indexes | Use composite | MEDIUM |
| Missing unique index | Add for unique constraints | HIGH |
```sql
-- Index planning
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
user_id INT NOT NULL REFERENCES users(id),
status VARCHAR(20) NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Foreign key index (not automatic in PostgreSQL!)
CREATE INDEX idx_orders_user_id ON orders(user_id);
-- Composite index for common queries
-- WHERE status = ? ORDER BY created_at DESC
CREATE INDEX idx_orders_status_created ON orders(status, created_at DESC);
-- Partial index for specific cases
CREATE INDEX idx_orders_pending ON orders(created_at)
WHERE status = 'pending';
-- Covering index (includes all columns the query needs)
CREATE INDEX idx_orders_user_summary ON orders(user_id, status, created_at);
```
### Relationship Design
| Check | Recommendation | Severity |
|-------|----------------|----------|
| N:M without junction table | Create junction table | CRITICAL |
| Self-reference without depth | Add level/path column | MEDIUM |
| Circular references | Redesign relationships | HIGH |
```sql
-- N:M relationship with junction table
CREATE TABLE products (
id SERIAL PRIMARY KEY,
name VARCHAR(200) NOT NULL
);
CREATE TABLE categories (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL
);
CREATE TABLE product_categories (
product_id INT NOT NULL REFERENCES products(id) ON DELETE CASCADE,
category_id INT NOT NULL REFERENCES categories(id) ON DELETE CASCADE,
PRIMARY KEY (product_id, category_id)
);
-- Hierarchical data (adjacency list)
CREATE TABLE categories (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
parent_id INT REFERENCES categories(id),
level INT NOT NULL DEFAULT 0,
path LTREE -- PostgreSQL ltree extension
);
-- Create index for hierarchical queries
CREATE INDEX idx_categories_path ON categories USING GIST (path);
```
### Audit Columns
```sql
-- Standard audit columns
CREATE TABLE entities (
id SERIAL PRIMARY KEY,
-- ... business columns ...
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_by INT REFERENCES users(id),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_by INT REFERENCES users(id),
deleted_at TIMESTAMPTZ, -- Soft delete
deleted_by INT REFERENCES users(id)
);
-- Auto-update trigger
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_update_updated_at
BEFORE UPDATE ON entities
FOR EACH ROW
EXECUTE FUNCTION update_updated_at();
```
## Response Template
```
## Database Schema Review Results
**Database**: PostgreSQL 15
**Tables**: 12 | **Relationships**: 8
### Normalization
| Status | Table | Issue |
|--------|-------|-------|
| HIGH | orders | Repeating product columns |
### Constraints
| Status | Table | Issue |
|--------|-------|-------|
| CRITICAL | users | Missing PRIMARY KEY |
| HIGH | orders | Missing FOREIGN KEY to users |
### Data Types
| Status | Table.Column | Issue |
|--------|--------------|-------|
| CRITICAL | products.price | Using FLOAT instead of DECIMAL |
### Indexes
| Status | Table | Issue |
|--------|-------|-------|
| HIGH | orders.user_id | Missing index on foreign key |
### Recommended Changes
\`\`\`sql
-- Add missing primary key
ALTER TABLE users ADD PRIMARY KEY (id);
-- Fix price data type
ALTER TABLE products
  ALTER COLUMN price TYPE DECIMAL(10,2);
-- Add foreign key index
CREATE INDEX idx_orders_user_id ON orders(user_id);
\`\`\`
```
## Best Practices
1. **Normalization**: 3NF default, denormalize for performance
2. **Constraints**: PK, FK, NOT NULL, CHECK, UNIQUE
3. **Data Types**: DECIMAL for money, TIMESTAMPTZ for time
4. **Indexes**: FK columns, filter columns, composites
5. **Audit**: created_at, updated_at, soft delete
## Integration
- `sql-optimizer`: Query performance
- `migration-checker`: Migration safety
- `orm-reviewer`: ORM mapping

View File

@@ -0,0 +1,179 @@
---
name: security-scanner
description: |
WHEN: Security scan, vulnerability detection, XSS/CSRF analysis, secret exposure, OWASP Top 10
WHAT: XSS/injection detection + hardcoded secrets + auth/authz issues + severity-based vulnerability list
WHEN NOT: Performance → perf-analyzer, Cloud security → cloud-security-expert
---
# Security Scanner Skill
## Purpose
Detects web application security vulnerabilities based on OWASP Top 10: XSS, hardcoded secrets, authentication issues.
## When to Use
- Security scan requests
- XSS, CSRF, injection mentions
- Pre-production security review
- Auth/authz code review
## Workflow
### Step 1: Scan Scope
**AskUserQuestion:**
```
"Select scan scope"
Options:
- Frontend only
- Include API/backend
- Full project
- Specific file/folder
```
### Step 2: Vulnerability Categories
**AskUserQuestion:**
```
"Select vulnerability types"
Options:
- Full security scan (recommended)
- XSS (Cross-Site Scripting)
- Secrets/API key exposure
- Auth/authz issues
- Dependency vulnerabilities
multiSelect: true
```
## Security Rules
### OWASP Top 10
| Rank | Vulnerability | Check | Severity |
|------|---------------|-------|----------|
| A01 | Broken Access Control | Auth bypass | CRITICAL |
| A02 | Cryptographic Failures | Plaintext, weak crypto | CRITICAL |
| A03 | Injection | SQL, NoSQL, XSS | CRITICAL |
| A05 | Security Misconfiguration | CORS, headers | HIGH |
| A07 | Auth Failures | Weak passwords, sessions | CRITICAL |
### XSS Detection
```typescript
// CRITICAL: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{ __html: userInput }} />
// Fix: Use DOMPurify.sanitize(userInput)
// HIGH: innerHTML direct use
element.innerHTML = userInput
// Fix: Use textContent or sanitize
// CRITICAL: eval or Function constructor
eval(userCode)
new Function(userCode)()
// Fix: Never use
```
### Secret Exposure
```typescript
// CRITICAL: Hardcoded API key
const API_KEY = 'sk-1234567890abcdef'
// Patterns detected:
// - API keys: /[a-zA-Z0-9_-]{20,}/
// - AWS keys: /AKIA[0-9A-Z]{16}/
// - JWT: /eyJ[A-Za-z0-9-_]+\.[A-Za-z0-9-_]+/
// Fix: Read the key from the environment on the server
// (NEXT_PUBLIC_ variables are bundled into the client and still exposed)
headers: { 'X-API-Key': process.env.API_KEY }
```
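As a standalone illustration of how the patterns above could be applied across a codebase (this script is an assumption, not part of the plugin; the glob and pattern set are illustrative):
```python
import re
from pathlib import Path

# Illustrative patterns mirroring the ones listed above
SECRET_PATTERNS = {
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "jwt": re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
    "generic_key": re.compile(r"['\"][A-Za-z0-9_-]{20,}['\"]"),
}

def scan_for_secrets(root: str = "src") -> list[tuple[str, int, str]]:
    """Return (file, line number, pattern name) for every suspected secret."""
    findings = []
    for path in Path(root).rglob("*.ts"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings
```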
### Auth/Authz Issues
```typescript
// HIGH: Token in localStorage
localStorage.setItem('authToken', token)
// Fix: Use httpOnly cookies
// HIGH: Missing authorization check
async function deleteUser(userId) {
await db.users.delete(userId) // No auth check
}
// Fix: Add authorization
if (!currentUser.isAdmin) throw new ForbiddenError()
```
### Unsafe Data Handling
```typescript
// HIGH: SQL Injection
const query = `SELECT * FROM users WHERE id = ${userId}`
// Fix: Parameterized query
const query = 'SELECT * FROM users WHERE id = $1'
// HIGH: Path traversal
const filePath = path.join(baseDir, userInput)
// Fix: Resolve and verify the path stays inside baseDir
const safePath = path.resolve(baseDir, userInput)
if (!safePath.startsWith(baseDir + path.sep)) throw new Error('Invalid path')
```
### Next.js/React Security
```typescript
// HIGH: Server Action without validation
'use server'
async function updateProfile(formData) {
const name = formData.get('name')
await db.users.update({ name }) // No validation
}
// Fix: Use zod validation
const schema = z.object({ name: z.string().min(1).max(100) })
```
## Response Template
```
## Security Scan Results
**Project**: [name]
**Files Scanned**: X
### CRITICAL (Immediate action)
#### 1. [A03] XSS Vulnerability
**File**: `src/components/Comment.tsx:45`
**Code**:
\`\`\`tsx
<div dangerouslySetInnerHTML={{ __html: comment.body }} />
\`\`\`
**Risk**: User input rendered without sanitization
**Fix**:
\`\`\`tsx
import DOMPurify from 'dompurify'
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(comment.body) }} />
\`\`\`
### HIGH | MEDIUM | LOW
...
### Summary
| Severity | Count |
|----------|-------|
| CRITICAL | 2 |
| HIGH | 3 |
| MEDIUM | 5 |
### Actions
1. [ ] Install DOMPurify: `npm install dompurify`
2. [ ] Move API keys to .env
3. [ ] Use httpOnly cookies instead of localStorage
```
## Best Practices
1. **Least Privilege**: Grant only necessary permissions
2. **Defense in Depth**: Validate at multiple layers
3. **Input Validation**: Validate all user input
4. **Output Encoding**: Context-appropriate encoding
5. **Secret Management**: Use env vars or secret managers
## Integration
- `code-reviewer` skill
- `nextjs-reviewer` skill
- `/analyze-code` command
## Notes
- Static analysis may have false positives
- Runtime security testing needs separate tools
- Sensitive files (.env, credentials) excluded from scan

View File

@@ -0,0 +1,299 @@
---
name: sql-optimizer
description: |
WHEN: SQL query review, query optimization, index usage, N+1 detection, performance analysis
WHAT: Query plan analysis + Index recommendations + N+1 detection + Join optimization + Performance tuning
WHEN NOT: Schema design → schema-reviewer, ORM code → orm-reviewer
---
# SQL Optimizer Skill
## Purpose
Analyzes and optimizes SQL queries for performance, index usage, and best practices.
## When to Use
- SQL query optimization
- Query performance review
- Index usage analysis
- N+1 query detection
- Slow query troubleshooting
## Project Detection
- `.sql` files
- Query strings in code
- Database migration files
- ORM query logs
## Workflow
### Step 1: Analyze Query
```
**Database**: PostgreSQL/MySQL/SQLite
**Tables**: users, orders, products
**Query Type**: SELECT with JOINs
**Estimated Rows**: 100K+
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full query optimization (recommended)
- Index usage analysis
- Join optimization
- Subquery refactoring
- N+1 detection
multiSelect: true
```
## Detection Rules
### Index Usage
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Full table scan | Add appropriate index | CRITICAL |
| Index not used | Check column order | HIGH |
| Too many indexes | Consolidate indexes | MEDIUM |
| Missing composite index | Add multi-column index | HIGH |
```sql
-- BAD: No index on filter columns
SELECT * FROM orders
WHERE created_at > '2024-01-01'
AND status = 'pending';
-- Full table scan!
-- GOOD: Add composite index
CREATE INDEX idx_orders_status_created
ON orders(status, created_at);
-- Index order matters!
-- For WHERE status = ? AND created_at > ?
-- Index(status, created_at) ✓
-- Index(created_at, status) ✗ (less effective)
```
### SELECT Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| SELECT * | Select specific columns | HIGH |
| Unnecessary columns | Remove unused columns | MEDIUM |
| No LIMIT | Add LIMIT for large results | HIGH |
```sql
-- BAD: SELECT * with large result
SELECT * FROM orders
WHERE user_id = 123;
-- Returns all columns, no limit
-- GOOD: Specific columns, limited results
SELECT id, status, total, created_at
FROM orders
WHERE user_id = 123
ORDER BY created_at DESC
LIMIT 20;
```
### JOIN Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Cartesian product | Add join condition | CRITICAL |
| Join on non-indexed column | Add index | HIGH |
| Too many joins | Consider denormalization | MEDIUM |
| Implicit join | Use explicit JOIN syntax | LOW |
```sql
-- BAD: Implicit join (harder to read, error-prone)
SELECT o.*, u.name
FROM orders o, users u
WHERE o.user_id = u.id;
-- GOOD: Explicit JOIN
SELECT o.id, o.total, u.name
FROM orders o
INNER JOIN users u ON o.user_id = u.id;
-- BAD: Join on non-indexed column
SELECT o.*, p.name
FROM orders o
JOIN products p ON o.product_code = p.code;
-- If products.code has no index → slow!
-- FIX: Add index
CREATE INDEX idx_products_code ON products(code);
```
### Subquery Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Correlated subquery | Convert to JOIN | HIGH |
| IN with subquery | Use EXISTS or JOIN | MEDIUM |
| Subquery in SELECT | Move to JOIN | HIGH |
```sql
-- BAD: Correlated subquery (runs for each row)
SELECT *
FROM orders o
WHERE total > (
SELECT AVG(total)
FROM orders
WHERE user_id = o.user_id
);
-- GOOD: Use window function
SELECT *
FROM (
SELECT *,
AVG(total) OVER (PARTITION BY user_id) as avg_total
FROM orders
) sub
WHERE total > avg_total;
-- BAD: IN with large subquery
SELECT * FROM users
WHERE id IN (SELECT user_id FROM orders WHERE status = 'vip');
-- GOOD: Use EXISTS or JOIN
SELECT u.* FROM users u
WHERE EXISTS (
SELECT 1 FROM orders o
WHERE o.user_id = u.id AND o.status = 'vip'
);
-- Or with JOIN
SELECT DISTINCT u.*
FROM users u
INNER JOIN orders o ON o.user_id = u.id
WHERE o.status = 'vip';
```
### N+1 Query Detection
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Loop with query | Batch fetch | CRITICAL |
| Lazy load in loop | Eager load | CRITICAL |
```sql
-- N+1 Pattern (application code)
-- Query 1: Get all users
SELECT * FROM users;
-- Then for each user (N queries):
SELECT * FROM orders WHERE user_id = 1;
SELECT * FROM orders WHERE user_id = 2;
SELECT * FROM orders WHERE user_id = 3;
-- ... N more queries
-- SOLUTION 1: JOIN
SELECT u.*, o.*
FROM users u
LEFT JOIN orders o ON o.user_id = u.id;
-- SOLUTION 2: IN query (for separate queries)
SELECT * FROM orders WHERE user_id IN (1, 2, 3, ...);
```
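The N+1 pattern itself originates in application code; a rough sketch of the batched fix using the standard-library sqlite3 driver (table and column names are assumptions):
```python
import sqlite3

conn = sqlite3.connect("app.db")

# N+1: one query per user (slow)
users = conn.execute("SELECT id FROM users").fetchall()
orders_by_user = {
    uid: conn.execute("SELECT * FROM orders WHERE user_id = ?", (uid,)).fetchall()
    for (uid,) in users
}

# Batched: a single IN query covering all users
ids = [uid for (uid,) in users]
placeholders = ",".join("?" * len(ids))
rows = conn.execute(
    f"SELECT * FROM orders WHERE user_id IN ({placeholders})", ids
).fetchall()
```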
### Aggregation Optimization
| Check | Recommendation | Severity |
|-------|----------------|----------|
| COUNT(*) on large table | Use approximate count | MEDIUM |
| GROUP BY without index | Add index | HIGH |
| HAVING vs WHERE | Filter early with WHERE | MEDIUM |
```sql
-- BAD: COUNT on entire table
SELECT COUNT(*) FROM orders;
-- Scans entire table
-- GOOD: Approximate count (PostgreSQL)
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE relname = 'orders';
-- BAD: WHERE in HAVING
SELECT user_id, COUNT(*)
FROM orders
GROUP BY user_id
HAVING status = 'completed'; -- Wrong place!
-- GOOD: Filter before grouping
SELECT user_id, COUNT(*)
FROM orders
WHERE status = 'completed' -- Filter first
GROUP BY user_id;
-- Index for GROUP BY
CREATE INDEX idx_orders_user_status
ON orders(user_id, status);
```
### EXPLAIN Analysis
```sql
-- PostgreSQL EXPLAIN
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.name, COUNT(o.id)
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
GROUP BY u.id;
-- Look for:
-- ✗ Seq Scan (full table scan)
-- ✗ Nested Loop with high rows
-- ✗ Hash Join with large hash
-- ✓ Index Scan
-- ✓ Index Only Scan
-- ✓ Bitmap Index Scan
```
## Response Template
```
## SQL Query Optimization Results
**Database**: PostgreSQL 15
**Query Type**: SELECT with JOIN
**Estimated Impact**: ~10x improvement
### Index Usage
| Status | Issue | Recommendation |
|--------|-------|----------------|
| CRITICAL | Full table scan on orders | Add index on (status, created_at) |
### Join Analysis
| Status | Issue | Recommendation |
|--------|-------|----------------|
| HIGH | Non-indexed join column | Add index on products.code |
### Query Structure
| Status | Issue | Recommendation |
|--------|-------|----------------|
| HIGH | SELECT * with no LIMIT | Select specific columns, add LIMIT |
### Recommended Indexes
\`\`\`sql
CREATE INDEX idx_orders_status_created ON orders(status, created_at);
CREATE INDEX idx_products_code ON products(code);
\`\`\`
### Optimized Query
\`\`\`sql
SELECT o.id, o.total, p.name
FROM orders o
INNER JOIN products p ON o.product_id = p.id
WHERE o.status = 'pending'
ORDER BY o.created_at DESC
LIMIT 100;
\`\`\`
```
## Best Practices
1. **Indexes**: Add for WHERE, JOIN, ORDER BY columns
2. **SELECT**: Only needed columns, with LIMIT
3. **JOINs**: Explicit syntax, indexed columns
4. **Subqueries**: Prefer JOINs or CTEs
5. **EXPLAIN**: Always analyze query plans
## Integration
- `schema-reviewer`: Database design
- `orm-reviewer`: ORM query patterns
- `perf-analyzer`: Application performance

View File

@@ -0,0 +1,344 @@
---
name: terraform-reviewer
description: |
WHEN: Terraform code review, module structure, state management, security policies
WHAT: Module organization + State backend + Security policies + Variable validation + Best practices
WHEN NOT: Kubernetes → k8s-reviewer, Docker → docker-reviewer
---
# Terraform Reviewer Skill
## Purpose
Reviews Terraform code for module structure, state management, security, and best practices.
## When to Use
- Terraform code review
- Module structure review
- State backend configuration
- Security policy review
- Variable and output review
## Project Detection
- `*.tf` files in project
- `main.tf`, `variables.tf`, `outputs.tf`
- `modules/` directory
- `terraform.tfvars`
## Workflow
### Step 1: Analyze Project
```
**Terraform**: 1.6+
**Provider**: AWS/GCP/Azure
**Backend**: S3/GCS/Azure Blob
**Modules**: Custom + Registry
```
### Step 2: Select Review Areas
**AskUserQuestion:**
```
"Which areas to review?"
Options:
- Full Terraform review (recommended)
- Module structure
- State management
- Security and compliance
- Variable validation
multiSelect: true
```
## Detection Rules
### Module Structure
| Check | Recommendation | Severity |
|-------|----------------|----------|
| All resources in main.tf | Split by resource type | MEDIUM |
| No modules | Extract reusable modules | MEDIUM |
| Hardcoded values | Use variables | HIGH |
| No outputs | Add relevant outputs | MEDIUM |
```
# GOOD: Project structure
terraform/
├── environments/
│ ├── dev/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── terraform.tfvars
│ └── prod/
│ ├── main.tf
│ ├── variables.tf
│ └── terraform.tfvars
├── modules/
│ ├── vpc/
│ │ ├── main.tf
│ │ ├── variables.tf
│ │ └── outputs.tf
│ └── eks/
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
└── README.md
```
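A minimal sketch of how an environment root might consume these modules; the module source paths and the output names (`vpc_id`, `private_subnet_ids`) are illustrative assumptions based on the layout above:
```hcl
# environments/prod/main.tf (illustrative)
module "vpc" {
  source = "../../modules/vpc"

  # Values come from this environment's terraform.tfvars
  project     = var.project
  environment = var.environment
  vpc_cidr    = var.vpc_cidr
}

module "eks" {
  source = "../../modules/eks"

  # Wire downstream modules to the vpc module's outputs (assumed names)
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnet_ids
}
```
Keeping environment roots this thin means dev and prod differ only in their `terraform.tfvars` values.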
```hcl
# BAD: Hardcoded values
resource "aws_instance" "web" {
ami = "ami-12345678"
instance_type = "t3.micro"
tags = {
Name = "web-server"
}
}
# GOOD: Parameterized with variables
variable "instance_type" {
description = "EC2 instance type"
type = string
default = "t3.micro"
validation {
condition = can(regex("^t3\\.", var.instance_type))
error_message = "Instance type must be t3 family."
}
}
variable "environment" {
description = "Environment name"
type = string
validation {
condition = contains(["dev", "staging", "prod"], var.environment)
error_message = "Environment must be dev, staging, or prod."
}
}
resource "aws_instance" "web" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
tags = merge(local.common_tags, {
Name = "${var.project}-${var.environment}-web"
})
}
```
### State Management
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Local state | Use remote backend | CRITICAL |
| No state locking | Enable DynamoDB/GCS lock | HIGH |
| No state encryption | Enable encryption | HIGH |
| Shared state file | Split by environment | MEDIUM |
```hcl
# BAD: Local state (default)
# No backend configuration
# GOOD: Remote backend with locking
terraform {
backend "s3" {
bucket = "mycompany-terraform-state"
key = "prod/vpc/terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "terraform-state-lock"
}
}
# For GCP
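# State locking is built into the gcs backend, so no separate lock table is needed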
terraform {
backend "gcs" {
bucket = "mycompany-terraform-state"
prefix = "prod/vpc"
}
}
```
### Security
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Secrets in tfvars | Use secret manager | CRITICAL |
| Public S3 bucket | Set ACL private | CRITICAL |
| Open security group | Restrict CIDR | CRITICAL |
| No encryption | Enable encryption | HIGH |
```hcl
# BAD: Security issues
resource "aws_security_group" "web" {
ingress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Wide open!
}
}
resource "aws_s3_bucket" "data" {
bucket = "my-data-bucket"
acl = "public-read" # Public!
}
# GOOD: Secure configuration
resource "aws_security_group" "web" {
name = "${var.project}-web-sg"
description = "Security group for web servers"
vpc_id = var.vpc_id
ingress {
description = "HTTPS from load balancer"
from_port = 443
to_port = 443
protocol = "tcp"
security_groups = [aws_security_group.alb.id]
}
egress {
description = "Allow all outbound"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = local.common_tags
}
resource "aws_s3_bucket" "data" {
bucket = "${var.project}-${var.environment}-data"
}
resource "aws_s3_bucket_public_access_block" "data" {
bucket = aws_s3_bucket.data.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
bucket = aws_s3_bucket.data.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
kms_master_key_id = aws_kms_key.s3.arn
}
}
}
```
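The secrets row in the table above recommends a secret manager over tfvars. A minimal sketch with AWS Secrets Manager; the secret name and the consuming resource are illustrative assumptions:
```hcl
# Read the password at plan time instead of committing it to tfvars
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/myapp/db-password" # assumed secret name
}

resource "aws_db_instance" "main" {
  identifier = "${var.project}-${var.environment}-db"
  engine     = "postgres"
  username   = "app"
  password   = data.aws_secretsmanager_secret_version.db_password.secret_string
  # ... other required arguments omitted for brevity
}
```
Note that the resolved value still ends up in the state file, which is one more reason the remote backend above should stay encrypted.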
### Variable Validation
| Check | Recommendation | Severity |
|-------|----------------|----------|
| No type constraint | Add type | MEDIUM |
| No validation | Add validation block | MEDIUM |
| No description | Add description | LOW |
| Sensitive not marked | Add sensitive = true | HIGH |
```hcl
# GOOD: Well-defined variables
variable "vpc_cidr" {
description = "CIDR block for VPC"
type = string
default = "10.0.0.0/16"
validation {
condition = can(cidrnetmask(var.vpc_cidr))
error_message = "Must be a valid CIDR block."
}
}
variable "db_password" {
description = "Database password"
type = string
sensitive = true # Won't show in logs
validation {
condition = length(var.db_password) >= 16
error_message = "Password must be at least 16 characters."
}
}
variable "allowed_environments" {
description = "List of allowed environment names"
type = list(string)
default = ["dev", "staging", "prod"]
}
```
### Resource Naming
| Check | Recommendation | Severity |
|-------|----------------|----------|
| Inconsistent naming | Use naming convention | MEDIUM |
| No tags | Add standard tags | MEDIUM |
```hcl
# GOOD: Consistent naming and tagging
locals {
name_prefix = "${var.project}-${var.environment}"
common_tags = {
Project = var.project
Environment = var.environment
ManagedBy = "terraform"
Owner = var.owner
}
}
resource "aws_vpc" "main" {
cidr_block = var.vpc_cidr
tags = merge(local.common_tags, {
Name = "${local.name_prefix}-vpc"
})
}
```
## Response Template
```
## Terraform Review Results
**Project**: [name]
**Terraform**: 1.6 | **Provider**: AWS
### Module Structure
| Status | File | Issue |
|--------|------|-------|
| MEDIUM | main.tf | 500+ lines, should split |
### State Management
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | - | Using local state |
### Security
| Status | File | Issue |
|--------|------|-------|
| CRITICAL | security.tf:23 | Security group allows 0.0.0.0/0 |
### Variables
| Status | File | Issue |
|--------|------|-------|
| HIGH | variables.tf | db_password not marked sensitive |
### Recommended Actions
1. [ ] Configure remote state backend with locking
2. [ ] Restrict security group ingress rules
3. [ ] Mark sensitive variables
4. [ ] Split main.tf into logical files
```
## Best Practices
1. **Structure**: Separate by environment, use modules
2. **State**: Remote backend with locking and encryption
3. **Security**: No secrets in code, least privilege
4. **Variables**: Type constraints, validation, descriptions
5. **Naming**: Consistent convention, standard tags
## Integration
- `k8s-reviewer`: EKS/GKE cluster configs
- `infra-security-reviewer`: Compliance checks
- `ci-cd-reviewer`: Terraform in pipelines

View File

@@ -0,0 +1,230 @@
---
name: test-generator
description: |
WHEN: Test code generation, unit/integration/E2E test writing, component/hook/utility tests
WHAT: Framework detection + Jest/Vitest/RTL/Playwright templates + Happy Path/Edge/Error case tests
WHEN NOT: Coverage analysis → coverage-analyzer, Test quality review → code-reviewer
---
# Test Generator Skill
## Purpose
Automatically generates tests by detecting project test framework and applying appropriate patterns.
## When to Use
- Test generation requests
- New component/function needs tests
- Unit, E2E, integration test mentions
- Coverage improvement needed
## Framework Detection
### Test Runners
| Framework | Detection | package.json |
|-----------|-----------|--------------|
| Jest | `jest.config.*` | `jest` |
| Vitest | `vitest.config.*` | `vitest` |
| Playwright | `playwright.config.*` | `@playwright/test` |
| Cypress | `cypress.config.*` | `cypress` |
### Test Libraries
| Library | Purpose | package.json |
|---------|---------|--------------|
| RTL | React components | `@testing-library/react` |
| Vue Test Utils | Vue components | `@vue/test-utils` |
## Workflow
### Step 1: Detect Environment
```
**Runner**: Jest
**Library**: React Testing Library
**Config**: jest.config.js
**Test Dir**: __tests__/, *.test.tsx
```
### Step 2: Select Target
**AskUserQuestion:**
```
"Which code to test?"
Options:
- Specific file/component
- Auto-detect untested files
- Recently changed files
- Entire directory
```
### Step 3: Select Test Type
**AskUserQuestion:**
```
"What type of tests?"
Options:
- Unit Test
- Integration Test
- Component Test
- E2E Test (Playwright/Cypress)
```
### Step 4: Coverage Goal
**AskUserQuestion:**
```
"Coverage goal?"
Options:
- Happy Path only
- Happy Path + Edge Cases
- Full (including errors)
- Specify scenarios
```
## Test Templates
### React Component (Jest + RTL)
```typescript
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Component } from './Component'
describe('Component', () => {
const defaultProps = { /* ... */ }
const renderComponent = (props = {}) =>
render(<Component {...defaultProps} {...props} />)
describe('Rendering', () => {
it('renders with default state', () => {
renderComponent()
expect(screen.getByRole('button')).toBeInTheDocument()
})
})
describe('Interactions', () => {
it('calls callback on click', async () => {
const onClick = jest.fn()
renderComponent({ onClick })
await userEvent.click(screen.getByRole('button'))
expect(onClick).toHaveBeenCalledTimes(1)
})
})
describe('Edge Cases', () => {
it('handles empty data', () => {
renderComponent({ data: [] })
expect(screen.getByText('No data')).toBeInTheDocument()
})
})
})
```
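If detection finds Vitest instead of Jest (see the runner table above), the same template applies with Vitest's explicit imports and `vi` in place of the `jest` global; a minimal sketch:
```typescript
import { describe, it, expect, vi } from 'vitest'
import { render, screen } from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import { Component } from './Component'

describe('Component', () => {
  it('calls callback on click', async () => {
    const onClick = vi.fn() // vi.fn() replaces jest.fn()
    render(<Component onClick={onClick} />)
    await userEvent.click(screen.getByRole('button'))
    expect(onClick).toHaveBeenCalledTimes(1)
  })
})
```
Everything else in the template (RTL queries, user-event) is runner-agnostic.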
### React Hook
```typescript
import { renderHook, act, waitFor } from '@testing-library/react'
import { useCustomHook } from './useCustomHook'
describe('useCustomHook', () => {
it('returns initial state', () => {
const { result } = renderHook(() => useCustomHook())
expect(result.current.value).toBe(initialValue)
})
it('updates state', () => {
const { result } = renderHook(() => useCustomHook())
act(() => { result.current.setValue('new') })
expect(result.current.value).toBe('new')
})
})
```
### Utility Function
```typescript
import { utilityFunction } from './utils'
describe('utilityFunction', () => {
it('processes valid input', () => {
expect(utilityFunction('valid')).toBe('expected')
})
describe('Edge Cases', () => {
it('handles empty string', () => {
expect(utilityFunction('')).toBe('')
})
it('handles null', () => {
expect(utilityFunction(null)).toBeNull()
})
})
describe('Errors', () => {
it('throws on invalid input', () => {
expect(() => utilityFunction(undefined)).toThrow()
})
})
})
```
### E2E (Playwright)
```typescript
import { test, expect } from '@playwright/test'
test.describe('User Flow: Login', () => {
test.beforeEach(async ({ page }) => {
await page.goto('/login')
})
test('logs in successfully', async ({ page }) => {
await page.getByLabel('Email').fill('user@example.com')
await page.getByLabel('Password').fill('password123')
await page.getByRole('button', { name: 'Login' }).click()
await expect(page).toHaveURL('/dashboard')
})
test('shows error on invalid credentials', async ({ page }) => {
await page.getByLabel('Email').fill('wrong@example.com')
await page.getByLabel('Password').fill('wrong')
await page.getByRole('button', { name: 'Login' }).click()
await expect(page.getByRole('alert')).toContainText('Login failed')
})
})
```
## Response Template
```
## Tests Generated
**Target**: src/components/Button.tsx
**Output**: src/components/__tests__/Button.test.tsx
**Runner**: Jest + RTL
### Test Cases
| Category | Test | Description |
|----------|------|-------------|
| Rendering | Default render | Renders correctly |
| Interaction | Click event | onClick callback |
| Edge Case | Long text | Overflow handling |
### Run
\`\`\`bash
npm test -- Button.test.tsx
npm test -- --coverage Button.test.tsx
\`\`\`
### Expected Coverage
- Lines: ~90%
- Branches: ~85%
- Functions: ~100%
```
## Best Practices
1. **AAA Pattern**: Arrange-Act-Assert (see the sketch after this list)
2. **Clear Names**: Expected behavior in test name
3. **Independence**: Each test runs independently
4. **Minimal Mocking**: Mock only when necessary
5. **Real User Behavior**: Prefer user-event
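A minimal sketch of practice 1 with the three phases labeled; `formatPrice` is a hypothetical utility used only for illustration:
```typescript
import { formatPrice } from './formatPrice' // hypothetical utility

describe('formatPrice', () => {
  it('formats a number as a currency string', () => {
    // Arrange: set up inputs
    const amount = 1234.5

    // Act: call the code under test
    const result = formatPrice(amount)

    // Assert: verify the outcome
    expect(result).toBe('$1,234.50')
  })
})
```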
## Integration
- `/generate-tests` command
- `coverage-analyzer` skill
- `code-reviewer` skill
## Notes
- Follows existing test patterns if present
- Test file location matches project structure
- Mocking based on actual implementation analysis