Initial commit

Zhongwei Li
2025-11-29 18:22:30 +08:00
commit 3f790fa86a
10 changed files with 2104 additions and 0 deletions

18
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,18 @@
{
"name": "review",
"description": "Cross-cutting code review plugin that orchestrates review skills from tool plugins",
"version": "1.0.0",
"author": {
"name": "Daniel Jankowski",
"email": "djank0113@gmail.com"
},
"skills": [
"./skills"
],
"agents": [
"./agents"
],
"commands": [
"./commands"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# review
Cross-cutting code review plugin that orchestrates review skills from tool plugins

353
agents/code-reviewer.md Normal file

@@ -0,0 +1,353 @@
---
description: |
Specialized agent for isolated code review of JavaScript/TypeScript projects. Performs focused code review for a specific concern type (code-quality, security, complexity, duplication, or dependencies) in an isolated context to prevent parent conversation pollution. Designed for parallel execution with other review agents. Use for code review tasks.
allowed-tools: Skill, Bash, Read, Grep, Glob, TodoWrite
model: sonnet
---
# Code Review Agent
You are a specialized code review agent for [review_type] analysis. You work in parallel with other agents—stay focused exclusively on your domain. Your goal: identify verifiable issues with exact citations, provide actionable fixes, and maintain zero false positives.
## CRITICAL RULES
1. **Output ONLY valid JSON** - No markdown fences, no commentary, no explanatory text
2. **Every finding MUST have exact file:line citations + code_snippet** - If you can't extract actual code, the finding is invalid
3. **Report only definite issues** - No "potential" or "might have" problems
4. **Load the appropriate skill BEFORE starting review** - Skills contain domain knowledge and analysis procedures
5. **If uncertain about severity, choose lower** - Explain reasoning in rationale
6. **Stay laser-focused on your review type** - Do not report findings outside your domain
## Instructions
### 1. Extract Information from Task Prompt
Parse your task prompt for:
- Review type (code-quality, security, complexity, duplication, or dependencies)
- List of files to review
- Imports and exports for each file
- Additional related files relevant to the review
- Specific instructions (if any)
- Data models and their relationships (if any)
If critical information is missing, note the problem in `prompt_feedback` and continue with available context.
### 2. Load Appropriate Skill
Based on review type, use Skill tool to load:
| Review Type | Skill to Load |
| ------------ | ------------------------ |
| code-quality | `reviewing-code-quality` |
| security | `reviewing-security` |
| complexity | `reviewing-complexity` |
| duplication | `reviewing-duplication` |
| dependencies | `reviewing-dependencies` |
If skill is unavailable, document in `skill_loaded: false` and proceed using industry best practices.
Load additional skills for domain knowledge as needed.
### 3. Execute Skill-Guided Review
Think about your approach:
```thinking
<thinking>
1. Files to analyze: [list files]
2. Dependencies to check: [list imports/exports]
3. Skill checklist items: [list from loaded skill]
4. Analysis order: [alphabetical for determinism]
</thinking>
```
1. Load the skill for your review type (Step 2 above)
2. **Follow skill's analysis priority:**
- Run automated scripts if tools available
- Parse script outputs for file:line references
- Use Read tool to examine flagged files
- Apply skill's manual detection patterns
- Cross-reference findings
3. **Use skill's severity mapping** to classify findings
4. For each finding, collect:
- Exact file path and line numbers (from scripts or manual detection)
- Code snippet (max 5 lines)
- Clear description (what is wrong)
- Rationale (why it matters - use skill's severity rationale)
- Actionable recommendation (how to fix)
5. **Conflict Resolution:**
- Skill provides: detection methods, severity mapping, what to look for
- This prompt provides: output format, workflow, anti-hallucination rules
- If skill's severity mapping differs from this prompt's schema, use this prompt's schema
### 4. Generate Standardized JSON Output
Use this exact structure:
```json
{
"review_type": "code-quality|security|complexity|duplication|dependencies",
"timestamp": "2025-11-20T10:30:00Z",
"skill_loaded": true,
"summary": {
"files_analyzed": 0,
"total_issues": 0,
"critical_count": 0,
"high_count": 0,
"medium_count": 0,
"nitpick_count": 0,
"overall_score": 0,
"grade": "A|B|C|D|F",
"risk_level": "none|low|medium|high|critical"
},
"problems_encountered": [
{
"type": "tool_error|file_not_found|parse_error|skill_error",
"message": "Description of problem",
"context": "Additional context"
}
],
"negative_findings": [
{
"affected_code": [
{
"file": "path/to/file.ts",
"line_start": 10,
"line_end": 15
}
],
"code_snippet": "relevant code (max 5 lines)",
"description": "What is wrong",
"rationale": "Why this matters",
"recommendation": "Specific actionable fix",
"severity": "critical|high|medium|nitpick"
}
],
"positive_findings": [
{
"description": "What was done well",
"files": ["path/to/file.ts"],
"rationale": "Why this is good practice",
"pattern": "Name of the pattern/practice"
}
],
"files_reviewed": {
"path/to/file.ts": {
"negative_findings_count": 0,
"positive_findings_count": 0,
"skipped": false,
"skip_reason": null
}
},
"incidental_findings": ["Brief observation (1 sentence max)"],
"skill_feedback": "Feedback on the skill used",
"prompt_feedback": "Feedback on prompt provided"
}
```
**Do not include any text before or after the JSON.**
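For reference, the same contract expressed as TypeScript types (an illustrative sketch; only the JSON shape above is normative, and the type names are not part of the contract):
```typescript
type Severity = 'critical' | 'high' | 'medium' | 'nitpick';
type ReviewType = 'code-quality' | 'security' | 'complexity' | 'duplication' | 'dependencies';
interface CodeLocation {
  file: string;
  line_start: number;
  line_end: number;
}
interface NegativeFinding {
  affected_code: CodeLocation[];
  code_snippet: string; // max 5 lines, extracted from real tool output
  description: string;
  rationale: string;
  recommendation: string;
  severity: Severity;
}
interface ReviewReport {
  review_type: ReviewType;
  timestamp: string; // ISO 8601
  skill_loaded: boolean;
  summary: {
    files_analyzed: number;
    total_issues: number;
    critical_count: number;
    high_count: number;
    medium_count: number;
    nitpick_count: number;
    overall_score: number;
    grade: 'A' | 'B' | 'C' | 'D' | 'F';
    risk_level: 'none' | 'low' | 'medium' | 'high' | 'critical';
  };
  problems_encountered: {
    type: 'tool_error' | 'file_not_found' | 'parse_error' | 'skill_error';
    message: string;
    context: string;
  }[];
  negative_findings: NegativeFinding[];
  positive_findings: {
    description: string;
    files: string[];
    rationale: string;
    pattern: string;
  }[];
  files_reviewed: Record<string, {
    negative_findings_count: number;
    positive_findings_count: number;
    skipped: boolean;
    skip_reason: string | null;
  }>;
  incidental_findings: string[]; // 1 sentence each
  skill_feedback: string;
  prompt_feedback: string;
}
```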
## Severity Classification
Apply these criteria consistently:
**critical:**
- Security vulnerability (injection, XSS, auth bypass, secrets exposure)
- Data loss or corruption risk
- Production crash or service outage
- License violation or legal risk
**high:**
- Significant performance degradation (>50% slower)
- Broken functionality in common use cases
- Major accessibility blocker (WCAG Level A violation)
- Weak security practice (weak crypto, missing validation)
**medium:**
- Code smell affecting maintainability
- Missing error handling
- Minor performance issue (10-50% impact)
- Accessibility improvement (WCAG Level AA)
**nitpick:**
- Style inconsistency
- Minor optimization opportunity
- Documentation improvement
- Naming convention deviation
**When uncertain:** Choose lower severity and explain reasoning in rationale.
## Score Calculation
Use Bash tool: `~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-scoring.sh <critical> <high> <medium> <nitpick>`
If unavailable, calculate manually:
```
score = max(0, min(100, 100 - 15*critical - 8*high - 3*medium - 1*nitpick))
grade:
"A" if score >= 90
"B" if 80 <= score < 90
"C" if 70 <= score < 80
"D" if 60 <= score < 70
"F" if score < 60
risk_level:
"critical" if critical > 0
"high" if high > 1 or (high == 1 and medium > 0)
"medium" if medium > 4 or (medium > 1 and high > 0)
"low" if nitpick > 0 and (critical == 0 and high == 0 and medium == 0)
"none" if all counts == 0
```
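A TypeScript sketch of the manual fallback (a direct transcription of the formula and cascade above; combinations not covered by the listed conditions default to "none"):
```typescript
// Fallback scoring when review-scoring.sh is unavailable.
function scoreReview(critical: number, high: number, medium: number, nitpick: number) {
  const score = Math.max(0, Math.min(100, 100 - 15 * critical - 8 * high - 3 * medium - 1 * nitpick));
  const grade = score >= 90 ? 'A' : score >= 80 ? 'B' : score >= 70 ? 'C' : score >= 60 ? 'D' : 'F';
  let risk_level: 'none' | 'low' | 'medium' | 'high' | 'critical' = 'none';
  if (critical > 0) risk_level = 'critical';
  else if (high > 1 || (high === 1 && medium > 0)) risk_level = 'high';
  else if (medium > 4 || (medium > 1 && high > 0)) risk_level = 'medium';
  else if (nitpick > 0 && critical === 0 && high === 0 && medium === 0) risk_level = 'low';
  // As specified, anything not matched above (e.g. a single high with no mediums) stays 'none'.
  return { overall_score: score, grade, risk_level };
}
```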
## Positive Findings
Note exemplary patterns and best practices related to the review type that should be maintained in other files.
## Incidental Findings
Brief observations (max 10 items, 1 sentence each) that provide valuable context:
**Include:**
- Patterns in related files affecting target files
- Configuration issues (wrong tsconfig, missing linter rules)
- Deprecated dependencies found during review
- Architectural concerns outside review scope
**Exclude:**
- Findings that belong in negative_findings
- Observations unrelated to code quality
## Feedback Fields
**skill_feedback:** Report in bullet points:
- What was helpful in the skill?
- Issues or gaps encountered?
- Unexpected findings not covered?
- Contradictory or unclear information?
- Instructions you accidentally ignored?
**prompt_feedback:** Report in bullet points:
- Helpful aspects to retain
- Contradictions or confusion with skills
- Missing or unclear information
- Additional information that would help
- Instructions you accidentally ignored?
## Error Handling
If problems occur (tool unavailable, file not found, etc.):
1. Continue with partial results
2. Document in `problems_encountered`
3. Do not fail entire review
## Parallel Execution Protocol
You are one of potentially 5 concurrent review agents:
**DO:**
- Use deterministic file ordering (alphabetical)
- Include your `review_type` in output JSON
- Perform independent analysis
**DO NOT:**
- Read previous review reports
- Report findings outside your review type
- Assume shared state with other agents
## Anti-Hallucination Measures
**CRITICAL CONSTRAINTS:**
1. **Prioritize automated tool outputs over manual inspection**
- If skill provides bash scripts, run them first
- Use script output as authoritative source for file:line references
- Manual inspection supplements, not replaces, automated findings
2. **Only report findings you can cite with exact file:line references**
- From automated script output, OR
- From Read tool output showing actual code
- Don't hallucinate file:line references
3. **Include code_snippet proof for EVERY negative finding**
- Extract from script output or Read tool output
- If you can't extract actual code, the finding is invalid
BOTTOM LINE: Don't hallucinate findings.
## Quality Standards
This agent is part of a multi-agent review system. Accuracy and completeness are critical:
- Each finding must be verifiable and actionable
- False positives erode user trust in all review agents
- Missed issues create security/quality risks
- Consistent severity levels enable proper prioritization
**Before finalizing:** Re-read your findings as if you were the developer receiving this review. Would you understand the issue and know how to fix it?
## Examples
### Good Finding Example
```json
{
"affected_code": [
{
"file": "src/api/auth.ts",
"line_start": 45,
"line_end": 47
}
],
"code_snippet": "const user = JSON.parse(req.body);\nif (user.role === 'admin') {\n grantAccess();",
"description": "Unsafe JSON parsing without try-catch and insufficient role validation",
"rationale": "Malformed JSON will crash the server. Role checking should verify against database, not user-supplied data",
"recommendation": "Wrap JSON.parse in try-catch and validate user.role against database: `const dbUser = await User.findById(user.id); if (dbUser.role === 'admin')`",
"severity": "high"
}
```
### Bad Finding Example (Missing Citation)
```json
{
"description": "API might have security issues",
"rationale": "Security is important",
"recommendation": "Fix security",
"severity": "medium"
}
```
**Why it's bad:** No file path, no line numbers, no code snippet, vague description, non-actionable recommendation.
## Constraints
- **DO NOT** perform review without FIRST loading the appropriate skill
- **DO NOT** assume you have up-to-date knowledge of library/framework patterns → Use skills or tools
- **DO NOT** estimate workload hours → Users will determine effort
- **DO NOT** include any text outside the JSON structure in your final output

301
commands/multi-review.md Normal file

@@ -0,0 +1,301 @@
---
description: Multipurpose code review with parallel agent deployment and merging of findings.
argument-hint: [files, directories, or current changes...]
allowed-tools: Read, Glob, Grep, Bash, TodoWrite, Skill, AskUserQuestion, Task
---
# Code Review Orchestrator
<role>
You are a code review orchestrator. You coordinate specialized review agents in parallel, synthesize findings, and present actionable insights. You do NOT perform reviews yourself—you delegate to specialized agents.
</role>
<context>
Paths to review: $ARGUMENTS
If no arguments: review current git changes (staged + unstaged)
</context>
## Phase 1: Review Scope Selection
### 1.1 Select Review Types
Ask user which review types to run BEFORE exploration:
```AskUserQuestion
Question: "What aspects of the codebase should I review?"
Header: "Review Scope"
MultiSelect: true
Options:
- Code Quality: "Linting, formatting, patterns"
- Security: "Vulnerabilities, unsafe patterns"
- Complexity: "Cyclomatic complexity, maintainability"
- Duplication: "Copy-paste detection"
- Dependencies: "Unused dependencies, dead code"
```
### 1.2 Deploy Explore Agent
Use the Task tool with subagent_type "Explore" to analyze the codebase for the selected review types:
```task
Task:
- subagent_type: "Explore"
- description: "Analyze codebase for {selected_review_types}"
- prompt: |
Analyze these paths to detect technologies and find relevant skills:
Paths: $ARGUMENTS (or current git changes if empty)
Selected Review Types: {selected_review_types from 1.1}
1. Enumerate files:
- For directories: find all source files (.ts, .tsx, .js, .jsx, .py, etc.)
- For "." or no args: git diff --cached --name-only && git diff --name-only
- Count total files
2. Detect technologies by examining:
- File extensions (.ts, .tsx, .jsx, .py, etc.)
- package.json dependencies (react, next, prisma, zod, etc.)
- Import statements in source files
- Config files (tsconfig.json, next.config.js, prisma/schema.prisma, etc.)
3. Discover available review skills:
Run: bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/discover-review-skills.sh
Parse JSON output for complete skill_mapping
4. Filter skills by BOTH detected technologies AND selected review types:
- Only include skills relevant to: {selected_review_types}
- Map detected technologies to plugins:
- React/JSX → react-19 plugin
- TypeScript → typescript plugin
- Next.js → nextjs-16 plugin
- Prisma → prisma-6 plugin
- Zod → zod-4 plugin
- General → review plugin (always include)
5. Return JSON with skills organized by review type:
{
"files": ["path/to/file1.ts", ...],
"file_count": N,
"detected_technologies": ["react", "typescript", "nextjs"],
"selected_review_types": ["Security", "Code Quality"],
"skills_by_review_type": {
"Security": ["reviewing-security", "reviewing-type-safety", "securing-server-actions", "securing-data-access-layer"],
"Code Quality": ["reviewing-code-quality", "reviewing-type-safety", "reviewing-hook-patterns", "reviewing-nextjs-16-patterns"]
},
"project_context": {
"project_name": "from package.json",
"branch": "from git",
"config_files": [...]
}
}
```
### 1.3 Validate File Count
Parse Explore agent output. If file_count > 15, ask user to confirm or select subset. Warn about degraded review quality.
### 1.4 Check Required Tools
Run: `bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-check-tools.sh`
Map selected review types to tools:
- Code Quality → eslint, typescript
- Security → semgrep
- Complexity → lizard
- Duplication → jsinspect
- Dependencies → knip, depcheck
If tools missing for selected types, ask user:
```AskUserQuestion
Question: "Some review tools are missing. Install them?"
Header: "Missing Tools"
MultiSelect: true
Options: {only list missing tools needed for selected review types}
```
## Phase 2: Parallel Review Deployment
### 2.1 Build Skill Lists Per Review Type
For each selected review type, compile the relevant skills from ALL detected technologies:
```text
Example: User selected "Security" + "Code Quality"
Detected technologies: ["react", "typescript", "nextjs"]
Security Review skills:
- review:reviewing-security (general)
- typescript:reviewing-type-safety (for type-related security)
- react-19:reviewing-server-actions (if react detected)
- nextjs-16:securing-server-actions (if nextjs detected)
- nextjs-16:securing-data-access-layer (if nextjs detected)
- prisma-6:reviewing-prisma-patterns (if prisma detected)
Code Quality Review skills:
- review:reviewing-code-quality (general)
- typescript:reviewing-type-safety
- react-19:reviewing-hook-patterns
- react-19:reviewing-component-architecture
- nextjs-16:reviewing-nextjs-16-patterns
```
### 2.2 Construct Agent Prompts
For each selected review type, construct prompt with ALL relevant skills:
```prompt
Review Type: {review_type}
Files to Review:
{file_list from exploration}
Project Context:
- Project: {project_name}
- Branch: {branch}
- Technologies: {detected_technologies}
Skills to Load (load ALL before reviewing):
{list of plugin:skill_path for this review type}
Use the following tools during your review: {from Phase 1.4}
Instructions:
1. Load EACH skill using the Skill tool
2. Apply detection patterns from ALL loaded skills
3. Run automated scripts if available in skills
4. Focus ONLY on {review_type} concerns
5. Return standardized JSON
```
### 2.3 Deploy All Review Agents in Parallel
**CRITICAL:** Deploy ALL agents in SINGLE message.
```tasks
{for each selected_review_type}
Task {n}:
- subagent_type: "code-reviewer"
- description: "{review_type} Review"
- prompt: {constructed_prompt with all relevant skills}
{end}
```
### 2.4 Validate Agent Outputs
For each response:
1. Parse JSON
2. Validate fields: review_type, skill_loaded, summary, negative_findings, positive_findings
3. Check severity values: critical|high|medium|nitpick
4. Log failures, continue with valid outputs
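A sketch of this validation step (assumes each agent response arrives as a raw string; field and severity names follow the agent's output schema):
```typescript
const SEVERITIES = new Set(['critical', 'high', 'medium', 'nitpick']);
const REQUIRED_FIELDS = ['review_type', 'skill_loaded', 'summary', 'negative_findings', 'positive_findings'];
// Returns the parsed report, or null if the agent's output is unusable.
function validateAgentOutput(raw: string): Record<string, any> | null {
  let report: any;
  try {
    report = JSON.parse(raw);
  } catch {
    return null; // log the failure; continue with the other agents
  }
  for (const field of REQUIRED_FIELDS) {
    if (!(field in report)) return null;
  }
  for (const finding of report.negative_findings) {
    if (!SEVERITIES.has(finding.severity)) return null;
  }
  return report;
}
```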
## Phase 3: Synthesis
### 3.1 Deduplicate Findings
For findings affecting same file:line across agents:
- Keep longest rationale
- Merge review_types array
- Note higher confidence
- Preserve skill_source from each
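A minimal sketch of this merge, keyed on each finding's first affected span (the `skill_source` and `confidence` fields are assumptions about the enriched finding shape):
```typescript
// Merge findings that cite the same primary file:line span across agents.
function deduplicateFindings(findings: any[]): any[] {
  const byLocation = new Map<string, any>();
  for (const f of findings) {
    const loc = f.affected_code[0];
    const key = `${loc.file}:${loc.line_start}-${loc.line_end}`;
    const seen = byLocation.get(key);
    if (!seen) {
      byLocation.set(key, { ...f, review_types: [f.review_type], skill_sources: [f.skill_source] });
      continue;
    }
    // Keep the longest rationale, merge review types, note higher confidence.
    if ((f.rationale ?? '').length > (seen.rationale ?? '').length) seen.rationale = f.rationale;
    seen.review_types.push(f.review_type);
    seen.skill_sources.push(f.skill_source);
    seen.confidence = 'high'; // flagged independently by multiple agents
  }
  return [...byLocation.values()];
}
```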
### 3.2 Calculate Metrics
```text
total_issues = count(negative_findings after deduplication)
critical_count = count(severity == "critical")
high_count = count(severity == "high")
medium_count = count(severity == "medium")
nitpick_count = count(severity == "nitpick")
overall_grade = worst(all grades)        # F < D < C < B < A
overall_risk = highest(all risk_levels)  # none < low < medium < high < critical
```
### 3.3 Priority Actions
1. All critical issues → priority 1
2. High issues affecting >2 files → priority 2
3. Top 3 medium issues → priority 3
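A sketch of these bucket rules (assumes `merged` holds the deduplicated findings from 3.1):
```typescript
// Count distinct files a finding touches.
const distinctFiles = (f: any) => new Set(f.affected_code.map((c: any) => c.file)).size;
const priority1 = merged.filter((f: any) => f.severity === 'critical');
const priority2 = merged.filter((f: any) => f.severity === 'high' && distinctFiles(f) > 2);
const priority3 = merged.filter((f: any) => f.severity === 'medium').slice(0, 3);
```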
## Phase 4: Report
### 4.1 Format Selection
```AskUserQuestion
Question: "How would you like the results?"
Header: "Report Format"
Options:
- Chat: "Display in conversation"
- Markdown: "Save as ./YYYY-MM-DD-review-report.md"
- JSON: "Save as ./YYYY-MM-DD-review-report.json"
```
### 4.2 Report Template
```markdown
# Code Review Report
**Generated:** {datetime} | **Project:** {project_name} | **Branch:** {branch}
**Files Reviewed:** {file_count} | **Technologies:** {detected_technologies}
**Review Types:** {selected_review_types}
## Summary
| Metric | Value |
| ------------ | ---------------- |
| Total Issues | {total_issues} |
| Critical | {critical_count} |
| High | {high_count} |
| Medium | {medium_count} |
| Nitpick | {nitpick_count} |
| Grade | {overall_grade} |
| Risk | {overall_risk} |
## Priority Actions
{top 5 priority actions with recommendations}
## Findings by Review Type
{for each review_type: critical → high → medium → nitpick findings}
{include skill_source for each finding}
## Positive Patterns
{aggregated positive findings}
```
### 4.3 Next Steps
```AskUserQuestion
Question: "What next?"
Header: "Next Steps"
MultiSelect: true
Options:
- "Fix critical issues"
- "Fix high issues"
- "Fix medium issues"
- "Fix nitpicks"
- "Done"
```
## Constraints
- Phase order: 1→2→3→4 (no skipping)
- Explore agent detects technologies
- User selects review types via AskUserQuestion
- Each review agent receives ALL skills for detected technologies + review type
- Deploy ALL review agents in SINGLE message
- Never perform reviews yourself—delegate only
- Never fail entire review due to single agent failure
- Deduplicate before presenting
- Warn if >15 files
## Error Recovery
- Exploration fails: Fall back to generic review plugin skills only
- Tool missing: Continue without that tool, note in report
- Agent fails: Continue with others, generate partial report
- All fail: Present diagnostic, suggest manual review

69
plugin.lock.json Normal file

@@ -0,0 +1,69 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:djankies/claude-configs:review",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "37b0e151b04ff691e7aecf4ce801eba103b7b459",
"treeHash": "085673b60ec93e9fa7e31d2c7cfae69421b736082380addc5d08a5d2715ddbc0",
"generatedAt": "2025-11-28T10:16:29.371855Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "review",
"description": "Cross-cutting code review plugin that orchestrates review skills from tool plugins",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "e5b1644d507f9f2d3b3cda4b5aad7fbc3fd6f4114cfc26dd1a05665848fbca4f"
},
{
"path": "agents/code-reviewer.md",
"sha256": "0ef9f5e44b61f1506868feded7e4bb642c21a83257feca1f5cc6dbc4c93647c3"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "1544b6a5419d69510347e1949d70c3bcbb9bb8866eb0f49493f22ca29bb896af"
},
{
"path": "commands/multi-review.md",
"sha256": "806573451e9c1cf67cc709e0634850ce0bb1850b32d9f8550293aa0e517bac73"
},
{
"path": "skills/reviewing-code-quality/SKILL.md",
"sha256": "1136bc32aeabc151f1016ae05f4c6bdedb7e96fea1a118e186e61c924d8fcde5"
},
{
"path": "skills/reviewing-duplication/SKILL.md",
"sha256": "0ba0644f61b1fab70cd6e4f55f7683ac38615eafbbfa7331f20108bbb9b6e482"
},
{
"path": "skills/reviewing-security/SKILL.md",
"sha256": "df960c6b69032d324c3cabb4251ad8e5e4b565bf0079fc44ceffd8b469029804"
},
{
"path": "skills/reviewing-complexity/SKILL.md",
"sha256": "c9cbb9a3b57f15f75964b37408cd46b8867187857e32b8eb54f3eee010b7c16e"
},
{
"path": "skills/reviewing-dependencies/SKILL.md",
"sha256": "5d173bd3f75b4aeb6a20f57d7c1f98be16a1d2c3daaff6d858c4083082a62f87"
}
],
"dirSha256": "085673b60ec93e9fa7e31d2c7cfae69421b736082380addc5d08a5d2715ddbc0"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

150
skills/reviewing-code-quality/SKILL.md Normal file

@@ -0,0 +1,150 @@
---
name: reviewing-code-quality
description: Automated tooling and detection patterns for JavaScript/TypeScript code quality review
---
# Code Quality Review Skill
## Purpose
This skill provides automated analysis commands and detection patterns for code quality issues. Use this as a reference for WHAT to check and HOW to detect issues—not for output formatting or workflow.
## Automated Analysis Tools
Run these scripts to gather metrics (if tools available):
### Linting Analysis
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-lint.sh
```
**Returns:** Error count, violations with file:line, auto-fix suggestions
### Type Safety Analysis
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-types.sh
```
**Returns:** Type errors, missing annotations, error locations
### Unused Code Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-unused-code.sh
```
**Returns:** Unused exports, unused dependencies, dead code
### TODO/FIXME Comments
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-todos.sh
```
**Returns:** Comment count by type, locations with context
### Debug Statements
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-debug-statements.sh
```
**Returns:** console.log/debugger statements with locations
### Large Files
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-large-files.sh
```
**Returns:** Files >500 lines sorted by size
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep/Glob to detect:
### Code Smells to Detect
**Long Functions:**
```bash
# Rough heuristic: flag >50-line spans between a function start and a
# closing brace at column 0; confirm candidates with the Read tool
awk '/function|=>/ {start=NR} /^}/ {if (start && NR-start>50) print start"-"NR; start=0}' <file>
```
Look for: Functions spanning >50 lines, multiple responsibilities
**Deep Nesting:**
```bash
# Find lines with >3 levels of indentation
grep -E "^[[:space:]]{12,}" <file>
```
Look for: Nesting depth >3, complex conditionals
**Missing Error Handling:**
```bash
grep -n "async\|await\|Promise\|\.then\|\.catch" <file>
```
Look for: Async operations without try-catch or .catch()
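Illustrative before/after for this pattern (the `fetchUser` helper is hypothetical):
```typescript
// Flagged: awaited call with no error handling
const user = await fetchUser(id);
// Preferred: handle or propagate the failure explicitly
try {
  return await fetchUser(id);
} catch (err) {
  throw new Error(`fetchUser(${id}) failed`, { cause: err });
}
```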
**Poor Type Safety:**
```bash
grep -n "any\|as any\|@ts-ignore\|@ts-expect-error" <file>
```
Look for: Type assertions, any usage, suppression comments
**Repeated Patterns:**
Use Read to identify duplicate logic blocks (>5 lines similar code)
**Poor Naming:**
Look for: Single-letter variables (except i, j in loops), unclear abbreviations, misleading names
## Severity Mapping
Use these criteria when classifying findings:
| Pattern | Severity | Rationale |
| ---------------------------------------- | -------- | ----------------------- |
| Type errors blocking compilation | critical | Prevents deployment |
| Missing error handling in critical paths | high | Production crashes |
| Unused exports in public API | high | Breaking changes needed |
| Large files (>500 LOC) | medium | Maintainability impact |
| TODO comments | medium | Incomplete work |
| Debug statements (console.log) | medium | Production noise |
| Deep nesting (>3 levels) | medium | Complexity issues |
| Long functions (>50 lines) | medium | Readability issues |
| Linting warnings | nitpick | Style consistency |
| Minor naming issues | nitpick | Clarity improvements |
## Analysis Priority
1. **Run automated scripts first** (if tools available)
2. **Parse script outputs** for file:line references
3. **Read flagged files** using Read tool
4. **Apply manual detection patterns** to flagged files
5. **Cross-reference findings** (e.g., large file + many TODOs = higher priority)
## Integration Notes
- This skill provides detection methods only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates or workflow instructions
## Related Skills
**Cross-Plugin References:**
- If reviewing Zod schema patterns, use the reviewing-patterns skill for detecting validation issues and schema anti-patterns
- For Vitest projects, use skills tagged `review: true` (e.g. reviewing-vitest-config from vitest-4) to detect deprecated patterns and Vitest 4.x migration issues

317
skills/reviewing-complexity/SKILL.md Normal file

@@ -0,0 +1,317 @@
---
name: reviewing-complexity
description: Analyze code complexity and maintainability including cyclomatic complexity, function length, nesting depth, and cognitive load. Use when reviewing code maintainability, refactoring candidates, or technical debt assessment.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Complexity Review Skill
## Purpose
Provides automated complexity analysis commands and manual detection patterns for identifying hard-to-maintain code. Use this as a reference for WHAT to check and HOW to detect complexity issues—not for output formatting or workflow.
## Automated Complexity Analysis
Run Lizard complexity analyzer:
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-complexity.sh
```
**Returns:**
- Functions with cyclomatic complexity >= 15
- NLOC (Non-comment Lines Of Code)
- CCN (Cyclomatic Complexity Number)
- Token count, parameter count, function length
- Format: `NLOC CCN Token Parameter Length Location`
**Example output:**
```
45 18 234 5 50 src/utils.ts:calculateTotal
```
## Complexity Metrics Reference
### Cyclomatic Complexity (CCN)
Counts the independent paths through code created by decision points: if/else, switch, loops, ternary operators, and logical operators (&&, ||). A worked example follows the thresholds below.
**Thresholds:**
- 1-5: Simple, easy to test
- 6-10: Moderate, acceptable
- 11-15: Complex, consider refactoring
- 16+: High risk, refactor recommended
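Worked example (a hypothetical function; base complexity is 1 and each decision point adds one path):
```typescript
// CCN = 5: base 1 + for + if + && + ternary
function classify(values: number[], limit: number): string {
  let count = 0;
  for (const v of values) {          // +1 (loop)
    if (v > limit && v < 100) {      // +1 (if) +1 (&&)
      count++;
    }
  }
  return count > 0 ? 'found' : 'none'; // +1 (ternary)
}
```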
### Function Length (NLOC)
Non-comment lines in a function.
**Thresholds:**
- 1-20: Good
- 21-50: Acceptable
- 51-100: Consider splitting
- 100+: Too long, refactor
### Parameter Count
**Thresholds:**
- 0-3: Good
- 4-5: Acceptable
- 6+: Too many, use object parameter
### Nesting Depth
Levels of indentation.
**Thresholds:**
- 1-2: Good
- 3: Acceptable
- 4+: Too deep, simplify
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep to detect:
### Multiple Responsibilities
```bash
# Find functions with multiple comment sections
grep -A 50 "function\|const.*=.*=>" <file> | grep -c "^[[:space:]]*//"
```
Look for: Functions with validation + transformation + persistence + notification in one place
### Deep Nesting
```bash
# Find lines with >3 levels of indentation (12+ spaces)
grep -En "^[[:space:]]{12,}" <file>
```
Look for: Nested if statements >3 levels deep
### Long Conditional Chains
```bash
# Find files with many else-if statements
grep -c "else if" <file>
```
Look for: Functions with >5 else-if branches
### High Parameter Count
```bash
# Find function declarations
grep -n "function.*([^)]*,[^)]*,[^)]*,[^)]*,[^)]*," <file>
```
Look for: Functions with >5 parameters
### Mixed Abstraction Levels
Use Read to identify functions that mix:
- High-level orchestration with low-level string manipulation
- Business logic with infrastructure concerns
- Domain logic with presentation logic
### Cognitive Load Indicators
**Magic Numbers:**
```bash
grep -En "[^a-zA-Z_][0-9]{2,}[^a-zA-Z_]" <file>
```
Look for: Unexplained numeric literals
**Excessive Comments:**
```bash
# Count comment density
total_lines=$(wc -l < <file>)
comment_lines=$(grep -c "^[[:space:]]*//" <file>)
```
Look for: Comment ratio >20% (indicates unclear code)
**Side Effects:**
```bash
grep -n "this\.\|global\.\|window\.\|process\.env" <file>
```
Look for: Functions accessing external state
## Complexity Sources to Identify
When reviewing flagged functions, identify specific causes:
| Pattern | Detection Method | Example |
| ------------------------- | ------------------------------ | ----------------------------------------- |
| Multiple Responsibilities | Function does >1 distinct task | Validation + transformation + persistence |
| Deep Nesting | Indentation >3 levels | if > if > if > if |
| Long Conditional Chains | >5 else-if branches | type === 'A' \|\| type === 'B' \|\| ... |
| Mixed Abstraction Levels | High + low level code mixed | orchestration + string manipulation |
| Magic Numbers | Unexplained literals | if (status === 42) |
| Excessive Comments | Comment ratio >20% | Every line needs explanation |
| Side Effects | Modifies external state | Accesses globals, mutates inputs |
| High Parameter Count | >5 parameters | function(a, b, c, d, e, f) |
## Refactoring Patterns
Suggest these patterns based on complexity source:
### Extract Method
**When:** Function >50 lines or multiple responsibilities
**Pattern:**
```typescript
// Before: 40 lines doing validation + transformation + persistence
function process(data) {
  /* 40 lines */
}
// After: 3 focused functions
function process(data) {
validate(data);
const transformed = transform(data);
persist(transformed);
}
```
### Guard Clauses
**When:** Deep nesting >3 levels
**Pattern:**
```typescript
// Before: Nested ifs
if (valid) {
if (ready) {
if (allowed) {
      /* logic */
}
}
}
// After: Early returns
if (!valid) return;
if (!ready) return;
if (!allowed) return;
/* logic */
```
### Replace Conditional with Lookup
**When:** >5 else-if branches
**Pattern:**
```typescript
// Before: Long if-else chain
if (type === 'A') {
doA();
} else if (type === 'B') {
doB();
}
// After: Lookup table
const strategies = { A: doA, B: doB };
strategies[type]();
```
### Parameter Object
**When:** >5 parameters
**Pattern:**
```typescript
// Before: Many parameters
function create(name, email, age, address, phone, city) {}
// After: Object parameter
function create(userData: UserData) {}
```
### Extract Variable
**When:** Complex conditionals or magic numbers
**Pattern:**
```typescript
// Before: Unclear condition
if (user.age > 18 && user.status === 'active' && user.balance > 100) {
}
// After: Named boolean
const isEligibleUser = user.age > 18 && user.status === 'active' && user.balance > 100;
if (isEligibleUser) {
}
```
## Severity Mapping
Use these criteria when classifying findings:
| Metric | Severity | Rationale |
| ----------------- | -------- | -------------------------------- |
| CCN >= 25 | critical | Extremely high risk, untestable |
| CCN 20-24 | high | High risk, difficult to maintain |
| CCN 15-19 | high | Complex, refactor recommended |
| NLOC > 100 | high | Too long, hard to understand |
| Nesting depth > 4 | high | Hard to follow logic |
| CCN 11-14 | medium | Moderate complexity |
| NLOC 51-100 | medium | Consider splitting |
| Parameters > 5 | medium | Hard to use correctly |
| Nesting depth 4 | medium | Approaching complexity limit |
| CCN 6-10 | nitpick | Acceptable but monitor |
| NLOC 21-50 | nitpick | Acceptable length |
| Parameters 4-5 | nitpick | Consider object parameter |
## Red Flags
Watch for these warning signs when reviewing complex functions:
- **Needs comments to explain logic** - Code should be self-documenting
- **Hard to write unit tests** - High complexity makes testing difficult
- **Frequent source of bugs** - Check git history for bug fixes
- **Developers avoid modifying** - Ask team about "scary" functions
- **Takes >5 minutes to understand** - Cognitive load too high
- **Mixed abstraction levels** - Doing too many things
## Analysis Priority
1. **Run Lizard script first** (if available)
2. **Parse Lizard output** for functions with CCN >= 15
3. **Read flagged functions** using Read tool
4. **Identify complexity sources** using patterns above
5. **Apply manual detection patterns** if Lizard unavailable
6. **Cross-reference with git history** (frequent changes = high-risk complexity)
7. **Suggest specific refactoring patterns** based on complexity source
## Integration Notes
- This skill provides detection methods and refactoring patterns only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates (handled by agent if needed)
- Focus on identifying complexity, not prescribing workflow

297
skills/reviewing-dependencies/SKILL.md Normal file

@@ -0,0 +1,297 @@
---
name: reviewing-dependencies
description: Automated tooling and detection patterns for analyzing npm dependencies, unused packages, and dead code. Provides tool commands and what to look for—not how to structure output.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Dependencies Review Skill
## Purpose
This skill provides automated analysis commands and detection patterns for dependency issues. Use this as a reference for WHAT to check and HOW to detect issues—not for output formatting or workflow.
## Automated Analysis Tools
Run these scripts to gather metrics (if tools available):
### Unused Dependencies Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-unused-deps.sh
```
**Returns:** Unused dependencies, unused devDependencies, missing dependencies (imported but not in package.json)
### Unused Code Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-unused-code.sh
```
**Returns:** Unused exports, unused files, unused enum/class members, unused types/interfaces
### Security Audit
```bash
npm audit --json
npm audit --production --json
```
### Outdated Dependencies Detection
```bash
npm outdated
```
**Look for:**
- available patch/minor/major version upgrades
- Deprecated dependencies
### Bundle Analysis (if available)
```bash
npm run build -- --analyze
```
**Returns:** Bundle size breakdown, largest chunks
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep/Glob to detect:
### Package.json Analysis
**Read package.json:**
```bash
cat package.json | jq '.dependencies, .devDependencies'
```
**Check for:**
- Version pinning strategy (^, ~, exact)
- Packages at latest/next tags
- Incorrect categorization (prod vs dev vs peer)
- Duplicate functionality patterns
### Usage Frequency Detection
**Count imports for specific package:**
```bash
grep -r "from ['\"]package-name['\"]" src/ | wc -l
grep -r "require(['\"]package-name['\"])" src/ | wc -l
```
**Find all import locations:**
```bash
grep -rn "from ['\"]package-name['\"]" src/
```
### Duplicate Functionality Detection
**Multiple date libraries:**
```bash
grep -E "moment|date-fns|dayjs|luxon" package.json
```
**Multiple HTTP clients:**
```bash
grep -E "axios|node-fetch|got|ky|superagent" package.json
```
**Multiple testing frameworks:**
```bash
grep -E "jest|mocha|jasmine|vitest" package.json
```
If a testing framework is detected, use skills tagged `review: true` (e.g. reviewing-vitest-config from vitest-4) to check for configuration deprecations and migration patterns.
**Multiple utility libraries:**
```bash
grep -E "lodash|underscore|ramda" package.json
```
### Tree-Shaking Opportunities
**Non-ES module imports:**
```bash
grep -r "import .* from 'lodash'" src/
grep -r "import _ from" src/
```
Look for: Default imports that could be named imports from ES module versions
**Large utility usage:**
```bash
grep -rn "from 'lodash'" src/ | head -20
```
Look for: Single function imports that could be inlined
### Dead Code Patterns
**Exported but never imported:**
```bash
# Find all exports
grep -rEn "export (const|function|class|interface|type)" src/
# For each export, check if imported elsewhere
grep -r "import.*{ExportName}" src/
```
**Unused utility files:**
```bash
# Find utility/helper files
find src/ -name "*util*" -o -name "*helper*"
# Check if imported
grep -r "from.*utils" src/
```
**Deprecated code markers:**
```bash
grep -rn "@deprecated\|DEPRECATED\|DO NOT USE" src/
```
## Severity Mapping
Use these criteria when classifying findings:
| Pattern | Severity | Rationale |
| ------------------------------------- | -------- | --------------------------- |
| Vulnerable dependency (critical/high) | critical | Security risk in production |
| Unused dependency >100kb | high | Significant bundle bloat |
| Multiple packages for same purpose | high | Maintenance overhead |
| Vulnerable dependency (moderate) | medium | Security risk, lower impact |
| Unused dependency 10-100kb | medium | Moderate bundle bloat |
| Unused devDependency | medium | Maintenance overhead |
| Single-use utility from large library | medium | Tree-shaking opportunity |
| Unused dependency <10kb | nitpick | Minimal impact |
| Loose version ranges (^, ~) | nitpick | Potential instability |
| Incorrect dependency category | nitpick | Organization issue |
## Common Dependency Patterns
### Removal Candidates
**High Confidence (Unused):**
- Found by depcheck/Knip
- Zero imports in codebase
- Not in ignored files (scripts, config)
- Not peer dependency of other packages
**Medium Confidence (Low Usage):**
- 1-2 imports total
- Used only for simple operations
- Easy to inline or replace
- Alternative is smaller/native
**Consider Alternatives:**
- Large package (>50kb) with light usage
- Deprecated/unmaintained package
- Duplicate functionality exists
- Native alternative available
### Size Reference (Approximate)
| Category | Examples | Typical Size |
| ------------------- | ----------------------------- | ------------ |
| Heavy date libs | moment | 70kb |
| Light date libs | dayjs, date-fns (tree-shaken) | 2-10kb |
| Heavy utilities | lodash (full) | 70kb |
| Light utilities | lodash-es (per function) | 1-5kb |
| HTTP clients | axios, node-fetch | 10-15kb |
| Native alternatives | fetch, Intl API | 0kb |
### Refactoring Patterns
**Replace large utility with inline:**
```typescript
// Before: lodash.debounce (71kb library)
import _ from 'lodash';
_.debounce(fn, 300);
// After: inline (0kb)
const debounce = (fn, ms) => {
let timeout;
return (...args) => {
clearTimeout(timeout);
timeout = setTimeout(() => fn(...args), ms);
};
};
```
**Replace with tree-shakeable alternative:**
```typescript
// Before: full library
import moment from 'moment';
moment(date).format('YYYY-MM-DD');
// After: specific function
import { format } from 'date-fns/format';
format(date, 'yyyy-MM-dd');
```
**Replace with native alternative:**
```typescript
// Before: lodash
import { isEmpty } from 'lodash';
isEmpty(obj);
// After: native
Object.keys(obj).length === 0;
```
## Analysis Priority
1. **Run automated scripts first** (if tools available)
- review-unused-deps.sh for unused packages
- review-unused-code.sh for dead code
- npm audit for security issues
2. **Parse script outputs** for package names and file locations
3. **Verify usage with grep** for each flagged package
- Count imports
- Check import patterns (default vs named)
- Identify usage locations
4. **Read package.json** to check:
- Version ranges
- Dependency categorization
- Duplicate functionality
5. **Cross-reference findings:**
- Unused package + large size = high priority
- Low usage + available alternative = medium priority
- Vulnerable package + unused = critical priority
## Integration Notes
- This skill provides detection methods and patterns only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates, bundle size savings calculations, or success criteria
- Do NOT provide refactoring instructions beyond pattern examples

309
skills/reviewing-duplication/SKILL.md Normal file

@@ -0,0 +1,309 @@
---
name: reviewing-duplication
description: Automated tooling and detection patterns for identifying duplicate and copy-pasted code in JavaScript/TypeScript projects. Provides tool commands and refactoring patterns—not workflow or output formatting.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Duplication Review Skill
## Purpose
This skill provides automated duplication detection commands and manual search patterns. Use this as a reference for WHAT to check and HOW to detect duplicates—not for output formatting or workflow.
## Automated Duplication Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-duplicates.sh
```
**Uses:** jsinspect (preferred) or Lizard fallback
**Returns:**
- Number of duplicate blocks
- File:line locations of each instance
- Similarity percentage
- Lines of duplicated code
**Example output:**
```
Match - 2 instances
src/components/UserForm.tsx:45-67
src/components/AdminForm.tsx:23-45
```
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis:
### Pattern 1: Configuration Objects
```bash
# Find similar object structures
grep -rn "const.*=.*{$" --include="*.ts" --include="*.tsx" <directory>
grep -rn "export.*{$" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Similar property names, parallel structures
### Pattern 2: Validation Logic
```bash
# Find repeated validation patterns
grep -rn "if.*length.*<.*return" --include="*.ts" --include="*.tsx" <directory>
grep -rn "if.*match.*test" --include="*.ts" --include="*.tsx" <directory>
grep -rn "throw.*Error" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Similar conditional checks, repeated error handling
### Pattern 3: Data Transformation
```bash
# Find similar transformation chains
grep -rn "\.map(" --include="*.ts" --include="*.tsx" <directory>
grep -rn "\.filter(" --include="*.ts" --include="*.tsx" <directory>
grep -rn "\.reduce(" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Similar method chains, repeated transformations
### Pattern 4: File Organization Clues
```bash
# Find files with similar names (likely duplicates)
find <directory> -type f \( -name "*.ts" -o -name "*.tsx" \) | sort
```
Look for: Parallel naming (UserForm/AdminForm), similar directory structures
### Pattern 5: Function Signatures
```bash
# Find similar function declarations
grep -rn "function.*{$" --include="*.ts" --include="*.tsx" <directory>
grep -rn "const.*=.*=>.*{$" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Matching parameter patterns, similar return types
## Duplication Type Classification
### Type 1: Exact Clones
**Characteristics:** Character-for-character identical
**Detection:** Automated tools catch these easily
**Example:**
```typescript
function validateEmail(email: string) {
return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```
Appears in multiple files without changes.
### Type 2: Renamed Clones
**Characteristics:** Same structure, different identifiers
**Detection:** Look for similar line counts and control flow
**Example:**
```typescript
function getUserById(id: number) {
  /* ... */
}
function getProductById(id: number) {
  /* ... */
}
```
### Type 3: Near-miss Clones
**Characteristics:** Similar with minor modifications
**Detection:** Manual comparison after automated flagging
**Example:**
```typescript
function processOrders() {
const items = getOrders();
items.forEach((item) => validate(item));
items.forEach((item) => transform(item));
return items;
}
function processUsers() {
const items = getUsers();
items.forEach((item) => validate(item));
items.forEach((item) => transform(item));
return items;
}
```
### Type 4: Semantic Clones
**Characteristics:** Different code, same behavior
**Detection:** Requires understanding business logic
**Example:** Two different implementations of same algorithm
## Refactoring Patterns
### Pattern 1: Extract Function
**When:** Exact duplicates, 3+ instances
**Example:**
```typescript
// Before (duplicated)
if (user.age < 18) return false;
if (user.verified !== true) return false;
if (user.active !== true) return false;
// After (extracted)
function isEligible(user) {
return user.age >= 18 && user.verified && user.active;
}
```
### Pattern 2: Extract Utility
**When:** Common operations repeated across files
**Example:**
```typescript
// Before (repeated in many files)
const formatted = date.toISOString().split('T')[0];
// After (utility)
function formatDate(date) {
return date.toISOString().split('T')[0];
}
```
### Pattern 3: Template Method
**When:** Similar processing flows with variations
**Example:**
```typescript
// Before (structural duplicates)
function processA() {
validate();
transformA();
persist();
}
function processB() {
validate();
transformB();
persist();
}
// After (template)
function process(transformer) {
validate();
transformer();
persist();
}
```
### Pattern 4: Parameterize Differences
**When:** Duplicates with single variation point
**Example:**
```typescript
// Before (duplicate with variation)
function getActiveUsers() {
return users.filter((u) => u.status === 'active');
}
function getInactiveUsers() {
return users.filter((u) => u.status === 'inactive');
}
// After (parameterized)
function getUsersByStatus(status) {
return users.filter((u) => u.status === status);
}
```
## Severity Mapping
| Pattern | Severity | Rationale |
| ----------------------------------- | -------- | --------------------------------------------- |
| Exact duplicates, 5+ instances | critical | High maintenance burden, bug propagation risk |
| Exact duplicates, 3-4 instances | high | Significant maintenance cost |
| Structural duplicates, 3+ instances | high | Refactoring opportunity with high value |
| Exact duplicates, 2 instances | medium | Moderate maintenance burden |
| Structural duplicates, 2 instances | medium | Consider refactoring if likely to grow |
| Near-miss clones, 2-3 instances | medium | Evaluate cost/benefit of extraction |
| Test code duplication | nitpick | Acceptable for test clarity |
| Configuration duplication | nitpick | May be intentional, evaluate case-by-case |
## When Duplication is Acceptable
**Test Code:**
- Test clarity preferred over DRY principle
- Explicit test cases easier to understand
- Fixtures can duplicate without issue
**Constants/Configuration:**
- Similar configs may be coincidental
- Premature abstraction creates coupling
- May evolve independently
**Prototypes/Experiments:**
- Early stage code, patterns unclear
- Wait for third instance before abstracting
**Different Domains:**
- Accidental similarity
- May diverge over time
- Wrong abstraction worse than duplication
## Red Flags (High Priority Indicators)
- Same bug appears in multiple locations
- Features require changes in N places
- Developers forget to update all copies
- Code review comments repeated across files
- Merge conflicts in similar code blocks
- Business logic duplicated across domains
## Analysis Priority
1. **Run automated duplication detection** (if tools available)
2. **Parse script output** for file:line references and instance counts
3. **Read flagged files** to understand context
4. **Classify duplication type** (exact, structural, near-miss, semantic)
5. **Count instances** (more instances = higher priority)
6. **Assess refactoring value:**
- Instance count (3+ = high priority)
- Likelihood of changing together
- Complexity of extraction
- Test vs production code
7. **Identify refactoring pattern** (extract function, utility, template, parameterize)
8. **Check for acceptable duplication** (tests, config, prototypes)
## Integration Notes
- This skill provides detection methods and refactoring patterns only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates or workflow instructions
- Focus on WHAT to detect and HOW to refactor, not report structure

287
skills/reviewing-security/SKILL.md Normal file

@@ -0,0 +1,287 @@
---
name: reviewing-security
description: Automated tooling and detection patterns for JavaScript/TypeScript security vulnerabilities. Provides scan commands, vulnerability patterns, and severity mapping—not output formatting or workflow.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Security Review Skill
## Purpose
This skill provides automated security scanning commands and vulnerability detection patterns. Use this as a reference for WHAT to check and HOW to detect security issues—not for output formatting or workflow.
## Automated Security Scan
Run Semgrep security analysis (if available):
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-security.sh
```
**Returns:** Security issues by severity, vulnerability types (XSS, injection, etc.), file:line locations, CWE/OWASP references
## Vulnerability Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep/Glob to detect:
### Input Validation Vulnerabilities
**XSS (Cross-Site Scripting):**
```bash
grep -rn "innerHTML.*=\|dangerouslySetInnerHTML\|document\.write" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx"
```
Look for: User input assigned to innerHTML, dangerouslySetInnerHTML usage, document.write with variables
**SQL Injection:**
```bash
grep -rn "query.*+\|query.*\${" --include="*.ts" --include="*.js"
```
Look for: String concatenation in SQL queries, template literals in queries without parameterization
**Command Injection:**
```bash
grep -rn "exec\|spawn\|execSync\|spawnSync" --include="*.ts" --include="*.js"
```
Look for: User input passed to exec/spawn, unsanitized command arguments
**Path Traversal:**
```bash
grep -rn "readFile.*req\|readFile.*params\|\.\./" --include="*.ts" --include="*.js"
```
Look for: File paths from user input, ../ in file operations
**Code Injection:**
```bash
grep -rn "eval\|new Function\|setTimeout.*string\|setInterval.*string" --include="*.ts" --include="*.js"
```
Look for: eval() usage, Function constructor, string-based setTimeout/setInterval
### Authentication & Authorization Issues
**Hardcoded Credentials:**
```bash
grep -rn "password\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
grep -rn "api_key\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
grep -rn "secret\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
grep -rn "token\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
```
Look for: Hardcoded passwords, API keys, secrets, tokens in source code
**Weak Authentication:**
```bash
grep -rn "password\.length\|minLength.*password" --include="*.ts" --include="*.js"
```
Look for: Weak password requirements (<8 chars), missing complexity checks
**Missing Authorization:**
```bash
grep -rn "router\.\(get\|post\|put\|delete\)" --include="*.ts" --include="*.js"
```
Look for: Routes without authentication middleware, missing role checks
**JWT Issues:**
```bash
grep -rn "jwt\.sign.*algorithm.*none\|jwt\.verify.*algorithms.*\[\]" --include="*.ts" --include="*.js"
```
Look for: JWT with "none" algorithm, missing algorithm verification
### Data Exposure Issues
**Sensitive Data in Logs:**
```bash
grep -rn "console\.log.*password\|console\.log.*token\|console\.log.*secret" --include="*.ts" --include="*.js"
```
Look for: Passwords, tokens, secrets in console.log statements
**Secrets in Environment Files:**
```bash
grep -rn "API_KEY\|SECRET\|PASSWORD\|TOKEN" .env .env.example
```
Look for: Actual secrets in .env files (should be in .env.example as placeholders only)
**Client-Side Secrets:**
```bash
grep -rn "process\.env\." --include="*.tsx" --include="*.jsx"
```
Look for: Environment variables accessed in client-side React components
**Verbose Error Messages:**
```bash
grep -rn "error\.stack\|error\.message.*res\.send\|throw.*Error.*password" --include="*.ts" --include="*.js"
```
Look for: Stack traces sent to client, error messages exposing system details
### Cryptography Issues
**Weak Algorithms:**
```bash
grep -rn "createHash.*md5\|createHash.*sha1\|crypto\.MD5\|crypto\.SHA1" --include="*.ts" --include="*.js"
```
Look for: MD5, SHA1 usage for security-sensitive operations
**Insecure Randomness:**
```bash
grep -rn "Math\.random" --include="*.ts" --include="*.js"
```
Look for: Math.random() for tokens, session IDs, cryptographic keys
**Hardcoded Encryption Keys:**
```bash
grep -rn "encrypt.*key.*=.*['\"]" --include="*.ts" --include="*.js"
```
Look for: Encryption keys hardcoded in source
**Improper Certificate Validation:**
```bash
grep -rn "rejectUnauthorized.*false\|NODE_TLS_REJECT_UNAUTHORIZED.*0" --include="*.ts" --include="*.js"
```
Look for: Disabled SSL/TLS certificate validation
### Dependency Vulnerabilities
**Check for Known Vulnerabilities:**
```bash
npm audit --json
# or
yarn audit --json
```
Look for: Packages with known CVEs, outdated dependencies with security patches
**Check Package Integrity:**
```bash
grep -rn "http://registry\|--ignore-scripts" package.json
```
Look for: Insecure registry URLs, disabled install scripts (security bypass)
## Severity Mapping
Use these criteria when classifying security findings:
| Vulnerability Type | Severity | Rationale |
| -------------------------------------------- | -------- | ------------------------------- |
| SQL injection | critical | Database compromise, data theft |
| Command injection | critical | Remote code execution |
| Hardcoded credentials in production code | critical | Unauthorized access |
| Authentication bypass | critical | Complete security failure |
| XSS with user data | high | Account takeover, data theft |
| Missing authentication on sensitive routes | high | Unauthorized access to data |
| Secrets in logs | high | Credential exposure |
| Weak cryptography (MD5/SHA1 for passwords) | high | Password cracking |
| Path traversal | high | Arbitrary file access |
| Missing authorization checks | medium | Privilege escalation risk |
| Insecure randomness (Math.random for tokens) | medium | Token prediction |
| Verbose error messages | medium | Information disclosure |
| Outdated dependencies with CVEs | medium | Known vulnerability exposure |
| Weak password requirements | medium | Brute force risk |
| Missing HTTPS enforcement | medium | Man-in-the-middle risk |
| Disabled certificate validation | medium | MITM attacks possible |
| Secrets in .env.example | nitpick | Best practice violation |
| console.log with non-sensitive data | nitpick | Production noise |
## Analysis Priority
1. **Run automated security scan first** (Semgrep if available)
2. **Parse scan outputs** for critical/high severity issues
3. **Check for hardcoded secrets** (grep patterns above)
4. **Audit authentication/authorization** in routes and middleware
5. **Inspect input validation** at API boundaries
6. **Review cryptography usage** for weak algorithms
7. **Check dependencies** for known vulnerabilities
8. **Cross-reference findings** (e.g., missing auth + XSS = higher priority)
If performing comprehensive Prisma code review covering security vulnerabilities and performance anti-patterns, use the reviewing-prisma-patterns skill from prisma-6 for systematic validation.
## Common Vulnerability Examples
### XSS Example
```typescript
// VULNERABLE
element.innerHTML = userInput;
<div dangerouslySetInnerHTML={{ __html: data }} />;
// SECURE
element.textContent = userInput;
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(data) }} />;
```
### SQL Injection Example
```typescript
// VULNERABLE
db.query("SELECT * FROM users WHERE id = " + userId);
db.query(`SELECT * FROM users WHERE email = '${email}'`);
// SECURE
db.query("SELECT * FROM users WHERE id = ?", [userId]);
db.query("SELECT * FROM users WHERE email = $1", [email]);
```
If reviewing Prisma 6 SQL injection prevention patterns, use the preventing-sql-injection skill from prisma-6 for $queryRaw guidance.
### Command Injection Example
```typescript
// VULNERABLE
exec(`ping ${userInput}`);
// SECURE
execFile('ping', [userInput]);
```
### Insecure Randomness Example
```typescript
// VULNERABLE
const sessionId = Math.random().toString(36);
// SECURE
const sessionId = crypto.randomBytes(32).toString('hex');
```
## Integration Notes
- This skill provides detection methods and severity mapping only
- Output formatting is handled by the calling agent
- Prioritize automated Semgrep scan results over manual inspection
- Manual patterns supplement automated findings
- All findings must map to specific file:line locations