Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:22:30 +08:00
commit 3f790fa86a
10 changed files with 2104 additions and 0 deletions

---
name: reviewing-code-quality
description: Automated tooling and detection patterns for JavaScript/TypeScript code quality review
---
# Code Quality Review Skill
## Purpose
This skill provides automated analysis commands and detection patterns for code quality issues. Use this as a reference for WHAT to check and HOW to detect issues—not for output formatting or workflow.
## Automated Analysis Tools
Run these scripts to gather metrics (if tools available):
### Linting Analysis
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-lint.sh
```
**Returns:** Error count, violations with file:line, auto-fix suggestions
### Type Safety Analysis
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-types.sh
```
**Returns:** Type errors, missing annotations, error locations
### Unused Code Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-unused-code.sh
```
**Returns:** Unused exports, unused dependencies, dead code
### TODO/FIXME Comments
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-todos.sh
```
**Returns:** Comment count by type, locations with context
### Debug Statements
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-debug-statements.sh
```
**Returns:** console.log/debugger statements with locations
### Large Files
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-large-files.sh
```
**Returns:** Files >500 lines sorted by size
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep/Glob to detect:
### Code Smells to Detect
**Long Functions:**
```bash
# Heuristic: track brace depth to report functions spanning >50 lines
awk '/function |=>/ && depth == 0 { start = NR }
     { depth += gsub(/\{/, "{") - gsub(/\}/, "}") }
     depth == 0 && start { if (NR - start > 50) print FILENAME ":" start; start = 0 }' <file>
```
Look for: Functions spanning >50 lines, multiple responsibilities
**Deep Nesting:**
```bash
# Find lines with >3 levels of indentation
grep -E "^[[:space:]]{12,}" <file>
```
Look for: Nesting depth >3, complex conditionals
**Missing Error Handling:**
```bash
grep -n "async\|await\|Promise\|\.then\|\.catch" <file>
```
Look for: Async operations without try-catch or .catch()
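As an illustrative sketch (the `fetchUser` helper and its behavior are hypothetical), the flagged shape and its remedy look like:

```typescript
// Hypothetical async helper used only for illustration.
async function fetchUser(id: number): Promise<{ name: string }> {
  if (id < 0) throw new Error('invalid id');
  return { name: `user-${id}` };
}

// Flagged: a bare call like fetchUser(id).then((u) => ...) leaves
// the rejection unhandled and can crash the process.

// Preferred: await inside try-catch with an explicit failure path.
async function loadUserName(id: number): Promise<string> {
  try {
    const user = await fetchUser(id);
    return user.name;
  } catch {
    return 'unknown';
  }
}
```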
**Poor Type Safety:**
```bash
grep -nE ": any|as any|@ts-ignore|@ts-expect-error" <file>
```
Look for: Type assertions, any usage, suppression comments
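A hedged before/after sketch (the `User` shape is a stand-in): replacing a flagged `as any` with `unknown` plus a type guard keeps the compiler involved.

```typescript
type User = { name: string };

// Type guard: narrows unknown to User at runtime.
function isUser(value: unknown): value is User {
  return typeof value === 'object' && value !== null &&
    typeof (value as { name?: unknown }).name === 'string';
}

// Flagged: `JSON.parse(raw) as any` compiles even with typos like `.nmae`.

// Preferred: treat parsed data as unknown and narrow before use.
function parseUser(raw: string): User | null {
  const value: unknown = JSON.parse(raw);
  return isUser(value) ? value : null;
}
```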
**Repeated Patterns:**
Use Read to identify duplicate logic blocks (>5 lines similar code)
**Poor Naming:**
Look for: Single-letter variables (except i, j in loops), unclear abbreviations, misleading names
## Severity Mapping
Use these criteria when classifying findings:
| Pattern | Severity | Rationale |
| ---------------------------------------- | -------- | ----------------------- |
| Type errors blocking compilation | critical | Prevents deployment |
| Missing error handling in critical paths | high | Production crashes |
| Unused exports in public API | high | Breaking changes needed |
| Large files (>500 LOC) | medium | Maintainability impact |
| TODO comments | medium | Incomplete work |
| Debug statements (console.log) | medium | Production noise |
| Deep nesting (>3 levels) | medium | Complexity issues |
| Long functions (>50 lines) | medium | Readability issues |
| Linting warnings | nitpick | Style consistency |
| Minor naming issues | nitpick | Clarity improvements |
## Analysis Priority
1. **Run automated scripts first** (if tools available)
2. **Parse script outputs** for file:line references
3. **Read flagged files** using Read tool
4. **Apply manual detection patterns** to flagged files
5. **Cross-reference findings** (e.g., large file + many TODOs = higher priority)
## Integration Notes
- This skill provides detection methods only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates or workflow instructions
## Related Skills
**Cross-Plugin References:**
- If reviewing Zod schema patterns, use the reviewing-patterns skill for detecting validation issues and schema anti-patterns
- Uses skills tagged with `review: true` including reviewing-vitest-config from vitest-4 for detecting deprecated patterns and Vitest 4.x migration issues

View File

@@ -0,0 +1,317 @@
---
name: reviewing-complexity
description: Analyze code complexity and maintainability including cyclomatic complexity, function length, nesting depth, and cognitive load. Use when reviewing code maintainability, refactoring candidates, or technical debt assessment.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Complexity Review Skill
## Purpose
Provides automated complexity analysis commands and manual detection patterns for identifying hard-to-maintain code. Use this as a reference for WHAT to check and HOW to detect complexity issues—not for output formatting or workflow.
## Automated Complexity Analysis
Run Lizard complexity analyzer:
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-complexity.sh
```
**Returns:**
- Functions with cyclomatic complexity >= 15
- NLOC (Non-comment Lines Of Code)
- CCN (Cyclomatic Complexity Number)
- Token count, parameter count, function length
- Format: `NLOC CCN Token Parameter Length Location`
**Example output:**
```
45 18 234 5 50 src/utils.ts:calculateTotal
```
## Complexity Metrics Reference
### Cyclomatic Complexity (CCN)
Counts independent paths through code based on decision points: if/else, switch, loops, ternary operators, logical operators (&&, ||)
**Thresholds:**
- 1-5: Simple, easy to test
- 6-10: Moderate, acceptable
- 11-15: Complex, consider refactoring
- 16+: High risk, refactor recommended
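As a worked illustration (hypothetical function), each decision point adds one to the base complexity of 1:

```typescript
// CCN = 6: base 1 + loop + if + && + else-if + ternary
function classifyScores(scores: number[]): string {
  let high = 0;
  for (const s of scores) {               // +1 (loop)
    if (s >= 90 && s <= 100) {            // +1 (if), +1 (&&)
      high++;
    } else if (s < 0) {                   // +1 (else-if)
      throw new Error('invalid score');
    }
  }
  return high > 0 ? 'has-high' : 'none';  // +1 (ternary)
}
```

By the thresholds above, CCN 6 lands in the "moderate, acceptable" band.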
### Function Length (NLOC)
Non-comment lines in a function.
**Thresholds:**
- 1-20: Good
- 21-50: Acceptable
- 51-100: Consider splitting
- 100+: Too long, refactor
### Parameter Count
**Thresholds:**
- 0-3: Good
- 4-5: Acceptable
- 6+: Too many, use object parameter
### Nesting Depth
Levels of indentation.
**Thresholds:**
- 1-2: Good
- 3: Acceptable
- 4+: Too deep, simplify
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep to detect:
### Multiple Responsibilities
```bash
# Count comment-section headers inside a function body (proxy for multiple responsibilities)
grep -A 50 "function \|const .*=.*=>" <file> | grep -c "^[[:space:]]*//"
```
Look for: Functions with validation + transformation + persistence + notification in one place
### Deep Nesting
```bash
# Find lines with >3 levels of indentation (12+ spaces)
grep -nE "^[[:space:]]{12,}" <file>
```
Look for: Nested if statements >3 levels deep
### Long Conditional Chains
```bash
# Find files with many else-if statements
grep -c "else if" <file>
```
Look for: Functions with >5 else-if branches
### High Parameter Count
```bash
# Find function declarations
grep -nE "function .*\([^)]*,[^)]*,[^)]*,[^)]*,[^)]*," <file>
```
Look for: Functions with >5 parameters
### Mixed Abstraction Levels
Use Read to identify functions that mix:
- High-level orchestration with low-level string manipulation
- Business logic with infrastructure concerns
- Domain logic with presentation logic
### Cognitive Load Indicators
**Magic Numbers:**
```bash
grep -nE "[^a-zA-Z_][0-9]{2,}[^a-zA-Z_]" <file>
```
Look for: Unexplained numeric literals
**Excessive Comments:**
```bash
# Count comment density
total_lines=$(wc -l < <file>)
comment_lines=$(grep -c "^[[:space:]]*//" <file>)
```
Look for: Comment ratio >20% (indicates unclear code)
**Side Effects:**
```bash
grep -n "this\.\|global\.\|window\.\|process\.env" <file>
```
Look for: Functions accessing external state
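A minimal sketch of the flagged shape versus the explicit alternative (names are illustrative):

```typescript
// Flagged shape: reads and mutates state outside its own scope.
const auditLog: string[] = [];
function applyDiscountImpure(price: number): number {
  auditLog.push(`discount applied to ${price}`); // hidden side effect
  return price * 0.9;
}

// Easier to review and test: all inputs and outputs are explicit.
function applyDiscount(price: number, rate: number): number {
  return price * (1 - rate);
}
```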
## Complexity Sources to Identify
When reviewing flagged functions, identify specific causes:
| Pattern | Detection Method | Example |
| ------------------------- | ------------------------------ | ----------------------------------------- |
| Multiple Responsibilities | Function does >1 distinct task | Validation + transformation + persistence |
| Deep Nesting | Indentation >3 levels | if > if > if > if |
| Long Conditional Chains | >5 else-if branches | type === 'A' \|\| type === 'B' \|\| ... |
| Mixed Abstraction Levels | High + low level code mixed | orchestration + string manipulation |
| Magic Numbers | Unexplained literals | if (status === 42) |
| Excessive Comments | Comment ratio >20% | Every line needs explanation |
| Side Effects | Modifies external state | Accesses globals, mutates inputs |
| High Parameter Count | >5 parameters | function(a, b, c, d, e, f) |
## Refactoring Patterns
Suggest these patterns based on complexity source:
### Extract Method
**When:** Function >50 lines or multiple responsibilities
**Pattern:**
```typescript
// Before: 40 lines doing validation + transformation + persistence
function process(data) {
  /* 40 lines */
}
// After: 3 focused functions
function process(data) {
validate(data);
const transformed = transform(data);
persist(transformed);
}
```
### Guard Clauses
**When:** Deep nesting >3 levels
**Pattern:**
```typescript
// Before: Nested ifs
if (valid) {
  if (ready) {
    if (allowed) {
      /* logic */
    }
  }
}
// After: Early returns
if (!valid) return;
if (!ready) return;
if (!allowed) return;
/* logic */
```
### Replace Conditional with Lookup
**When:** >5 else-if branches
**Pattern:**
```typescript
// Before: Long if-else chain
if (type === 'A') {
doA();
} else if (type === 'B') {
doB();
}
// After: Lookup table
const strategies = { A: doA, B: doB };
strategies[type]();
```
### Parameter Object
**When:** >5 parameters
**Pattern:**
```typescript
// Before: Many parameters
function create(name, email, age, address, phone, city) {}
// After: Object parameter
function create(userData: UserData) {}
```
### Extract Variable
**When:** Complex conditionals or magic numbers
**Pattern:**
```typescript
// Before: Unclear condition
if (user.age > 18 && user.status === 'active' && user.balance > 100) {
}
// After: Named boolean
const isEligibleUser = user.age > 18 && user.status === 'active' && user.balance > 100;
if (isEligibleUser) {
}
```
## Severity Mapping
Use these criteria when classifying findings:
| Metric | Severity | Rationale |
| ----------------- | -------- | -------------------------------- |
| CCN >= 25 | critical | Extremely high risk, untestable |
| CCN 20-24 | high | High risk, difficult to maintain |
| CCN 15-19 | high | Complex, refactor recommended |
| NLOC > 100 | high | Too long, hard to understand |
| Nesting depth > 4 | high | Hard to follow logic |
| CCN 11-14 | medium | Moderate complexity |
| NLOC 51-100 | medium | Consider splitting |
| Parameters > 5 | medium | Hard to use correctly |
| Nesting depth 4 | medium | Approaching complexity limit |
| CCN 6-10 | nitpick | Acceptable but monitor |
| NLOC 21-50 | nitpick | Acceptable length |
| Parameters 4-5 | nitpick | Consider object parameter |
## Red Flags
Watch for these warning signs when reviewing complex functions:
- **Needs comments to explain logic** - Code should be self-documenting
- **Hard to write unit tests** - High complexity makes testing difficult
- **Frequent source of bugs** - Check git history for bug fixes
- **Developers avoid modifying** - Ask team about "scary" functions
- **Takes >5 minutes to understand** - Cognitive load too high
- **Mixed abstraction levels** - Doing too many things
## Analysis Priority
1. **Run Lizard script first** (if available)
2. **Parse Lizard output** for functions with CCN >= 15
3. **Read flagged functions** using Read tool
4. **Identify complexity sources** using patterns above
5. **Apply manual detection patterns** if Lizard unavailable
6. **Cross-reference with git history** (frequent changes = high-risk complexity)
7. **Suggest specific refactoring patterns** based on complexity source
## Integration Notes
- This skill provides detection methods and refactoring patterns only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates (handled by agent if needed)
- Focus on identifying complexity, not prescribing workflow

---
name: reviewing-dependencies
description: Automated tooling and detection patterns for analyzing npm dependencies, unused packages, and dead code. Provides tool commands and what to look for—not how to structure output.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Dependencies Review Skill
## Purpose
This skill provides automated analysis commands and detection patterns for dependency issues. Use this as a reference for WHAT to check and HOW to detect issues—not for output formatting or workflow.
## Automated Analysis Tools
Run these scripts to gather metrics (if tools available):
### Unused Dependencies Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-unused-deps.sh
```
**Returns:** Unused dependencies, unused devDependencies, missing dependencies (imported but not in package.json)
### Unused Code Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-unused-code.sh
```
**Returns:** Unused exports, unused files, unused enum/class members, unused types/interfaces
### Security Audit
```bash
npm audit --json
npm audit --production --json
```
### Outdated Dependencies Detection
```bash
npm outdated
```
**Look for:**
- Available patch/minor/major version upgrades
- Deprecated dependencies
### Bundle Analysis (if available)
```bash
npm run build -- --analyze
```
**Returns:** Bundle size breakdown, largest chunks
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep/Glob to detect:
### Package.json Analysis
**Read package.json:**
```bash
cat package.json | jq '.dependencies, .devDependencies'
```
**Check for:**
- Version pinning strategy (^, ~, exact)
- Packages at latest/next tags
- Incorrect categorization (prod vs dev vs peer)
- Duplicate functionality patterns
### Usage Frequency Detection
**Count imports for specific package:**
```bash
grep -r "from ['\"]package-name['\"]" src/ | wc -l
grep -r "require(['\"]package-name['\"])" src/ | wc -l
```
**Find all import locations:**
```bash
grep -rn "from ['\"]package-name['\"]" src/
```
### Duplicate Functionality Detection
**Multiple date libraries:**
```bash
grep -E "moment|date-fns|dayjs|luxon" package.json
```
**Multiple HTTP clients:**
```bash
grep -E "axios|node-fetch|got|ky|superagent" package.json
```
**Multiple testing frameworks:**
```bash
grep -E "jest|mocha|jasmine|vitest" package.json
```
Uses skills tagged with `review: true` including reviewing-vitest-config from vitest-4 for detecting configuration deprecations and testing framework migration patterns.
**Multiple utility libraries:**
```bash
grep -E "lodash|underscore|ramda" package.json
```
### Tree-Shaking Opportunities
**Non-ES module imports:**
```bash
grep -r "import .* from 'lodash'" src/
grep -r "import _ from" src/
```
Look for: Default imports that could be named imports from ES module versions
**Large utility usage:**
```bash
grep -rn "from 'lodash'" src/ | head -20
```
Look for: Single function imports that could be inlined
### Dead Code Patterns
**Exported but never imported:**
```bash
# Find all exports
grep -rnE "export (const|function|class|interface|type)" src/
# For each export, check if imported elsewhere
grep -r "import.*{ExportName}" src/
```
**Unused utility files:**
```bash
# Find utility/helper files
find src/ -name "*util*" -o -name "*helper*"
# Check if imported
grep -r "from.*utils" src/
```
**Deprecated code markers:**
```bash
grep -rn "@deprecated\|DEPRECATED\|DO NOT USE" src/
```
## Severity Mapping
Use these criteria when classifying findings:
| Pattern | Severity | Rationale |
| ------------------------------------- | -------- | --------------------------- |
| Vulnerable dependency (critical/high) | critical | Security risk in production |
| Unused dependency >100kb | high | Significant bundle bloat |
| Multiple packages for same purpose | high | Maintenance overhead |
| Vulnerable dependency (moderate) | medium | Security risk, lower impact |
| Unused dependency 10-100kb | medium | Moderate bundle bloat |
| Unused devDependency | medium | Maintenance overhead |
| Single-use utility from large library | medium | Tree-shaking opportunity |
| Unused dependency <10kb | nitpick | Minimal impact |
| Loose version ranges (^, ~) | nitpick | Potential instability |
| Incorrect dependency category | nitpick | Organization issue |
## Common Dependency Patterns
### Removal Candidates
**High Confidence (Unused):**
- Found by depcheck/Knip
- Zero imports in codebase
- Not in ignored files (scripts, config)
- Not peer dependency of other packages
**Medium Confidence (Low Usage):**
- 1-2 imports total
- Used only for simple operations
- Easy to inline or replace
- Alternative is smaller/native
**Consider Alternatives:**
- Large package (>50kb) with light usage
- Deprecated/unmaintained package
- Duplicate functionality exists
- Native alternative available
### Size Reference (Approximate)
| Category | Examples | Typical Size |
| ------------------- | ----------------------------- | ------------ |
| Heavy date libs | moment | 70kb |
| Light date libs | dayjs, date-fns (tree-shaken) | 2-10kb |
| Heavy utilities | lodash (full) | 70kb |
| Light utilities | lodash-es (per function) | 1-5kb |
| HTTP clients | axios, node-fetch | 10-15kb |
| Native alternatives | fetch, Intl API | 0kb |
### Refactoring Patterns
**Replace large utility with inline:**
```typescript
// Before: lodash.debounce (71kb library)
import _ from 'lodash';
_.debounce(fn, 300);
// After: inline (0kb)
const debounce = (fn, ms) => {
let timeout;
return (...args) => {
clearTimeout(timeout);
timeout = setTimeout(() => fn(...args), ms);
};
};
```
**Replace with tree-shakeable alternative:**
```typescript
// Before: full library
import moment from 'moment';
moment(date).format('YYYY-MM-DD');
// After: specific function
import { format } from 'date-fns/format';
format(date, 'yyyy-MM-dd');
```
**Replace with native alternative:**
```typescript
// Before: lodash
import { isEmpty } from 'lodash';
isEmpty(obj);
// After: native
Object.keys(obj).length === 0;
```
## Analysis Priority
1. **Run automated scripts first** (if tools available)
- review-unused-deps.sh for unused packages
- review-unused-code.sh for dead code
- npm audit for security issues
2. **Parse script outputs** for package names and file locations
3. **Verify usage with grep** for each flagged package
- Count imports
- Check import patterns (default vs named)
- Identify usage locations
4. **Read package.json** to check:
- Version ranges
- Dependency categorization
- Duplicate functionality
5. **Cross-reference findings:**
- Unused package + large size = high priority
- Low usage + available alternative = medium priority
- Vulnerable package + unused = critical priority
## Integration Notes
- This skill provides detection methods and patterns only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates, bundle size savings calculations, or success criteria
- Do NOT provide refactoring instructions beyond pattern examples

---
name: reviewing-duplication
description: Automated tooling and detection patterns for identifying duplicate and copy-pasted code in JavaScript/TypeScript projects. Provides tool commands and refactoring patterns—not workflow or output formatting.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Duplication Review Skill
## Purpose
This skill provides automated duplication detection commands and manual search patterns. Use this as a reference for WHAT to check and HOW to detect duplicates—not for output formatting or workflow.
## Automated Duplication Detection
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-duplicates.sh
```
**Uses:** jsinspect (preferred) or Lizard fallback
**Returns:**
- Number of duplicate blocks
- File:line locations of each instance
- Similarity percentage
- Lines of duplicated code
**Example output:**
```
Match - 2 instances
src/components/UserForm.tsx:45-67
src/components/AdminForm.tsx:23-45
```
## Manual Detection Patterns
When automated tools unavailable or for deeper analysis:
### Pattern 1: Configuration Objects
```bash
# Find similar object structures
grep -rn "const.*=.*{$" --include="*.ts" --include="*.tsx" <directory>
grep -rn "export.*{$" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Similar property names, parallel structures
### Pattern 2: Validation Logic
```bash
# Find repeated validation patterns
grep -rn "if.*length.*<.*return" --include="*.ts" --include="*.tsx" <directory>
grep -rn "if.*match.*test" --include="*.ts" --include="*.tsx" <directory>
grep -rn "throw.*Error" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Similar conditional checks, repeated error handling
### Pattern 3: Data Transformation
```bash
# Find similar transformation chains
grep -rn "\.map(" --include="*.ts" --include="*.tsx" <directory>
grep -rn "\.filter(" --include="*.ts" --include="*.tsx" <directory>
grep -rn "\.reduce(" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Similar method chains, repeated transformations
### Pattern 4: File Organization Clues
```bash
# Find files with similar names (likely duplicates)
find <directory> -type f \( -name "*.ts" -o -name "*.tsx" \) | sort
```
Look for: Parallel naming (UserForm/AdminForm), similar directory structures
### Pattern 5: Function Signatures
```bash
# Find similar function declarations
grep -rn "function.*{$" --include="*.ts" --include="*.tsx" <directory>
grep -rn "const.*=.*=>.*{$" --include="*.ts" --include="*.tsx" <directory>
```
Look for: Matching parameter patterns, similar return types
## Duplication Type Classification
### Type 1: Exact Clones
**Characteristics:** Character-for-character identical
**Detection:** Automated tools catch these easily
**Example:**
```typescript
function validateEmail(email: string) {
return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```
Appears in multiple files without changes.
### Type 2: Renamed Clones
**Characteristics:** Same structure, different identifiers
**Detection:** Look for similar line counts and control flow
**Example:**
```typescript
function getUserById(id: number) {
  /* ... */
}
function getProductById(id: number) {
  /* ... */
}
```
### Type 3: Near-miss Clones
**Characteristics:** Similar with minor modifications
**Detection:** Manual comparison after automated flagging
**Example:**
```typescript
function processOrders() {
const items = getOrders();
items.forEach((item) => validate(item));
items.forEach((item) => transform(item));
return items;
}
function processUsers() {
const items = getUsers();
items.forEach((item) => validate(item));
items.forEach((item) => transform(item));
return items;
}
```
### Type 4: Semantic Clones
**Characteristics:** Different code, same behavior
**Detection:** Requires understanding business logic
**Example:** Two different implementations of same algorithm
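For instance (hypothetical pair), a loop and a reduce that compute the same total are semantic clones even though no lines match:

```typescript
// Implementation A: imperative accumulation
function totalLoop(prices: number[]): number {
  let sum = 0;
  for (const p of prices) sum += p;
  return sum;
}

// Implementation B: functional reduce — identical behavior
function totalReduce(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}
```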
## Refactoring Patterns
### Pattern 1: Extract Function
**When:** Exact duplicates, 3+ instances
**Example:**
```typescript
// Before (duplicated)
if (user.age < 18) return false;
if (user.verified !== true) return false;
if (user.active !== true) return false;
// After (extracted)
function isEligible(user) {
return user.age >= 18 && user.verified && user.active;
}
```
### Pattern 2: Extract Utility
**When:** Common operations repeated across files
**Example:**
```typescript
// Before (repeated in many files)
const formatted = date.toISOString().split('T')[0];
// After (utility)
function formatDate(date) {
return date.toISOString().split('T')[0];
}
```
### Pattern 3: Template Method
**When:** Similar processing flows with variations
**Example:**
```typescript
// Before (structural duplicates)
function processA() {
validate();
transformA();
persist();
}
function processB() {
validate();
transformB();
persist();
}
// After (template)
function process(transformer) {
validate();
transformer();
persist();
}
```
### Pattern 4: Parameterize Differences
**When:** Duplicates with single variation point
**Example:**
```typescript
// Before (duplicate with variation)
function getActiveUsers() {
return users.filter((u) => u.status === 'active');
}
function getInactiveUsers() {
return users.filter((u) => u.status === 'inactive');
}
// After (parameterized)
function getUsersByStatus(status) {
return users.filter((u) => u.status === status);
}
```
## Severity Mapping
| Pattern | Severity | Rationale |
| ----------------------------------- | -------- | --------------------------------------------- |
| Exact duplicates, 5+ instances | critical | High maintenance burden, bug propagation risk |
| Exact duplicates, 3-4 instances | high | Significant maintenance cost |
| Structural duplicates, 3+ instances | high | Refactoring opportunity with high value |
| Exact duplicates, 2 instances | medium | Moderate maintenance burden |
| Structural duplicates, 2 instances | medium | Consider refactoring if likely to grow |
| Near-miss clones, 2-3 instances | medium | Evaluate cost/benefit of extraction |
| Test code duplication | nitpick | Acceptable for test clarity |
| Configuration duplication | nitpick | May be intentional, evaluate case-by-case |
## When Duplication is Acceptable
**Test Code:**
- Test clarity preferred over DRY principle
- Explicit test cases easier to understand
- Fixtures can duplicate without issue
**Constants/Configuration:**
- Similar configs may be coincidental
- Premature abstraction creates coupling
- May evolve independently
**Prototypes/Experiments:**
- Early stage code, patterns unclear
- Wait for third instance before abstracting
**Different Domains:**
- Accidental similarity
- May diverge over time
- Wrong abstraction worse than duplication
## Red Flags (High Priority Indicators)
- Same bug appears in multiple locations
- Features require changes in N places
- Developers forget to update all copies
- Code review comments repeated across files
- Merge conflicts in similar code blocks
- Business logic duplicated across domains
## Analysis Priority
1. **Run automated duplication detection** (if tools available)
2. **Parse script output** for file:line references and instance counts
3. **Read flagged files** to understand context
4. **Classify duplication type** (exact, structural, near-miss, semantic)
5. **Count instances** (more instances = higher priority)
6. **Assess refactoring value:**
- Instance count (3+ = high priority)
- Likelihood of changing together
- Complexity of extraction
- Test vs production code
7. **Identify refactoring pattern** (extract function, utility, template, parameterize)
8. **Check for acceptable duplication** (tests, config, prototypes)
## Integration Notes
- This skill provides detection methods and refactoring patterns only
- Output formatting is handled by the calling agent
- Severity classification should align with agent's schema
- Do NOT include effort estimates or workflow instructions
- Focus on WHAT to detect and HOW to refactor, not report structure

---
name: reviewing-security
description: Automated tooling and detection patterns for JavaScript/TypeScript security vulnerabilities. Provides scan commands, vulnerability patterns, and severity mapping—not output formatting or workflow.
allowed-tools: Bash, Read, Grep, Glob
version: 1.0.0
---
# Security Review Skill
## Purpose
This skill provides automated security scanning commands and vulnerability detection patterns. Use this as a reference for WHAT to check and HOW to detect security issues—not for output formatting or workflow.
## Automated Security Scan
Run Semgrep security analysis (if available):
```bash
bash ~/.claude/plugins/marketplaces/claude-configs/review/scripts/review-security.sh
```
**Returns:** Security issues by severity, vulnerability types (XSS, injection, etc.), file:line locations, CWE/OWASP references
## Vulnerability Detection Patterns
When automated tools unavailable or for deeper analysis, use Read/Grep/Glob to detect:
### Input Validation Vulnerabilities
**XSS (Cross-Site Scripting):**
```bash
grep -rn "innerHTML.*=\|dangerouslySetInnerHTML\|document\.write" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx"
```
Look for: User input assigned to innerHTML, dangerouslySetInnerHTML usage, document.write with variables
**SQL Injection:**
```bash
grep -rn "query.*+\|query.*\${" --include="*.ts" --include="*.js"
```
Look for: String concatenation in SQL queries, template literals in queries without parameterization
**Command Injection:**
```bash
grep -rn "exec\|spawn\|execSync\|spawnSync" --include="*.ts" --include="*.js"
```
Look for: User input passed to exec/spawn, unsanitized command arguments
**Path Traversal:**
```bash
grep -rn "readFile.*req\|readFile.*params\|\.\./" --include="*.ts" --include="*.js"
```
Look for: File paths from user input, ../ in file operations
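A minimal sketch of the safe counterpart, assuming Node's `path` module and a fixed base directory (the function name and error message are illustrative):

```typescript
import * as path from 'path';

// Resolve against a fixed base and reject anything that escapes it.
function safeResolve(baseDir: string, userPath: string): string {
  const base = path.resolve(baseDir);
  const resolved = path.resolve(base, userPath);
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error('path traversal blocked');
  }
  return resolved;
}
```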
**Code Injection:**
```bash
grep -rn "eval\|new Function\|setTimeout.*string\|setInterval.*string" --include="*.ts" --include="*.js"
```
Look for: eval() usage, Function constructor, string-based setTimeout/setInterval
### Authentication & Authorization Issues
**Hardcoded Credentials:**
```bash
grep -rn "password\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
grep -rn "api_key\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
grep -rn "secret\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
grep -rn "token\s*=\s*['\"][^'\"]\+['\"]" --include="*.ts" --include="*.js"
```
Look for: Hardcoded passwords, API keys, secrets, tokens in source code
**Weak Authentication:**
```bash
grep -rn "password\.length\|minLength.*password" --include="*.ts" --include="*.js"
```
Look for: Weak password requirements (<8 chars), missing complexity checks
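One possible policy check mirroring the 8-character threshold above (the complexity rules beyond length are illustrative choices, not a standard):

```typescript
// Require minimum length plus lower/upper/digit classes.
function isStrongEnough(password: string): boolean {
  return (
    password.length >= 8 &&
    /[a-z]/.test(password) &&
    /[A-Z]/.test(password) &&
    /[0-9]/.test(password)
  );
}
```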
**Missing Authorization:**
```bash
grep -rn "router\.\(get\|post\|put\|delete\)" --include="*.ts" --include="*.js"
```
Look for: Routes without authentication middleware, missing role checks
**JWT Issues:**
```bash
grep -rn "jwt\.sign.*algorithm.*none\|jwt\.verify.*algorithms.*\[\]" --include="*.ts" --include="*.js"
```
Look for: JWT with "none" algorithm, missing algorithm verification
### Data Exposure Issues
**Sensitive Data in Logs:**
```bash
grep -rn "console\.log.*password\|console\.log.*token\|console\.log.*secret" --include="*.ts" --include="*.js"
```
Look for: Passwords, tokens, secrets in console.log statements
**Secrets in Environment Files:**
```bash
grep -rn "API_KEY\|SECRET\|PASSWORD\|TOKEN" .env .env.example
```
Look for: Actual secrets in .env files (should be in .env.example as placeholders only)
**Client-Side Secrets:**
```bash
grep -rn "process\.env\." --include="*.tsx" --include="*.jsx"
```
Look for: Environment variables accessed in client-side React components
**Verbose Error Messages:**
```bash
grep -rn "error\.stack\|error\.message.*res\.send\|throw.*Error.*password" --include="*.ts" --include="*.js"
```
Look for: Stack traces sent to client, error messages exposing system details
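A hedged sketch of the remedy: translate known, intentionally thrown errors into safe responses and hide everything else (the `ClientError` shape and `HttpError` class are illustrative):

```typescript
type ClientError = { status: number; message: string };

// Known error type carrying a message that is safe to show clients.
class HttpError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    Object.setPrototypeOf(this, HttpError.prototype); // keep instanceof reliable
  }
}

function toClientError(err: unknown): ClientError {
  if (err instanceof HttpError) return { status: err.status, message: err.message };
  // Unknown failure: never forward err.message or err.stack to the client.
  return { status: 500, message: 'Internal server error' };
}
```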
### Cryptography Issues
**Weak Algorithms:**
```bash
grep -rn "createHash.*md5\|createHash.*sha1\|crypto\.MD5\|crypto\.SHA1" --include="*.ts" --include="*.js"
```
Look for: MD5, SHA1 usage for security-sensitive operations
**Insecure Randomness:**
```bash
grep -rn "Math\.random" --include="*.ts" --include="*.js"
```
Look for: Math.random() for tokens, session IDs, cryptographic keys
**Hardcoded Encryption Keys:**
```bash
grep -rn "encrypt.*key.*=.*['\"]" --include="*.ts" --include="*.js"
```
Look for: Encryption keys hardcoded in source
**Improper Certificate Validation:**
```bash
grep -rn "rejectUnauthorized.*false\|NODE_TLS_REJECT_UNAUTHORIZED.*0" --include="*.ts" --include="*.js"
```
Look for: Disabled SSL/TLS certificate validation
### Dependency Vulnerabilities
**Check for Known Vulnerabilities:**
```bash
npm audit --json
# or
yarn audit --json
```
Look for: Packages with known CVEs, outdated dependencies with security patches
**Check Package Integrity:**
```bash
grep -rn "http://registry\|--ignore-scripts" package.json
```
Look for: Insecure registry URLs, disabled install scripts (security bypass)
## Severity Mapping
Use these criteria when classifying security findings:
| Vulnerability Type | Severity | Rationale |
| -------------------------------------------- | -------- | ------------------------------- |
| SQL injection | critical | Database compromise, data theft |
| Command injection | critical | Remote code execution |
| Hardcoded credentials in production code | critical | Unauthorized access |
| Authentication bypass | critical | Complete security failure |
| XSS with user data | high | Account takeover, data theft |
| Missing authentication on sensitive routes | high | Unauthorized access to data |
| Secrets in logs | high | Credential exposure |
| Weak cryptography (MD5/SHA1 for passwords) | high | Password cracking |
| Path traversal | high | Arbitrary file access |
| Missing authorization checks | medium | Privilege escalation risk |
| Insecure randomness (Math.random for tokens) | medium | Token prediction |
| Verbose error messages | medium | Information disclosure |
| Outdated dependencies with CVEs | medium | Known vulnerability exposure |
| Weak password requirements | medium | Brute force risk |
| Missing HTTPS enforcement | medium | Man-in-the-middle risk |
| Disabled certificate validation | medium | MITM attacks possible |
| Secrets in .env.example | nitpick | Best practice violation |
| console.log with non-sensitive data | nitpick | Production noise |
## Analysis Priority
1. **Run automated security scan first** (Semgrep if available)
2. **Parse scan outputs** for critical/high severity issues
3. **Check for hardcoded secrets** (grep patterns above)
4. **Audit authentication/authorization** in routes and middleware
5. **Inspect input validation** at API boundaries
6. **Review cryptography usage** for weak algorithms
7. **Check dependencies** for known vulnerabilities
8. **Cross-reference findings** (e.g., missing auth + XSS = higher priority)
If performing comprehensive Prisma code review covering security vulnerabilities and performance anti-patterns, use the reviewing-prisma-patterns skill from prisma-6 for systematic validation.
## Common Vulnerability Examples
### XSS Example
```typescript
// VULNERABLE
element.innerHTML = userInput;
<div dangerouslySetInnerHTML={{ __html: data }} />;
// SECURE
element.textContent = userInput;
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(data) }} />;
```
### SQL Injection Example
```typescript
// VULNERABLE
db.query("SELECT * FROM users WHERE id = " + userId);
db.query(`SELECT * FROM users WHERE email = '${email}'`);
// SECURE
db.query("SELECT * FROM users WHERE id = ?", [userId]);
db.query("SELECT * FROM users WHERE email = $1", [email]);
```
If reviewing Prisma 6 SQL injection prevention patterns, use the preventing-sql-injection skill from prisma-6 for $queryRaw guidance.
### Command Injection Example
```typescript
// VULNERABLE
exec(`ping ${userInput}`);
// SECURE
execFile('ping', [userInput]);
```
### Insecure Randomness Example
```typescript
// VULNERABLE
const sessionId = Math.random().toString(36);
// SECURE
const sessionId = crypto.randomBytes(32).toString('hex');
```
## Integration Notes
- This skill provides detection methods and severity mapping only
- Output formatting is handled by the calling agent
- Prioritize automated Semgrep scan results over manual inspection
- Manual patterns supplement automated findings
- All findings must map to specific file:line locations