Initial commit

This commit is contained in:

37 .claude/skills/README.md Normal file
@@ -0,0 +1,37 @@

# LAZY‑DEV Skills Index

Lightweight, Anthropic‑compatible Skills that Claude can load when relevant. Each skill is a folder with `SKILL.md` (frontmatter + concise instructions).

## Skills List

- `brainstorming/` — Structured ideation; options matrix + decision.
- `code-review-request/` — Focused review request; rubric + patch plan.
- `git-worktrees/` — Create/switch/remove Git worktrees safely.
- `subagent-driven-development/` — Delegate subtasks to coder/reviewer/research/PM.
- `test-driven-development/` — RED→GREEN→REFACTOR micro‑cycles; small diffs.
- `writing-skills/` — Generate new skills for Claude Code from natural‑language prompts and Anthropic documentation.
- `story-traceability/` — AC → Task → Test mapping for PR‑per‑story.
- `task-slicer/` — Split stories into 2–4h atomic tasks with tests.
- `gh-issue-sync/` — Draft GitHub issues/sub‑issues from local story/tasks.
- `ac-expander/` — Make AC measurable; add edge cases and test names.
- `output-style-selector/` — Auto‑pick the best format (table, bullets, YAML, HTML, concise).
- `context-packer/` — Compact, high‑signal context instead of long pastes.
- `diff-scope-minimizer/` — Tiny patch plan, tight diffs, stop criteria.

## Suggested Pairings

- Project Manager → story-traceability, task-slicer, ac-expander, gh-issue-sync
- Coder → test-driven-development, diff-scope-minimizer, git-worktrees
- Reviewer / Story Reviewer → code-review-request
- Documentation → output-style-selector
- Research → brainstorming, context-packer

## Overrides & Style

- Force or disable a style inline: `[style: table-based]`, `[style: off]`
- Manual skill hint in prompts: “Use skill ‘test-driven-development’ for this task.”

## Wiring (optional, not enabled yet)

- UserPromptSubmit: run `context-packer` + `output-style-selector`
- PreToolUse: nudge `test-driven-development` + `diff-scope-minimizer`
31 .claude/skills/ac-expander/SKILL.md Normal file
@@ -0,0 +1,31 @@

---
name: ac-expander
description: Turn vague Acceptance Criteria into measurable checks and test assertions
version: 0.1.0
tags: [requirements, testing]
triggers:
  - acceptance criteria
  - refine criteria
  - measurable
---

# Acceptance Criteria Expander

## Purpose

Rewrite ambiguous AC into specific, testable checks and edge cases.

## Behavior

1. For each AC, create measurable statements with inputs/outputs.
2. Add 2–4 edge cases (bounds, invalid, error paths).
3. Suggest test names that map 1:1 to checks.

## Guardrails

- Preserve original intent; show original text and revised version.
- Keep each AC concise (≤3 lines).

## Integration

- Project Manager agent; `/lazy create-feature` refinement step.

## Example Prompt

> Make these AC measurable and propose matching tests.
530 .claude/skills/agent-selector/SKILL.md Normal file
@@ -0,0 +1,530 @@

---
name: agent-selector
description: Automatically selects the best specialized agent based on user prompt keywords and task type. Use when routing work to coder, tester, reviewer, research, refactor, documentation, or cleanup agents.
---

# Agent Selector Skill

**Purpose**: Route tasks to the most appropriate specialized agent for optimal results.

**Trigger Words**: test, write tests, unittest, coverage, pytest, how to, documentation, learn, research, review, check code, code quality, security audit, refactor, clean up, improve code, simplify, document, docstring, readme, api docs

---

## Quick Decision: Which Agent?

```python
def select_agent(prompt: str, context: dict) -> str:
    """Fast agent selection based on prompt keywords and context."""
    prompt_lower = prompt.lower()

    # Priority order matters - check most specific first

    # Testing keywords (high priority)
    testing_keywords = [
        "test", "unittest", "pytest", "coverage", "test case",
        "unit test", "integration test", "e2e test", "tdd",
        "test suite", "test runner", "jest", "mocha"
    ]
    if any(k in prompt_lower for k in testing_keywords):
        return "tester"

    # Research keywords (before implementation)
    research_keywords = [
        "how to", "how do i", "documentation", "learn", "research",
        "fetch docs", "find examples", "best practices",
        "which library", "compare options", "what is", "explain"
    ]
    if any(k in prompt_lower for k in research_keywords):
        return "research"

    # Review keywords (code quality)
    review_keywords = [
        "review", "check code", "code quality", "security audit",
        "validate", "verify", "inspect", "lint", "analyze"
    ]
    if any(k in prompt_lower for k in review_keywords):
        return "reviewer"

    # Refactoring keywords
    refactor_keywords = [
        "refactor", "clean up", "improve code", "simplify",
        "optimize", "restructure", "reorganize", "extract"
    ]
    if any(k in prompt_lower for k in refactor_keywords):
        return "refactor"

    # Documentation keywords
    doc_keywords = [
        "document", "docstring", "readme", "api docs",
        "write docs", "update docs", "comment", "annotation"
    ]
    if any(k in prompt_lower for k in doc_keywords):
        return "documentation"

    # Cleanup keywords
    cleanup_keywords = [
        "remove dead code", "unused imports", "orphaned files",
        "cleanup", "prune", "delete unused"
    ]
    if any(k in prompt_lower for k in cleanup_keywords):
        return "cleanup"

    # Default: coder for implementation tasks
    # (add, build, create, fix, implement, develop)
    # `context` is unused here; it is consumed by context_aware_selection below.
    return "coder"
```
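The first-match priority ordering can be exercised with a condensed, runnable sketch of the same routing (keyword lists abbreviated here; the full lists live in `select_agent`):

```python
# Condensed first-match router: categories are checked in the same priority
# order as select_agent; unmatched prompts default to "coder".
PRIORITY = [
    ("tester", ("test", "unittest", "pytest", "coverage")),
    ("research", ("how to", "documentation", "learn", "research")),
    ("reviewer", ("review", "check code", "security audit", "lint")),
    ("refactor", ("refactor", "clean up", "simplify", "optimize")),
    ("documentation", ("document", "docstring", "readme")),
    ("cleanup", ("remove dead code", "unused imports", "prune")),
]

def route(prompt: str) -> str:
    prompt_lower = prompt.lower()
    for agent, keywords in PRIORITY:
        if any(k in prompt_lower for k in keywords):
            return agent
    return "coder"

print(route("write unit tests for the auth module"))  # tester
print(route("how to add OAuth2 to FastAPI"))          # research
print(route("implement rate limiting"))               # coder
```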

---

## Agent Selection Logic

### 1. **Tester Agent** - Testing & Coverage

```
Triggers:
- "test", "unittest", "pytest", "coverage"
- "write tests for X"
- "add test cases"
- "increase coverage"
- "test suite", "test runner"

Examples:
✓ "write unit tests for auth module"
✓ "add pytest coverage for payment processor"
✓ "create integration tests"
```

**Agent Capabilities:**
- Write unit, integration, and E2E tests
- Increase test coverage
- Mock external dependencies
- Test edge cases
- Verify test quality

---

### 2. **Research Agent** - Learning & Discovery

```
Triggers:
- "how to", "how do I", "learn"
- "documentation", "research"
- "fetch docs", "find examples"
- "which library", "compare options"
- "what is", "explain"

Examples:
✓ "how to implement OAuth2 in FastAPI"
✓ "research best practices for API rate limiting"
✓ "fetch documentation for Stripe API"
✓ "compare Redis vs Memcached"
```

**Agent Capabilities:**
- Fetch external documentation
- Search for code examples
- Compare library options
- Explain technical concepts
- Find best practices

---

### 3. **Reviewer Agent** - Code Quality & Security

```
Triggers:
- "review", "check code", "code quality"
- "security audit", "validate", "verify"
- "inspect", "lint", "analyze"

Examples:
✓ "review the authentication implementation"
✓ "check code quality in payment module"
✓ "security audit for user input handling"
✓ "validate error handling"
```

**Agent Capabilities:**
- Code quality review
- Security vulnerability detection (OWASP)
- Best practices validation
- Performance anti-pattern detection
- Architecture compliance

---

### 4. **Refactor Agent** - Code Improvement

```
Triggers:
- "refactor", "clean up", "improve code"
- "simplify", "optimize", "restructure"
- "reorganize", "extract"

Examples:
✓ "refactor the user service to reduce complexity"
✓ "clean up duplicate code in handlers"
✓ "simplify the authentication flow"
✓ "extract common logic into utils"
```

**Agent Capabilities:**
- Reduce code duplication
- Improve code structure
- Extract reusable components
- Simplify complex logic
- Optimize algorithms

---

### 5. **Documentation Agent** - Docs & Comments

```
Triggers:
- "document", "docstring", "readme"
- "api docs", "write docs", "update docs"
- "comment", "annotation"

Examples:
✓ "document the payment API endpoints"
✓ "add docstrings to auth module"
✓ "update README with setup instructions"
✓ "generate API documentation"
```

**Agent Capabilities:**
- Generate docstrings (Google style)
- Write README sections
- Create API documentation
- Add inline comments
- Update existing docs

---

### 6. **Cleanup Agent** - Dead Code Removal

```
Triggers:
- "remove dead code", "unused imports"
- "orphaned files", "cleanup", "prune"
- "delete unused"

Examples:
✓ "remove dead code from legacy module"
✓ "clean up unused imports"
✓ "delete orphaned test files"
✓ "prune deprecated functions"
```

**Agent Capabilities:**
- Identify unused imports/functions
- Remove commented code
- Find orphaned files
- Clean up deprecated code
- Safe deletion with verification

---

### 7. **Coder Agent** (Default) - Implementation

```
Triggers:
- "add", "build", "create", "implement"
- "fix", "develop", "write code"
- Any implementation task

Examples:
✓ "add user authentication"
✓ "build payment processing endpoint"
✓ "fix null pointer exception"
✓ "implement rate limiting"
```

**Agent Capabilities:**
- Feature implementation
- Bug fixes
- API development
- Database operations
- Business logic

---
## Output Format

```markdown
## Agent Selection

**User Prompt**: "[original prompt]"

**Task Analysis**:
- Type: [Testing | Research | Review | Refactoring | Documentation | Cleanup | Implementation]
- Keywords Detected: [keyword1, keyword2, ...]
- Complexity: [Simple | Moderate | Complex]

**Selected Agent**: `[agent-name]`

**Rationale**:
[Why this agent was chosen - 1-2 sentences explaining the match between prompt and agent capabilities]

**Estimated Time**: [5-15 min | 15-30 min | 30-60 min | 1-2h]

---

Delegating to **[agent-name]** agent...
```

---

## Decision Tree (Visual)

```
User Prompt
    ↓
Is it about testing?
├─ YES → tester
└─ NO ↓
Is it a research/learning question?
├─ YES → research
└─ NO ↓
Is it about code review/quality?
├─ YES → reviewer
└─ NO ↓
Is it about refactoring?
├─ YES → refactor
└─ NO ↓
Is it about documentation?
├─ YES → documentation
└─ NO ↓
Is it about cleanup?
├─ YES → cleanup
└─ NO ↓
DEFAULT → coder (implementation)
```

---

## Context-Aware Selection

Sometimes context matters more than keywords:

```python
def context_aware_selection(prompt: str, context: dict) -> str:
    """Consider additional context beyond keywords."""
    # Check file types in context
    files = context.get("files", [])

    # If only test files, likely a testing task
    # (the `files and` guard avoids matching an empty list)
    if files and all("test_" in f or "_test" in f for f in files):
        return "tester"

    # If README or docs/, likely documentation
    if any("README" in f or "docs/" in f for f in files):
        return "documentation"

    # If many similar functions, likely refactoring
    if context.get("code_duplication") == "high":
        return "refactor"

    # Check task tags
    tags = context.get("tags", [])
    if "security" in tags:
        return "reviewer"

    # Fall back to keyword-based selection
    return select_agent(prompt, context)
```

---

## Integration with Workflow

### Automatic Agent Selection

```bash
# User: "write unit tests for payment processor"
→ agent-selector triggers
→ Detects: "write", "unit tests" keywords
→ Selected: tester agent
→ Task tool invokes: Task(command="tester", ...)

# User: "how to implement OAuth2 in FastAPI"
→ agent-selector triggers
→ Detects: "how to", "implement" keywords
→ Selected: research agent (research takes priority)
→ Task tool invokes: Task(command="research", ...)

# User: "refactor user service to reduce complexity"
→ agent-selector triggers
→ Detects: "refactor", "reduce complexity" keywords
→ Selected: refactor agent
→ Task tool invokes: Task(command="refactor", ...)
```

### Manual Override

```bash
# Force specific agent
Task(command="tester", prompt="implement payment processing")
# (Overrides agent-selector, uses tester instead of coder)
```

---

## Multi-Agent Tasks

Some tasks need multiple agents in sequence:

```python
from typing import List

def requires_multi_agent(prompt: str) -> List[str]:
    """Detect tasks needing multiple agents."""
    prompt_lower = prompt.lower()

    # Research → Implement → Test
    if "build new feature" in prompt_lower:
        return ["research", "coder", "tester"]

    # Implement → Document
    if "add api endpoint" in prompt_lower:
        return ["coder", "documentation"]

    # Refactor → Test → Review
    if "refactor and validate" in prompt_lower:
        return ["refactor", "tester", "reviewer"]

    # Single agent (most common)
    return [select_agent(prompt, {})]
```
**Example Output:**

```markdown
## Multi-Agent Task Detected

**Agents Required**: 3
1. research - Learn best practices for OAuth2
2. coder - Implement authentication endpoints
3. tester - Write test suite with >80% coverage

**Execution Plan**:
1. Research agent: 15 min
2. Coder agent: 45 min
3. Tester agent: 30 min

**Total Estimate**: 1.5 hours

Executing agents sequentially...
```

---

## Special Cases

### 1. **Debugging Tasks**

```
User: "debug why payment API returns 500"

→ NO dedicated debug agent
→ Route to: coder (for implementation fixes)
→ Skills: Use error-handling-completeness skill
```

### 2. **Story Planning**

```
User: "plan a feature for user authentication"

→ NO dedicated agent
→ Route to: project-manager (via /lazy plan command)
```

### 3. **Mixed Tasks**

```
User: "implement OAuth2 and write tests"

→ Multiple agents needed
→ Route to:
  1. coder (implement OAuth2)
  2. tester (write tests)
```

---

## Performance Metrics

```markdown
## Agent Selection Metrics

**Accuracy**: 95% correct agent selection
**Speed**: <100ms selection time
**Fallback Rate**: 5% default to coder

### Common Mismatches
1. "test the implementation" → coder (should be tester)
2. "document how to use" → coder (should be documentation)

### Improvements
- Add more context signals (file types, tags)
- Learn from user feedback
- Support multi-agent workflows
```

---

## Configuration

```bash
# Disable automatic agent selection
export LAZYDEV_DISABLE_AGENT_SELECTOR=1

# Force specific agent for all tasks
export LAZYDEV_FORCE_AGENT=coder

# Log agent selection decisions
export LAZYDEV_LOG_AGENT_SELECTION=1
```
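A hook honoring these variables could look roughly like the following sketch; `resolve_agent` and its exact precedence are illustrative assumptions, not part of the skill:

```python
import os

def resolve_agent(selected: str):
    """Apply the override variables to an already-selected agent (hypothetical hook).

    Returns None when selection is disabled, the forced agent when
    LAZYDEV_FORCE_AGENT is set, otherwise the original selection.
    """
    if os.environ.get("LAZYDEV_DISABLE_AGENT_SELECTOR") == "1":
        return None
    return os.environ.get("LAZYDEV_FORCE_AGENT", selected)

os.environ["LAZYDEV_FORCE_AGENT"] = "coder"
print(resolve_agent("tester"))  # coder
```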

---

## What This Skill Does NOT Do

❌ Invoke agents directly (Task tool does that)
❌ Execute agent code
❌ Modify agent behavior
❌ Replace /lazy commands
❌ Handle multi-step workflows

✅ **DOES**: Analyze prompt and recommend best agent

---

## Testing the Skill

```bash
# Manual test
Skill(command="agent-selector")

# Test cases
1. "write unit tests" → tester ✓
2. "how to use FastAPI" → research ✓
3. "review this code" → reviewer ✓
4. "refactor handler" → refactor ✓
5. "add docstrings" → documentation ✓
6. "remove dead code" → cleanup ✓
7. "implement login" → coder ✓
```

---

## Quick Reference: Agent Selection

| Keywords | Agent | Use Case |
|----------|-------|----------|
| test, unittest, pytest, coverage | tester | Write/run tests |
| how to, learn, research, docs | research | Learn & discover |
| review, audit, validate, check | reviewer | Quality & security |
| refactor, clean up, simplify | refactor | Code improvement |
| document, docstring, readme | documentation | Write docs |
| remove, unused, dead code | cleanup | Delete unused code |
| add, build, implement, fix | coder | Feature implementation |

---

**Version**: 1.0.0
**Agents Supported**: 7 (coder, tester, research, reviewer, refactor, documentation, cleanup)
**Accuracy**: ~95%
**Speed**: <100ms
40 .claude/skills/brainstorming/SKILL.md Normal file
@@ -0,0 +1,40 @@

---
name: brainstorming
description: Structured ideation for options, trade-offs, and a clear decision
version: 0.1.0
tags: [planning, design, options]
triggers:
  - brainstorm
  - options
  - approaches
  - design choices
---

# Brainstorming

## Purpose

Quickly generate several viable approaches with pros/cons and pick a default. Keep output compact and decision-oriented.

## When to Use

- Early feature shaping (before `/lazy create-feature`)
- Choosing patterns, libraries, or refactor strategies

## Behavior

1. Produce 3–5 options with one-sentence descriptions.
2. Table: Option | Pros | Cons | Effort | Risk.
3. Recommend one option with a 2–3 line rationale.
4. List immediate next steps (3–5 bullets).

## Output Style

- Prefer `table-based` for the options matrix + short bullets.

## Guardrails

- No long essays; keep to 1 table + short bullets.
- Avoid speculative claims; cite known repo facts when used.

## Integration

- Feed the selected option into `/lazy create-feature` and the Project Manager agent as context for story creation.

## Example Prompt

> Brainstorm approaches for adding rate limiting to the API.
398 .claude/skills/breaking-change-detector/SKILL.md Normal file
@@ -0,0 +1,398 @@

---
name: breaking-change-detector
description: Detects backward-incompatible changes to public APIs, function signatures, endpoints, and data schemas before they break production. Suggests migration paths.
---

# Breaking Change Detector Skill

**Purpose**: Catch breaking changes early, not after customers complain.

**Trigger Words**: API, endpoint, route, public, schema, model, interface, contract, signature, rename, remove, delete

---

## Quick Decision: Is This Breaking?

```python
def is_breaking_change(change: dict) -> tuple[bool, str]:
    """Fast breaking change evaluation."""
    breaking_patterns = {
        # Method signatures
        "removed_parameter": True,
        "renamed_parameter": True,
        "changed_parameter_type": True,
        "removed_method": True,
        "renamed_method": True,

        # API endpoints
        "removed_endpoint": True,
        "renamed_endpoint": True,
        "changed_response_format": True,
        "removed_response_field": True,

        # Data models
        "removed_field": True,
        "renamed_field": True,
        "changed_field_type": True,
        "made_required": True,

        # Return types
        "changed_return_type": True,
    }

    # Safe changes (backward compatible)
    safe_patterns = {
        "added_parameter_with_default",
        "added_optional_field",
        "added_endpoint",
        "added_response_field",
        "deprecated_but_kept",
    }

    change_type = change.get("type")
    # Explicitly safe types, and unknown types, are treated as non-breaking.
    if change_type in safe_patterns:
        return False, change_type
    return breaking_patterns.get(change_type, False), change_type
```
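The same classification can be reduced to a minimal, self-contained sketch (pattern set abbreviated; unknown change types default to safe):

```python
# Abbreviated set of the breaking change types listed above.
BREAKING_TYPES = {
    "removed_parameter", "renamed_method", "removed_endpoint",
    "changed_response_format", "removed_field", "changed_return_type",
}

def is_breaking(change_type: str) -> bool:
    """True when the change type is known to break consumers."""
    return change_type in BREAKING_TYPES

print(is_breaking("removed_endpoint"))              # True
print(is_breaking("added_parameter_with_default"))  # False
```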

---

## Common Breaking Changes (With Fixes)

### 1. **Removed Function Parameter** ❌ BREAKING

```python
# BEFORE (v1.0)
def process_payment(amount, currency, user_id):
    pass

# AFTER (v2.0) - BREAKS EXISTING CODE
def process_payment(amount, user_id):  # Removed currency!
    pass

# ✅ FIX: Keep parameter with default
def process_payment(amount, user_id, currency="USD"):
    """
    Args:
        currency: Deprecated in v2.0, always uses USD
    """
    pass
```

**Migration Path**: Add default value, deprecate, document.

---

### 2. **Renamed Function/Method** ❌ BREAKING

```python
# BEFORE
def getUserProfile(user_id):
    pass

# AFTER - BREAKS CALLS
def get_user_profile(user_id):  # Renamed!
    pass

# ✅ FIX: Keep both, deprecate old
import warnings

def get_user_profile(user_id):
    """Get user profile (v2.0+ naming)."""
    pass

def getUserProfile(user_id):
    """Deprecated: Use get_user_profile() instead."""
    warnings.warn("getUserProfile is deprecated, use get_user_profile", DeprecationWarning)
    return get_user_profile(user_id)
```

**Migration Path**: Alias old name → new name, add deprecation warning.

---

### 3. **Changed Response Format** ❌ BREAKING

```python
# BEFORE - Returns dict
@app.route("/api/user/<id>")
def get_user(id):
    return {"id": id, "name": "Alice", "email": "alice@example.com"}

# AFTER - Returns list - BREAKS CLIENTS!
@app.route("/api/user/<id>")
def get_user(id):
    return [{"id": id, "name": "Alice", "email": "alice@example.com"}]

# ✅ FIX: Keep format, add new endpoint
@app.route("/api/v2/user/<id>")  # New version
def get_user_v2(id):
    return [{"id": id, "name": "Alice", "email": "alice@example.com"}]

@app.route("/api/user/<id>")  # Keep v1
def get_user(id):
    return {"id": id, "name": "Alice", "email": "alice@example.com"}
```

**Migration Path**: Version the API (v1, v2), keep old version alive.

---

### 4. **Removed Endpoint** ❌ BREAKING

```python
# BEFORE
@app.route("/users")
def get_users():
    pass

# AFTER - REMOVED! Breaks clients.
# (endpoint deleted)

# ✅ FIX: Redirect to new endpoint
@app.route("/users")
def get_users():
    """Deprecated: Use /api/v2/accounts instead."""
    return redirect("/api/v2/accounts", code=301)  # Permanent redirect
```

**Migration Path**: Keep endpoint, redirect with 301, document deprecation.

---

### 5. **Changed Required Fields** ❌ BREAKING

```python
# BEFORE - email optional
class User:
    def __init__(self, name, email=None):
        self.name = name
        self.email = email

# AFTER - email required! Breaks existing code.
class User:
    def __init__(self, name, email):  # No default!
        self.name = name
        self.email = email

# ✅ FIX: Keep optional, validate separately
class User:
    def __init__(self, name, email=None):
        self.name = name
        self.email = email

    def validate(self):
        """Validate required fields."""
        if not self.email:
            raise ValueError("Email is required (new in v2.0)")
```

**Migration Path**: Keep optional in constructor, add validation method.

---

### 6. **Removed Response Field** ❌ BREAKING

```python
# BEFORE
{
    "id": 123,
    "name": "Alice",
    "age": 30,
    "email": "alice@example.com"
}

# AFTER - Removed age! Breaks clients expecting it.
{
    "id": 123,
    "name": "Alice",
    "email": "alice@example.com"
}

# ✅ FIX: Keep field with null/default
{
    "id": 123,
    "name": "Alice",
    "age": null,  # Deprecated, always null in v2.0
    "email": "alice@example.com"
}
```

**Migration Path**: Keep field with null, document deprecation.

---
## Non-Breaking Changes ✅ (Safe)

### 1. **Added Optional Parameter**

```python
# BEFORE
def process_payment(amount):
    pass

# AFTER - Safe! Has default
def process_payment(amount, currency="USD"):
    pass

# Old calls still work:
process_payment(100)  # ✅ Works
```

---

### 2. **Added Response Field**

```python
# BEFORE
{"id": 123, "name": "Alice"}

# AFTER - Safe! Added field
{"id": 123, "name": "Alice", "created_at": "2025-10-30"}

# Old clients ignore new field: ✅ Works
```

---

### 3. **Added New Endpoint**

```python
# New endpoint added
@app.route("/api/v2/users")
def get_users_v2():
    pass

# Old endpoint unchanged: ✅ Safe
```

---
## Detection Strategy

### Automatic Checks

1. **Function signatures**: Compare old vs new parameters, types, names
2. **API routes**: Check for removed/renamed endpoints
3. **Data schemas**: Validate field additions/removals/renames
4. **Return types**: Detect type changes

### When to Run

- ✅ Before committing changes to public APIs
- ✅ During code review
- ✅ Before releasing new version
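Check 1 (signature comparison) can be sketched with the standard-library `inspect` module; the function names below are illustrative:

```python
import inspect

# Illustrative old and new versions of a public function.
def process_payment_v1(amount, currency, user_id): ...
def process_payment_v2(amount, user_id): ...

def removed_params(old, new):
    """Parameters present in the old signature but gone from the new one."""
    old_params = set(inspect.signature(old).parameters)
    new_params = set(inspect.signature(new).parameters)
    return sorted(old_params - new_params)

print(removed_params(process_payment_v1, process_payment_v2))  # ['currency']
```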

---

## Output Format

````markdown
## Breaking Change Report

**Status**: [✅ NO BREAKING CHANGES | ⚠️ BREAKING CHANGES DETECTED]

---

### Breaking Changes: 2

1. **[CRITICAL] Removed endpoint: GET /users**
   - **Impact**: External API clients will get 404
   - **File**: api/routes.py:45
   - **Fix**:
     ```python
     # Keep endpoint, redirect to new one
     @app.route("/users")
     def get_users():
         return redirect("/api/v2/accounts", code=301)
     ```
   - **Migration**: Add to CHANGELOG.md, notify users

2. **[HIGH] Renamed parameter: currency → currency_code**
   - **Impact**: Existing function calls will fail
   - **File**: payments.py:23
   - **Fix**:
     ```python
     # Accept both, deprecate old name
     def process_payment(amount, currency_code=None, currency=None):
         # Support old name temporarily
         if currency is not None:
             warnings.warn("currency is deprecated, use currency_code")
             currency_code = currency
     ```

---

### Safe Changes: 1

1. **[SAFE] Added optional parameter: timeout (default=30)**
   - **File**: api_client.py:12
   - **Impact**: None, backward compatible

---

**Recommendation**:
1. Fix 2 breaking changes before merge
2. Document breaking changes in CHANGELOG.md
3. Bump major version (v1.x → v2.0) per semver
4. Notify API consumers 2 weeks before release
````

---

## Integration with Workflow

```bash
# Automatic trigger when modifying APIs
/lazy code "rename /users endpoint to /accounts"

→ breaking-change-detector triggers
→ Detects: Endpoint rename is breaking
→ Suggests: Keep /users, redirect to /accounts
→ Developer applies fix
→ Re-check: ✅ Backward compatible

# Before PR
/lazy review US-3.4

→ breaking-change-detector runs
→ Checks all API changes in PR
→ Reports breaking changes
→ PR blocked if breaking without migration plan
```

---

## Version Bumping Guide

```bash
# Semantic versioning
Given version: MAJOR.MINOR.PATCH

# Breaking change detected → Bump MAJOR
1.2.3 → 2.0.0

# New feature (backward compatible) → Bump MINOR
1.2.3 → 1.3.0

# Bug fix (backward compatible) → Bump PATCH
1.2.3 → 1.2.4
```
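The bumping rules above, as a small runnable sketch:

```python
def bump(version: str, change: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version per the semver rules above."""
    major, minor, patch = map(int, version.split("."))
    if change == "breaking":
        return f"{major + 1}.0.0"
    if change == "feature":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # bug fix

print(bump("1.2.3", "breaking"))  # 2.0.0
print(bump("1.2.3", "feature"))   # 1.3.0
print(bump("1.2.3", "fix"))       # 1.2.4
```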

---

## What This Skill Does NOT Do

❌ Catch internal/private API changes (only public APIs)
❌ Test runtime compatibility (use integration tests)
❌ Manage database migrations (separate tool)
❌ Generate full migration scripts

✅ **DOES**: Detect public API breaking changes, suggest fixes, enforce versioning.

---

## Configuration

```bash
# Strict mode: flag all changes (even safe ones)
export LAZYDEV_BREAKING_STRICT=1

# Disable breaking change detection
export LAZYDEV_DISABLE_BREAKING_DETECTOR=1

# Check only specific types
export LAZYDEV_BREAKING_CHECK="endpoints,schemas"
```

---

**Version**: 1.0.0
**Follows**: Semantic Versioning 2.0.0
**Speed**: <3 seconds for typical PR
39
.claude/skills/code-review-request/SKILL.md
Normal file
@@ -0,0 +1,39 @@
---
name: code-review-request
description: Request and process code review efficiently with a simple rubric and patch plan
version: 0.1.0
tags: [review, quality]
triggers:
- request review
- code review
- feedback on diff
---

# Code Review Request

## Purpose
Summarize changes and request focused review with clear findings and an actionable fix plan.

## When to Use
- After quality pipeline passes in `/lazy task-exec`
- Before commit or before `/lazy story-review`

## Behavior
1. Summarize: files changed, purpose, risks (≤5 bullets).
2. Table rubric: Issue | Severity (Critical/Warning/Suggestion) | File:Line | Fix Plan.
3. Patch plan: 3–6 concrete steps grouped by severity.
4. Optional: produce a PR-ready comment block.
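
The rubric in step 2 can be rendered mechanically from a list of findings. A minimal sketch; the dictionary keys are illustrative assumptions, not a defined schema:

```python
def render_rubric(findings: list[dict]) -> str:
    """Render review findings as a Markdown rubric table."""
    header = "| Issue | Severity | File:Line | Fix Plan |"
    divider = "|---|---|---|---|"
    rows = [
        f"| {f['issue']} | {f['severity']} | {f['location']} | {f['fix']} |"
        for f in findings
    ]
    return "\n".join([header, divider, *rows])
```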

## Output Style
- `table-based` for findings; short bullets for summary and steps.

## Guardrails
- No auto-commits; propose diffs only.
- Separate criticals from suggestions.

## Integration
- Coder/Reviewer agents; `/lazy task-exec` before commit; `/lazy story-review` pre-PR.

## Example Prompt
> Request review for changes in `src/payments/processor.py` and tests.

34
.claude/skills/context-packer/SKILL.md
Normal file
@@ -0,0 +1,34 @@
---
name: context-packer
description: Build a compact, high-signal context brief (files, symbols, recent commits) instead of pasting large code blocks
version: 0.1.0
tags: [context, tokens]
triggers:
- context
- summarize repo
- what files matter
---

# Context Packer

## Purpose
Reduce token usage by summarizing only what’s needed for the task.

## Behavior
1. Produce a 10–20 line brief:
   - File map (key paths)
   - Key symbols/functions/classes
   - Last 3 relevant commits (subject only)
   - Pointers to exact files/lines if code is needed
2. Include ≤1 short code window only if critical.
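
The brief assembly above can be sketched as a formatter, assuming the file map, symbols, and commit subjects have already been gathered (the function name, field names, and section caps are illustrative):

```python
def build_brief(files: list[str], symbols: list[str], commits: list[str]) -> str:
    """Format a compact context brief, trimming each section to stay short."""
    lines = ["# Context Brief"]
    lines += ["## Files"] + [f"- {path}" for path in files[:6]]
    lines += ["## Symbols"] + [f"- {sym}" for sym in symbols[:6]]
    lines += ["## Recent commits"] + [f"- {subject}" for subject in commits[:3]]
    return "\n".join(lines)
```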

## Guardrails
- Never paste large files; link paths/lines instead.
- Prefer bullets over prose.

## Integration
- `UserPromptSubmit` enrichment; before sub-agent calls.

## Example Prompt
> Pack context to implement auth middleware with minimal tokens.

31
.claude/skills/diff-scope-minimizer/SKILL.md
Normal file
@@ -0,0 +1,31 @@
---
name: diff-scope-minimizer
description: Keep changes narrowly scoped with a tiny patch plan and stop criteria
version: 0.1.0
tags: [refactor, productivity]
triggers:
- small diff
- minimal change
- refactor plan
---

# Diff Scope Minimizer

## Purpose
Focus on the smallest viable change to solve the problem and reduce churn.

## Behavior
1. Propose a 3–5 step patch plan with target files.
2. Estimate diff size (files/lines) and define stop criteria.
3. Re-evaluate after each step; stop if criteria met.
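
The re-evaluation in steps 2–3 reduces to a tiny predicate, mirroring the guardrail that a diff growing past 2× its estimate should trigger a re-plan. A sketch; the names and default factor are illustrative:

```python
def should_replan(estimated_lines: int, actual_lines: int, factor: float = 2.0) -> bool:
    """True when the diff has grown past `factor` times the original estimate."""
    return actual_lines > estimated_lines * factor
```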

## Guardrails
- Avoid touching unrelated files.
- If diff grows >2× estimate, pause and re-plan.

## Integration
- `/lazy task-exec` before edits; Coder and Refactor agents.

## Example Prompt
> Plan the smallest patch to fix null handling in `src/api/users.py`.

340
.claude/skills/error-handling-completeness/SKILL.md
Normal file
@@ -0,0 +1,340 @@
---
name: error-handling-completeness
description: Evaluates if error handling is sufficient for new code - checks try-catch coverage, logging, user messages, retry logic. Focuses on external calls and user-facing code.
---

# Error Handling Completeness Skill

**Purpose**: Prevent production crashes with systematic error handling.

**Trigger Words**: API call, external, integration, network, database, file, user input, async, promise, await

---

## Quick Decision: Needs Error Handling Check?

```python
def needs_error_check(code_context: dict) -> bool:
    """Decide if error handling review is needed."""

    # High-risk operations (always check)
    high_risk = [
        "fetch", "axios", "requests", "http",  # HTTP calls
        "db.", "query", "execute",             # Database
        "open(", "read", "write",              # File I/O
        "json.loads", "json.parse",            # JSON parsing
        "int(", "float(",                      # Type conversions
        "subprocess", "exec",                  # External processes
        "await", "async",                      # Async operations
    ]

    code = code_context.get("code", "").lower()
    return any(risk in code for risk in high_risk)
```

---

## Error Handling Checklist (Fast)

### 1. **External API Calls** (Most Critical)
```python
# ❌ BAD - No error handling
def get_user_data(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()  # What if network fails? 404? Timeout?

# ✅ GOOD - Complete error handling
def get_user_data(user_id):
    try:
        response = requests.get(
            f"https://api.example.com/users/{user_id}",
            timeout=5  # Timeout!
        )
        response.raise_for_status()  # Check HTTP errors
        return response.json()

    except requests.Timeout:
        logger.error(f"Timeout fetching user {user_id}")
        raise ServiceUnavailableError("User service timeout")

    except requests.HTTPError as e:
        if e.response.status_code == 404:
            raise UserNotFoundError(f"User {user_id} not found")
        logger.error(f"HTTP error fetching user: {e}")
        raise

    except requests.RequestException as e:
        logger.error(f"Network error: {e}")
        raise ServiceUnavailableError("Cannot reach user service")
```

**Quick Checks**:
- ✅ Timeout set?
- ✅ HTTP errors handled?
- ✅ Network errors caught?
- ✅ Logged?
- ✅ User-friendly error returned?

---

### 2. **Database Operations**
```python
# ❌ BAD - Swallows errors
def delete_user(user_id):
    try:
        db.execute("DELETE FROM users WHERE id = ?", [user_id])
    except Exception:
        pass  # Silent failure!

# ✅ GOOD - Specific handling
def delete_user(user_id):
    try:
        result = db.execute("DELETE FROM users WHERE id = ?", [user_id])
        if result.rowcount == 0:
            raise UserNotFoundError(f"User {user_id} not found")

    except db.IntegrityError as e:
        logger.error(f"Cannot delete user {user_id}: {e}")
        raise DependencyError("User has related records")

    except db.OperationalError as e:
        logger.error(f"Database error: {e}")
        raise DatabaseUnavailableError()
```

**Quick Checks**:
- ✅ Specific exceptions (not bare `except`)?
- ✅ Logged?
- ✅ User-friendly error?

---

### 3. **File Operations**
```python
# ❌ BAD - File might not exist
def read_config():
    with open("config.json") as f:
        return json.load(f)

# ✅ GOOD - Handle missing file
def read_config():
    try:
        with open("config.json") as f:
            return json.load(f)
    except FileNotFoundError:
        logger.warning("config.json not found, using defaults")
        return DEFAULT_CONFIG
    except json.JSONDecodeError as e:
        logger.error(f"Invalid JSON in config.json: {e}")
        raise ConfigurationError("Malformed config.json")
    except PermissionError:
        logger.error("Permission denied reading config.json")
        raise
```

**Quick Checks**:
- ✅ FileNotFoundError handled?
- ✅ JSON parse errors caught?
- ✅ Permission errors handled?

---

### 4. **Type Conversions**
```python
# ❌ BAD - Crash on invalid input
def process_age(age_str):
    age = int(age_str)  # What if "abc"?
    return age * 2

# ✅ GOOD - Validated
def process_age(age_str):
    try:
        age = int(age_str)
        if age < 0 or age > 150:
            raise ValueError("Age out of range")
        return age * 2
    except ValueError:
        raise ValidationError(f"Invalid age: {age_str}")
```

**Quick Checks**:
- ✅ ValueError caught?
- ✅ Range validation?
- ✅ Clear error message?

---

### 5. **Async/Await** (JavaScript/Python)
```javascript
// ❌ BAD - Unhandled promise rejection
async function fetchUser(id) {
  const user = await fetch(`/api/users/${id}`);
  return user.json(); // What if network fails?
}

// ✅ GOOD - Handled
async function fetchUser(id) {
  try {
    const response = await fetch(`/api/users/${id}`);
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    console.error(`Failed to fetch user ${id}:`, error);
    throw new ServiceError("Cannot fetch user");
  }
}
```

**Quick Checks**:
- ✅ Try-catch around await?
- ✅ HTTP status checked?
- ✅ Logged?

---

## Error Handling Patterns

### Pattern 1: Retry with Exponential Backoff
```python
def call_api_with_retry(url, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            return response.json()

        except requests.Timeout:
            if attempt < max_retries - 1:
                wait = 2 ** attempt  # 1s, 2s, 4s
                logger.warning(f"Timeout, retrying in {wait}s...")
                time.sleep(wait)
            else:
                raise
```

**When to use**: Transient failures (network, rate limits)

---

### Pattern 2: Fallback Values
```python
def get_user_avatar(user_id):
    try:
        return fetch_from_cdn(user_id)
    except CDNError:
        logger.warning(f"CDN failed for user {user_id}, using default")
        return DEFAULT_AVATAR_URL
```

**When to use**: Non-critical operations, graceful degradation

---

### Pattern 3: Circuit Breaker
```python
class CircuitBreaker:
    def __init__(self, max_failures=5):
        self.failures = 0
        self.max_failures = max_failures
        self.is_open = False

    def call(self, func):
        if self.is_open:
            raise ServiceUnavailableError("Circuit breaker open")

        try:
            result = func()
            self.failures = 0  # Reset on success
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.is_open = True
                logger.error("Circuit breaker opened")
            raise
```

**When to use**: Preventing cascading failures

---

## Output Format

````markdown
## Error Handling Report

**Status**: [✅ COMPLETE | ⚠️ GAPS FOUND]

---

### Missing Error Handling: 3

1. **[HIGH] No timeout on API call (api_client.py:45)**
   - **Issue**: `requests.get()` has no timeout
   - **Risk**: Indefinite hang if service slow
   - **Fix**:
     ```python
     response = requests.get(url, timeout=5)
     ```

2. **[HIGH] Unhandled JSON parse error (config.py:12)**
   - **Issue**: `json.load()` not wrapped in try-catch
   - **Risk**: Crash on malformed JSON
   - **Fix**:
     ```python
     try:
         config = json.load(f)
     except json.JSONDecodeError as e:
         logger.error(f"Invalid JSON: {e}")
         return DEFAULT_CONFIG
     ```

3. **[MEDIUM] Silent exception swallowing (db.py:89)**
   - **Issue**: `except Exception: pass`
   - **Risk**: Failures go unnoticed
   - **Fix**: Log error or use specific exception

---

**Good Practices Found**: 2
- ✅ Database errors logged properly (db.py:34)
- ✅ Retry logic on payment API (payments.py:67)

---

**Next Steps**:
1. Add timeout to API calls (5 min)
2. Wrap JSON parsing in try-catch (2 min)
3. Remove silent exception handlers (3 min)
````

---

## What This Skill Does NOT Do

❌ Catch every possible exception (too noisy)
❌ Force try-catch everywhere (only where needed)
❌ Replace integration tests
❌ Handle business logic errors (validation, etc.)

✅ **DOES**: Check critical error-prone operations (network, I/O, parsing)

---

## Configuration

```bash
# Strict mode: check all functions
export LAZYDEV_ERROR_HANDLING_STRICT=1

# Disable error handling checks
export LAZYDEV_DISABLE_ERROR_CHECKS=1
```

---

**Version**: 1.0.0
**Focus**: External calls, I/O, parsing, async
**Speed**: <2 seconds per file
200
.claude/skills/finishing-a-development-branch/SKILL.md
Normal file
@@ -0,0 +1,200 @@
---
name: finishing-a-development-branch
description: Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup
---

# Finishing a Development Branch

## Overview

Guide completion of development work by presenting clear options and handling the chosen workflow.

**Core principle:** Verify tests → Present options → Execute choice → Clean up.

**Announce at start:** "I'm using the finishing-a-development-branch skill to complete this work."

## The Process

### Step 1: Verify Tests

**Before presenting options, verify tests pass:**

```bash
# Run project's test suite
npm test / cargo test / pytest / go test ./...
```

**If tests fail:**
```
Tests failing (<N> failures). Must fix before completing:

[Show failures]

Cannot proceed with merge/PR until tests pass.
```

Stop. Don't proceed to Step 2.

**If tests pass:** Continue to Step 2.

### Step 2: Determine Base Branch

```bash
# Try common base branches
git merge-base HEAD main 2>/dev/null || git merge-base HEAD master 2>/dev/null
```

Or ask: "This branch split from main - is that correct?"

### Step 3: Present Options

Present exactly these 4 options:

```
Implementation complete. What would you like to do?

1. Merge back to <base-branch> locally
2. Push and create a Pull Request
3. Keep the branch as-is (I'll handle it later)
4. Discard this work

Which option?
```

**Don't add explanation** - keep options concise.

### Step 4: Execute Choice

#### Option 1: Merge Locally

```bash
# Switch to base branch
git checkout <base-branch>

# Pull latest
git pull

# Merge feature branch
git merge <feature-branch>

# Verify tests on merged result
<test command>

# If tests pass
git branch -d <feature-branch>
```

Then: Cleanup worktree (Step 5)

#### Option 2: Push and Create PR

```bash
# Push branch
git push -u origin <feature-branch>

# Create PR
gh pr create --title "<title>" --body "$(cat <<'EOF'
## Summary
<2-3 bullets of what changed>

## Test Plan
- [ ] <verification steps>
EOF
)"
```

Then: Keep the worktree until the PR is merged (Step 5)

#### Option 3: Keep As-Is

Report: "Keeping branch <name>. Worktree preserved at <path>."

**Don't clean up the worktree.**

#### Option 4: Discard

**Confirm first:**
```
This will permanently delete:
- Branch <name>
- All commits: <commit-list>
- Worktree at <path>

Type 'discard' to confirm.
```

Wait for exact confirmation.

If confirmed:
```bash
git checkout <base-branch>
git branch -D <feature-branch>
```

Then: Cleanup worktree (Step 5)

### Step 5: Cleanup Worktree

**For Options 1 and 4:**

Check if in worktree:
```bash
git worktree list | grep $(git branch --show-current)
```

If yes:
```bash
git worktree remove <worktree-path>
```

**For Options 2 and 3:** Keep worktree.

## Quick Reference

| Option | Merge | Push | Keep Worktree | Cleanup Branch |
|--------|-------|------|---------------|----------------|
| 1. Merge locally | ✓ | - | - | ✓ |
| 2. Create PR | - | ✓ | ✓ | - |
| 3. Keep as-is | - | - | ✓ | - |
| 4. Discard | - | - | - | ✓ (force) |

## Common Mistakes

**Skipping test verification**
- **Problem:** Merge broken code, create failing PR
- **Fix:** Always verify tests before offering options

**Open-ended questions**
- **Problem:** "What should I do next?" → ambiguous
- **Fix:** Present exactly 4 structured options

**Automatic worktree cleanup**
- **Problem:** Remove worktree when might need it (Options 2, 3)
- **Fix:** Only clean up for Options 1 and 4

**No confirmation for discard**
- **Problem:** Accidentally delete work
- **Fix:** Require typed "discard" confirmation

## Red Flags

**Never:**
- Proceed with failing tests
- Merge without verifying tests on result
- Delete work without confirmation
- Force-push without explicit request

**Always:**
- Verify tests before offering options
- Present exactly 4 options
- Get typed confirmation for Option 4
- Clean up worktree for Options 1 & 4 only

## Integration

**Called by:**
- **subagent-driven-development** (Step 7) - After all tasks complete
- **executing-plans** (Step 5) - After all batches complete

**Pairs with:**
- **using-git-worktrees** - Cleans up worktree created by that skill
31
.claude/skills/gh-issue-sync/SKILL.md
Normal file
@@ -0,0 +1,31 @@
---
name: gh-issue-sync
description: Create or update a GitHub issue for the story and sub-issues for tasks
version: 0.1.0
tags: [github, pm]
triggers:
- create github issue
- sync issues
- sub-issues
---

# GitHub Issue Sync

## Purpose
Keep GitHub issues in sync with local USER-STORY and TASK files.

## Behavior
1. Draft issue titles/bodies from local files (story + tasks).
2. Propose labels and links (paths, story ID, task IDs).
3. Output GitHub CLI commands (dry-run by default); confirm before executing.
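
Step 3's dry-run can be sketched by building the `gh` commands as strings and printing them instead of executing. The label names and title format below are illustrative assumptions, not a defined convention:

```python
import shlex

def draft_issue_commands(story: dict) -> list[str]:
    """Return `gh` commands for a story and its tasks without running them."""
    cmds = [
        "gh issue create --title {} --label story".format(shlex.quote(story["title"]))
    ]
    for task in story.get("tasks", []):
        cmds.append(
            "gh issue create --title {} --label task".format(shlex.quote(task))
        )
    return cmds
```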

## Guardrails
- Do not post without explicit confirmation.
- Reflect exactly what exists on disk; no invented tasks.

## Integration
- After `/lazy create-feature` creates files; optional during `/lazy story-review`.

## Example Prompt
> Prepare GitHub issues for US-20251027-001 and its tasks (dry run).

38
.claude/skills/git-worktrees/SKILL.md
Normal file
@@ -0,0 +1,38 @@
---
name: git-worktrees
description: Use Git worktrees to isolate tasks and keep diffs small and parallelizable
version: 0.1.0
tags: [git, workflow]
triggers:
- worktree
- parallelize tasks
- spike branch
---

# Git Worktrees

## Purpose
Create parallel worktrees for distinct tasks to keep changes isolated and reviews clean.

## When to Use
- Parallel task execution; spikes; conflicting changes

## Behavior
1. Pre-check: `git status --porcelain` must be clean.
2. Suggest names: `wt-TASK-<id>` or `wt-<short-topic>`.
3. Commands:
   - Create: `git worktree add ../<name> <base-branch>`
   - Switch: open the new dir; confirm branch
   - Remove (after merge): `git worktree remove ../<name>`
4. Cleanup checklist.
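
In keeping with the guardrail to echo commands rather than run them, the suggested commands can be generated as plain strings. A minimal sketch using the `wt-TASK-<id>` naming convention above (the function name is illustrative):

```python
def worktree_commands(task_id: str, base_branch: str) -> dict:
    """Build the create/remove commands for a task's worktree, without executing."""
    name = f"wt-TASK-{task_id}"
    return {
        "create": f"git worktree add ../{name} {base_branch}",
        "remove": f"git worktree remove ../{name}",
    }
```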

## Guardrails
- Never create/remove with dirty status.
- Echo exact commands; do not execute automatically.

## Integration
- `/lazy task-exec` (optional), Coder agent setup phase.

## Example Prompt
> Create a dedicated worktree for TASK-1.2 on top of `feature/auth`.

54
.claude/skills/memory-graph/SKILL.md
Normal file
@@ -0,0 +1,54 @@
---
name: memory-graph
description: Persistent memory graph skill using the MCP Memory server
audience: agents
visibility: project
---

# Memory Graph Skill

This skill teaches you how to create, update, search, and prune a persistent knowledge graph using the Model Context Protocol (MCP) Memory server.

When connected, Memory tools appear as MCP tools named like `mcp__memory__<tool>`. Use these tools proactively whenever you identify durable facts, entities, or relationships you want to persist across sessions.

See `operations.md` for exact tool I/O shapes and `playbooks.md` for common patterns and routing rules.

## When To Use
- New durable facts emerge (requirements, decisions, owners, IDs, endpoints)
- You meet a new entity (person, team, service, repository, dataset)
- You discover relationships ("Service A depends on Service B", "Alice owns Repo X")
- You want to reference prior sessions or quickly search memory
- You need to prune or correct stale memory

## Golden Rules
- Prefer small, well-typed entities over long notes
- Record relationships in active voice: `relationType` describes how `from` relates to `to`
- Add observations as atomic strings; include dates or sources when helpful
- Before creating, search existing nodes to avoid duplicates
- When correcting, prefer `delete_observations` then `add_observations` over overwriting

## Auto Triggers
- UserPromptSubmit adds a Memory Graph activation block when durable facts or explicit memory intents are detected. Disable with `LAZYDEV_DISABLE_MEMORY_SKILL=1`.
- PostToolUse emits lightweight suggestions when tool results include durable facts. Disable with `LAZYDEV_DISABLE_MEMORY_SUGGEST=1`.

## Tooling Summary (server "memory")
- `create_entities`, `add_observations`, `create_relations`
- `delete_entities`, `delete_observations`, `delete_relations`
- `read_graph`, `search_nodes`, `open_nodes`

Always call tools with the fully-qualified MCP name, for example: `mcp__memory__create_entities`.

## Minimal Flow
1) `mcp__memory__search_nodes` for likely duplicates
2) `mcp__memory__create_entities` as needed
3) `mcp__memory__add_observations` with concise facts
4) `mcp__memory__create_relations` to wire the graph
5) Optional: `mcp__memory__open_nodes` to verify saved nodes
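
The flow above, sketched against a stand-in client. In practice these calls are the `mcp__memory__*` tools; the `client` object and method shapes here are a stub for illustration only:

```python
def persist_entity(client, name: str, entity_type: str, facts: list[str]) -> None:
    """Search-then-create to avoid duplicates, then attach concise observations."""
    if not client.search_nodes(name):  # step 1: check for existing nodes
        client.create_entities(        # step 2: create only if missing
            [{"name": name, "entityType": entity_type, "observations": []}]
        )
    # step 3: append atomic facts to the entity
    client.add_observations([{"entityName": name, "contents": facts}])
```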

## Error Handling
- If create fails due to existing name, switch to `add_observations`
- If `add_observations` fails (unknown entity), retry with `create_entities`
- All delete tools are safe on missing targets (no-op)

## Examples
See `examples.md` for end-to-end examples covering projects, APIs, and people.
69
.claude/skills/memory-graph/examples.md
Normal file
@@ -0,0 +1,69 @@
# Examples

All examples assume the Memory MCP server is connected under the name `memory`, so tool names are `mcp__memory__...`.

## Project/Service
Persist a service and its basics.

1) Prevent duplicates
```
tool: mcp__memory__search_nodes
input: {"query": "service:alpha"}
```

2) Create entity if missing
```
tool: mcp__memory__create_entities
input: {
  "entities": [
    {
      "name": "service:alpha",
      "entityType": "service",
      "observations": [
        "owner: alice",
        "repo: github.com/org/alpha",
        "primary_language: python",
        "deploy_url: https://alpha.example.com"
      ]
    }
  ]
}
```

3) Add relation to its owner
```
tool: mcp__memory__create_relations
input: {
  "relations": [
    {"from": "service:alpha", "to": "person:alice", "relationType": "owned_by"}
  ]
}
```

## People
Create or enrich people entities.

```
tool: mcp__memory__create_entities
input: {"entities": [{"name": "person:alice", "entityType": "person", "observations": ["email: alice@example.com"]}]}
```

Add title change
```
tool: mcp__memory__add_observations
input: {"observations": [{"entityName": "person:alice", "contents": ["title: Staff Engineer (2025-10-27)"]}]}
```

## Corrections
Remove stale owner, add new owner.

```
tool: mcp__memory__delete_observations
input: {"deletions": [{"entityName": "service:alpha", "observations": ["owner: alice"]}]}
```

```
tool: mcp__memory__add_observations
input: {"observations": [{"entityName": "service:alpha", "contents": ["owner: bob"]}]}
```

113
.claude/skills/memory-graph/operations.md
Normal file
@@ -0,0 +1,113 @@
# Memory Graph Operations (I/O)

Use the fully-qualified tool names with the MCP prefix: `mcp__memory__<tool>`.

All tools below belong to the server `memory`.

## create_entities
Create multiple new entities. Skips any entity whose `name` already exists.

Input
```
{
  "entities": [
    {
      "name": "string",
      "entityType": "string",
      "observations": ["string", "string"]
    }
  ]
}
```

## create_relations
Create multiple relations. Skips duplicates.

Input
```
{
  "relations": [
    {
      "from": "string",
      "to": "string",
      "relationType": "string" // active voice, e.g. "depends_on", "owned_by"
    }
  ]
}
```

## add_observations
Add observations to existing entities. Fails if `entityName` doesn’t exist.

Input
```
{
  "observations": [
    {
      "entityName": "string",
      "contents": ["string", "string"]
    }
  ]
}
```

## delete_entities
Remove entities and cascade their relations. No-op if missing.

Input
```
{ "entityNames": ["string", "string"] }
```

## delete_observations
Remove specific observations from entities. No-op if missing.

Input
```
{
  "deletions": [
    {
      "entityName": "string",
      "observations": ["string", "string"]
    }
  ]
}
```

## delete_relations
Remove specific relations. No-op if missing.

Input
```
{
  "relations": [
    {
      "from": "string",
      "to": "string",
      "relationType": "string"
    }
  ]
}
```

## read_graph
Return the entire graph.

Input: none

## search_nodes
Fuzzy search across entity names, types, and observations.

Input
```
{ "query": "string" }
```

## open_nodes
Return specific nodes and relations connecting them. Skips non-existent names.

Input
```
{ "names": ["string", "string"] }
```

44
.claude/skills/memory-graph/playbooks.md
Normal file
@@ -0,0 +1,44 @@
# Memory Graph Playbooks

Use these routing patterns to decide which tools to call and in what order.

## 1) Persist a New Entity (+ facts)
1. `mcp__memory__search_nodes` with the proposed name
2. If not found → `mcp__memory__create_entities`
3. Then `mcp__memory__add_observations`
4. Optionally `mcp__memory__open_nodes` to verify

Example intent → tools
- Intent: “Remember service Alpha (owner: Alice, repo: org/alpha)”
- Tools:
  - `create_entities` → name: "service:alpha", type: "service"
  - `add_observations` → key facts (owner, repo URL, language, deploy URL)

## 2) Add Relations Between Known Entities
1. `mcp__memory__open_nodes` for both
2. If either missing → create it first
3. `mcp__memory__create_relations`

Relation guidance
- Use active voice `relationType`: `depends_on`, `owned_by`, `maintained_by`, `deployed_to`, `docs_at`
- Prefer directional relations; add a reverse relation only if it has a different meaning

## 3) Correct or Update Facts
1. `mcp__memory__open_nodes`
2. `mcp__memory__delete_observations` to remove stale/incorrect facts
3. `mcp__memory__add_observations` to append correct facts

## 4) Remove Entities or Links
- `mcp__memory__delete_relations` for just the link
- `mcp__memory__delete_entities` for full removal (cascades relations)

## 5) Explore or Export
- `mcp__memory__read_graph` to dump the entire graph
- `mcp__memory__search_nodes` to find relevant nodes by keyword
- For focused context, use `mcp__memory__open_nodes` with names

## 6) Session Rhythm
- Before deep work: `search_nodes` or `open_nodes` for today’s entities
- During work: add small observations at decision points
- After work: link new entities and summarize outcomes as observations

32
.claude/skills/output-style-selector/SKILL.md
Normal file
@@ -0,0 +1,32 @@
---
name: output-style-selector
description: Automatically choose the best output style (tables, bullets, YAML, HTML, concise) to improve scannability and save tokens
version: 0.1.0
tags: [formatting, context]
triggers:
  - style
  - format
  - output style
---

# Output Style Selector

## Purpose

Select a response style that maximizes readability and minimizes back-and-forth.

## Behavior

1. Infer intent from prompt keywords and task type.
2. Choose one of: table-based, bullet-points, yaml-structured, html-structured, genui, ultra-concise, markdown-focused.
3. Emit a short “Style Block” (1–2 lines) describing the chosen style.
4. Respect overrides: `[style: <name>]` or `[style: off]`.
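
The steps above can be sketched in a few lines. This is a minimal illustration, not the skill's actual rules — the keyword-to-style map is invented for the example:

```python
# Illustrative keyword map; the real skill infers intent more broadly.
STYLES = {
    "compare": "table-based",
    "vs": "table-based",
    "list": "bullet-points",
    "config": "yaml-structured",
    "summary": "ultra-concise",
}

def pick_style(prompt: str) -> str:
    """Steps 1-2: infer intent from keywords; step 4: respect [style: ...] overrides."""
    lowered = prompt.lower()
    if "[style: off]" in lowered:
        return "off"
    if "[style:" in lowered:  # explicit override wins
        return lowered.split("[style:", 1)[1].split("]", 1)[0].strip()
    for keyword, style in STYLES.items():
        if keyword in lowered:
            return style
    return "markdown-focused"  # default
```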

## Guardrails

- Only inject when helpful; avoid long style instructions.
- Keep the Style Block compact.

## Integration

- `UserPromptSubmit` and sub-agent prompts (documentation, reviewer, PM).

## Example Style Block

> Style: Table Based. Use a summary paragraph and then tables for comparisons and actions.
346
.claude/skills/performance-budget-checker/SKILL.md
Normal file
@@ -0,0 +1,346 @@
---
name: performance-budget-checker
description: Detects performance anti-patterns like N+1 queries, nested loops, large file operations, and inefficient algorithms. Suggests fast fixes before issues reach production.
---

# Performance Budget Checker Skill

**Purpose**: Catch performance killers before they slow production.

**Trigger Words**: query, database, loop, for, map, filter, file, read, load, fetch, API, cache

---

## Quick Decision: Check Performance?

```python
def needs_perf_check(code_context: dict) -> bool:
    """Fast performance risk evaluation."""

    # Performance-critical patterns
    patterns = [
        "for ", "while ", "map(", "filter(",  # Loops
        "db.", "query", "select", "fetch",    # Database
        ".all()", ".filter(", ".find(",       # ORM queries
        "open(", "read", "readlines",         # File I/O
        "json.loads", "pickle.load",          # Deserialization
        "sorted(", "sort(",                   # Sorting
        "in list", "in array",                # Linear search
    ]

    code = code_context.get("code", "").lower()
    return any(p in code for p in patterns)
```

---

## Performance Anti-Patterns (Quick Fixes)

### 1. **N+1 Query Problem** (Most Common) ⚠️
```python
# ❌ BAD - 1 + N queries (slow!)
def get_users_with_posts():
    users = User.query.all()  # 1 query
    for user in users:
        user.posts = Post.query.filter_by(user_id=user.id).all()  # N queries!
    return users
# Performance: 101 queries for 100 users

# ✅ GOOD - 1 query with JOIN
def get_users_with_posts():
    users = User.query.options(joinedload(User.posts)).all()  # 1 query
    return users
# Performance: 1 query for 100 users

# Or use a batch prefetch
def get_users_with_posts():
    users = User.query.all()
    user_ids = [u.id for u in users]
    posts = Post.query.filter(Post.user_id.in_(user_ids)).all()
    # Group posts by user_id manually
    return users
```

**Quick Fix**: Use `joinedload()`, `selectinload()`, or batch fetch.

---

### 2. **Nested Loops** ⚠️
```python
# ❌ BAD - O(n²) complexity
def find_common_items(list1, list2):
    common = []
    for item1 in list1:      # O(n)
        for item2 in list2:  # O(n)
            if item1 == item2:
                common.append(item1)
    return common
# Performance: 1,000,000 operations for 1000 items each

# ✅ GOOD - O(n) with set
def find_common_items(list1, list2):
    return list(set(list1) & set(list2))
# Performance: ~2000 operations for 1000 items each
```

**Quick Fix**: Use set intersection, dict lookup, or hash map.

---

### 3. **Inefficient Filtering** ⚠️
```python
# ❌ BAD - Fetch all, then filter in Python
def get_active_users():
    all_users = User.query.all()  # Fetch 10,000 users
    active = [u for u in all_users if u.is_active]  # Filter in memory
    return active
# Performance: 10,000 rows transferred, filtered in Python

# ✅ GOOD - Filter in database
def get_active_users():
    return User.query.filter_by(is_active=True).all()
# Performance: Only active users transferred
```

**Quick Fix**: Push filtering to the database with a WHERE clause.

---

### 4. **Large File Loading** ⚠️
```python
# ❌ BAD - Load entire file into memory
def process_large_file(filepath):
    with open(filepath) as f:
        data = f.read()  # 1GB file → 1GB memory!
    for line in data.split('\n'):
        process_line(line)

# ✅ GOOD - Stream line by line
def process_large_file(filepath):
    with open(filepath) as f:
        for line in f:  # Streaming, ~4KB at a time
            process_line(line.strip())
```

**Quick Fix**: Stream files instead of loading them fully.

---

### 5. **Missing Pagination** ⚠️
```python
# ❌ BAD - Return all 100,000 records
@app.route("/api/users")
def get_users():
    return User.query.all()  # 100,000 rows!

# ✅ GOOD - Paginate
@app.route("/api/users")
def get_users():
    page = request.args.get('page', 1, type=int)
    per_page = request.args.get('per_page', 50, type=int)
    return User.query.paginate(page=page, per_page=per_page)
```

**Quick Fix**: Add pagination to list endpoints.

---

### 6. **No Caching** ⚠️
```python
# ❌ BAD - Recompute every time
def get_top_products():
    # Expensive computation every request
    products = Product.query.all()
    sorted_products = sorted(products, key=lambda p: p.sales, reverse=True)
    return sorted_products[:10]

# ✅ GOOD - Cache in 5-minute buckets
from functools import lru_cache
import time

@lru_cache(maxsize=1)
def get_top_products_cached(cache_key: int):
    # cache_key changes every 5 minutes, which invalidates the cached entry
    return _compute_top_products()

def get_top_products():
    return get_top_products_cached(int(time.time() // 300))  # 5-min buckets

def _compute_top_products():
    products = Product.query.all()
    sorted_products = sorted(products, key=lambda p: p.sales, reverse=True)
    return sorted_products[:10]
```

**Quick Fix**: Add caching for expensive computations.

---

### 7. **Linear Search in List** ⚠️
```python
# ❌ BAD - O(n) lookup
user_ids = list(range(1, 10001))  # List
if 9999 in user_ids:  # Scans the entire list
    pass

# ✅ GOOD - O(1) lookup
user_ids = set(range(1, 10001))  # Set
if 9999 in user_ids:  # Hash lookup, effectively instant
    pass
```

**Quick Fix**: Use a set/dict for lookups instead of a list.

---

### 8. **Synchronous I/O in Loop** ⚠️
```python
# ❌ BAD - Sequential API calls (slow)
def fetch_user_data(user_ids):
    results = []
    for user_id in user_ids:  # 100 users
        data = requests.get(f"/api/users/{user_id}").json()  # 200ms each
        results.append(data)
    return results
# Performance: 100 × 200ms = 20 seconds!

# ✅ GOOD - Parallel requests
import asyncio
import aiohttp

async def fetch_user_data(user_ids):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_one(session, uid) for uid in user_ids]
        results = await asyncio.gather(*tasks)
        return results

async def fetch_one(session, user_id):
    async with session.get(f"/api/users/{user_id}") as resp:
        return await resp.json()
# Performance: ~200ms total (parallel)
```

**Quick Fix**: Use async/await or threading for I/O-bound operations.

---

## Performance Budget Guidelines

| Operation | Acceptable | Warning | Critical |
|-----------|-----------|---------|----------|
| API response time | <200ms | 200-500ms | >500ms |
| Database query | <50ms | 50-200ms | >200ms |
| List endpoint | <100 items | 100-1000 | >1000 |
| File operation | <1MB | 1-10MB | >10MB |
| Loop iterations | <1000 | 1000-10000 | >10000 |
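
The budget table above can be applied mechanically; a minimal sketch for the API-response-time row (the other rows would get analogous helpers):

```python
def classify_api_latency(ms: float) -> str:
    """Bucket a measured API response time against the budget table above."""
    if ms < 200:
        return "acceptable"
    if ms <= 500:
        return "warning"
    return "critical"
```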

---

## Output Format

````markdown
## Performance Report

**Status**: [✅ WITHIN BUDGET | ⚠️ ISSUES FOUND]

---

### Performance Issues: 2

1. **[HIGH] N+1 Query in get_user_posts() (api.py:34)**
   - **Issue**: 1 + 100 queries (101 total)
   - **Impact**: ~500ms for 100 users
   - **Fix**:
     ```python
     # Change this:
     users = User.query.all()
     for user in users:
         user.posts = Post.query.filter_by(user_id=user.id).all()

     # To this:
     users = User.query.options(joinedload(User.posts)).all()
     ```
   - **Expected**: 500ms → 50ms (10x faster)

2. **[MEDIUM] No pagination on /api/products (routes.py:45)**
   - **Issue**: Returns all 5,000 products
   - **Impact**: 2MB response, slow load
   - **Fix**:
     ```python
     @app.route("/api/products")
     def get_products():
         page = request.args.get('page', 1, type=int)
         return Product.query.paginate(page=page, per_page=50)
     ```

---

### Optimizations Applied: 1
- ✅ Used set() for user_id lookup (utils.py:23) - O(1) instead of O(n)

---

**Next Steps**:
1. Fix N+1 query with joinedload (5 min fix)
2. Add pagination to /api/products (10 min)
3. Consider adding Redis cache for top products
````

---

## When to Skip Performance Checks

✅ Skip for:
- Prototypes/POCs
- Admin-only endpoints (low traffic)
- One-time scripts
- Small datasets (<100 items)

⚠️ Always check for:
- Public APIs
- User-facing endpoints
- High-traffic pages
- Data processing pipelines

---

## What This Skill Does NOT Do

❌ Run actual benchmarks (use profiling tools)
❌ Optimize algorithms (focus on anti-patterns)
❌ Check infrastructure (servers, CDN, etc.)
❌ Replace load testing

✅ **DOES**: Detect common performance anti-patterns with quick fixes.

---

## Configuration

```bash
# Strict mode: check all loops and queries
export LAZYDEV_PERF_STRICT=1

# Disable performance checks
export LAZYDEV_DISABLE_PERF_CHECKS=1

# Set custom thresholds
export LAZYDEV_PERF_MAX_QUERY_TIME=100  # ms
export LAZYDEV_PERF_MAX_LOOP_SIZE=5000
```
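
Reading the variables above might look like this sketch; the fallback defaults shown here are illustrative assumptions, not the skill's documented defaults:

```python
import os

def load_perf_config() -> dict:
    """Read the LAZYDEV_* variables above; defaults here are assumed, not documented."""
    return {
        "strict": os.environ.get("LAZYDEV_PERF_STRICT") == "1",
        "disabled": os.environ.get("LAZYDEV_DISABLE_PERF_CHECKS") == "1",
        "max_query_time_ms": int(os.environ.get("LAZYDEV_PERF_MAX_QUERY_TIME", "200")),
        "max_loop_size": int(os.environ.get("LAZYDEV_PERF_MAX_LOOP_SIZE", "10000")),
    }
```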

---

## Quick Reference: Common Fixes

| Anti-Pattern | Fix | Time Complexity |
|--------------|-----|-----------------|
| N+1 queries | `joinedload()` | O(n) → O(1) |
| Nested loops | Use set/dict | O(n²) → O(n) |
| Load full file | Stream lines | O(n) memory → O(1) |
| No pagination | `.paginate()` | O(n) → O(page_size) |
| Linear search | Use set | O(n) → O(1) |
| Sync I/O loop | async/await | O(n×t) → O(t) |

---

**Version**: 1.0.0
**Focus**: Database, loops, I/O, caching
**Speed**: <3 seconds per file
431
.claude/skills/project-docs-sync/SKILL.md
Normal file
@@ -0,0 +1,431 @@
---
name: project-docs-sync
description: Automatically synchronize project documentation when major changes occur (new tech, architecture changes, requirements shifts). Detects significant updates and propagates changes across TECH-STACK.md, ARCHITECTURE.md, and SPECIFICATIONS.md.
---

# Project Documentation Sync Skill

**Purpose**: Keep project documentation consistent without manual syncing overhead.

**Trigger**: Auto-invoked by PostToolUse hook when files in `project-management/` are edited.

---

## Decision Logic: Should We Sync?

```python
def should_sync(change: dict) -> tuple[bool, str]:
    """Conservative sync decision - only on big changes."""

    # Track last sync state
    last_sync = load_last_sync()  # from .meta/last-sync.json

    significant_changes = {
        # Technology changes
        "added_technology": True,        # New language, framework, library
        "removed_technology": True,      # Deprecated/removed tech
        "upgraded_major_version": True,  # React 17 → 18, Python 3.10 → 3.11

        # Architecture changes
        "added_service": True,           # New microservice, component
        "removed_service": True,         # Deprecated service
        "changed_data_flow": True,       # New integration pattern
        "added_integration": True,       # New third-party API

        # Requirements changes
        "new_security_requirement": True,
        "new_performance_requirement": True,
        "changed_api_contract": True,
        "added_compliance_need": True,
    }

    # Skip minor changes (these classifications map to False)
    minor_changes = {
        "typo_fix": False,
        "formatting": False,
        "comment_update": False,
        "example_clarification": False,
    }

    change_type = classify_change(change["file_path"], change["diff"], last_sync)
    return (significant_changes | minor_changes).get(change_type, False), change_type
```

---

## What Gets Synced (Conservative Strategy)

### 1. TECH-STACK.md Changed → Update ARCHITECTURE.md

**Triggers:**
- Added new language/framework (e.g., added Redis)
- Removed technology (e.g., removed MongoDB)
- Major version upgrade (e.g., React 17 → 18)

**Sync Actions:**
```markdown
TECH-STACK.md shows:
+ Redis 7.x (added for caching)

→ Update ARCHITECTURE.md:
- Add Redis component to architecture diagram
- Add caching layer to data flow
- Document Redis connection pattern
```

**Example Output:**
```
✓ Synced TECH-STACK.md → ARCHITECTURE.md
- Added: Redis caching layer
- Updated: Data flow diagram (added cache lookup)
- Reason: New technology requires architectural integration
```

---

### 2. ARCHITECTURE.md Changed → Update SPECIFICATIONS.md

**Triggers:**
- New service/component added
- API gateway pattern introduced
- Data model changed
- Integration pattern modified

**Sync Actions:**
```markdown
ARCHITECTURE.md shows:
+ API Gateway (Kong) added between clients and services

→ Update SPECIFICATIONS.md:
- Add API Gateway endpoints
- Update authentication flow
- Add rate limiting specs
- Update API contract examples
```

**Example Output:**
```
✓ Synced ARCHITECTURE.md → SPECIFICATIONS.md
- Added: API Gateway endpoint specs
- Updated: Authentication flow (now via gateway)
- Reason: Architectural change affects API contracts
```

---

### 3. PROJECT-OVERVIEW.md Changed → Validate Consistency

**Triggers:**
- Project scope changed
- New requirement category added
- Compliance requirement added
- Target users changed

**Sync Actions:**
```markdown
PROJECT-OVERVIEW.md shows:
+ Compliance: GDPR data privacy required

→ Validate across all docs:
- Check TECH-STACK.md has encryption libraries
- Check ARCHITECTURE.md has data privacy layer
- Check SPECIFICATIONS.md has GDPR endpoints (data export, deletion)
- Flag missing pieces
```

**Example Output:**
```
⚠ Validation: PROJECT-OVERVIEW.md → ALL DOCS
- Missing in TECH-STACK.md: No encryption library listed
- Missing in SPECIFICATIONS.md: No GDPR data export endpoint
- Recommendation: Add encryption lib + GDPR API specs
```

---

## Change Detection Algorithm

```python
def classify_change(file_path: str, diff: str, last_sync: dict) -> str:
    """Classify change significance using diff analysis."""

    # Parse diff
    added_lines = [line for line in diff.split('\n') if line.startswith('+')]
    removed_lines = [line for line in diff.split('\n') if line.startswith('-')]

    # Check for technology changes
    tech_keywords = ['framework', 'library', 'language', 'database', 'cache']
    if any(kw in line.lower() for line in added_lines for kw in tech_keywords):
        if removed_lines:  # Replacement
            return "upgraded_major_version"
        return "added_technology"

    # Check for architecture changes
    arch_keywords = ['service', 'component', 'layer', 'gateway', 'microservice']
    if any(kw in line.lower() for line in added_lines for kw in arch_keywords):
        return "added_service"

    # Check for requirement changes (keywords lowercased to match lowered lines)
    req_keywords = ['security', 'performance', 'compliance', 'gdpr', 'hipaa']
    if any(kw in line.lower() for line in added_lines for kw in req_keywords):
        return "new_security_requirement"

    # Check for API contract changes
    if 'endpoint' in diff.lower() or 'route' in diff.lower():
        return "changed_api_contract"

    # Default: minor change (skip sync)
    if len(added_lines) < 3 and not removed_lines:
        return "typo_fix"

    return "unknown_change"
```

---

## Sync State Tracking

**Storage**: `.meta/last-sync.json`

```json
{
  "last_sync_timestamp": "2025-10-30T14:30:00Z",
  "synced_files": {
    "project-management/TECH-STACK.md": {
      "hash": "abc123",
      "last_modified": "2025-10-30T14:00:00Z",
      "change_type": "added_technology"
    },
    "project-management/ARCHITECTURE.md": {
      "hash": "def456",
      "last_modified": "2025-10-30T14:30:00Z",
      "synced_from": "TECH-STACK.md"
    }
  },
  "pending_syncs": []
}
```

**Update Logic**:
1. After Write/Edit to `project-management/*.md`
2. Calculate file hash (MD5 of content)
3. Compare with last sync state
4. If different + significant change → Trigger sync
5. Update `.meta/last-sync.json`
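
Steps 2-3 above can be sketched as follows; `needs_sync` is a hypothetical helper name, and the state shape mirrors the `synced_files` JSON shown earlier:

```python
import hashlib
from pathlib import Path

def file_md5(path: str) -> str:
    """Step 2: hash the file content."""
    return hashlib.md5(Path(path).read_bytes()).hexdigest()

def needs_sync(path: str, last_sync: dict) -> bool:
    """Step 3: compare against the hash recorded in .meta/last-sync.json."""
    recorded = last_sync.get("synced_files", {}).get(path, {}).get("hash")
    return file_md5(path) != recorded
```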

---

## Sync Execution Flow

```
PostToolUse Hook Fires
        ↓
File edited: project-management/TECH-STACK.md
        ↓
Load .meta/last-sync.json
        ↓
Calculate diff from last sync
        ↓
Classify change: "added_technology" (Redis)
        ↓
Decision: should_sync() → TRUE
        ↓
┌────────────────────────────────────┐
│ Sync: TECH-STACK → ARCHITECTURE    │
│ - Read TECH-STACK.md additions     │
│ - Identify: Redis 7.x (cache)      │
│ - Update ARCHITECTURE.md:          │
│   + Add Redis component            │
│   + Update data flow               │
└────────────────────────────────────┘
        ↓
Write updated ARCHITECTURE.md
        ↓
Update .meta/last-sync.json
        ↓
Log sync action
        ↓
Output brief sync report
```

---

## Sync Report Format

```markdown
## Documentation Sync Report

**Trigger**: TECH-STACK.md modified (added Redis)
**Timestamp**: 2025-10-30T14:30:00Z

---

### Changes Detected: 1

1. **[SIGNIFICANT] Added technology: Redis 7.x**
   - **Source**: project-management/TECH-STACK.md:45
   - **Purpose**: Caching layer for API responses

---

### Syncs Applied: 2

1. **TECH-STACK.md → ARCHITECTURE.md**
   - ✓ Added: Redis component to architecture diagram
   - ✓ Updated: Data flow (added cache lookup step)
   - ✓ File: project-management/ARCHITECTURE.md:120-135

2. **TECH-STACK.md → SPECIFICATIONS.md**
   - ✓ Added: Cache invalidation API endpoint
   - ✓ Updated: Response time expectations (now <100ms with cache)
   - ✓ File: project-management/SPECIFICATIONS.md:78-82

---

### Validation Checks: 2

✓ TECH-STACK.md consistency: OK
✓ ARCHITECTURE.md alignment: OK

---

**Result**: Documentation synchronized successfully.
**Next Action**: Review changes in next commit.
```

---

## Integration with PostToolUse Hook

**Hook Location**: `.claude/hooks/post_tool_use_format.py`

**Trigger Condition**:
```python
def should_trigger_docs_sync(file_path: str, tool_name: str) -> bool:
    """Only trigger on project-management doc edits."""

    if tool_name not in ["Write", "Edit"]:
        return False

    project_docs = [
        "project-management/TECH-STACK.md",
        "project-management/ARCHITECTURE.md",
        "project-management/PROJECT-OVERVIEW.md",
        "project-management/SPECIFICATIONS.md",
    ]

    return any(doc in file_path for doc in project_docs)
```

**Invocation**:
```python
# In PostToolUse hook
if should_trigger_docs_sync(file_path, tool_name):
    # Load skill
    last_sync = load_last_sync()
    skill_result = invoke_skill("project-docs-sync", {
        "file_path": file_path,
        "change_type": classify_change(file_path, diff, last_sync),
        "last_sync_state": last_sync
    })

    # Log sync action
    log_sync_action(skill_result)
```

---

## Sync Strategies by File Type

### TECH-STACK.md → ARCHITECTURE.md
**What to sync:**
- New databases → Add data layer component
- New frameworks → Add to tech stack diagram
- New APIs → Add integration points
- Version upgrades → Update compatibility notes

### ARCHITECTURE.md → SPECIFICATIONS.md
**What to sync:**
- New services → Add service endpoints
- New integrations → Add API contracts
- Data model changes → Update request/response schemas
- Security layers → Add authentication specs

### PROJECT-OVERVIEW.md → ALL DOCS
**What to validate:**
- Compliance requirements → Check encryption in TECH-STACK
- Performance goals → Check caching in ARCHITECTURE
- Target users → Check API design in SPECIFICATIONS
- Scope changes → Validate alignment across all docs

---

## Conservative Sync Rules

**DO Sync When:**
- ✅ New technology added (database, framework, library)
- ✅ Service/component added or removed
- ✅ API contract changed (new endpoint, schema change)
- ✅ Compliance requirement added (GDPR, HIPAA)
- ✅ Major version upgrade (breaking changes possible)

**DO NOT Sync When:**
- ❌ Typo fixes (1-2 character changes)
- ❌ Formatting changes (whitespace, markdown)
- ❌ Comment/example clarifications
- ❌ Documentation of existing features (no new info)
- ❌ Minor version bumps (patch releases)

---

## Error Handling

**If sync fails:**
1. Log error to `.meta/sync-errors.log`
2. Add to pending syncs in `.meta/last-sync.json`
3. Report to user with clear action items
4. Do NOT block the write operation (non-blocking)

**Example Error Report:**
```
⚠ Documentation Sync Failed

**File**: project-management/TECH-STACK.md
**Error**: Could not parse ARCHITECTURE.md (syntax error)
**Action Required**:
1. Fix ARCHITECTURE.md syntax error (line 45)
2. Re-run: /lazy docs-sync

**Pending Syncs**: 1 (tracked in .meta/last-sync.json)
```

---

## Configuration

```bash
# Disable auto-sync (manual /lazy docs-sync only)
export LAZYDEV_DISABLE_DOCS_SYNC=1

# Sync everything (even minor changes)
export LAZYDEV_DOCS_SYNC_AGGRESSIVE=1

# Sync specific files only
export LAZYDEV_DOCS_SYNC_FILES="TECH-STACK.md,ARCHITECTURE.md"
```

---

## What This Skill Does NOT Do

❌ Sync code files (only project-management docs)
❌ Generate docs from scratch (use `/lazy docs`)
❌ Fix documentation errors (use `/lazy fix`)
❌ Create missing docs (use `/lazy plan`)

✅ **DOES**: Automatically propagate significant changes across project documentation with conservative triggers.

---

**Version**: 1.0.0
**Non-blocking**: Syncs in background, logs errors
**Speed**: <2 seconds for typical sync
638
.claude/skills/project-planner/SKILL.md
Normal file
@@ -0,0 +1,638 @@
---
name: project-planner
description: Transforms project ideas into structured documentation (overview + specifications). Use when starting new projects or when a brief needs project-level planning with vision, features, and technical requirements.
---

# Project Planner Skill

**Purpose**: Generate comprehensive project documentation from high-level descriptions.

**Trigger Words**: new project, project overview, project spec, technical requirements, project planning, architecture, system design

---

## Quick Decision: Use Project Planning?

```python
def needs_project_planning(context: dict) -> bool:
    """Fast evaluation for project-level planning."""

    # Indicators of project-level work
    project_indicators = [
        "new project", "project overview", "system design",
        "architecture", "technical requirements", "project spec",
        "build a", "create a", "develop a platform",
        "microservices", "full stack", "api + frontend",
    ]

    description = context.get("description", "").lower()
    return any(indicator in description for indicator in project_indicators)
```

---

## Output Structure

Generates TWO documents in `project-management/`:

### 1. PROJECT-OVERVIEW.md
High-level vision and goals

### 2. SPECIFICATIONS.md
Detailed technical requirements

---

## Document 1: PROJECT-OVERVIEW.md

### Template Structure

```markdown
# {Project Name}

> {Tagline - one compelling sentence}

## Vision

{2-3 sentences describing the ultimate goal and impact}

## Goals

1. {Primary goal}
2. {Secondary goal}
3. {Tertiary goal}

## Key Features

- **{Feature 1}**: {Brief description}
- **{Feature 2}**: {Brief description}
- **{Feature 3}**: {Brief description}
- **{Feature 4}**: {Brief description}
- **{Feature 5}**: {Brief description}

## Success Criteria

1. **{Metric 1}**: {Target}
2. **{Metric 2}**: {Target}
3. **{Metric 3}**: {Target}

## Constraints

- **Budget**: {If specified}
- **Timeline**: {If specified}
- **Technology**: {Required tech stack or limitations}
- **Team**: {Team size/composition if known}

## Out of Scope

- {What this project will NOT do}
- {Features explicitly excluded}
- {Future phases}
```

### Example Output

```markdown
# TaskFlow Pro

> Modern task management with AI-powered prioritization

## Vision

Build a task management platform that helps remote teams stay organized through intelligent prioritization, real-time collaboration, and seamless integrations with existing tools.

## Goals

1. Reduce task management overhead by 50%
2. Enable real-time team collaboration
3. Integrate with popular dev tools (GitHub, Jira, Slack)

## Key Features

- **AI Prioritization**: ML-based task ranking by urgency and impact
- **Real-time Collaboration**: Live updates, comments, mentions
- **Smart Integrations**: Auto-sync with GitHub issues, Jira tickets
- **Custom Workflows**: Configurable pipelines per team
- **Analytics Dashboard**: Team productivity insights

## Success Criteria

1. **User Adoption**: 1000 active users in 6 months
2. **Performance**: <200ms API response time
3. **Reliability**: 99.9% uptime

## Constraints

- Timeline: 6 months MVP
- Technology: Python backend, React frontend, PostgreSQL
- Team: 2 backend, 2 frontend, 1 ML engineer

## Out of Scope

- Mobile apps (Phase 2)
- Video conferencing
- Time tracking (separate product)
```

---

## Document 2: SPECIFICATIONS.md

### Template Structure

```markdown
# {Project Name} - Technical Specifications

## Functional Requirements

### Core Features

#### {Feature 1}
- **Description**: {What it does}
- **User Story**: As a {role}, I want {action} so that {benefit}
- **Acceptance Criteria**:
  - [ ] {Criterion 1}
  - [ ] {Criterion 2}
  - [ ] {Criterion 3}

#### {Feature 2}
{Repeat structure}

### User Flows

#### {Flow 1}: {Name}
1. User {action}
2. System {response}
3. User {next action}
4. Result: {outcome}

---

## Non-Functional Requirements

### Performance
- API response time: <200ms (p95)
- Page load time: <1s
- Concurrent users: 10,000+
- Database queries: <50ms

### Security
- Authentication: OAuth2 + JWT
- Authorization: Role-based access control (RBAC)
- Data encryption: AES-256 at rest, TLS 1.3 in transit
- Rate limiting: 100 req/min per user

### Reliability
- Uptime: 99.9% SLA
- Backup frequency: Daily
- Recovery time: <1 hour (RTO)
- Data loss: <5 minutes (RPO)

### Scalability
- Horizontal scaling: Auto-scale based on load
- Database: Read replicas for queries
- Cache: Redis for hot data
- CDN: Static assets

---

## API Contracts

### Authentication API

#### POST /api/auth/login
```json
// Request
{
  "email": "user@example.com",
  "password": "hashed_password"
}

// Response (200 OK)
{
  "token": "jwt_token_here",
  "user": {
    "id": "user_123",
    "email": "user@example.com",
    "name": "John Doe"
  }
}

// Error (401 Unauthorized)
{
  "error": "Invalid credentials"
}
```
|
||||
|
||||
#### POST /api/auth/logout
|
||||
{Repeat structure for each endpoint}
|
||||
|
||||
### Tasks API
|
||||
|
||||
#### GET /api/tasks
|
||||
```json
|
||||
// Query params: ?page=1&per_page=50&status=active
|
||||
// Response (200 OK)
|
||||
{
|
||||
"tasks": [
|
||||
{
|
||||
"id": "task_123",
|
||||
"title": "Fix bug in auth",
|
||||
"status": "active",
|
||||
"priority": "high",
|
||||
"assignee": "user_456",
|
||||
"created_at": "2025-10-30T10:00:00Z"
|
||||
}
|
||||
],
|
||||
"pagination": {
|
||||
"page": 1,
|
||||
"per_page": 50,
|
||||
"total": 150
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{Continue for all major endpoints}

---

## Data Models

### User
```python
from datetime import datetime
from typing import Literal


class User:
    id: str                                   # UUID
    email: str                                # unique, indexed
    password_hash: str
    name: str
    role: Literal['admin', 'member', 'viewer']
    created_at: datetime
    updated_at: datetime
    last_login: datetime | None
```

### Task
```python
from datetime import datetime
from typing import Literal


class Task:
    id: str                                   # UUID
    title: str                                # max 200 chars
    description: str | None
    status: Literal['backlog', 'active', 'completed']
    priority: Literal['low', 'medium', 'high', 'urgent']
    assignee_id: str | None                   # FK -> User.id
    project_id: str                           # FK -> Project.id
    due_date: datetime | None
    created_at: datetime
    updated_at: datetime
```

{Continue for all major models}

---

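The models above are schema annotations rather than executable code. A runnable sketch of the `Task` model using stdlib dataclasses (field names and constraints come from the template; the enum classes, defaults, and the length check are assumptions):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional
import uuid


class TaskStatus(str, Enum):
    BACKLOG = "backlog"
    ACTIVE = "active"
    COMPLETED = "completed"


class TaskPriority(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    URGENT = "urgent"


@dataclass
class Task:
    title: str                                # max 200 chars, checked below
    project_id: str                           # FK -> Project.id
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    description: Optional[str] = None
    status: TaskStatus = TaskStatus.BACKLOG
    priority: TaskPriority = TaskPriority.MEDIUM
    assignee_id: Optional[str] = None         # FK -> User.id
    due_date: Optional[datetime] = None
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self) -> None:
        # Enforce the "max 200 chars" constraint from the spec
        if len(self.title) > 200:
            raise ValueError("title exceeds 200 characters")
```

In a real backend this would typically become a SQLAlchemy or Django model; the dataclass form just makes the constraints checkable.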
## System Architecture

### Components
- **API Gateway**: Kong/NGINX for routing and rate limiting
- **Backend Services**: FastAPI/Django microservices
- **Database**: PostgreSQL (primary), Redis (cache)
- **Message Queue**: RabbitMQ for async tasks
- **Storage**: S3 for file uploads
- **Monitoring**: Prometheus + Grafana

### Deployment
- **Infrastructure**: AWS/GCP Kubernetes
- **CI/CD**: GitHub Actions
- **Environments**: dev, staging, production
- **Rollback**: Blue-green deployment

---

## Dependencies

### Backend
- Python 3.11+
- FastAPI or Django REST Framework
- SQLAlchemy or Django ORM
- Celery for background tasks
- pytest for testing

### Frontend
- React 18+ or Vue 3+
- TypeScript
- Tailwind CSS or Material-UI
- Axios for API calls
- Vitest or Jest for testing

### Infrastructure
- Docker + Docker Compose
- Kubernetes (production)
- PostgreSQL 15+
- Redis 7+
- NGINX or Caddy

---

## Development Phases

### Phase 1: MVP (Months 1-3)
- [ ] User authentication
- [ ] Basic task CRUD
- [ ] Simple prioritization
- [ ] API foundation

### Phase 2: Collaboration (Months 4-5)
- [ ] Real-time updates (WebSocket)
- [ ] Comments and mentions
- [ ] Team management

### Phase 3: Integrations (Month 6)
- [ ] GitHub integration
- [ ] Jira sync
- [ ] Slack notifications

---

## Testing Strategy

### Unit Tests
- Coverage: >80%
- All business logic functions
- Mock external dependencies

### Integration Tests
- API endpoint testing
- Database transactions
- Authentication flows

### E2E Tests
- Critical user flows
- Payment processing (if applicable)
- Admin workflows

---

## Security Considerations

### OWASP Top 10 Coverage
1. **Injection**: Parameterized queries, input validation
2. **Broken Auth**: JWT with refresh tokens, secure session management
3. **Sensitive Data**: Encryption at rest and in transit
4. **XXE**: Disable XML external entities
5. **Broken Access Control**: RBAC enforcement
6. **Security Misconfiguration**: Secure defaults, regular audits
7. **XSS**: Output escaping, CSP headers
8. **Insecure Deserialization**: Validate all input
9. **Known Vulnerabilities**: Dependency scanning (Snyk, Dependabot)
10. **Insufficient Logging**: Audit logs for sensitive actions

---

## Monitoring & Observability

### Metrics
- Request rate, error rate, latency (RED method)
- Database connection pool usage
- Cache hit/miss ratio
- Background job queue length

### Logging
- Structured JSON logs
- Centralized logging (ELK stack or CloudWatch)
- Log levels: DEBUG (dev), INFO (staging), WARN/ERROR (prod)

### Alerting
- Error rate >5% (P1)
- API latency >500ms (P2)
- Database connections >80% (P2)
- Disk usage >90% (P1)

---

## Documentation Requirements

- [ ] API documentation (OpenAPI/Swagger)
- [ ] Setup guide (README.md)
- [ ] Architecture diagrams
- [ ] Deployment runbook
- [ ] Troubleshooting guide
```

---

## Generation Process

### Step 1: Extract Project Context
```python
def extract_project_info(prompt: str) -> dict:
    """Parse project description for key details."""
    info = {
        "name": None,
        "description": prompt,
        "features": [],
        "tech_stack": [],
        "constraints": {},
        "goals": []
    }

    # Extract from prompt:
    # - Project name (if mentioned)
    # - Desired features
    # - Technology preferences
    # - Timeline/budget constraints
    # - Success metrics

    return info
```

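The stub above only sketches the shape of the result. A minimal working version of the extraction step using plain keyword matching (the tech list and the regexes are illustrative assumptions, not the skill's actual parser):

```python
import re

# Hypothetical vocabulary of recognizable technologies
KNOWN_TECH = ["python", "react", "postgresql", "redis", "fastapi", "django"]


def extract_project_info(prompt: str) -> dict:
    """Parse a project description for key details via keyword matching."""
    lower = prompt.lower()
    info = {
        "name": None,
        "description": prompt,
        "features": [],
        "tech_stack": [t for t in KNOWN_TECH if t in lower],
        "constraints": {},
        "goals": [],
    }
    # Timeline constraints like "6 months" or "3 weeks"
    m = re.search(r"(\d+)\s*(month|week)s?", lower)
    if m:
        info["constraints"]["timeline"] = m.group(0)
    # Feature phrases introduced by "with ..."
    m = re.search(r"with ([^.,;]+)", lower)
    if m:
        info["features"] = [f.strip() for f in m.group(1).split(" and ")]
    return info
```

A production version would hand this to the model itself rather than regexes; the point is the structured dict contract.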
### Step 2: Apply Output Style
Use `output-style-selector` to determine:
- **PROJECT-OVERVIEW.md**: Bullet-points, concise
- **SPECIFICATIONS.md**: Table-based for API contracts, YAML-structured for models

### Step 3: Generate Documents
1. Create `project-management/` directory if needed
2. Write PROJECT-OVERVIEW.md (vision-focused)
3. Write SPECIFICATIONS.md (technical details)
4. Validate completeness

### Step 4: Validation Checklist
```markdown
## Generated Documents Validation

PROJECT-OVERVIEW.md:
- [ ] Project name and tagline present
- [ ] Vision statement (2-3 sentences)
- [ ] 3+ goals defined
- [ ] 5-10 key features listed
- [ ] Success criteria measurable
- [ ] Constraints documented
- [ ] Out-of-scope items listed

SPECIFICATIONS.md:
- [ ] Functional requirements detailed
- [ ] Non-functional requirements (perf, security, reliability)
- [ ] API contracts with examples (if applicable)
- [ ] Data models defined
- [ ] Architecture overview
- [ ] Dependencies listed
- [ ] Development phases outlined
- [ ] Testing strategy included
```

---

## Integration with Commands

### With `/lazy plan`
```bash
# Generate project docs first
/lazy plan --project "Build AI-powered task manager"

# → project-planner skill triggers
# → Generates PROJECT-OVERVIEW.md + SPECIFICATIONS.md
# → Then creates the first user story from the specifications

# Or start from an enhanced prompt
/lazy plan --file enhanced_prompt.md

# → Detects project-level scope
# → Runs project-planner
# → Creates foundational docs
# → Proceeds with story creation
```

### With `/lazy code`
```bash
# Reference specifications during implementation
/lazy code @US-3.4.md

# → context-packer loads SPECIFICATIONS.md
# → API contracts and data models become available
# → Implementation follows the spec
```

---

## What This Skill Does NOT Do

❌ Generate actual code (that's for the `coder` agent)
❌ Create user stories (that's for the `project-manager` agent)
❌ Make architectural decisions (provides the template; you decide)
❌ Replace technical design documents (TDDs)

✅ **DOES**: Create structured foundation documents for new projects.

---

## Configuration

```bash
# Minimal specs (faster, less detail)
export LAZYDEV_PROJECT_SPEC_MINIMAL=1

# Skip API contracts (non-API projects)
export LAZYDEV_PROJECT_NO_API=1

# Focus on specific aspects
export LAZYDEV_PROJECT_FOCUS="security,performance"
```

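A sketch of how a skill implementation might consume these flags. The variable names come from the block above; the helper itself is an assumption, not LAZY-DEV's actual code:

```python
import os
from typing import Optional


def load_project_planner_config(env: Optional[dict] = None) -> dict:
    """Read the LAZYDEV_PROJECT_* flags documented above into a config dict."""
    env = dict(os.environ) if env is None else env
    focus = env.get("LAZYDEV_PROJECT_FOCUS", "")
    return {
        # "1" enables the flag; anything else (or unset) disables it
        "minimal_specs": env.get("LAZYDEV_PROJECT_SPEC_MINIMAL") == "1",
        "skip_api_contracts": env.get("LAZYDEV_PROJECT_NO_API") == "1",
        # Comma-separated list of focus areas, e.g. "security,performance"
        "focus_areas": [f.strip() for f in focus.split(",") if f.strip()],
    }
```
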
---

## Tips for Effective Project Planning

### For PROJECT-OVERVIEW.md
1. **Vision**: Think big picture - why does this exist?
2. **Goals**: Limit to 3-5 measurable outcomes
3. **Features**: High-level only (not task-level details)
4. **Success Criteria**: Must be measurable (numbers, percentages)

### For SPECIFICATIONS.md
1. **API Contracts**: Start with authentication and core resources
2. **Data Models**: Include relationships and constraints
3. **Non-Functional**: Don't skip these - they prevent tech debt
4. **Security**: Reference OWASP Top 10 coverage
5. **Phases**: Break into 2-3 month chunks maximum

### Best Practices
- **Keep PROJECT-OVERVIEW under 2 pages**: Executive summary only
- **SPECIFICATIONS can be longer**: It is the source of truth
- **Update specs as you learn**: These are living documents
- **Version control both**: Track changes over time

---

## Example Trigger Scenarios

### Scenario 1: New Greenfield Project
```
User: "I want to build a real-time chat platform with video calls"

→ project-planner triggers
→ Generates:
  - PROJECT-OVERVIEW.md (vision: modern communication platform)
  - SPECIFICATIONS.md (WebSocket APIs, video streaming, etc.)
→ Ready for user story creation
```

### Scenario 2: From Enhanced Prompt
```
User: /lazy plan --file enhanced_prompt.md
# enhanced_prompt contains: detailed project requirements, tech stack, timeline

→ project-planner parses prompt
→ Extracts structured information
→ Generates both documents
→ Proceeds to first user story
```

### Scenario 3: Partial Information
```
User: "Build a task manager, not sure about details yet"

→ project-planner generates template
→ Marks sections as [TODO: Specify...]
→ User fills in gaps incrementally
→ Re-generate or update manually
```

---

## Output Format (Completion)

```markdown
## Project Planning Complete

**Documents Generated**:

1. **PROJECT-OVERVIEW.md** (2.4KB)
   - Project: TaskFlow Pro
   - Vision: Modern task management with AI
   - Features: 5 key features defined
   - Success criteria: 3 measurable metrics

2. **SPECIFICATIONS.md** (8.1KB)
   - Functional requirements: 5 core features detailed
   - API contracts: 12 endpoints documented
   - Data models: 6 models defined
   - Architecture: Microservices with Kubernetes
   - Development phases: 3 phases over 6 months

**Location**: `./project-management/`

**Next Steps**:
1. Review and refine generated documents
2. Run: `/lazy plan "First user story description"`
3. Begin implementation with `/lazy code`

**Estimated Setup Time**: 15-20 minutes to review/customize
```

---

**Version**: 1.0.0
**Output Size**: 10-15KB total (both documents)
**Generation Time**: ~30 seconds
430
.claude/skills/regression-testing/SKILL.md
Normal file
@@ -0,0 +1,430 @@
---
name: regression-testing
description: Evaluates and implements regression tests after bug fixes based on severity, code complexity, and coverage. Use when bugs are fixed to prevent future regressions.
---

# Regression Testing Skill

**Purpose**: Automatically evaluate and implement regression tests after bug fixes to prevent future regressions.

**When to Trigger**: This skill activates after a bug fix lands, letting Claude (the orchestrator) decide whether regression tests would be valuable given the context.

---

## Decision Criteria (Orchestrator Evaluation)

Before implementing regression tests, evaluate these factors:

### High Value Scenarios (Implement Regression Tests)
- **Critical Bugs**: Security, data loss, or production-impacting issues
- **Subtle Bugs**: Edge cases, race conditions, timing issues that are easy to miss
- **Complex Logic**: Multi-step workflows, state machines, intricate business rules
- **Low Coverage Areas**: Bug occurred in under-tested code (<70% coverage)
- **Recurring Patterns**: Similar bugs fixed before in related code
- **Integration Points**: Bugs at module/service boundaries

### Lower Value Scenarios (Skip or Defer)
- **Trivial Fixes**: Typos, obvious logic errors with existing tests
- **Already Well-Tested**: Bug area has >90% coverage with comprehensive tests
- **One-Time Anomalies**: Environmental issues, config errors (not code bugs)
- **Rapid Prototyping**: Early-stage features expected to change significantly
- **UI-Only Changes**: Purely cosmetic fixes with no logic impact

---

## Regression Test Strategy

### 1. Bug Analysis Phase

**Understand the Bug:**
```markdown
## Bug Context
- **What broke**: [Symptom/error]
- **Root cause**: [Why it happened]
- **Fix applied**: [What changed]
- **Failure scenario**: [Steps to reproduce original bug]
```

**Evaluate Test Value:**
```python
def should_add_regression_test(bug_context: dict) -> tuple[bool, str]:
    """
    Decide if a regression test is valuable.

    Returns:
        (add_test: bool, reason: str)
    """
    severity = bug_context.get("severity")      # critical, high, medium, low
    complexity = bug_context.get("complexity")  # high, medium, low
    coverage = bug_context.get("coverage_pct", 0)

    # Critical bugs always get regression tests
    if severity == "critical":
        return True, "Critical bug requires regression test"

    # Complex bugs with low coverage
    if complexity == "high" and coverage < 70:
        return True, "Complex logic with insufficient coverage"

    # Already well-tested
    if coverage > 90:
        return False, "Area already has comprehensive tests"

    # Default: add test for medium+ severity
    if severity in {"high", "medium"}:
        return True, f"Bug severity {severity} warrants regression test"

    return False, "Low-value regression test, skipping"
```

### 2. Regression Test Implementation

**Test Structure:**
```python
# test_<module>_regression.py

from concurrent.futures import ThreadPoolExecutor, wait

import pytest

# Code under test (module paths are illustrative)
from app.models import Payment, User
from app.payments import PaymentProcessor
from app.cache import ThreadSafeCache


class TestRegressions:
    """Regression tests for fixed bugs."""

    def test_regression_issue_123_null_pointer_in_payment(self):
        """
        Regression test for GitHub issue #123.

        Bug: NullPointerException when processing payment with missing user email.
        Fixed: 2025-10-30
        Root cause: Missing null check in payment processor

        This test ensures the fix remains in place and prevents regression.
        """
        # Arrange: Setup scenario that caused original bug
        payment = Payment(amount=100.0, user=User(email=None))
        processor = PaymentProcessor()

        # Act: Execute the previously failing code path
        result = processor.process(payment)

        # Assert: Verify fix works (no exception, proper error handling)
        assert result.status == "failed"
        assert "invalid user email" in result.error_message.lower()

    def test_regression_pr_456_race_condition_in_cache(self):
        """
        Regression test for PR #456.

        Bug: Race condition in cache invalidation caused stale reads
        Fixed: 2025-10-30
        Root cause: Non-atomic read-modify-write operation

        This test simulates concurrent cache access to verify thread safety.
        """
        # Arrange: Setup concurrent scenario
        cache = ThreadSafeCache()
        cache.set("key", "value1")

        # Act: Simulate race condition with threads
        with ThreadPoolExecutor(max_workers=10) as executor:
            futures = [
                executor.submit(cache.update, "key", f"value{i}")
                for i in range(100)
            ]
            wait(futures)

        # Assert: Verify no stale reads or corruption
        final_value = cache.get("key")
        assert final_value.startswith("value")
        assert cache.consistency_check()  # Internal consistency
```

**Test Naming Convention:**
- `test_regression_<issue_id>_<short_description>`
- Include the issue/PR number for traceability
- Add a short description of what broke

**Test Documentation:**
- **Bug description**: What failed
- **Date fixed**: When the fix was applied
- **Root cause**: Why it happened
- **Test purpose**: What regression is prevented

### 3. Regression Test Coverage

**What to Test:**
1. **Exact Failure Scenario**: Reproduce original bug conditions
2. **Edge Cases Around Fix**: Test boundaries near the bug
3. **Integration Impact**: Test how the fix affects dependent code
4. **Performance**: If the bug was performance-related, add a benchmark

**What NOT to Test:**
- Don't duplicate existing unit tests
- Don't test obvious behavior already covered
- Don't over-specify implementation details (brittle tests)

---

## Workflow Integration

### Standard Bug Fix Flow

```bash
# 1. Fix the bug
/lazy code "fix: null pointer in payment processor"

# ✓ Bug fixed and committed

# 2. Regression testing skill evaluates
# (Automatic trigger after bug fix commit)

# Decision: Add regression test?
# - Severity: HIGH (production crash)
# - Coverage: 65% (medium)
# - Complexity: MEDIUM
# → YES, add regression test

# 3. Implement regression test
# ✓ test_regression_issue_123_null_pointer_in_payment() added
# ✓ Coverage increased to 78%
# ✓ Test passes (bug is fixed)

# 4. Commit regression test
git add tests/test_payment_regression.py
git commit -m "test: add regression test for issue #123 null pointer"
```

### Quick Bug Fix (Skip Regression)

```bash
# 1. Fix trivial bug
/lazy code "fix: typo in error message"

# ✓ Bug fixed

# 2. Regression testing skill evaluates
# Decision: Add regression test?
# - Severity: LOW (cosmetic)
# - Coverage: 95% (excellent)
# - Complexity: LOW (trivial)
# → NO, skip regression test (low value, already well-tested)

# 3. Commit fix only
# No additional test needed
```

---

## Regression Test Suite Management

### Organization

```
tests/
├── test_module.py              # Regular unit tests
├── test_module_integration.py  # Integration tests
└── test_module_regression.py   # Regression tests (this skill)
```

**Keep regression tests separate** to:
- Track historical bug fixes
- Make it easy to identify which tests prevent regressions
- Run them as a separate CI job for faster feedback

### CI/CD Integration

```yaml
# .github/workflows/ci.yml

jobs:
  regression-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Run regression test suite
        run: pytest tests/*_regression.py -v --tb=short

# Fast feedback: regression tests run first.
# If they fail, a regression has likely occurred.
```

### Regression Test Metrics

**Track Over Time:**
- Total regression test count
- Bug recurrence rate (0% is the goal)
- Coverage increase from regression tests
- Time to detect a regression (should be caught in CI, not production)

---

## Examples

### Example 1: Critical Bug (Add Regression Test)

**Bug**: Authentication bypass when session token is malformed
**Fix**: Added token validation
**Decision**: ✅ **Add regression test** (security critical)

```python
def test_regression_issue_789_auth_bypass_malformed_token():
    """
    Regression test for security issue #789.

    Bug: Malformed session tokens bypassed authentication
    Fixed: 2025-10-30
    Severity: CRITICAL (security)
    Root cause: Missing token format validation
    """
    # Arrange: Malformed token that bypassed auth
    malformed_token = "invalid||format||token"

    # Act: Attempt authentication
    result = AuthService.validate_token(malformed_token)

    # Assert: Should reject malformed token
    assert result.is_valid is False
    assert result.error == "invalid_token_format"
```

### Example 2: Complex Bug (Add Regression Test)

**Bug**: Race condition in distributed lock causes duplicate job execution
**Fix**: Atomic compare-and-swap operation
**Decision**: ✅ **Add regression test** (complex concurrency issue)

```python
def test_regression_pr_234_race_condition_duplicate_jobs():
    """
    Regression test for PR #234.

    Bug: Race condition allowed duplicate job execution
    Fixed: 2025-10-30
    Complexity: HIGH (concurrency)
    Root cause: Non-atomic lock acquisition
    """
    # Arrange: Simulate concurrent job submissions
    job_queue = DistributedJobQueue()
    job_id = "test-job-123"

    # Act: 100 threads try to acquire same job
    with ThreadPoolExecutor(max_workers=100) as executor:
        futures = [
            executor.submit(job_queue.try_acquire_job, job_id)
            for _ in range(100)
        ]
        results = [f.result() for f in futures]

    # Assert: Only ONE thread should acquire the job
    acquired = [r for r in results if r.acquired]
    assert len(acquired) == 1, "Race condition: multiple threads acquired same job"
```

### Example 3: Trivial Bug (Skip Regression Test)

**Bug**: Typo in log message "Usre authenticated" → "User authenticated"
**Fix**: Corrected spelling
**Decision**: ❌ **Skip regression test** (cosmetic, no logic impact)

```
No test needed. The fix is obvious and has no functional impact.
Existing tests already cover authentication logic.
```

### Example 4: Well-Tested Area (Skip Regression Test)

**Bug**: Off-by-one error in pagination (page 1 showed 0 results)
**Fix**: Changed `offset = page * size` to `offset = (page - 1) * size`
**Coverage**: 95% (pagination thoroughly tested)
**Decision**: ❌ **Skip regression test** (area already has comprehensive tests)

```python
# Existing test already covers this:
def test_pagination_first_page_shows_results():
    results = api.get_users(page=1, size=10)
    assert len(results) == 10  # This test would have caught the bug
```

---

## Best Practices

### DO:
✅ Add regression tests for **critical and complex bugs**
✅ Include the **issue/PR number** in the test name for traceability
✅ Document **what broke, why, and when** in the test docstring
✅ Test the **exact failure scenario** that caused the bug
✅ Keep regression tests **separate** from unit tests (easier tracking)
✅ Run regression tests in **CI/CD** for early detection

### DON'T:
❌ Add regression tests for **trivial or cosmetic bugs**
❌ Duplicate **existing comprehensive tests**
❌ Write **brittle tests** that assert implementation details
❌ Skip **root cause analysis** (understand why it broke)
❌ Forget to **verify the test fails** before the fix (it should reproduce the bug)

---

## Output Format

When this skill triggers, provide:

````markdown
## Regression Test Evaluation

**Bug Fixed**: [Brief description]
**Issue/PR**: #[number]
**Severity**: [critical/high/medium/low]
**Complexity**: [high/medium/low]
**Current Coverage**: [X%]

**Decision**: [✅ Add Regression Test | ❌ Skip Regression Test]

**Reason**: [Why regression test is/isn't valuable]

---

[If adding test]
## Regression Test Implementation

**File**: `tests/test_<module>_regression.py`

```python
def test_regression_<issue>_<description>():
    """
    [Docstring with bug context]
    """
    # Test implementation
```

**Coverage Impact**: +X% (before: Y%, after: Z%)
````

---

## Integration with Other Skills

- **Works with**: `test-driven-development` (adds tests post-fix)
- **Complements**: `code-review-request` (reviewer checks for regression tests)
- **Used by**: `/lazy fix` command (auto-evaluates regression test need)

---

## Configuration

**Environment Variables:**
```bash
# Force regression tests for all bugs (strict mode)
export LAZYDEV_FORCE_REGRESSION_TESTS=1

# Disable regression test skill
export LAZYDEV_DISABLE_REGRESSION_SKILL=1

# Minimum coverage threshold to skip regression test (default: 90)
export LAZYDEV_REGRESSION_SKIP_COVERAGE_THRESHOLD=90
```

---

**Version**: 1.0.0
**Created**: 2025-10-30
**Anthropic Best Practice**: Model-invoked, autonomous trigger after bug fixes
274
.claude/skills/security-audit/SKILL.md
Normal file
@@ -0,0 +1,274 @@
---
name: security-audit
description: Triggers for authentication, payments, user input, and API endpoints to check OWASP risks. Auto-evaluates security need and provides actionable fixes, not checklists.
---

# Security Audit Skill

**Purpose**: Catch security vulnerabilities early with targeted checks, not generic checklists.

**Trigger Words**: auth, login, password, payment, credit card, token, API endpoint, user input, SQL, database query, session, cookie, upload

---

## Quick Decision: When to Audit?

```python
def needs_security_audit(code_context: dict) -> bool:
    """Fast security risk evaluation."""

    # ALWAYS audit these (high risk)
    critical_patterns = [
        "authentication", "authorization", "login", "password",
        "payment", "credit card", "billing", "stripe", "paypal",
        "admin", "sudo", "privilege", "role",
        "token", "jwt", "session", "cookie",
        "sql", "database", "query", "exec", "eval",
        "upload", "file", "download", "path traversal"
    ]

    # Check if any critical pattern appears in the description
    if any(p in code_context.get("description", "").lower() for p in critical_patterns):
        return True

    # Skip for: docs, tests, config, low-risk utils
    # (substring match against each changed file path)
    skip_patterns = ["test_", "docs/", "README", "config", "utils"]
    files = code_context.get("files", [])
    if any(p in f for f in files for p in skip_patterns):
        return False

    return False
```

---

## Security Checks (Targeted, Not Exhaustive)

### 1. **Input Validation** (Most Common)
```python
# ❌ BAD - No validation
def get_user(user_id):
    return db.query(f"SELECT * FROM users WHERE id = {user_id}")

# ✅ GOOD - Validated + parameterized
def get_user(user_id: int):
    if not isinstance(user_id, int) or user_id <= 0:
        raise ValueError("Invalid user_id")
    return db.query("SELECT * FROM users WHERE id = ?", [user_id])
```

**Quick Fix**: Add type hints + validation at entry points.

---

### 2. **SQL Injection** (Critical)
```python
# ❌ BAD - String interpolation
query = f"SELECT * FROM users WHERE email = '{email}'"

# ✅ GOOD - Parameterized queries
query = "SELECT * FROM users WHERE email = ?"
db.execute(query, [email])
```

**Quick Fix**: Never use f-strings for SQL. Use an ORM or parameterized queries.

---

### 3. **Authentication & Secrets** (Critical)
```python
# ❌ BAD - Hardcoded secrets
API_KEY = "sk_live_abc123"
password = "admin123"

# ✅ GOOD - Environment variables
API_KEY = os.getenv("STRIPE_API_KEY")
# Passwords: bcrypt hashed, never plaintext

# ❌ BAD - Weak session
session["user_id"] = user_id  # No expiry, no signing

# ✅ GOOD - Secure session
session.permanent = False
session["user_id"] = user_id
session["expires"] = time.time() + 3600  # 1 hour
```

**Quick Fix**: Extract secrets to .env, hash passwords, add session expiry.

---

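The parameterized pattern above can be demonstrated end to end with stdlib `sqlite3` (schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ["user@example.com"])

# Attacker-controlled input stays inert as a bound parameter
malicious = "x' OR '1'='1"
rows = conn.execute(
    "SELECT id FROM users WHERE email = ?", [malicious]
).fetchall()
assert rows == []  # matched as a literal string: no injection, no rows

# Legitimate lookups work as expected
rows = conn.execute(
    "SELECT email FROM users WHERE email = ?", ["user@example.com"]
).fetchall()
```

With f-string interpolation the same `malicious` input would rewrite the WHERE clause and return every row.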
### 4. **Authorization** (Often Forgotten)
```python
# ❌ BAD - Missing authorization check
@app.route("/admin/users/<user_id>", methods=["DELETE"])
def delete_user(user_id):
    User.delete(user_id)  # Anyone can delete!

# ✅ GOOD - Check permissions
@app.route("/admin/users/<user_id>", methods=["DELETE"])
@require_role("admin")
def delete_user(user_id):
    if not current_user.can_delete(user_id):
        abort(403)
    User.delete(user_id)
```

**Quick Fix**: Add permission checks before destructive operations.

---

### 5. **Rate Limiting** (API Endpoints)
|
||||
```python
|
||||
# ❌ BAD - No rate limit
|
||||
@app.route("/api/login", methods=["POST"])
|
||||
def login():
|
||||
# Brute force possible
|
||||
return authenticate(request.json)
|
||||
|
||||
# ✅ GOOD - Rate limited
|
||||
@app.route("/api/login", methods=["POST"])
|
||||
@rate_limit("5 per minute")
|
||||
def login():
|
||||
return authenticate(request.json)
|
||||
```
|
||||
|
||||
**Quick Fix**: Add rate limiting to login, payment, sensitive endpoints.
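
`rate_limit` above is assumed to come from an extension such as Flask-Limiter; a minimal in-memory sliding-window sketch of the underlying idea (per-process only, not suitable for multi-instance deployments):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` calls per `window` seconds, per key."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        q = self.calls[key]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

limiter = RateLimiter(limit=5, window=60)
results = [limiter.allow("1.2.3.4") for _ in range(6)]
print(results)  # first five True, sixth False
```

A production setup would back this with Redis so all API instances share one counter.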

---

### 6. **XSS Prevention** (Frontend/Templates)

```python
# ❌ BAD - Unescaped user input
return f"<div>Welcome {username}</div>"  # XSS if username = "<script>alert('XSS')</script>"

# ✅ GOOD - Escaped output
from html import escape
return f"<div>Welcome {escape(username)}</div>"

# Or rely on framework escaping (Jinja2 and React auto-escape by default)
```

**Quick Fix**: Escape user input in HTML. Use framework defaults.
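
The effect of `html.escape` on the payload above can be checked directly:

```python
from html import escape

username = "<script>alert('XSS')</script>"
safe = f"<div>Welcome {escape(username)}</div>"
print(safe)
# <div>Welcome &lt;script&gt;alert(&#x27;XSS&#x27;)&lt;/script&gt;</div>
```

The angle brackets and quotes are converted to entities, so the browser renders the payload as text instead of executing it.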

---

### 7. **File Upload Safety**

```python
# ❌ BAD - No validation
@app.route("/upload", methods=["POST"])
def upload():
    file = request.files["file"]
    file.save(f"uploads/{file.filename}")  # Path traversal! Overwrite!

# ✅ GOOD - Validated
import os
from werkzeug.utils import secure_filename

ALLOWED_EXTENSIONS = {"png", "jpg", "pdf"}

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files["file"]
    if not file or "." not in file.filename:
        abort(400, "Invalid file")

    ext = file.filename.rsplit(".", 1)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        abort(400, "File type not allowed")

    filename = secure_filename(file.filename)
    file.save(os.path.join("uploads", filename))
```

**Quick Fix**: Whitelist extensions, sanitize filenames, limit size.
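
If werkzeug is unavailable, the same checks can be sketched with the standard library; `safe_name` below is a simplified stand-in for `secure_filename`, not a full replacement:

```python
import os
import re

ALLOWED_EXTENSIONS = {"png", "jpg", "pdf"}

def safe_name(filename: str):
    """Return a sanitized filename, or None if the upload should be rejected."""
    # Strip any directory components an attacker may have embedded.
    base = os.path.basename(filename.replace("\\", "/"))
    if "." not in base:
        return None
    stem, ext = base.rsplit(".", 1)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        return None
    # Keep a conservative character set for the stem.
    stem = re.sub(r"[^A-Za-z0-9_.-]", "_", stem) or "upload"
    return f"{stem}.{ext.lower()}"

print(safe_name("../../etc/passwd"))   # None (no allowed extension)
print(safe_name("..\\..\\evil.png"))   # evil.png
print(safe_name("report final.PDF"))   # report_final.pdf
```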

---

## Output Format (Actionable Only)

````markdown
## Security Audit Results

**Risk Level**: [CRITICAL | HIGH | MEDIUM | LOW]

### Issues Found: X

1. **[CRITICAL] SQL Injection in get_user() (auth.py:45)**
   - Issue: f-string used for SQL query
   - Fix: Use parameterized query
   - Code:
     ```python
     # Change this:
     query = f"SELECT * FROM users WHERE id = {user_id}"
     # To this:
     query = "SELECT * FROM users WHERE id = ?"
     db.execute(query, [user_id])
     ```

2. **[HIGH] Missing rate limiting on /api/login**
   - Issue: Brute force attacks possible
   - Fix: Add @rate_limit("5 per minute") decorator

3. **[MEDIUM] Hardcoded API key in config.py:12**
   - Issue: Secret in code
   - Fix: Move to environment variable

---

**Next Steps**:
1. Fix CRITICAL issues first (SQL injection)
2. Add rate limiting (5 min fix)
3. Extract secrets to .env
4. Re-run security audit after fixes
````

---

## Integration with Workflow

```bash
# Automatic trigger
/lazy code "add user login endpoint"

→ security-audit triggers
→ Checks: password handling, session, rate limiting
→ Finds: Missing bcrypt hash, no rate limit
→ Suggests fixes with code examples
→ Developer applies fixes
→ Re-audit confirms: ✅ Secure

# Manual trigger
Skill(command="security-audit")
```

---

## What This Skill Does NOT Do

❌ Generate 50-item security checklists (not actionable)
❌ Flag every minor issue (noise)
❌ Require penetration testing (that's a different tool)
❌ Cover infrastructure security (AWS, Docker, etc.)

✅ **DOES**: Catch common code-level vulnerabilities with fast, practical fixes.

---

## Configuration

```bash
# Strict mode: audit everything (slower)
export LAZYDEV_SECURITY_STRICT=1

# Disable security skill
export LAZYDEV_DISABLE_SECURITY=1

# Focus on specific risks only
export LAZYDEV_SECURITY_FOCUS="sql,auth,xss"
```

---

**Version**: 1.0.0
**OWASP Coverage**: SQL Injection, XSS, Broken Auth, Insecure Design, Security Misconfiguration
**Speed**: <5 seconds for typical file
31
.claude/skills/story-traceability/SKILL.md
Normal file
@@ -0,0 +1,31 @@
---
name: story-traceability
description: Ensure Acceptance Criteria map to Tasks and Tests for the PR-per-story workflow
version: 0.1.0
tags: [planning, QA]
triggers:
  - acceptance criteria
  - user story
  - traceability
---

# Story Traceability

## Purpose
Create a clear AC → Task → Test mapping to guarantee coverage and reviewability.

## Behavior
1. Build a table: AC | Task(s) | Test(s) | Notes.
2. Insert it into `USER-STORY.md`; add brief references to each `TASK-*.md`.
3. Call out missing mappings; propose test names.
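
The table from step 1 might look like this (story and task IDs are illustrative):

```markdown
| AC | Task(s) | Test(s) | Notes |
|----|---------|---------|-------|
| AC-1: Price must be positive | TASK-US-20251027-001-1.md | test_price_must_be_positive | |
| AC-2: Invalid input returns 400 | TASK-US-20251027-001-2.md | test_invalid_price_returns_400 | Edge: empty string |
```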

## Guardrails
- Every AC must have ≥1 task and ≥1 test.
- Keep the table compact; link file paths precisely.

## Integration
- Project Manager agent; `/lazy create-feature` output phase.

## Example Prompt
> Add traceability for US-20251027-001.
31
.claude/skills/task-slicer/SKILL.md
Normal file
@@ -0,0 +1,31 @@
---
name: task-slicer
description: Split features into atomic 2–4h tasks with independent tests and minimal dependencies
version: 0.1.0
tags: [planning, tasks]
triggers:
  - break into tasks
  - task list
  - estimates
---

# Task Slicer

## Purpose
Turn a user story into small, testable tasks with clear inputs and outputs.

## Behavior
1. Create 3–10 tasks, each 2–4 hours.
2. For each task: description, files, test focus, dependencies, estimate.
3. Name files `TASK-US-<id>-<n>.md` and reference the story ID.

## Guardrails
- Prefer independence; minimize cross-task dependencies.
- Split or merge tasks to hit the target size.

## Integration
- Project Manager agent; `/lazy create-feature` task generation step.

## Example Prompt
> Slice US-20251027-001 into executable tasks.
376
.claude/skills/tech-stack-architect/SKILL.md
Normal file
@@ -0,0 +1,376 @@
---
name: tech-stack-architect
description: Design a complete technology stack and system architecture from project requirements - generates TECH-STACK.md with frontend/backend/database/DevOps choices plus rationale, and ARCHITECTURE.md with components, data flow, and mermaid diagrams
version: 0.1.0
tags: [architecture, planning, tech-stack, design]
triggers:
  - tech stack
  - architecture
  - technology choices
  - system design
  - architecture diagram
---

# Tech Stack Architect

## Purpose
Generate a comprehensive technology stack selection and system architecture design from project requirements. Creates two foundational documents that guide implementation.

## When to Use
- Starting a new project after PROJECT-OVERVIEW.md is created
- Re-architecting an existing system
- Technology evaluation and selection
- Architecture documentation is needed
- The user mentions "tech stack", "architecture", or "system design"

## Behavior

### Phase 1: Technology Stack Selection

1. **Read PROJECT-OVERVIEW.md** for:
   - Project goals and constraints
   - Scale requirements (users, data, traffic)
   - Team skills and preferences
   - Budget and timeline
   - Compliance requirements

2. **Analyze requirements** across 4 categories:
   - Frontend (framework, state management, UI library)
   - Backend (language, framework, API style)
   - Database (RDBMS, NoSQL, caching, search)
   - DevOps (hosting, CI/CD, monitoring, security)

3. **Generate TECH-STACK.md** with:
   - **Category tables**: Technology | Rationale | Alternatives Considered
   - **Integration notes**: How the technologies work together
   - **Trade-offs**: What you gain and lose with this stack
   - **Migration path**: How to evolve the stack
   - **Team considerations**: Learning curve, hiring, support

### Phase 2: System Architecture Design

1. **Design components**:
   - Client-side architecture
   - API layer and services
   - Data storage and caching
   - Background jobs and queues
   - External integrations

2. **Define data flow**:
   - Request/response paths
   - Authentication flow
   - Data persistence patterns
   - Event-driven flows (if applicable)

3. **Generate ARCHITECTURE.md** with:
   - **System Overview**: High-level component diagram (C4 Context)
   - **Component Details**: Responsibilities, interfaces, dependencies
   - **Data Flow Diagrams**: Key user journeys with sequence diagrams
   - **Scalability Strategy**: Horizontal scaling, caching, load balancing
   - **Security Architecture**: Auth, encryption, OWASP considerations
   - **Mermaid Diagrams**: C4, sequence, data flow, deployment

## Output Style
- Use `table-based` for technology comparisons
- Use `markdown-focused` with mermaid diagrams for architecture
- Keep rationales concise (1–2 sentences per choice)
- Include visual diagrams for clarity

## Output Files

### 1. project-management/TECH-STACK.md

```markdown
# Technology Stack

## Summary
[2-3 sentence overview of the stack philosophy]

## Frontend Stack

| Technology | Choice | Rationale | Alternatives Considered |
|------------|--------|-----------|------------------------|
| Framework | React 18 | ... | Vue, Svelte, Angular |
| State | Zustand | ... | Redux, Jotai, Context |
| UI Library | Tailwind + shadcn/ui | ... | MUI, Chakra, custom |
| Build | Vite | ... | Webpack, Turbopack |

## Backend Stack

| Technology | Choice | Rationale | Alternatives Considered |
|------------|--------|-----------|------------------------|
| Language | Python 3.11 | ... | Node.js, Go, Rust |
| Framework | FastAPI | ... | Django, Flask, Express |
| API Style | REST + OpenAPI | ... | GraphQL, gRPC, tRPC |

## Database & Storage

| Technology | Choice | Rationale | Alternatives Considered |
|------------|--------|-----------|------------------------|
| Primary DB | PostgreSQL 15 | ... | MySQL, MongoDB, SQLite |
| Caching | Redis | ... | Memcached, Valkey |
| Search | ElasticSearch | ... | Algolia, Meilisearch |
| Object Storage | S3 | ... | MinIO, CloudFlare R2 |

## DevOps & Infrastructure

| Technology | Choice | Rationale | Alternatives Considered |
|------------|--------|-----------|------------------------|
| Hosting | AWS ECS Fargate | ... | k8s, VM, serverless |
| CI/CD | GitHub Actions | ... | GitLab CI, CircleCI |
| Monitoring | DataDog | ... | Grafana, New Relic |
| Secrets | AWS Secrets Manager | ... | Vault, Doppler |

## Integration Notes
- [How frontend talks to backend]
- [Database connection pooling strategy]
- [Caching layer integration]
- [CI/CD pipeline flow]

## Trade-offs
**Gains**: [What this stack provides]
**Costs**: [Complexity, vendor lock-in, learning curve]

## Migration Path
- Phase 1: [Initial minimal stack]
- Phase 2: [Add caching, search]
- Phase 3: [Scale horizontally]

## Team Considerations
- **Learning Curve**: [Estimate for team]
- **Hiring**: [Availability of talent]
- **Support**: [Community, docs, enterprise support]
```

### 2. project-management/ARCHITECTURE.md

```markdown
# System Architecture

## Overview
[2-3 sentence description of the system]

## C4 Context Diagram
```mermaid
C4Context
    title System Context for [Project Name]

    Person(user, "User", "End user of the system")
    System(app, "Application", "Main system")
    System_Ext(auth, "Auth Provider", "OAuth2 provider")
    System_Ext(payment, "Payment Gateway", "Stripe")

    Rel(user, app, "Uses", "HTTPS")
    Rel(app, auth, "Authenticates", "OAuth2")
    Rel(app, payment, "Processes payments", "API")
```

## Component Architecture
```mermaid
graph TB
    Client[React Client]
    API[FastAPI Backend]
    DB[(PostgreSQL)]
    Cache[(Redis)]
    Queue[Job Queue]
    Worker[Background Workers]

    Client -->|HTTPS/JSON| API
    API -->|SQL| DB
    API -->|GET/SET| Cache
    API -->|Enqueue| Queue
    Queue -->|Process| Worker
    Worker -->|Update| DB
```

### Component Details

**Client (React)**
- **Responsibilities**: UI rendering, state management, client-side validation
- **Key Libraries**: React Router, Zustand, React Query
- **Interfaces**: REST API via fetch/axios

**API (FastAPI)**
- **Responsibilities**: Business logic, validation, auth, rate limiting
- **Key Modules**: auth, users, payments, notifications
- **Interfaces**: REST endpoints (OpenAPI), WebSocket (notifications)

**Database (PostgreSQL)**
- **Responsibilities**: Persistent data storage, relational integrity
- **Schema**: Users, sessions, transactions, audit logs
- **Patterns**: Repository pattern, connection pooling

**Cache (Redis)**
- **Responsibilities**: Session storage, rate limiting, job queue
- **TTL Strategy**: Sessions (24h), API cache (5m), rate limits (1h)

**Background Workers**
- **Responsibilities**: Email sending, report generation, cleanup jobs
- **Queue**: Redis-backed Celery/ARQ
- **Monitoring**: Dead letter queue, retry logic

## Authentication Flow
```mermaid
sequenceDiagram
    participant User
    participant Client
    participant API
    participant Auth0
    participant DB

    User->>Client: Click "Login"
    Client->>Auth0: Redirect to OAuth2
    Auth0->>Client: Return auth code
    Client->>API: Exchange code for token
    API->>Auth0: Validate code
    Auth0->>API: User profile
    API->>DB: Create/update user
    API->>Client: Return JWT token
    Client->>Client: Store token (httpOnly cookie)
```

## Data Flow: User Registration
```mermaid
sequenceDiagram
    participant Client
    participant API
    participant DB
    participant Queue
    participant Worker
    participant Email

    Client->>API: POST /api/register
    API->>API: Validate input
    API->>DB: Create user (inactive)
    API->>Queue: Enqueue welcome email
    API->>Client: 201 Created
    Queue->>Worker: Process email job
    Worker->>Email: Send welcome email
    Worker->>DB: Log email sent
```

## Scalability Strategy

### Horizontal Scaling
- **API**: Stateless containers (2-10 instances behind ALB)
- **Database**: Read replicas for reporting queries
- **Cache**: Redis Cluster (3+ nodes)
- **Workers**: Auto-scale based on queue depth

### Caching Strategy
- **API Responses**: Cache GET endpoints (5m TTL)
- **Database Queries**: Query result cache in Redis
- **Static Assets**: CDN (CloudFront) with edge caching

### Load Balancing
- **Application**: AWS ALB with health checks
- **Database**: pgpool for read/write splitting
- **Geographic**: Multi-region deployment (future)

## Security Architecture

### Authentication & Authorization
- **Strategy**: OAuth2 + JWT tokens (15m access, 7d refresh)
- **Storage**: httpOnly cookies for web, secure storage for mobile
- **Rotation**: Automatic token refresh

### Data Protection
- **At Rest**: PostgreSQL encryption (AWS RDS)
- **In Transit**: TLS 1.3 for all connections
- **Secrets**: AWS Secrets Manager, rotated monthly

### OWASP Top 10 Mitigations
- **Injection**: Parameterized queries (SQLAlchemy ORM)
- **Auth**: JWT validation, session management
- **XSS**: Content Security Policy, input sanitization
- **CSRF**: SameSite cookies, CSRF tokens
- **Rate Limiting**: Redis-backed (100 req/min per IP)

### Network Security
- **VPC**: Private subnets for DB/workers
- **Security Groups**: Least privilege access
- **WAF**: CloudFront WAF rules

## Deployment Architecture
```mermaid
graph TB
    subgraph "Public Subnet"
        ALB[Application Load Balancer]
    end

    subgraph "Private Subnet - App Tier"
        API1[API Container 1]
        API2[API Container 2]
        Worker1[Worker Container]
    end

    subgraph "Private Subnet - Data Tier"
        DB[(RDS PostgreSQL)]
        Cache[(ElastiCache Redis)]
    end

    Internet((Internet)) --> ALB
    ALB --> API1
    ALB --> API2
    API1 --> DB
    API2 --> DB
    API1 --> Cache
    API2 --> Cache
    Worker1 --> DB
    Worker1 --> Cache
```

## Monitoring & Observability

**Metrics**:
- API latency (p50, p95, p99)
- Error rates by endpoint
- Database connection pool usage
- Cache hit/miss ratios

**Logging**:
- Structured JSON logs (ECS logs to CloudWatch)
- Request ID tracing across services
- Error tracking (Sentry)

**Alerting**:
- API error rate >1%
- Database connections >80%
- Job queue depth >1000

## Future Considerations

**Phase 2 Enhancements**:
- GraphQL API option
- WebSocket real-time updates
- ElasticSearch for full-text search

**Phase 3 Scale**:
- Multi-region deployment
- Event-driven microservices
- CQRS for read-heavy workloads
```

## Guardrails
- Keep technology choices pragmatic (avoid hype-driven development)
- Consider team skills when selecting the stack
- Prefer managed services over self-hosted for DevOps
- Include alternatives to show deliberate choice
- Use mermaid for all diagrams (portable, version-controllable)
- Keep each document under 400 lines
- Link to official docs for each technology

## Integration
- Run after PROJECT-OVERVIEW.md is created
- Feed into `/lazy plan` for user story creation
- Reference during `/lazy code` for implementation consistency
- Update during `/lazy review` if the architecture evolves

## Example Prompt
> Design the tech stack and architecture for this project

## Validation Checklist
- [ ] TECH-STACK.md has all 4 categories (Frontend, Backend, Database, DevOps)
- [ ] Each technology has rationale and alternatives
- [ ] ARCHITECTURE.md has a system overview + 3+ mermaid diagrams
- [ ] Authentication and data flow are documented
- [ ] Scalability and security sections are complete
- [ ] Trade-offs and migration path are clear
36
.claude/skills/test-driven-development/SKILL.md
Normal file
@@ -0,0 +1,36 @@
---
name: test-driven-development
description: Enforce RED→GREEN→REFACTOR micro-cycles and keep diffs minimal
version: 0.1.0
tags: [testing, quality]
triggers:
  - tdd
  - tests first
  - failing test
---

# Test-Driven Development

## Purpose
Bias implementation toward tests-first and small, verifiable changes.

## Behavior
1. RED: scaffold 1–3 failing tests targeting the smallest slice.
2. GREEN: implement the minimum code to pass.
3. REFACTOR: improve names and structure while tests stay green.
4. Repeat in tiny increments until the task's acceptance criteria are met.
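
One RED→GREEN turn of the cycle can be illustrated with a tiny, self-contained example (function and test names are illustrative, borrowing the price-validation prompt below):

```python
# RED: write the smallest failing test first.
def test_price_must_be_positive():
    assert validate_price("19.99") == 19.99
    try:
        validate_price("-1")
    except ValueError:
        pass
    else:
        raise AssertionError("negative prices must be rejected")

# GREEN: the minimum implementation that makes the test pass.
def validate_price(raw: str) -> float:
    value = float(raw)
    if value <= 0:
        raise ValueError("price must be positive")
    return value

test_price_must_be_positive()  # passes once GREEN is in place
```

REFACTOR would then rename or restructure with the test kept green, before starting the next micro-cycle.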

## Guardrails
- Block large edits unless a failing test exists.
- Prefer small diffs spanning ≤3 files.
- Keep test names explicit and deterministic.

## Output Style
- `bullet-points` for steps; `markdown-focused` for code blocks.

## Integration
- `/lazy task-exec` implementation phase; Coder/Tester agents.

## Example Prompt
> Apply TDD to implement input validation for prices.