Initial commit

Zhongwei Li
2025-11-30 08:38:46 +08:00
commit 6902106648
49 changed files with 11466 additions and 0 deletions

.claude/commands/clean.md (new file, 758 lines)

@@ -0,0 +1,758 @@
---
description: Remove dead code safely from codebase
argument-hint: [scope]
allowed-tools: Read, Write, Edit, Bash, Task, Glob, Grep
model: claude-haiku-4-5-20251001
---
# `/lazy cleanup` - Safe Dead Code Removal
You are the **Cleanup Command Handler** for LAZY-DEV-FRAMEWORK. Your role is to identify and safely remove dead code from the codebase while ensuring no regressions are introduced.
## Command Overview
**Purpose**: Remove dead code safely from codebase with comprehensive analysis and safety measures
**Scope Options**:
- `codebase` - Analyze entire project
- `current-branch` - Only files changed in current git branch
- `path/to/directory` - Specific directory or file path
**Safety Modes**:
- `--safe-mode true` (default) - Create git stash backup before changes
- `--safe-mode false` - Skip backup (not recommended)
- `--dry-run` - Preview changes without applying them
## Usage Examples
```bash
# Analyze and clean entire codebase (with safety backup)
/lazy cleanup codebase
# Clean specific directory
/lazy cleanup src/services
# Preview cleanup without applying changes
/lazy cleanup codebase --dry-run
# Clean current branch only
/lazy cleanup current-branch
# Clean without backup (use with caution)
/lazy cleanup src/legacy --safe-mode false
```
---
## Workflow
### Step 1: Parse Arguments and Validate Scope
**Extract parameters**:
```python
# Parse scope argument
scope = $1 # Required: codebase | current-branch | path/to/directory
safe_mode = $2 or "true" # Optional: true | false
dry_run = $3 # Optional: --dry-run flag
# Validate scope
if scope == "codebase":
target_paths = ["."] # Entire project
elif scope == "current-branch":
# Get changed files in current branch
git diff --name-only main..HEAD
target_paths = [list of changed files]
elif scope is a path:
# Verify path exists
if not os.path.exists(scope):
ERROR: "Path not found: {scope}"
target_paths = [scope]
```
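A minimal, runnable sketch of the same scope resolution outside the command runtime (the `resolve_scope` helper and its comparison against `main` are assumptions mirroring the pseudocode above):
```python
import os
import subprocess

def resolve_scope(scope: str) -> list[str]:
    """Map a cleanup scope argument to concrete target paths."""
    if scope == "codebase":
        return ["."]  # analyze the entire project
    if scope == "current-branch":
        # Files changed on the current branch relative to main
        out = subprocess.run(
            ["git", "diff", "--name-only", "main..HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line for line in out.splitlines() if line.strip()]
    if not os.path.exists(scope):
        raise FileNotFoundError(f"Path not found: {scope}")
    return [scope]  # a specific file or directory

# resolve_scope("src/services") -> ["src/services"]
```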
**Display scan scope**:
```
🧹 Cleanup Analysis Starting...
Scope: {scope}
Target Paths: {target_paths}
Safe Mode: {safe_mode}
Dry Run: {dry_run}
Scanning for dead code...
```
---
### Step 2: Scan for Dead Code Patterns
**Use Glob and Grep tools to identify**:
#### A. Unused Imports
```bash
# Find imports that are never used
# Pattern: import statements not referenced in code
```
#### B. Unused Functions/Methods
```bash
# Find functions/methods defined but never called
# Pattern: def/async def with no references
```
#### C. Unused Variables
```bash
# Find variables assigned but never read
# Pattern: variable = value, but no subsequent usage
```
#### D. Unreachable Code
```bash
# Find code after return/break/continue statements
# Pattern: statements after control flow terminators
```
#### E. Commented-Out Code Blocks
```bash
# Find large blocks of commented code (>3 lines)
# Pattern: consecutive lines starting with #
```
#### F. Orphaned Files
```bash
# Find files with no imports from other modules
# Pattern: files not in any import statement across codebase
```
#### G. Deprecated Code
```bash
# Find code marked with @deprecated decorator or TODO: remove
# Pattern: @deprecated, # TODO: remove, # DEPRECATED
```
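To make these scans concrete, here is a hedged sketch of one of them, unused-import detection via the `ast` module. It is per-file only and does not see dynamic usage (`getattr`, string references), which is why findings still pass through the safety assessment in Step 3:
```python
import ast
import sys

def unused_imports(path: str) -> list[tuple[int, str]]:
    """Return (line, name) pairs for imports never referenced in the file."""
    with open(path, encoding="utf-8") as fh:
        tree = ast.parse(fh.read())
    imported = {}  # local name -> line number where it was imported
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported[(alias.asname or alias.name).split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return [(line, name) for name, line in imported.items() if name not in used]

if __name__ == "__main__":
    for line, name in unused_imports(sys.argv[1]):
        print(f"{sys.argv[1]}:{line}: unused import '{name}'")
```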
---
### Step 3: Invoke Cleanup Agent for Analysis
**Call Cleanup Agent with findings**:
```markdown
@agent-cleanup
You are the **Cleanup Agent** for LAZY-DEV-FRAMEWORK. Analyze code to identify dead code that can be safely removed.
## Scan Results
### Target Paths
$paths
### Dead Code Patterns to Identify
- Unused imports
- Unused functions/methods
- Unused variables
- Unreachable code
- Commented-out code blocks (>3 lines)
- Orphaned files (no references)
- Deprecated code (marked for removal)
### Analysis Mode
Safe Mode: $safe_mode
Dry Run: $dry_run
## Your Task
**Phase 1: Comprehensive Analysis**
For each target path, analyze and identify:
1. **Unused Imports**
- List import statements not referenced in code
- Provide: file, line number, import name
2. **Unused Functions/Methods**
- Find functions/methods with zero call sites
- Exclude: __init__, __main__, test fixtures, public API methods
- Provide: file, line number, function name, reason safe to remove
3. **Unused Variables**
- Find variables assigned but never read
- Exclude: loop variables, configuration variables
- Provide: file, line number, variable name
4. **Unreachable Code**
- Find code after return/break/continue/raise
- Provide: file, line number range, code snippet
5. **Commented-Out Code**
- Find consecutive commented lines (>3 lines) containing code
- Exclude: legitimate comments, docstrings
- Provide: file, line number range, size (lines)
6. **Orphaned Files**
- Find files not imported anywhere in codebase
- Exclude: entry points, scripts, tests, __init__.py
- Provide: file path, size (lines), last modified date
7. **Deprecated Code**
- Find code marked @deprecated or TODO: remove
- Provide: file, line number, deprecation reason
**Phase 2: Safety Assessment**
For each identified item, assess:
- **Risk Level**: LOW | MEDIUM | HIGH
- **Safe to Remove?**: YES | NO | MAYBE (requires review)
- **Reason**: Why it's safe (or not safe) to remove
- **Dependencies**: Any code that depends on this
**Phase 3: Removal Recommendations**
Categorize findings:
**Safe to Remove (Low Risk)**
- Items with zero dependencies
- Clearly unused code with no side effects
⚠️ **Review Recommended (Medium Risk)**
- Items with unclear usage patterns
- Code that might be used via reflection/dynamic imports
**Do Not Remove (High Risk)**
- Public API methods (even if unused internally)
- Code with external dependencies
- Configuration code
- Test fixtures
## Output Format
```yaml
dead_code_analysis:
summary:
total_items: N
safe_to_remove: N
review_recommended: N
do_not_remove: N
safe_removals:
unused_imports:
- file: "path/to/file.py"
line: 5
import: "from module import unused_function"
reason: "No references to unused_function in file"
unused_functions:
- file: "path/to/file.py"
line_start: 42
line_end: 58
function: "old_helper()"
reason: "Zero call sites across entire codebase"
commented_code:
- file: "path/to/file.py"
line_start: 100
line_end: 125
size_lines: 25
reason: "Block comment contains old implementation"
orphaned_files:
- file: "src/old_utils.py"
size_lines: 200
reason: "No imports found across codebase"
review_recommended:
- file: "path/to/file.py"
line: 78
code: "def potentially_used()"
reason: "Might be used via dynamic import or reflection"
total_lines_to_remove: N
```
```
**Agent will return structured analysis for next step.**
---
### Step 4: Present Findings for User Approval
**Display findings summary**:
```
🧹 Cleanup Analysis Complete
Dead Code Found:
✅ SAFE TO REMOVE:
✓ {N} unused imports in {X} files
✓ {N} unused functions ({X} lines)
✓ {N} unused variables
✓ {N} unreachable code blocks ({X} lines)
✓ {N} lines of commented code
✓ {N} orphaned files ({file1.py, file2.py})
⚠️ REVIEW RECOMMENDED:
! {N} items need manual review:
- {item1}: {reason}
- {item2}: {reason}
📊 Impact:
Total lines to remove: {N}
Files affected: {X}
Estimated time saved: {Y} minutes in future maintenance
🔒 Safety:
Safe mode: {enabled/disabled}
Dry run: {yes/no}
Backup: {will be created/skipped}
```
**Ask user for approval**:
```
Apply cleanup?
Options:
[y] Yes - Apply all safe removals
[n] No - Cancel cleanup
[p] Preview - Show detailed preview of each change
[s] Selective - Review each item individually
Your choice:
```
---
### Step 5: Apply Cleanup (If Approved)
#### If user selects "Preview" (p):
```
📝 Detailed Preview:
1. UNUSED IMPORT: path/to/file.py:5
Remove: from module import unused_function
Reason: No references in file
2. UNUSED FUNCTION: path/to/file.py:42-58
Remove: def old_helper(): ...
Reason: Zero call sites
Code preview:
```python
def old_helper():
# 16 lines
...
```
3. ORPHANED FILE: src/old_utils.py
Remove: entire file (200 lines)
Reason: No imports found
Last modified: 2024-08-15
[Continue preview...]
Apply these changes? (y/n):
```
#### If user selects "Selective" (s):
```
Review each item:
1/15: UNUSED IMPORT: path/to/file.py:5
Remove: from module import unused_function
Apply? (y/n/q):
```
#### If user approves (y):
**Create safety backup** (if safe_mode=true):
```bash
# Create git stash with timestamp
git stash push -m "cleanup-backup-$(date +%Y%m%d-%H%M%S)" --include-untracked
# Output:
💾 Safety backup created: stash@{0}
To restore: git stash apply stash@{0}
```
**Apply removals using Edit tool**:
```python
# For each safe removal:
# 1. Use Edit tool to remove unused imports
# 2. Use Edit tool to remove unused functions
# 3. Use Edit tool to remove commented blocks
# 4. Use Bash tool to remove orphaned files
# Track changes
changes_applied = []
```
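A sketch of how a line-range removal could be applied and tracked outside the Edit tool; the `remove_lines` helper and the change-log shape are illustrative, not part of the framework:
```python
from pathlib import Path

def remove_lines(path: str, start: int, end: int, changes: list[dict]) -> None:
    """Delete lines start..end (1-indexed, inclusive) and record the change."""
    file = Path(path)
    lines = file.read_text(encoding="utf-8").splitlines(keepends=True)
    removed = lines[start - 1:end]
    file.write_text("".join(lines[:start - 1] + lines[end:]), encoding="utf-8")
    changes.append({"file": path, "start": start, "end": end,
                    "lines_removed": len(removed)})

changes_applied: list[dict] = []
# e.g. drop an unused helper reported at lines 42-58:
# remove_lines("path/to/file.py", 42, 58, changes_applied)
```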
**Display progress**:
```
🧹 Applying cleanup...
✅ Removed unused imports (5 files)
✅ Removed unused functions (3 files)
✅ Removed commented code (8 files)
✅ Removed orphaned files (2 files)
Total: 250 lines removed from 12 files
```
---
### Step 6: Run Quality Pipeline
**CRITICAL: Must pass quality gates before commit**
```bash
# Run quality checks
1. Format: python scripts/format.py {changed_files}
2. Lint: python scripts/lint.py {changed_files}
3. Type: python scripts/type_check.py {changed_files}
4. Test: python scripts/test_runner.py
# If ANY check fails:
- Restore from backup: git stash apply
- Report error to user
- Return: "Cleanup failed quality checks, changes reverted"
```
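A minimal sketch of this gate-then-revert flow, assuming the `scripts/*.py` entry points shown above exit non-zero on failure and that `stash_ref` holds the backup created in Step 5:
```python
import subprocess
import sys

def run_quality_pipeline(changed_files: list[str], stash_ref: str) -> bool:
    """Run format/lint/type checks on changed files, then the full test suite.
    On the first failure, restore the git stash backup and report."""
    steps = [
        ["python", "scripts/format.py", *changed_files],
        ["python", "scripts/lint.py", *changed_files],
        ["python", "scripts/type_check.py", *changed_files],
        ["python", "scripts/test_runner.py"],
    ]
    for cmd in steps:
        if subprocess.run(cmd).returncode != 0:
            print(f"Cleanup failed at: {cmd[1]}", file=sys.stderr)
            subprocess.run(["git", "stash", "apply", stash_ref], check=True)
            print("Changes reverted from backup", file=sys.stderr)
            return False
    return True
```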
**Display quality results**:
```
📊 Quality Pipeline: RUNNING...
✅ Format (Black/Ruff): PASS
✅ Lint (Ruff): PASS
✅ Type Check (Mypy): PASS
✅ Tests (Pytest): PASS
- 124/124 tests passing
- Coverage: 87% (unchanged)
All quality checks passed! ✅
```
---
### Step 7: Commit Changes
**Create commit** (if quality passes):
```bash
git add {changed_files}
git commit -m "$(cat <<'EOF'
chore(cleanup): remove dead code
Cleanup scope: {scope}
Files affected: {N}
Lines removed: {M}
Items removed:
- {N} unused imports
- {N} unused functions
- {N} commented code blocks
- {N} orphaned files
Quality pipeline: PASSED
All tests passing: ✅
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```
---
### Step 8: Return Summary
**Final output**:
```
✅ Cleanup Complete
📊 Summary:
Scope: {scope}
Files modified: {N}
Files deleted: {N}
Lines removed: {M}
Items removed:
✓ {N} unused imports
✓ {N} unused functions
✓ {N} unused variables
✓ {N} unreachable code blocks
✓ {N} lines of commented code
✓ {N} orphaned files
📊 Quality Pipeline: PASS
✅ Format (Black/Ruff)
✅ Lint (Ruff)
✅ Type Check (Mypy)
✅ Tests (Pytest)
💾 Committed: {commit_sha}
Message: "chore(cleanup): remove dead code"
💾 Safety Backup: {stash_id}
To restore if needed: git stash apply {stash_id}
🎯 Impact:
Code reduction: {M} lines
Maintainability: Improved
Future time saved: ~{X} minutes
```
---
## Error Handling
### Git Issues
| Error | Cause | Recovery |
|-------|-------|----------|
| **Not a git repository** | No .git directory | Initialize: `git init`, retry |
| **Dirty working tree** | Uncommitted changes | Commit or stash changes first |
| **Stash creation failed** | No changes to stash | Disable safe mode, retry |
### Cleanup Issues
| Error | Cause | Recovery |
|-------|-------|----------|
| **No dead code found** | Clean codebase | Return: "No dead code detected" |
| **Agent timeout** | Large codebase | Reduce scope, retry with specific path |
| **Path not found** | Invalid scope argument | Verify path exists, retry |
### Quality Pipeline Failures
| Error | Cause | Recovery |
|-------|-------|----------|
| **Format failed** | Syntax errors introduced | Restore from stash: `git stash apply`, report issue |
| **Lint failed** | Code quality regressions | Restore from stash, report issue |
| **Type check failed** | Type errors introduced | Restore from stash, report issue |
| **Tests failed** | Removed code was not actually dead | Restore from stash, mark as unsafe removal |
**Failure recovery pattern**:
```bash
# If quality pipeline fails:
echo "❌ Cleanup failed at: {stage}"
echo "🔄 Restoring from backup..."
git stash apply {stash_id}
echo "✅ Changes reverted"
echo ""
echo "Issue: {error_details}"
echo "Action: Review removals manually or report issue"
```
---
## Safety Constraints
**DO NOT REMOVE**:
- Public API methods (even if unused internally)
- Test fixtures and test utilities
- Configuration variables
- `__init__.py` files
- `__main__.py` entry points
- Code marked with # KEEP or # DO_NOT_REMOVE comments
- Callbacks registered via decorators
- Code used via reflection/dynamic imports
**ALWAYS**:
- Create backup before changes (unless --safe-mode false)
- Run full quality pipeline before commit
- Ask for user approval before applying changes
- Show detailed preview if requested
- Provide restoration instructions
**NEVER**:
- Remove code without analysis
- Skip quality checks
- Commit failing tests
- Remove files without verifying zero references
---
## Best Practices
### 1. Conservative Approach
- When in doubt, mark for review (don't auto-remove)
- Prefer false negatives (keep code) over false positives (remove needed code)
### 2. Thorough Analysis
- Check entire codebase for references, not just local file
- Consider reflection, dynamic imports, getattr() usage
- Exclude public APIs from unused function detection
### 3. Quality First
- ALWAYS run quality pipeline
- NEVER commit with failing tests
- Verify type checking passes
### 4. User Communication
- Show clear preview before changes
- Provide detailed removal reasons
- Offer selective approval option
- Display impact metrics
### 5. Safety Nets
- Default to safe mode (backup)
- Provide restoration instructions
- Auto-revert on quality failures
- Log all removals for audit
---
## Example Execution
### Command
```bash
/lazy cleanup src/services
```
### Output
```
🧹 Cleanup Analysis Starting...
Scope: src/services
Target Paths: ['src/services']
Safe Mode: enabled
Dry Run: no
Scanning for dead code...
🔍 Analyzing: src/services/auth.py
🔍 Analyzing: src/services/payment.py
🔍 Analyzing: src/services/notification.py
🧹 Cleanup Analysis Complete
Dead Code Found:
✅ SAFE TO REMOVE:
✓ 5 unused imports in 3 files
✓ 2 unused functions (35 lines)
✓ 3 unused variables
✓ 1 unreachable code block (8 lines)
✓ 50 lines of commented code
✓ 1 orphaned file (old_utils.py, 200 lines)
📊 Impact:
Total lines to remove: 293
Files affected: 4
Estimated time saved: 15 minutes in future maintenance
🔒 Safety:
Safe mode: enabled
Dry run: no
Backup: will be created
Apply cleanup? (y/n/p/s): y
💾 Safety backup created: stash@{0}
To restore: git stash apply stash@{0}
🧹 Applying cleanup...
✅ Removed unused imports (3 files)
✅ Removed unused functions (2 files)
✅ Removed commented code (3 files)
✅ Removed orphaned file: src/services/old_utils.py
Total: 293 lines removed from 4 files
📊 Quality Pipeline: RUNNING...
✅ Format (Black/Ruff): PASS
✅ Lint (Ruff): PASS
✅ Type Check (Mypy): PASS
✅ Tests (Pytest): PASS
- 124/124 tests passing
- Coverage: 87% (unchanged)
All quality checks passed! ✅
💾 Committing changes...
✅ Cleanup Complete
📊 Summary:
Scope: src/services
Files modified: 3
Files deleted: 1
Lines removed: 293
Items removed:
✓ 5 unused imports
✓ 2 unused functions
✓ 3 unused variables
✓ 1 unreachable code block
✓ 50 lines of commented code
✓ 1 orphaned file
📊 Quality Pipeline: PASS
💾 Committed: abc123def
Message: "chore(cleanup): remove dead code"
💾 Safety Backup: stash@{0}
To restore if needed: git stash apply stash@{0}
🎯 Impact:
Code reduction: 293 lines
Maintainability: Improved
Future time saved: ~15 minutes
```
---
## Session Logging
All cleanup activities logged to `logs/<session-id>/cleanup.json`:
```json
{
"command": "/lazy cleanup",
"scope": "src/services",
"safe_mode": true,
"dry_run": false,
"timestamp": "2025-10-26T10:30:00Z",
"analysis": {
"total_items_found": 15,
"safe_to_remove": 12,
"review_recommended": 3,
"do_not_remove": 0
},
"removals": {
"unused_imports": 5,
"unused_functions": 2,
"unused_variables": 3,
"unreachable_code": 1,
"commented_code": 1,
"orphaned_files": 1
},
"impact": {
"files_modified": 3,
"files_deleted": 1,
"lines_removed": 293
},
"quality_pipeline": {
"format": "pass",
"lint": "pass",
"type_check": "pass",
"tests": "pass"
},
"commit": {
"sha": "abc123def",
"message": "chore(cleanup): remove dead code"
},
"backup": {
"stash_id": "stash@{0}",
"created": true
}
}
```
---
**Version**: 1.0
**Last Updated**: 2025-10-26
**Framework**: LAZY-DEV-FRAMEWORK

.claude/commands/code.md (new file, 666 lines)

@@ -0,0 +1,666 @@
---
description: Implement feature from flexible input (story file, task ID, brief, or issue)
argument-hint: "<input>"
allowed-tools: Read, Write, Edit, Bash, Task, Glob, Grep
---
# Code Command: Flexible Feature Implementation
Transform any input into working code with intelligent orchestration.
## Core Philosophy
**Accept anything, infer everything, build intelligently.**
No flags, no ceremony - just provide context and get code.
## Usage Examples
```bash
# Quick feature from brief
/lazy code "add logout button to header"
# From user story file
/lazy code @US-3.4.md
/lazy code US-3.4.md
# From task ID (auto-finds story)
/lazy code TASK-003
# From GitHub issue
/lazy code #456
/lazy code 456
```
## Input Detection Logic
### Phase 0: Parse Input
**Detect input type:**
```python
input = "$ARGUMENTS".strip()
if input.startswith("@") or input.endswith(".md"):
# User story file reference
input_type = "story_file"
story_file = input.lstrip("@")
elif input.startswith("TASK-") or input.startswith("task-"):
# Task ID - need to find story
input_type = "task_id"
task_id = input.upper()
elif input.startswith("#") or input.isdigit():
# GitHub issue
input_type = "github_issue"
issue_number = input.lstrip("#")
else:
# Brief description
input_type = "brief"
feature_brief = input
```
### Phase 1: Load Context
**From User Story File:**
```bash
# If input is @US-3.4.md or US-3.4.md
story_path="./project-management/US-STORY/*/US-story.md"
story_path=$(find ./project-management/US-STORY -name "*${story_id}*" -type d -exec find {} -name "US-story.md" \; | head -1)
# Read full story content
story_content=$(cat "$story_path")
# Find next pending task or use all tasks
next_task=$(grep -E "^### TASK-[0-9]+" "$story_path" | head -1)
```
**From Task ID:**
```bash
# If input is TASK-003
# Find which story contains this task
story_path=$(grep -r "### ${task_id}:" ./project-management/US-STORY --include="US-story.md" -l | head -1)
# Extract story content
story_content=$(cat "$story_path")
# Extract specific task section
task_section=$(sed -n "/^### ${task_id}:/,/^### TASK-/p" "$story_path" | sed '$d')
```
**From GitHub Issue:**
```bash
# If input is #456 or 456
issue_content=$(gh issue view ${issue_number} --json title,body,labels --jq '{title, body, labels: [.labels[].name]}')
# Parse as story or task
# If issue has "user-story" label, treat as story
# Otherwise treat as single task
```
**From Brief Description:**
```bash
# If input is "add logout button to header"
# Create minimal context
feature_brief="$input"
# Generate inline task
task_section="
### TASK-1: ${feature_brief}
**Description:**
${feature_brief}
**Acceptance Criteria:**
- Implementation works as described
- Code follows project conventions
- Basic error handling included
"
```
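For the GitHub-issue path above, the same lookup could be scripted around the `gh` CLI; a hedged sketch with minimal error handling, using the label rule stated above:
```python
import json
import subprocess

def load_issue(issue_number: str) -> dict:
    """Fetch a GitHub issue with gh and classify it as story or single task."""
    out = subprocess.run(
        ["gh", "issue", "view", issue_number, "--json", "title,body,labels"],
        capture_output=True, text=True, check=True,
    ).stdout
    issue = json.loads(out)
    labels = [label["name"] for label in issue.get("labels", [])]
    return {
        "title": issue["title"],
        "body": issue.get("body", ""),
        "labels": labels,
        "is_story": "user-story" in labels,  # issues tagged user-story are treated as stories
    }
```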
## Smart Orchestration Logic
### Auto-Detection Rules
**1. Test Detection:**
```python
# Check if project uses tests
has_tests = any([
exists("pytest.ini"),
exists("tests/"),
exists("__tests__/"),
exists("*.test.js"),
exists("*_test.py"),
])
# Check if TDD mentioned in docs
tdd_required = any([
"TDD" in read("README.md"),
"test-driven" in read("CLAUDE.md"),
"LAZYDEV_ENFORCE_TDD" in env,
])
# Decision
run_tests = has_tests or tdd_required or "test" in task_section.lower()
```
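The pseudo `exists()`/`read()` calls above can be made runnable with `pathlib`; a sketch under the same detection rules (file patterns and doc names taken from the list above, everything else an approximation):
```python
import os
from pathlib import Path

def detect_tests(root: str = ".", task_text: str = "") -> bool:
    """Decide whether tests should run for this task."""
    root_path = Path(root)
    has_tests = any([
        (root_path / "pytest.ini").exists(),
        (root_path / "tests").is_dir(),
        (root_path / "__tests__").is_dir(),
        any(root_path.rglob("*.test.js")),
        any(root_path.rglob("*_test.py")),
    ])

    def doc(name: str) -> str:
        p = root_path / name
        return p.read_text(encoding="utf-8", errors="ignore") if p.exists() else ""

    tdd_required = (
        "TDD" in doc("README.md")
        or "test-driven" in doc("CLAUDE.md").lower()
        or "LAZYDEV_ENFORCE_TDD" in os.environ
    )
    return has_tests or tdd_required or "test" in task_text.lower()
```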
**2. Complexity Detection:**
```python
# Analyze task complexity
complexity_indicators = [
"security", "authentication", "auth", "payment",
"database", "migration", "critical", "api",
]
# Check task content
is_complex = any(keyword in task_section.lower() for keyword in complexity_indicators)
# Check estimate
if "Estimate:" in task_section:
estimate = extract_estimate(task_section)
is_complex = is_complex or estimate in ["L", "Large"]
# Default to simple
complexity = "complex" if is_complex else "simple"
```
**3. Review Detection:**
```python
# Always review complex tasks
needs_review = is_complex
# Review if multi-file changes expected
if not needs_review:
# Check if task mentions multiple files/modules
multi_file_keywords = [
"refactor", "restructure", "multiple files",
"across", "integration", "system-wide"
]
needs_review = any(kw in task_section.lower() for kw in multi_file_keywords)
# Can skip for simple single-file changes
skip_review = not needs_review and complexity == "simple"
```
**4. User Story Detection:**
```python
# Check if we have a full story or single task
if input_type == "story_file":
has_story = True
# Work through tasks sequentially
elif input_type == "task_id":
has_story = True
# Story was found, implement specific task
elif input_type == "github_issue":
# Check if issue is tagged as story
has_story = "user-story" in issue_labels
else: # brief
has_story = False
# Single task, quick implementation
```
## Execution Workflow
### Phase 2: Git Branch Setup
```bash
# Only create branch if working from story
if [ "$has_story" = true ]; then
# Extract story ID from path or content
story_id=$(extract_story_id)
branch_name="feat/${story_id}-$(slugify_story_title)"
# Create or checkout branch
if git show-ref --verify --quiet "refs/heads/$branch_name"; then
git checkout "$branch_name"
else
git checkout -b "$branch_name"
fi
else
# Work on current branch for quick tasks
current_branch=$(git branch --show-current)
echo "Working on current branch: $current_branch"
fi
```
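The `slugify_story_title` helper used above is assumed rather than defined; a minimal Python equivalent for building the branch name could look like this:
```python
import re

def slugify(title: str) -> str:
    """Lowercase, strip punctuation, and join words with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def branch_name(story_id: str, story_title: str) -> str:
    return f"feat/{story_id}-{slugify(story_title)}"

# branch_name("US-3.4", "OAuth2 Authentication")
# -> "feat/US-3.4-oauth2-authentication"
```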
### Phase 3: Implementation
**Delegate to coder agent:**
```python
Task(
prompt=f"""
You are the Coder Agent for LAZY-DEV-FRAMEWORK.
## Context Provided
{story_content if has_story else ""}
## Task to Implement
{task_section}
## Implementation Guidelines
1. **Read existing code first:**
- Check README.md for project structure and conventions
- Look for similar implementations in codebase
- Identify existing patterns and styles
2. **Write clean, maintainable code:**
- Type hints on all functions (if Python/TypeScript)
- Docstrings for public APIs
- Clear variable names
- Error handling with specific exceptions
3. **Follow project conventions:**
- Check for .editorconfig, .prettierrc, pyproject.toml
- Match existing code style
- Use project's logging/error patterns
4. **Tests (if required):**
- TDD required: {run_tests}
- Write tests if TDD enabled or "test" mentioned in task
- Follow existing test patterns in repo
- Aim for edge case coverage
5. **Security considerations:**
- Input validation
- No hardcoded secrets
- Proper error messages (no sensitive data leaks)
- Follow OWASP guidelines for web/API code
## Quality Standards
Code will be automatically checked by PostToolUse hook:
- Formatting (Black/Ruff/Prettier if configured)
- Linting (Ruff/ESLint if configured)
- Type checking (Mypy/TSC if configured)
- Tests (Pytest/Jest if TDD enabled)
Write quality code to pass these checks on first run.
## Output
Provide:
1. Implementation files (with full paths)
2. Test files (if TDD enabled)
3. Updated documentation (if API changes)
4. Brief summary of changes
DO NOT create a commit - that happens after review.
"""
)
```
### Phase 4: Quality Checks (Automatic)
**PostToolUse hook handles this automatically after Write/Edit operations:**
- Format: Auto-applied (Black/Ruff/Prettier)
- Lint: Auto-checked, warns if issues
- Type: Auto-checked, warns if issues
- Tests: Auto-run if TDD required
**No manual action needed** - hook runs after coder agent completes.
### Phase 5: Code Review (Conditional)
**Review decision:**
```python
if needs_review:
# Invoke reviewer agent for complex/critical tasks
Task(
prompt=f"""
You are the Reviewer Agent for LAZY-DEV-FRAMEWORK.
## Task Being Reviewed
{task_section}
## Changes Made
{git_diff_output}
## Review Checklist
**Code Quality:**
- Readability and maintainability
- Follows project conventions
- Appropriate abstractions
- Clear naming
**Correctness:**
- Meets acceptance criteria
- Edge cases handled
- Error handling appropriate
**Security (if applicable):**
- Input validation
- No hardcoded secrets
- Proper authentication/authorization
- No SQL injection / XSS vulnerabilities
**Tests (if TDD required):**
- Tests cover main functionality
- Edge cases tested
- Tests are clear and maintainable
## Output
Return ONE of:
- **APPROVED**: Changes look good, ready to commit
- **REQUEST_CHANGES**: List specific issues to fix
Keep feedback concise and actionable.
"""
)
else:
echo "Review skipped: Simple task, single-file change"
fi
```
### Phase 6: Commit
**Only commit if approved or review skipped:**
```bash
# Prepare commit message
if [ "$has_story" = true ]; then
commit_msg="feat(${task_id}): $(extract_task_title)
Implements ${task_id} from ${story_id}
$(summarize_changes)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
else
commit_msg="feat: ${feature_brief}
Quick implementation from brief description.
$(summarize_changes)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
fi
# Create commit
git add .
git commit -m "$(cat <<'EOF'
$commit_msg
EOF
)"
# Tag if task completion
if [ "$input_type" = "task_id" ]; then
git tag "task/${task_id}-done"
fi
```
### Phase 7: Output Summary
```
Task Complete
=============
Input: {input}
Type: {input_type}
{Story: {story_id}}
{Task: {task_id}}
Branch: {branch_name}
Complexity: {complexity}
Review: {skipped|passed}
Tests: {passed|skipped} {(n/n)}
Commit: {commit_hash}
Files Changed:
- {file1}
- {file2}
Next Steps:
{- Continue with next task: /lazy code TASK-{next}}
{- Review story and create PR: /lazy review {story_id}}
{- Work on new feature: /lazy code "description"}
```
## Intelligence Matrix
| Input | Detection | Story Lookup | Tests | Review | Branch |
|-------|-----------|--------------|-------|--------|--------|
| "brief" | Brief text | No | Auto | Simple=No | Current |
| @US-3.4.md | File reference | Yes (read file) | Auto | Smart | feat/US-3.4-* |
| TASK-003 | Task ID pattern | Yes (grep) | Auto | Smart | feat/US-* |
| #456 | Issue number | Yes (gh) | Auto | Smart | feat/issue-456 |
**Auto = Check project for test framework and TDD requirement**
**Smart = Complex/multi-file/security = Yes, Simple/single-file = No**
## Decision Trees
### Test Execution Decision
```
Has test framework in repo? ────No───→ Skip tests
        │ Yes
        ▼
TDD in docs or LAZYDEV_ENFORCE_TDD? ──Yes──→ Run tests (required)
        │ No
        ▼
"test" mentioned in task? ────Yes───→ Run tests (requested)
        │ No
        ▼
Skip tests
```
### Review Decision
```
Task complexity = complex? ───Yes───→ Review required
        │ No
        ▼
Security/auth/payment related? ──Yes──→ Review required
        │ No
        ▼
Multi-file refactor? ────Yes────→ Review required
        │ No
        ▼
Skip review (simple task)
```
### Branch Strategy
```
Input has story context? ───Yes───→ Create/use feat/{story-id}-* branch
        │ No
        ▼
Work on current branch
```
## Examples in Action
### Example 1: Quick Feature
```bash
$ /lazy code "add logout button to header"
Detected: Brief description
Complexity: Simple (single feature)
Tests: Auto-detected (pytest found in repo)
Review: Skipped (simple, single-file)
Branch: Current branch
Implementing...
✓ Added logout button to src/components/Header.tsx
✓ Added test in tests/components/Header.test.tsx
✓ Quality checks passed
✓ Committed: feat: add logout button to header
Complete! (1 file changed)
```
### Example 2: From Story File
```bash
$ /lazy code @US-3.4.md
Detected: User story file
Story: US-3.4 - OAuth2 Authentication
Next pending: TASK-002 - Implement token refresh
Complexity: Complex (auth + security)
Tests: Required (TDD in CLAUDE.md)
Review: Required (complex task)
Branch: feat/US-3.4-oauth2-authentication
Implementing...
✓ Implemented token refresh in src/auth/refresh.py
✓ Added tests in tests/auth/test_refresh.py
✓ Quality checks passed
✓ Code review: APPROVED
✓ Committed: feat(TASK-002): implement token refresh
Complete! Continue with: /lazy code TASK-003
```
### Example 3: From Task ID
```bash
$ /lazy code TASK-007
Detected: Task ID
Finding story... Found in US-2.1-payment-processing
Task: TASK-007 - Add retry logic to payment API
Complexity: Complex (payment + API)
Tests: Required (pytest found)
Review: Required (payment-related)
Branch: feat/US-2.1-payment-processing
Implementing...
✓ Added retry logic to src/payment/api.py
✓ Added retry tests in tests/payment/test_api.py
✓ Quality checks passed
✓ Code review: APPROVED
✓ Committed: feat(TASK-007): add retry logic to payment API
✓ Tagged: task/TASK-007-done
Complete! Continue with: /lazy code TASK-008
```
### Example 4: From GitHub Issue
```bash
$ /lazy code #456
Detected: GitHub issue
Fetching issue #456...
Issue: "Fix validation error in user signup form"
Labels: bug, frontend
Complexity: Simple (bug fix)
Tests: Required (jest found in repo)
Review: Skipped (simple bug fix)
Branch: Current branch
Implementing...
✓ Fixed validation in src/components/SignupForm.tsx
✓ Added regression test in tests/components/SignupForm.test.tsx
✓ Quality checks passed
✓ Committed: fix: validation error in user signup form (closes #456)
Complete! Issue #456 will be closed on PR merge.
```
## Key Principles
1. **Zero Configuration**: No flags, no setup - just provide input
2. **Smart Defaults**: Infer tests, review, complexity from context
3. **Flexible Input**: Accept stories, tasks, briefs, issues
4. **Auto Quality**: PostToolUse hook handles formatting/linting/tests
5. **Contextual Branching**: Stories get branches, briefs work on current
6. **Progressive Enhancement**: More context = smarter orchestration
## Integration Points
**With plan command:**
```bash
/lazy plan "feature description" # Creates US-story.md
/lazy code @US-story.md # Implements first task
/lazy code TASK-002 # Continues with next task
```
**With review command:**
```bash
/lazy code TASK-001 # Commit 1
/lazy code TASK-002 # Commit 2
/lazy code TASK-003 # Commit 3
/lazy review US-3.4 # Review all tasks, create PR
```
**With fix command:**
```bash
/lazy code TASK-001 # Implementation
/lazy review US-3.4 # Generates review-report.md
/lazy fix review-report.md # Apply fixes
```
## Environment Variables
```bash
# Force TDD for all tasks
export LAZYDEV_ENFORCE_TDD=1
# Minimum test count
export LAZYDEV_MIN_TESTS=3
# Skip review for all tasks (not recommended)
export LAZYDEV_SKIP_REVIEW=1
```
## Troubleshooting
**Issue: Task ID not found**
```bash
# Check story files exist
ls -la ./project-management/US-STORY/*/US-story.md
# Search for task manually
grep -r "TASK-003" ./project-management/US-STORY
```
**Issue: Tests not running**
```bash
# Check test framework installed
pytest --version # or: npm test
# Check TDD configuration
grep -i tdd CLAUDE.md
echo $LAZYDEV_ENFORCE_TDD
```
**Issue: Review not triggering**
```bash
# Reviews trigger automatically for:
# - Complex tasks (security/auth/database)
# - Multi-file changes
# - Large estimates
# To force review, set in task:
### TASK-X: ... [REVIEW_REQUIRED]
```
---
**Version:** 2.2.0
**Status:** Production-Ready
**Philosophy:** Accept anything, infer everything, build intelligently.

.claude/commands/docs.md (new file, 216 lines)

@@ -0,0 +1,216 @@
---
description: Generate documentation for codebase, branch, commit, or file
argument-hint: [scope] [format]
allowed-tools: Read, Write, Bash, Glob, Grep, Edit, Task
model: claude-haiku-4-5-20251001
---
# Documentation Generator
Generate or update documentation for the specified scope with the selected format.
## Variables
SCOPE: $1
FORMAT: ${2:-docstrings}
PROJECT_ROOT: $(pwd)
## Instructions
You are the Documentation Command Handler for LAZY-DEV-FRAMEWORK.
Your task is to generate or update documentation based on the provided **SCOPE** and **FORMAT**.
### Step 1: Parse Scope and Identify Target Files
Analyze the **SCOPE** variable to determine which files need documentation:
- **`codebase`**: Document all Python files in the project
- Use Glob to find: `**/*.py`
- Exclude: `__pycache__`, `.venv`, `venv`, `node_modules`, `tests/`, `.git`
- **`current-branch`**: Document files changed in the current git branch
- Run: `git diff --name-only main...HEAD` (or default branch)
- Filter for relevant file extensions based on FORMAT
- **`last-commit`**: Document files in the most recent commit
- Run: `git diff-tree --no-commit-id --name-only -r HEAD`
- Filter for relevant file extensions
- **Specific file path** (e.g., `src/auth.py` or `.`): Document the specified file or directory
- If directory: Use Glob to find relevant files
- If file: Document that specific file
- Validate the path exists before proceeding
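A hedged sketch of this scope resolution (the exclusion set and git commands mirror the bullets above; anything else is an assumption):
```python
import subprocess
from pathlib import Path

EXCLUDED = {"__pycache__", ".venv", "venv", "node_modules", "tests", ".git"}

def resolve_doc_scope(scope: str) -> list[str]:
    """Turn a documentation SCOPE argument into a list of target files."""
    if scope == "codebase":
        return [str(p) for p in Path(".").rglob("*.py")
                if not EXCLUDED & set(p.parts)]
    if scope in ("current-branch", "last-commit"):
        cmd = (["git", "diff", "--name-only", "main...HEAD"]
               if scope == "current-branch"
               else ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", "HEAD"])
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        return [f for f in out.splitlines() if f.endswith(".py")]
    path = Path(scope)
    if not path.exists():
        raise FileNotFoundError(f"Path not found: {scope}")
    return [str(p) for p in path.rglob("*.py")] if path.is_dir() else [str(path)]
```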
### Step 2: Validate Format
Ensure **FORMAT** is one of the supported formats:
- `docstrings` - Add/update Google-style docstrings (default)
- `readme` - Generate or update README.md
- `api` - Generate API documentation
- `security` - Generate security considerations document
- `setup` - Generate setup/installation guide
If FORMAT is invalid, report an error and stop.
### Step 3: Prepare Agent Invocation
For each target file or module group, prepare to invoke the Documentation Agent with:
**Agent Call Structure**:
```markdown
You are the Documentation Agent. Generate documentation for the following scope:
## Scope
[List of files or description of scope]
## Format
$FORMAT
## Target
[Output directory based on format - docs/ for files, ./ for README]
## Instructions
[Format-specific instructions will be provided by the agent template]
```
**Use the Task tool** to invoke the Documentation Agent. The agent will:
1. Read the target files
2. Analyze code structure, functions, classes, and modules
3. Generate appropriate documentation based on FORMAT
4. Write updated files (for docstrings) or new documentation files (for readme/api/security/setup)
### Step 4: Track Coverage Changes
**Before Agent Invocation**:
- Count existing docstrings/documentation
- Calculate current documentation coverage percentage
**After Agent Invocation**:
- Count new/updated docstrings/documentation
- Calculate new documentation coverage percentage
- Report the improvement
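The coverage ratio described here (docstrings per documentable object, as noted in the Notes section) could be computed with a small AST pass like this sketch:
```python
import ast

def docstring_coverage(paths: list[str]) -> float:
    """Percentage of modules, classes, and functions that carry a docstring."""
    total = documented = 0
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            tree = ast.parse(fh.read())
        nodes = [tree] + [n for n in ast.walk(tree)
                          if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]
        for node in nodes:
            total += 1
            if ast.get_docstring(node):
                documented += 1
    return 100.0 * documented / total if total else 100.0

# before = docstring_coverage(target_files)
# ... invoke the Documentation Agent ...
# after = docstring_coverage(target_files)
# print(f"Coverage: {before:.0f}% -> {after:.0f}%")
```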
### Step 5: Generate Summary Report
After all files are processed, generate a structured report in this format:
```
📖 Documentation Generated
[For docstrings format:]
Docstrings added: X files
✓ path/to/file1.py (Y functions/classes)
✓ path/to/file2.py (Z functions/classes)
✓ path/to/file3.py (W functions/classes)
Coverage: XX% → YY% ✅
[For readme/api/security/setup formats:]
Files created/updated:
✓ README.md
✓ docs/API.md
✓ docs/SECURITY.md
✓ docs/SETUP.md
Documentation status: Complete ✅
```
## Workflow
1. **Parse Arguments**
- Extract SCOPE from $1
- Extract FORMAT from $2 (default: docstrings)
- Validate both parameters
2. **Identify Target Files**
- Based on SCOPE, use Glob, Grep, or Bash (git commands) to locate files
- Build a list of absolute file paths
- Verify files exist and are readable
3. **Invoke Documentation Agent**
- Use Task tool to invoke the Documentation Agent
- Pass scope, format, and target directory
- Agent reads files, generates documentation, writes output
4. **Calculate Coverage**
- Compare before/after documentation metrics
- Calculate coverage percentage improvement
5. **Generate Report**
- List all files documented
- Show coverage improvement
- Confirm successful completion
## Error Handling
- If SCOPE is invalid or empty: Report error and ask user to specify scope
- If FORMAT is not supported: Report valid formats and ask user to choose
- If no files found for given SCOPE: Report no files found and suggest alternative scope
- If git commands fail (for branch/commit scopes): Report git error and suggest using file path
- If Documentation Agent fails: Report agent error and suggest manual review
## Examples
### Example 1: Document entire codebase with docstrings
```bash
/lazy documentation codebase docstrings
```
Expected flow:
1. Find all .py files in project
2. Invoke Documentation Agent for each module/file group
3. Agent adds Google-style docstrings to functions/classes
4. Report coverage improvement
### Example 2: Generate README for current branch changes
```bash
/lazy documentation current-branch readme
```
Expected flow:
1. Run git diff to find changed files
2. Invoke Documentation Agent with scope=changed files, format=readme
3. Agent generates comprehensive README.md
4. Report README created
### Example 3: Generate API docs for specific module
```bash
/lazy documentation src/auth.py api
```
Expected flow:
1. Validate src/auth.py exists
2. Invoke Documentation Agent with scope=src/auth.py, format=api
3. Agent generates docs/API.md with module documentation
4. Report API documentation created
### Example 4: Generate security documentation
```bash
/lazy documentation . security
```
Expected flow:
1. Find all relevant files in current directory
2. Invoke Documentation Agent with scope=current directory, format=security
3. Agent analyzes code for security patterns and generates docs/SECURITY.md
4. Report security documentation created
## Output Format Requirements
- Use emoji indicators for visual clarity (📖, ✓, ✅)
- Report absolute file paths in output
- Show clear before/after metrics for coverage
- List all files processed
- Indicate success/failure clearly
- Include actionable next steps if applicable
## Notes
- Documentation Agent is a sub-agent defined in `.claude/agents/documentation.md`
- Agent uses Haiku model for cost efficiency
- For large codebases (>50 files), process in batches of 10-15 files
- Coverage calculation counts docstrings/functions ratio for docstrings format
- For readme/api/security/setup formats, "coverage" means documentation completeness
- Always use absolute paths in reports
- Git commands are cross-platform compatible (Windows/Linux/macOS)

.claude/commands/fix.md (new file, 1006 lines)
File diff suppressed because it is too large

(new file, 713 lines; name not shown)

@@ -0,0 +1,713 @@
---
description: Initialize new project with comprehensive documentation (overview, specs, tech stack, architecture)
argument-hint: "[project-description] [--file FILE] [--minimal] [--no-sync] [--no-arch]"
allowed-tools: Read, Write, Bash, Skill, Glob
model: claude-sonnet-4-5-20250929
---
# Init Project: Bootstrap Project Foundation
## Introduction
Transform a project idea into complete foundational documentation including project overview, technical specifications, technology stack selection, and system architecture design.
**Purpose**: Create the documentation foundation before writing any code - ensures alignment, reduces rework, and provides clear technical direction.
**Output Structure:**
```
./project-management/
├── PROJECT-OVERVIEW.md # Vision, goals, features, success criteria
├── SPECIFICATIONS.md # Functional/non-functional requirements, API contracts, data models
├── TECH-STACK.md # Technology selections with rationale and trade-offs
├── ARCHITECTURE.md # System design with mermaid diagrams
└── .meta/
└── last-sync.json # Tracking metadata for document sync
```
**Integration**: Generated documents serve as input for `/lazy plan` when creating user stories.
---
## When to Use
**Use `/lazy init-project` when:**
- Starting a brand new greenfield project
- Need structured project documentation before coding
- Want technology stack guidance and architecture design
- Transitioning from idea to implementation
- Building POC/MVP and need technical foundation
**Skip this command when:**
- Project already has established documentation
- Working on existing codebase
- Only need single user story (use `/lazy plan` directly)
- Quick prototype without formal planning
---
## Usage Examples
```bash
# From project description
/lazy init-project "Build a real-time task management platform with AI prioritization"
# From enhanced prompt file (recommended)
/lazy init-project --file enhanced_prompt.md
# Minimal mode (skip architecture, faster)
/lazy init-project "E-commerce marketplace" --minimal
# Skip architecture generation
/lazy init-project "API service" --no-arch
# Disable auto-sync tracking
/lazy init-project "Chat app" --no-sync
```
---
## Requirements
### Prerequisites
- Working directory is project root
- Git repository initialized (recommended)
- PROJECT-OVERVIEW.md should not already exist (will be overwritten)
### Input Requirements
- **Project description** (required): Either inline text or file path via `--file`
- **Sufficient detail**: Mention key features, tech preferences, scale expectations
- **Clear goals**: What problem does this solve?
### Optional Flags
- `--file FILE`: Read project description from file (STT enhanced prompt recommended)
- `--minimal`: Generate only PROJECT-OVERVIEW.md and SPECIFICATIONS.md (skip tech stack and architecture)
- `--no-arch`: Generate overview, specs, and tech stack but skip architecture diagrams
- `--no-sync`: Skip creating `.meta/last-sync.json` tracking file
---
## Execution
### Step 1: Parse Arguments and Load Project Description
**Parse flags:**
```python
args = parse_arguments("$ARGUMENTS")
# Extract flags
file_path = args.get("--file")
minimal_mode = "--minimal" in args
skip_arch = "--no-arch" in args
disable_sync = "--no-sync" in args
# Get project description
if file_path:
# Read from file
project_description = read_file(file_path)
if not project_description:
return error(f"File not found or empty: {file_path}")
else:
# Use inline description
project_description = remove_flags(args)
if not project_description.strip():
return error("No project description provided. Use inline text or --file FILE")
```
**Validation:**
- Project description must be non-empty
- If `--file` used, file must exist and be readable
- Minimum 50 characters for meaningful planning (warn if less)
---
### Step 2: Create Project Management Directory
**Setup directory structure:**
```bash
# Create base directory
mkdir -p ./project-management/.meta
# Check if PROJECT-OVERVIEW.md exists
if [ -f "./project-management/PROJECT-OVERVIEW.md" ]; then
echo "Warning: PROJECT-OVERVIEW.md already exists and will be overwritten"
fi
```
**Output location**: Always `./project-management/` relative to current working directory.
---
### Step 3: Invoke Project Planner Skill
**Generate overview and specifications:**
```python
# Invoke project-planner skill
result = Skill(
command="project-planner",
context={
"description": project_description,
"output_dir": "./project-management/"
}
)
# Skill generates:
# - PROJECT-OVERVIEW.md (vision, goals, features, constraints)
# - SPECIFICATIONS.md (requirements, API contracts, data models)
# Verify both files were created
assert exists("./project-management/PROJECT-OVERVIEW.md"), "PROJECT-OVERVIEW.md not created"
assert exists("./project-management/SPECIFICATIONS.md"), "SPECIFICATIONS.md not created"
```
**What project-planner does:**
1. Extracts project context (name, features, goals, constraints)
2. Generates PROJECT-OVERVIEW.md with vision and high-level features
3. Generates SPECIFICATIONS.md with detailed technical requirements
4. Validates completeness of both documents
**Expected output:**
- `PROJECT-OVERVIEW.md`: 2-3KB, executive summary format
- `SPECIFICATIONS.md`: 8-15KB, comprehensive technical details
---
### Step 4: Invoke Tech Stack Architect Skill (unless --minimal; --no-arch skips only ARCHITECTURE.md)
**Generate technology stack selection:**
```python
# Skip if minimal mode or no-arch flag
if not minimal_mode:
# Read PROJECT-OVERVIEW.md for context
overview_content = read_file("./project-management/PROJECT-OVERVIEW.md")
# Invoke tech-stack-architect skill
result = Skill(
command="tech-stack-architect",
context={
"project_overview": overview_content,
"specifications": read_file("./project-management/SPECIFICATIONS.md"),
"output_dir": "./project-management/",
"skip_architecture": skip_arch # Only generate TECH-STACK.md if true
}
)
# Skill generates:
# - TECH-STACK.md (frontend, backend, database, DevOps choices with rationale)
# - ARCHITECTURE.md (system design with mermaid diagrams) [unless skip_arch]
# Verify tech stack file created
assert exists("./project-management/TECH-STACK.md"), "TECH-STACK.md not created"
if not skip_arch:
assert exists("./project-management/ARCHITECTURE.md"), "ARCHITECTURE.md not created"
```
**What tech-stack-architect does:**
1. Reads PROJECT-OVERVIEW.md for requirements and constraints
2. Analyzes technology needs across 4 categories: Frontend, Backend, Database, DevOps
3. Generates TECH-STACK.md with choices, rationale, alternatives, trade-offs
4. Designs system architecture with component diagrams
5. Generates ARCHITECTURE.md with mermaid diagrams for structure, data flow, deployment
**Expected output:**
- `TECH-STACK.md`: 5-8KB, table-based technology selections
- `ARCHITECTURE.md`: 10-15KB, system design with 3-5 mermaid diagrams
---
### Step 5: Create Tracking Metadata (unless --no-sync)
**Generate sync tracking file:**
```python
if not disable_sync:
metadata = {
"initialized_at": datetime.now().isoformat(),
"documents": {
"PROJECT-OVERVIEW.md": {
"created": datetime.now().isoformat(),
"size_bytes": file_size("./project-management/PROJECT-OVERVIEW.md"),
"checksum": sha256("./project-management/PROJECT-OVERVIEW.md")
},
"SPECIFICATIONS.md": {
"created": datetime.now().isoformat(),
"size_bytes": file_size("./project-management/SPECIFICATIONS.md"),
"checksum": sha256("./project-management/SPECIFICATIONS.md")
},
"TECH-STACK.md": {
"created": datetime.now().isoformat(),
"size_bytes": file_size("./project-management/TECH-STACK.md"),
"checksum": sha256("./project-management/TECH-STACK.md")
} if not minimal_mode else None,
"ARCHITECTURE.md": {
"created": datetime.now().isoformat(),
"size_bytes": file_size("./project-management/ARCHITECTURE.md"),
"checksum": sha256("./project-management/ARCHITECTURE.md")
} if not minimal_mode and not skip_arch else None
},
"flags": {
"minimal": minimal_mode,
"skip_architecture": skip_arch
}
}
# Write metadata
write_json("./project-management/.meta/last-sync.json", metadata)
```
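A runnable sketch of this tracking step with the pseudo `sha256`/`file_size`/`write_json` helpers made explicit; skipped documents are simply omitted rather than stored as null:
```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def write_sync_metadata(doc_names: list[str], flags: dict,
                        base: str = "./project-management") -> None:
    """Record creation time, size, and checksum for each generated document."""
    base_path = Path(base)
    now = datetime.now(timezone.utc).isoformat()
    documents = {}
    for name in doc_names:
        path = base_path / name
        if path.exists():  # documents skipped by flags are simply omitted
            documents[name] = {
                "created": now,
                "size_bytes": path.stat().st_size,
                "checksum": sha256(path),
            }
    metadata = {"initialized_at": now, "documents": documents, "flags": flags}
    out = base_path / ".meta" / "last-sync.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(metadata, indent=2), encoding="utf-8")
```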
**Purpose of tracking:**
- Detect manual changes to generated files
- Support future re-sync or update operations
- Track generation history
---
### Step 6: Git Add (if in repository)
**Stage generated files:**
```bash
# Check if in git repo
if git rev-parse --git-dir > /dev/null 2>&1; then
# Add all generated files
git add ./project-management/PROJECT-OVERVIEW.md
git add ./project-management/SPECIFICATIONS.md
if [ "$minimal_mode" = false ]; then
git add ./project-management/TECH-STACK.md
[ "$skip_arch" = false ] && git add ./project-management/ARCHITECTURE.md
fi
[ "$disable_sync" = false ] && git add ./project-management/.meta/last-sync.json
echo "✓ Files staged for commit (git add)"
echo "Note: Review files before committing"
else
echo "Not a git repository - skipping git add"
fi
```
**Important**: Files are staged but NOT committed. User should review before committing.
---
### Step 7: Output Summary
**Display comprehensive summary:**
```markdown
## Project Initialization Complete
**Project Name**: {extracted from PROJECT-OVERVIEW.md}
**Documents Generated**:
1. **PROJECT-OVERVIEW.md** ({size}KB)
- Vision and goals defined
- {N} key features identified
- {N} success criteria established
- Constraints and scope documented
2. **SPECIFICATIONS.md** ({size}KB)
- {N} functional requirements detailed
- Non-functional requirements defined
- {N} API endpoints documented (if applicable)
- {N} data models specified
- Development phases outlined
{if not minimal_mode:}
3. **TECH-STACK.md** ({size}KB)
- Frontend stack selected: {tech}
- Backend stack selected: {tech}
- Database choices: {tech}
- DevOps infrastructure: {tech}
- Trade-offs and migration path documented
{if not skip_arch:}
4. **ARCHITECTURE.md** ({size}KB)
- System architecture designed
- {N} component diagrams included
- Data flow documented
- Security architecture defined
- Scalability strategy outlined
{if not disable_sync:}
5. **Tracking metadata** (.meta/last-sync.json)
- Document checksums recorded
- Sync tracking enabled
**Location**: `./project-management/`
**Next Steps**:
1. **Review Documentation** (~15-20 minutes)
- Read PROJECT-OVERVIEW.md for accuracy
- Verify SPECIFICATIONS.md completeness
- Check TECH-STACK.md technology choices
- Review ARCHITECTURE.md diagrams
2. **Customize** (Optional)
- Refine goals and success criteria
- Add missing requirements
- Adjust technology choices if needed
- Enhance architecture diagrams
3. **Commit Initial Docs**
```bash
git commit -m "docs: initialize project documentation
- Add PROJECT-OVERVIEW.md with vision and goals
- Add SPECIFICATIONS.md with technical requirements
- Add TECH-STACK.md with technology selections
- Add ARCHITECTURE.md with system design
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
```
4. **Start Planning User Stories**
```bash
# Create first user story from specifications
/lazy plan "Implement user authentication system"
# Or plan from specific requirement
/lazy plan --file ./project-management/SPECIFICATIONS.md --section "Authentication"
```
5. **Begin Implementation**
```bash
# After creating user story
/lazy code @US-1.1.md
```
**Estimated Time to Review/Customize**: 15-30 minutes
**Documentation Size**: {total}KB across {N} files
---
## Tips for Success
**Review Phase:**
- Don't skip the review - these docs guide all future development
- Check if technology choices match team skills
- Verify success criteria are measurable
- Ensure API contracts match business requirements
**Customization:**
- Feel free to edit generated docs manually
- Add project-specific constraints or requirements
- Refine architecture based on team preferences
- Update specs as you learn more
**Next Phase:**
- Use generated docs as input to `/lazy plan`
- Reference TECH-STACK.md during implementation
- Keep ARCHITECTURE.md updated as system evolves
- Revisit SUCCESS CRITERIA monthly
```
---
## Validation
### Success Criteria
**Documents Generated:**
- ✅ PROJECT-OVERVIEW.md exists and is non-empty (>1KB)
- ✅ SPECIFICATIONS.md exists and is comprehensive (>5KB)
- ✅ TECH-STACK.md exists (unless --minimal) and has 4 categories
- ✅ ARCHITECTURE.md exists (unless --minimal or --no-arch) and has mermaid diagrams
- ✅ .meta/last-sync.json exists (unless --no-sync) with checksums
**Content Quality:**
- ✅ PROJECT-OVERVIEW.md has vision, goals, features, success criteria, constraints
- ✅ SPECIFICATIONS.md has functional requirements, API contracts, data models
- ✅ TECH-STACK.md has rationale and alternatives for each technology
- ✅ ARCHITECTURE.md has C4 diagram, component details, data flow diagrams
**Git Integration:**
- ✅ Files staged for commit (if in git repo)
- ✅ No automatic commit created (user reviews first)
### Error Conditions
**Handle gracefully:**
- Empty or insufficient project description → Return error with guidance
- File not found (--file flag) → Clear error message with path
- PROJECT-OVERVIEW.md already exists → Warn but continue (overwrite)
- Skill execution failure → Display error and suggest manual creation
- Not in git repo → Skip git operations, warn user
---
## Examples in Action
### Example 1: Full Initialization (Recommended)
```bash
$ /lazy init-project "Build a real-time task management platform with AI-powered prioritization, team collaboration, and GitHub integration. Target 1000 users, 99.9% uptime. Python backend, React frontend."
Initializing project...
Step 1/5: Generating project overview and specifications...
✓ PROJECT-OVERVIEW.md created (2.8KB)
✓ SPECIFICATIONS.md created (11.4KB)
Step 2/5: Designing technology stack...
✓ TECH-STACK.md created (6.2KB)
- Frontend: React 18 + Zustand + Tailwind
- Backend: FastAPI + SQLAlchemy
- Database: PostgreSQL + Redis
- DevOps: AWS ECS + GitHub Actions
Step 3/5: Architecting system design...
✓ ARCHITECTURE.md created (13.7KB)
- Component architecture with mermaid diagrams
- Authentication flow documented
- Scalability strategy defined
Step 4/5: Creating tracking metadata...
✓ .meta/last-sync.json created
Step 5/5: Staging files for git...
5 files staged (git add)
## Project Initialization Complete
Project: TaskFlow Pro - Modern task management with AI
Documents Generated:
1. ✅ PROJECT-OVERVIEW.md (2.8KB)
2. ✅ SPECIFICATIONS.md (11.4KB) - 12 API endpoints, 6 data models
3. ✅ TECH-STACK.md (6.2KB) - Full stack defined
4. ✅ ARCHITECTURE.md (13.7KB) - 5 mermaid diagrams
Next Steps:
1. Review docs (15-20 min)
2. Commit: git commit -m "docs: initialize project"
3. Create first story: /lazy plan "User authentication"
Complete! Ready for user story planning.
```
### Example 2: Minimal Mode (Fast)
```bash
$ /lazy init-project "E-commerce marketplace with product catalog and checkout" --minimal
Initializing project (minimal mode)...
Step 1/2: Generating project overview and specifications...
✓ PROJECT-OVERVIEW.md created (1.9KB)
✓ SPECIFICATIONS.md created (8.3KB)
Step 2/2: Staging files...
2 files staged (git add)
## Project Initialization Complete (Minimal)
Project: E-Commerce Marketplace
Documents Generated:
1. ✅ PROJECT-OVERVIEW.md (1.9KB)
2. ✅ SPECIFICATIONS.md (8.3KB)
Skipped (minimal mode):
- TECH-STACK.md (technology selection)
- ARCHITECTURE.md (system design)
Note: Use full mode if you need tech stack guidance and architecture diagrams.
Next Steps:
1. Review specs
2. Manually define tech stack (or re-run /lazy init-project without --minimal)
3. Create stories: /lazy plan "Product catalog"
```
### Example 3: From Enhanced Prompt File
```bash
$ /lazy init-project --file enhanced_prompt.md
Reading project description from: enhanced_prompt.md
Initializing project...
Step 1/5: Generating project overview and specifications...
✓ PROJECT-OVERVIEW.md created (3.2KB)
✓ SPECIFICATIONS.md created (14.8KB)
- Extracted 15 functional requirements
- Defined 8 API contracts
- Specified 9 data models
Step 2/5: Designing technology stack...
✓ TECH-STACK.md created (7.1KB)
Step 3/5: Architecting system design...
✓ ARCHITECTURE.md created (16.4KB)
...
Complete! High-quality docs generated from enhanced prompt.
```
---
## Integration with Other Commands
### With `/lazy plan`
```bash
# Initialize project foundation
/lazy init-project "Project description"
# Create first user story (references SPECIFICATIONS.md automatically)
/lazy plan "Implement authentication"
# → project-manager uses SPECIFICATIONS.md for context
# → Generates US-1.1.md aligned with project specs
```
### With `/lazy code`
```bash
# During implementation
/lazy code @US-1.1.md
# → context-packer loads TECH-STACK.md and ARCHITECTURE.md
# → Implementation follows defined architecture patterns
# → Technology choices match TECH-STACK.md
```
### With `/lazy review`
```bash
# During story review
/lazy review US-1.1
# → reviewer-story agent checks alignment with SPECIFICATIONS.md
# → Validates implementation matches ARCHITECTURE.md
# → Ensures success criteria from PROJECT-OVERVIEW.md are met
```
---
## Environment Variables
```bash
# Skip architecture generation by default
export LAZYDEV_INIT_SKIP_ARCH=1
# Minimal mode by default
export LAZYDEV_INIT_MINIMAL=1
# Disable sync tracking
export LAZYDEV_INIT_NO_SYNC=1
# Custom output directory
export LAZYDEV_PROJECT_DIR="./docs/project"
```
---
## Troubleshooting
### Issue: "Insufficient project description"
**Problem**: Description too vague or short.
**Solution**:
```bash
# Provide more detail
/lazy init-project "Build task manager with:
- Real-time collaboration
- AI prioritization
- GitHub/Jira integration
- Target: 10k users, 99.9% uptime
- Stack preference: Python + React"
# Or use enhanced prompt file
/lazy init-project --file enhanced_prompt.md
```
### Issue: "PROJECT-OVERVIEW.md already exists"
**Problem**: Running init-project in directory that's already initialized.
**Solution**:
```bash
# Review existing docs first
ls -la ./project-management/
# If you want to regenerate (will overwrite)
/lazy init-project "New description"
# Or work with existing docs
/lazy plan "First feature"
```
### Issue: "Skill execution failed"
**Problem**: project-planner or tech-stack-architect skill error.
**Solution**:
```bash
# Check skill files exist
ls .claude/skills/project-planner/SKILL.md
ls .claude/skills/tech-stack-architect/SKILL.md
# Try minimal mode (skips tech-stack-architect)
/lazy init-project "Description" --minimal
# Manual fallback: create docs manually using templates
# See .claude/skills/project-planner/SKILL.md for templates
```
### Issue: "No technology preferences detected"
**Problem**: TECH-STACK.md has generic choices that don't match needs.
**Solution**:
```bash
# Be specific about tech preferences in description
/lazy init-project "API service using FastAPI, PostgreSQL, deployed on AWS ECS with GitHub Actions CI/CD"
# Or edit TECH-STACK.md manually after generation
# File is meant to be customized
```
---
## Key Principles
1. **Documentation-First**: Create foundation before writing code
2. **Smart Defaults**: Skills generate opinionated but reasonable choices
3. **Customizable**: All generated docs are meant to be refined
4. **Integration**: Docs feed into planning and implementation commands
5. **Version Control**: Track docs alongside code
6. **Living Documents**: Update as project evolves
7. **No Lock-In**: Skip sections with flags, edit freely
---
## Related Commands
- `/lazy plan` - Create user stories from initialized project
- `/lazy code` - Implement features following architecture
- `/lazy review` - Validate against project specifications
- `/lazy docs` - Generate additional documentation
---
## Skills Used
- `project-planner` - Generates PROJECT-OVERVIEW.md and SPECIFICATIONS.md
- `tech-stack-architect` - Generates TECH-STACK.md and ARCHITECTURE.md
- `output-style-selector` (automatic) - Formats output optimally
---
**Version:** 1.0.0
**Status:** Production-Ready
**Philosophy:** Document first, build second. Clear foundation, faster development.

View File

@@ -0,0 +1,25 @@
---
description: Verify Memory MCP connectivity and list basic stats
argument-hint: [action]
allowed-tools: Read, Write, Grep, Glob, Bash, Task
---
# Memory Connectivity Check
ACTION: ${1:-status}
If the `memory` MCP server is available, call the following tools:
- `mcp__memory__read_graph` and report entity/relation counts
- `mcp__memory__search_nodes` with a sample query like `service:`
If any calls fail, print a clear remediation guide:
1) Ensure `.mcp.json` exists at workspace root (see LAZY_DEV/.claude/.mcp.json)
2) Ensure Node.js is installed and `npx -y @modelcontextprotocol/server-memory` works
3) Reload Claude Code for this workspace
Output a concise summary:
- Server reachable: yes/no
- Entities: N, Relations: M (or unknown)
- Sample search results: top 5 names

View File

@@ -0,0 +1,66 @@
---
description: Manage persistent knowledge via MCP Memory Graph
argument-hint: [intent]
allowed-tools: Read, Write, Grep, Glob, Bash, Task
---
# Memory Graph Command
This command activates the Memory Graph Skill and guides you to use the MCP Memory server tools to persist, update, search, and prune knowledge.
## Inputs
INTENT: ${1:-auto}
## Skill
Include the Memory Graph Skill files:
- .claude/skills/memory-graph/SKILL.md
- .claude/skills/memory-graph/operations.md
- .claude/skills/memory-graph/playbooks.md
- .claude/skills/memory-graph/examples.md
If any file is missing, read the repo to locate them under `.claude/skills/memory-graph/`.
## Behavior
- Detect entities, relations, and observations in the current context
- Use MCP tool names prefixed with `mcp__memory__`
- Avoid duplicates by searching before creating
- Keep observations small and factual, include dates when relevant
- Verify writes with `open_nodes` when helpful
## Planner
1. If INTENT == `auto`, infer one of: `persist-new`, `enrich`, `link`, `correct`, `prune`, `explore`
2. Route per `playbooks.md`
3. Execute the needed MCP tool calls
4. Print a short summary of what changed
5. When appropriate, suggest next relations or entities
## MCP Tooling
Target server name: `memory` (tools appear as `mcp__memory__<tool>`)
Core tools:
- create_entities, add_observations, create_relations
- delete_entities, delete_observations, delete_relations
- read_graph, search_nodes, open_nodes
## Examples
Persist a new service and owner
```
/lazy memory-graph "persist-new service:alpha (owner: alice, repo: org/alpha)"
```
Explore existing graph
```
/lazy memory-graph explore
```
Correct a stale fact
```
/lazy memory-graph "correct owner for service:alpha -> bob"
```

97
.claude/commands/plan.md Normal file
View File

@@ -0,0 +1,97 @@
---
description: Create user story with inline tasks from feature brief
argument-hint: "[feature-description] [--file FILE] [--output-dir DIR]"
allowed-tools: Read, Write, Task, Bash, Grep, Glob
model: claude-haiku-4-5-20251001
---
# Create Feature: Transform Brief to User Story
## Introduction
Transform a brief feature description into a single user story file with tasks included inline.
**Input Sources:**
1. **From STT Prompt Enhancer** (recommended): Enhanced, structured feature description from the STT_PROMPT_ENHANCER project
2. **Direct Input**: Brief text provided directly to command
**Output Structure:**
```
./project-management/US-STORY/US-{STORY_ID}-{story-name}/
└── US-story.md # User story with inline tasks
```
**GitHub Integration:**
Optionally creates a GitHub issue for the story (can be disabled with --no-issue flag).
## Usage Examples
```bash
# From direct input
/lazy create-feature "Add user authentication with OAuth2"
# From STT enhanced file
/lazy create-feature --file enhanced_prompt.md
# With custom output directory
/lazy create-feature "Add analytics dashboard" --output-dir ./docs/project-management/US-STORY
# Skip GitHub issue creation
/lazy create-feature "Build payment processing" --no-issue
```
## Feature Description
<feature_description>
$ARGUMENTS
</feature_description>
## Instructions
### Step 1: Parse Arguments and Load Brief
**Parse Arguments:**
- Check for `--file` flag and read file if provided, otherwise use $ARGUMENTS
- Parse optional `--output-dir` flag (default: `./project-management/US-STORY/`)
- Parse optional `--no-issue` flag (skip GitHub issue creation)
- Verify the brief is not empty
**Error Handling:**
- If `--file` provided but file not found: Return error "File not found at: {path}"
- If no input provided: Return error "No feature brief provided"
### Step 2: Generate Story ID and Create Directory
- Scan `./project-management/US-STORY/` for existing US-* folders
- Generate next story ID (e.g., if US-3.2 exists, next is US-3.3; if none exist, start with US-1.1)
- Create directory: `./project-management/US-STORY/US-{ID}-{story-name}/`
### Step 3: Invoke Project Manager Agent
**Agent**: `project-manager` (at `.claude/agents/project-manager.md`)
The agent will:
1. Read feature brief from conversation
2. Create single US-story.md file with:
- Story description and acceptance criteria
- Tasks listed inline (TASK-1, TASK-2, etc.)
- Security and testing requirements
3. Write file to output directory
### Step 4: Optionally Create GitHub Issue
If `--no-issue` flag NOT provided:
- Create GitHub issue with story content
- Update US-story.md with issue number
### Step 5: Git Add (if in repository)
- Add story file to git: `git add ./project-management/US-STORY/US-{ID}-{name}/`
- Do NOT commit (user commits when ready)
### Step 6: Output Summary
Display:
- Story location
- GitHub issue number (if created)
- Next steps: review story and start implementation

View File

@@ -0,0 +1,563 @@
---
description: Answer questions about code or technical topics without creating artifacts
argument-hint: "<question>"
allowed-tools: Read, Glob, Grep, Bash, Task
---
# Question Command: Intelligent Q&A System
Answer questions about your codebase or general technical topics with zero artifacts.
## Core Philosophy
**Ask anything, get answers, create nothing.**
This command is for Q&A ONLY - no file creation, no documentation generation, no code changes.
## Usage Examples
```bash
# Codebase questions
/lazy question "where is user authentication handled?"
/lazy question "how does the payment processor work?"
/lazy question "what files implement the REST API?"
# General technical questions
/lazy question "what is the difference between REST and GraphQL?"
/lazy question "how to implement OAuth2 in Python?"
/lazy question "best practices for API versioning?"
```
## When to Use
**Use this command when:**
- You need to understand how something works in the codebase
- You want to locate specific functionality
- You have general technical questions
- You need quick documentation lookups
**Do NOT use for:**
- Creating documentation files
- Modifying code
- Generating new files
- Planning features (use `/lazy plan` instead)
## Requirements
**Input:**
- Single question string (clear and specific)
- Can be about codebase OR general knowledge
**Critical:**
- **NO file creation** - answers only
- **NO .md files** - inline responses only
- **NO code generation** - explanation only
- **NO documentation updates** - read-only operation
## Question Type Detection
### Decision Logic
```python
def should_use_codebase(question: str) -> bool:
"""Decide if question is about codebase or general knowledge."""
codebase_indicators = [
"in this", "this codebase", "this project", "this repo",
"where is", "how does", "why does", "what does",
"in our", "our codebase", "our project",
"file", "function", "class", "module",
"implemented", "defined", "located",
"show me", "find", "which file"
]
question_lower = question.lower()
# If question mentions codebase-specific terms → use codebase
if any(ind in question_lower for ind in codebase_indicators):
return True
# If question is general knowledge → use research agent
general_indicators = [
"what is", "how to", "difference between",
"best practice", "tutorial", "documentation",
"learn", "explain", "guide", "introduction"
]
if any(ind in question_lower for ind in general_indicators):
return False
# Default: assume codebase question
return True
```
### Examples by Type
**Codebase Questions (searches project):**
- "where is user authentication handled?"
- "how does this project structure payments?"
- "what files implement the API endpoints?"
- "in our codebase, how is logging configured?"
- "show me where database migrations are defined"
- "which function handles token validation?"
**General Questions (uses research agent):**
- "what is the difference between JWT and session tokens?"
- "how to implement OAuth2 in Python?"
- "best practices for API versioning?"
- "explain what GraphQL is"
- "tutorial on writing pytest fixtures"
## Execution Workflow
### Phase 1: Analyze Question
```python
question = "$ARGUMENTS".strip()
# Determine question type
is_codebase_question = should_use_codebase(question)
if is_codebase_question:
approach = "codebase_search"
tools = ["Grep", "Glob", "Read"]
else:
approach = "research_agent"
tools = ["Task (research agent)"]
```
### Phase 2a: Codebase Question Path
**If question is about the codebase:**
```python
# 1. Extract search terms from question
search_terms = extract_keywords(question)
# Example: "where is authentication handled?" → ["authentication", "auth", "login"]
# 2. Search codebase with Grep
for term in search_terms:
# Search for term in code
matches = grep(pattern=term, output_mode="files_with_matches")
# Search for term in comments/docstrings
doc_matches = grep(pattern=f"(#|//|\"\"\"|\"\"\").*{term}", output_mode="content", -n=True)
# 3. Prioritize results
relevant_files = prioritize_by_relevance(matches, question)
# Priority: src/ > tests/ > docs/
# 4. Read top relevant files
for file in relevant_files[:5]: # Top 5 most relevant
content = Read(file_path=file)
# Extract relevant sections based on search terms
# 5. Analyze and answer
answer = """
Based on codebase analysis:
{synthesized answer from code}
**References:**
- {file1}:{line1}
- {file2}:{line2}
"""
```
**Search Strategy:**
```python
# Identify search terms based on question type
if "where" in question or "which file" in question:
# Location question - find files
search_mode = "files_with_matches"
search_scope = "filenames and content"
elif "how does" in question or "how is" in question:
# Implementation question - show code
search_mode = "content"
search_scope = "function definitions and logic"
context_lines = 10 # Use -C flag
elif "what is" in question and is_codebase_question:
# Definition question - find docstrings/comments
search_mode = "content"
search_scope = "docstrings, comments, README"
```
### Phase 2b: General Question Path
**If question is general knowledge:**
```python
Task(
prompt=f"""
You are the Research Agent for LAZY-DEV-FRAMEWORK.
## Question to Answer
{question}
## Instructions
1. This is a GENERAL technical question (not codebase-specific)
2. Answer based on:
- Your training knowledge
- Industry best practices
- Official documentation (if available)
- Common patterns and conventions
3. Provide a clear, concise answer with:
- Direct answer to the question
- Key concepts explained
- Code examples if relevant (generic, not project-specific)
- Links to official docs/resources
4. Structure answer for readability:
- Use bullet points for lists
- Use code blocks for examples
- Use clear section headers
## Output Format
**Answer:**
{direct answer}
**Key Concepts:**
- {concept 1}
- {concept 2}
**Example (if applicable):**
```language
{generic code example}
```
**Further Reading:**
- {resource 1}
- {resource 2}
**Critical Reminder:**
- Do NOT create any files
- Do NOT search the codebase
- Do NOT reference project-specific code
- This is general knowledge only
"""
)
```
## Output Format
### For Codebase Questions
```markdown
## Answer
Based on codebase analysis:
{synthesized answer from code analysis}
**Key Files:**
- `{file1}` - {purpose}
- `{file2}` - {purpose}
**Code References:**
- {file1}:{line1} - {what happens here}
- {file2}:{line2} - {what happens here}
**Flow:**
{if applicable, explain the flow through multiple files}
```
### For General Questions
```markdown
## Answer
{direct answer to question}
**Key Concepts:**
- {concept 1 with explanation}
- {concept 2 with explanation}
**Example:**
```language
{generic code example if relevant}
```
**Further Reading:**
- {official documentation link}
- {tutorial or guide link}
```
## Validation
**Success criteria:**
- Question answered clearly and accurately
- References provided (for codebase questions)
- No files created or modified
- No commits made
- Response is conversational, not a file
**Anti-patterns (MUST AVOID):**
- Creating documentation files
- Generating .md files
- Writing code to files
- Running git commands
- Creating pull requests
## Examples in Action
### Example 1: Codebase Location Question
```bash
$ /lazy question "where is user authentication handled?"
## Answer
Based on codebase analysis:
User authentication is handled primarily in the `src/auth/` module:
**Key Files:**
- `src/auth/login.py` - Main login endpoint and session management
- `src/auth/middleware.py` - Authentication middleware for protected routes
- `src/auth/jwt_handler.py` - JWT token generation and validation
**Code References:**
- src/auth/login.py:45 - `authenticate_user()` validates credentials
- src/auth/middleware.py:23 - `require_auth()` decorator protects endpoints
- src/auth/jwt_handler.py:67 - `generate_token()` creates JWT tokens
**Authentication Flow:**
1. User submits credentials → `login.py:authenticate_user()`
2. Credentials validated against database
3. JWT token generated → `jwt_handler.py:generate_token()`
4. Token returned to client
5. Protected routes checked by → `middleware.py:require_auth()`
```
### Example 2: Codebase Implementation Question
```bash
$ /lazy question "how does the payment processor work?"
## Answer
Based on codebase analysis:
The payment processor uses a strategy pattern with multiple payment providers:
**Key Files:**
- `src/payment/processor.py` - Main processor coordinator
- `src/payment/providers/stripe.py` - Stripe integration
- `src/payment/providers/paypal.py` - PayPal integration
**Processing Flow:**
1. Client initiates payment → `processor.py:process_payment()`
2. Processor selects provider based on payment method
3. Provider-specific handler called (Stripe/PayPal)
4. Payment validated and processed
5. Result logged and webhook sent
**Code References:**
- src/payment/processor.py:89 - Main `process_payment()` entry point
- src/payment/processor.py:145 - Provider selection logic
- src/payment/providers/stripe.py:34 - Stripe charge creation
- src/payment/webhook.py:23 - Payment confirmation webhook
**Error Handling:**
- Retries configured in processor.py:67 (3 attempts, exponential backoff)
- Failed payments logged to `payment_failures` table
- Customer notified via email on failure
```
### Example 3: General Technical Question
```bash
$ /lazy question "what is the difference between REST and GraphQL?"
## Answer
REST and GraphQL are both API architectural patterns, but they differ in how clients request and receive data:
**Key Differences:**
1. **Data Fetching:**
- REST: Multiple endpoints, each returns fixed data structure
- GraphQL: Single endpoint, client specifies exact data needed
2. **Over/Under-fetching:**
- REST: Often returns too much or too little data
- GraphQL: Client gets exactly what they request
3. **Versioning:**
- REST: Typically requires version in URL (v1, v2)
- GraphQL: Schema evolution without versions
4. **Request Structure:**
- REST: HTTP methods (GET, POST, PUT, DELETE)
- GraphQL: All requests are POST with query body
**Example:**
REST - Multiple requests needed:
```bash
GET /api/users/123 # Get user
GET /api/users/123/posts # Get user's posts
GET /api/posts/456/comments # Get post comments
```
GraphQL - Single request:
```graphql
query {
user(id: 123) {
name
email
posts {
title
comments {
text
author
}
}
}
}
```
**Further Reading:**
- [GraphQL Official Docs](https://graphql.org/learn/)
- [REST API Best Practices](https://restfulapi.net/)
```
### Example 4: Edge Case - Ambiguous Question
```bash
$ /lazy question "how does authentication work?"
# Decision: Too ambiguous - clarify first
## Clarification Needed
Your question "how does authentication work?" could mean:
1. **Codebase-specific:** "How does authentication work in THIS project?"
- Run: `/lazy question "how does authentication work in this codebase?"`
2. **General concept:** "How does authentication work as a concept?"
- Run: `/lazy question "explain authentication concepts and best practices"`
Please rephrase your question to be more specific.
```
## Decision Tree
```
Parse question
Contains codebase indicators? ──Yes──→ Codebase Search Path
│ ↓
No Extract keywords
↓ ↓
Contains general indicators? ──Yes──→ Grep/Glob codebase
│ ↓
No Read relevant files
↓ ↓
Ambiguous Synthesize answer
↓ ↓
Ask for clarification Format with references
Return answer
(NO FILES CREATED)
Research Agent Path:
Delegate to research agent
Agent uses training knowledge
Format answer with examples
Return answer
(NO FILES CREATED)
```
## Key Principles
1. **Read-Only Operation**: Never create, modify, or delete files
2. **Zero Artifacts**: No .md files, no commits, no PRs
3. **Smart Detection**: Auto-determine codebase vs general question
4. **Cite Sources**: Always reference file:line for codebase answers
5. **Conversational**: Return inline answers, not documentation
6. **Focused Search**: Top 5 most relevant files only
7. **Context-Aware**: Use -C flag for code context when needed
## Integration Points
**With other commands:**
```bash
# Learn about codebase before implementing
/lazy question "where is user validation implemented?"
/lazy code "add email validation to user signup"
# Understand before documenting
/lazy question "how does the API rate limiting work?"
/lazy docs src/api/rate_limiter.py
# Research before planning
/lazy question "best practices for OAuth2 implementation"
/lazy plan "add OAuth2 authentication"
```
## Environment Variables
None required - this is a pure Q&A command.
## Troubleshooting
**Issue: "No results found"**
```
Try rephrasing your question:
- Use different keywords
- Be more specific about file types or modules
- Check if functionality exists in project
```
**Issue: "Too many results"**
```
Narrow your question:
- Specify module or component
- Add context about feature area
- Ask about specific file/function
```
**Issue: "Wrong type detected"**
```
Force codebase search:
- Add "in this codebase" to question
Force general search:
- Add "explain" or "what is" to question
```
## Anti-Patterns to Avoid
**DO NOT:**
- Create documentation files from answers
- Generate code files based on research
- Write .md files with Q&A content
- Make commits or PRs
- Modify existing files
- Create new directories
**DO:**
- Answer questions inline
- Provide file references
- Show code snippets in response
- Explain concepts clearly
- Link to external resources
---
**Version:** 2.2.0
**Status:** Production-Ready
**Philosophy:** Ask anything, get answers, create nothing.

1308
.claude/commands/review.md Normal file

File diff suppressed because it is too large Load Diff