Initial commit

Zhongwei Li, 2025-11-30 08:40:14 +08:00, commit 0c7b748696
11 changed files with 1157 additions and 0 deletions


@@ -0,0 +1,148 @@
---
name: feature-planning
description: Break down feature requests into detailed, implementable plans with clear tasks. Use when user requests a new feature, enhancement, or complex change.
---
# Feature Planning
Systematically analyze feature requests and create detailed, actionable implementation plans.
## When to Use
Activate when the user:
- Requests new feature ("add user authentication", "build dashboard")
- Asks for enhancements ("improve performance", "add export")
- Describes complex multi-step changes
- Explicitly asks for planning ("plan how to implement X")
- Provides vague requirements needing clarification
## Planning Workflow
### 1. Understand Requirements
**Ask clarifying questions:**
- What problem does this solve?
- Who are the users?
- Specific technical constraints?
- What does success look like?
**Explore the codebase:**
Use the Task tool with `subagent_type='Explore'` and `thoroughness='medium'` (see the example after this list) to understand:
- Existing architecture and patterns
- Similar features to reference
- Where new code should live
- What will be affected
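For example, a typical exploration request might look like this (the description and prompt are illustrative):
```
Task tool with:
- subagent_type: 'Explore'
- thoroughness: 'medium'
- description: 'Explore authentication code'
- prompt: Map the existing auth flow, note similar features to
  reference, and identify where new code should live
```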
### 2. Analyze & Design
**Identify components:**
- Database changes (models, migrations, schemas)
- Backend logic (API endpoints, business logic, services)
- Frontend changes (UI, state, routing)
- Testing requirements
- Documentation updates
**Consider architecture:**
- Follow existing patterns (check CLAUDE.md)
- Identify reusable components
- Plan error handling and edge cases
- Consider performance implications
- Think about security and validation
**Check dependencies:**
- New packages/libraries needed
- Compatibility with existing stack
- Configuration changes required
### 3. Create Implementation Plan
Break the feature into **discrete, sequential tasks**:
```markdown
## Feature: [Feature Name]
### Overview
[Brief description of what will be built and why]
### Architecture Decisions
- [Key decision 1 and rationale]
- [Key decision 2 and rationale]
### Implementation Tasks
#### Task 1: [Component Name]
- **File**: `path/to/file.py:123`
- **Description**: [What needs to be done]
- **Details**:
- [Specific requirement 1]
- [Specific requirement 2]
- **Dependencies**: None (or list task numbers)
#### Task 2: [Component Name]
...
### Testing Strategy
- [What types of tests needed]
- [Critical test cases to cover]
### Integration Points
- [How this connects with existing code]
- [Potential impacts on other features]
```
**Include specific references:**
- File paths with line numbers (`src/utils/auth.py:45`)
- Existing patterns to follow
- Relevant documentation
### 4. Review Plan with User
Confirm:
- Does this match expectations?
- Missing requirements?
- Adjust priorities or approach?
- Ready to proceed?
### 5. Execute with plan-implementer
Launch plan-implementer agent for each task:
```
Task tool with:
- subagent_type: 'plan-implementer'
- description: 'Implement [task name]'
- prompt: Detailed task description from plan
```
**Execution strategy:**
- Implement sequentially (respect dependencies)
- Verify each task before next
- Adjust plan if issues discovered
- Let test-fixing skill handle failures
- Let git-pushing skill handle commits
## Best Practices
**Planning:**
- Start broad, then specific
- Reference existing code patterns
- Include file paths and line numbers
- Think through edge cases upfront
- Keep tasks focused and atomic
**Communication:**
- Explain architectural decisions
- Highlight trade-offs and alternatives
- Be explicit about assumptions
- Provide context for future maintainers
**Execution:**
- Implement one task at a time
- Verify before moving forward
- Keep user informed
- Adapt based on discoveries
## Integration
- **plan-implementer agent**: Receives task specs, implements
- **test-fixing skill**: Auto-triggered on test failures
- **git-pushing skill**: Triggered for commits


@@ -0,0 +1,381 @@
# Feature Planning Best Practices
This guide provides detailed best practices for creating effective implementation plans.
## Task Granularity
### Right-Sized Tasks
**Too Large:**
- "Implement user authentication system"
- "Build the entire dashboard"
- "Add social media integration"
**Too Small:**
- "Import the uuid library"
- "Add a single line of code"
- "Create an empty file"
**Just Right:**
- "Create User model with email, password hash, and timestamps"
- "Implement JWT token generation and validation utility"
- "Add login endpoint with rate limiting"
- "Create protected route middleware"
### Task Independence
Each task should be self-contained enough to implement without frequent context switching:
**Good:**
```markdown
#### Task 1: Database Schema
Create the `users` table with proper indexes and constraints.
Dependencies: None
#### Task 2: User Model
Implement the User model class with validation methods.
Dependencies: Task 1
#### Task 3: Auth Service
Create authentication service for login/logout operations.
Dependencies: Task 2
```
**Avoid:**
```markdown
#### Task 1: Part 1 of User Authentication
Add some user fields...
#### Task 2: Part 2 of User Authentication
Finish the user fields from Task 1...
```
## Specificity in Task Descriptions
### Include File Paths
**Vague:**
"Add the authentication logic"
**Specific:**
"Add authentication logic in `src/services/auth.py:45` following the pattern from `src/services/session.py:23-67`"
### Provide Context
**Vague:**
"Update the user handler"
**Specific:**
"Update the user handler in `api/handlers/user.py:120` to validate email format using the existing `validators.email()` utility. Follow the validation pattern from `api/handlers/auth.py:89`"
### Specify Expected Behavior
**Vague:**
"Add error handling"
**Specific:**
"Add try/except block to catch `DatabaseError` and return 500 status with error message. Log the full exception using the existing logger instance. See similar handling in `api/handlers/base.py:34`"
## Architecture Decision Documentation
Always explain **why**, not just **what**:
**Good:**
```markdown
### Architecture Decisions
**Choice: JWT tokens over sessions**
- Rationale: API needs to be stateless for horizontal scaling
- Trade-off: Tokens can't be revoked until expiry (mitigated with 15min expiry)
- Implementation: Use existing `pyjwt` library already in project
**Choice: Bcrypt for password hashing**
- Rationale: Industry standard, already used in related project
- Configuration: Cost factor of 12 (matches security team guidelines)
```
**Avoid:**
```markdown
### Architecture Decisions
- Using JWT
- Using Bcrypt
```
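To ground the sample decisions above, here is a minimal Python sketch of both choices (assuming the `pyjwt` and `bcrypt` packages; the secret and claim names are illustrative, not this project's actual code):
```python
from datetime import datetime, timedelta, timezone

import bcrypt
import jwt  # pyjwt

SECRET = "change-me"   # illustrative; load from config in practice
BCRYPT_ROUNDS = 12     # cost factor matching the guideline above


def hash_password(password: str) -> bytes:
    # Bcrypt embeds the salt in the returned hash
    return bcrypt.hashpw(password.encode(), bcrypt.gensalt(rounds=BCRYPT_ROUNDS))


def verify_password(password: str, hashed: bytes) -> bool:
    return bcrypt.checkpw(password.encode(), hashed)


def issue_token(user_id: str) -> str:
    # The short 15-minute expiry is the stated mitigation for non-revocable tokens
    payload = {"sub": user_id, "exp": datetime.now(timezone.utc) + timedelta(minutes=15)}
    return jwt.encode(payload, SECRET, algorithm="HS256")


def decode_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```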
## Dependencies and Ordering
### Explicit Dependencies
```markdown
#### Task 1: Create User Model
Dependencies: None
#### Task 2: Create UserRepository
Dependencies: Task 1 (needs User model definition)
#### Task 3: Implement Registration Endpoint
Dependencies: Task 1, Task 2
#### Task 4: Add Registration Tests
Dependencies: Task 3
```
### Natural Ordering
Follow the natural flow of development:
1. **Foundation first**: Models, schemas, database
2. **Core logic**: Services, utilities, business logic
3. **API layer**: Endpoints, handlers, middleware
4. **Frontend**: UI components, state management
5. **Integration**: Connecting pieces
6. **Testing**: Comprehensive tests
7. **Documentation**: User guides, API docs
## Edge Cases and Error Handling
### Identify Early
For each task, consider:
- What could go wrong?
- What are the edge cases?
- How should errors be handled?
- What validation is needed?
**Example:**
```markdown
#### Task 3: User Registration Endpoint
**Core functionality:**
- Accept email and password
- Validate input
- Create user record
- Return success response
**Edge cases to handle:**
- Email already exists → Return 409 Conflict
- Invalid email format → Return 400 with validation error
- Password too weak → Return 400 with requirements
- Database connection failure → Return 500, log error
- Rate limiting → Return 429 if >5 attempts/minute
**Validation:**
- Email: Valid format, max 255 chars, lowercase
- Password: Min 8 chars, must include number and special char
```
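A framework-agnostic Python sketch of that validation and edge-case handling (the in-memory set is a stand-in for the database; rate limiting and 500 handling are omitted for brevity):
```python
import re

EXISTING_EMAILS = {"taken@example.com"}  # stand-in for a database lookup


def register(email: str, password: str) -> tuple[int, dict]:
    """Return (status_code, body) for a registration attempt."""
    email = email.strip().lower()
    if len(email) > 255 or not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return 400, {"error": "invalid email format"}
    if (len(password) < 8 or not re.search(r"\d", password)
            or not re.search(r"[^A-Za-z0-9]", password)):
        return 400, {"error": "password needs 8+ chars, a number, and a special char"}
    if email in EXISTING_EMAILS:
        return 409, {"error": "email already registered"}
    EXISTING_EMAILS.add(email)  # real code would create the user record here
    return 201, {"email": email}


assert register("new@example.com", "s3cure!pw")[0] == 201
assert register("taken@example.com", "s3cure!pw")[0] == 409
assert register("bad-email", "s3cure!pw")[0] == 400
```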
## Testing Strategy
### Test Coverage Planning
For each feature, identify:
- **Unit tests**: Individual functions and methods
- **Integration tests**: API endpoints, database operations
- **Edge case tests**: Error conditions, boundary values
- **Security tests**: Authentication, authorization, input validation
**Example:**
```markdown
### Testing Strategy
**Unit Tests:**
- `test_hash_password()` - Password hashing utility
- `test_validate_email()` - Email validation
- `test_generate_jwt_token()` - Token generation
**Integration Tests:**
- `test_register_user_success()` - Happy path registration
- `test_register_duplicate_email()` - Duplicate prevention
- `test_login_with_valid_credentials()` - Authentication flow
- `test_login_with_invalid_credentials()` - Auth failure
**Security Tests:**
- `test_sql_injection_prevention()` - Input sanitization
- `test_rate_limiting()` - Brute force protection
- `test_password_strength_requirements()` - Password policy
```
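A short sketch of what the function-style pytest tests named above might look like (`validate_email` here is a local stand-in for the project's `validators.email()` utility):
```python
import re

import pytest


def validate_email(value: str) -> bool:
    # Stand-in for the project's validators.email() utility
    return len(value) <= 255 and bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))


@pytest.mark.parametrize(
    ("email", "expected"),
    [
        ("user@example.com", True),
        ("no-at-sign", False),
        ("a@" + "b" * 250 + ".com", False),  # exceeds the 255-char limit
    ],
)
def test_validate_email(email: str, expected: bool) -> None:
    assert validate_email(email) is expected
```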
## Example: Complete Feature Plan
```markdown
## Feature: User Profile Management
### Overview
Allow users to view and edit their profile information including name, bio, and avatar. Profiles are public but only editable by the owner.
### Architecture Decisions
**Choice: Store avatars in S3**
- Rationale: Existing S3 bucket and upload utilities available
- Location: Follow pattern from `src/utils/s3_upload.py`
- Size limit: 5MB (enforced in backend)
**Choice: Optimistic UI updates**
- Rationale: Better UX, matches existing patterns
- Rollback on error via Redux state management
### Implementation Tasks
#### Task 1: Extend User Model
- **File**: `src/models/user.py:23`
- **Description**: Add profile fields to User model
- **Details**:
- Add fields: `bio` (text, max 500 chars), `avatar_url` (string, nullable)
- Create migration for new fields
- Add validation methods
- **Dependencies**: None
- **Reference**: See model pattern in `src/models/post.py:15-45`
#### Task 2: Create Profile API Endpoints
- **File**: `src/api/routes/profile.py` (new file)
- **Description**: Add GET and PATCH endpoints for profile
- **Details**:
- `GET /api/users/:id/profile` - Public, anyone can view
- `PATCH /api/users/:id/profile` - Protected, owner only
- Authorization: Use existing `@require_auth` decorator
- Validation: Bio max 500 chars, avatar_url is valid URL
- **Dependencies**: Task 1
- **Reference**: Follow pattern from `src/api/routes/posts.py:67-89`
#### Task 3: Avatar Upload Utility
- **File**: `src/services/avatar_service.py` (new file)
- **Description**: Handle avatar upload to S3 with validation
- **Details**:
- Accept image files (jpg, png, gif)
- Validate size (max 5MB)
- Resize to 400x400px
- Upload to S3 using existing `S3Upload` class
- Return public URL
- **Dependencies**: None (can work parallel to Task 1-2)
- **Reference**: Use `src/utils/s3_upload.py` and `src/services/image_processor.py`
#### Task 4: Frontend Profile Component
- **File**: `src/components/Profile/ProfileView.tsx` (new file)
- **Description**: Display user profile information
- **Details**:
- Show avatar, name, bio
- Conditional "Edit" button if viewing own profile
- Responsive layout
- Loading and error states
- **Dependencies**: Task 2 (needs API)
- **Reference**: Similar layout to `src/components/Post/PostView.tsx`
#### Task 5: Frontend Profile Edit Component
- **File**: `src/components/Profile/ProfileEdit.tsx` (new file)
- **Description**: Edit profile with avatar upload
- **Details**:
- Form with bio textarea, avatar file upload
- Character counter for bio (500 max)
- Image preview before upload
- Optimistic updates on save
- Validation and error display
- **Dependencies**: Task 3, Task 4
- **Reference**: Form pattern from `src/components/Post/PostEditor.tsx`
#### Task 6: Integration Tests
- **File**: `tests/integration/test_profile_api.py` (new file)
- **Description**: Test profile endpoints
- **Tests**:
- View any user's profile (public)
- Edit own profile (success)
- Edit other's profile (forbidden)
- Bio validation (too long)
- Avatar upload (success and size limit)
- **Dependencies**: Task 1, Task 2, Task 3
#### Task 7: Frontend Tests
- **File**: `tests/components/Profile.test.tsx` (new file)
- **Description**: Test profile components
- **Tests**:
- ProfileView renders correctly
- Edit button shown for own profile only
- ProfileEdit form validation
- Avatar upload preview
- Optimistic update behavior
- **Dependencies**: Task 4, Task 5
### Testing Strategy
**Unit Tests:**
- User model validation methods
- Avatar resizing logic
- S3 upload utility
**Integration Tests:**
- API endpoints (all success and error cases)
- Authorization (owner vs non-owner)
- End-to-end profile update flow
**Security:**
- Authorization enforcement (can't edit others' profiles)
- File upload validation (type, size)
- XSS prevention in bio field
### Integration Points
**Existing systems affected:**
- User model (extends with new fields)
- S3 bucket (shares with post images)
- Auth middleware (reuses existing decorators)
**New routes:**
- `GET /api/users/:id/profile`
- `PATCH /api/users/:id/profile`
- `POST /api/users/:id/avatar` (upload)
```
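As an illustration of how one of the planned tasks might be implemented, here is a minimal Python sketch of Task 3's validation and resizing step (assuming Pillow; the S3 upload itself is left to the existing utility, which isn't shown here):
```python
import io

from PIL import Image  # Pillow

MAX_BYTES = 5 * 1024 * 1024
ALLOWED_FORMATS = {"JPEG", "PNG", "GIF"}


def process_avatar(data: bytes) -> bytes:
    """Validate and resize an uploaded avatar; returns 400x400 PNG bytes."""
    if len(data) > MAX_BYTES:
        raise ValueError("avatar exceeds 5MB limit")
    img = Image.open(io.BytesIO(data))
    if img.format not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {img.format}")
    img = img.convert("RGB").resize((400, 400))
    out = io.BytesIO()
    img.save(out, format="PNG")
    return out.getvalue()  # hand these bytes to the existing S3 upload utility
```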
## Anti-Patterns to Avoid
### 1. Over-Planning
**Bad:**
Planning every single line of code, including variable names and exact syntax.
**Good:**
Plan structure, key decisions, and task boundaries. Let implementer handle details.
### 2. Under-Planning
**Bad:**
"Add user profiles" with no further detail.
**Good:**
Break down into tasks with context, files, and acceptance criteria.
### 3. Ignoring Existing Patterns
**Bad:**
Designing from scratch without checking the codebase.
**Good:**
Reference existing code patterns and maintain consistency.
### 4. Skipping Edge Cases
**Bad:**
Only planning the happy path.
**Good:**
Explicitly call out error handling and validation for each task.
### 5. Creating Dependent Chains
**Bad:**
Every task depends on the previous one, forcing strict sequential execution.
**Good:**
Identify tasks that can be parallelized (e.g., backend and frontend utilities).
## Checklist for a Good Plan
Before handing off to plan-implementer, verify:
- [ ] Requirements are clearly understood
- [ ] Architecture decisions are documented with rationale
- [ ] Tasks are right-sized (30min - 2hrs each)
- [ ] Each task has clear acceptance criteria
- [ ] Dependencies are explicitly stated
- [ ] File paths and code references are included
- [ ] Edge cases and error handling are addressed
- [ ] Testing strategy is defined
- [ ] Follows existing project patterns (checked CLAUDE.md)
- [ ] User has reviewed and approved the plan


@@ -0,0 +1,31 @@
---
name: git-pushing
description: Stage, commit, and push git changes with conventional commit messages. Use when user wants to commit and push changes, mentions pushing to remote, or asks to save and push their work. Also activates when user says "push changes", "commit and push", "push this", "push to github", or similar git workflow requests.
---
# Git Push Workflow
Stage all changes, create a conventional commit, and push to the remote branch.
## When to Use
Automatically activate when the user:
- Explicitly asks to push changes ("push this", "commit and push")
- Mentions saving work to remote ("save to github", "push to remote")
- Completes a feature and wants to share it
- Says phrases like "let's push this up" or "commit these changes"
## Workflow
**ALWAYS use the script** - do NOT use manual git commands:
```bash
bash skills/git-pushing/scripts/smart_commit.sh
```
With custom message:
```bash
bash skills/git-pushing/scripts/smart_commit.sh "feat: add feature"
```
The script handles staging, generating a conventional commit message, appending the Claude footer, and pushing (using `-u` for new branches).


@@ -0,0 +1,153 @@
#!/bin/bash
# Smart Git Commit Script for git-pushing skill
# Handles staging, commit message generation, and pushing
set -e # Exit on error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Function to print colored messages
info() { echo -e "${GREEN}✓${NC} $1"; }
warn() { echo -e "${YELLOW}!${NC} $1"; }
error() { echo -e "${RED}✗${NC} $1" >&2; }
# Get current branch
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
info "Current branch: $CURRENT_BRANCH"
# Check if there are changes
if git diff --quiet && git diff --cached --quiet; then
    warn "No changes to commit"
    exit 0
fi
# Stage all changes
info "Staging all changes..."
git add .
# Get staged files for commit message analysis
STAGED_FILES=$(git diff --cached --name-only)
DIFF_STAT=$(git diff --cached --stat)
# Analyze changes to determine commit type
determine_commit_type() {
    local files="$1"
    # Check for specific file patterns first, then diff content
    if echo "$files" | grep -q "test"; then
        echo "test"
    elif echo "$files" | grep -qE "\.(md|txt|rst)$"; then
        echo "docs"
    elif echo "$files" | grep -qE "package\.json|requirements\.txt|Cargo\.toml"; then
        echo "chore"
    elif git diff --cached | grep -qE "^\+.*(fix|bug)"; then
        echo "fix"
    elif git diff --cached | grep -qE "^\+.*refactor"; then
        echo "refactor"
    else
        echo "feat"
    fi
}
# Analyze files to determine scope
determine_scope() {
    local files="$1"
    # Extract directory or component name
    local scope
    scope=$(echo "$files" | head -1 | cut -d'/' -f1)
    # Check for common patterns
    if echo "$files" | grep -q "plugin"; then
        echo "plugin"
    elif echo "$files" | grep -q "skill"; then
        echo "skill"
    elif echo "$files" | grep -q "agent"; then
        echo "agent"
    elif [ -n "$scope" ] && [ "$scope" != "." ]; then
        echo "$scope"
    else
        echo ""
    fi
}
# Generate commit message if not provided
if [ -z "$1" ]; then
    COMMIT_TYPE=$(determine_commit_type "$STAGED_FILES")
    SCOPE=$(determine_scope "$STAGED_FILES")
    # Count files changed
    NUM_FILES=$(echo "$STAGED_FILES" | wc -l | xargs)
    # Generate description based on changes
    if [ "$COMMIT_TYPE" = "docs" ]; then
        DESCRIPTION="update documentation"
    elif [ "$COMMIT_TYPE" = "test" ]; then
        DESCRIPTION="update tests"
    elif [ "$COMMIT_TYPE" = "chore" ]; then
        DESCRIPTION="update dependencies"
    else
        DESCRIPTION="update $NUM_FILES file(s)"
    fi
    # Build commit message
    if [ -n "$SCOPE" ]; then
        COMMIT_MSG="${COMMIT_TYPE}(${SCOPE}): ${DESCRIPTION}"
    else
        COMMIT_MSG="${COMMIT_TYPE}: ${DESCRIPTION}"
    fi
    info "Generated commit message: $COMMIT_MSG"
else
    COMMIT_MSG="$1"
    info "Using provided message: $COMMIT_MSG"
fi
# Create commit with Claude Code footer
git commit -m "$(cat <<EOF
${COMMIT_MSG}

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
COMMIT_HASH=$(git rev-parse --short HEAD)
info "Created commit: $COMMIT_HASH"
# Push to remote
info "Pushing to origin/$CURRENT_BRANCH..."
# Check if branch exists on remote
if git ls-remote --exit-code --heads origin "$CURRENT_BRANCH" >/dev/null 2>&1; then
    # Branch exists on the remote, just push
    if git push; then
        info "Successfully pushed to origin/$CURRENT_BRANCH"
        echo "$DIFF_STAT"
    else
        error "Push failed"
        exit 1
    fi
else
    # New branch, push with -u to set the upstream
    if git push -u origin "$CURRENT_BRANCH"; then
        info "Successfully pushed new branch to origin/$CURRENT_BRANCH"
        echo "$DIFF_STAT"
        # If the remote is GitHub, surface a PR creation link
        REMOTE_URL=$(git remote get-url origin)
        if echo "$REMOTE_URL" | grep -q "github.com"; then
            REPO=$(echo "$REMOTE_URL" | sed -E 's/.*github\.com[:/](.*)\.git/\1/')
            warn "Create PR: https://github.com/$REPO/pull/new/$CURRENT_BRANCH"
        fi
    else
        error "Push failed"
        exit 1
    fi
fi
exit 0


@@ -0,0 +1,136 @@
---
name: review-implementing
description: Process and implement code review feedback systematically. Use when user provides reviewer comments, PR feedback, code review notes, or asks to implement suggestions from reviews.
---
# Review Feedback Implementation
Systematically process and implement changes based on code review feedback.
## When to Use
Activate when the user:
- Provides reviewer comments or feedback
- Pastes PR review notes
- Mentions implementing review suggestions
- Says "address these comments" or "implement feedback"
- Shares list of changes requested by reviewers
## Systematic Workflow
### 1. Parse Reviewer Notes
Identify the individual feedback items (see the sketch after this list):
- Split numbered lists (1., 2., etc.)
- Handle bullet points or unnumbered feedback
- Extract distinct change requests
- Clarify ambiguous items before starting
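One simple way to split such notes into discrete items, sketched in Python (the regex assumes numbered or bulleted feedback, as described above):
```python
import re


def split_feedback(notes: str) -> list[str]:
    # Split on leading "1." / "2)" / "-" / "*" markers, one item per match
    items = re.split(r"(?m)^\s*(?:\d+[.)]|[-*])\s+", notes)
    return [item.strip() for item in items if item.strip()]


notes = """
1. Add type hints to the extract function
2. Fix duplicate tag detection logic
- Update docstring in chain.py
"""
print(split_feedback(notes))
# ['Add type hints to the extract function',
#  'Fix duplicate tag detection logic',
#  'Update docstring in chain.py']
```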
### 2. Create Todo List
Use TodoWrite tool to create actionable tasks:
- Each feedback item becomes one or more todos
- Break down complex feedback into smaller tasks
- Make tasks specific and measurable
- Mark first task as `in_progress` before starting
Example:
```
- Add type hints to extract function
- Fix duplicate tag detection logic
- Update docstring in chain.py
- Add unit test for edge case
```
### 3. Implement Changes Systematically
For each todo item:
**Locate relevant code:**
- Use Grep to search for functions/classes
- Use Glob to find files by pattern
- Read current implementation
**Make changes:**
- Use Edit tool for modifications
- Follow project conventions (CLAUDE.md)
- Preserve existing functionality unless changing behavior
**Verify changes:**
- Check syntax correctness
- Run relevant tests if applicable
- Ensure changes address reviewer's intent
**Update status:**
- Mark todo as `completed` immediately after finishing
- Move to next todo (only one `in_progress` at a time)
### 4. Handle Different Feedback Types
**Code changes:**
- Use Edit tool for existing code
- Follow type hint conventions (PEP 604/585)
- Maintain consistent style
**New features:**
- Create new files with Write tool if needed
- Add corresponding tests
- Update documentation
**Documentation:**
- Update docstrings following project style
- Modify markdown files as needed
- Keep explanations concise
**Tests:**
- Write tests as functions, not classes
- Use descriptive names
- Follow pytest conventions
**Refactoring:**
- Preserve functionality
- Improve code structure
- Run tests to verify no regressions
### 5. Validation
After implementing changes:
- Run affected tests
- Check for linting errors: `uv run ruff check`
- Verify changes don't break existing functionality
### 6. Communication
Keep user informed:
- Update todo list in real-time
- Ask for clarification on ambiguous feedback
- Report blockers or challenges
- Summarize changes at completion
## Edge Cases
**Conflicting feedback:**
- Ask user for guidance
- Explain conflict clearly
**Breaking changes required:**
- Notify user before implementing
- Discuss impact and alternatives
**Tests fail after changes:**
- Fix tests before marking todo complete
- Ensure all related tests pass
**Referenced code doesn't exist:**
- Ask user for clarification
- Verify understanding before proceeding
## Important Guidelines
- **Always use TodoWrite** for tracking progress
- **Mark todos completed immediately** after each item
- **Only one todo in_progress** at any time
- **Don't batch completions** - update status in real-time
- **Ask questions** for unclear feedback
- **Run tests** if changes affect tested code
- **Follow CLAUDE.md conventions** for all code changes
- **Use conventional commits** if creating commits afterward

skills/test-fixing/SKILL.md

@@ -0,0 +1,109 @@
---
name: test-fixing
description: Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass.
---
# Test Fixing
Systematically identify and fix all failing tests using smart grouping strategies.
## When to Use
Activate when the user:
- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "test suite is broken")
- Completes implementation and wants tests passing
- Mentions CI/CD failures due to tests
## Systematic Approach
### 1. Initial Test Run
Run `make test` to identify all failing tests.
Analyze output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
### 2. Smart Error Grouping
Group similar failures by the following criteria (sketched in code below):
- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: Same file causing multiple test failures
- **Root cause**: Missing dependencies, API changes, refactoring impacts
Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
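A small Python sketch of that grouping, assuming pytest's short summary format (the `FAILED <test> - <Error>: <message>` lines printed by `pytest -ra`; the sample lines are illustrative):
```python
from collections import defaultdict

summary = """
FAILED tests/test_api.py::test_login - ImportError: cannot import name 'auth'
FAILED tests/test_api.py::test_logout - ImportError: cannot import name 'auth'
FAILED tests/test_models.py::test_user - AttributeError: 'User' has no attribute 'bio'
""".strip().splitlines()

groups: dict[str, list[str]] = defaultdict(list)
for line in summary:
    # Each line: "FAILED <nodeid> - <ErrorType>: <message>"
    test_id, _, error = line.removeprefix("FAILED ").partition(" - ")
    groups[error.split(":")[0]].append(test_id)

# Fix the biggest group first
for error_type, tests in sorted(groups.items(), key=lambda kv: -len(kv[1])):
    print(f"{error_type}: {len(tests)} test(s)")
# ImportError: 2 test(s)
# AttributeError: 1 test(s)
```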
### 3. Systematic Fixing Process
For each group (starting with highest impact):
1. **Identify root cause**
- Read relevant code
- Check recent changes with `git diff`
- Understand the error pattern
2. **Implement fix**
- Use Edit tool for code changes
- Follow project conventions (see CLAUDE.md)
- Make minimal, focused changes
3. **Verify fix**
- Run subset of tests for this group
- Use pytest markers or file patterns:
```bash
uv run pytest tests/path/to/test_file.py -v
uv run pytest -k "pattern" -v
```
- Ensure group passes before moving on
4. **Move to next group**
### 4. Fix Order Strategy
**Infrastructure first:**
- Import errors
- Missing dependencies
- Configuration issues
**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions
**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
### 5. Final Verification
After all groups fixed:
- Run complete test suite: `make test`
- Verify no regressions
- Check test coverage remains intact
## Best Practices
- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to next group until current passes
- Keep changes minimal and focused
## Example Workflow
User: "The tests are failing after my refactor"
1. Run `make test` → 15 failures identified
2. Group errors:
- 8 ImportErrors (module renamed)
- 5 AttributeErrors (function signature changed)
- 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓