Initial commit

Zhongwei Li
2025-11-29 18:50:24 +08:00
commit f172746dc6
52 changed files with 17406 additions and 0 deletions


@@ -0,0 +1,515 @@
# Agent Orchestration Strategies
This file describes how to coordinate multiple specialized agents for complex implementation tasks.
## Agent Overview
### Available Agents for Implementation
```yaml
workflow-coordinator:
  role: "Workflow validation and phase coordination"
  use_first: true
  validates:
    - Planning phase completed
    - Specification exists in active/
    - Prerequisites met
  coordinates: "Transition from planning to implementation"

implementer:
  role: "Core feature development"
  specializes:
    - Building new features
    - Implementing specifications
    - Writing production code
    - Updating documentation

architect:
  role: "System design and architecture"
  specializes:
    - Architecture decisions
    - Component design
    - System refactoring
    - Design patterns

security:
  role: "Security review and implementation"
  specializes:
    - Authentication systems
    - Authorization logic
    - Encryption implementation
    - Security best practices

qa:
  role: "Quality assurance and testing"
  specializes:
    - Test creation
    - Coverage analysis
    - Test strategy
    - Quality validation

refactorer:
  role: "Code improvement and consistency"
  specializes:
    - Code refactoring
    - Consistency enforcement
    - Code smell removal
    - Multi-file updates

researcher:
  role: "Code exploration and analysis"
  specializes:
    - Dependency mapping
    - Pattern identification
    - Impact analysis
    - Codebase exploration
```
---
## Agent Selection Rules
### Task-Based Selection
**Use this matrix to determine which agents to invoke:**
```yaml
Authentication Feature:
  primary: architect       # Design auth flow
  secondary: security      # Security requirements
  tertiary: implementer    # Build feature
  final: qa                # Create tests

API Development:
  primary: architect       # Design API structure
  secondary: implementer   # Build endpoints
  tertiary: qa             # Create API tests

Bug Fix:
  primary: researcher      # Find root cause
  secondary: implementer   # Fix the bug
  tertiary: qa             # Add regression test

Refactoring:
  primary: researcher      # Analyze impact
  secondary: architect     # Design new structure
  tertiary: refactorer     # Update consistently
  final: qa                # Validate no regressions

Multi-file Changes:
  primary: researcher      # Map dependencies
  secondary: refactorer    # Update consistently
  tertiary: qa             # Ensure nothing breaks

Performance Optimization:
  primary: researcher      # Profile and analyze
  secondary: implementer   # Implement optimization
  tertiary: qa             # Performance tests

Security Feature:
  primary: security        # Define requirements
  secondary: implementer   # Build securely
  tertiary: qa             # Security tests
```
---
## Agent Chaining Patterns
### Sequential Chaining
**When to Use:** Tasks that must be done in a specific order.
**Pattern:**
```yaml
Step 1: Use agent A to complete task
  → Wait for completion
Step 2: Use agent B to build on A's work
  → Wait for completion
Step 3: Use agent C to finalize
  → Wait for completion
```
**Example: Authentication System**
```yaml
Step 1: Use the architect agent to:
  - Design authentication flow
  - Define session management strategy
  - Plan token structure

Step 2: Use the security agent to:
  - Review architect's design
  - Add security requirements
  - Define encryption standards

Step 3: Use the implementer agent to:
  - Implement auth flow per design
  - Apply security requirements
  - Build according to specifications

Step 4: Use the qa agent to:
  - Create unit tests
  - Create integration tests
  - Create security tests
```
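To make the handoff mechanics concrete, here is a minimal Python sketch of sequential chaining. The `run_agent` helper and `AgentResult` type are hypothetical stand-ins for a real agent invocation (e.g. via the Task tool); only the strictly ordered control flow is the point:
```python
from dataclasses import dataclass


@dataclass
class AgentResult:
    """Result of one agent invocation (illustrative)."""
    agent: str
    output: str


def run_agent(agent: str, tasks: list[str]) -> AgentResult:
    """Hypothetical stand-in for dispatching work to a specialized agent."""
    return AgentResult(agent=agent, output=f"{agent} finished {len(tasks)} task(s)")


def sequential_chain(steps: list[tuple[str, list[str]]]) -> list[AgentResult]:
    """Run agents strictly in order; each step starts only after the previous completes."""
    results: list[AgentResult] = []
    for agent, tasks in steps:
        results.append(run_agent(agent, tasks))  # wait for completion before moving on
    return results


for result in sequential_chain([
    ("architect", ["Design auth flow", "Plan token structure"]),
    ("security", ["Review design", "Define encryption standards"]),
    ("implementer", ["Implement auth flow per design"]),
    ("qa", ["Create unit, integration, and security tests"]),
]):
    print(result.output)
```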
### Parallel Coordination
**When to Use:** Independent tasks that can be done simultaneously.
**Pattern:**
```yaml
Spawn Multiple Agents in Parallel:
  - Agent A: Task 1 (independent)
  - Agent B: Task 2 (independent)
  - Agent C: Task 3 (independent)

Wait for All Completions
Consolidate Results
```
**Example: Feature with Multiple Components**
```yaml
Parallel Tasks:
  - Use the implementer agent to: Build API endpoints
  - Use the qa agent to: Create test fixtures
  - Use the researcher agent to: Document existing patterns

All agents work simultaneously on independent tasks.

After All Complete:
  - Integrate API with tests
  - Apply documented patterns
  - Validate complete feature
```
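A minimal sketch of the same idea with Python's standard `concurrent.futures`, reusing the hypothetical `run_agent` stub from the sequential sketch above; results are consolidated only after every independent task has finished:
```python
from concurrent.futures import ThreadPoolExecutor

# Independent tasks from the example above; run_agent is the illustrative stub.
parallel_tasks = [
    ("implementer", ["Build API endpoints"]),
    ("qa", ["Create test fixtures"]),
    ("researcher", ["Document existing patterns"]),
]

with ThreadPoolExecutor(max_workers=len(parallel_tasks)) as pool:
    futures = [pool.submit(run_agent, agent, tasks) for agent, tasks in parallel_tasks]
    results = [future.result() for future in futures]  # consolidate only after all finish

for result in results:
    print(result.output)
```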
### Iterative Refinement
**When to Use:** Gradual improvement with feedback loops.
**Pattern:**
```yaml
Loop:
  1. Use agent to make changes
  2. Validate changes
  3. If issues found:
     - Use agent to fix issues
     - Validate again
  4. Repeat until quality gates pass
```
**Example: Code Refactoring**
```yaml
Iteration 1:
  - Use refactorer agent to simplify function
  - Run tests → 2 failures
  - Use refactorer agent to fix test compatibility
  - Run tests → All pass

Iteration 2:
  - Use refactorer agent to extract duplicate code
  - Run linter → 3 style issues
  - Use refactorer agent to fix style
  - Run linter → Clean

Iteration 3:
  - Validate: All quality gates pass
  - Complete: Refactoring done
```
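The loop itself reduces to a few lines of Python. This is a sketch: `quality_gates_pass` is a hypothetical stub for running tests and linting, `run_agent` is the stub from the chaining sketch, and the iteration cap is an assumed safeguard against a fix loop that never converges:
```python
MAX_FIX_ATTEMPTS = 5  # assumed safeguard so a fix loop cannot run forever


def quality_gates_pass() -> bool:
    """Stub: a real check would run the test suite and linter."""
    return True


def refine_until_clean(improvements: list[str]) -> None:
    """Apply each improvement, validating (and fixing) before moving to the next."""
    for iteration, improvement in enumerate(improvements, start=1):
        run_agent("refactorer", [improvement])
        for _ in range(MAX_FIX_ATTEMPTS):
            if quality_gates_pass():
                break
            run_agent("refactorer", ["Fix issues found by validation"])
        else:
            raise RuntimeError(f"Iteration {iteration} never passed quality gates")
        print(f"Iteration {iteration}: all quality gates pass")


refine_until_clean(["Simplify function", "Extract duplicate code"])
```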
---
## Agent Coordination Strategies
### Strategy 1: Single Agent (Simple Tasks)
**Use When:**
- Single file modification
- Simple bug fix
- Documentation update
- Straightforward feature
**Pattern:**
```yaml
Single Agent:
  - Use the implementer agent to:
    - Make the change
    - Add tests
    - Update documentation
```
**Example:**
```
User: "Add a max_length validation to the username field"
Use the implementer agent to:
- Add max_length=50 to User.username field
- Add validation test for max_length
- Update API documentation with constraint
```
### Strategy 2: Agent Pairs (Moderate Complexity)
**Use When:**
- Design + implementation needed
- Security review required
- Test coverage important
**Pattern:**
```yaml
Agent Pair:
  Primary Agent: Core work
  Secondary Agent: Validation/enhancement
```
**Example:**
```
User: "Implement password reset functionality"
Step 1: Use the architect agent to:
- Design password reset flow
- Plan token generation strategy
- Define security requirements
Step 2: Use the implementer agent to:
- Implement the designed flow
- Build according to security requirements
- Add comprehensive tests
```
### Strategy 3: Agent Chain (High Complexity)
**Use When:**
- System-wide changes
- Architecture modifications
- Security-critical features
- Major refactoring
**Pattern:**
```yaml
Agent Chain:
  Phase 1: Research & Design
    - researcher: Analyze impact
    - architect: Design solution
  Phase 2: Implementation
    - implementer: Build core
    - security: Review (if needed)
  Phase 3: Quality Assurance
    - qa: Comprehensive testing
    - refactorer: Final polish
```
**Example:**
```
User: "Migrate from sessions to JWT authentication"
Phase 1 - Analysis:
Use the researcher agent to:
- Find all session usage
- Map authentication dependencies
- Identify breaking changes
Phase 2 - Design:
Use the architect agent to:
- Design JWT implementation
- Plan migration strategy
- Define backwards compatibility
Phase 3 - Security:
Use the security agent to:
- Review JWT implementation plan
- Add security requirements
- Define token validation rules
Phase 4 - Implementation:
Use the implementer agent to:
- Implement JWT manager
- Add token validation
- Build according to security requirements
Phase 5 - Migration:
Use the refactorer agent to:
- Update all authentication calls
- Remove session dependencies
- Ensure consistency
Phase 6 - Testing:
Use the qa agent to:
- Create unit tests
- Create integration tests
- Create security tests
- Validate migration
```
### Strategy 4: Parallel + Sequential Hybrid
**Use When:**
- Multiple independent components with dependencies
- Complex features with parallel work streams
**Pattern:**
```yaml
Parallel Phase:
  - Agent A: Independent task 1
  - Agent B: Independent task 2

Sequential Phase (after parallel complete):
  - Agent C: Integration work
  - Agent D: Final validation
```
**Example:**
```
User: "Add real-time notifications with WebSockets"
Parallel Phase:
Use the architect agent to:
- Design WebSocket architecture
Use the implementer agent to (simultaneously):
- Set up WebSocket server configuration
- Create notification data models
Sequential Phase:
Use the implementer agent to:
- Implement WebSocket handlers
- Connect to notification models
- Add client connection management
Use the qa agent to:
- Create WebSocket connection tests
- Create notification delivery tests
- Test connection stability
```
---
## Agent Communication Patterns
### Explicit Handoff
**Pattern:** Clearly state what the next agent should do based on previous work.
```yaml
Step 1: Use the researcher agent to map all API endpoints
  → Output: List of 47 endpoints in api_map.md

Step 2: Use the architect agent to design new API structure
  Context: Review the 47 endpoints in api_map.md
  Task: Design consolidated API with RESTful patterns

Step 3: Use the refactorer agent to update endpoints
  Context: Follow new structure from architect
  Task: Update all 47 endpoints to match design
```
### Context Sharing
**Pattern:** Ensure agents have necessary context from previous work.
```yaml
Context for Next Agent:
  Previous Work: "architect agent designed auth flow"
  Artifacts: "auth_design.md with flow diagram"
  Requirements: "Must follow JWT pattern with refresh tokens"

Use the implementer agent with this context to:
  - Implement auth flow from auth_design.md
  - Use JWT with refresh token pattern
  - Follow security guidelines from design
```
### Validation Loops
**Pattern:** Use agents to validate each other's work.
```yaml
Create → Validate → Fix Loop:
  Step 1: Use the implementer agent to build feature
  Step 2: Use the security agent to review implementation
    → If issues found: Document security concerns
  Step 3: Use the implementer agent to address security concerns
    Context: Security review findings
    Task: Fix identified issues
  Step 4: Use the security agent to re-review
    → If clean: Proceed
    → If issues: Repeat loop
```
---
## Quality Checkpoints with Agents
### Code Quality Triggers
**Automatic Agent Invocation Based on Code Metrics:**
```yaml
Function Length > 50 Lines:
  → Use the refactorer agent to:
    - Break into smaller functions
    - Extract helper methods
    - Improve readability

Nesting Depth > 3:
  → Use the refactorer agent to:
    - Flatten conditional logic
    - Extract nested blocks
    - Simplify control flow

Duplicate Code Detected:
  → Use the refactorer agent to:
    - Extract common functionality
    - Create shared utilities
    - Apply DRY principle

Circular Dependencies Found:
  → Use the architect agent to:
    - Review dependency structure
    - Redesign component relationships
    - Break circular references

Performance Concerns:
  → Use the implementer agent to:
    - Add performance measurements
    - Identify bottlenecks
    - Implement optimizations

Security Patterns Detected:
  → Use the security agent to:
    - Review authentication code
    - Validate authorization logic
    - Check encryption usage
```
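One way to express these triggers is a small rule table. The metric names and thresholds below mirror the list above, but the `agents_for` helper is illustrative, not an existing API:
```python
# Thresholds mirror the triggers above; metric names are assumed for illustration.
TRIGGERS = [
    ("function length > 50 lines", lambda m: m["function_length"] > 50, "refactorer"),
    ("nesting depth > 3", lambda m: m["nesting_depth"] > 3, "refactorer"),
    ("duplicate code detected", lambda m: m["duplicate_blocks"] > 0, "refactorer"),
    ("circular dependencies found", lambda m: m["circular_deps"] > 0, "architect"),
]


def agents_for(metrics: dict[str, int]) -> dict[str, list[str]]:
    """Map each triggered rule to the agent that should handle it."""
    hits: dict[str, list[str]] = {}
    for reason, exceeded, agent in TRIGGERS:
        if exceeded(metrics):
            hits.setdefault(agent, []).append(reason)
    return hits


print(agents_for({"function_length": 72, "nesting_depth": 2,
                  "duplicate_blocks": 1, "circular_deps": 0}))
# {'refactorer': ['function length > 50 lines', 'duplicate code detected']}
```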
---
## Agent Coordination Best Practices
### DO:
- ✅ Use workflow-coordinator first to validate workflow state
- ✅ Be explicit about which agent to use and why
- ✅ Provide clear context when chaining agents
- ✅ Validate after each agent completes
- ✅ Use parallel agents for independent tasks
- ✅ Chain agents for dependent tasks
### DON'T:
- ❌ Skip workflow-coordinator validation
- ❌ Use wrong agent for the task
- ❌ Chain agents without clear handoff
- ❌ Run dependent tasks in parallel
- ❌ Forget to validate agent output
- ❌ Over-complicate simple tasks
---
*Comprehensive agent orchestration strategies for complex implementation tasks*


@@ -0,0 +1,200 @@
# Quality Standards - Language Dispatch
This file provides an overview of quality standards and directs you to language-specific quality gates.
## When to Load This File
- User asks: "What are the quality standards?"
- Need overview of validation approach
- Choosing which language file to load
## Quality Philosophy
**All implementations must pass these gates:**
- ✅ Linting (0 errors, warnings with justification)
- ✅ Formatting (consistent code style)
- ✅ Tests (all passing, appropriate coverage)
- ✅ Type checking (if language supports it)
- ✅ Documentation (comprehensive and current)
## Language-Specific Standards
**Load the appropriate file based on detected project language:**
### Python Projects
**When to load:** `pyproject.toml`, `requirements.txt`, or `*.py` files detected
**Load:** `@languages/PYTHON.md`
**Quick commands:**
```bash
ruff check . && ruff format . && mypy . && pytest
```
---
### Rust Projects
**When to load:** `Cargo.toml` or `*.rs` files detected
**Load:** `@languages/RUST.md`
**Quick commands:**
```bash
cargo clippy -- -D warnings && cargo fmt --check && cargo test
```
---
### JavaScript/TypeScript Projects
**When to load:** `package.json`, `tsconfig.json`, or `*.js`/`*.ts` files detected
**Load:** `@languages/JAVASCRIPT.md`
**Quick commands:**
```bash
# TypeScript
npx eslint . && npx prettier --check . && npx tsc --noEmit && npm test
# JavaScript
npx eslint . && npx prettier --check . && npm test
```
---
### Go Projects
**When to load:** `go.mod` or `*.go` files detected
**Load:** `@languages/GO.md`
**Quick commands:**
```bash
gofmt -w . && golangci-lint run && go test ./...
```
---
### Other Languages
**When to load:** No specific language detected, or unsupported language (PHP, Ruby, C++, C#, Java, etc.)
**Load:** `@languages/GENERIC.md`
**Provides:** General quality principles applicable across languages
---
## Progressive Loading Pattern
**Don't load all language files!** Only load the relevant one (a detection sketch follows this list):
1. **Detect project language** (from file extensions, config files)
2. **Load specific standards** for that language only
3. **Apply language-specific validation** commands
4. **Fallback to generic** if language not covered
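A minimal detection sketch in Python, assuming marker files take priority over file extensions and `GENERIC` is the fallback:
```python
from pathlib import Path

# Marker files checked first; extensions are the fallback.
MARKERS = {
    "pyproject.toml": "PYTHON", "requirements.txt": "PYTHON",
    "Cargo.toml": "RUST",
    "package.json": "JAVASCRIPT", "tsconfig.json": "JAVASCRIPT",
    "go.mod": "GO",
}
EXTENSIONS = {".py": "PYTHON", ".rs": "RUST", ".js": "JAVASCRIPT",
              ".ts": "JAVASCRIPT", ".go": "GO"}


def detect_language(root: Path) -> str:
    """Return which @languages/*.md file to load for this project."""
    for marker, language in MARKERS.items():
        if (root / marker).exists():
            return language
    for path in root.rglob("*"):
        if path.suffix in EXTENSIONS:
            return EXTENSIONS[path.suffix]
    return "GENERIC"  # fallback when nothing matches


print(f"Load: @languages/{detect_language(Path('.'))}.md")
```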
## Continuous Validation
**Every 3 Edits:**
```yaml
Checkpoint:
  1. Run relevant tests
  2. Check linting
  3. Verify type checking (if applicable)
  4. If any fail:
     - Fix immediately
     - Re-validate
  5. Continue implementation
```
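A sketch of such a checkpoint runner; the `PIPELINES` table reuses the quick commands above (with `--check` variants so nothing is rewritten), and stopping at the first failing gate matches the fix-immediately rule:
```python
import subprocess

# Per-language gates, mirroring the quick commands above.
PIPELINES = {
    "PYTHON": ["ruff check .", "ruff format --check .", "mypy .", "pytest"],
    "RUST": ["cargo clippy -- -D warnings", "cargo fmt --check", "cargo test"],
    "GO": ["golangci-lint run", "go test ./..."],
}


def run_checkpoint(language: str) -> bool:
    """Run each gate in order; stop at the first failure so it can be fixed immediately."""
    for command in PIPELINES.get(language, []):
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            print(f"❌ Gate failed: {command} — fix before continuing")
            return False
    print("✅ Checkpoint clean, continue implementation")
    return True
```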
## Pre-Completion Validation
**Before marking work complete:**
```yaml
Full Quality Suite:
  1. Run full test suite
  2. Run full linter
  3. Run type checker
  4. Check documentation
  5. Review specification compliance
  6. Verify all acceptance criteria met

If ANY fail:
  - Fix issues
  - Re-run full suite
  - Only complete when all pass
```
## Quality Enforcement Strategy
```yaml
Detect Language:
  - Check for language-specific files (pyproject.toml, Cargo.toml, etc.)
  - Identify from file extensions
  - User can override if auto-detection fails

Load Standards:
  - Load @languages/PYTHON.md for Python
  - Load @languages/RUST.md for Rust
  - Load @languages/JAVASCRIPT.md for JS/TS
  - Load @languages/GO.md for Go
  - Load @languages/GENERIC.md for others

Apply Validation:
  - Run language-specific commands
  - Check against language-specific standards
  - Enforce coverage requirements
  - Validate documentation completeness

Report Results:
  - Clear pass/fail for each gate
  - Specific error messages
  - Actionable fix suggestions
```
## When Standards Apply
**During Implementation:**
- After every 3 edits (checkpoint validation)
- Before declaring task complete (full validation)
- When explicitly requested by user
**Quality Gates Must Pass:**
- To move from implementation → review phase
- To mark specification acceptance criteria complete
- Before creating pull request
## Cross-Language Principles
**These apply regardless of language:**
```yaml
SOLID Principles:
  - Single Responsibility
  - Open/Closed
  - Liskov Substitution
  - Interface Segregation
  - Dependency Inversion

Code Quality:
  - No duplication
  - Clear naming
  - Reasonable function size (<= 50 lines guideline)
  - Low nesting depth (<= 3 levels)
  - Proper error handling

Testing:
  - Unit tests for business logic
  - Integration tests for workflows
  - Edge case coverage
  - Error path coverage
  - Reasonable coverage targets

Documentation:
  - README for setup
  - API documentation
  - Complex logic explained
  - Usage examples
```
---
*Load language-specific files for detailed standards - avoid loading all language contexts unnecessarily*


@@ -0,0 +1,179 @@
---
name: Implementing Features
description: Execute specification-driven implementation with automatic quality gates, multi-agent orchestration, and progress tracking. Use when building features from specs, fixing bugs with test coverage, or refactoring with validation.
allowed-tools: [Read, Write, Edit, MultiEdit, Bash, Glob, Grep, TodoWrite, Task]
---
# Implementing Features
I help you execute production-quality implementations with auto-detected language standards, intelligent agent orchestration, and specification integration.
## When to Use Me
**Auto-activate when:**
- Invoked via `/quaestor:implement` slash command
- User mentions "build [specific feature]" or "fix [specific bug]" with context
- Continuing implementation after planning phase is complete
- User says "continue implementation" or "resume implementing"
- Coordinating multi-agent implementation of an active specification
**Do NOT auto-activate when:**
- User says only "implement" or "implement it" (slash command handles this)
- User is still in planning/research phase
- Request is vague without feature details
## Supporting Files
This skill uses several supporting files for detailed workflows:
- **@WORKFLOW.md** - Detailed implementation process (Discovery → Clarification → Planning → Implementation → Validation)
- **@AGENTS.md** - Agent orchestration strategies and coordination patterns
- **@QUALITY.md** - Language-specific quality standards and validation gates
- **@SPECS.md** - Specification integration and tracking protocols
## My Process
I follow a structured 4-phase workflow to ensure quality and completeness:
### Phase 1: Discovery & Research 🔍
**Specification Integration:**
- Check `.quaestor/specs/active/` for in-progress work
- Search `.quaestor/specs/draft/` for matching specifications
- Move draft spec → active folder (if space available, max 3)
- Update spec status → "in_progress"
**Research Protocol:**
- Analyze codebase patterns & conventions
- Identify dependencies & integration points
- Determine required agents based on task requirements
**See @WORKFLOW.md Phase 1 for complete discovery process**
### Phase 2: Planning & Approval 📋
**Present Implementation Strategy:**
- Architecture decisions & trade-offs
- File changes & new components required
- Quality gates & validation approach
- Risk assessment & mitigation
**MANDATORY: Get user approval before proceeding**
**See @WORKFLOW.md Phase 2 for planning details**
### Phase 3: Implementation ⚡
**Agent Orchestration:**
- **Multi-file operations** → Use researcher + implementer agents
- **System refactoring** → Use architect + refactorer agents
- **Test creation** → Use qa agent for comprehensive coverage
- **Security implementation** → Use security + implementer agents
**Quality Cycle** (every 3 edits):
```
Execute → Validate → Fix (if ❌) → Continue
```
**See @AGENTS.md for complete agent coordination strategies**
### Phase 4: Validation & Completion ✅
**Quality Validation:**
1. Detect project language (Python, Rust, JS/TS, Go, or Generic)
2. Load language-specific standards from @QUALITY.md
3. Run validation pipeline for detected language
4. Fix any issues and re-validate
**Completion Criteria:**
- ✅ All tests passing
- ✅ Zero linting errors
- ✅ Type checking clean (if applicable)
- ✅ Documentation complete
- ✅ Specification status updated
**See @QUALITY.md for dispatch to language-specific standards:**
- `@languages/PYTHON.md` - Python projects
- `@languages/RUST.md` - Rust projects
- `@languages/JAVASCRIPT.md` - JS/TS projects
- `@languages/GO.md` - Go projects
- `@languages/GENERIC.md` - Other languages
## Auto-Intelligence
### Project Detection
- **Language**: Auto-detect → Python|Rust|JS/TS|Go|Generic standards
- **Scope**: Assess changes → Single-file|Multi-file|System-wide
- **Context**: Identify requirements → architecture|security|testing|refactoring
### Execution Strategy
- **System-wide**: Comprehensive planning with multiple agent coordination
- **Feature Development**: Iterative implementation with testing
- **Bug Fixes**: Focused resolution with validation
## Agent Coordination
**I coordinate with specialized agents based on task requirements:**
- **workflow-coordinator** - First! Validates workflow state and ensures planning phase completed
- **implementer** - Builds features according to specification
- **architect** - Designs system architecture when needed
- **security** - Reviews auth, encryption, or access control
- **qa** - Creates comprehensive tests alongside implementation
- **refactorer** - Ensures consistency across multiple files
- **researcher** - Maps dependencies for multi-file changes
**See @AGENTS.md for agent chaining patterns and coordination strategies**
## Specification Integration
**Auto-Update Protocol:**
**Pre-Implementation:**
- Check `.quaestor/specs/draft/` for matching spec ID
- Move spec from draft/ → active/ (max 3 active)
- Declare: "Working on Spec: [ID] - [Title]"
- Update phase status in spec file
**Post-Implementation:**
- Update phase status → "completed"
- Track acceptance criteria completion
- Move spec to completed/ when all phases done
- Create git commit with spec reference
**See @SPECS.md for complete specification integration details**
## Quality Gates
**Code Quality Checkpoints:**
- Function exceeds 50 lines → Use refactorer agent to break into smaller functions
- Nesting depth exceeds 3 → Use refactorer agent to simplify logic
- Circular dependencies detected → Use architect agent to review design
- Performance implications unclear → Use implementer agent to add measurements
**See @QUALITY.md for language-specific quality gates and standards**
## Success Criteria
- ✅ Workflow coordinator validates planning phase completed
- ✅ Specification identified and moved to active/
- ✅ User approval obtained for implementation strategy
- ✅ All quality gates passed (linting, tests, type checking)
- ✅ Documentation updated
- ✅ Specification status updated and tracked
- ✅ Ready for review phase
## Final Response
When implementation is complete:
```
Implementation complete. All quality gates passed.
Specification [ID] updated to completed status.
Ready for review and PR creation.
```
**See @WORKFLOW.md for complete workflow details**
---
*Intelligent implementation with agent orchestration, quality gates, and specification tracking*


@@ -0,0 +1,539 @@
# Specification Integration & Tracking
This file describes how to integrate with Quaestor's specification system for tracking implementation progress.
## Specification Folder Structure
```yaml
.quaestor/specs/
├── draft/ # Planned specifications (not yet started)
├── active/ # In-progress implementations (max 3)
├── completed/ # Finished implementations
└── archived/ # Old/cancelled specifications
```
---
## Specification Lifecycle
### States and Transitions
```yaml
States:
  draft: "Specification created but not started"
  active: "Currently being implemented"
  completed: "Implementation finished and validated"
  archived: "Old or cancelled"

Transitions:
  draft → active: "Start implementation"
  active → completed: "Finish implementation"
  active → draft: "Pause work"
  any → archived: "Cancel or archive"

Limits:
  active: "Maximum 3 active specs"
  draft: "Unlimited"
  completed: "Unlimited"
  archived: "Unlimited"
```
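The transition rules and the active-spec limit can be checked in a few lines of Python; this is a sketch of the rules above, not an existing Quaestor API:
```python
# Allowed lifecycle moves, mirroring the transitions above.
ALLOWED = {
    ("draft", "active"),
    ("active", "completed"),
    ("active", "draft"),
}
MAX_ACTIVE = 3


def can_transition(current: str, target: str, active_count: int) -> bool:
    """Check a proposed state change against the lifecycle rules."""
    if target == "archived":  # any state may be archived
        return True
    if (current, target) not in ALLOWED:
        return False
    if target == "active" and active_count >= MAX_ACTIVE:
        return False  # enforce the 3-spec limit
    return True


assert can_transition("draft", "active", active_count=2)
assert not can_transition("draft", "active", active_count=3)
assert not can_transition("draft", "completed", active_count=0)
```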
---
## Phase 1: Specification Discovery
### No Arguments Provided
**Discovery Protocol:**
```yaml
Step 1: Check Active Specs
  Location: .quaestor/specs/active/*.md
  Purpose: Find in-progress work
  Output: List of active specifications

Step 2: Check Draft Specs (if no active)
  Location: .quaestor/specs/draft/*.md
  Purpose: Find available work
  Output: List of draft specifications

Step 3: Present to User
  Format:
    "Found 2 specifications:
     - [active] spec-feature-001: User Authentication
     - [draft] spec-feature-002: Data Export API
     Which would you like to work on?"

Step 4: User Selection
  User provides: spec ID or description
  Match: Find corresponding specification
  Activate: Move draft → active (if needed)
```
### Arguments Provided
**Match Specification by ID or Description:**
```yaml
Argument Examples:
- "spec-feature-001"
- "feature-001"
- "001"
- "user authentication"
- "auth system"
Matching Strategy:
1. Exact ID match: spec-feature-001.md
2. Partial ID match: Contains "feature-001"
3. Description match: Title contains "user authentication"
4. Fuzzy match: Similar words in title
Result:
Match Found:
→ Load specification
→ Display: "Found: spec-feature-001 - User Authentication System"
→ Activate if in draft/
No Match:
→ Display: "No matching specification found"
→ Suggest: "Available specs: [list]"
→ Ask: "Would you like to create a new spec?"
```
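A Python sketch of the four-tier matching strategy; the 0.6 fuzzy-match threshold is an assumption, and titles are read from each file's first `#` heading:
```python
from difflib import SequenceMatcher
from pathlib import Path


def match_spec(query: str, spec_dir: Path) -> Path | None:
    """Resolve a user argument: exact/partial ID → description → fuzzy title match."""
    query = query.lower()
    specs = sorted(spec_dir.glob("*.md"))

    # 1-2. Exact or partial ID match on the filename stem.
    for spec in specs:
        if query in spec.stem.lower():
            return spec

    # 3-4. Description match, then fuzzy match, against the title line ("# ...").
    best, best_score = None, 0.0
    for spec in specs:
        lines = spec.read_text().splitlines()
        title = lines[0].lstrip("# ").lower() if lines else ""
        if query in title:
            return spec
        score = SequenceMatcher(None, query, title).ratio()
        if score >= 0.6 and score > best_score:  # threshold is an assumption
            best, best_score = spec, score
    return best
```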
---
## Phase 2: Specification Activation
### Pre-Activation Validation
**Before Moving to Active:**
```yaml
Validation Checks:
  1. Spec Location:
     - If already active: "Already working on this spec"
     - If in completed: "Spec already completed"
     - If in draft: Proceed with activation

  2. Active Limit:
     - Count: Active specs in .quaestor/specs/active/
     - Limit: Maximum 3 active specs
     - If at limit: "Active limit reached (3 specs). Complete one before starting another."
     - If under limit: Proceed with activation

  3. Specification Validity:
     - Check: Has phases defined
     - Check: Has acceptance criteria
     - If invalid: "Specification incomplete. Please update before starting."
```
### Activation Process
**Move from Draft to Active:**
```yaml
Atomic Operation:
1. Read Specification:
Source: .quaestor/specs/draft/spec-feature-001.md
Parse: Extract metadata and phases
2. Update Status:
Field: status
Change: "draft" → "in_progress"
Add: start_date (current date)
3. Move File:
From: .quaestor/specs/draft/spec-feature-001.md
To: .quaestor/specs/active/spec-feature-001.md
Method: Git mv (preserves history)
4. Confirm:
Display: "✅ Activated: spec-feature-001 - User Authentication"
Display: "Status: in_progress"
Display: "Phases: 4 total, 0 completed"
```
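A minimal sketch of the operation, assuming the status field appears literally as `status: draft` in the frontmatter. The string surgery is illustrative only; a real implementation would parse the YAML frontmatter properly:
```python
import datetime
import subprocess
from pathlib import Path


def activate_spec(spec_id: str, specs_root: Path = Path(".quaestor/specs")) -> Path:
    """Move a draft spec to active/ and mark it in progress (illustrative sketch)."""
    source = specs_root / "draft" / f"{spec_id}.md"
    target = specs_root / "active" / f"{spec_id}.md"

    # Update the frontmatter status and stamp the start date.
    text = source.read_text()
    text = text.replace("status: draft", "status: in_progress", 1)
    if "start_date:" not in text:
        today = datetime.date.today().isoformat()
        # Assumes the frontmatter closes with "---" directly before the title.
        text = text.replace("---\n# ", f"start_date: {today}\n---\n# ", 1)
    source.write_text(text)

    # git mv preserves the file's history across the move.
    subprocess.run(["git", "mv", str(source), str(target)], check=True)
    return target
```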
---
## Phase 3: Progress Tracking
### Phase Status Updates
**During Implementation:**
```yaml
Phase Tracking:
  Format in Specification:
    ## Phases

    ### Phase 1: Authentication Flow Design
    - [ ] Task 1
    - [ ] Task 2
    Status: ⏳ in_progress

    ### Phase 2: JWT Implementation
    - [ ] Task 1
    - [ ] Task 2
    Status: ⏳ pending

  Update Protocol:
    1. Complete tasks: Mark checkboxes [x]
    2. Update status: pending → in_progress → completed
    3. Add notes: Implementation details
    4. Track blockers: If any issues

  Example Update:
    ### Phase 1: Authentication Flow Design
    - [x] Design login flow
    - [x] Design registration flow
    - [x] Design password reset flow
    Status: ✅ completed

    Implementation Notes:
    - Used JWT with 15min access, 7day refresh
    - Implemented token rotation for security
    - Added rate limiting on auth endpoints
```
### Acceptance Criteria Tracking
**Track Progress Against Criteria:**
```yaml
Acceptance Criteria Format:
  ## Acceptance Criteria
  - [ ] AC1: Users can register with email/password
  - [ ] AC2: Users can log in and receive JWT
  - [ ] AC3: Tokens expire after 15 minutes
  - [ ] AC4: Refresh tokens work correctly
  - [ ] AC5: Rate limiting prevents brute force

Update During Implementation:
  As Each Criterion Met:
    - Mark checkbox: [x]
    - Add evidence: Link to test or code
    - Validate: Ensure actually working

  Example:
    - [x] AC1: Users can register with email/password
      ✓ Implemented in auth/registration.py
      ✓ Tests: test_registration_flow.py (8 tests passing)
```
---
## Phase 4: Completion & Transition
### Completion Criteria
**Before Moving to Completed:**
```yaml
All Must Be True:
  1. All Phases Completed:
     - Every phase status: ✅ completed
     - All phase tasks: [x] checked

  2. All Acceptance Criteria Met:
     - Every criterion: [x] checked
     - Evidence provided for each
     - Tests passing for each

  3. Quality Gates Passed:
     - All tests passing
     - Linting clean
     - Type checking passed
     - Documentation complete

  4. No Blockers:
     - All issues resolved
     - No pending decisions
     - Ready for review
```
### Move to Completed
**Atomic Transition:**
```yaml
Operation:
  1. Update Specification:
     Field: status
     Change: "in_progress" → "completed"
     Add: completion_date (current date)
     Add: final_notes (summary of implementation)

  2. Move File:
     From: .quaestor/specs/active/spec-feature-001.md
     To: .quaestor/specs/completed/spec-feature-001.md
     Method: Git mv (preserves history)

  3. Create Commit:
     Message: "feat: implement spec-feature-001 - User Authentication"
     Body: Include spec summary and changes
     Reference: Link to specification

  4. Confirm:
     Display: "✅ Completed: spec-feature-001 - User Authentication"
     Display: "Status: completed"
     Display: "All phases completed, all criteria met"
     Display: "Ready for review and PR creation"
```
---
## Specification File Format
### Markdown Structure
**Required Sections:**
```markdown
---
id: spec-feature-001
title: User Authentication System
status: in_progress
priority: high
type: feature
start_date: 2024-01-15
---
# User Authentication System
## Overview
Brief description of what this spec implements.
## Phases
### Phase 1: Phase Name
- [ ] Task 1
- [ ] Task 2
Status: ⏳ in_progress
### Phase 2: Phase Name
- [ ] Task 1
- [ ] Task 2
Status: ⏳ pending
## Acceptance Criteria
- [ ] AC1: Criterion 1
- [ ] AC2: Criterion 2
## Technical Details
Technical implementation notes.
## Testing Strategy
How this will be tested.
## Implementation Notes
Notes added during implementation.
```
### Metadata Fields
```yaml
Required Fields:
  id: "Unique identifier (spec-feature-001)"
  title: "Human-readable title"
  status: "draft|in_progress|completed|archived"
  priority: "low|medium|high|critical"
  type: "feature|bugfix|refactor|docs|other"

Optional Fields:
  start_date: "When implementation started"
  completion_date: "When implementation finished"
  estimated_hours: "Time estimate"
  actual_hours: "Actual time spent"
  assignee: "Who implemented it"
  blockers: "Any blocking issues"
```
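Validating these fields is straightforward. A sketch, with the field and status sets taken directly from the tables above; the `validate_metadata` helper is illustrative:
```python
REQUIRED_FIELDS = {"id", "title", "status", "priority", "type"}
VALID_STATUS = {"draft", "in_progress", "completed", "archived"}


def validate_metadata(metadata: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the metadata is valid."""
    problems = [f"Missing: {f}" for f in sorted(REQUIRED_FIELDS - metadata.keys())]
    status = metadata.get("status")
    if status and status not in VALID_STATUS:
        problems.append(f"Invalid status: {status}")
    return problems


print(validate_metadata({"id": "spec-feature-001", "title": "User Auth",
                         "status": "in_progress"}))
# ['Missing: priority', 'Missing: type']
```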
---
## Integration with Git
### Commit Messages
**Reference Specifications in Commits:**
```yaml
Format:
  type(scope): message

  Spec: spec-feature-001
  Description: Detailed description

Example:
  feat(auth): implement JWT authentication

  Spec: spec-feature-001

  - Add JWT token generation
  - Implement refresh token rotation
  - Add authentication middleware

  All acceptance criteria met.
  Tests: 42 new tests added (100% coverage)
```
### Git History Preservation
**Using Git MV:**
```yaml
Benefit:
  - Preserves file history across moves
  - Maintains specification evolution
  - Enables tracking changes over time

Command:
  git mv .quaestor/specs/draft/spec-feature-001.md \
        .quaestor/specs/active/spec-feature-001.md

History:
  - See full edit history
  - Track progress over time
  - Understand evolution of spec
```
---
## Auto-Update Protocol
### Pre-Implementation
**When Starting Implementation:**
```yaml
Actions:
  1. Find Specification:
     - Search draft/ and active/
     - Match by ID or description

  2. Activate Specification:
     - Move draft → active (if needed)
     - Update status → in_progress
     - Add start date

  3. Declare Intent:
     Output: "🎯 Working on Spec: spec-feature-001 - User Authentication System"
     Output: "Status: in_progress (moved to active/)"
     Output: "Phases: 4 total, starting Phase 1"

  4. Present Plan:
     - Show implementation strategy
     - Get user approval
     - Begin implementation
```
### During Implementation
**Progress Updates:**
```yaml
After Completing Each Phase:
  1. Update Specification:
     - Mark phase tasks complete: [x]
     - Update phase status: completed
     - Add implementation notes

  2. Track Progress:
     Output: "✅ Phase 1 complete (1/4 phases)"
     Output: "   - All tasks finished"
     Output: "   - Implementation notes added"
     Output: "Starting Phase 2..."

After Completing Acceptance Criterion:
  1. Update Specification:
     - Mark criterion complete: [x]
     - Add evidence (tests, code references)

  2. Track Progress:
     Output: "✅ AC1 met: Users can register"
     Output: "   - Tests: test_registration.py (8 passing)"
     Output: "   - Code: auth/registration.py"
     Output: "Progress: 1/5 criteria met"
```
### Post-Implementation
**When Implementation Complete:**
```yaml
Actions:
  1. Validate Completion:
     - All phases: ✅ completed
     - All criteria: [x] met
     - Quality gates: Passed

  2. Update Specification:
     - Status → completed
     - Add completion date
     - Add final summary

  3. Move to Completed:
     - From: active/spec-feature-001.md
     - To: completed/spec-feature-001.md
     - Method: Git mv

  4. Create Commit:
     - Reference spec in message
     - Include summary of changes
     - Link to relevant files

  5. Declare Complete:
     Output: "✅ Implementation Complete"
     Output: "Specification: spec-feature-001"
     Output: "Status: completed (moved to completed/)"
     Output: "All 4 phases completed, all 5 criteria met"
     Output: "Ready for review and PR creation"
```
---
## Error Handling
### Specification Not Found
```yaml
Issue: No matching specification

Actions:
  1. Search all folders: draft/, active/, completed/
  2. Try fuzzy matching on title
  3. If still no match:
     Output: "❌ No matching specification found"
     Output: "Available specifications:"
     Output: [List active and draft specs]
     Output: "Would you like to create a new spec?"
  4. If user wants to create:
     Delegate to spec-writing skill
```
### Active Limit Reached
```yaml
Issue: Already 3 active specs

Actions:
  1. Count active specs
  2. If at limit:
     Output: "❌ Active limit reached (3 specs)"
     Output: "Currently active:"
     Output: [List 3 active specs with progress]
     Output: "Complete one before starting another"
  3. Suggest:
     Output: "Would you like to:"
     Output: "1. Continue one of the active specs"
     Output: "2. Move one back to draft"
```
### Invalid Specification
```yaml
Issue: Spec missing required fields

Actions:
  1. Validate specification structure
  2. Check required fields: id, title, phases, criteria
  3. If invalid:
     Output: "❌ Specification incomplete"
     Output: "Missing: [list missing fields]"
     Output: "Please update specification before starting"
  4. Suggest fix:
     Output: "Use spec-writing skill to update specification"
```
---
*Complete specification integration for tracking implementation progress with Quaestor*


@@ -0,0 +1,480 @@
# Implementation Workflow - Complete 5-Phase Process
This file describes the detailed workflow for executing production-quality implementations.
## Workflow Overview: Research → Clarify → Plan → Implement → Validate
```yaml
Phase 1: Discovery & Research (🔍)
  - Specification discovery and activation
  - Codebase analysis and pattern identification
  - Dependency mapping
  - Agent requirement determination

Phase 2: Clarification & Decision (🤔)
  - Approach selection between valid options
  - Scope boundary questions
  - Priority trade-off decisions
  - Integration decisions

Phase 3: Planning & Approval (📋)
  - Strategy presentation
  - Architecture decisions
  - Risk assessment
  - MANDATORY user approval

Phase 4: Implementation (⚡)
  - Agent-orchestrated development
  - Quality cycle (every 3 edits)
  - Continuous validation
  - Documentation updates

Phase 5: Validation & Completion (✅)
  - Language-specific quality gates
  - Test execution
  - Specification status update
  - Completion confirmation
```
---
## Phase 1: Discovery & Research 🔍
### Specification Discovery
**No Arguments Provided?**
```yaml
Discovery Protocol:
1. Check: .quaestor/specs/active/*.md (current work in progress)
2. If empty: Check .quaestor/specs/draft/*.md (available work)
3. Match: spec ID from user request
4. Output: "Found spec: [ID] - [Title]" OR "No matching specification"
```
**Specification Activation:**
```yaml
🎯 Context Check:
- Scan: .quaestor/specs/draft/*.md for matching spec
- Validate: Max 3 active specs (enforce limit)
- Move: draft spec → active/ folder
- Update: spec status → "in_progress"
- Track: implementation progress in spec phases
```
### Codebase Research
**Research Protocol:**
1. **Pattern Analysis**
   - Identify existing code conventions
   - Determine file organization patterns
   - Understand naming conventions
   - Map testing strategies
2. **Dependency Mapping**
   - Identify affected modules
   - Map integration points
   - Understand data flow
   - Detect circular dependencies
3. **Agent Determination**
   - Assess task complexity
   - Determine required agent specializations
   - Plan agent coordination strategy
   - Identify potential bottlenecks
**Example Research Output:**
```
🔍 Research Complete:

Specification: spec-feature-001 - User Authentication System
Status: Moved to active/

Codebase Analysis:
  - Pattern: Repository pattern with service layer
  - Testing: pytest with 75% coverage requirement
  - Dependencies: auth module, user module, database layer

Required Agents:
  - architect: Design auth flow and session management
  - security: Review authentication implementation
  - implementer: Build core functionality
  - qa: Create comprehensive test suite
```
---
## Phase 2: Clarification & Decision 🤔
### MANDATORY: Ask User to Make Key Decisions
**After research, identify decisions user must make BEFORE planning:**
#### 1. Approach Selection (when 2+ valid options exist)
```
Use AskUserQuestion tool:
- Present 2-3 architectural approaches
- Include pros/cons and trade-offs for each
- Explain complexity and maintenance implications
- Wait for user to choose before proceeding
```
**Example:**
- Approach A: REST API - Simple, widely understood, but less efficient
- Approach B: GraphQL - Flexible queries, but steeper learning curve
- Approach C: gRPC - High performance, but requires protobuf setup
#### 2. Scope Boundaries
```
Ask clarifying questions:
- "Should this also handle [related feature]?"
- "Include [edge case scenario]?"
- "Support [additional requirement]?"
```
**Example:** "Should user authentication also include password reset functionality, or handle that separately?"
#### 3. Priority Trade-offs
```
When trade-offs exist, ask user to decide:
- "Optimize for speed OR memory efficiency?"
- "Prioritize simplicity OR flexibility?"
- "Focus on performance OR maintainability?"
```
**Example:** "This can be implemented for speed (caching, more memory) or simplicity (no cache, easier to maintain). Which priority?"
#### 4. Integration Decisions
```
Clarify connections to existing systems:
- "Integrate with existing [system] OR standalone?"
- "Use [library A] OR [library B]?"
- "Follow [pattern X] OR [pattern Y]?"
```
**Example:** "Should this use the existing Redis cache or create a new in-memory cache?"
**Only proceed to planning after user has made these decisions.**
---
## Phase 3: Planning & Approval 📋
### Present Implementation Strategy
**MANDATORY Components:**
1. **Architecture Decisions**
   - Design approach and rationale
   - Component structure
   - Data flow diagrams (if complex)
   - Integration strategy
2. **File Changes**
   - New files to create
   - Existing files to modify
   - Deletions (if any)
   - Configuration updates
3. **Quality Gates**
   - Testing strategy
   - Validation checkpoints
   - Coverage requirements
   - Performance benchmarks
4. **Risk Assessment**
   - Potential breaking changes
   - Migration requirements
   - Backwards compatibility concerns
   - Mitigation strategies
### Example Planning Output
```markdown
## Implementation Strategy for spec-feature-001
### Architecture Decisions
- Use JWT for stateless authentication
- Implement refresh token rotation
- Store sessions in Redis for scalability
- Use bcrypt for password hashing (cost factor: 12)
**Trade-offs:**
- ✅ Stateless = better scalability
- ⚠️ Redis dependency added
- ✅ Refresh rotation = better security
### File Changes
**New Files:**
- `src/auth/jwt_manager.py` - JWT generation and validation
- `src/auth/session_store.py` - Redis session management
- `tests/test_auth_flow.py` - Authentication flow tests
**Modified Files:**
- `src/auth/service.py` - Add JWT authentication
- `src/config.py` - Add auth configuration
- `requirements.txt` - Add PyJWT, redis dependencies
### Quality Gates
- Unit tests: All auth functions
- Integration tests: Complete auth flow
- Security tests: Token validation, expiry, rotation
- Coverage target: 90% for auth module
### Risk Assessment
- ⚠️ Breaking change: Session format changes
- Migration: Clear existing sessions on deploy
- Backwards compat: Old tokens expire gracefully
- Mitigation: Feature flag for gradual rollout
```
### Get User Approval
**MANDATORY: Wait for explicit approval before proceeding to Phase 4**
Approval phrases:
- "Proceed"
- "Looks good"
- "Go ahead"
- "Approved"
- "Start implementation"
---
## Phase 4: Implementation ⚡
### Agent-Orchestrated Development
**Agent Selection Matrix:**
```yaml
Task Type → Agent Strategy:

System Architecture:
  - Use architect agent to design solution
  - Use implementer agent to build components

Multi-file Changes:
  - Use researcher agent to map dependencies
  - Use refactorer agent to update consistently

Security Features:
  - Use security agent to define requirements
  - Use implementer agent to build securely
  - Use qa agent to create security tests

Test Creation:
  - Use qa agent for comprehensive coverage
  - Use implementer agent for test fixtures

Performance Optimization:
  - Use researcher agent to profile hotspots
  - Use refactorer agent to optimize code
  - Use qa agent to create performance tests
```
### Quality Cycle (Every 3 Edits)
**Continuous Validation:**
```yaml
After Every 3 Code Changes:
  1. Execute: Run relevant tests
  2. Validate: Check linting and type checking
  3. Fix: If ❌, address issues immediately
  4. Continue: Proceed with next changes

Example:
  Edit 1: Create auth/jwt_manager.py
  Edit 2: Add JWT generation method
  Edit 3: Add JWT validation method
  → RUN QUALITY CYCLE
    Execute: pytest tests/test_jwt.py
    Validate: ruff check auth/jwt_manager.py
    Fix: Address any issues
    Continue: Next 3 edits
```
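The 3-edit cadence reduces to a counter. A sketch, with the hypothetical `record_edit` helper returning `True` whenever a checkpoint is due:
```python
EDITS_PER_CHECKPOINT = 3


class QualityCycle:
    """Count edits and signal when the 3-edit checkpoint is due."""

    def __init__(self) -> None:
        self.edits = 0

    def record_edit(self, description: str) -> bool:
        self.edits += 1
        print(f"Edit {self.edits}: {description}")
        return self.edits % EDITS_PER_CHECKPOINT == 0  # True → run the cycle


cycle = QualityCycle()
for edit in ["Create auth/jwt_manager.py", "Add JWT generation", "Add JWT validation"]:
    if cycle.record_edit(edit):
        print("→ RUN QUALITY CYCLE (tests + linting, fix before continuing)")
```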
### Implementation Patterns
**Single-File Feature:**
```yaml
Pattern:
1. Create/modify file
2. Add documentation
3. Create tests
4. Validate quality
5. Update specification
```
**Multi-File Feature:**
```yaml
Pattern:
1. Use researcher agent → Map dependencies
2. Use architect agent → Design components
3. Use implementer agent → Build core functionality
4. Use refactorer agent → Ensure consistency
5. Use qa agent → Create comprehensive tests
6. Validate quality gates
7. Update specification
```
**System Refactoring:**
```yaml
Pattern:
1. Use researcher agent → Analyze impact
2. Use architect agent → Design new structure
3. Use refactorer agent → Update all files
4. Use qa agent → Validate no regressions
5. Validate quality gates
6. Update documentation
```
### Code Quality Checkpoints
**Automatic Refactoring Triggers:**
- Function exceeds 50 lines → Use refactorer agent to break into smaller functions
- Nesting depth exceeds 3 → Use refactorer agent to simplify logic
- Circular dependencies detected → Use architect agent to review design
- Duplicate code found → Use refactorer agent to extract common functionality
- Performance implications unclear → Use implementer agent to add measurements
---
## Phase 5: Validation & Completion ✅
### Language-Specific Validation
**Python:**
```bash
ruff check . --fix # Linting
ruff format . # Formatting
pytest -v # Tests
mypy . --ignore-missing-imports # Type checking
```
**Rust:**
```bash
cargo clippy -- -D warnings # Linting
cargo fmt # Formatting
cargo test # Tests
cargo check # Type checking
```
**JavaScript/TypeScript:**
```bash
npx eslint . --fix # Linting
npx prettier --write . # Formatting
npm test # Tests
npx tsc --noEmit # Type checking (TS only)
```
**Generic (Any Language):**
- Syntax validation
- Error handling review
- Documentation completeness
- Test coverage assessment
### Completion Criteria
**All Must Pass:**
- ✅ All tests passing (no skipped tests without justification)
- ✅ Zero linting errors (warnings acceptable with comment)
- ✅ Type checking clean (if applicable to language)
- ✅ Documentation complete (functions, classes, modules)
- ✅ Specification status updated (phases marked complete)
- ✅ No unhandled edge cases
- ✅ Performance within acceptable bounds
### Specification Update
**Post-Implementation Protocol:**
```yaml
Update Specification:
  - Mark completed phases: ✅ in spec file
  - Update acceptance criteria status
  - Add implementation notes (if needed)
  - Check if all phases complete → Move to completed/
  - Generate commit message with spec reference

Example:
  Phase 1: Authentication Flow Design - ✅ Complete
  Phase 2: JWT Implementation - ✅ Complete
  Phase 3: Session Management - ✅ Complete
  Phase 4: Security Testing - ✅ Complete
  → All phases complete
  → Move spec-feature-001 from active/ to completed/
  → Ready for review and PR creation
```
### Final Validation
**Before Declaring Complete:**
1. Run full test suite: `uv run pytest` or equivalent
2. Check git status: No unintended changes
3. Verify specification: All acceptance criteria met
4. Review documentation: Complete and accurate
5. Confirm quality gates: All passed
### Completion Response
**Standard Response Format:**
```
✅ Implementation Complete

Specification: spec-feature-001 - User Authentication System
Status: All phases completed, moved to completed/

Quality Gates:
  - ✅ Tests: 42 passed, 0 failed
  - ✅ Linting: 0 errors, 0 warnings
  - ✅ Type checking: Clean
  - ✅ Coverage: 92% (target: 90%)

Changes:
  - 3 new files created
  - 2 existing files modified
  - 42 tests added
  - 0 breaking changes

Ready for review phase. Use /review command to validate and create PR.
```
---
## Error Handling & Recovery
### Common Issues
**Issue: Tests Failing**
```yaml
Recovery:
1. Analyze: Identify root cause
2. Fix: Address failing tests
3. Validate: Re-run test suite
4. Continue: If fixed, proceed; if persistent, use qa agent for analysis
```
**Issue: Linting Errors**
```yaml
Recovery:
1. Auto-fix: Run linter with --fix flag
2. Manual: Address remaining issues
3. Validate: Re-run linter
4. Continue: Proceed when clean
```
**Issue: Type Checking Errors**
```yaml
Recovery:
1. Analyze: Identify type mismatches
2. Fix: Add proper type annotations
3. Validate: Re-run type checker
4. Continue: Proceed when clean
```
**Issue: Specification Conflict**
```yaml
Recovery:
1. Review: Check specification requirements
2. Discuss: Clarify with user if ambiguous
3. Adjust: Modify implementation or specification
4. Continue: Proceed with aligned understanding
```
---
*Complete workflow for production-quality implementation with quality gates and specification tracking*


@@ -0,0 +1,190 @@
# Generic Language Quality Standards
**Load this file when:** Implementing in languages without specific quality standards (PHP, Ruby, C++, C#, etc.)
## General Quality Gates
```yaml
Syntax & Structure:
  - Valid syntax (runs without parse errors)
  - Consistent indentation (2 or 4 spaces)
  - Clear variable naming
  - Functions <= 50 lines (guideline)
  - Nesting depth <= 3 levels

Testing:
  - Unit tests for core functionality
  - Integration tests for workflows
  - Edge case coverage
  - Error path testing
  - Reasonable coverage (>= 70%)

Documentation:
  - README with setup instructions
  - Function/method documentation
  - Complex algorithms explained
  - API documentation (if library)
  - Usage examples

Error Handling:
  - Proper exception/error handling
  - No swallowed errors
  - Meaningful error messages
  - Graceful failure modes
  - Resource cleanup

Code Quality:
  - No code duplication
  - Clear separation of concerns
  - Meaningful names
  - Single responsibility principle
  - No magic numbers/strings
```
## Quality Checklist
**Before Declaring Complete:**
- [ ] Code runs without errors
- [ ] All tests pass
- [ ] Documentation complete
- [ ] Error handling in place
- [ ] No obvious code smells
- [ ] Functions reasonably sized
- [ ] Clear variable names
- [ ] No TODO comments left
- [ ] Resources properly managed
- [ ] Code reviewed for clarity
## SOLID Principles
**Apply regardless of language:**
```yaml
Single Responsibility:
  - Each class/module has one reason to change
  - Clear, focused purpose
  - Avoid "god objects"

Open/Closed:
  - Open for extension, closed for modification
  - Use interfaces/traits for extensibility
  - Avoid modifying working code

Liskov Substitution:
  - Subtypes must be substitutable for base types
  - Honor contracts in inheritance
  - Avoid breaking parent behavior

Interface Segregation:
  - Many specific interfaces > one general interface
  - Clients shouldn't depend on unused methods
  - Keep interfaces focused

Dependency Inversion:
  - Depend on abstractions, not concretions
  - High-level modules independent of low-level
  - Use dependency injection
```
## Code Smell Detection
**Watch for these issues:**
```yaml
Long Methods:
  - Threshold: > 50 lines
  - Action: Extract smaller methods
  - Tool: Refactorer agent

Deep Nesting:
  - Threshold: > 3 levels
  - Action: Flatten with early returns
  - Tool: Refactorer agent

Duplicate Code:
  - Detection: Similar code blocks
  - Action: Extract to shared function
  - Tool: Refactorer agent

Large Classes:
  - Threshold: > 300 lines
  - Action: Split responsibilities
  - Tool: Architect + Refactorer agents

Magic Numbers:
  - Detection: Unexplained constants
  - Action: Named constants
  - Tool: Implementer agent

Poor Naming:
  - Detection: Unclear variable names
  - Action: Rename to be descriptive
  - Tool: Refactorer agent
```
## Example Quality Pattern
**Pseudocode showing good practices:**
```
// Good: Clear function with single responsibility
function loadConfiguration(filePath: string): Config {
    // Early validation
    if (!fileExists(filePath)) {
        throw FileNotFoundError("Config not found: " + filePath)
    }

    try {
        // Clear steps
        content = readFile(filePath)
        config = parseYAML(content)
        validateConfig(config)
        return config
    } catch (error) {
        // Proper error context
        throw ConfigError("Failed to load config from " + filePath, error)
    }
}

// Good: Named constants instead of magic numbers
const MAX_RETRY_ATTEMPTS = 3
const TIMEOUT_MS = 5000

// Good: Early returns instead of deep nesting
function processUser(user: User): Result {
    if (!user.isActive) {
        return Result.error("User not active")
    }
    if (!user.hasPermission) {
        return Result.error("Insufficient permissions")
    }
    if (!user.isVerified) {
        return Result.error("User not verified")
    }

    // Main logic only runs if all checks pass
    return Result.success(doProcessing(user))
}
```
## Language-Specific Commands
**Find and use the standard tools for your language:**
```yaml
Python: ruff, pytest, mypy
Rust: cargo clippy, cargo test, cargo fmt
JavaScript/TypeScript: eslint, prettier, jest/vitest
Go: golangci-lint, go test, gofmt
Java: checkstyle, junit, maven/gradle
C#: dotnet format, xunit, roslyn analyzers
Ruby: rubocop, rspec, yard
PHP: phpcs, phpunit, psalm/phpstan
C++: clang-tidy, gtest, clang-format
```
---
*Generic quality standards applicable across programming languages*


@@ -0,0 +1,162 @@
# Go Quality Standards
**Load this file when:** Implementing features in Go projects
## Validation Commands
```bash
# Linting
golangci-lint run
# Formatting
gofmt -w .
# OR
go fmt ./...
# Tests
go test ./...
# Coverage
go test -cover ./...
# Race Detection
go test -race ./...
# Full Validation Pipeline
gofmt -w . && golangci-lint run && go test ./...
```
## Required Standards
```yaml
Code Style:
  - Follow: Effective Go guidelines
  - Formatting: gofmt (automatic)
  - Naming: MixedCaps, not snake_case
  - Package names: Short, concise, lowercase

Testing:
  - Framework: Built-in testing package
  - Coverage: >= 75%
  - Test files: *_test.go
  - Table-driven tests: Prefer for multiple cases
  - Benchmarks: Include for performance-critical code

Documentation:
  - Package: Package-level doc comment
  - Exported: All exported items documented
  - Examples: Provide examples for complex APIs
  - README: Clear usage instructions

Error Handling:
  - Return errors, don't panic
  - Use errors.New or fmt.Errorf
  - Wrap errors with context (fmt.Errorf with %w, or errors.Wrap)
  - Check all errors explicitly
  - No ignored errors (use _ = explicitly)
```
## Quality Checklist
**Before Declaring Complete:**
- [ ] Code formatted (`gofmt` or `go fmt`)
- [ ] No linting issues (`golangci-lint run`)
- [ ] All tests pass (`go test ./...`)
- [ ] No race conditions (`go test -race ./...`)
- [ ] Test coverage >= 75%
- [ ] All exported items documented
- [ ] All errors checked explicitly
- [ ] No panics in library code
- [ ] Proper error wrapping with context
- [ ] Resource cleanup with defer
## Example Quality Pattern
```go
package config

import (
    "fmt"
    "os"

    "gopkg.in/yaml.v3"
)

// Config represents the application configuration.
type Config struct {
    APIKey  string `yaml:"api_key"`
    Timeout int    `yaml:"timeout"`
}

// LoadConfig loads configuration from a YAML file.
//
// It returns an error if the file doesn't exist or contains invalid YAML.
//
// Example:
//
//	config, err := LoadConfig("config.yaml")
//	if err != nil {
//	    log.Fatal(err)
//	}
func LoadConfig(path string) (*Config, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("failed to read config file %s: %w", path, err)
    }

    var config Config
    if err := yaml.Unmarshal(data, &config); err != nil {
        return nil, fmt.Errorf("failed to parse YAML in %s: %w", path, err)
    }

    return &config, nil
}
```
**Table-Driven Test Example:**
```go
package config

import (
    "reflect"
    "testing"
)

func TestLoadConfig(t *testing.T) {
    tests := []struct {
        name    string
        path    string
        want    *Config
        wantErr bool
    }{
        {
            name:    "valid config",
            path:    "testdata/valid.yaml",
            want:    &Config{APIKey: "test-key", Timeout: 30},
            wantErr: false,
        },
        {
            name:    "missing file",
            path:    "testdata/missing.yaml",
            want:    nil,
            wantErr: true,
        },
        {
            name:    "invalid yaml",
            path:    "testdata/invalid.yaml",
            want:    nil,
            wantErr: true,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := LoadConfig(tt.path)
            if (err != nil) != tt.wantErr {
                t.Errorf("LoadConfig() error = %v, wantErr %v", err, tt.wantErr)
                return
            }
            if !reflect.DeepEqual(got, tt.want) {
                t.Errorf("LoadConfig() = %v, want %v", got, tt.want)
            }
        })
    }
}
```
---
*Go-specific quality standards for production-ready code*


@@ -0,0 +1,154 @@
# JavaScript/TypeScript Quality Standards
**Load this file when:** Implementing features in JavaScript or TypeScript projects
## Validation Commands
**JavaScript:**
```bash
# Linting
npx eslint . --fix
# Formatting
npx prettier --write .
# Tests
npm test
# Full Validation Pipeline
npx eslint . && npx prettier --check . && npm test
```
**TypeScript:**
```bash
# Linting
npx eslint . --fix
# Formatting
npx prettier --write .
# Type Checking
npx tsc --noEmit
# Tests
npm test
# Full Validation Pipeline
npx eslint . && npx prettier --check . && npx tsc --noEmit && npm test
```
## Required Standards
```yaml
Code Style:
  - Line length: 100-120 characters
  - Semicolons: Consistent (prefer using them)
  - Quotes: Single or double (consistent)
  - Trailing commas: Always in multiline

Testing:
  - Framework: Jest, Mocha, or Vitest
  - Coverage: >= 80%
  - Test files: *.test.js, *.spec.js
  - Mocking: Prefer dependency injection
  - Async: Use async/await, not callbacks

Documentation:
  - JSDoc for all exported functions
  - README for packages
  - Type definitions (TypeScript or JSDoc)
  - API documentation for libraries

TypeScript Specific:
  - Strict mode enabled
  - No 'any' types (use 'unknown' if needed)
  - Proper interface/type definitions
  - Generic types where appropriate
  - Discriminated unions for state

Error Handling:
  - Try/catch for async operations
  - Error boundaries (React)
  - Proper promise handling
  - No unhandled promise rejections
```
## Quality Checklist
**Before Declaring Complete:**
- [ ] No linting errors (`eslint .`)
- [ ] Code formatted (`prettier --check .`)
- [ ] Type checking passes (TS: `tsc --noEmit`)
- [ ] All tests pass (`npm test`)
- [ ] Test coverage >= 80%
- [ ] No 'any' types (TypeScript)
- [ ] All exported functions have JSDoc
- [ ] Async operations properly handled
- [ ] Error boundaries implemented (React)
- [ ] No console.log in production code
## Example Quality Pattern
**TypeScript:**
```typescript
import * as fs from 'node:fs';
import * as yaml from 'yaml'; // assumes the 'yaml' npm package

// `Config` is assumed to be defined elsewhere in the project.
import type { Config } from './types';

/**
 * Load configuration from YAML file.
 *
 * @param configPath - Path to configuration file
 * @returns Parsed configuration object
 * @throws {Error} If file doesn't exist or YAML is invalid
 *
 * @example
 * ```ts
 * const config = await loadConfig('./config.yaml');
 * console.log(config.apiKey);
 * ```
 */
export async function loadConfig(configPath: string): Promise<Config> {
  if (!fs.existsSync(configPath)) {
    throw new Error(`Config not found: ${configPath}`);
  }

  try {
    const contents = await fs.promises.readFile(configPath, 'utf-8');
    return yaml.parse(contents) as Config;
  } catch (error) {
    // In strict mode `error` is `unknown`, so narrow before reading .message
    const message = error instanceof Error ? error.message : String(error);
    throw new Error(`Invalid YAML in ${configPath}: ${message}`);
  }
}
```
**JavaScript with JSDoc:**
```javascript
import fs from 'node:fs';
import yaml from 'yaml'; // assumes the 'yaml' npm package

/**
 * @typedef {Object} Config
 * @property {string} apiKey - API key for service
 * @property {number} timeout - Request timeout in ms
 */

/**
 * Load configuration from YAML file.
 *
 * @param {string} configPath - Path to configuration file
 * @returns {Promise<Config>} Parsed configuration object
 * @throws {Error} If file doesn't exist or YAML is invalid
 */
export async function loadConfig(configPath) {
  if (!fs.existsSync(configPath)) {
    throw new Error(`Config not found: ${configPath}`);
  }

  try {
    const contents = await fs.promises.readFile(configPath, 'utf-8');
    return yaml.parse(contents);
  } catch (error) {
    throw new Error(`Invalid YAML in ${configPath}: ${error.message}`);
  }
}
```
---
*JavaScript/TypeScript-specific quality standards for production-ready code*


@@ -0,0 +1,101 @@
# Python Quality Standards
**Load this file when:** Implementing features in Python projects
## Validation Commands
```bash
# Linting
ruff check . --fix
# Formatting
ruff format .
# Tests
pytest -v
# Type Checking
mypy . --ignore-missing-imports
# Coverage
pytest --cov --cov-report=html
# Full Validation Pipeline
ruff check . && ruff format . && mypy . && pytest
```
## Required Standards
```yaml
Code Style:
  - Line length: 120 characters (configurable)
  - Imports: Sorted with isort style
  - Docstrings: Google or NumPy style
  - Type hints: Everywhere (functions, methods, variables)

Testing:
  - Framework: pytest
  - Coverage: >= 80%
  - Test files: test_*.py or *_test.py
  - Fixtures: Prefer pytest fixtures over setup/teardown
  - Assertions: Use pytest assertions, not unittest

Documentation:
  - All modules: Docstring with purpose
  - All classes: Docstring with attributes
  - All functions: Docstring with args, returns, raises
  - Complex logic: Inline comments for clarity

Error Handling:
  - Use specific exceptions (not bare except)
  - Custom exceptions for domain errors
  - Proper exception chaining
  - Clean resource management (context managers)
```
## Quality Checklist
**Before Declaring Complete:**
- [ ] All functions have type hints
- [ ] All functions have docstrings (Google/NumPy style)
- [ ] No linting errors (`ruff check .`)
- [ ] Code formatted consistently (`ruff format .`)
- [ ] Type checking passes (`mypy .`)
- [ ] All tests pass (`pytest`)
- [ ] Test coverage >= 80%
- [ ] No bare except clauses
- [ ] Proper exception handling
- [ ] Resources properly managed
## Example Quality Pattern
```python
from pathlib import Path
from typing import Any

import yaml


def load_config(config_path: Path) -> dict[str, Any]:
    """Load configuration from YAML file.

    Args:
        config_path: Path to configuration file

    Returns:
        Dictionary containing configuration values

    Raises:
        FileNotFoundError: If config file doesn't exist
        ValueError: If config file is invalid YAML
    """
    if not config_path.exists():
        raise FileNotFoundError(f"Config not found: {config_path}")

    try:
        with config_path.open() as f:
            return yaml.safe_load(f)
    except yaml.YAMLError as e:
        raise ValueError(f"Invalid YAML in {config_path}") from e
```
---
*Python-specific quality standards for production-ready code*


@@ -0,0 +1,120 @@
# Rust Quality Standards
**Load this file when:** Implementing features in Rust projects
## Validation Commands
```bash
# Linting
cargo clippy -- -D warnings
# Formatting
cargo fmt
# Tests
cargo test
# Type Checking (implicit)
cargo check
# Documentation
cargo doc --no-deps --open
# Full Validation Pipeline
cargo clippy -- -D warnings && cargo fmt --check && cargo test
```
## Required Standards
```yaml
Code Style:
  - Follow: Rust API guidelines
  - Formatting: rustfmt (automatic)
  - Naming: snake_case for functions, PascalCase for types
  - Modules: Clear separation of concerns

Testing:
  - Framework: Built-in test framework
  - Coverage: >= 75%
  - Unit tests: In same file with #[cfg(test)]
  - Integration tests: In tests/ directory
  - Doc tests: In documentation examples

Documentation:
  - All public items: /// documentation
  - Modules: //! module-level docs
  - Examples: Working examples in docs
  - Safety: Document unsafe blocks thoroughly

Error Handling:
  - Use Result<T, E> for fallible operations
  - Use Option<T> for optional values
  - No .unwrap() in production code
  - Custom error types with thiserror or anyhow
  - Proper error context with context/wrap_err
```
## Quality Checklist
**Before Declaring Complete:**
- [ ] No clippy warnings (`cargo clippy -- -D warnings`)
- [ ] Code formatted (`cargo fmt --check`)
- [ ] All tests pass (`cargo test`)
- [ ] No unwrap() calls in production code
- [ ] Result<T, E> used for all fallible operations
- [ ] All public items documented
- [ ] Examples in documentation tested
- [ ] Unsafe blocks documented with safety comments
- [ ] Proper error types defined
- [ ] Resource cleanup handled (Drop trait if needed)
## Example Quality Pattern
```rust
use std::path::Path;

use serde::Deserialize;
use thiserror::Error;

/// Application configuration (shape assumed for this example).
#[derive(Debug, Deserialize)]
pub struct Config {
    pub api_key: String,
    pub timeout: u64,
}

#[derive(Error, Debug)]
pub enum ConfigError {
    #[error("Config file not found: {0}")]
    NotFound(String),

    #[error("Failed to read config: {0}")]
    Io(#[from] std::io::Error),

    #[error("Invalid YAML: {0}")]
    InvalidYaml(#[from] serde_yaml::Error),
}

/// Load configuration from YAML file.
///
/// # Arguments
///
/// * `path` - Path to configuration file
///
/// # Returns
///
/// Returns the parsed configuration or an error.
///
/// # Errors
///
/// Returns `ConfigError::NotFound` if the file doesn't exist,
/// `ConfigError::Io` if it cannot be read, and
/// `ConfigError::InvalidYaml` if parsing fails.
///
/// # Examples
///
/// ```ignore
/// let config = load_config(Path::new("config.yaml"))?;
/// ```
pub fn load_config(path: &Path) -> Result<Config, ConfigError> {
    if !path.exists() {
        return Err(ConfigError::NotFound(path.display().to_string()));
    }

    let contents = std::fs::read_to_string(path)?;
    let config: Config = serde_yaml::from_str(&contents)?;
    Ok(config)
}
```
---
*Rust-specific quality standards for production-ready code*