Initial commit

Zhongwei Li
2025-11-30 08:42:35 +08:00
commit 6afaf3cc68
5 changed files with 1674 additions and 0 deletions

11
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,11 @@
{
"name": "neills-skills",
"description": "Production-grade skills for Claude Code: ExecPlan for structured planning and REPL-Driven Development for Clojure 1.12",
"version": "1.0.0",
"author": {
"name": "Neill Killgore"
},
"skills": [
"./skills"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# neills-skills
Production-grade skills for Claude Code: ExecPlan for structured planning and REPL-Driven Development for Clojure 1.12

49
plugin.lock.json Normal file

@@ -0,0 +1,49 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:neill-k/cc-skills:",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "046a586f5a38d0c5580e618fba2e19a256c4074c",
"treeHash": "cca08e0b38f39ea10d2503887c8fcc1db12723a7f73cd9ab98c69c183a5e7be6",
"generatedAt": "2025-11-28T10:27:17.829789Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "neills-skills",
"description": "Production-grade skills for Claude Code: ExecPlan for structured planning and REPL-Driven Development for Clojure 1.12",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "960d621743412c6593abd4b5391c5c367b68ae13cb7f3606db1b0b4f81825b93"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "9fd8f74279a3c92bd2467453e6aa29344ee6bcdcef4a78830a9e7c0f72cc639f"
},
{
"path": "skills/development/exec-plan.md",
"sha256": "89be573ab5baf01e19f755afcefc0447890498c63d6f0e3bddcc806dc421f6c9"
},
{
"path": "skills/development/repl-driven-clojure.md",
"sha256": "8d30e1e9d5f2a1f9b81f8369e4ef18ccb7c59627f43ead80d01bc150dd39c766"
}
],
"dirSha256": "cca08e0b38f39ea10d2503887c8fcc1db12723a7f73cd9ab98c69c183a5e7be6"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

673
skills/development/exec-plan.md Normal file

@@ -0,0 +1,673 @@
---
name: exec-plan
description: Create structured, living work plans for complex coding tasks with progress tracking, decision logs, and verification steps
---
# ExecPlan Skill
When the user requests help with a complex feature, refactoring, or significant code change, use this skill to create a structured work plan that serves as a living document throughout execution.
## Core Principles
1. **Structured Planning**: Break down complex work into concrete, verifiable steps
2. **Living Document**: Update the plan continuously as work progresses
3. **Decision Capture**: Log important choices and rationale
4. **Progress Visibility**: Track completion with timestamps and status
5. **Learning Record**: Document surprises, discoveries, and lessons learned
## When to Use This Skill
Activate this skill when the user:
- Starts a new feature implementation
- Plans a significant refactoring
- Requests help with a complex bug fix
- Needs to coordinate multi-file changes
- Wants a structured approach to development
- Asks "how should I approach this?"
- Mentions planning, roadmap, or execution strategy
## ExecPlan Framework
An ExecPlan is a **task-scoped work plan** that becomes shared vocabulary between you and the user. It structures how features get built, providing clarity and reducing cognitive load.
### Document Structure
Every ExecPlan should contain these sections:
1. **Metadata**: Ownership, dates, related links
2. **Short Description**: One-sentence outcome statement
3. **Purpose / Big Picture**: User or system gains with measurable outcomes
4. **Context and Orientation**: Current state, file locations, terminology
5. **Plan of Work**: Concrete steps with verification
6. **Progress**: Checkbox-tracked steps with timestamps
7. **Surprises & Discoveries**: Unexpected findings and optimizations
8. **Decision Log**: Material decisions with rationale
9. **Outcomes & Retrospective**: Results and lessons learned
10. **Risks / Open Questions**: (Optional) Assumptions and metrics
11. **Next Steps / Handoff Notes**: (Optional) Remaining work
## Output Format
```markdown
# ExecPlan: [Feature/Task Name]
> **Living design and execution record**
## Metadata
- **Owner**: [Your name or team]
- **Created**: [YYYY-MM-DD]
- **Last Updated**: [YYYY-MM-DD HH:MM UTC]
- **Agent Path**: [e.g., .agents/feature-name/]
- **Related Plans**: [Links to related plans or issues]
- **Status**: 🟡 In Progress | 🟢 Complete | 🔴 Blocked
---
## Short Description
[One sentence: What new behavior or improvement will exist when complete?]
Example: "Users can now filter search results by date range with sub-second response times."
---
## Purpose / Big Picture
**What does the user or system gain?**
[Explain measurable outcomes and benefits]
**Success Criteria:**
- [ ] Criterion 1 with measurable target
- [ ] Criterion 2 with measurable target
- [ ] Criterion 3 with measurable target
Example: "Users can ask multi-turn clarifications within 2s latency, improving task completion rates by 30%."
---
## Context and Orientation
**Current State:**
[Summarize the existing system, its limitations, and why this work is needed]
**Key Files:**
- `path/to/file1.ext` - Current role and limitations
- `path/to/file2.ext` - Current role and limitations
- `path/to/file3.ext` - Current role and limitations
**Domain Terminology:**
- **Term 1**: Definition (don't assume prior knowledge)
- **Term 2**: Definition
- **Term 3**: Definition
**Assumptions:**
- Assumption 1
- Assumption 2
---
## Plan of Work
### Phase 1: [Phase Name]
**1. [Concrete Action]**
- **Files affected**: `path/to/file.ext`
- **Expected effect**: What will change
- **Verification**: How to confirm it works
- **Estimated time**: X hours
**2. [Next Action]**
- **Files affected**: `path/to/files.ext`
- **Expected effect**: What will change
- **Verification**: How to confirm it works
- **Estimated time**: Y hours
### Phase 2: [Phase Name]
**3. [Action]**
...
### Phase 3: Testing & Validation
**X. [Test Action]**
- **Test coverage**: What scenarios to test
- **Expected results**: Pass criteria
- **Verification**: Test commands to run
---
## Progress
**Overall**: ⚪⚪⚪⚪⚪⚪⚪⚪⚪⚪ 0% (0/10 steps)
- [ ] **Step 1**: [Description]
- Status: Pending
- Started: —
- Completed: —
- [x] **Step 2**: [Description]
- Status: ✅ Complete
- Started: 2025-01-10 14:22 UTC
- Completed: 2025-01-10 15:45 UTC
- Notes: [Any relevant notes]
- [ ] **Step 3**: [Description]
- Status: 🟡 In Progress (60%)
- Started: 2025-01-10 15:50 UTC
- Completed: —
- Notes: Blocked by API rate limit issue
[Continue for all steps...]
**Velocity**: [X steps per day/hour, if tracking]
---
## Surprises & Discoveries
**[YYYY-MM-DD]**: [Discovery Title]
- **What**: What we found
- **Why it matters**: Impact on the plan
- **Action taken**: What we did about it
- **Evidence**: Links, metrics, or observations
Example:
**2025-01-10**: Database query N+1 problem in user list
- **What**: Found existing user list endpoint making 100+ queries per request
- **Why it matters**: 5s+ response time blocking this feature
- **Action taken**: Added eager loading, reduced to 2 queries
- **Evidence**: Response time dropped from 5.2s to 0.3s (see PR #123)
---
## Decision Log
**[YYYY-MM-DD]**: [Decision Title]
- **Decision**: What was decided
- **Rationale**: Why this choice over alternatives
- **Alternatives considered**: What else was evaluated
- **Trade-offs**: What we're giving up
- **Decided by**: [Name/role]
Example:
**2025-01-10**: Use Redis for caching instead of in-memory cache
- **Decision**: Implement Redis-based caching layer
- **Rationale**: Need shared cache across multiple servers
- **Alternatives considered**: In-memory cache (doesn't scale), Memcached (less feature-rich)
- **Trade-offs**: Additional infrastructure dependency, slight latency increase (2ms)
- **Decided by**: Engineering team consensus
---
## Outcomes & Retrospective
### What Worked ✅
- [Successful approach or decision]
- [Technique that proved valuable]
- [Tool or pattern that helped]
### What Didn't Work ❌
- [Challenge or obstacle]
- [Approach that failed]
- [Assumption that was wrong]
### Measured Impact 📊
- **Metric 1**: Before → After (% change)
- **Metric 2**: Before → After (% change)
- **Metric 3**: Before → After (% change)
### Lessons Learned 💡
- [Key insight for future work]
- [Pattern to apply elsewhere]
- [Pitfall to avoid]
---
## Risks / Open Questions
**Risks:**
- ⚠️ **[Risk]**: Description and mitigation strategy
- ⚠️ **[Risk]**: Description and mitigation strategy
**Open Questions:**
- **[Question]**: Context and why it matters
- **[Question]**: Context and why it matters
**Success Metrics:**
- [ ] Metric 1: Target value
- [ ] Metric 2: Target value
---
## Next Steps / Handoff Notes
**Immediate Next Steps:**
1. [Next action item]
2. [Next action item]
3. [Next action item]
**Future Enhancements:**
- [Potential improvement]
- [Follow-up feature]
- [Technical debt to address]
**Handoff Information:**
- PR: #[number] - [title]
- Related Issues: #[number], #[number]
- Documentation: [link to docs]
- Deployment Notes: [special considerations]
---
**Plan Status**: 🟡 Active | Last updated: [timestamp]
```
## Best Practices
### DO:
- **Update continuously**: Treat the plan as a living document
- **Be specific**: Use concrete file paths and measurable outcomes
- **Track progress**: Update checkboxes and timestamps religiously
- **Log decisions**: Capture rationale while it's fresh
- **Document surprises**: Write down unexpected findings immediately
- **Use timestamps**: Always include UTC timestamps for tracking
- **Break down work**: Keep steps small and verifiable
- **Think in phases**: Group related work logically
- **Define success**: Make success criteria measurable
- **Capture context**: Explain terms and assumptions
### DON'T:
- **Set and forget**: Don't write the plan then ignore it
- **Be vague**: Avoid "improve performance" without metrics
- **Skip verification**: Every step needs a way to confirm success
- **Ignore discoveries**: Document unexpected findings
- **Assume knowledge**: Define domain-specific terminology
- **Hide decisions**: Always explain why, not just what
- **Make steps too large**: Break down anything taking >4 hours
- **Forget timestamps**: Track when work actually happens
- **Abandon retrospectives**: Always complete the outcomes section
## Usage Patterns
### Pattern 1: New Feature Development
**User**: "I need to add user authentication to my app"
**Assistant**: "I'll create an ExecPlan to structure this feature implementation."
*Creates comprehensive plan with:*
- Context about current auth state
- Phases: Database → API endpoints → Frontend → Testing
- Specific file changes with verification steps
- Security considerations in decision log
- Progress tracking setup
### Pattern 2: Complex Refactoring
**User**: "This module has become a mess, help me refactor it"
**Assistant**: "Let me create an ExecPlan for this refactoring to ensure we don't break anything."
*Creates plan with:*
- Current state analysis and pain points
- Incremental refactoring steps (small, testable changes)
- Test strategy to maintain behavior
- Progress checkboxes for each refactor
- Decision log for architectural choices
### Pattern 3: Bug Fix with Root Cause Analysis
**User**: "Users are reporting intermittent 500 errors"
**Assistant**: "I'll create an ExecPlan to systematically debug and fix this issue."
*Creates plan with:*
- Hypothesis section for potential causes
- Investigation steps with verification
- Fix implementation plan
- Monitoring and validation steps
- Surprises section to document root cause
### Pattern 4: Multi-File Coordination
**User**: "We need to migrate from REST to GraphQL"
**Assistant**: "This is a complex migration. Let me create an ExecPlan to coordinate all the changes."
*Creates plan with:*
- Complete file inventory and dependencies
- Migration phases with rollback points
- Parallel work opportunities
- Integration testing strategy
- Handoff notes for deployment
## Example: Complete ExecPlan in Action
**Scenario**: User wants to add API rate limiting
**ExecPlan Output**:
````markdown
# ExecPlan: API Rate Limiting Implementation
> **Living design and execution record**
## Metadata
- **Owner**: Development Team
- **Created**: 2025-01-10
- **Last Updated**: 2025-01-10 16:30 UTC
- **Agent Path**: `.agents/rate-limiting/`
- **Related Plans**: None
- **Status**: 🟡 In Progress
---
## Short Description
API endpoints now enforce rate limits of 100 requests per minute per user, preventing abuse while maintaining quality of service for legitimate users.
---
## Purpose / Big Picture
**What does the user or system gain?**
The system gains protection against API abuse, reduces server load from runaway scripts, and ensures fair resource distribution among users. Legitimate users experience consistent performance.
**Success Criteria:**
- [ ] 100 req/min limit enforced per user with <5ms overhead
- [ ] Rate limit information returned in response headers
- [ ] Configurable limits per user tier (free/pro/enterprise)
- [ ] Graceful error messages when limits exceeded
- [ ] Monitoring dashboard shows rate limit hits
---
## Context and Orientation
**Current State:**
Currently, the API has no rate limiting. Some users run aggressive scripts that cause load spikes (up to 10,000 req/min), degrading service for others. We've had 3 outages in the past month related to API abuse.
**Key Files:**
- `src/middleware/auth.ts` - Current authentication middleware (38 lines)
- `src/routes/api.ts` - API route definitions (200+ lines)
- `src/config/limits.ts` - (NEW) Rate limit configuration
- `src/middleware/rateLimit.ts` - (NEW) Rate limiting logic
**Domain Terminology:**
- **Token bucket algorithm**: Rate limiting method that allows bursts while enforcing average rate
- **X-RateLimit headers**: Standard HTTP headers communicating limits to clients
- **429 status code**: "Too Many Requests" HTTP status
**Assumptions:**
- Using Redis for distributed rate limit tracking
- Rate limits apply per authenticated user (not IP-based)
- All existing API routes will be protected
---
## Plan of Work
### Phase 1: Infrastructure Setup
**1. Add Redis dependency and configuration**
- **Files affected**: `package.json`, `src/config/redis.ts` (NEW)
- **Expected effect**: Redis client available for rate limit storage
- **Verification**: `npm install` succeeds, Redis connection test passes
- **Estimated time**: 1 hour
**2. Create rate limit configuration**
- **Files affected**: `src/config/limits.ts` (NEW)
- **Expected effect**: Centralized config for different user tiers
- **Verification**: Config values accessible, TypeScript types correct
- **Estimated time**: 30 minutes
### Phase 2: Middleware Implementation
**3. Implement token bucket rate limiter**
- **Files affected**: `src/middleware/rateLimit.ts` (NEW)
- **Expected effect**: Middleware checks and updates rate limit counters
- **Verification**: Unit tests pass for limit enforcement
- **Estimated time**: 3 hours
**4. Add rate limit headers to responses**
- **Files affected**: `src/middleware/rateLimit.ts`
- **Expected effect**: Responses include X-RateLimit-* headers
- **Verification**: Response headers present in test requests
- **Estimated time**: 1 hour
### Phase 3: Integration
**5. Apply middleware to API routes**
- **Files affected**: `src/routes/api.ts`
- **Expected effect**: All routes protected by rate limiting
- **Verification**: Test requests get rate limited after threshold
- **Estimated time**: 1 hour
**6. Add error handling for rate limit exceeded**
- **Files affected**: `src/middleware/errorHandler.ts`
- **Expected effect**: Clear 429 error message with retry info
- **Verification**: Exceeded requests get helpful error message
- **Estimated time**: 30 minutes
### Phase 4: Testing & Monitoring
**7. Write integration tests**
- **Files affected**: `tests/rateLimit.integration.test.ts` (NEW)
- **Expected effect**: Comprehensive test coverage for rate limiting
- **Verification**: All tests pass, >90% coverage
- **Estimated time**: 2 hours
**8. Add monitoring dashboard**
- **Files affected**: `src/monitoring/rateLimitMetrics.ts` (NEW)
- **Expected effect**: Dashboard shows rate limit hits by user/endpoint
- **Verification**: Dashboard accessible, shows real-time data
- **Estimated time**: 2 hours
---
## Progress
**Overall**: ⚫⚫⚫⚫⚫⚪⚪⚪⚪⚪ 50% (4/8 steps)
- [x] **Step 1**: Add Redis dependency and configuration
- Status: ✅ Complete
- Started: 2025-01-10 14:00 UTC
- Completed: 2025-01-10 14:45 UTC
- Notes: Used ioredis library, connection pool configured
- [x] **Step 2**: Create rate limit configuration
- Status: ✅ Complete
- Started: 2025-01-10 14:50 UTC
- Completed: 2025-01-10 15:15 UTC
- Notes: Added tiered limits (free: 60/min, pro: 300/min, enterprise: 1000/min)
- [x] **Step 3**: Implement token bucket rate limiter
- Status: ✅ Complete
- Started: 2025-01-10 15:20 UTC
- Completed: 2025-01-10 16:10 UTC
- Notes: Used sliding window for accuracy, 12 unit tests passing
- [x] **Step 4**: Add rate limit headers to responses
- Status: ✅ Complete
- Started: 2025-01-10 16:15 UTC
- Completed: 2025-01-10 16:30 UTC
- Notes: Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset
- [ ] **Step 5**: Apply middleware to API routes
- Status: 🟡 In Progress
- Started: 2025-01-10 16:35 UTC
- Completed: —
- Notes: Working on route-specific overrides for admin endpoints
- [ ] **Step 6**: Add error handling for rate limit exceeded
- Status: Pending
- [ ] **Step 7**: Write integration tests
- Status: Pending
- [ ] **Step 8**: Add monitoring dashboard
- Status: Pending
**Velocity**: ~4 steps per 2.5 hours (1.6 steps/hour)
---
## Surprises & Discoveries
**2025-01-10 15:45**: Redis performance exceeds expectations
- **What**: Rate limit checks averaging 0.8ms instead of expected 5ms
- **Why it matters**: Gives us headroom for more sophisticated limiting algorithms
- **Action taken**: Documented for future reference, no plan changes needed
- **Evidence**: Load testing shows p95 latency of 1.2ms for 10k requests
**2025-01-10 16:20**: Some routes need different limits
- **What**: Admin endpoints and webhooks need separate (higher) limits
- **Why it matters**: Current design assumes uniform limits per user
- **Action taken**: Adding route-specific limit overrides in middleware
- **Evidence**: Admin users reported 429s during bulk operations
---
## Decision Log
**2025-01-10 14:30**: Use token bucket algorithm over fixed window
- **Decision**: Implement sliding window token bucket algorithm
- **Rationale**: Better handles bursts, more accurate than fixed windows
- **Alternatives considered**: Fixed window (simpler but less accurate), leaky bucket (harder to implement)
- **Trade-offs**: Slightly more complex implementation, minimal performance difference
- **Decided by**: Engineering team after reviewing rate limiting patterns
**2025-01-10 16:25**: Add route-specific limit overrides
- **Decision**: Allow individual routes to override default rate limits
- **Rationale**: Admin and webhook endpoints need higher limits than user endpoints
- **Alternatives considered**: Separate user tiers (too complex), ignore issue (blocks legitimate use)
- **Trade-offs**: Adds configuration complexity, worth it for flexibility
- **Decided by**: Lead developer after admin user feedback
---
## Outcomes & Retrospective
*(To be completed after implementation)*
### What Worked ✅
- TBD
### What Didn't Work ❌
- TBD
### Measured Impact 📊
- TBD
### Lessons Learned 💡
- TBD
---
## Risks / Open Questions
**Risks:**
- ⚠️ **Redis single point of failure**: If Redis goes down, API becomes unusable
- Mitigation: Implement fallback to in-memory cache with degraded accuracy
- ⚠️ **Clock drift in distributed systems**: Multiple servers might have slightly different time
- Mitigation: Use Redis SCRIPT for atomic operations
**Open Questions:**
- ❓ **Should websockets count against rate limits?**: Currently undefined behavior
- ❓ **How to handle shared API keys**: Multiple users with same key could hit limits quickly
**Success Metrics:**
- [ ] API abuse incidents: 3/month → 0/month
- [ ] P95 latency overhead: <5ms
- [ ] False positive rate: <0.1%
---
## Next Steps / Handoff Notes
**Immediate Next Steps:**
1. Complete route integration (Step 5)
2. Implement error handling (Step 6)
3. Write integration tests (Step 7)
4. Deploy to staging for testing
**Future Enhancements:**
- Implement rate limit exemptions for specific users
- Add GraphQL-specific rate limiting (query complexity)
- Create user-facing rate limit dashboard
**Handoff Information:**
- PR: #458 - Add API rate limiting
- Related Issues: #389 (API abuse report), #401 (performance issues)
- Documentation: Update API docs with rate limit details
- Deployment Notes: Requires Redis cluster, environment variables for limits
---
**Plan Status**: 🟡 Active | Last updated: 2025-01-10 16:35 UTC
````
## Integration with Development Workflow
### Storing Plans
**Recommended structure:**
```
.agents/
├── template/
│   └── PLAN.md          # Master template
├── feature-auth/
│   └── PLAN.md          # Feature-specific plan
├── refactor-api/
│   └── PLAN.md          # Refactoring plan
└── tmp/                 # Quick iterations (gitignored)
    └── PLAN.md
```
### Workflow
1. **Start**: User describes complex task
2. **Plan**: Create ExecPlan using this skill
3. **Execute**: Work through plan, updating progress
4. **Discover**: Log surprises and decisions as they happen
5. **Review**: Complete retrospective at milestones
6. **Complete**: Archive or delete plan after merge
### Tips for Long-Running Sessions
- **Update frequently**: Don't let the plan drift from reality
- **Checkpoint progress**: Update status after completing each step
- **Document blockers**: When stuck, write it in Progress notes
- **Capture insights**: Write down discoveries immediately
- **Reference the plan**: Regularly review what's next
- **Adapt as needed**: Plans can change - document why
## Adaptation Guidelines
### For Solo Projects
- Simplify metadata (owner, handoff notes less critical)
- Focus on Progress and Surprises sections
- Use as personal execution log
### For Team Projects
- Emphasize Decision Log for async communication
- Include detailed Handoff Notes
- Link to PRs and issues extensively
- Update more frequently for visibility
### For Learning Projects
- Expand Surprises & Discoveries section
- Focus heavily on Retrospective
- Document wrong assumptions prominently
### For Production Systems
- Require Risks section completion
- Mandate verification steps for every change
- Include rollback procedures
- Link to monitoring/alerting
This skill transforms complex development work from chaotic improvisation into structured, trackable, and repeatable execution.

938
skills/development/repl-driven-clojure.md Normal file

@@ -0,0 +1,938 @@
---
name: repl-driven-clojure
description: Master REPL-driven development workflow for Clojure 1.12 with live coding, rich comment blocks, state management, and iterative design
---
# REPL-Driven Development with Clojure
When the user requests guidance on Clojure development, REPL workflow, interactive programming, or asks how to structure Clojure code for rapid iteration, use this skill to provide comprehensive REPL-driven development guidance using Clojure 1.12 conventions.
## Core Principles
1. **Live Interaction**: Code with immediate feedback through continuous evaluation
2. **Bottom-Up Development**: Build and test small functions, composing into larger systems
3. **Incremental Growth**: Make small changes, evaluate constantly, refine iteratively
4. **State Awareness**: Manage application state explicitly for reliable REPL sessions
5. **Documentation in Code**: Use rich comment blocks as living documentation
6. **Fast Feedback Loop**: Seconds from idea to working code
## When to Use This Skill
Activate this skill when the user:
- Requests help with Clojure development workflow
- Asks about REPL-driven development or interactive programming
- Wants to structure Clojure code for rapid, REPL-friendly iteration
- Needs guidance on namespace management and reloading
- Asks about state management in long-running REPL sessions
- Wants to combine REPL-driven and test-driven development
- Mentions "rich comment blocks" or "design journals"
- Asks about Clojure 1.12 features or conventions
## REPL-Driven Development Framework
### The REPL Cycle
**Read → Evaluate → Print → Loop**
1. **Read**: Clojure reader processes code and expands macros
2. **Evaluate**: Code compiles to JVM bytecode and executes
3. **Print**: Results display in REPL or application
4. **Loop**: Continue with next expression
### Development Workflow
**Standard Flow** (a minimal sketch follows the list):
1. Start REPL connected to your editor
2. Write function in source file
3. Evaluate expression in namespace context
4. Inspect results, refine implementation
5. Iterate until satisfied
6. Write tests to codify successful experiments
7. Repeat for next function
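As a minimal sketch of steps 2 through 6 (the `myapp.slug` namespace and `slugify` function are illustrative placeholders, not from any existing project):
```clojure
;; src/myapp/slug.clj  (step 2: write the function in a source file)
(ns myapp.slug
  (:require [clojure.string :as str]))

(defn slugify
  "Lower-case a title and join its words with hyphens."
  [title]
  (-> title str/trim str/lower-case (str/replace #"\s+" "-")))

(comment
  ;; Steps 3-5: evaluate in the namespace, inspect results, refine
  (slugify "Hello REPL World")    ; => "hello-repl-world"
  (slugify "  Mixed   Spacing ")  ; => "mixed-spacing"
  :rcf)

;; Step 6: codify the successful experiment as a test, e.g.
;; (deftest slugify-test (is (= "hello-repl-world" (slugify "Hello REPL World"))))
```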
## Clojure 1.12 Features and Conventions
### Qualified Methods (New in 1.12)
Clojure 1.12 adds qualified method syntax for Java interop: static methods as `Classname/staticMethod`, instance methods as `Classname/.instanceMethod`, and constructors as `Classname/new`:
```clojure
;; Instance methods use Classname/.method (note the dot)
(String/.length "hello")       ; => 5
(String/.toUpperCase "hello")  ; => "HELLO"

;; Constructors with Classname/new
(java.util.ArrayList/new)      ; create an ArrayList
(String/new "hello")           ; create a String

;; Works with type hints for performance
(defn process-string [^String s]
  (String/.length s))
```
### Method Values (New in 1.12)
Reference methods as values using qualified method syntax:
```clojure
;; Method values: qualified methods can be passed as functions
(map String/.length ["foo" "hello" "world"])
;; => (3 5 5)

;; Use an anonymous fn to supply extra arguments
(filter #(String/.startsWith % "hel") ["hello" "world" "help"])
;; => ("hello" "help")
```
### Param Tags (New in 1.12)
Use `^[...]` param-tags metadata on a qualified method symbol to select a specific overload:
```clojure
;; Select the long overload of Math/abs explicitly
(^[long] Math/abs -5)
;; => 5

;; Disambiguate overloads when using a method as a value
(map ^[long] String/valueOf [1 2 3])
;; => ("1" "2" "3")
```
### Array Class Syntax (Updated in 1.12)
Clojure 1.12 adds array class symbols of the form `Classname/N`, where N is the number of dimensions:
```clojure
;; OLD: string class names such as ^"[Ljava.lang.String;"
;; NEW: String/1 for String[], int/2 for int[][], etc.
(defn process-array [^String/1 arr]
  (alength arr))

;; Multi-dimensional arrays
(defn matrix [^int/2 grid]
  (aget grid 0 0))
(aget grid 0 0))
```
## Rich Comment Blocks
Rich comment blocks are living documentation that capture interactive exploration:
```clojure
(ns myapp.core
  (:require [clojure.string :as str]))

(defn greet [name]
  (str "Hello, " name "!"))

(comment
  ;; REPL experiments and usage examples
  ;; Evaluate these expressions with your editor

  ;; Basic usage
  (greet "Alice")
  ;; => "Hello, Alice!"

  ;; Edge cases
  (greet "")
  ;; => "Hello, !"
  (greet nil)
  ;; => "Hello, !"  (str silently ignores nil - better to greet "stranger")

  ;; Fixed version exploration
  (defn greet-safe [name]
    (str "Hello, " (or name "stranger") "!"))
  (greet-safe nil)
  ;; => "Hello, stranger!"

  ;; Test with various inputs
  (map greet-safe ["Alice" "Bob" nil ""])
  ;; => ("Hello, Alice!" "Hello, Bob!" "Hello, stranger!" "Hello, stranger!")

  :rcf) ;; Rich Comment Form marker
```
### Best Practices for Comment Blocks
**Structure:**
```clojure
(comment
  ;; Section: System Startup
  (start-system!)
  (reset-system!)

  ;; Section: Data Exploration
  (def sample-data {...})
  (process sample-data)

  ;; Section: Performance Testing
  (time (expensive-operation))

  ;; Section: Debugging Helpers
  (println "Debug:" (capture-state))

  :rcf)
```
## Design Journals
Create separate namespaces to document design decisions:
```clojure
(ns myapp.design-journal
"Living record of design decisions and explorations"
(:require [myapp.core :as core]))
(comment
;; Decision: Data structure for user sessions
;; Date: 2025-01-10
;; Context: Need to track active user sessions with timeout
;; Option 1: Simple map (REJECTED)
;; - Pro: Easy to understand
;; - Con: No automatic cleanup, memory leak risk
(def sessions-v1
{:user-123 {:started-at (System/currentTimeMillis)}})
;; Option 2: core.cache with TTL (CHOSEN)
;; - Pro: Automatic expiration
;; - Pro: Battle-tested library
;; - Con: Additional dependency
(require '[clojure.core.cache :as cache])
(def sessions-v2
(cache/ttl-cache-factory {} :ttl 3600000))
;; Rationale: Automatic cleanup is critical for production
;; Alternative considered: Redis (overkill for this use case)
:rcf)
```
## State Management for Long-Running REPLs
### Using Mount
```clojure
(ns myapp.core
(:require [mount.core :as mount :refer [defstate]]))
;; Define stateful components
(defstate database
:start (connect-db!)
:stop (disconnect-db database))
(defstate http-server
:start (start-server! {:port 3000})
:stop (stop-server! http-server))
(comment
;; REPL workflow with Mount
;; Start all states
(mount/start)
;; Restart specific state after code changes
(mount/stop #'database)
(mount/start #'database)
;; Restart entire system
(mount/stop)
(mount/start)
:rcf)
```
### Using Integrant
```clojure
(ns myapp.system
(:require [integrant.core :as ig]))
;; Define system configuration
(def config
  {:adapter/database {:uri "jdbc:postgresql://localhost/mydb"}
   :handler/api      {:db (ig/ref :adapter/database)}
   :server/http      {:port    3000
                      :handler (ig/ref :handler/api)}})
;; Define component lifecycle
(defmethod ig/init-key :adapter/database [_ {:keys [uri]}]
(connect-db uri))
(defmethod ig/halt-key! :adapter/database [_ db]
(disconnect-db db))
(comment
;; REPL workflow with Integrant
;; Initialize system
(def system (ig/init config))
;; Access components
(:adapter/database system)
;; Suspend, then resume after reloading code or changing config
(ig/suspend! system)
(def system (ig/resume config system))
;; Complete restart
(ig/halt! system)
(def system (ig/init config))
:rcf)
```
### Using Component
```clojure
(ns myapp.system
(:require [com.stuartsierra.component :as component]))
(defrecord Database [uri connection]
  component/Lifecycle
  (start [this]
    (println "Starting database")
    (assoc this :connection (connect-db uri)))
  (stop [this]
    (println "Stopping database")
    (disconnect-db connection)
    (assoc this :connection nil)))

(defn new-database [uri]
  (map->Database {:uri uri}))
(comment
;; REPL workflow with Component
(def db (new-database "jdbc:postgresql://localhost/mydb"))
(def db (component/start db))
;; Use the component
(:connection db)
;; Stop when done
(def db (component/stop db))
:rcf)
```
## Namespace Management
### Reloading Namespaces
```clojure
(ns user
(:require [clojure.tools.namespace.repl :refer [refresh refresh-all]]))
(comment
;; Reload changed namespaces
(refresh)
;; Reload all namespaces (full reset)
(refresh-all)
;; Clear namespace before reload
(require '[myapp.core :as core] :reload)
;; Force reload dependencies
(require '[myapp.core :as core] :reload-all)
:rcf)
```
### User Namespace Setup
Create `dev/user.clj` for REPL utilities:
```clojure
(ns user
  "REPL utilities and system management"
  (:require [clojure.tools.namespace.repl :as repl]
            [mount.core :as mount]
            [myapp.core :as core]))

(defn start
  "Start the system"
  []
  (mount/start))

(defn stop
  "Stop the system"
  []
  (mount/stop))

(defn reset
  "Stop, reload code, restart"
  []
  (stop)
  (repl/refresh :after 'user/start))

(comment
  ;; Quick system operations in REPL
  (start)
  (stop)
  (reset)
  :rcf)
```
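For the REPL to pick up `dev/user.clj`, the `dev` directory has to be on the classpath. Assuming a tools.deps project (the `:dev` alias name is a common convention, not a requirement), one way is an alias in `deps.edn`:
```clojure
;; deps.edn - the :dev alias name is an assumption
{:aliases
 {:dev {:extra-paths ["dev"]}}}
```
Started with `clj -M:dev`, the REPL loads the `user` namespace automatically.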
## Combining REPL-Driven and Test-Driven Development
### Workflow Integration
```clojure
(ns myapp.core-test
(:require [clojure.test :refer [deftest is testing]]
[myapp.core :as core]))
;; Start with REPL exploration
(comment
;; Experiment with function behavior
(core/parse-email "user@example.com")
;; => {:local "user" :domain "example.com"}
(core/parse-email "invalid")
;; => nil (or should it throw?)
;; Try different approaches
(defn parse-email-v2 [s]
(when-let [[_ local domain] (re-matches #"(.+)@(.+)" s)]
{:local local :domain domain}))
(parse-email-v2 "user@example.com")
;; => {:local "user" :domain "example.com"}
(parse-email-v2 "invalid")
;; => nil (good!)
:rcf)
;; Codify successful experiments as tests
(deftest parse-email-test
  (testing "valid email"
    (is (= {:local "user" :domain "example.com"}
           (core/parse-email "user@example.com"))))
  (testing "invalid email"
    (is (nil? (core/parse-email "invalid"))))
  (testing "edge cases"
    (is (nil? (core/parse-email "")))
    (is (nil? (core/parse-email nil)))))
```
### Rich Comment Form (RCF) Testing
Use RCF-style tests for rapid feedback:
```clojure
(ns myapp.core
(:require [hyperfiddle.rcf :refer [tests]]))
(defn add [a b]
(+ a b))
(tests
"basic addition"
(add 2 3) := 5
(add 0 0) := 0
(add -1 1) := 0
"works with different number types"
(add 1.5 2.5) := 4.0
(add 1/2 1/2) := 1)
;; Once enabled with (hyperfiddle.rcf/enable!), tests run when the namespace loads in the REPL
;; Fast feedback without leaving your code
```
## Data Inspection and Visualization
### Portal Integration
```clojure
(ns user
(:require [portal.api :as portal]))
(def p (portal/open))
(comment
;; Send data to Portal for inspection
(portal/submit {:users [{:name "Alice" :age 30}
{:name "Bob" :age 25}]})
;; Tap values automatically
(add-tap portal/submit)
(tap> {:event "user-login" :user-id 123})
;; Clear portal
(portal/clear)
;; Close portal
(portal/close)
:rcf)
```
### CIDER Inspector
```clojure
(comment
  ;; Inspect complex data structures with the CIDER inspector
  ;; (provided by the cider-nrepl middleware; no extra require needed)
  (def complex-data
    {:users    [{:id 1 :name "Alice" :orders [...]}
                {:id 2 :name "Bob" :orders [...]}]
     :metadata {...}})

  ;; Evaluate with the inspector (C-c M-i in CIDER/Emacs)
  complex-data
:rcf)
```
## Performance and Profiling in REPL
### Basic Timing
```clojure
(comment
;; Quick timing
(time (expensive-operation))
;; "Elapsed time: 1234.56 msecs"
;; Multiple runs for average
(dotimes [_ 5]
(time (expensive-operation)))
:rcf)
```
### Criterium for Accurate Benchmarking
```clojure
(ns myapp.perf
(:require [criterium.core :as crit]))
(comment
;; Accurate benchmarking with JVM warmup
(crit/quick-bench
(reduce + (range 1000)))
;; Detailed benchmark
(crit/bench
(my-function args))
;; Compare implementations
(crit/quick-bench (map inc (range 1000))) ; lazy
(crit/quick-bench (mapv inc (range 1000))) ; eager
(crit/quick-bench (into [] (map inc) (range 1000))) ; transducer
:rcf)
```
## Debugging Techniques
### Tap and Inspect
```clojure
(defn complex-function [data]
  (let [step1 (process-step-1 data)
        _     (tap> {:stage :step1 :result step1}) ; Debug point
        step2 (process-step-2 step1)
        _     (tap> {:stage :step2 :result step2}) ; Debug point
        step3 (process-step-3 step2)]
    step3))
(comment
;; Set up tap handler
(add-tap println)
;; or
(add-tap portal/submit)
;; Run function and inspect tapped values
(complex-function test-data)
;; Remove tap when done
(remove-tap println)
:rcf)
```
### Scope Capture
```clojure
(ns myapp.debug
(:require [sc.api :as sc]))
(defn buggy-function [x y]
  (let [a (+ x y)
        b (* a 2)
        c (sc/spy (/ b (- y x)))] ; Capture scope here
    (+ a b c)))

(comment
  ;; When the exception occurs, inspect the captured scope
  (buggy-function 5 5) ; Division by zero!

  ;; Define Vars from the locals captured at execution point 1
  (sc/defsc 1)

  ;; Inspect the captured locals
  a ; => 10
  b ; => 20
  x ; => 5
  y ; => 5
:rcf)
```
## REPL-Driven Development Patterns
### Exploratory Development Pattern
```clojure
(comment
;; 1. Start with data
(def sample-users
[{:id 1 :name "Alice" :email "alice@example.com"}
{:id 2 :name "Bob" :email "bob@example.com"}])
;; 2. Explore transformations
(map :name sample-users)
;; => ("Alice" "Bob")
(group-by :id sample-users)
;; => {1 [{:id 1 ...}], 2 [{:id 2 ...}]}
;; 3. Build helper functions
(defn by-id [users]
(reduce (fn [acc user]
(assoc acc (:id user) user))
{}
users))
(by-id sample-users)
;; => {1 {:id 1 ...}, 2 {:id 2 ...}}
;; 4. Refine based on REPL feedback
(defn index-by [key-fn coll]
(into {} (map (juxt key-fn identity)) coll))
(index-by :id sample-users)
;; => {1 {:id 1 ...}, 2 {:id 2 ...}}
;; 5. Extract to source file
;; 6. Write tests
;; 7. Move on to next function
:rcf)
```
### Bottom-Up Composition Pattern
```clojure
;; Start with smallest pieces
(defn parse-int [s]
(Integer/parseInt s))
(comment
(parse-int "42") ; => 42
(parse-int "abc") ; => NumberFormatException
:rcf)
;; Add error handling
(defn parse-int-safe [s]
(try
(Integer/parseInt s)
(catch NumberFormatException _
nil)))
(comment
(parse-int-safe "42") ; => 42
(parse-int-safe "abc") ; => nil
:rcf)
;; Compose into larger function
(defn sum-string-numbers [strings]
(->> strings
(keep parse-int-safe)
(reduce +)))
(comment
(sum-string-numbers ["1" "2" "3"]) ; => 6
(sum-string-numbers ["1" "abc" "3"]) ; => 4
:rcf)
```
### Refactoring with REPL Safety Net
```clojure
(comment
  ;; Original implementation
  (defn process-order-v1 [order]
    (let [total    (reduce + (map :price (:items order)))
          discount (if (:premium? (:user order))
                     (* total 0.1)
                     0)
          final    (- total discount)]
      {:order-id (:id order)
       :total    final}))

  ;; Test current behavior before refactoring
  (def test-order
    {:id    123
     :user  {:premium? true}
     :items [{:price 100} {:price 50}]})

  (process-order-v1 test-order)
  ;; => {:order-id 123, :total 135.0}

  ;; Refactor: extract functions
  (defn calculate-total [items]
    (reduce + (map :price items)))

  (defn calculate-discount [total premium?]
    (if premium? (* total 0.1) 0))

  (defn process-order-v2 [order]
    (let [total    (calculate-total (:items order))
          discount (calculate-discount total (get-in order [:user :premium?]))
          final    (- total discount)]
      {:order-id (:id order)
       :total    final}))

  ;; Verify behavior unchanged
  (= (process-order-v1 test-order)
     (process-order-v2 test-order))
  ;; => true ✓
  ;; Safe to replace!
:rcf)
```
## Editor Integration Best Practices
### Key Bindings (CIDER/Emacs)
- `C-c C-k` - Load/evaluate current buffer
- `C-c C-c` - Evaluate defn at point
- `C-M-x` - Evaluate top-level form
- `C-c M-n M-n` - Switch REPL namespace to current file
- `C-c C-v C-f` - Show function definition
- `C-c C-d C-d` - Show documentation
### Key Bindings (Calva/VS Code)
- `Ctrl+Alt+C Enter` - Load current file
- `Ctrl+Enter` - Evaluate current form
- `Alt+Enter` - Evaluate form and replace with result
- `Ctrl+Alt+C Space` - Evaluate selected text
- `Ctrl+Alt+C C` - Evaluate to comment
### Key Bindings (Cursive/IntelliJ)
- `Ctrl+Shift+L` - Load file in REPL
- `Ctrl+Shift+P` - Send form to REPL
- `Alt+Shift+P` - Send top-level form to REPL
- `Ctrl+Shift+M` - Run tests in namespace
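All of these assume a running nREPL server; the editors' jack-in commands usually start one for you. To start one yourself from the command line, a `deps.edn` alias like the following works (the alias name and versions are assumptions, so check for current releases):
```clojure
;; ~/.clojure/deps.edn or project deps.edn - alias name and versions are assumptions
{:aliases
 {:nrepl
  {:extra-deps {nrepl/nrepl       {:mvn/version "1.1.1"}
                cider/cider-nrepl {:mvn/version "0.47.1"}}
   :main-opts  ["-m" "nrepl.cmdline"
                "--middleware" "[cider.nrepl/cider-middleware]"]}}}
```
Run `clj -M:nrepl` and connect from the editor (CIDER's `cider-connect`, Calva's connect command, or Cursive's remote REPL).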
## Common Pitfalls and Solutions
### Pitfall 1: Stale Namespace State
**Problem**: Old definitions linger after code changes
```clojure
(comment
;; Define function
(defn old-function [x] (* x 2))
;; Later, rename to new-function in source
;; But old-function still exists in REPL!
(old-function 5) ; => Still works! Bug risk!
;; Solution: Reload namespace
(require '[myapp.core :as core] :reload)
;; or use refresh
(require '[clojure.tools.namespace.repl :refer [refresh]])
(refresh)
:rcf)
```
### Pitfall 2: Circular Dependencies
**Problem**: Namespace reload fails due to circular deps
**Solution**: Restructure namespaces or extract a shared protocol (see the sketch after the example below)
```clojure
;; Bad: Circular dependency
;; user.clj requires admin.clj
;; admin.clj requires user.clj
;; Good: Extract shared protocol
;; protocol.clj - defines interfaces
;; user.clj - requires protocol.clj
;; admin.clj - requires protocol.clj
```
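A sketch of that extraction, with illustrative namespace, protocol, and function names:
```clojure
;; myapp/protocol.clj - the shared interface, no app-specific dependencies
(ns myapp.protocol)

(defprotocol Notifier
  (notify [this user message] "Deliver a message to a user."))

;; myapp/user.clj (admin.clj does the same) now depends only on the protocol
(ns myapp.user
  (:require [myapp.protocol :as p]))

(defn welcome [notifier user]
  (p/notify notifier user "Welcome aboard!"))
```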
### Pitfall 3: Side Effects in Top-Level
**Problem**: Code executes on namespace load
```clojure
;; Bad: runs on every namespace load
(def db-connection (connect-to-db!))

;; Good: defer until explicitly called
(defonce db-conn-atom (atom nil))

(defn get-db-connection []
  (or @db-conn-atom (reset! db-conn-atom (connect-to-db!))))

;; Or use mount/integrant/component
(defstate db-connection
  :start (connect-to-db!)
  :stop (disconnect-db! db-connection))
```
### Pitfall 4: Lost REPL History
**Solution**: Use `.clojure/repl-history` or editor features
```clojure
;; Add to ~/.clojure/deps.edn
{:aliases
{:repl
{:extra-deps {reply/reply {:mvn/version "0.5.1"}}
:main-opts ["-m" "reply.main"]}}}
```
## Complete Example: REPL-Driven Feature Development
**Goal**: Build a user authentication system
```clojure
(ns myapp.auth
(:require [buddy.hashers :as hashers]
[clojure.spec.alpha :as s]))
;; 1. Define specs for data validation
(s/def ::email (s/and string? #(re-matches #".+@.+\..+" %)))
(s/def ::password (s/and string? #(>= (count %) 8)))
(s/def ::user (s/keys :req-un [::email ::password]))
(comment
;; Test specs interactively
(s/valid? ::email "user@example.com") ; => true
(s/valid? ::email "invalid") ; => false
(s/explain ::user {:email "test@test.com" :password "short"})
:rcf)
;; 2. Build hash-password function
(defn hash-password [password]
(hashers/derive password))
(comment
(def hashed (hash-password "mysecret123"))
hashed
;; => "bcrypt+sha512$4i9sd..."
;; Test verification
(hashers/check "mysecret123" hashed) ; => true
(hashers/check "wrongpass" hashed) ; => false
:rcf)
;; 3. Create user registration
(declare save-user!) ; storage is environment-specific; a REPL mock is defined below
(defn register-user [db user-data]
  (when (s/valid? ::user user-data)
    (let [hashed-pwd (hash-password (:password user-data))
          user       (assoc user-data :password hashed-pwd)]
      (save-user! db user))))

(comment
  ;; Mock database for testing
  (def mock-db (atom {}))
  (defn save-user! [db user]
    (swap! db assoc (:email user) user))

  ;; Test registration flow
  (register-user mock-db
                 {:email    "alice@example.com"
                  :password "secure123"})
  @mock-db
  ;; => {"alice@example.com" {:email "..." :password "bcrypt+..."}}
  :rcf)
;; 4. Build authentication function
(defn authenticate [db email password]
  (when-let [user (get @db email)]
    (when (hashers/check password (:password user))
      (dissoc user :password))))
(comment
;; Test authentication
(authenticate mock-db "alice@example.com" "secure123")
;; => {:email "alice@example.com"}
(authenticate mock-db "alice@example.com" "wrongpass")
;; => nil
(authenticate mock-db "nobody@example.com" "anypass")
;; => nil
:rcf)
;; 5. Now write formal tests
(ns myapp.auth-test
  (:require [clojure.test :refer [deftest is testing]]
            [myapp.auth :as auth]))

(deftest authentication-test
  (let [db (atom {})]
    ;; assumes save-user! is bound here (real implementation or with-redefs)
    (testing "user registration"
      (auth/register-user db {:email    "test@example.com"
                              :password "secure123"})
      (is (contains? @db "test@example.com")))
    (testing "successful authentication"
      (is (= {:email "test@example.com"}
             (auth/authenticate db "test@example.com" "secure123"))))
    (testing "failed authentication"
      (is (nil? (auth/authenticate db "test@example.com" "wrongpass"))))))
```
## Best Practices Summary
### DO:
- **Start REPL first**: Launch before writing code
- **Evaluate continuously**: Test every function immediately
- **Use rich comments**: Document explorations inline
- **Build bottom-up**: Small functions → composition
- **Manage state**: Use mount/integrant/component
- **Inspect data**: Use Portal, tap>, CIDER inspector
- **Write tests after**: Codify successful REPL experiments
- **Reload carefully**: Use refresh, avoid stale state
- **Use 1.12 features**: Method values, qualified methods
- **Keep REPL running**: Long sessions with state management
### DON'T:
- **Write without REPL**: Don't code in a vacuum
- **Batch evaluation**: Don't wait to test everything at once
- **Side effects at top level**: Avoid execution on namespace load
- **Ignore state**: Manage component lifecycle explicitly
- **Lose experiments**: Capture them in rich comments or design journals
- **Skip tests**: REPL-driven ≠ test-less
- **Circular deps**: Structure namespaces carefully
- **Complex expressions**: Keep evaluation units small
- **Manual reloads**: Automate with tools.namespace's refresh and editor commands
- **Fear the REPL**: It's your friend, not an adversary
This skill enables the highly productive, interactive development style that makes Clojure unique and powerful.