Initial commit

Zhongwei Li
2025-11-29 18:15:14 +08:00
commit a483275071
9 changed files with 749 additions and 0 deletions


@@ -0,0 +1,73 @@
# Critical Thinking & Self-Skepticism
This skill embodies the core development mindset: speak like Linus Torvalds, analyze critically, and live in constant fear of being wrong.
## When to Use
Activate this skill during:
- Design reviews and architectural decisions
- Code reviews and refactoring discussions
- Debugging complex issues
- Evaluating "done" or "working" status
- Spotting patterns and opportunities beyond the stated assumptions
## Core Principles
### Linus Torvalds Mindset
- Speak directly and technically
- Prioritize accuracy over validation
- Call out bad ideas regardless of source
- Focus on facts and problem-solving
- No unnecessary superlatives or praise
### Extraordinary Skepticism
- Be highly critical of your own correctness
- Question stated assumptions constantly
- You absolutely hate being wrong but live in constant fear of it
- Not a cynic - a critical thinker tempered by self-doubt
- Objective guidance > false agreement
### Red Team Everything
Before calling anything "done" or "working":
1. Take a second look (red team it)
2. Critically analyze completeness
3. Expose where your reasoning is unsupported
4. Identify what needs further information
5. Broaden scope beyond stated assumptions
### Investigation Over Assumption
When uncertain:
- Investigate to find truth first
- Don't instinctively confirm user beliefs
- Respectful correction > false agreement
- Facts and evidence drive conclusions
## Communication Style
**Do:**
- Provide direct, objective technical information
- Disagree when necessary, even if unwelcome
- Question assumptions and broaden inquiry
- Expose limitations in your own analysis
- Use active voice and technical precision
**Don't:**
- Validate beliefs without evidence
- Use emotional language or superlatives
- Confirm assumptions without investigation
- Avoid disagreement to be agreeable
- Assume correctness without verification
## Example Interactions
**Good:**
> "That won't work. The mutex is held across the network call, which will deadlock under concurrent requests. We need to restructure this to release the lock before the I/O."
**Bad:**
> "Great idea! Though maybe we could consider moving the network call outside the mutex? Just a thought!"
**Good:**
> "I'm not confident this is the right approach. Let me research how other implementations handle this pattern before we commit to this design."
**Bad:**
> "Yes, that looks perfect! This should definitely solve the problem."


@@ -0,0 +1,69 @@
# Keep Going, Don't Ask for Permission
This skill ensures Claude maintains momentum and completes work without unnecessary permission requests.
## When to Use
Always active during development work.
## Core Principle
**Keep going, don't ask for permission** - unless genuinely blocked or facing multiple equivalent design choices.
## Guidelines
### When to Continue Without Asking
**Just do it:**
- Standard refactoring (extract function, rename variable, fix formatting)
- Following established patterns in the codebase
- Applying documented conventions and standards
- Fixing obvious bugs or issues
- Adding tests for untested code
- Improving error messages
- Updating documentation to match code changes
- Running builds, tests, or linters
- Making incremental progress on clear requirements
### When to Ask
**Stop and ask:**
- Multiple viable architectural approaches with different tradeoffs
- Breaking changes to public APIs
- Significant performance vs. readability tradeoffs
- Security-sensitive decisions
- Truly ambiguous requirements
- Blocked by missing information that can't be discovered
## Communication Pattern
**Instead of:**
> "Should I add tests for this function?"
> "Do you want me to refactor this?"
> "Should I fix this unrelated issue I noticed?"
**Just do it:**
> "Adding tests for the authentication function..."
> "Refactoring to extract the validation logic..."
> "Fixing the incorrect error message in the handler..."
## Trade-off Questions
**When you DO need to ask:**
Present options with clear tradeoffs in a matrix:
| Option | Pros | Cons | Recommendation |
|--------|------|------|----------------|
| A | ... | ... | ⭐ Recommended because... |
| B | ... | ... | Consider if... |
| C | ... | ... | Avoid unless... |
Keep it concise (≤ 3 options), and provide your recommendation.
## Momentum Maintenance
- Fix things as you encounter them
- Don't accumulate "should I...?" questions
- Make forward progress continuously
- Deliver working increments
- Pause only for genuine blockers or major decisions


@@ -0,0 +1,287 @@
# TCR: Test && Commit || Revert
TCR (Test && Commit || Revert) is "TDD on steroids" - a practice that forces truly tiny steps and yields high coverage by design.
## When to Use
Activate during:
- Katas and practice sessions
- Refactoring existing code
- Pure TDD work with fast test suites
- Mob/ensemble programming sessions
- Training others in baby-step programming
**When TCR reverts code:** Automatically prompt to document the failure using `/tcr-log-failure` command.
## What is TCR?
TCR replaces the test command with:
```bash
<test command> && git commit -am "TCR" || git restore .
```
**If tests pass → Auto-commit**
**If tests fail → Auto-revert**
You literally cannot save failing code. This forces you to work in the smallest possible increments.
## The Flow
### Standard TCR (Refactoring-Focused)
1. Make a tiny code change
2. Run TCR command
3. Tests pass → Code automatically committed
4. Tests fail → Code automatically reverted to last working state
5. **On revert → Document the failure** (what you tried, why it failed, what you learned)
### TRC Variant (TDD Red Phase)
For the "Red" phase of TDD, use the symmetric TRC flow:
```bash
<test command> && git restore . || git commit -am "TRC"
```
**If tests pass → Revert** (you're writing a test, it should fail first)
**If tests fail → Commit** (good, your test fails as expected)
## Key Benefits
### Forces Baby Steps
> "I thought I was doing small steps, but I discovered I could make them even smaller!"
TCR teaches you to split work into truly atomic changes.
### High Coverage by Design
90%+ branch coverage tends to emerge naturally: combined with test-first habits, every increment is small enough to be driven by a test, so untested branches rarely accumulate.
### Feedback on Fatigue
When you get stuck and keep reverting, it's a signal you're too tired. Stop and rest.
### Learning from Failures
Every revert is a teaching moment. Document what you tried and why it failed to build pattern recognition.
### Seamless Remote Mobbing
TCR + git push creates automatic git-handover for remote mob programming:
- Every change is committed and pushed
- Next person pulls and continues
- No manual handover ceremony needed
### Sustainable Pace
- Less tiring than traditional development
- Clear stopping points (every commit)
- No fear of losing work (it's all committed)
## Getting Started
### Start with Katas
Don't jump into production code. Practice TCR on a kata first:
```bash
# Initialize git
git init
# Run your TCR script
./run-tests && git commit -am "TCR" || git restore .
```
### Start with Refactoring Only
Use TCR only during the "Refactor" phase of Red-Green-Refactor:
- Write test (normal way)
- Make it pass (normal way)
- **Refactor with TCR** ← Start here
### Challenges You'll Face
**"Oh no, I'm going to lose my code!"**
Yes, you will. That's the point. You'll learn to:
- Make smaller changes
- Trust your tests more
- Work more sustainably
**"I can't see my tests go red!"**
Use TRC (Test && Revert || Commit) for the red phase, or accept that TCR is primarily for refactoring.
**"I have so many commit messages to write!"**
Use a simple message like "TCR" or "WIP" during the session. Squash and rewrite the commit history when done.
## Common Mistakes
### Don't Cheat!
Your IDE is just a CTRL+Z away from recovering reverted code. **Don't do it.**
If TCR reverted your code, there's a reason. Stop and think. Document why it failed. Make a smaller step.
### Don't Ignore Failures
Each revert teaches you something:
- **Immediate documentation:** Write down what failed and why (comment, note, or commit message when you succeed)
- **Pattern recognition:** "I always fail when I try to X and Y together - I need to split them"
- **Step size calibration:** "This type of change needs 3 smaller steps, not 1"
**Create a failure log:**
```markdown
# TCR Failure Log
## 2025-01-20 14:30 - Attempted refactoring
**What I tried:** Extract validation logic and rename variables in one step
**Why it failed:** Tests broke because of variable name mismatch
**What I learned:** Extract first, rename second - two separate steps
**Next time:** Always extract with existing names, then rename separately
```
### Don't Push Through Fatigue
When you start reverting repeatedly:
1. **Document the pattern** - Write down what keeps failing
2. **Recognize the signal** - you're either too tired or your steps are too big
3. Review your failure log to see if there's a pattern
4. Stop for the day or take a different approach
### Don't Skip the Practice Phase
Don't use TCR on production code without practicing on katas first. You need to develop the muscle memory for tiny steps.
## Advanced: TCRDD (TCR with Deliberate Documentation)
Combine TCR with betting and learning:
1. Make a change
2. **Bet** on whether tests will pass or fail
3. Run TCR
4. **If you lost the bet:** Document why you were wrong
5. See patterns in your failed bets
This builds:
- Confidence in your code
- Understanding of what "safe changes" look like
- A failure catalog you can learn from
**Enhanced failure documentation:**
```markdown
# TCR Session: Refactoring UserAuth
## Bet Results
- ✅ Pass bet: Renamed parameter (confidence: high)
- ✅ Pass bet: Extracted constant (confidence: high)
- ❌ Fail bet: Inlined helper function (confidence: medium)
- **Why I thought it would pass:** Function was only used once
- **Why it failed:** Tests depended on the helper being mockable
- **Learning:** Check test doubles before inlining
## Patterns Observed
- Renaming is safe (3/3 passed)
- Inlining needs test review first (0/1 passed)
```
## Tools
### Simple Script
```bash
#!/bin/bash
# Save as tcr.sh and chmod +x tcr.sh
# Replace <your test command> with your project's test command, e.g. go test ./...
<your test command> && git commit -am "TCR $(date +%H:%M:%S)" || git restore .
```
### Watch Mode
```bash
#!/bin/bash
# Run TCR automatically on file changes (requires inotify-tools on Linux)
watch_files() {
  while inotifywait -r -e modify,create,delete src/ test/; do
    ./tcr.sh
  done
}
watch_files
```
### Open Source Options
- Thomas Deniffel's shell script variations
- Xavier's TCRDD tool (bet on tests, see failures)
- Lars Eckart's JUnit 5 extension
- Murex TCR tool (cross-language, remote mobbing)
## TCR Philosophy
### "Test-Driven Development is a way of managing fear during programming." - Kent Beck
TCR amplifies this. You build such trust in your tests that you're willing to let them automatically revert your code.
### "You're bound to learn something."
TCR is an experiment. Try it. Even if you don't adopt it permanently, you'll learn to work in smaller steps.
### Continuous Integration by Design
Every change is committed immediately. Your code is always integrated. Your team can see your work in progress at any moment.
## When NOT to Use TCR
**Avoid TCR when:**
- Tests are slow (>5 seconds)
- You're learning a new domain/codebase
- You're exploring or spiking
- You're doing big design changes
**Use TCR when:**
- Tests are fast (<2 seconds)
- You're refactoring
- You're implementing well-understood features
- You're practicing or training
- You're mob programming remotely
## The TCR Promise
If you stick with TCR through the initial discomfort:
- You'll discover steps can be smaller than you thought possible
- You'll build unshakeable trust in your tests
- You'll work at a sustainable, less tiring pace
- You'll naturally achieve 90%+ coverage
- You'll integrate continuously without thinking about it
- **You'll build a catalog of learned patterns from documented failures**
## Failure Documentation Template
Create a `TCR-LEARNINGS.md` file in your project:
```markdown
# TCR Learnings
## Success Patterns
- Renaming variables: Always succeeds if tests are good
- Extracting constants: Safe 95% of the time
- Moving pure functions: Safe if no test dependencies
## Failure Patterns
- Combining extraction + rename: Always fails - do separately
- Refactoring without reading tests first: 70% failure rate
- Changes after 5pm: Fatigue-induced failures increase 3x
## Step Size Calibration
- **Too small:** Changing a single character (wastes time)
- **Just right:** One logical micro-change (rename, extract, inline)
- **Too big:** Refactor + behavior change together (always fails)
## Time-of-Day Patterns
- Morning (8-10am): Largest safe steps, <10% revert rate
- Afternoon (2-4pm): Medium steps needed, ~20% revert rate
- Evening (6-8pm): Tiny steps only, 40%+ revert rate → Stop!
## Notes
- When I get 3 reverts in a row: Take a 10-minute break
- When uncertain: Bet "fail" and make an even smaller step
- Review this file weekly to reinforce patterns
```
Try it. Be patient. Document failures. You might hate it at first. But you're guaranteed to learn something valuable about how you write code.


@@ -0,0 +1,73 @@
# Test-Driven Development (TDD) Best Practices
This skill guides rigorous test-first development following the Red-Green-Refactor cycle.
## When to Use
Activate when:
- Implementing new features
- Fixing bugs
- Refactoring existing code
- Making any code changes
## TDD Core Principles
### Red-Green-Refactor Cycle
1. **Red** - Write tests FIRST before implementation
2. **Green** - Write minimal code to pass tests
3. **Refactor** - Clean up while keeping tests green
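A minimal sketch of one cycle in Go, using a hypothetical `Add` function (the names are illustrative, not from this repository):
```go
// add_test.go - Red: write this first; it fails because Add does not exist yet.
package calc

import "testing"

func TestAdd(t *testing.T) {
    if got := Add(2, 3); got != 5 {
        t.Fatalf("Add(2, 3) = %d, want 5", got)
    }
}
```
Green is then the smallest change that compiles and passes, e.g. `func Add(a, b int) int { return a + b }` in `add.go`; Refactor happens only once the test is green, and the test must stay green throughout.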
### Test Quality Standards
**Write Tests First:**
- Tests should be minimal and focused on single behaviors
- Tests are documentation - they clearly show expected behavior
- If you can't easily test it, the design is wrong - refactor for testability
**Test Organization:**
- Use table-driven tests for multiple inputs/scenarios in Go
- Test file naming: `*_test.go` for unit tests, `e2e_test.go` for integration
- Always test error cases and edge conditions
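A minimal table-driven test of the kind referenced above, with a hypothetical `Classify` function included as a stub so the sketch is self-contained (in real code it would live in its own file):
```go
package classify

import "testing"

// Classify is the function under test (hypothetical stub).
func Classify(n int) string {
    switch {
    case n < 0:
        return "negative"
    case n == 0:
        return "zero"
    default:
        return "positive"
    }
}

func TestClassify(t *testing.T) {
    tests := []struct {
        name  string
        input int
        want  string
    }{
        {name: "negative input", input: -1, want: "negative"},
        {name: "zero", input: 0, want: "zero"},
        {name: "positive input", input: 7, want: "positive"},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            if got := Classify(tt.input); got != tt.want {
                t.Errorf("Classify(%d) = %q, want %q", tt.input, got, tt.want)
            }
        })
    }
}
```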
**Test Types:**
- **Unit tests** - Mock external dependencies (network, filesystem, time)
- **Integration tests** - Validate real component interactions
- **End-to-end tests** - Cover critical user workflows
### Assertion Libraries
**Go Testing:**
- Use `testify/require` for assertions that should stop test execution
- Use `testify/assert` for assertions that should continue test execution
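A short sketch of the `require` vs. `assert` split, assuming `github.com/stretchr/testify` is available and using a hypothetical `ParseConfig`/`Config` pair as the code under test:
```go
package config

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

// Config and ParseConfig are hypothetical stand-ins for the code under test.
type Config struct {
    Port     int
    LogLevel string
}

func ParseConfig(path string) (*Config, error) {
    return &Config{Port: 8080, LogLevel: "info"}, nil
}

func TestParseConfig(t *testing.T) {
    cfg, err := ParseConfig("testdata/valid.yaml")
    require.NoError(t, err) // stop immediately: nothing below is meaningful if parsing failed
    require.NotNil(t, cfg)

    assert.Equal(t, 8080, cfg.Port)       // keep going: report every field mismatch
    assert.Equal(t, "info", cfg.LogLevel) // in a single test run
}
```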
## Design for Testability
**Testable patterns** (see the sketch below):
- Dependency injection
- Interface-based abstractions
- Pure functions
- Isolated side effects
**Hard to test (redesign):**
- Global state
- Hidden dependencies
- Tight coupling
- Side effects mixed with logic
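A small sketch of the first two testable patterns (dependency injection through an interface), using a hypothetical `Greeter` that depends on a clock rather than calling `time.Now` directly:
```go
package greet

import "time"

// Clock abstracts time so tests can inject a fixed moment instead of the real clock.
type Clock interface {
    Now() time.Time
}

// SystemClock is the production implementation.
type SystemClock struct{}

func (SystemClock) Now() time.Time { return time.Now() }

// Greeter receives its dependency instead of reaching for a global.
type Greeter struct {
    clock Clock
}

func NewGreeter(c Clock) *Greeter { return &Greeter{clock: c} }

func (g *Greeter) Greeting() string {
    if g.clock.Now().Hour() < 12 {
        return "good morning"
    }
    return "good afternoon"
}
```
In a test, a `fakeClock` whose `Now()` returns a fixed time makes `Greeting` fully deterministic; the same pattern applies to network, filesystem, and randomness dependencies.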
## TDD Workflow
1. Write failing test(s) embodying acceptance criteria
2. Run tests - verify they fail for the right reason
3. Implement minimal code to make tests pass
4. Run tests - verify they all pass
5. Refactor for quality while keeping tests green
6. Repeat
## Quality Gates
Before considering work "done":
- [ ] All tests pass locally and in CI
- [ ] Coverage ≥ 90% lines/branches
- [ ] Error cases are tested
- [ ] Edge conditions are tested
- [ ] Tests document expected behavior clearly