Initial commit

Zhongwei Li
2025-11-30 09:03:21 +08:00
commit bacee10b89
6 changed files with 619 additions and 0 deletions


@@ -0,0 +1,201 @@
---
name: root-cause-tracing
description: Use when errors occur deep in execution and you need to trace back to find the original trigger - systematically traces bugs backward through call stack, adding instrumentation when needed, to identify source of invalid data or incorrect behavior
---
# Root Cause Tracing
## Overview
Bugs often manifest deep in the call stack (git init in wrong directory, file created in wrong location, database opened with wrong path). Your instinct is to fix where the error appears, but that's treating a symptom.
**Core principle:** Trace backward through the call chain until you find the original trigger, then fix at the source.
## When to Use
```dot
digraph when_to_use {
"Bug appears deep in stack?" [shape=diamond];
"Can trace backwards?" [shape=diamond];
"Fix at symptom point" [shape=box];
"Trace to original trigger" [shape=box];
"BETTER: Also add defense-in-depth" [shape=box];
"Bug appears deep in stack?" -> "Can trace backwards?" [label="yes"];
"Can trace backwards?" -> "Trace to original trigger" [label="yes"];
"Can trace backwards?" -> "Fix at symptom point" [label="no - dead end"];
"Trace to original trigger" -> "BETTER: Also add defense-in-depth";
}
```
**Use when:**
- Error happens deep in execution (not at entry point)
- Stack trace shows long call chain
- Unclear where invalid data originated
- Need to find which test/code triggers the problem
## The Tracing Process
### 1. Observe the Symptom
```
Error: git init failed in /Users/jesse/project/packages/core
```
### 2. Find Immediate Cause
**What code directly causes this?**
```typescript
await execFileAsync("git", ["init"], { cwd: projectDir });
```
### 3. Ask: What Called This?
```
WorktreeManager.createSessionWorktree(projectDir, sessionId)
called by Session.initializeWorkspace()
called by Session.create()
called by test at Project.create()
```
### 4. Keep Tracing Up
**What value was passed?**
- `projectDir = ''` (empty string!)
- Empty string as `cwd` resolves to `process.cwd()`
- That's the source code directory!
### 5. Find Original Trigger
**Where did empty string come from?**
```typescript
const context = setupCoreTest(); // Returns { tempDir: '' }
Project.create("name", context.tempDir); // Accessed before beforeEach!
```
## Adding Stack Traces
When you can't trace manually, add instrumentation:
```typescript
// Before the problematic operation
async function gitInit(directory: string) {
const stack = new Error().stack;
console.error("DEBUG git init:", {
directory,
cwd: process.cwd(),
nodeEnv: process.env.NODE_ENV,
stack,
});
await execFileAsync("git", ["init"], { cwd: directory });
}
```
**Critical:** Use `console.error()` in tests (not a logger, whose output may be suppressed)
**Run and capture:**
```bash
npm test 2>&1 | grep 'DEBUG git init'
```
**Analyze stack traces:**
- Look for test file names
- Find the line number triggering the call
- Identify the pattern (same test? same parameter?)
## Real Example: Empty projectDir
**Symptom:** `.git` created in `packages/core/` (source code)
**Trace chain:**
1. `git init` runs in `process.cwd()` ← empty cwd parameter
2. WorktreeManager called with empty projectDir
3. Session.create() passed empty string
4. Test accessed `context.tempDir` before beforeEach
5. setupCoreTest() returns `{ tempDir: '' }` initially
**Root cause:** Top-level variable initialization read `context.tempDir` before `beforeEach` had populated it
**Fix:** Made `tempDir` a getter that throws if accessed before `beforeEach` runs
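A minimal sketch of that getter fix, assuming vitest/jest-style `beforeEach`; the names follow the example above, but the body is an assumed shape, not the actual implementation:
```typescript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { beforeEach } from "vitest"; // or jest globals

// Hypothetical shape of setupCoreTest() after the fix
export function setupCoreTest() {
  let tempDir: string | undefined;

  beforeEach(() => {
    tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "core-test-"));
  });

  return {
    // Throws on top-level access, before any beforeEach has run
    get tempDir(): string {
      if (tempDir === undefined) {
        throw new Error("tempDir accessed before beforeEach ran");
      }
      return tempDir;
    },
  };
}
```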
**Also added defense-in-depth:**
- Layer 1: Project.create() validates directory
- Layer 2: WorkspaceManager validates not empty
- Layer 3: NODE_ENV guard refuses git init outside tmpdir (sketched below)
- Layer 4: Stack trace logging before git init
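Layer 3 might look something like this sketch; the env-var gate and tmpdir check are assumptions to adapt to your project:
```typescript
import os from "node:os";
import path from "node:path";

// Sketch of Layer 3: during tests, refuse git init anywhere outside the OS tmpdir
export function assertSafeGitInitDir(directory: string): void {
  if (process.env.NODE_ENV !== "test") return;
  const resolved = path.resolve(directory);
  if (!resolved.startsWith(os.tmpdir())) {
    throw new Error(`Refusing git init outside tmpdir during tests: ${resolved}`);
  }
}
```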
## Verification Strategy After Fix
**Auto-decide on testing based on complexity:**
**Write deterministic unit test for:**
- Complex algorithms or business logic
- Data transformations where bugs are likely
- Critical paths that could break silently
**Skip test for:**
- UI components or React hooks
- Simple CRUD operations
- Straightforward mappings
- Anything you're 100% certain is correct
**If no test:**
- Verify with typecheck/lint
- Manual verification for UI changes
- Code review confidence
**Only deterministic unit tests** - no integration tests, no complex mocking, no async complexity.
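When a test is warranted, keep it this small - a deterministic unit test against the Layer 3 guard sketched above (the import path and guard are hypothetical):
```typescript
import { describe, expect, it } from "vitest";
import { assertSafeGitInitDir } from "./git-guards"; // hypothetical module

describe("assertSafeGitInitDir", () => {
  it("rejects directories outside tmpdir when NODE_ENV=test", () => {
    process.env.NODE_ENV = "test";
    // No mocks, no async: one input, one expected failure
    expect(() => assertSafeGitInitDir("/Users/someone/project")).toThrow(/tmpdir/);
  });
});
```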
## Key Principle
```dot
digraph principle {
"Found immediate cause" [shape=ellipse];
"Can trace one level up?" [shape=diamond];
"Trace backwards" [shape=box];
"Is this the source?" [shape=diamond];
"Fix at source" [shape=box];
"Add validation at each layer" [shape=box];
"Bug impossible" [shape=doublecircle];
"NEVER fix just the symptom" [shape=octagon, style=filled, fillcolor=red, fontcolor=white];
"Found immediate cause" -> "Can trace one level up?";
"Can trace one level up?" -> "Trace backwards" [label="yes"];
"Can trace one level up?" -> "NEVER fix just the symptom" [label="no"];
"Trace backwards" -> "Is this the source?";
"Is this the source?" -> "Trace backwards" [label="no - keeps going"];
"Is this the source?" -> "Fix at source" [label="yes"];
"Fix at source" -> "Add validation at each layer";
"Add validation at each layer" -> "Bug impossible";
}
```
**NEVER fix just where the error appears.** Trace back to find the original trigger.
## Stack Trace Tips
**In tests:** Use `console.error()` not logger - logger may be suppressed
**Before operation:** Log before the dangerous operation, not after it fails
**Include context:** Directory, cwd, environment variables, timestamps
**Capture stack:** `new Error().stack` shows complete call chain
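The four tips combine into one small helper, sketched here (the field names are illustrative):
```typescript
// Log BEFORE the dangerous operation; console.error so test runners don't swallow it
function logWithStack(label: string, context: Record<string, unknown>): void {
  console.error(`DEBUG ${label}:`, {
    ...context,
    cwd: process.cwd(),
    nodeEnv: process.env.NODE_ENV,
    timestamp: new Date().toISOString(),
    stack: new Error().stack, // complete call chain to this point
  });
}

// Usage, just before the risky call:
// logWithStack("git init", { directory });
```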
## Real-World Impact
From debugging session (2025-10-03):
- Found root cause through 5-level trace
- Fixed at source (getter validation)
- Added 4 layers of defense
- 1847 tests passed, zero pollution


@@ -0,0 +1,343 @@
---
name: systematic-debugging
description: Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes - four-phase framework (root cause investigation, pattern analysis, hypothesis testing, implementation) that ensures understanding before attempting solutions
---
# Systematic Debugging
## Overview
Random fixes waste time and create new bugs. Quick patches mask underlying issues.
**Core principle:** ALWAYS find root cause before attempting fixes. Symptom fixes are failure.
**Violating the letter of this process is violating the spirit of debugging.**
## The Iron Law
```
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
```
If you haven't completed Phase 1, you cannot propose fixes.
## When to Use
Use for ANY technical issue:
- Test failures
- Bugs in production
- Unexpected behavior
- Performance problems
- Build failures
- Integration issues
**Use this ESPECIALLY when:**
- Under time pressure (emergencies make guessing tempting)
- "Just one quick fix" seems obvious
- You've already tried multiple fixes
- Previous fix didn't work
- You don't fully understand the issue
**Don't skip when:**
- Issue seems simple (simple bugs have root causes too)
- You're in a hurry (rushing guarantees rework)
- Manager wants it fixed NOW (systematic is faster than thrashing)
## The Four Phases
You MUST complete each phase before proceeding to the next.
### Phase 1: Root Cause Investigation
**BEFORE attempting ANY fix:**
1. **Read Error Messages Carefully**
   - Don't skip past errors or warnings
   - They often contain the exact solution
   - Read stack traces completely
   - Note line numbers, file paths, error codes
2. **Reproduce Consistently**
   - Can you trigger it reliably?
   - What are the exact steps?
   - Does it happen every time?
   - If not reproducible → gather more data, don't guess
3. **Check Recent Changes**
   - What changed that could cause this?
   - Git diff, recent commits
   - New dependencies, config changes
   - Environmental differences
4. **Gather Evidence in Multi-Component Systems**

   **WHEN system has multiple components (CI → build → signing, API → service → database):**

   **BEFORE proposing fixes, add diagnostic instrumentation:**

   ```
   For EACH component boundary:
   - Log what data enters component
   - Log what data exits component
   - Verify environment/config propagation
   - Check state at each layer
   Run once to gather evidence showing WHERE it breaks
   THEN analyze evidence to identify failing component
   THEN investigate that specific component
   ```

   **Example (multi-layer system):**

   ```bash
   # Layer 1: Workflow
   echo "=== Secrets available in workflow: ==="
   # Report presence only; never echo the secret's value
   if [ -n "${IDENTITY:-}" ]; then echo "IDENTITY: SET"; else echo "IDENTITY: UNSET"; fi

   # Layer 2: Build script
   echo "=== Env vars in build script: ==="
   env | grep -q '^IDENTITY=' && echo "IDENTITY in environment" || echo "IDENTITY not in environment"

   # Layer 3: Signing script
   echo "=== Keychain state: ==="
   security list-keychains
   security find-identity -v

   # Layer 4: Actual signing
   codesign --sign "$IDENTITY" --verbose=4 "$APP"
   ```

   **This reveals:** Which layer fails (secrets → workflow ✓, workflow → build ✗)
5. **Trace Data Flow**

   **WHEN error is deep in call stack:**

   **REQUIRED SUB-SKILL:** Use superpowers root-cause-tracing for the backward-tracing technique

   **Quick version:**
   - Where does the bad value originate?
   - What called this with the bad value?
   - Keep tracing up until you find the source
   - Fix at source, not at symptom
### Phase 2: Pattern Analysis
**Find the pattern before fixing:**
1. **Find Working Examples**
   - Locate similar working code in same codebase
   - What works that's similar to what's broken?
2. **Compare Against References**
   - If implementing pattern, read reference implementation COMPLETELY
   - Don't skim - read every line
   - Understand the pattern fully before applying
3. **Identify Differences**
   - What's different between working and broken?
   - List every difference, however small
   - Don't assume "that can't matter"
4. **Understand Dependencies**
   - What other components does this need?
   - What settings, config, environment?
   - What assumptions does it make?
### Phase 3: Hypothesis and Testing
**Scientific method:**
1. **Form Single Hypothesis**
   - State clearly: "I think X is the root cause because Y"
   - Write it down
   - Be specific, not vague
2. **Test Minimally**
   - Make the SMALLEST possible change to test hypothesis
   - One variable at a time
   - Don't fix multiple things at once
3. **Verify Before Continuing**
   - Did it work? Yes → Phase 4
   - Didn't work? Form NEW hypothesis
   - DON'T add more fixes on top
4. **When You Don't Know**
   - Say "I don't understand X"
   - Don't pretend to know
   - Ask for help
   - Research more
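A minimal hypothesis test might look like this, borrowing the empty-`projectDir` example from root-cause-tracing (`Project` and its module path are illustrative):
```typescript
import { expect, it } from "vitest";
import { Project } from "./project"; // hypothetical module under test

// Hypothesis, written down: "Project.create fails because projectDir is ''."
// Smallest possible test: pass '' directly; change nothing else.
it("empty projectDir is rejected at the source", () => {
  expect(() => Project.create("name", "")).toThrow();
});
```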
### Phase 4: Implementation
**Fix the root cause, not the symptom:**
1. **Decide on Testing Strategy**

   **Auto-decide based on complexity:**
   - **Write test for**: Complex algorithms, business logic, data transformations where bugs are likely
   - **Skip test for**: UI components, React hooks, simple CRUD, straightforward mappings, anything you're 100% certain is correct
   - **Test type**: Only deterministic unit tests - no integration tests, no complex mocking, no async complexity

   **If writing test:**
   - Simplest possible reproduction
   - Automated test that fails before fix
   - Verify logic, not implementation details

   **If skipping test:**
   - Verify fix with typecheck/lint
   - Manual verification for UI changes
   - Code review confidence that fix is correct
2. **Implement Single Fix**
   - Address the root cause identified
   - ONE change at a time
   - No "while I'm here" improvements
   - No bundled refactoring
3. **Verify Fix**

   **If test was written:**
   - Test passes now?
   - No other tests broken?

   **If no test:**
   - Typecheck passes?
   - Lint clean?
   - Manual verification confirms fix?

   **Always check:**
   - Issue actually resolved?
   - No regressions in related functionality?
4. **If Fix Doesn't Work**
   - STOP
   - Count: How many fixes have you tried?
   - If < 3: Return to Phase 1, re-analyze with new information
   - **If ≥ 3: STOP and question the architecture (step 5 below)**
   - DON'T attempt Fix #4 without architectural discussion
5. **If 3+ Fixes Failed: Question Architecture**

   **Pattern indicating architectural problem:**
   - Each fix reveals new shared state/coupling/problem in different place
   - Fixes require "massive refactoring" to implement
   - Each fix creates new symptoms elsewhere

   **STOP and question fundamentals:**
   - Is this pattern fundamentally sound?
   - Are we "sticking with it through sheer inertia"?
   - Should we refactor architecture vs. continue fixing symptoms?

   **Discuss with your human partner before attempting more fixes**

   This is NOT a failed hypothesis - this is a wrong architecture.
## Red Flags - STOP and Follow Process
If you catch yourself thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see if it works"
- "Add multiple changes, run tests"
- "It's probably X, let me fix that"
- "I don't fully understand but this might work"
- "Pattern says X but I'll adapt it differently"
- "Here are the main problems: [lists fixes without investigation]"
- Proposing solutions before tracing data flow
- **"One more fix attempt" (when already tried 2+)**
- **Each fix reveals new problem in different place**
- **Writing tests for UI components when you're certain the fix is correct**
**ALL of these mean: STOP. Return to Phase 1.**
**If 3+ fixes failed:** Question the architecture (see Phase 4.5)
## Your Human Partner's Signals You're Doing It Wrong
**Watch for these redirections:**
- "Is that not happening?" - You assumed without verifying
- "Will it show us...?" - You should have added evidence gathering
- "Stop guessing" - You're proposing fixes without understanding
- "Ultrathink this" - Question fundamentals, not just symptoms
- "We're stuck?" (frustrated) - Your approach isn't working
**When you see these:** STOP. Return to Phase 1.
## Common Rationalizations
| Excuse | Reality |
| -------------------------------------------- | ----------------------------------------------------------------------------- |
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |
| "UI fix doesn't need tests" | Correct! UI components verified via typecheck/manual testing, not unit tests. |
## Quick Reference
| Phase | Key Activities | Success Criteria |
| --------------------- | ------------------------------------------------------ | --------------------------- |
| **1. Root Cause** | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| **2. Pattern** | Find working examples, compare | Identify differences |
| **3. Hypothesis** | Form theory, test minimally | Confirmed or new hypothesis |
| **4. Implementation** | Create test, fix, verify | Bug resolved, tests pass |
## When Process Reveals "No Root Cause"
If systematic investigation reveals issue is truly environmental, timing-dependent, or external:
1. You've completed the process
2. Document what you investigated
3. Implement appropriate handling (retry, timeout, error message)
4. Add monitoring/logging for future investigation
**But:** 95% of "no root cause" cases are incomplete investigation.
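When handling is the right outcome, keep it explicit and observable; a sketch of bounded retry with logging (`withRetry` is illustrative, not an existing helper):
```typescript
// Retry a genuinely flaky external operation a bounded number of times,
// logging each failure so future investigation has evidence to work with.
async function withRetry<T>(operation: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      console.error(`attempt ${attempt}/${attempts} failed:`, error);
    }
  }
  throw lastError;
}
```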
## Integration with Other Skills
**This skill requires using:**
- **root-cause-tracing** - REQUIRED when error is deep in call stack (see Phase 1, Step 5)
**Testing skills (when needed):**
- **test-driven-development** (if available) - Use when fixing complex business logic that needs test coverage
- Skip for UI components, simple CRUD, or anything verifiable via typecheck/manual testing
## Real-World Impact
From debugging sessions:
- Systematic approach: 15-30 minutes to fix
- Random fixes approach: 2-3 hours of thrashing
- First-time fix rate: 95% vs 40%
- New bugs introduced: Near zero vs common