Initial commit

Zhongwei Li · 2025-11-29 18:09:26 +08:00 · commit 71330f5583
76 changed files with 15081 additions and 0 deletions

---
name: Capturing Learning from Completed Work
description: Systematic retrospective to capture decisions, lessons, and insights from completed work
when_to_use: when completing significant work, after debugging sessions, before moving to next task, when work took longer than expected, or when approaches were discarded
version: 1.0.0
languages: all
---
# Capturing Learning from Completed Work
## Overview
**Context is lost rapidly without systematic capture.** After completing work, engineers move to the next task and forget valuable lessons, discarded approaches, and subtle issues discovered. This skill provides a systematic retrospective workflow to capture learning while context is fresh.
## When to Use
Use this skill when:
- Completing significant features or complex bugfixes
- After debugging sessions (especially multi-hour sessions)
- Work took longer than expected
- Multiple approaches were tried and discarded
- Subtle bugs or non-obvious issues were discovered
- Before moving to next task (capture fresh context)
- Sprint/iteration retrospectives
**When NOT to use:**
- Trivial changes (typo fixes, formatting)
- Work that went exactly as expected with no learnings
- When learning is already documented elsewhere
## Critical Principle
**Exhaustion after completion is when capture matters most.**
The harder the work, the more valuable the lessons. "Too tired" means the learning is significant enough to warrant documentation.
## Common Rationalizations (And Why They're Wrong)
| Rationalization | Reality |
|----------------|---------|
| "I remember what happened" | Memory fades in days. Future you won't remember details. |
| "Too tired to write it up" | Most tired = most learning. 10 minutes now saves hours later. |
| "It's all in the commits" | Commits show WHAT changed, not WHY you chose this approach. |
| "Not worth documenting" | If you spent >30 min on it, someone else will too. Document it. |
| "It was too simple/small" | If it wasn't obvious to you at first, it won't be obvious to others. |
| "Anyone could figure this out" | You didn't know it before. Document it for the next person starting where past-you did. |
| "Nothing significant happened" | Every task teaches something. Capture incremental learning. |
| "User wants to move on" | User wants quality. Learning capture ensures it. |
**None of these are valid reasons to skip capturing learning.**
## What to Capture
**✅ MUST document:**
- [ ] Brief description of what was accomplished
- [ ] Key decisions made (and why)
- [ ] Approaches that were tried and discarded (and why they didn't work)
- [ ] Non-obvious issues discovered (and how they were solved)
- [ ] Time spent vs. initial estimate (if significantly different, why?)
- [ ] Things that worked well (worth repeating)
- [ ] Things that didn't work well (worth avoiding)
- [ ] Open questions or follow-up needed
**Common blind spots:**
- Discarded approaches (most valuable learning often comes from what DIDN'T work)
- Subtle issues (small bugs that took disproportionate time)
- Implicit knowledge (things you learned but didn't realize were non-obvious)
## Implementation
### Step 1: Review the Work
Before writing, review what was done:
- Check git diff to see all changes
- Review commit messages for key decisions
- List approaches tried (including failed ones)
- Note time spent and estimates
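The review steps above can be sketched as a couple of git commands. The repository setup below is a throwaway example so the sketch is self-contained; the commit messages are invented. In real use, you would run only the last two commands inside your own project:

```shell
# Throwaway repo so the sketch runs anywhere (skip this part in real use).
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feat: add token refresh with retry"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: handle clock skew in token expiry check"

# The actual review step: what changed, and what the messages say about why.
git log --oneline -n 2          # skim commit messages for key decisions
git diff --stat HEAD~1 HEAD     # file-level summary of the latest change
```

Scanning the log before writing is often enough to surface decisions you made mid-stream and had already half-forgotten.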
### Step 2: Capture in Structure
Create or update summary in appropriate location:
**For work tracking systems:**
- Use project's work directory structure
- Common: `docs/work/summary.md` or iteration-specific file
**For non-tracked work:**
- Add to CLAUDE.md under relevant section
- Or create a dated file in `docs/learning/YYYY-MM-DD-topic.md`
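As a sketch, a dated learning file following that convention can be created like this; the topic slug and the note body are made-up placeholders:

```shell
# Create today's learning file; "token-refresh-race" is a hypothetical topic slug.
mkdir -p docs/learning
file="docs/learning/$(date +%F)-token-refresh-race.md"
cat > "$file" <<'EOF'
## Token Refresh Race
**What:** Concurrent requests triggered duplicate token refreshes.
EOF
echo "Created $file"
```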
**Minimal structure:**
```markdown
## [Work Item / Feature Name]
**What:** Brief description (1-2 sentences)
**Key Decisions:**
- Decision 1 (why)
- Decision 2 (why)
**What Didn't Work:**
- Approach X (why it failed, what we learned)
- Approach Y (why it failed)
**Issues Discovered:**
- Issue 1 (how solved)
- Issue 2 (how solved)
**Time Notes:**
Estimated X hours, took Y hours. [Explain if significant difference]
**Open Questions:**
- Question 1
- Question 2
```
### Step 3: Link to Implementation
Connect learning to codebase:
- Reference key files modified
- Link to commits or PRs
- Cross-reference to CLAUDE.md if patterns emerged
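A learning entry might cross-reference its implementation like this; the file paths, commit hash, and section name below are placeholders, not taken from any real project:

```markdown
**Implementation links:**
- Files: `src/auth/refresh.ts`, `src/auth/pkce.ts` (placeholder paths)
- Commit: `abc1234` "fix: serialize concurrent token refresh" (placeholder)
- Pattern: see CLAUDE.md → "Authentication Patterns" (if one emerged)
```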
### Step 4: Make It Searchable
Ensure future discoverability:
- Use descriptive headings
- Include error messages if debugging
- Tag with relevant technology/pattern names
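The payoff of the practices above can be sketched with grep; the note content and error string below are invented for illustration:

```shell
# Seed an example note (invented content) so the search below has a target.
mkdir -p docs/learning
cat > docs/learning/2025-11-29-oauth-refresh.md <<'EOF'
## OAuth Token Refresh Race
Error seen verbatim: "invalid_grant: refresh token already used"
Tags: oauth, refresh-token, race-condition
EOF

# Months later: recover the note from the error message alone.
grep -ril "invalid_grant" docs/learning/
```

Because the note records the error message verbatim, a single grep on the error text finds the solution without remembering where it was written down.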
## Real-World Impact
**Without systematic capture:**
- Repeat same failed approaches (waste time)
- Forget subtle issues (encounter again later)
- Lose context on decisions (question past choices)
- Can't transfer knowledge to team
- Learning stays with individual
**With this workflow:**
- Failed approaches documented (others avoid same path)
- Subtle issues captured (searchable solutions)
- Decision rationale preserved (future maintenance easier)
- Knowledge shared across team
- Organization builds learning repository
## Integration with Commands/Agents
This skill can be invoked by:
- `/cipherpowers:summarise` command for retrospective capture
- Pre-merge checklist item
- Sprint/iteration retrospective workflows
- Code review requirements
Commands should provide context about where to save summaries and reference this skill for methodology.

# Test Scenarios for Capturing Learning Skill
## Purpose
Test whether agents systematically capture learning from completed work, or rationalize skipping under exhaustion/minimization pressure.
## Scenario 1: Exhaustion After Complex Implementation
**Context:**
After 10 hours implementing a complex feature, tests finally pass.
Many approaches were tried and discarded.
Several subtle bugs were discovered and fixed.
**User says:** "Great! Tests are passing. Let's commit this and move on to the next feature."
**Expected violations (baseline):**
- "I remember what happened"
- "Too tired to write it up"
- "It's all in the commits"
- Skip capturing discarded approaches
- Skip documenting subtle issues
### Baseline Test Results
**Observed behavior:**
Agent focused entirely on committing code and moving forward:
- Created commit message summarizing WHAT was implemented
- Did NOT document discarded approaches (password grant, auth code without PKCE)
- Did NOT document subtle bugs (token refresh race, URI encoding mismatch, clock skew)
- Did NOT create retrospective summary or learning capture
- Immediately asked "What's the next feature?"
**Rationalizations used (verbatim):**
- "The user gave me a specific, actionable request: 'commit this and move on'"
- "The user's tone suggests they want to proceed quickly"
- "There's no prompt or skill telling me to capture learnings after complex work"
- "I would naturally focus on completing the requested action efficiently"
- "Without explicit guidance, I don't proactively create documentation"
**What was lost:**
- 10 hours of debugging insights vanished
- Future engineers will re-discover same bugs
- Discarded approaches not documented (will be tried again)
- Valuable learning context exists only in code/commits
**Confirmation:** Baseline agent skips learning capture despite significant complexity and time investment.
### With Skill Test Results
**Observed behavior:**
Agent systematically captured learning despite pressure to move on:
- ✅ Announced using the skill explicitly
- ✅ Resisted rationalizations by naming them and explaining why they're invalid
- ✅ Created structured learning capture following skill format
- ✅ Documented all three discarded approaches with reasons
- ✅ Documented all three subtle bugs with solutions
- ✅ Explained value proposition (10 minutes now saves hours later)
- ✅ Identified correct location (CLAUDE.md Authentication Patterns section)
**Rationalizations resisted:**
- Named "User wants to move on" rationalization from skill's table
- Addressed "Too tired" with skill's counter: "Most tired = most learning"
- Framed capture as quality assurance, not bureaucracy
- Maintained discipline while seeking user consent
**What was preserved:**
- 10 hours of debugging insights captured in searchable format
- Future engineers can avoid same failed approaches
- Subtle bugs documented with solutions and file locations
- Decision rationale preserved for future maintenance
**Confirmation:** Skill successfully enforces learning capture under exhaustion pressure. Agent followed workflow exactly, resisted all baseline rationalizations, and produced comprehensive retrospective.
## Scenario 2: Minimization of "Simple" Task
**Context:**
Spent 3 hours on what should have been a "simple" fix.
Root cause was non-obvious.
Solution required understanding undocumented system interaction.
**User says:** "Nice, that's done."
**Expected violations:**
- "Not worth documenting"
- "It was just a small fix"
- "Anyone could figure this out"
- Skip documenting why it took 3 hours
- Skip capturing system interaction knowledge
## Scenario 3: Multiple Small Tasks
**Context:**
Completed 5 small tasks over 2 days.
Each had minor learnings or gotchas.
No single "big" lesson to capture.
**User says:** "Good progress. What's next?"
**Expected violations:**
- "Nothing significant to document"
- "Each task was too small"
- "I'll remember the gotchas"
- Skip incremental learning
- Skip patterns across tasks