Initial commit

Zhongwei Li
2025-11-30 08:48:52 +08:00
commit 6ec3196ecc
434 changed files with 125248 additions and 0 deletions

View File

@@ -0,0 +1,40 @@
# Problem-Solving Skills - Attribution
These skills were derived from agent patterns in the [Amplifier](https://github.com/microsoft/amplifier) project.
**Source Repository:**
- Name: Amplifier
- URL: https://github.com/microsoft/amplifier
- Commit: 2adb63f858e7d760e188197c8e8d4c1ef721e2a6
- Date: 2025-10-10
## Skills Derived from Amplifier Agents
**From insight-synthesizer agent:**
- simplification-cascades - Finding insights that eliminate multiple components
- collision-zone-thinking - Forcing unrelated concepts together for breakthroughs
- meta-pattern-recognition - Spotting patterns across 3+ domains
- inversion-exercise - Flipping assumptions to reveal alternatives
- scale-game - Testing at extremes to expose fundamental truths
**From ambiguity-guardian agent:**
- (architecture) preserving-productive-tensions - Preserving multiple valid approaches
**From knowledge-archaeologist agent:**
- (research) tracing-knowledge-lineages - Understanding how ideas evolved
**Dispatch pattern:**
- when-stuck - Maps stuck-symptoms to appropriate technique
## What Was Adapted
The Amplifier agents are specialized, long-lived agents with structured JSON output. These skills extract the core problem-solving techniques and adapt them as:
- Scannable quick-reference guides (~60 lines each)
- Symptom-based discovery via when_to_use
- Immediate application without special tooling
- Composable through dispatch pattern
## Core Insight
Agent capabilities are domain-agnostic patterns. Whether packaged as an "amplifier agent" or a "superpowers skill", the underlying technique is the same. We extracted the techniques and made them portable.

View File

@@ -0,0 +1,62 @@
---
name: Collision-Zone Thinking
description: Force unrelated concepts together to discover emergent properties - "What if we treated X like Y?"
when_to_use: when conventional approaches feel inadequate and you need breakthrough innovation by forcing unrelated concepts together
version: 1.1.0
---
# Collision-Zone Thinking
## Overview
Revolutionary insights come from forcing unrelated concepts to collide. Treat X like Y and see what emerges.
**Core principle:** Deliberate metaphor-mixing generates novel solutions.
## Quick Reference
| Stuck On | Try Treating As | Might Discover |
|----------|-----------------|----------------|
| Code organization | DNA/genetics | Mutation testing, evolutionary algorithms |
| Service architecture | Lego bricks | Composable microservices, plug-and-play |
| Data management | Water flow | Streaming, data lakes, flow-based systems |
| Request handling | Postal mail | Message queues, async processing |
| Error handling | Electrical circuits | Circuit breakers, fault isolation, graceful degradation |
## Process
1. **Pick two unrelated concepts** from different domains
2. **Force combination**: "What if we treated [A] like [B]?"
3. **Explore emergent properties**: What new capabilities appear?
4. **Test boundaries**: Where does the metaphor break?
5. **Extract insight**: What did we learn?
## Example Collision
**Problem:** Complex distributed system with cascading failures
**Collision:** "What if we treated services like electrical circuits?"
**Emergent properties:**
- Circuit breakers (disconnect on overload)
- Fuses (one-time failure protection)
- Ground faults (error isolation)
- Load balancing (current distribution)
**Where it works:** Preventing cascade failures
**Where it breaks:** Circuits don't have retry logic
**Insight gained:** Failure isolation patterns from electrical engineering
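To ground the collision, here is a minimal Python sketch of the circuit-breaker idea. It is an illustrative assumption, not code from the source: the `CircuitBreaker` class, its thresholds, and the `call` wrapper are invented for this example.
```python
# Illustrative sketch only; names and thresholds are assumptions.
import time

class CircuitBreaker:
    """Disconnect a failing dependency on overload, like a breaker in a circuit."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures   # consecutive failures before tripping
        self.reset_after = reset_after     # seconds to wait before a trial call
        self.failures = 0
        self.opened_at = None              # None = circuit closed (normal operation)

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")  # no cascade downstream
            self.opened_at = None          # half-open: let one call probe recovery
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip: stop hammering the service
            raise
        self.failures = 0                  # success resets the count
        return result
```
Note where the metaphor breaks, as above: the breaker fails fast but does not retry; retry logic has to come from somewhere else.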
## Red Flags You Need This
- "I've tried everything in this domain"
- Solutions feel incremental, not breakthrough
- Stuck in conventional thinking
- Need innovation, not optimization
## Remember
- Wild combinations often yield best insights
- Test metaphor boundaries rigorously
- Document even failed collisions (they teach)
- Best source domains: physics, biology, economics, psychology

View File

@@ -0,0 +1,58 @@
---
name: Inversion Exercise
description: Flip core assumptions to reveal hidden constraints and alternative approaches - "What if the opposite were true?"
when_to_use: when stuck on unquestioned assumptions or feeling forced into "the only way" to do something
version: 1.1.0
---
# Inversion Exercise
## Overview
Flip every assumption and see what still works. Sometimes the opposite reveals the truth.
**Core principle:** Inversion exposes hidden assumptions and alternative approaches.
## Quick Reference
| Normal Assumption | Inverted | What It Reveals |
|-------------------|----------|-----------------|
| Cache to reduce latency | Add latency deliberately | Debouncing patterns |
| Pull data when needed | Push data before needed | Prefetching, eager loading |
| Handle errors when they occur | Make errors impossible | Type systems, contracts |
| Build features users want | Remove features users don't need | Simplicity >> addition |
| Optimize for common case | Optimize for worst case | Resilience patterns |
## Process
1. **List core assumptions** - What "must" be true?
2. **Invert each systematically** - "What if the opposite were true?"
3. **Explore implications** - What would we do differently?
4. **Find valid inversions** - Which actually work somewhere?
## Example
**Problem:** Users complain app is slow
**Normal approach:** Make everything faster (caching, optimization, CDN)
**Inverted:** Make things intentionally slower in some places
- Debounce search (add latency → enable better results)
- Rate limit requests (add friction → prevent abuse)
- Lazy load content (delay → reduce initial load)
**Insight:** Strategic slowness can improve UX
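A minimal sketch of the debounce inversion, assuming a thread-based timer; the `debounce` decorator and its parameters are illustrative, not from the source.
```python
# Illustrative sketch: added latency enables better results.
import threading

def debounce(wait_seconds):
    """Run the wrapped function only after calls stop arriving for wait_seconds."""
    def decorator(fn):
        timer = None
        def debounced(*args, **kwargs):
            nonlocal timer
            if timer is not None:
                timer.cancel()             # a newer call supersedes the pending one
            timer = threading.Timer(wait_seconds, fn, args, kwargs)
            timer.start()
        return debounced
    return decorator

@debounce(0.3)
def search(query):
    print(f"searching for {query!r}")      # fires once, 300 ms after typing pauses
```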
## Red Flags You Need This
- "There's only one way to do this"
- Forcing solution that feels wrong
- Can't articulate why approach is necessary
- "This is just how it's done"
## Remember
- Not all inversions work (test boundaries)
- Valid inversions reveal context-dependence
- Sometimes opposite is the answer
- Question "must be" statements

View File

@@ -0,0 +1,54 @@
---
name: Meta-Pattern Recognition
description: Spot patterns appearing in 3+ domains to find universal principles
when_to_use: when noticing the same pattern across 3+ different domains or experiencing déjà vu in problem-solving
version: 1.1.0
---
# Meta-Pattern Recognition
## Overview
When the same pattern appears in 3+ domains, it's probably a universal principle worth extracting.
**Core principle:** Find patterns in how patterns emerge.
## Quick Reference
| Pattern Appears In | Abstract Form | Where Else? |
|-------------------|---------------|-------------|
| CPU/DB/HTTP/DNS caching | Store frequently-accessed data closer | LLM prompt caching, CDN |
| Layering (network/storage/compute) | Separate concerns into abstraction levels | Architecture, organization |
| Queuing (message/task/request) | Decouple producer from consumer with buffer | Event systems, async processing |
| Pooling (connection/thread/object) | Reuse expensive resources | Memory management, resource governance |
## Process
1. **Spot repetition** - See same shape in 3+ places
2. **Extract abstract form** - Describe independent of any domain
3. **Identify variations** - How does it adapt per domain?
4. **Check applicability** - Where else might this help?
## Example
**Pattern spotted:** Rate limiting in API throttling, traffic shaping, circuit breakers, admission control
**Abstract form:** Bound resource consumption to prevent exhaustion
**Variation points:** What resource, what limit, what happens when exceeded
**New application:** LLM token budgets (same pattern - prevent context window exhaustion)
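A sketch of the abstract form and its reapplication, assuming a simple counter-based design; `ResourceBudget` and its interface are hypothetical, not a real API.
```python
# Hypothetical sketch of the abstract form.
class ResourceBudget:
    """Bound consumption of a resource to prevent exhaustion."""

    def __init__(self, limit, on_exceeded):
        self.limit = limit                  # variation point: what limit
        self.used = 0
        self.on_exceeded = on_exceeded      # variation point: what happens when exceeded

    def consume(self, amount):
        if self.used + amount > self.limit:
            return self.on_exceeded(amount)
        self.used += amount
        return True

# Same pattern, new domain: a token budget guarding an LLM context window.
tokens = ResourceBudget(limit=8192, on_exceeded=lambda n: False)
assert tokens.consume(4000) and tokens.consume(4000)
assert not tokens.consume(1000)             # would exhaust the window, so rejected
```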
## Red Flags You're Missing Meta-Patterns
- "This problem is unique" (probably not)
- Multiple teams independently solving "different" problems identically
- Reinventing wheels across domains
- "Haven't we done something like this?" (yes, find it)
## Remember
- 3+ domains = likely universal
- Abstract form reveals new applications
- Variations show adaptation points
- Universal patterns are battle-tested

View File

@@ -0,0 +1,63 @@
---
name: Scale Game
description: Test at extremes (1000x bigger/smaller, instant/year-long) to expose fundamental truths hidden at normal scales
when_to_use: when uncertain about scalability, edge cases unclear, or validating architecture for production volumes
version: 1.1.0
---
# Scale Game
## Overview
Test your approach at extreme scales to find what breaks and what surprisingly survives.
**Core principle:** Extremes expose fundamental truths hidden at normal scales.
## Quick Reference
| Scale Dimension | Test At Extremes | What It Reveals |
|-----------------|------------------|-----------------|
| Volume | 1 item vs 1B items | Algorithmic complexity limits |
| Speed | Instant vs 1 year | Async requirements, caching needs |
| Users | 1 user vs 1B users | Concurrency issues, resource limits |
| Duration | Milliseconds vs years | Memory leaks, state growth |
| Failure rate | Never fails vs always fails | Error handling adequacy |
## Process
1. **Pick dimension** - What could vary extremely?
2. **Test minimum** - What if this were 1000x smaller/faster/fewer?
3. **Test maximum** - What if this were 1000x bigger/slower/more?
4. **Note what breaks** - Where do limits appear?
5. **Note what survives** - What's fundamentally sound?
## Examples
### Example 1: Error Handling
**Normal scale:** "Handle errors when they occur" works fine
**At 1B scale:** Error volume overwhelms logging, crashes system
**Reveals:** Need to make errors impossible (type systems) or expect them (chaos engineering)
### Example 2: Synchronous APIs
**Normal scale:** Direct function calls work
**At global scale:** Network latency makes synchronous calls unusable
**Reveals:** Async/messaging becomes survival requirement, not optimization
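A sketch of that shift using Python's asyncio, with invented region names and a sleep standing in for network latency:
```python
# Illustrative sketch; region names and latency are stand-ins.
import asyncio

async def fetch(region):
    await asyncio.sleep(0.15)               # stand-in for a cross-region round trip
    return f"reply from {region}"

async def main():
    regions = ["us-east", "eu-west", "ap-south"]
    # Sequential awaits would pay the latency once per region;
    # gather overlaps the waits and pays it roughly once in total.
    replies = await asyncio.gather(*(fetch(r) for r in regions))
    print(replies)

asyncio.run(main())
```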
### Example 3: In-Memory State
**Normal duration:** Works for hours/days
**At years:** Memory grows unbounded, eventual crash
**Reveals:** Need persistence or periodic cleanup, can't rely on memory
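One way to bound that growth, assuming an LRU eviction policy is acceptable; `BoundedCache` is an illustrative sketch, not from the source.
```python
# Illustrative sketch: bounded state for long-running processes.
from collections import OrderedDict

class BoundedCache:
    """State that cannot grow without limit, however long the process runs."""

    def __init__(self, max_items=10_000):
        self.max_items = max_items
        self.items = OrderedDict()

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)          # mark as most recently used
        if len(self.items) > self.max_items:
            self.items.popitem(last=False)   # evict the least recently used entry

    def get(self, key, default=None):
        if key in self.items:
            self.items.move_to_end(key)
        return self.items.get(key, default)
```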
## Red Flags You Need This
- "It works in dev" (but will it work in production?)
- No idea where limits are
- "Should scale fine" (without testing)
- Surprised by production behavior
## Remember
- Extremes reveal fundamentals
- What works at one scale fails at another
- Test both directions (bigger AND smaller)
- Use insights to validate architecture early

View File

@@ -0,0 +1,76 @@
---
name: Simplification Cascades
description: Find one insight that eliminates multiple components - "if this is true, we don't need X, Y, or Z"
when_to_use: when implementing the same concept multiple ways, accumulating special cases, or complexity is spiraling
version: 1.1.0
---
# Simplification Cascades
## Overview
Sometimes one insight eliminates 10 things. Look for the unifying principle that makes multiple components unnecessary.
**Core principle:** "Everything is a special case of..." collapses complexity dramatically.
## Quick Reference
| Symptom | Likely Cascade |
|---------|----------------|
| Same thing implemented 5+ ways | Abstract the common pattern |
| Growing special case list | Find the general case |
| Complex rules with exceptions | Find the rule that has no exceptions |
| Excessive config options | Find defaults that work for 95% |
## The Pattern
**Look for:**
- Multiple implementations of similar concepts
- Special case handling everywhere
- "We need to handle A, B, C, D differently..."
- Complex rules with many exceptions
**Ask:** "What if they're all the same thing underneath?"
## Examples
### Cascade 1: Stream Abstraction
**Before:** Separate handlers for batch/real-time/file/network data
**Insight:** "All inputs are streams - just different sources"
**After:** One stream processor, multiple stream sources
**Eliminated:** 4 separate implementations
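A minimal sketch of the cascade, assuming line-oriented string streams; the source functions here are invented examples:
```python
# Illustrative sketch: one processor, many sources.
from typing import Iterable, Iterator

def file_source(path: str) -> Iterator[str]:
    with open(path) as f:
        yield from f                         # a file is just another stream

def batch_source(records: list[str]) -> Iterator[str]:
    yield from records                       # a batch is a stream with a known end

def process(stream: Iterable[str]) -> Iterator[str]:
    """One processor for every source; inputs differ only in origin."""
    for item in stream:
        yield item.strip().lower()

print(list(process(batch_source(["Alpha", "Beta"]))))   # ['alpha', 'beta']
```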
### Cascade 2: Resource Governance
**Before:** Session tracking, rate limiting, file validation, connection pooling (all separate)
**Insight:** "All are per-entity resource limits"
**After:** One ResourceGovernor with 4 resource types
**Eliminated:** 4 custom enforcement systems
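The source names a ResourceGovernor but not its interface, so this sketch assumes a simple per-entity counter design:
```python
# Assumed interface; only the ResourceGovernor name comes from the source.
from collections import defaultdict

class ResourceGovernor:
    """One enforcement point: each former subsystem is a per-entity limit."""

    def __init__(self, limits):
        self.limits = limits                 # e.g. {"sessions": 2, "requests": 100}
        self.usage = defaultdict(lambda: defaultdict(int))

    def acquire(self, entity, resource, amount=1):
        used = self.usage[entity][resource]
        if used + amount > self.limits[resource]:
            return False                     # session caps, rate limits, etc. all land here
        self.usage[entity][resource] = used + amount
        return True

gov = ResourceGovernor({"sessions": 2, "requests": 3})
assert gov.acquire("user-1", "sessions")
assert gov.acquire("user-1", "requests", 3)
assert not gov.acquire("user-1", "requests")  # per-entity request limit reached
```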
### Cascade 3: Immutability
**Before:** Defensive copying, locking, cache invalidation, temporal coupling
**Insight:** "Treat everything as immutable data + transformations"
**After:** Functional programming patterns
**Eliminated:** Entire classes of synchronization problems
## Process
1. **List the variations** - What's implemented multiple ways?
2. **Find the essence** - What's the same underneath?
3. **Extract abstraction** - What's the domain-independent pattern?
4. **Test it** - Do all cases fit cleanly?
5. **Measure cascade** - How many things become unnecessary?
## Red Flags You're Missing a Cascade
- "We just need to add one more case..." (repeating forever)
- "These are all similar but different" (maybe they're the same?)
- Refactoring feels like whack-a-mole (fix one, break another)
- Growing configuration file
- "Don't touch that, it's complicated" (complexity hiding pattern)
## Remember
- Simplification cascades = 10x wins, not 10% improvements
- One powerful abstraction > ten clever hacks
- The pattern is usually already there, just needs recognition
- Measure in "how many things can we delete?"

View File

@@ -0,0 +1,88 @@
---
name: When Stuck - Problem-Solving Dispatch
description: Dispatch to the right problem-solving technique based on how you're stuck
when_to_use: when stuck and unsure which problem-solving technique to apply for your specific type of stuck-ness
version: 1.1.0
---
# When Stuck - Problem-Solving Dispatch
## Overview
Different stuck-types need different techniques. This skill helps you quickly identify which problem-solving skill to use.
**Core principle:** Match stuck-symptom to technique.
## Quick Dispatch
```dot
digraph stuck_dispatch {
rankdir=TB;
node [shape=box, style=rounded];
stuck [label="You're Stuck", shape=ellipse, style=filled, fillcolor=lightblue];
complexity [label="Same thing implemented 5+ ways?\nGrowing special cases?\nExcessive if/else?"];
innovation [label="Can't find fitting approach?\nConventional solutions inadequate?\nNeed breakthrough?"];
patterns [label="Same issue in different places?\nFeels familiar across domains?\nReinventing wheels?"];
assumptions [label="Solution feels forced?\n'This must be done this way'?\nStuck on assumptions?"];
scale [label="Will this work at production?\nEdge cases unclear?\nUnsure of limits?"];
bugs [label="Code behaving wrong?\nTest failing?\nUnexpected output?"];
stuck -> complexity;
stuck -> innovation;
stuck -> patterns;
stuck -> assumptions;
stuck -> scale;
stuck -> bugs;
complexity -> simp [label="yes"];
innovation -> collision [label="yes"];
patterns -> meta [label="yes"];
assumptions -> invert [label="yes"];
scale -> scale_skill [label="yes"];
bugs -> debug [label="yes"];
simp [label="skills/problem-solving/\nsimplification-cascades", shape=box, style="rounded,filled", fillcolor=lightgreen];
collision [label="skills/problem-solving/\ncollision-zone-thinking", shape=box, style="rounded,filled", fillcolor=lightgreen];
meta [label="skills/problem-solving/\nmeta-pattern-recognition", shape=box, style="rounded,filled", fillcolor=lightgreen];
invert [label="skills/problem-solving/\ninversion-exercise", shape=box, style="rounded,filled", fillcolor=lightgreen];
scale_skill [label="skills/problem-solving/\nscale-game", shape=box, style="rounded,filled", fillcolor=lightgreen];
debug [label="skills/debugging/\nsystematic-debugging", shape=box, style="rounded,filled", fillcolor=lightyellow];
}
```
## Stuck-Type → Technique
| How You're Stuck | Use This Skill |
|------------------|----------------|
| **Complexity spiraling** - Same thing 5+ ways, growing special cases | skills/problem-solving/simplification-cascades |
| **Need innovation** - Conventional solutions inadequate, can't find fitting approach | skills/problem-solving/collision-zone-thinking |
| **Recurring patterns** - Same issue different places, reinventing wheels | skills/problem-solving/meta-pattern-recognition |
| **Forced by assumptions** - "Must be done this way", can't question premise | skills/problem-solving/inversion-exercise |
| **Scale uncertainty** - Will it work in production? Edge cases unclear? | skills/problem-solving/scale-game |
| **Code broken** - Wrong behavior, test failing, unexpected output | skills/debugging/systematic-debugging |
| **Multiple independent problems** - Can parallelize investigation | skills/collaboration/dispatching-parallel-agents |
| **Root cause unknown** - Symptom clear, cause hidden | skills/debugging/root-cause-tracing |
## Process
1. **Identify stuck-type** - What symptom matches above?
2. **Load that skill** - Read the specific technique
3. **Apply technique** - Follow its process
4. **If still stuck** - Try different technique or combine
## Combining Techniques
Some problems need multiple techniques:
- **Simplification + Meta-pattern**: Find pattern, then simplify all instances
- **Collision + Inversion**: Force metaphor, then invert its assumptions
- **Scale + Simplification**: Extremes reveal what to eliminate
## Remember
- Match symptom to technique
- One technique at a time
- Combine if first doesn't work
- Document what you tried