Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions


@@ -0,0 +1,130 @@
{
"name": "Abstraction Ladder Quality Rubric",
"scale": {
"min": 1,
"max": 5,
"description": "1=Poor, 2=Fair, 3=Good, 4=Very Good, 5=Excellent"
},
"criteria": [
{
"name": "Level Distinctness",
"description": "Each level is clearly distinct from adjacent levels with no redundancy",
"scoring": {
"1": "Levels are redundant or indistinguishable",
"2": "Some levels overlap significantly",
"3": "Levels are mostly distinct with minor overlap",
"4": "All levels are clearly distinct",
"5": "Each level adds unique, valuable perspective"
}
},
{
"name": "Transition Clarity",
"description": "Connections between levels are logical and traceable",
"scoring": {
"1": "No clear connection between levels",
"2": "Some connections are unclear or missing",
"3": "Most transitions are logical",
"4": "All transitions are clear and logical",
"5": "Transitions reveal deep insights about the topic"
}
},
{
"name": "Abstraction Range",
"description": "Spans from truly universal principles to concrete specifics",
"scoring": {
"1": "Limited range; all levels at similar abstraction",
"2": "Some variation but doesn't reach extremes",
"3": "Good range from abstract to concrete",
"4": "Excellent range; top is universal, bottom is specific",
"5": "Exceptional range with measurable concrete details and broadly applicable principles"
}
},
{
"name": "Concreteness at Bottom",
"description": "Most concrete level has specific, measurable, verifiable details",
"scoring": {
"1": "Bottom level still abstract or vague",
"2": "Bottom level somewhat specific but lacks detail",
"3": "Bottom level has concrete examples",
"4": "Bottom level has specific, measurable details",
"5": "Bottom level includes exact values, measurements, edge cases"
}
},
{
"name": "Abstraction at Top",
"description": "Most abstract level is universally applicable beyond this context",
"scoring": {
"1": "Top level is context-specific",
"2": "Top level is somewhat general but domain-limited",
"3": "Top level is broadly applicable",
"4": "Top level is universal within domain",
"5": "Top level transcends domain; applies to many fields"
}
},
{
"name": "Edge Case Quality",
"description": "Edge cases meaningfully test boundaries and reveal insights",
"scoring": {
"1": "No edge cases or trivial examples",
"2": "Edge cases present but don't challenge principles",
"3": "Edge cases test some boundaries",
"4": "Edge cases reveal interesting tensions or limits",
"5": "Edge cases expose deep insights and prompt refinement"
}
},
{
"name": "Assumption Transparency",
"description": "Assumptions, context, and limitations are stated explicitly",
"scoring": {
"1": "No acknowledgment of assumptions or limits",
"2": "Few assumptions mentioned",
"3": "Key assumptions stated",
"4": "Comprehensive assumption documentation",
"5": "Assumptions stated with analysis of how changes would affect ladder"
}
},
{
"name": "Coherence",
"description": "All levels address the same aspect/thread of the topic",
"scoring": {
"1": "Levels address completely different topics",
"2": "Significant topic drift between levels",
"3": "Mostly coherent with minor drift",
"4": "Strong coherence throughout",
"5": "Perfect thematic unity; tells a clear story"
}
},
{
"name": "Utility",
"description": "Ladder serves its stated purpose and provides actionable value",
"scoring": {
"1": "Purpose unclear; no practical value",
"2": "Some value but doesn't clearly serve a purpose",
"3": "Useful for stated purpose",
"4": "Highly useful with clear applications",
"5": "Exceptional utility; enables decisions or insights not otherwise possible"
}
},
{
"name": "Comprehensibility",
"description": "Someone unfamiliar with the topic can follow the logic",
"scoring": {
"1": "Requires deep expertise to understand",
"2": "Accessible only to domain experts",
"3": "Understandable with some background",
"4": "Clear to most readers",
"5": "Crystal clear; excellent pedagogical tool"
}
}
],
"overall_assessment": {
"thresholds": {
"excellent": "Average score ≥ 4.5 across all criteria",
"very_good": "Average score ≥ 4.0 across all criteria",
"good": "Average score ≥ 3.5 across all criteria",
"acceptable": "Average score ≥ 3.0 across all criteria",
"needs_improvement": "Average score < 3.0"
}
},
"usage_instructions": "Rate each criterion independently on 1-5 scale. Calculate average. Identify lowest-scoring criteria for targeted improvement before delivering to user."
}
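As a rough, non-normative illustration of the usage instructions above, the scoring and threshold logic could be sketched in TypeScript as follows; the `RubricScores` shape and the function name are hypothetical, while the thresholds and the "lowest-scoring criteria" step come straight from the rubric.

```typescript
// Hypothetical helper for applying the rubric above: average the criterion
// ratings and map the result onto overall_assessment.thresholds.
type RubricScores = Record<string, number>; // criterion name -> rating (1-5)

function assessLadder(scores: RubricScores) {
  const values = Object.values(scores);
  const average = values.reduce((sum, v) => sum + v, 0) / values.length;

  // Thresholds copied from overall_assessment.thresholds
  const band =
    average >= 4.5 ? "excellent" :
    average >= 4.0 ? "very_good" :
    average >= 3.5 ? "good" :
    average >= 3.0 ? "acceptable" : "needs_improvement";

  // Lowest-scoring criteria are the improvement targets per usage_instructions
  const min = Math.min(...values);
  const weakest = Object.keys(scores).filter((name) => scores[name] === min);

  return { average, band, weakest };
}
```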


@@ -0,0 +1,139 @@
# Abstraction Ladder Example: API Design
## Topic: RESTful API Design
## Overview
This ladder shows how abstract API design principles translate into concrete implementation decisions for a user management endpoint.
## Abstraction Levels
### Level 1 (Most Abstract): Universal Principle
**"Interfaces should be intuitive, consistent, and predictable"**
This applies to all interfaces: APIs, UI, command-line tools, hardware controls. Users should be able to predict behavior based on consistent patterns.
### Level 2: Framework & Standards
**"Follow REST architectural constraints and HTTP semantics"**
RESTful design provides standardized patterns:
- Resources identified by URIs
- Stateless communication
- Standard HTTP methods (GET, POST, PUT, DELETE)
- Appropriate status codes
- HATEOAS (where applicable)
### Level 3: Approach & Patterns
**"Design resource-oriented endpoints with predictable CRUD operations"**
Concrete patterns:
- Use nouns for resources, not verbs
- Plural resource names
- Nested resources show relationships
- Query parameters for filtering/pagination
- Consistent error response format
### Level 4: Specific Implementation
**"User management API with standard CRUD endpoints"**
```
GET /api/v1/users # List all users
GET /api/v1/users/:id # Get specific user
POST /api/v1/users # Create user
PUT /api/v1/users/:id # Update user (full)
PATCH /api/v1/users/:id # Update user (partial)
DELETE /api/v1/users/:id # Delete user
```
Authentication: Bearer token in Authorization header
Content-Type: application/json
### Level 5 (Most Concrete): Precise Details
**Exact endpoint specification:**
```http
GET /api/v1/users/12345 HTTP/1.1
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json
{
"id": 12345,
"email": "user@example.com",
"firstName": "Jane",
"lastName": "Doe",
"createdAt": "2024-01-15T10:30:00Z",
"role": "standard"
}
```
Edge cases:
- User not found: 404 Not Found
- Invalid token: 401 Unauthorized
- Insufficient permissions: 403 Forbidden
- Invalid ID format: 400 Bad Request
- Server error: 500 Internal Server Error
Rate limit: 1000 requests/hour per token
Pagination: max 100 items per page, default 20
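As a minimal illustration of how these details constrain a client, here is a hedged TypeScript sketch. The query parameter names (`page`, `perPage`) and the `X-RateLimit-Remaining` header are assumptions made for the example; only the numeric limits and status codes come from the specification above.

```typescript
// Hypothetical client call exercising the pagination and error rules above.
// The query params (page, perPage) and the X-RateLimit-Remaining header are
// illustrative assumptions; only the numeric limits and status codes are given.
async function listUsers(token: string, page = 1, perPage = 20) {
  if (perPage > 100) throw new RangeError("perPage may not exceed 100");

  const res = await fetch(`/api/v1/users?page=${page}&perPage=${perPage}`, {
    headers: { Authorization: `Bearer ${token}`, Accept: "application/json" },
  });

  if (res.status === 401) throw new Error("Invalid token");
  if (res.status === 403) throw new Error("Insufficient permissions");
  if (res.status === 400) throw new Error("Invalid request");
  if (!res.ok) throw new Error(`Unexpected status ${res.status}`);

  console.log("Requests left this hour:", res.headers.get("X-RateLimit-Remaining"));
  return res.json(); // a page of user objects shaped like the example above
}
```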
## Connections & Transitions
**L1 → L2**: REST provides a proven framework for creating predictable interfaces through standard conventions.
**L2 → L3**: Resource-oriented design is how REST constraints manifest in practical API design.
**L3 → L4**: User management is a concrete application of CRUD patterns to a specific domain resource.
**L4 → L5**: Exact HTTP requests/responses and error handling show how design patterns become actual code.
## Edge Cases & Boundary Testing
### Case 1: Deleting a non-existent user
- **Abstract principle (L1)**: Interface should provide clear feedback
- **Expected (L3)**: Return error for invalid operations
- **Actual (L5)**: `DELETE /users/99999` returns `404 Not Found` with body `{"error": "User not found"}`
- **Alignment**: ✓ Concrete implementation matches principle
### Case 2: Updating with partial data
- **Abstract principle (L1)**: Interface should be predictable
- **Expected (L3)**: PATCH for partial updates, PUT for full replacement
- **Actual (L5)**: `PATCH /users/123` with `{"firstName": "John"}` updates only firstName, leaves other fields unchanged
- **Alignment**: ✓ Follows REST semantics
### Case 3: Bulk operations
- **Abstract principle (L1)**: Interfaces should be consistent
- **Question**: How to delete multiple users?
- **Options**:
- POST /users/bulk-delete (violates resource-oriented design)
- DELETE /users with query params (non-standard)
- Multiple DELETE requests (chatty but consistent)
- **Gap**: Pure REST doesn't handle bulk operations elegantly
- **Resolution**: Accept the trade-off; use `POST /users/actions/bulk-delete` with clear documentation (a request/response sketch follows below)
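A rough sketch of what that documented action could look like. Only the path comes from the resolution above; the `/api/v1` prefix is carried over from the earlier endpoints, and the request/response field names (`userIds`, `deleted`, `notFound`) are hypothetical.

```typescript
// Hypothetical shapes for the documented bulk-delete action. Only the path
// comes from the resolution above; the /api/v1 prefix and field names are
// assumed for consistency with the earlier endpoints.
interface BulkDeleteRequest {
  userIds: number[];
}

interface BulkDeleteResponse {
  deleted: number[];  // IDs that were removed
  notFound: number[]; // IDs with no matching user (per-item analogue of 404)
}

async function bulkDeleteUsers(token: string, userIds: number[]): Promise<BulkDeleteResponse> {
  const body: BulkDeleteRequest = { userIds };
  const res = await fetch("/api/v1/users/actions/bulk-delete", {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Bulk delete failed: ${res.status}`);
  return res.json();
}
```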
## Applications
This ladder is useful for:
- **Onboarding new developers**: Show how design principles inform specific code
- **API review**: Check if implementation aligns with stated principles
- **Documentation**: Explain "why" behind specific endpoint designs
- **Consistency checking**: Ensure new endpoints follow same patterns
- **Client SDK design**: Derive SDK structure from abstraction levels
## Gaps & Assumptions
**Assumptions:**
- Using JSON (could be XML, Protocol Buffers, etc.)
- Token-based auth (could be OAuth, API keys, etc.)
- Synchronous operations (could be async/webhooks)
**Gaps:**
- Real-time updates not covered (WebSockets?)
- File uploads not addressed (multipart/form-data?)
- Versioning strategy mentioned but not detailed
- Caching strategy not specified
- Bulk operations awkward in pure REST
**Questions for deeper exploration:**
- How do GraphQL or gRPC change this ladder?
- What happens at massive scale (millions of requests/sec)?
- How does distributed/microservices architecture affect this?


@@ -0,0 +1,194 @@
# Abstraction Ladder Example: Hiring Process
## Topic: Building an Effective Hiring Process
## Overview
This ladder demonstrates how abstract hiring principles translate into concrete interview procedures. Built bottom-up from actual hiring experiences.
## Abstraction Levels
### Level 5 (Most Concrete): Specific Example
**Tuesday interview for Senior Engineer position:**
- 9:00 AM: Recruiter sends calendar invite with Zoom link
- 10:00 AM: 45-min technical interview
- Candidate shares screen
- Interviewer asks: "Design a URL shortening service"
- Candidate discusses for 30 min while drawing architecture
- 10 min for candidate questions
- Interviewer fills scorecard: System Design=4/5, Communication=5/5
- 11:00 AM: Candidate receives thank-you email
- 11:30 AM: Interviewer submits scores in Greenhouse ATS
- One week later: Debrief meeting reviews the 6 scorecards and makes the hire/no-hire decision
**Specific scorecard criteria:**
- Problem solving: 1-5 scale
- Communication: 1-5 scale
- Culture fit: 1-5 scale
- Technical depth: 1-5 scale
- Bar raiser must approve (score ≥4 average; a calculation sketch follows this list)
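As an illustration of how a scorecard feeds the bar-raiser gate, here is a small TypeScript sketch; the type and function names are hypothetical, and only the 1-5 scale and the average ≥ 4 threshold come from the criteria above.

```typescript
// Hypothetical aggregation of one interviewer's scorecard: average the four
// criteria (1-5 each) and apply the bar-raiser threshold of >= 4.
interface Scorecard {
  problemSolving: number;
  communication: number;
  cultureFit: number;
  technicalDepth: number;
}

function meetsBarRaiserThreshold(card: Scorecard): boolean {
  const ratings = [card.problemSolving, card.communication, card.cultureFit, card.technicalDepth];
  const average = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  return average >= 4;
}
```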
### Level 4: Implementation Pattern
**Structured interview loop with standardized evaluation**
Process:
1. Phone screen (30 min) - basic qualification
2. Take-home assignment (2-4 hours) - practical skills
3. Onsite loop (4-5 hours):
- Technical interview #1: System design
- Technical interview #2: Coding
- Behavioral interview: Past experience
- Hiring manager: Role fit & vision alignment
- Optional: Team member lunch (informal)
4. Debrief within 48 hours
5. Reference checks for strong candidates
6. Offer or rejection with feedback
Each interviewer:
- Uses structured scorecard
- Submits written feedback within 24 hours
- Rates on consistent rubric
- Provides hire/no-hire recommendation
### Level 3: Approach & Method
**Use structured interviews with job-relevant assessments and multiple evaluators**
Key practices:
- Define role requirements before interviews
- Create standardized questions for each competency
- Train interviewers on bias and evaluation
- Use panel of diverse interviewers
- Evaluate on job-specific skills, not proxies
- Aggregate independent ratings before discussion
- Check references to validate assessments
- Provide candidate feedback regardless of outcome
### Level 2: Framework & Research
**Apply evidence-based hiring practices to reduce bias and improve predictive validity**
Research-backed principles:
- Structured interviews outperform unstructured (Schmidt & Hunter meta-analysis)
- Work samples better predict performance than credentials
- Multiple independent evaluators reduce individual bias
- Job analysis identifies actual success criteria
- Standardization enables fair comparisons
- Cognitive diversity in hiring panels improves decisions
Standards to follow:
- EEOC guidelines for non-discrimination
- GDPR/privacy compliance for candidate data
- Industry best practices (e.g., SHRM)
### Level 1 (Most Abstract): Universal Principle
**"Hiring should identify candidates most likely to succeed while treating all applicants fairly and respectfully"**
Core values:
- Meritocracy: Select based on ability to do the job
- Equity: Provide equal opportunity regardless of background
- Predictive validity: Assessments should predict actual job performance
- Candidate experience: Treat people with dignity
- Continuous improvement: Learn from outcomes to refine process
This applies beyond hiring to any selection process: admissions, promotions, awards, grants, etc.
## Connections & Transitions
**L5 → L4**: The specific Tuesday interview exemplifies the structured interview loop approach. Each element (scorecard, timing, Greenhouse submission) reflects the systematic pattern.
**L4 → L3**: The structured loop implements the principle of using job-relevant assessments with multiple evaluators. The 48-hour debrief and standardized scorecards are concrete applications of standardization.
**L3 → L2**: Structured interviews and work samples are the practical application of "evidence-based hiring practices" from I/O psychology research.
**L2 → L1**: Evidence-based practices are how we operationalize the abstract values of merit, equity, and predictive validity.
## Edge Cases & Boundary Testing
### Case 1: Candidate has unconventional background
- **Abstract principle (L1)**: Hire based on merit and ability
- **Standard process (L4)**: Looking for "5+ years experience with React"
- **Edge case**: Candidate has 2 years React but exceptional work sample and adjacent skills
- **Tension**: Strict requirements vs. actual capability
- **Resolution**: Requirements are proxy for skills; assess skills directly through work sample
### Case 2: All interviewers are available except one
- **Abstract principle (L1)**: Multiple evaluators reduce bias
- **Standard process (L3)**: Panel of diverse interviewers
- **Edge case**: Only senior engineers available this week, no product manager
- **Tension**: Speed vs. diverse perspectives
- **Resolution**: Delay one week to get proper panel, or explicitly note missing perspective in decision
### Case 3: Internal referral from CEO
- **Abstract principle (L1)**: Treat all applicants fairly
- **Standard process (L4)**: All candidates go through same loop
- **Edge case**: CEO's referral puts pressure to hire
- **Tension**: Political dynamics vs. process integrity
- **Resolution**: Use same process but ensure bar raiser is involved; separate "good referral" from "strong candidate"
### Case 4: Candidate requests accommodation
- **Abstract principle (L1)**: Treat people with dignity and respect
- **Standard process (L4)**: 45-min technical interview with live coding
- **Edge case**: Candidate has dyslexia, requests written questions in advance
- **Tension**: Standardization vs. accessibility
- **Resolution**: Accommodation maintains what we're testing (problem-solving) while removing irrelevant barrier (reading speed). Provide questions 30 min before; maintain time limit.
## Applications
This ladder is useful for:
**For hiring managers:**
- Design new interview process grounded in principles
- Explain to candidates why process is structured this way
- Train new interviewers on the "why" behind each step
**For executives:**
- Understand ROI of structured hiring (L1-L2)
- Make resource decisions (time investment in L4-L5)
**For candidates:**
- Understand what to expect and why
- See how specific interview ties to broader goals
**For process improvement:**
- Identify where implementation (L5) drifts from principles (L1)
- Test if new tools/techniques align with evidence base (L2)
## Gaps & Assumptions
**Assumptions:**
- Hiring for full-time employee role (not contractor/intern)
- Mid-size tech company context (not 10-person startup or Fortune 500)
- White-collar knowledge work (not frontline/manual labor)
- North American legal/cultural context
- Sufficient candidate volume to justify structure
**Gaps:**
- Doesn't address compensation negotiation
- Doesn't detail sourcing/recruiting before application
- Doesn't specify onboarding after hire
- Limited discussion of diversity/inclusion initiatives
- Doesn't address remote vs. in-person trade-offs
- No mention of employer branding
**What changes at different scales:**
- **Startup (10 people)**: Might skip structured scorecards (everyone knows everyone)
- **Enterprise (10,000 people)**: Might add compliance reviews, more stakeholders
- **High-volume hiring**: Might add automated screening, assessment centers
**What changes in different domains:**
- **Trades/manual labor**: Work samples would be actual task performance
- **Creative roles**: Portfolio review more important than interviews
- **Executive roles**: Board involvement, longer timeline, reference checks crucial
## Lessons Learned
**Principle that held up:**
The core idea (L1) of "fair and predictive" remains true even when implementation (L5) varies wildly by context.
**Principle that required nuance:**
"Multiple evaluators" (L3) assumes independence. In practice, first interviewer's opinion can bias later interviewers. Solution: collect ratings before debrief discussion.
**Missing level:**
Could add L2.5 for company-specific values ("hire for culture add, not culture fit"). Shows how universal principles get customized before becoming process.
**Alternative ladder:**
Could build parallel ladder for "candidate experience" that shows how to treat applicants well. Would share L1 but diverge at L2-L5 with different practices (clear communication, timely feedback, etc.).


@@ -0,0 +1,356 @@
# Abstraction Ladder Methodology
## Abstraction Ladder Workflow
Copy this checklist and track your progress:
```
Abstraction Ladder Progress:
- [ ] Step 1: Choose your direction (top-down, bottom-up, or middle-out)
- [ ] Step 2: Build each abstraction level
- [ ] Step 3: Validate transitions between levels
- [ ] Step 4: Test with edge cases
- [ ] Step 5: Verify coherence and completeness
```
**Step 1: Choose your direction**
Select the approach that fits your purpose. See [Choosing the Right Direction](#choosing-the-right-direction) for detailed guidance on top-down, bottom-up, or middle-out approaches.
**Step 2: Build each abstraction level**
Create 3-5 distinct levels following quality criteria for each level type. See [Building Each Level](#building-each-level) for characteristics and quality checks for universal principles, frameworks, methods, implementations, and precise details.
**Step 3: Validate transitions**
Ensure each level logically derives from the previous one. See [Validating Transitions](#validating-transitions) for transition tests and connection patterns.
**Step 4: Test with edge cases**
Test your abstraction ladder against boundary scenarios to reveal gaps or conflicts. See [Edge Case Discovery](#edge-case-discovery) for techniques to find and analyze edge cases.
**Step 5: Verify coherence and completeness**
Check that the ladder flows as a coherent whole and covers the necessary scope. See [Common Pitfalls](#common-pitfalls) and [Advanced Techniques](#advanced-techniques) for validation approaches.
## Choosing the Right Direction
### Top-Down (Abstract → Concrete)
**When to use:**
- Communicating established principles to new audience
- Designing systems from first principles
- Teaching theoretical concepts
- Creating implementation from requirements
**Process:**
1. Start with the most universal, broadly applicable statement
2. Ask "How would this manifest in practice?"
3. Add constraints and context at each level
4. End with specific, measurable examples
**Example flow** (a code sketch follows this list):
- Level 1: "Software should be maintainable"
- Level 2: "Use modular architecture and clear interfaces"
- Level 3: "Implement dependency injection and single responsibility principle"
- Level 4: "UserService has one public method `getUser(id)` with IUserRepository injected"
- Level 5: "Line 47: `constructor(private repository: IUserRepository) {}`"
### Bottom-Up (Concrete → Abstract)
**When to use:**
- Analyzing existing implementations
- Discovering patterns from observations
- Generalizing from specific cases
- Root cause analysis
**Process:**
1. Start with specific, observable facts
2. Ask "What pattern does this exemplify?"
3. Remove context-specific details at each level
4. End with universal principles
**Example flow** (an implementation sketch follows this list):
- Level 5: "GET /api/users/123 returns 404 when user doesn't exist"
- Level 4: "API returns appropriate HTTP status codes for resource states"
- Level 3: "REST API follows HTTP semantic conventions"
- Level 2: "System communicates errors consistently through standard protocols"
- Level 1: "Interfaces should provide clear, unambiguous feedback"
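For the most concrete rung of this flow, a hedged sketch of a handler that produces the 404 behaviour. It assumes an Express-style route and an in-memory store, both of which are illustrative; the flow only fixes the observable behaviour.

```typescript
// Illustrative Express-style handler for the Level 5 line above:
// GET /api/users/:id answers 404 Not Found when the user does not exist.
// The in-memory store and port are stand-ins, not part of the ladder.
import express from "express";

const app = express();
const users = new Map<number, { id: number; email: string }>([
  [123, { id: 123, email: "user@example.com" }],
]);

app.get("/api/users/:id", (req, res) => {
  const user = users.get(Number(req.params.id));
  if (!user) {
    return res.status(404).json({ error: "User not found" }); // Level 5 behaviour
  }
  res.json(user); // 200 OK with the resource
});

app.listen(3000);
```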
### Middle-Out (Familiar → Both Directions)
**When to use:**
- Starting with something stakeholders understand
- Bridging technical and business perspectives
- Teaching from known to unknown
**Process:**
1. Start at a familiar, mid-level example
2. Expand upward to extract principles
3. Expand downward to show implementation
4. Ensure both directions connect coherently
## Building Each Level
### Level 1: Universal Principles
**Characteristics:**
- Applies across domains and contexts
- Value-based or theory-driven
- Often uses terms like "should," "must," "always"
- Could apply to different industries/fields
**Quality check:**
- Can you apply this to a completely different domain?
- Is it so abstract it's almost philosophical?
- Does it express fundamental values or laws?
**Examples:**
- "Systems should be resilient to failure"
- "Users deserve privacy and control over their data"
- "Organizations should optimize for long-term value"
### Level 2: Categories & Frameworks
**Characteristics:**
- Organizes the domain into conceptual buckets
- References established frameworks or standards
- Defines high-level approaches
- Still domain-general but more specific
**Quality check:**
- Does it reference a framework others would recognize?
- Could practitioners cite this as a "best practice"?
- Is it general enough to apply across similar projects?
**Examples:**
- "Follow SOLID principles for object-oriented design"
- "Implement defense-in-depth security strategy"
- "Use Agile methodology for iterative development"
### Level 3: Methods & Approaches
**Characteristics:**
- Actionable techniques and methods
- Still flexible in implementation
- Describes "how" in general terms
- Multiple valid implementations possible
**Quality check:**
- Could two teams implement this differently but both be correct?
- Does it guide action without dictating exact steps?
- Can you name 3+ ways to implement this?
**Examples:**
- "Use dependency injection for loose coupling"
- "Implement rate limiting to prevent abuse"
- "Create user personas based on research interviews"
### Level 4: Specific Instances
**Characteristics:**
- Concrete implementations
- Project or context-specific
- References actual code, designs, or artifacts
- Limited variation in implementation
**Quality check:**
- Could you point to this in a codebase or document?
- Is it specific to one project/product?
- Would changing this require actual work (not just thinking)?
**Examples:**
- "AuthService uses JWT tokens with 1-hour expiration"
- "Dashboard loads user data via GraphQL endpoint"
- "Button uses 16px padding and #007bff background"
### Level 5: Precise Details
**Characteristics:**
- Measurable, verifiable specifics
- Exact values, configurations, line numbers
- Edge cases and boundary conditions
- No ambiguity in interpretation
**Quality check:**
- Can you measure or test this objectively?
- Is there exactly one interpretation?
- Could QA write a test case from this?
**Examples:**
- "Line 234: `if (userId < 1 || userId > 2147483647) throw RangeError`"
- "Button #submit-btn has tabindex=0 and aria-label='Submit form'"
- "Password must be 8-72 chars, including: a-z, A-Z, 0-9, !@#$%"
## Validating Transitions
### Connection Tests
For each adjacent pair of levels, verify:
1. **Derivation**: Can you logically derive the lower level from the higher level?
- Ask: "Does this concrete example truly exemplify that abstract principle?"
2. **Generalization**: Can you extract the higher level from the lower level?
- Ask: "If I saw only this concrete example, would I infer that principle?"
3. **No jumps**: Is the gap between levels small enough to follow?
- Ask: "Can I explain the transition without introducing entirely new concepts?"
### Red Flags
- **Too similar**: Two levels say essentially the same thing
- **Missing middle**: Big conceptual leap between levels
- **Contradiction**: Concrete example violates abstract principle
- **Jargon shift**: Different terminology without translation
- **Context switch**: Levels address different aspects of the topic
## Edge Case Discovery
Edge cases are concrete scenarios that test the boundaries of abstract principles.
### Finding Edge Cases
1. **Boundary testing**: What happens at extremes?
- Zero, negative, maximum values
- Empty sets, single items, massive scale
- Start/end of time ranges
2. **Contradiction hunting**: When does the principle not apply?
- Special circumstances
- Conflicting principles
- Trade-offs that force compromise
3. **Real-world friction**: What makes implementation hard?
- Technical limitations
- Business constraints
- User behavior
- Legacy systems
### Documenting Edge Cases
For each edge case, document:
- **Scenario**: Specific concrete situation
- **Expectation**: What abstract principle suggests should happen
- **Reality**: What actually happens
- **Gap**: Why there's a difference
- **Resolution**: How to handle it
**Example:**
- **Scenario**: User uploads 5GB profile photo
- **Expectation**: "System should accept user input" (abstract principle)
- **Reality**: Server rejects file > 10MB
- **Gap**: Principle doesn't account for resource limits
- **Resolution**: Revise principle to "System should accept reasonable user input within documented constraints"
## Common Pitfalls
### 1. Fake Concreteness
**Problem**: Using specific-sounding language without actual specificity.
**Bad**: "The system should have good performance"
**Good**: "The system should respond to API requests in < 200ms at p95"
### 2. Missing the Abstract
**Problem**: Starting too concrete, never reaching universal principles.
**Bad**: Levels 1-5 all describe different API endpoints
**Good**: Extract what makes a "good API" from those endpoints
### 3. Inconsistent Granularity
**Problem**: Some levels are finely divided, others make huge jumps.
**Fix**: Ensure roughly equal conceptual distance between all adjacent levels.
### 4. Topic Drift
**Problem**: Different levels address different aspects of the topic.
**Bad**:
- Level 1: "Software should be secure"
- Level 2: "Use encryption for data"
- Level 3: "Users prefer simple interfaces" ← Drift!
**Good**: Keep all levels on the same thread (security, in this case).
### 5. Over-specification
**Problem**: Making higher levels too specific too early.
**Bad**: Level 1: "React apps should use Redux Toolkit"
**Good**: Level 1: "Applications should manage state predictably"
## Advanced Techniques
### Multiple Ladders
For complex topics, create multiple parallel ladders for different aspects:
**Topic: E-commerce Checkout**
- Ladder A: Security (data protection → PCI compliance → specific encryption)
- Ladder B: UX (easy purchase → progress indication → specific button placement)
- Ladder C: Performance (fast checkout → async processing → specific caching strategy)
### Ladder Mapping
Connect ladders at various levels to show relationships:
```
Ladder A (Feature):            Ladder B (Architecture):
L1: Improve user engagement  ← L1: System should be modular
L2: Add social features      ← L2: Use microservices
L3: Implement commenting     ← L3: Comment service
L4: POST /comments endpoint  ← L4: Express.js REST API
```
### Audience Targeting
Create the same ladder with different emphasis for different audiences:
**For executives**: Focus on levels 1-2 (strategy, ROI)
**For managers**: Focus on levels 2-3 (approach, methods)
**For engineers**: Focus on levels 3-5 (implementation, details)
### Reverse Engineering
Take existing concrete work and extract the abstraction ladder:
1. Document exactly what was built (Level 5)
2. Ask "Why this specific implementation?" (Level 4)
3. Ask "What approach guided this?" (Level 3)
4. Continue upward to principles
This reveals:
- Implicit assumptions
- Unstated principles
- Gaps between intent and execution
### Gap Analysis
Compare ideal ladder vs. actual implementation:
**Ideal**:
- L1: "Products should be accessible"
- L2: "Follow WCAG 2.1 AA"
- L3: "All interactive elements keyboard navigable"
**Actual**:
- L5: "Some buttons missing tabindex"
- Inference: Gap between L1 intention and L5 reality
Use gap analysis to:
- Identify technical debt
- Find missing requirements
- Plan improvements
## Summary
Effective abstraction ladders:
- Have clear, logical transitions between levels
- Cover both universal principles and specific details
- Reveal assumptions and edge cases
- Serve the user's actual need (communication, design, validation, etc.)
Remember: The ladder is a tool for thinking and communicating, not an end in itself. Build what's useful for the task at hand.


@@ -0,0 +1,219 @@
# Quick-Start Template
## Workflow
Copy this checklist and track your progress:
```
Abstraction Ladder Progress:
- [ ] Step 1: Gather inputs (topic, purpose, audience, levels, direction)
- [ ] Step 2: Choose starting point and build levels
- [ ] Step 3: Add connections and transitions
- [ ] Step 4: Test with edge cases
- [ ] Step 5: Validate quality checklist
```
**Step 1: Gather inputs**
Define topic (what concept/system/problem?), purpose (communication/design/validation?), audience (who will use this?), levels (3-5, default 4), direction (top-down/bottom-up/middle-out), and focus areas (edge cases/communication/implementation).
**Step 2: Choose starting point and build levels**
Use [Common Starting Points](#common-starting-points) to select direction. Top-down for teaching/design, bottom-up for analysis/patterns, middle-out for bridging gaps. Build each level ensuring distinctness and logical flow.
**Step 3: Add connections and transitions**
Explain how the levels flow together as a coherent whole. Each level should logically derive from the previous one. See [Template Structure](#template-structure) for the format.
**Step 4: Test with edge cases**
Identify 2-3 boundary scenarios that test principles. For each: describe scenario, state what abstract principle suggests, note what actually happens, assess alignment (matches/conflicts/requires nuance).
**Step 5: Validate quality checklist**
Use [Quality Checklist](#quality-checklist) to verify: levels are distinct, concrete level has specifics, abstract level is universal, edge cases are meaningful, assumptions stated, serves stated purpose.
## Template Structure
Copy this structure to create your abstraction ladder:
```markdown
# Abstraction Ladder: [Your Topic]
## Overview
**Topic**: [What you're exploring]
**Purpose**: [Why you're building this ladder]
**Audience**: [Who will use this]
## Abstraction Levels
### Level 1 (Most Abstract): [Give it a label]
[Universal principle or highest-level concept]
Why this matters: [Explain the significance]
### Level 2: [Label]
[Framework, category, or general approach]
Connection to L1: [How does this derive from Level 1?]
### Level 3: [Label]
[Specific method or implementation approach]
Connection to L2: [How does this derive from Level 2?]
### Level 4 (Most Concrete): [Label]
[Exact implementation with specific details]
Connection to L3: [How does this derive from Level 3?]
*Add Level 5 if you need more granularity*
## Connections & Transitions
[Explain how the levels flow together as a coherent whole]
**Key insight**: [What becomes clear when you see all levels together?]
## Edge Cases & Boundary Testing
### Edge Case 1: [Name]
- **Scenario**: [Concrete situation]
- **Abstract principle**: [What L1/L2 suggests should happen]
- **Reality**: [What actually happens]
- **Alignment**: [✓ matches / ✗ conflicts / ~ requires nuance]
### Edge Case 2: [Name]
[Same structure]
## Applications
This ladder is useful for:
- [Use case 1]
- [Use case 2]
- [Use case 3]
## Gaps & Assumptions
**Assumptions:**
- [What are we taking for granted?]
- [What context is this specific to?]
**Gaps:**
- [What's not covered?]
- [What questions remain?]
**What would change if:**
- [Different scale? Different domain? Different constraints?]
```
## Common Starting Points
### Start Top-Down (Abstract → Concrete)
**Good for**: Teaching, designing from principles, communication to varied audiences
**Prompt to yourself**:
1. "What's the most universal statement I can make about this topic?"
2. "How would this principle manifest in practice?"
3. "What framework implements this principle?"
4. "What's a concrete example?"
5. "What are the exact, measurable details?"
**Example**:
- L1: "Communication should be clear"
- L2: "Use plain language and structure"
- L3: "Organize documents with headings, bullets, short paragraphs"
- L4: "This document uses H2 headings every 3-4 paragraphs, bullet lists for steps"
### Start Bottom-Up (Concrete → Abstract)
**Good for**: Analyzing existing work, generalizing patterns, root cause analysis
**Prompt to yourself**:
1. "What specific thing am I looking at?"
2. "What pattern does this exemplify?"
3. "What general approach does that pattern reflect?"
4. "What framework supports that approach?"
5. "What universal principle underlies this?"
**Example** (a component sketch follows this list):
- L5: "Button has onClick={handleSubmit} and disabled={!isValid}"
- L4: "Form button is disabled until validation passes"
- L3: "Prevent invalid form submission through UI controls"
- L2: "Use defensive programming and client-side validation"
- L1: "Systems should prevent errors, not just catch them"
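A hedged sketch connecting L5 and L4 of this flow to component code. It assumes a React function component and a trivial stand-in validation rule; the example above only fixes the `onClick={handleSubmit}` and `disabled={!isValid}` props.

```tsx
// Illustrative React component for L5-L4 above: the submit button carries
// onClick={handleSubmit} and disabled={!isValid}, so invalid submissions are
// blocked at the UI level. The validation rule itself is a stand-in.
import { useState } from "react";

export function SignupForm({ onSubmit }: { onSubmit: (email: string) => void }) {
  const [email, setEmail] = useState("");
  const isValid = email.includes("@"); // stand-in for real validation

  const handleSubmit = () => onSubmit(email);

  return (
    <form>
      <input value={email} onChange={(e) => setEmail(e.target.value)} />
      <button type="button" onClick={handleSubmit} disabled={!isValid}>
        Submit
      </button>
    </form>
  );
}
```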
### Start Middle-Out (Familiar → Both Directions)
**Good for**: Building shared understanding, bridging expertise gaps
**Prompt to yourself**:
1. "What's something everyone already understands?"
2. Go up: "What principle does this exemplify?"
3. Go down: "How exactly is this implemented?"
4. Continue in both directions
**Example** (start at L3):
- L1: ↑ "Products should be accessible to all"
- L2: ↑ "Follow WCAG guidelines"
- **L3: "Add alt text to images"** ← Start here
- L4: ↓ `<img src="logo.png" alt="Company name logo">`
- L5: ↓ Screen reader reads: "Company name logo, image"
## Quality Checklist
Before finalizing, check:
- [ ] Each level is clearly distinct from adjacent levels
- [ ] I can explain the transition between any two adjacent levels
- [ ] Most concrete level has specific, measurable details
- [ ] Most abstract level is broadly applicable beyond this context
- [ ] Edge cases test the boundaries meaningfully
- [ ] Assumptions are stated explicitly
- [ ] The ladder serves the stated purpose
- [ ] Someone unfamiliar with the topic could follow the logic
## Guardrails
**Do:**
- State what you don't know or aren't sure about
- Include edge cases that challenge the principles
- Make concrete levels truly concrete (numbers, specifics)
- Make abstract levels truly universal (apply to other domains)
**Don't:**
- Use vague language like "good," "better," "appropriate" without defining
- Make huge jumps between levels (missing middle)
- Let different levels address different aspects of the topic
- Assume expertise your audience doesn't have
## Next Steps After Creating Ladder
**For communication:**
- Share L1-L2 with executives
- Share L2-L3 with managers
- Share L3-L5 with implementers
**For design:**
- Use L1-L2 to guide decisions
- Use L3-L4 to specify requirements
- Use L5 for implementation
**For validation:**
- Test if L5 reality matches L1 principles
- Find gaps between levels
- Identify where principles break down
**For documentation:**
- Use as table of contents (each level = section depth)
- Create expandable sections (click for more detail)
- Link levels to relevant resources
## Examples to Study
See `resources/examples/` for complete examples:
- `api-design.md` - Technical example (API design principles)
- `hiring-process.md` - Process example (hiring practices)
Each example shows different techniques and applications.