Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions


@@ -0,0 +1,197 @@
---
name: negative-contrastive-framing
description: Use when clarifying fuzzy boundaries, defining quality criteria, teaching by counterexample, preventing common mistakes, setting design guardrails, disambiguating similar concepts, refining requirements through anti-patterns, creating clear decision criteria, or when user mentions near-miss examples, anti-goals, what not to do, negative examples, counterexamples, or boundary clarification.
---
# Negative Contrastive Framing
## Table of Contents
- [Purpose](#purpose)
- [When to Use](#when-to-use)
- [What Is It](#what-is-it)
- [Workflow](#workflow)
- [Common Patterns](#common-patterns)
- [Guardrails](#guardrails)
- [Quick Reference](#quick-reference)
## Purpose
Define concepts, quality criteria, and boundaries by showing what they're NOT—using anti-goals, near-miss examples, and failure patterns to create crisp decision criteria where positive definitions alone are ambiguous.
## When to Use
**Clarifying Fuzzy Boundaries:**
- Positive definition exists but edges are unclear
- Multiple interpretations cause confusion
- Team debates what "counts" as meeting criteria
- Need to distinguish similar concepts
**Teaching & Communication:**
- Explaining concepts to learners who need counterexamples
- Training teams to recognize anti-patterns
- Creating style guides with do's and don'ts
- Onboarding with common mistake prevention
**Setting Standards:**
- Defining code quality (show bad patterns)
- Establishing design principles (show violations)
- Creating evaluation rubrics (clarify failure modes)
- Building decision criteria (identify disqualifiers)
**Preventing Errors:**
- Near-miss incidents revealing risk patterns
- Common mistakes that need explicit guards
- Edge cases that almost pass but shouldn't
- Subtle failures that look like successes
## What Is It
Negative contrastive framing defines something by showing what it's NOT:
**Types of Negative Examples:**
1. **Anti-goals:** Opposite of desired outcome ("not slow" → define fast)
2. **Near-misses:** Examples that almost qualify but fail on key dimension
3. **Failure patterns:** Common mistakes that violate criteria
4. **Boundary cases:** Edge examples clarifying where line is drawn
**Example:**
Defining "good UX":
- **Positive:** "Intuitive, efficient, delightful"
- **Negative contrast:**
- ❌ Near-miss: Fast but confusing (speed without clarity)
- ❌ Anti-pattern: Dark patterns (manipulative design)
- ❌ Failure: Requires manual to understand basic tasks
## Workflow
Copy this checklist and track your progress:
```
Negative Contrastive Framing Progress:
- [ ] Step 1: Define positive concept
- [ ] Step 2: Identify negative examples
- [ ] Step 3: Analyze contrasts
- [ ] Step 4: Validate quality
- [ ] Step 5: Deliver framework
```
**Step 1: Define positive concept**
Start with initial positive definition, identify why it's ambiguous or fuzzy (multiple interpretations, edge cases unclear), and clarify purpose (teaching, decision-making, quality control). See [Common Patterns](#common-patterns) for typical applications.
**Step 2: Identify negative examples**
For simple cases with clear anti-patterns → Use [resources/template.md](resources/template.md) to structure anti-goals, near-misses, and failure patterns. For complex cases with subtle boundaries → Study [resources/methodology.md](resources/methodology.md) for techniques like contrast matrices and boundary mapping.
**Step 3: Analyze contrasts**
Create `negative-contrastive-framing.md` with: positive definition, 3-5 anti-goals, 5-10 near-miss examples with explanations, common failure patterns, clear decision criteria ("passes if..." / "fails if..."), and boundary cases. Ensure contrasts reveal the *why* behind criteria.
**Step 4: Validate quality**
Self-assess using [resources/evaluators/rubric_negative_contrastive_framing.json](resources/evaluators/rubric_negative_contrastive_framing.json). Check: negative examples span the boundary space, near-misses are genuinely close calls, contrasts clarify criteria better than positive definition alone, failure patterns are actionable guards. Minimum standard: Average score ≥ 3.5.
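As a sketch of the self-assessment arithmetic (assuming the rubric JSON keeps the `criteria[].name` and `criteria[].weight` fields shown later in this commit, and that you supply your own 1-5 rating per criterion), the weighted average could be computed like this:
```python
# Minimal sketch: weighted-average self-assessment against the rubric JSON.
# Assumes the rubric's criteria[].name and criteria[].weight fields; the
# per-criterion scores dict is something you supply (1-5 per criterion).
import json

def weighted_rubric_score(rubric_path: str, scores: dict) -> float:
    with open(rubric_path) as f:
        rubric = json.load(f)
    total = weight_sum = 0.0
    for criterion in rubric["criteria"]:
        total += criterion["weight"] * scores[criterion["name"]]
        weight_sum += criterion["weight"]
    return total / weight_sum

# Example (illustrative ratings):
# score = weighted_rubric_score(
#     "resources/evaluators/rubric_negative_contrastive_framing.json",
#     {"Anti-Goal Quality": 4, "Near-Miss Quality": 4, "Failure Pattern Identification": 3,
#      "Contrast Analysis Depth": 4, "Decision Criteria Operationalization": 3,
#      "Boundary Completeness": 3, "Actionability": 4, "Instructiveness": 4})
# score >= 3.5 is the minimum standard for Step 4.
```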
**Step 5: Deliver framework**
Present completed framework with positive definition sharpened by negatives, most instructive near-misses highlighted, decision criteria operationalized as checklist, common mistakes identified for prevention.
## Common Patterns
### By Domain
**Engineering (Code Quality):**
- Positive: "Maintainable code"
- Negative: God objects, tight coupling, unclear names, magic numbers, exception swallowing
- Near-miss: Well-commented spaghetti code (documentation without structure)
**Design (UX):**
- Positive: "Intuitive interface"
- Negative: Hidden actions, inconsistent patterns, cryptic error messages
- Near-miss: Beautiful but unusable (form over function)
**Communication (Clear Writing):**
- Positive: "Clear documentation"
- Negative: Jargon-heavy, assuming context, no examples, passive voice
- Near-miss: Technically accurate but incomprehensible to target audience
**Strategy (Market Positioning):**
- Positive: "Premium brand"
- Negative: Overpriced without differentiation, luxury signaling without substance
- Near-miss: High price without service quality to match
### By Application
**Teaching:**
- Show common mistakes students make
- Provide near-miss solutions revealing misconceptions
- Identify "looks right but is wrong" patterns
**Decision Criteria:**
- Define disqualifiers (automatic rejection criteria)
- Show edge cases that almost pass
- Clarify ambiguous middle ground
**Quality Control:**
- Identify anti-patterns to avoid
- Show subtle defects that might pass inspection
- Define clear pass/fail boundaries
## Guardrails
**Near-Miss Selection:**
- Near-misses must be genuinely close to positive examples
- Should reveal specific dimension that fails (not globally bad)
- Avoid trivial failures—focus on subtle distinctions
**Contrast Quality:**
- Explain *why* each negative example fails
- Show what dimension violates criteria
- Make contrasts instructive, not just lists
**Completeness:**
- Cover failure modes across key dimensions
- Don't cherry-pick—include hard-to-classify cases
- Show spectrum from clear pass to clear fail
**Actionability:**
- Translate insights into decision rules
- Provide guards/checks to prevent failures
- Make criteria operationally testable
**Avoid:**
- Strawman negatives (unrealistically bad examples)
- Negatives without explanation (show what's wrong and why)
- Missing the "close call" zone (all examples clearly pass or fail)
## Quick Reference
**Resources:**
- `resources/template.md` - Structured format for anti-goals, near-misses, failure patterns
- `resources/methodology.md` - Advanced techniques (contrast matrices, boundary mapping, failure taxonomies)
- `resources/evaluators/rubric_negative_contrastive_framing.json` - Quality criteria
**Output:** `negative-contrastive-framing.md` with positive definition, anti-goals, near-misses with analysis, failure patterns, decision criteria
**Success Criteria:**
- Negative examples span boundary space (not just extremes)
- Near-misses are instructive close calls
- Contrasts clarify ambiguous criteria
- Failure patterns are actionable guards
- Decision criteria operationalized
- Score ≥ 3.5 on rubric
**Quick Decisions:**
- **Clear anti-patterns?** → Template only
- **Subtle boundaries?** → Use methodology for contrast matrices
- **Teaching application?** → Emphasize near-misses revealing misconceptions
- **Quality control?** → Focus on failure pattern taxonomy
**Common Mistakes:**
1. Only showing extreme negatives (not instructive near-misses)
2. Lists without analysis (not explaining why examples fail)
3. Cherry-picking easy cases (avoiding hard boundary calls)
4. Strawman negatives (unrealistically bad)
5. No operationalization (criteria remain fuzzy despite contrasts)
**Key Insight:**
Negative examples are most valuable when they're *almost* positive—close calls that force articulation of subtle criteria invisible in positive definition alone.


@@ -0,0 +1,298 @@
{
"name": "Negative Contrastive Framing Evaluator",
"description": "Evaluate quality of negative contrastive framing—assessing anti-goals, near-misses, failure patterns, and how well negative examples clarify boundaries and criteria.",
"version": "1.0.0",
"criteria": [
{
"name": "Anti-Goal Quality",
"description": "Evaluates quality of anti-goals (opposite of desired outcomes) - do they represent true opposites and clarify boundaries?",
"weight": 1.0,
"scale": {
"1": {
"label": "No or weak anti-goals",
"description": "Anti-goals missing, too vague, or just bad versions of goal (not true opposites). Example: For 'fast code,' anti-goal is 'slow code' without specificity."
},
"2": {
"label": "Basic anti-goals",
"description": "2-3 anti-goals listed but generic or obvious. Limited insight into boundaries. Example: Anti-goals are extreme cases everyone already knows to avoid."
},
"3": {
"label": "Clear anti-goals",
"description": "3-5 anti-goals that represent different dimensions of failure. Each has clear explanation of why it's opposite. Provides reasonable boundary clarification."
},
"4": {
"label": "Insightful anti-goals",
"description": "3-5 well-chosen anti-goals spanning key failure dimensions. Each reveals something non-obvious about what makes goal work. Anti-goals are specific and instructive."
},
"5": {
"label": "Exceptional anti-goals",
"description": "Anti-goals comprehensively span opposition space, reveal subtle aspects of goal, include surprising/counterintuitive opposites. Explanations show deep understanding of what makes goal succeed."
}
}
},
{
"name": "Near-Miss Quality",
"description": "Evaluates quality of near-miss examples - are they genuinely close calls that fail on specific dimensions?",
"weight": 1.3,
"scale": {
"1": {
"label": "No near-misses or obviously bad examples",
"description": "Near-misses missing, or examples are clearly bad (not 'near'). Example: For 'good UX,' showing completely broken interface as near-miss."
},
"2": {
"label": "Weak near-misses",
"description": "1-3 examples labeled as near-misses but not genuinely close. Fail on multiple dimensions (can't isolate lesson). Not instructive."
},
"3": {
"label": "Genuine near-misses",
"description": "3-5 examples that are genuinely close to passing. Each fails on identifiable dimension. Somewhat instructive but explanations could be deeper."
},
"4": {
"label": "Instructive near-misses",
"description": "5-10 well-chosen near-misses that fool initial judgment. Each isolates specific failing dimension. Explanations clarify why failure occurs and what dimension matters. Reveals subtleties in criteria."
},
"5": {
"label": "Exemplary near-misses",
"description": "10+ near-misses that systematically cover boundary space. Each is plausible mistake someone would make. Failures isolate single dimensions. Explanations reveal non-obvious criteria and provide 'aha' moments. Build pattern recognition."
}
}
},
{
"name": "Failure Pattern Identification",
"description": "Evaluates identification and documentation of common failure patterns with detection and prevention guidance",
"weight": 1.2,
"scale": {
"1": {
"label": "No failure patterns",
"description": "Failure patterns not identified or just list of bad examples without pattern recognition. No actionable guidance."
},
"2": {
"label": "Basic failure listing",
"description": "2-3 failure modes listed but not organized into patterns. Minimal detection or prevention guidance. Example: Lists failures without explaining commonality."
},
"3": {
"label": "Identified patterns",
"description": "3-5 failure patterns identified and named. Basic detection heuristics and prevention guidance. Patterns are recognizable. Reasonable actionability."
},
"4": {
"label": "Comprehensive patterns",
"description": "5-7 failure patterns well-documented with: pattern name, description, why it fails, how to detect, how to prevent, examples. Patterns are memorable and actionable. Create practical guards."
},
"5": {
"label": "Systematic failure taxonomy",
"description": "7+ failure patterns organized into taxonomy (by severity, type, detection difficulty). Each pattern thoroughly documented with root causes, detection methods, prevention guards, and examples. Patterns are reusable across contexts. Creates comprehensive quality checklist."
}
}
},
{
"name": "Contrast Analysis Depth",
"description": "Evaluates depth of analysis - are contrasts just listed, or is there analysis of why examples fail and what dimensions matter?",
"weight": 1.2,
"scale": {
"1": {
"label": "No analysis",
"description": "Negative examples listed without explanation of why they fail or what dimensions they violate. Just 'good' vs 'bad' with no insight."
},
"2": {
"label": "Surface analysis",
"description": "Brief explanations of why examples fail but lacks depth. Doesn't identify specific dimensions. Example: 'This fails because it's bad' without explaining what aspect fails."
},
"3": {
"label": "Dimensional identification",
"description": "Identifies key dimensions (3-5) and explains which dimensions each negative example violates. Basic contrast analysis showing what differs between pass and fail."
},
"4": {
"label": "Deep contrast analysis",
"description": "Thorough analysis of contrasts revealing subtle differences. Identifies necessary vs sufficient conditions. Explains interactions between dimensions. Uses contrast matrix or boundary mapping. Clarifies ambiguous cases."
},
"5": {
"label": "Revelatory analysis",
"description": "Exceptional depth of analysis that reveals non-obvious criteria invisible in positive definition alone. Multi-dimensional analysis showing dimension interactions, compensation effects, thresholds. Operationalizes fuzzy criteria through contrast insights. Handles ambiguous boundary cases explicitly."
}
}
},
{
"name": "Decision Criteria Operationalization",
"description": "Evaluates whether analysis translates into clear, testable decision criteria for determining pass/fail",
"weight": 1.2,
"scale": {
"1": {
"label": "No operationalization",
"description": "Criteria remain fuzzy after analysis. No clear pass/fail rules. Can't apply criteria consistently to new examples."
},
"2": {
"label": "Vague criteria",
"description": "Some attempt at criteria but still subjective. Example: 'Should be intuitive' without defining what makes something intuitive. Hard to test."
},
"3": {
"label": "Basic operationalization",
"description": "Decision criteria stated with reasonable clarity. Pass/fail conditions identified. Somewhat testable but may require judgment in edge cases. Example: Checklist with 3-5 items."
},
"4": {
"label": "Clear operational criteria",
"description": "Decision criteria are testable and specific. Clear pass conditions and disqualifiers. Handles edge cases explicitly. Provides detection heuristics. Can be applied consistently. Example: Measurable thresholds, specific checklist, ambiguous cases addressed."
},
"5": {
"label": "Rigorous operationalization",
"description": "Criteria fully operationalized with objective tests, thresholds, and decision rules. Handles all edge cases and ambiguous middle ground explicitly. Provides guards/checks that can be automated or consistently applied. Criteria are falsifiable and have been validated. Inter-rater reliability would be high."
}
}
},
{
"name": "Boundary Completeness",
"description": "Evaluates whether negative examples comprehensively cover the boundary space or leave gaps",
"weight": 1.1,
"scale": {
"1": {
"label": "Incomplete coverage",
"description": "Negative examples only cover obvious failure modes. Major gaps in boundary space. Missing entire categories of near-misses or failure patterns."
},
"2": {
"label": "Limited coverage",
"description": "Covers some failure modes but significant gaps remain. Examples cluster in one area of boundary space. Doesn't systematically vary dimensions."
},
"3": {
"label": "Reasonable coverage",
"description": "Covers major failure modes and most common near-misses. Some gaps in boundary space acceptable. Examples span multiple dimensions. Representative but not exhaustive."
},
"4": {
"label": "Comprehensive coverage",
"description": "Systematically covers boundary space. Negative examples span all key dimensions. Includes obvious and subtle failures. Near-misses cover single-dimension failures across each critical dimension. Few gaps."
},
"5": {
"label": "Exhaustive coverage",
"description": "Complete, systematic coverage of boundary space using techniques like contrast matrices or boundary mapping. Negative examples span full spectrum from clear pass to clear fail. All combinations of dimensional failures represented. Explicitly identifies any remaining ambiguous cases. No significant gaps."
}
}
},
{
"name": "Actionability",
"description": "Evaluates whether framework provides actionable guards, checklists, or heuristics for preventing failures",
"weight": 1.0,
"scale": {
"1": {
"label": "Not actionable",
"description": "Analysis interesting but provides no practical guidance for applying criteria or preventing failures. No guards, checklists, or heuristics."
},
"2": {
"label": "Minimally actionable",
"description": "Some guidance but vague or hard to apply. Example: 'Watch out for pattern X' without detection method or prevention strategy."
},
"3": {
"label": "Reasonably actionable",
"description": "Provides basic checklist or guards. Can be applied with some effort. Example: Prevention checklist with 3-5 items, basic detection heuristics for failure patterns."
},
"4": {
"label": "Highly actionable",
"description": "Comprehensive prevention checklist, detection heuristics for each failure pattern, and clear decision criteria. Can be immediately applied in practice. Example: Specific guards, red flags to watch for, step-by-step evaluation process."
},
"5": {
"label": "Immediately implementable",
"description": "Complete action framework with: prevention checklist, detection heuristics, decision flowchart, measurement criteria, and examples for calibration. Could be automated or implemented as process. Teams can apply consistently with minimal training."
}
}
},
{
"name": "Instructiveness",
"description": "Evaluates whether negative examples reveal insights not obvious from positive definition alone",
"weight": 1.0,
"scale": {
"1": {
"label": "Not instructive",
"description": "Negative examples obvious or redundant with positive definition. Don't reveal anything new. Example: For 'fast code,' showing extremely slow code as negative (already implied)."
},
"2": {
"label": "Minimally instructive",
"description": "Some insights but mostly confirming what positive definition already implies. Near-misses are weak. Little 'aha' value."
},
"3": {
"label": "Reasonably instructive",
"description": "Negative examples clarify boundaries and reveal some non-obvious aspects of criteria. Near-misses show trade-offs or subtle requirements. Adds value beyond positive definition."
},
"4": {
"label": "Highly instructive",
"description": "Negative examples reveal important subtleties invisible in positive definition. Near-misses create 'aha' moments. Analysis articulates implicit criteria. Significantly deepens understanding of concept."
},
"5": {
"label": "Transformatively instructive",
"description": "Negative examples fundamentally reshape understanding of concept. Near-misses reveal surprising requirements or common misconceptions. Analysis makes previously fuzzy criteria explicit and operational. Reader gains ability to recognize patterns they couldn't see before. Teaching value exceptional."
}
}
}
],
"guidance": {
"by_application": {
"teaching": {
"focus": "Prioritize near-miss quality and instructiveness. Weight near-misses heavily (1.5x). Use examples that reveal common student misconceptions.",
"typical_scores": "Near-miss quality and instructiveness should be 4+. Other criteria can be 3+.",
"red_flags": "Near-misses are obviously bad (not close calls), examples don't build pattern recognition, no connection to common errors"
},
"decision_criteria": {
"focus": "Prioritize operationalization and actionability. Must translate into clear pass/fail rules.",
"typical_scores": "Operationalization and actionability should be 4+. Boundary completeness 3+.",
"red_flags": "Criteria remain fuzzy, no clear decision rules, can't consistently apply to new examples"
},
"quality_control": {
"focus": "Prioritize failure patterns and boundary completeness. Need systematic coverage and prevention guards.",
"typical_scores": "Failure patterns and boundary completeness should be 4+. Actionability 4+.",
"red_flags": "Missing common failure modes, no prevention checklist, can't systematically check quality"
},
"requirements_clarification": {
"focus": "Prioritize contrast analysis depth and boundary completeness. Need to expose ambiguities.",
"typical_scores": "Contrast analysis and boundary completeness should be 4+. Near-miss quality 3+.",
"red_flags": "Ambiguous cases not addressed, boundary gaps, unclear which edge cases pass/fail"
}
},
"by_domain": {
"engineering": {
"anti_goal_examples": "Unmaintainable code, tight coupling, unclear naming, no tests",
"near_miss_examples": "Well-commented spaghetti code, over-engineered simple solution, premature optimization",
"failure_patterns": "God objects, leaky abstractions, magic numbers, exception swallowing",
"operationalization": "Cyclomatic complexity thresholds, test coverage %, linter rules"
},
"design": {
"anti_goal_examples": "Unusable interface, inaccessible design, inconsistent patterns",
"near_miss_examples": "Beautiful but non-intuitive, feature-complete but overwhelming, accessible but unappealing",
"failure_patterns": "Form over function, hidden affordances, inconsistent mental models",
"operationalization": "Task completion rates, time on task, user satisfaction scores, accessibility audits"
},
"communication": {
"anti_goal_examples": "Incomprehensible writing, audience mismatch, missing context",
"near_miss_examples": "Technically accurate but inaccessible, comprehensive but unclear structure, engaging but inaccurate",
"failure_patterns": "Jargon overload, buried lede, assumed context, passive voice abuse",
"operationalization": "Reading level scores, comprehension tests, jargon ratio, example density"
},
"strategy": {
"anti_goal_examples": "Wrong market, misaligned resources, unsustainable model",
"near_miss_examples": "Good strategy but bad timing, right market but wrong positioning, strong execution of wrong plan",
"failure_patterns": "Strategy-execution gap, resource mismatch, underestimating competition",
"operationalization": "Market criteria checklist, resource requirements, success metrics, kill criteria"
}
}
},
"common_failure_modes": {
"strawman_negatives": "Negative examples are unrealistically bad, creating false sense of understanding. Fix: Use realistic failures people actually make.",
"missing_near_misses": "Only showing clear passes and clear fails, no instructive close calls. Fix: Generate examples that fail on single dimension.",
"no_analysis": "Listing negatives without explaining why they fail or what dimension matters. Fix: Add dimension analysis and contrast insights.",
"incomplete_coverage": "Examples cluster in one area, missing other failure modes. Fix: Systematically vary dimensions using contrast matrix.",
"fuzzy_criteria": "After analysis, still can't apply criteria consistently to new examples. Fix: Operationalize criteria with testable conditions.",
"not_actionable": "Interesting analysis but no practical guards or prevention checklist. Fix: Extract actionable heuristics and detection methods."
},
"excellence_indicators": [
"Near-misses are genuinely close calls that fool initial judgment (not obviously bad)",
"Each near-miss isolates failure on single dimension (teaches specific lesson)",
"Failure patterns are memorable, recognizable, and common (not rare edge cases)",
"Contrast analysis reveals criteria invisible in positive definition alone",
"Decision criteria are operationalized with testable conditions or measurable thresholds",
"Boundary space systematically covered using contrast matrix or dimension variation",
"Actionable guards/checklist provided that can be immediately implemented",
"Examples span full spectrum: clear pass, borderline pass, borderline fail, clear fail",
"Ambiguous middle ground explicitly addressed with context-dependent rules",
"Framework has high teaching value - builds pattern recognition skills"
],
"evaluation_notes": {
"scoring": "Calculate weighted average across all criteria. Minimum passing score: 3.0 (basic quality). Production-ready target: 3.5+. Excellence threshold: 4.2+. For teaching applications, weight near-miss quality and instructiveness at 1.5x.",
"context": "Adjust expectations by application. Teaching requires exceptional near-misses (4+). Decision criteria requires strong operationalization (4+). Quality control requires comprehensive failure patterns (4+). Requirements clarification requires thorough boundary coverage (4+).",
"iteration": "Low scores indicate specific improvement areas. Priority order: 1) Improve near-miss quality (highest ROI), 2) Add contrast analysis depth, 3) Operationalize criteria, 4) Expand boundary coverage, 5) Create actionable guards. Near-misses are most valuable—invest effort there first."
}
}


@@ -0,0 +1,486 @@
# Negative Contrastive Framing Methodology
## Table of Contents
1. [Contrast Matrix Design](#1-contrast-matrix-design)
2. [Boundary Mapping](#2-boundary-mapping)
3. [Failure Taxonomy](#3-failure-taxonomy)
4. [Near-Miss Generation](#4-near-miss-generation)
5. [Multi-Dimensional Analysis](#5-multi-dimensional-analysis)
6. [Operationalizing Criteria](#6-operationalizing-criteria)
7. [Teaching Applications](#7-teaching-applications)
---
## 1. Contrast Matrix Design
### Concept
Systematically vary dimensions to explore boundary space between pass/fail.
### Building the Matrix
**Step 1: Identify key dimensions**
- What aspects define quality/success?
- Which dimensions are necessary? Sufficient?
- Common: Quality, Speed, Cost, Usability, Accuracy, Completeness
**Step 2: Create dimension scales**
- For each dimension: What does "fail" vs "pass" vs "excellent" look like?
- Make measurable or at least clearly distinguishable
**Step 3: Generate combinations**
- Vary one dimension at a time (hold others constant)
- Create examples at: all-pass, single-fail, multi-fail, all-fail
- Focus on single-fail cases (most instructive)
**Example: Code Quality**
| Example | Readable | Tested | Performant | Maintainable | Verdict |
|---------|----------|--------|------------|--------------|---------|
| Ideal | ✓ | ✓ | ✓ | ✓ | PASS |
| Readable mess | ✓ | ✗ | ✗ | ✗ | FAIL (testability) |
| Fast spaghetti | ✗ | ✗ | ✓ | ✗ | FAIL (maintainability) |
| Tested but slow | ✓ | ✓ | ✗ | ✓ | BORDERLINE (performance trade-off acceptable?) |
| Over-engineered | ✗ | ✓ | ✓ | ✗ | FAIL (unnecessary complexity) |
**Insights from matrix:**
- "Tested but slow" reveals performance is sometimes acceptable trade-off
- "Readable mess" shows readability alone insufficient
- "Over-engineered" shows you can pass some dimensions while creating new problems
### Application Pattern
1. List 3-5 critical dimensions
2. Create a 2×2 or 3×3 matrix (keeps it manageable)
3. Fill diagonal (all pass, all fail) first
4. Fill off-diagonal (single fails) - THESE ARE YOUR NEAR-MISSES
5. Analyze which single fails are most harmful
6. Derive decision rules from pattern
---
## 2. Boundary Mapping
### Concept
Identify exact threshold where pass becomes fail for continuous dimensions.
### Technique: Boundary Walk
**For quantitative dimensions:**
1. **Start at clear pass:** Example clearly meeting criterion
2. **Degrade incrementally:** Worsen one dimension step-by-step
3. **Find threshold:** Where does it tip from pass to fail?
4. **Document why:** What breaks at that threshold?
5. **Test robustness:** Is threshold consistent across contexts?
**Example: Response Time (UX)**
- 0-100ms: Instant (excellent)
- 100-300ms: Slight delay (acceptable)
- 300-1000ms: Noticeable lag (borderline)
- 1000ms+: Frustrating (fail)
- **Threshold insight:** 1 second is the psychological boundary where "waiting" begins
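The boundary walk above translates directly into a threshold classifier; a minimal sketch using the example bands (tune them to your own research):
```python
# Minimal sketch: encode the boundary walk above as explicit thresholds.
# The bands mirror the response-time example; they are not recommendations.
def classify_response_time(ms: float) -> str:
    if ms <= 100:
        return "excellent"   # feels instant
    if ms <= 300:
        return "acceptable"  # slight delay
    if ms <= 1000:
        return "borderline"  # noticeable lag
    return "fail"            # waiting has begun

assert classify_response_time(250) == "acceptable"
assert classify_response_time(1200) == "fail"
```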
**For qualitative dimensions:**
1. **Create spectrum:** Order examples from clear pass to clear fail
2. **Identify transition zone:** Where does judgment become difficult?
3. **Analyze edge cases:** What's ambiguous about borderline cases?
4. **Articulate criteria:** What additional factor tips decision?
**Example: Technical Jargon (Communication)**
- Spectrum: No jargon → Domain terms explained → Domain terms assumed → Excessive jargon
- Transition: "Domain terms assumed" is borderline
- Criteria: Acceptable if audience is domain experts; fails for general audience
- **Boundary insight:** Jargon acceptability depends on the audience, not an absolute rule
### Finding the "Almost" Zone
Near-misses live in transition zones. To find them:
- Identify clear passes and clear fails
- Generate examples between them
- Find examples that provoke disagreement
- Analyze what dimension causes disagreement
- Make that dimension explicit in criteria
---
## 3. Failure Taxonomy
### Concept
Categorize failure modes to create comprehensive guards.
### Taxonomy Dimensions
**By Severity:**
- **Critical:** Absolute disqualifier
- **Major:** Significant but possibly acceptable if compensated
- **Minor:** Weakness but not disqualifying
**By Type:**
- **Omission:** Missing required element
- **Commission:** Contains prohibited element
- **Distortion:** Present but incorrect/inappropriate
**By Detection:**
- **Obvious:** Easily caught in review
- **Subtle:** Requires careful inspection
- **Latent:** Only revealed in specific contexts
### Building Failure Taxonomy
**Step 1: Collect failure examples**
- Real failures from past experience
- Hypothetical failures from risk analysis
- Near-misses that were caught
**Step 2: Group by pattern**
- What do similar failures have in common?
- Name patterns memorably
**Step 3: Analyze root causes**
- Why does this pattern occur?
- What misconception leads to it?
**Step 4: Create detection + prevention**
- **Detection:** How to spot this pattern?
- **Prevention:** What guard prevents it?
### Example: API Design Failures
**Failure Pattern 1: Inconsistent Naming**
- **Description:** Endpoints use different naming conventions
- **Example:** `/getUsers` vs `/user/list` vs `/fetch-user-data`
- **Root cause:** No style guide; organic growth
- **Detection:** Run naming pattern analysis
- **Prevention:** Establish and enforce convention (e.g., RESTful: `GET /users`)
**Failure Pattern 2: Over-fetching**
- **Description:** Endpoint returns much more data than needed
- **Example:** User profile endpoint returns all user data including admin fields
- **Root cause:** Lazy serialization; security oversight
- **Detection:** Check payload size; audit returned fields
- **Prevention:** Field-level permissions; explicit response schemas
**Failure Pattern 3: Breaking Changes Without Versioning**
- **Description:** Modifying existing endpoint behavior
- **Example:** Changing field type from string to number
- **Root cause:** Not anticipating backward compatibility needs
- **Detection:** Compare API schema versions
- **Prevention:** Semantic versioning; deprecation warnings
**Taxonomy Benefits:**
- Systematic coverage of failure modes
- Reusable patterns across projects
- Training material for new team members
- Checklist for quality control
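The detection heuristics above often reduce to lightweight checks. A sketch for Failure Pattern 1, assuming a lowercase, hyphenated, noun-based path convention (the endpoints and convention are illustrative):
```python
# Minimal sketch: flag endpoints that break a lexical naming convention
# (Failure Pattern 1). Verb-style paths that happen to be lowercase still
# need human review; this only catches mechanical violations.
import re

CONVENTION = re.compile(r"^/[a-z][a-z0-9-]*(/[a-z0-9-]+|/\{[a-z_]+\})*$")

def naming_violations(endpoints: list) -> list:
    return [e for e in endpoints if not CONVENTION.match(e)]

print(naming_violations(["/users", "/users/{user_id}", "/getUsers", "/fetch-user-data"]))
# -> ['/getUsers']
```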
---
## 4. Near-Miss Generation
### Concept
Systematically create examples that almost pass but fail on specific dimension.
### Generation Strategies
**Strategy 1: Single-Dimension Degradation**
- Take a clear pass example
- Degrade exactly one dimension
- Keep all other dimensions at "pass" level
- Result: Isolates effect of that dimension
**Example: Job Candidate**
- Base (clear pass): Strong skills, culture fit, experience, communication
- Near-miss 1: Strong skills, culture fit, experience, **poor communication**
- Near-miss 2: Strong skills, culture fit, **no experience**, communication
- Near-miss 3: Strong skills, **poor culture fit**, experience, communication
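A minimal sketch of Strategy 1, using the job-candidate dimensions above (the "pass" and "degraded" values are illustrative):
```python
# Minimal sketch: generate near-miss candidates by degrading one dimension
# at a time while holding every other dimension at its "pass" level.
BASE = {"skills": "strong", "culture_fit": "good", "experience": "relevant", "communication": "clear"}
DEGRADED = {"skills": "weak", "culture_fit": "poor", "experience": "none", "communication": "poor"}

def near_misses(base: dict, degraded: dict):
    for dim in base:
        variant = dict(base)
        variant[dim] = degraded[dim]
        yield dim, variant

for failing_dim, candidate in near_misses(BASE, DEGRADED):
    print(f"Near-miss (fails on {failing_dim}): {candidate}")
```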
**Strategy 2: Compensation Testing**
- Create example that fails on dimension X
- Excel on dimension Y
- Test if Y compensates for X
- Reveals whether dimensions are substitutable
**Example: Product Launch**
- Near-miss: Amazing product, **terrible go-to-market**
- Question: Can product quality overcome poor launch?
- Insight: Both are necessary; excellence in one doesn't compensate
**Strategy 3: Naive Solution**
- Imagine someone with surface understanding
- What would they produce?
- Often hits obvious requirements but misses subtle ones
**Example: Data Visualization**
- Naive: Technically accurate chart, **misleading visual encoding**
- Gets data right but fails communication goal
**Strategy 4: Adjacent Domain Transfer**
- Take best practice from domain A
- Apply naively to domain B
- Often creates near-miss: right idea, wrong context
**Example: Agile in Hardware**
- Software practice: Frequent releases
- Naive transfer: Frequent hardware prototypes
- Near-miss: Prototyping is expensive in hardware; frequency limits differ
### Making Near-Misses Instructive
**Good near-miss characteristics:**
1. **Plausible:** Someone might genuinely produce this
2. **Specific failure:** Clear what dimension fails
3. **Reveals subtlety:** Failure isn't obvious at first
4. **Teachable moment:** Clarifies criterion through contrast
**Weak near-miss characteristics:**
- Too obviously bad (not genuinely "near")
- Fails on multiple dimensions (can't isolate lesson)
- Artificial/strawman (no one would actually do this)
---
## 5. Multi-Dimensional Analysis
### Concept
When quality depends on multiple dimensions, analyze their interactions.
### Necessary vs Sufficient Conditions
**Necessary:** Must have to pass (absence = automatic fail)
**Sufficient:** Enough alone to pass
**Neither:** Helps but not decisive
**Analysis:**
1. List candidate dimensions
2. Test each: If dimension fails but example passes → not necessary
3. Test each: If dimension passes alone and example passes → sufficient
4. Most quality criteria: Multiple necessary, none sufficient alone
**Example: Effective Presentation**
- Content quality: **Necessary** (bad content dooms even great delivery)
- Delivery skill: **Necessary** (great content poorly delivered fails)
- Slide design: Helpful but not necessary (can present without slides)
- **Insight:** Both content and delivery are necessary; neither sufficient alone
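The necessity test above can be run mechanically over a set of judged examples; a minimal sketch using the presentation dimensions (example data is illustrative):
```python
# Minimal sketch of the necessity test: a dimension is not necessary if any
# passing example fails that dimension. Example data is illustrative.
EXAMPLES = [
    # (dimension flags, overall verdict)
    ({"content": True,  "delivery": True,  "slides": True},  True),
    ({"content": True,  "delivery": True,  "slides": False}, True),   # passes without slides
    ({"content": True,  "delivery": False, "slides": True},  False),
    ({"content": False, "delivery": True,  "slides": True},  False),
]

def necessary_dimensions(examples):
    dims = examples[0][0].keys()
    return [d for d in dims
            if not any(passed and not flags[d] for flags, passed in examples)]

print(necessary_dimensions(EXAMPLES))  # ['content', 'delivery'] (slides are not necessary)
```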
### Dimension Interactions
**Independent:** Dimensions don't affect each other
**Complementary:** Success in one enhances value of another
**Substitutable:** Can trade off between dimensions
**Conflicting:** Improving one tends to worsen another
**Example: Software Documentation**
- Accuracy × Completeness: **Independent** (can have both or neither)
- Completeness × Conciseness: **Conflicting** (trade-off required)
- Clarity × Examples: **Complementary** (examples make clarity even better)
### Weighting Dimensions
Not all dimensions equally important. Three approaches:
**1. Threshold Model:**
- Each dimension has minimum threshold
- Must pass all thresholds (no compensation)
- Example: Safety-critical systems (can't trade safety for speed)
**2. Compensatory Model:**
- Dimensions can offset each other
- Weighted average determines pass/fail
- Example: Hiring (strong culture fit can partially offset skill gap)
**3. Hybrid Model:**
- Some dimensions are thresholds (must-haves)
- Others are compensatory (nice-to-haves)
- Example: Product quality (safety = threshold, features = compensatory)
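A minimal sketch of the hybrid model (dimension names, thresholds, and weights are placeholders):
```python
# Minimal sketch of the hybrid model: threshold dimensions must each clear a
# minimum; compensatory dimensions are combined as a weighted average.
THRESHOLDS = {"safety": 4}                      # must-haves, scored 1-5
WEIGHTS = {"features": 0.6, "polish": 0.4}      # nice-to-haves
PASS_MARK = 3.0

def hybrid_verdict(scores: dict) -> bool:
    if any(scores[dim] < minimum for dim, minimum in THRESHOLDS.items()):
        return False                            # no compensation allowed
    weighted_avg = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    return weighted_avg >= PASS_MARK

print(hybrid_verdict({"safety": 5, "features": 3, "polish": 4}))  # True
print(hybrid_verdict({"safety": 2, "features": 5, "polish": 5}))  # False: safety is a threshold
```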
---
## 6. Operationalizing Criteria
### Concept
Turn fuzzy criteria into testable decision rules.
### From Fuzzy to Operational
**Fuzzy:** "Intuitive user interface"
**Operational:**
- ✓ Pass: 80% of new users complete core task without help in <5 min
- ✗ Fail: <60% complete without help, or >10 min required
**Process:**
**Step 1: Identify fuzzy terms**
- "Intuitive," "clear," "high-quality," "good performance"
**Step 2: Ask "How would I measure this?"**
- What observable behavior indicates presence/absence?
- What test could definitively determine pass/fail?
**Step 3: Set thresholds**
- Where is the line between pass and fail?
- Based on: standards, benchmarks, user research, expert judgment
**Step 4: Add context conditions**
- "Passes if X, assuming Y"
- Edge cases requiring a judgment call
### Operationalization Patterns
**For Usability:**
- Task completion rate
- Time on task
- Error frequency
- User satisfaction scores
- Need for help/documentation
**For Code Quality:**
- Cyclomatic complexity < N
- Test coverage > X%
- No functions > Y lines
- Naming convention compliance
- Passes linter with zero warnings
**For Communication:**
- Reading level (Flesch-Kincaid)
- Audience comprehension test
- Jargon ratio
- Example-to-concept ratio
**For Performance:**
- Response time < X ms at Y percentile
- Throughput > N requests/sec
- Resource usage < M under load
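A sketch of what an operationalized gate might look like for the code-quality pattern above (thresholds are placeholders, not recommendations; metric collection is assumed to happen elsewhere):
```python
# Minimal sketch: a pass/fail gate over collected code-quality metrics.
# Thresholds are placeholders to be set from your own standards.
from dataclasses import dataclass

@dataclass
class CodeMetrics:
    max_cyclomatic_complexity: int
    test_coverage_pct: float
    longest_function_lines: int
    linter_warnings: int

def quality_gate(m: CodeMetrics) -> list:
    """Return the list of failed criteria; an empty list means PASS."""
    failures = []
    if m.max_cyclomatic_complexity > 10:
        failures.append("cyclomatic complexity > 10")
    if m.test_coverage_pct < 80.0:
        failures.append("test coverage < 80%")
    if m.longest_function_lines > 50:
        failures.append("function longer than 50 lines")
    if m.linter_warnings > 0:
        failures.append("linter warnings present")
    return failures

print(quality_gate(CodeMetrics(12, 85.0, 40, 0)))  # ['cyclomatic complexity > 10']
```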
### Handling Ambiguity
Some criteria resist full operationalization. Strategies:
**1. Provide Examples:**
- Can't define perfectly, but can show instances
- "It's like these examples, not like those"
**2. Multi-Factor Checklist:**
- Not single metric, but combination of indicators
- "Passes if meets 3 of 5 criteria"
**3. Expert Judgment with Guidelines:**
- Operational guidelines, but final call requires judgment
- Document what experts consider in edge cases
**4. Context-Dependent Rules:**
- Different thresholds for different contexts
- "For audience X: criterion A; for audience Y: criterion B"
---
## 7. Teaching Applications
### Concept
Use negative examples to accelerate learning and pattern recognition.
### Pedagogical Sequence
**Stage 1: Show Extremes**
- Clear pass vs clear fail
- Build confidence in obvious cases
**Stage 2: Introduce Near-Misses**
- Show close calls
- Discuss what tips the balance
- Most learning happens here
**Stage 3: Learner Generation**
- Have learners create their own negative examples
- Test understanding by having them explain why examples fail
**Stage 4: Pattern Recognition Practice**
- Mixed examples (pass/fail/borderline)
- Rapid classification with feedback
- Build intuition
### Designing Learning Exercises
**Exercise 1: Spot the Difference**
- Show pairs: pass and near-miss that differs in one dimension
- Ask: "What's the key difference?"
- Reveals whether learner understands criterion
**Exercise 2: Fix the Failure**
- Give near-miss example
- Ask: "How would you fix this?"
- Tests whether learner can apply criterion
**Exercise 3: Generate Anti-Examples**
- Give positive example
- Ask: "Create a near-miss that fails on dimension X"
- Tests generative understanding
**Exercise 4: Boundary Exploration**
- Give borderline example
- Ask: "Pass or fail? Why? What additional info would help decide?"
- Develops judgment for ambiguous cases
### Common Misconceptions to Address
**Misconception 1: "More is always better"**
- Show: Over-engineering, feature bloat, information overload
- Near-miss: Technically impressive but unusable
**Misconception 2: "Following rules guarantees success"**
- Show: Compliant but ineffective
- Near-miss: Meets all formal requirements but misses the spirit
**Misconception 3: "Avoiding negatives is enough"**
- Show: Absence of bad ≠ presence of good
- Near-miss: No obvious flaws but no strengths either
### Feedback on Negative Examples
**Good feedback format:**
1. **Acknowledge what's right:** "You correctly identified that..."
2. **Point to specific failure:** "But notice that dimension X fails because..."
3. **Contrast with alternative:** "Compare to this version where X passes..."
4. **Extract principle:** "This teaches us that [criterion] requires [specific condition]"
**Example:**
"Your API design (near-miss) correctly uses RESTful conventions and clear naming. However, it returns too much data (security and performance issue). Compare to this version that returns only requested fields. This teaches us that security requires explicit field-level controls, not just endpoint authentication."
---
## Quick Reference: Methodology Selection
**Use Contrast Matrix when:**
- Multiple dimensions to analyze
- Need systematic coverage
- Building comprehensive understanding
**Use Boundary Mapping when:**
- Dimensions are continuous
- Need to find exact thresholds
- Handling borderline cases
**Use Failure Taxonomy when:**
- Creating quality checklists
- Training teams on anti-patterns
- Systematic risk identification
**Use Near-Miss Generation when:**
- Teaching/training application
- Need instructive counterexamples
- Clarifying subtle criteria
**Use Multi-Dimensional Analysis when:**
- Dimensions interact
- Trade-offs exist
- Need to weight relative importance
**Use Operationalization when:**
- Criteria too fuzzy to apply consistently
- Need objective pass/fail test
- Reducing judgment variability
**Use Teaching Applications when:**
- Training users/teams
- Building pattern recognition
- Accelerating learning curve


@@ -0,0 +1,292 @@
# Negative Contrastive Framing Template
## Quick Start
**Purpose:** Define concepts by showing what they're NOT—use anti-goals, near-misses, and failure patterns to clarify fuzzy boundaries.
**When to use:** Positive definition exists but edges are unclear, multiple interpretations cause confusion, or need to distinguish similar concepts.
---
## Part 1: Positive Definition
**Concept/Goal:** [What you're trying to define]
**Initial Positive Definition:**
[Your current definition using positive attributes]
**Why It's Ambiguous:**
- [Interpretation 1 vs Interpretation 2]
- [Edge cases unclear]
- [Confusion point]
**Purpose:**
- [ ] Teaching/training
- [ ] Decision criteria
- [ ] Quality control
- [ ] Requirements clarification
- [ ] Other: [Specify]
---
## Part 2: Anti-Goals
**What This is NOT:** (Opposite of desired outcome)
**Anti-Goal 1:** [Opposite extreme]
- **Description:** [What it looks like]
- **Why it fails:** [Violates which criterion]
- **Example:** [Concrete instance]
**Anti-Goal 2:** [Another opposite]
- **Description:**
- **Why it fails:**
- **Example:**
**Anti-Goal 3:** [Third opposite]
- **Description:**
- **Why it fails:**
- **Example:**
[Add 2-5 anti-goals total]
---
## Part 3: Near-Miss Examples
**Close Calls That FAIL:** (Examples that almost qualify but fail on key dimension)
**Near-Miss 1:** [Example]
- **What it gets right:** [Positive aspects]
- **Where it fails:** [Specific dimension that disqualifies]
- **Why it's instructive:** [What it reveals about criteria]
- **Boundary lesson:** [Insight about where line is drawn]
**Near-Miss 2:** [Example]
- **What it gets right:**
- **Where it fails:**
- **Why it's instructive:**
- **Boundary lesson:**
**Near-Miss 3:** [Example]
- **What it gets right:**
- **Where it fails:**
- **Why it's instructive:**
- **Boundary lesson:**
[Continue for 5-10 near-misses—these are most valuable]
---
## Part 4: Common Failure Patterns
**Failure Pattern 1:** [Pattern name]
- **Description:** [What the pattern looks like]
- **Why it fails:** [Criterion violated]
- **How to spot:** [Detection heuristic]
- **How to avoid:** [Prevention guard]
- **Example:** [Instance]
**Failure Pattern 2:** [Pattern name]
- **Description:**
- **Why it fails:**
- **How to spot:**
- **How to avoid:**
- **Example:**
**Failure Pattern 3:** [Pattern name]
- **Description:**
- **Why it fails:**
- **How to spot:**
- **How to avoid:**
- **Example:**
[List 3-7 common failure patterns]
---
## Part 5: Contrast Matrix
| Example | Dimension 1 | Dimension 2 | Dimension 3 | Passes? | Why/Why Not |
|---------|-------------|-------------|-------------|---------|-------------|
| [Positive example] | ✓ | ✓ | ✓ | ✓ PASS | All criteria met |
| [Near-miss 1] | ✓ | ✓ | ✗ | ✗ FAIL | Fails Dimension 3 |
| [Near-miss 2] | ✓ | ✗ | ✓ | ✗ FAIL | Fails Dimension 2 |
| [Negative example] | ✗ | ✗ | ✗ | ✗ FAIL | Fails all dimensions |
**Key Dimensions:**
- **Dimension 1:** [Name] - [What it measures]
- **Dimension 2:** [Name] - [What it measures]
- **Dimension 3:** [Name] - [What it measures]
---
## Part 6: Sharpened Definition
**Revised Positive Definition:**
[Updated definition informed by negative contrasts]
**Decision Criteria:**
**Passes if:**
- [ ] [Criterion 1 operationalized]
- [ ] [Criterion 2 operationalized]
- [ ] [Criterion 3 operationalized]
**Fails if ANY of:**
- [ ] [Disqualifier 1]
- [ ] [Disqualifier 2]
- [ ] [Disqualifier 3]
⚠️ **Ambiguous middle ground:**
- [Case 1]: Consider context [X]
- [Case 2]: Requires judgment call on [Y]
---
## Part 7: Actionable Guards
**Prevention Checklist:**
- [ ] [Guard against failure pattern 1]
- [ ] [Guard against failure pattern 2]
- [ ] [Guard against failure pattern 3]
- [ ] [Check for near-miss condition 1]
- [ ] [Check for near-miss condition 2]
**Detection Heuristics:**
1. **Red flag:** [Signal that example might fail]
- **Check:** [What to verify]
2. **Yellow flag:** [Warning sign]
- **Check:** [What to verify]
3. **Green light:** [Positive signal]
- **Confirm:** [What to validate]
---
## Part 8: Examples Across Spectrum
**Clear PASS:** [Unambiguous positive example]
- [Why it clearly meets all criteria]
**Borderline PASS:** [Barely qualifies]
- [Why it passes despite weakness in dimension X]
**Borderline FAIL:** [Almost qualifies]
- [Why it fails despite strength in dimensions Y and Z]
**Clear FAIL:** [Unambiguous negative example]
- [Why it clearly violates criteria]
---
## Output Format
Create `negative-contrastive-framing.md`:
```markdown
# [Concept]: Negative Contrastive Framing
**Date:** [YYYY-MM-DD]
## Positive Definition
[Sharpened definition]
## Anti-Goals
1. [Anti-goal] - [Why it's opposite]
2. [Anti-goal] - [Why it's opposite]
## Near-Miss Examples (Most Instructive)
1. **[Example]**: Almost passes but fails because [key dimension]
2. **[Example]**: Gets [X] right but fails on [Y]
3. **[Example]**: Looks like success but is actually [failure mode]
## Common Failure Patterns
1. **[Pattern]**: [Description] - Guard: [Prevention]
2. **[Pattern]**: [Description] - Guard: [Prevention]
## Decision Criteria
**Passes if:**
- [Operationalized criterion 1]
- [Operationalized criterion 2]
**Fails if:**
- [Disqualifier 1]
- [Disqualifier 2]
## Contrast Matrix
[Table showing examples across key dimensions]
## Key Insights
- [What negative examples revealed about boundaries]
- [Subtle criteria made explicit through near-misses]
- [Actionable guards to prevent common failures]
```
---
## Quality Checklist
Before finalizing:
- [ ] Anti-goals represent true opposites (not just bad versions)
- [ ] Near-misses are genuinely close calls (not obviously bad)
- [ ] Each failure has clear explanation of *why* it fails
- [ ] Failure patterns are common/realistic (not strawmen)
- [ ] Decision criteria are operationalized (testable)
- [ ] Guards are actionable (can be implemented)
- [ ] Covers spectrum from clear pass to clear fail
- [ ] Ambiguous cases acknowledged and addressed
- [ ] Insights reveal something not obvious from positive definition alone
---
## Common Applications
**Code Review:**
- Anti-goal: Unreadable code
- Near-miss: Well-commented but poorly structured code
- Pattern: "Documentation hides design problems"
**UX Design:**
- Anti-goal: Unusable interface
- Near-miss: Beautiful but non-intuitive design
- Pattern: "Form over function"
**Hiring:**
- Anti-goal: Wrong culture fit
- Near-miss: Strong skills but misaligned values
- Pattern: "Optimizing for résumé over team dynamics"
**Product Strategy:**
- Anti-goal: Feature bloat
- Near-miss: Useful feature that distracts from core value
- Pattern: "Saying yes to everything"
**Communication:**
- Anti-goal: Incomprehensible writing
- Near-miss: Technically accurate but inaccessible
- Pattern: "Correctness without clarity"
---
## Tips
**For Near-Misses:**
- Look for examples that fool initial judgment
- Find cases where single dimension tips the balance
- Use real examples from experience
**For Failure Patterns:**
- Name patterns memorably
- Make detection criteria specific
- Provide concrete prevention guards
**For Decision Criteria:**
- Test against edge cases
- Make falsifiable/testable
- Handle ambiguous middle ground explicitly
**For Teaching:**
- Start with near-misses (most engaging)
- Build pattern recognition through repetition
- Have learners generate their own negative examples