---
model: claude-sonnet-4-0
allowed-tools: Task, Read, Write, Bash(*), Glob, Grep
argument-hint: <domain-or-proposal> [--audit-depth=<level>] [--challenge-method=<approach>] [--scope=<breadth>]
description: Fundamental premise challenging with alternative framework generation
---

# Assumption Audit Engine

Systematically identify, examine, and challenge fundamental assumptions to reveal hidden constraints and generate alternative frameworks for breakthrough thinking. Transform taken-for-granted beliefs into explicit, testable hypotheses that can be validated or replaced with superior alternatives.

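
As a quick orientation, the invocation shape mirrors the argument hint in the frontmatter. The sketch below is illustrative only: the placeholders come from that hint, the concrete flag values are reused from the execution examples later in this document, and the proposal text is a made-up example.

```bash
# General pattern (placeholders mirror the argument hint in the frontmatter)
assumption_audit "<domain-or-proposal>" [--audit-depth=<level>] [--challenge-method=<approach>] [--scope=<breadth>]

# Illustrative invocation; the proposal text is a made-up example, and the
# flag values are reused from the execution examples later in this document.
assumption_audit "Remote work reduces team productivity" --audit-depth=implicit --challenge-method=evidence --scope=organizational
```
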
## Audit Depth Framework

### Explicit Level (Stated assumptions and declared premises)

[Extended thinking: Surface assumptions that are openly stated but rarely questioned. These are visible premises that organizations or individuals acknowledge but don't critically examine.]

**Identification Targets:**
- **Declared Constraints**: Explicitly stated limitations that may be negotiable
- **Policy Premises**: Organizational rules based on assumptions about efficiency or necessity
- **Method Assumptions**: Stated beliefs about why certain approaches work best
- **Resource Limitations**: Declared scarcity that may reflect historical rather than current reality
- **Timeline Constraints**: Stated deadlines based on assumptions about dependencies and priorities

**Audit Questions:**
- "What explicit constraints are we stating, and why do we believe they're absolute?"
- "Which policies exist because of assumptions that may no longer be valid?"
- "What stated limitations might be more flexible than we assume?"
- "Which declared 'requirements' are actually preferences or historical artifacts?"
- "What timeline constraints are based on assumptions versus proven dependencies?"

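
A minimal invocation at this depth might look like the sketch below. Note that the `explicit` depth value is inferred from the level name and does not appear in the execution examples later in this document, and the proposal text is a made-up illustration.

```bash
# Audit openly stated premises (constraints, policies, deadlines).
# The "explicit" depth value is assumed from the level name above.
assumption_audit "We must ship the release by Q3" --audit-depth=explicit --challenge-method=evidence --scope=comprehensive
```
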
### Implicit Level (Unstated but operating assumptions)

[Extended thinking: Uncover hidden assumptions that guide behavior and decision-making without conscious recognition. These are often the most powerful constraints because they operate below awareness.]

**Identification Targets:**
- **Cultural Defaults**: Unspoken beliefs about "how things are done" in an organization or domain
- **Success Definitions**: Unstated assumptions about what constitutes good outcomes
- **User Behavior Models**: Hidden beliefs about how people will interact with systems
- **Market Assumptions**: Unstated beliefs about customer needs and competitive dynamics
- **Technology Premises**: Hidden assumptions about what's possible or practical

**Audit Questions:**
- "What behaviors do we exhibit that reveal unstated beliefs?"
- "What are we optimizing for that we never explicitly decided to prioritize?"
- "What user behaviors are we assuming without validation?"
- "Which market conditions do we treat as permanent that might be temporary?"
- "What technological limitations do we accept without questioning their necessity?"

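
An invocation at this depth follows the same shape; the line below is the one expanded in full as Example 1 under Execution Examples.

```bash
# Surface unstated beliefs that drive behavior; Example 1 below walks
# through this implicit-depth audit in full.
assumption_audit "Our users want more features" --audit-depth=implicit --challenge-method=evidence --scope=comprehensive
```
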
### Structural Level (Framework and methodology assumptions)

[Extended thinking: Challenge the fundamental frameworks and methodologies being used. Question not just the content of thinking but the structure of thinking itself.]

**Identification Targets:**
- **Analytical Frameworks**: Assumed models for understanding and analysis
- **Decision-Making Processes**: Unstated assumptions about how choices should be made
- **Problem-Solving Approaches**: Hidden beliefs about effective methodology
- **Measurement Systems**: Assumptions about what should be measured and how
- **Organizational Structures**: Unstated beliefs about how work should be organized

**Audit Questions:**
- "What analytical framework are we using, and why is it the right one?"
- "What assumptions underlie our decision-making process?"
- "Are we solving the right problem, or just the problem we know how to solve?"
- "What are we measuring, and what does that reveal about our assumptions?"
- "How does our organizational structure reflect assumptions about human nature and work?"

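
At this depth the audit targets the frameworks themselves; the invocation below is the one expanded in full as Example 2 under Execution Examples.

```bash
# Question the frameworks and processes rather than their content;
# Example 2 below walks through this structural-depth audit in full.
assumption_audit "Hierarchical management improves coordination" --audit-depth=structural --challenge-method=alternative-generation --scope=organizational
```
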
### Paradigmatic Level (Worldview and philosophical premises)

[Extended thinking: Question fundamental worldview assumptions that shape entire approaches to problems. Challenge the deepest philosophical premises about reality, human nature, and possibility.]

**Identification Targets:**
- **Reality Models**: Basic assumptions about how the world works
- **Human Nature Beliefs**: Fundamental assumptions about motivation, capability, and behavior
- **Value System Premises**: Unstated beliefs about what matters most
- **Progress Assumptions**: Hidden beliefs about development, improvement, and change
- **Possibility Boundaries**: Assumptions about what's achievable or impossible

**Audit Questions:**
- "What fundamental beliefs about reality underlie our entire approach?"
- "What assumptions about human nature shape our strategies?"
- "Which values are we prioritizing without conscious choice?"
- "What beliefs about progress and improvement guide our decisions?"
- "What do we assume is impossible that might actually be achievable?"

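
The deepest audit level challenges worldview-level premises; the invocation below is the one expanded in full as Example 3 under Execution Examples.

```bash
# Challenge worldview-level premises behind an architectural belief;
# Example 3 below walks through this paradigmatic-depth audit in full.
assumption_audit "Microservices improve system maintainability" --audit-depth=paradigmatic --challenge-method=paradigm-shift --scope=technical
```
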
## Challenge Methodology Framework

### Evidence Examination Protocol

[Extended thinking: Systematically demand proof for assumptions, testing their factual foundation and empirical support.]

**Evidence Validation Process:**
1. **Source Identification**: Where did this assumption originate?
2. **Recency Assessment**: How current is the supporting evidence?
3. **Context Relevance**: Does evidence from other contexts apply here?
4. **Sample Size Evaluation**: Is the evidence based on sufficient data?
5. **Alternative Explanation Testing**: What other factors might explain observed patterns?

**Evidence Quality Framework:**
- **Primary vs. Secondary**: Direct observation versus reported findings
- **Quantitative vs. Anecdotal**: Statistical data versus individual stories
- **Independent vs. Interested**: Unbiased sources versus stakeholder claims
- **Recent vs. Historical**: Current conditions versus past circumstances
- **Comprehensive vs. Limited**: Broad sampling versus narrow examples

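
To apply this protocol from the command line, select the evidence challenge method at invocation time. The sketch below reuses the `evidence` value shown in Example 1; the proposal text and the pairing with `implicit` depth are illustrative assumptions.

```bash
# Demand proof for an operating assumption using the evidence-examination
# protocol; the proposal text here is a made-up illustration.
assumption_audit "Enterprise buyers require on-premise deployment" --audit-depth=implicit --challenge-method=evidence --scope=comprehensive
```
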
### Alternative Generation Engine

[Extended thinking: Create competing premises and frameworks that could replace existing assumptions with potentially superior alternatives.]

**Alternative Development Process:**
1. **Assumption Inversion**: What if the opposite assumption were true?
2. **Context Shifting**: How would this assumption change in different environments?
3. **Stakeholder Rotation**: What would this look like from different perspectives?
4. **Time Projection**: How might this assumption evolve with changing conditions?
5. **Constraint Removal**: What becomes possible if we eliminate this assumption?

**Alternative Categories:**
- **Incremental Variations**: Small modifications to existing assumptions
- **Orthogonal Approaches**: Completely different frameworks for the same problem
- **Paradigm Shifts**: Fundamental worldview changes that transform everything
- **Hybrid Models**: Combinations of existing and alternative assumptions
- **Future-State Projections**: Assumptions appropriate for anticipated future conditions

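
To drive this engine directly, select the alternative-generation challenge method. The sketch below reuses the `alternative-generation` value shown in Example 2; the proposal text is an illustrative assumption.

```bash
# Generate competing premises for an entrenched assumption; the proposal
# text here is a made-up illustration.
assumption_audit "Annual performance reviews improve employee output" --audit-depth=structural --challenge-method=alternative-generation --scope=organizational
```
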
### Edge Case Testing Framework

[Extended thinking: Find boundary conditions and extreme scenarios where assumptions break down, revealing their limitations and appropriate scope.]

**Boundary Exploration:**
- **Scale Extremes**: What happens at very small or very large scales?
- **Performance Limits**: At what point do assumptions cease to function?
- **Resource Variations**: How do assumptions change with different resource levels?
- **Context Extremes**: What conditions would make assumptions invalid?
- **Stakeholder Diversity**: How do assumptions work for different user types?

**Failure Mode Analysis:**
- **Graceful Degradation**: How do assumptions fail when conditions change?
- **Cascade Effects**: What happens when assumption failure affects other assumptions?
- **Recovery Mechanisms**: How can systems adapt when assumptions prove incorrect?
- **Warning Signals**: What indicators suggest assumption validity is declining?
- **Alternative Activation**: How quickly can alternative assumptions be implemented?

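
An edge-case audit can be requested the same way; note that the `edge-case` challenge-method value below is hypothetical and does not appear in the execution examples in this document, and the proposal text is a made-up illustration.

```bash
# Probe where an assumption breaks down at scale or under extreme load;
# the "edge-case" method value is hypothetical and not shown in the
# execution examples in this document.
assumption_audit "Our caching layer keeps response times under 100ms" --audit-depth=implicit --challenge-method=edge-case --scope=technical
```
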
## Execution Examples

### Example 1: Product Development Assumption Audit

```bash
assumption_audit "Our users want more features" --audit-depth=implicit --challenge-method=evidence --scope=comprehensive
```

**Implicit Assumption Identification:**
- **Hidden Premise**: "Feature quantity correlates with user satisfaction"
- **Unstated Belief**: "Users can effectively utilize additional complexity"
- **Cultural Default**: "Progress means adding capabilities"
- **Success Definition**: "Customer requests indicate true needs"
- **User Behavior Model**: "Power users represent our core market"

**Evidence Examination:**
- **Request vs. Usage Analysis**: Do users actually use requested features?
- **Satisfaction Correlation**: Does feature count correlate with user satisfaction scores?
- **Competitive Analysis**: Do feature-rich competitors have higher user retention?
- **Behavior Tracking**: How do users interact with existing feature sets?
- **Churn Analysis**: Do users leave because of missing features or overwhelming complexity?

**Alternative Framework Generation:**
- **Alternative 1**: "Users want better execution of core features rather than new features"
- **Alternative 2**: "Different user segments have completely different feature priorities"
- **Alternative 3**: "Users want easier workflows, which might mean fewer, not more, features"
- **Alternative 4**: "Feature requests reflect user workarounds for poor core functionality"

### Example 2: Organizational Structure Assumption Audit

```bash
assumption_audit "Hierarchical management improves coordination" --audit-depth=structural --challenge-method=alternative-generation --scope=organizational
```

**Structural Framework Challenge:**
- **Management Hierarchy**: Assumes information flows efficiently up and down command chains
- **Decision Authority**: Assumes the people closest to authority make the best decisions
- **Coordination Mechanism**: Assumes central coordination prevents conflicts and duplication
- **Accountability Structure**: Assumes clear reporting lines improve responsibility
- **Information Flow**: Assumes managers effectively filter and distribute information

**Alternative Generation Process:**
- **Network Model**: Self-organizing teams with peer-to-peer coordination
- **Market Model**: Internal teams compete and collaborate like external vendors
- **Community Model**: Shared ownership and collective decision-making
- **Platform Model**: Central infrastructure with autonomous product teams
- **Hybrid Models**: Different structures for different types of work

**Edge Case Testing:**
- **Rapid Change**: How does hierarchy handle fast-moving, uncertain environments?
- **Creative Work**: Does hierarchical oversight help or hinder innovation?
- **Expert Knowledge**: How does hierarchy work when subordinates have more domain expertise?
- **Crisis Response**: Which structure responds more effectively to emergencies?
- **Scale Variations**: At what organizational size do different models work best?

### Example 3: Technology Architecture Assumption Audit

```bash
assumption_audit "Microservices improve system maintainability" --audit-depth=paradigmatic --challenge-method=paradigm-shift --scope=technical
```

**Paradigmatic Challenge:**
- **Modularity Paradigm**: Assumes breaking systems into smaller pieces improves understanding
- **Independence Ideal**: Assumes loose coupling always improves system properties
- **Distributed Systems Philosophy**: Assumes network-based coordination is manageable
- **Service Ownership Model**: Assumes team boundaries should match service boundaries
- **Technology Diversity Belief**: Assumes freedom to choose different technologies per service is beneficial

**Paradigm Shift Exploration:**
- **Monolithic Excellence**: What if we made monoliths so good that splitting them became unnecessary?
- **Deployment-Only Microservices**: What if we kept logical modularity but deployed as a single unit?
- **Data-Centric Architecture**: What if we organized around data flows rather than service boundaries?
- **Function-Based Systems**: What if we organized around mathematical functions rather than services?
- **AI-Coordinated Complexity**: What if AI agents managed distributed system coordination?

**Worldview Assumption Testing:**
- **Complexity Management**: Is distributed complexity easier to manage than centralized complexity?
- **Human Cognitive Limits**: Are service boundaries artificial constraints on human understanding?
- **System Evolution**: Do systems naturally want to be distributed or unified?
- **Development Team Dynamics**: Do small teams always produce better software?

## Advanced Audit Features

### Assumption Interdependency Mapping

[Extended thinking: Identify how assumptions depend on each other and how challenging one assumption might cascade to others.]

**Dependency Analysis:**
- **Foundation Assumptions**: Core premises that support multiple other assumptions
- **Chain Dependencies**: Linear sequences where each assumption depends on the previous
- **Circular Dependencies**: Assumptions that mutually reinforce each other
- **Hierarchical Support**: How high-level assumptions justify lower-level premises
- **Cross-Domain Connections**: How assumptions in one area affect other areas

### Cultural Context Assessment

[Extended thinking: Understand how assumptions are shaped by and embedded in specific cultural, organizational, or professional contexts.]

**Context Evaluation:**
- **Industry Culture**: How professional norms shape assumptions
- **Organizational History**: How past experiences create current assumptions
- **Geographic Influence**: How location and culture affect premises
- **Generational Differences**: How assumptions vary across age groups
- **Educational Background**: How training and education embed assumptions

### Assumption Evolution Tracking

[Extended thinking: Monitor how assumptions change over time and predict future assumption shifts.]

**Evolution Patterns:**
- **Historical Progression**: How assumptions have changed in the past
- **Trigger Events**: What events cause assumption shifts
- **Gradual Drift**: Slow assumption evolution without explicit recognition
- **Revolutionary Shifts**: Rapid, dramatic assumption changes
- **Cyclical Patterns**: Assumptions that periodically return in new forms

## Success Optimization

### Audit Quality Indicators
- **Assumption Discovery**: Identification of previously unrecognized premises
- **Evidence Rigor**: Thorough testing of assumption foundations
- **Alternative Creativity**: Generation of genuinely different frameworks
- **Challenge Depth**: Willingness to question fundamental premises
- **Implementation Readiness**: Practical pathways for assumption replacement

### Breakthrough Potential Assessment
- **Constraint Liberation**: Freedom from limiting beliefs
- **Innovation Opportunity**: New possibilities revealed through assumption challenging
- **Competitive Advantage**: Unique approaches unavailable to assumption-bound competitors
- **Problem Reframing**: Better problem definitions through assumption questioning
- **Solution Space Expansion**: Broader range of possible solutions

### Risk Management
- **Assumption Replacement Safety**: Ensuring new assumptions are better than old ones
- **Change Management**: Managing the transition from old to new assumption sets
- **Validation Methodology**: Testing new assumptions before full commitment
- **Rollback Planning**: Preparing for assumption change failure
- **Stakeholder Communication**: Explaining assumption changes to affected parties

The assumption_audit command reveals hidden constraints and creates breakthrough opportunities by systematically challenging fundamental premises and generating alternative frameworks that expand solution possibilities.