Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions

---
name: layered-reasoning
description: Use when reasoning across multiple abstraction levels (strategic/tactical/operational), designing systems with hierarchical layers, explaining concepts at different depths, maintaining consistency between high-level principles and concrete implementation, or when users mention 30,000-foot view, layered thinking, abstraction levels, top-down design, or need to move fluidly between strategy and execution.
---
# Layered Reasoning
## Purpose
Layered reasoning structures thinking across multiple levels of abstraction—from high-level principles (30,000 ft) to tactical approaches (3,000 ft) to concrete actions (300 ft). Good layered reasoning maintains consistency: lower layers implement upper layers, upper layers constrain lower layers, and each layer is independently useful.
Use this skill when:
- **Designing systems** with architectural layers (strategy → design → implementation)
- **Explaining complex topics** at multiple depths (executive summary → technical detail → code)
- **Strategic planning** connecting vision → objectives → tactics → tasks
- **Ensuring consistency** between principles and execution
- **Bridging communication** between stakeholders at different levels (CEO → manager → engineer)
- **Problem-solving** where high-level constraints must guide low-level decisions
Layered reasoning prevents inconsistency: strategic plans that can't be executed, implementations that violate principles, or explanations that confuse by jumping abstraction levels.
---
## Common Patterns
### Pattern 1: 30K → 3K → 300 ft Decomposition (Top-Down)
**When**: Starting from vision/principles, deriving concrete actions
**Structure**:
- **30,000 ft (Strategic)**: Why? Core principles, invariants, constraints (e.g., "Customer privacy is non-negotiable")
- **3,000 ft (Tactical)**: What? Approaches, architectures, policies (e.g., "Zero-trust security model, end-to-end encryption")
- **300 ft (Operational)**: How? Specific actions, procedures, code (e.g., "Implement AES-256 encryption for data at rest")
**Example**: Product strategy
- **30K**: "Become the most trusted platform" (principle)
- **3K**: "Achieve SOC 2 compliance, publish security reports, 24/7 support" (tactics)
- **300 ft**: "Implement MFA, conduct quarterly audits, hire 5 support engineers" (actions)
**Process**: (1) Define strategic layer invariants, (2) Derive tactical options that satisfy invariants, (3) Select tactics, (4) Design operational procedures implementing tactics, (5) Validate operational layer doesn't violate strategic constraints
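The five-step process above can be sketched as a small tree of layers. This is a minimal illustration, not a prescribed data model; the `Layer` class and `derive` helper are hypothetical names invented for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One abstraction level: a statement plus the children that implement it."""
    level: str        # e.g. "strategic", "tactical", "operational"
    statement: str
    children: list["Layer"] = field(default_factory=list)

def derive(parent: Layer, level: str, statements: list[str]) -> list[Layer]:
    """Attach lower-layer items that implement the parent (steps 2 and 4)."""
    nodes = [Layer(level, s) for s in statements]
    parent.children.extend(nodes)
    return nodes

# 30K ft: strategic principle
strategy = Layer("strategic", "Become the most trusted platform")
# 3K ft: tactics derived to satisfy the principle
tactics = derive(strategy, "tactical",
                 ["Achieve SOC 2 compliance", "Publish security reports"])
# 300 ft: operations implementing the first tactic
derive(tactics[0], "operational",
       ["Implement MFA", "Conduct quarterly audits"])
```

Step 5 (validation) then becomes a walk over this tree, checking each child against the constraints of its parent.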
### Pattern 2: Bottom-Up Aggregation
**When**: Starting from observations/data, building up to principles
**Structure**:
- **300 ft**: Specific observations, measurements, incidents (e.g., "User A clicked 5 times, User B abandoned")
- **3,000 ft**: Patterns, trends, categories (e.g., "40% abandon at checkout, slow load times correlate with abandonment")
- **30,000 ft**: Principles, theories, root causes (e.g., "Performance impacts conversion; every 100ms costs 1% conversion")
**Example**: Engineering postmortem
- **300 ft**: "Service crashed at 3:42 PM, memory usage spiked to 32GB, 500 errors returned"
- **3K**: "Memory leak in caching layer, triggered by specific API call pattern under load"
- **30K**: "Our caching strategy lacks eviction policy; need TTL-based expiration for all caches"
**Process**: (1) Collect operational data, (2) Identify patterns and group, (3) Formulate hypotheses at tactical layer, (4) Validate with more data, (5) Distill strategic principles
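The aggregation steps can be sketched with a toy event log. The events, stages, and the 40% threshold are hypothetical values for illustration only.

```python
from collections import Counter

# 300 ft: raw operational observations (hypothetical event log)
events = [
    {"user": "A", "stage": "checkout", "outcome": "abandoned"},
    {"user": "B", "stage": "checkout", "outcome": "abandoned"},
    {"user": "C", "stage": "checkout", "outcome": "completed"},
    {"user": "D", "stage": "browse",   "outcome": "abandoned"},
]

# 3K ft: group observations into patterns (step 2)
by_stage = Counter((e["stage"], e["outcome"]) for e in events)
checkout_total = sum(n for (stage, _), n in by_stage.items() if stage == "checkout")
checkout_abandon_rate = by_stage[("checkout", "abandoned")] / checkout_total

# 30K ft: promote a validated pattern to a candidate principle (step 5)
principle = None
if checkout_abandon_rate >= 0.4:
    principle = "Checkout friction drives abandonment; prioritize checkout performance"
```

The point of the sketch: principles are distilled from aggregated patterns, not read directly off individual observations.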
### Pattern 3: Layer Translation (Cross-Layer Communication)
**When**: Explaining same concept to different audiences (CEO, manager, engineer)
**Technique**: Translate preserving core meaning while adjusting abstraction
**Example**: Explaining tech debt
- **CEO (30K)**: "We built quickly early on. Now growth slows 20% annually unless we invest $2M to modernize."
- **Manager (3K)**: "Monolithic architecture prevents independent team velocity. Migrate to microservices over 6 months."
- **Engineer (300 ft)**: "Extract user service from monolith. Create API layer, implement service mesh, migrate traffic."
**Process**: (1) Identify audience's layer, (2) Extract core message, (3) Translate using concepts/metrics relevant to that layer, (4) Maintain causal links across layers
### Pattern 4: Constraint Propagation (Top-Down)
**When**: High-level constraints must guide low-level decisions
**Mechanism**: Strategic constraints flow down, narrowing options at each layer
**Example**: Healthcare app design
- **30K constraint**: "HIPAA compliance is non-negotiable" (strategic)
- **3K derivation**: "All PHI must be encrypted, audit logs required, access control mandatory" (tactical)
- **300 ft implementation**: "Use AWS KMS for encryption, CloudTrail for audits, IAM for access" (operational)
**Guardrail**: Lower layers cannot violate upper constraints (e.g., operational decision to skip encryption violates strategic constraint)
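A guardrail check like this can be mechanical: tactical requirements derived from the strategic constraint become predicates that the operational plan must satisfy. The requirement keys below are hypothetical labels for the HIPAA example.

```python
# 3K ft: requirements derived from the 30K ft constraint "HIPAA compliance"
REQUIREMENTS = {
    "encryption_at_rest": True,
    "audit_logging": True,
    "access_control": True,
}

# 300 ft: concrete operational choices (AWS KMS, CloudTrail, IAM)
operational_plan = {
    "encryption_at_rest": True,
    "audit_logging": True,
    "access_control": True,
}

# Any required property the plan does not provide is a violation
violations = [k for k, required in REQUIREMENTS.items()
              if required and not operational_plan.get(k, False)]
assert not violations, f"Plan violates strategic constraint: {violations}"
```

An operational decision to "skip encryption for speed" would flip `encryption_at_rest` to `False` and trip the assertion, surfacing the violation before it ships.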
### Pattern 5: Emergent Property Recognition (Bottom-Up)
**When**: Lower-layer interactions create unexpected upper-layer behavior
**Example**: Team structure
- **300 ft**: "Each team owns microservice, deploys independently, uses Slack for coordination"
- **3K emergence**: "Conway's Law: architecture mirrors communication structure; slow cross-team features"
- **30K insight**: "Org structure determines system architecture; realign teams to product lines, not services"
**Process**: (1) Observe operational behavior, (2) Identify emerging patterns at tactical layer, (3) Recognize strategic implications, (4) Adjust strategy if needed
### Pattern 6: Consistency Checking Across Layers
**When**: Validating that all layers align (no contradictions)
**Check types**:
- **Upward consistency**: Do operations implement tactics? Do tactics achieve strategy?
- **Downward consistency**: Can strategy be executed with these tactics? Can tactics be implemented operationally?
- **Lateral consistency**: Do parallel tactical choices contradict? Do operational procedures conflict?
**Example inconsistency**: Strategy says "Move fast," tactics say "Extensive approval process," operations say "3-week release cycle" → Contradiction
**Fix**: Align layers. Either (1) change strategy ("Move carefully"), (2) change tactics ("Lightweight approvals"), or (3) change operations ("Daily releases")
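The upward and downward checks can be expressed as coverage tests over explicit layer mappings. This is a sketch with hypothetical data showing the "Move fast" contradiction resolved via options (2) and (3).

```python
def check_layers(principle_to_tactics: dict, tactic_to_ops: dict) -> list[str]:
    """Return consistency problems (empty list means the layers align)."""
    problems = []
    # Downward: every principle needs a tactic, every tactic an operation
    for principle, tactics in principle_to_tactics.items():
        if not tactics:
            problems.append(f"principle without tactics: {principle}")
        for t in tactics:
            if not tactic_to_ops.get(t):
                problems.append(f"tactic without operations: {t}")
    # Upward: every tactic should trace to a principle (no orphans)
    reachable = {t for ts in principle_to_tactics.values() for t in ts}
    for t in tactic_to_ops:
        if t not in reachable:
            problems.append(f"orphan tactic: {t}")
    return problems

# Aligned after the fix: lightweight approvals + daily releases serve "Move fast"
aligned = check_layers(
    {"Move fast": ["Lightweight approvals"]},
    {"Lightweight approvals": ["Daily releases"]},
)
assert aligned == []
```

Lateral checks (pairwise conflicts at the same layer) need domain knowledge and don't reduce to coverage, which is why the table above lists them separately.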
---
## Workflow
Use this structured approach when applying layered reasoning:
```
□ Step 1: Identify relevant layers and abstraction levels
□ Step 2: Define strategic layer (principles, invariants, constraints)
□ Step 3: Derive tactical layer (approaches that satisfy strategy)
□ Step 4: Design operational layer (concrete actions implementing tactics)
□ Step 5: Validate consistency across all layers
□ Step 6: Translate between layers for different audiences
□ Step 7: Iterate based on feedback from any layer
□ Step 8: Document reasoning at each layer
```
**Step 1: Identify relevant layers and abstraction levels** ([details](#1-identify-relevant-layers-and-abstraction-levels))
Determine how many layers are needed (typically 3-5). Map layers to domains: business (vision/strategy/execution), technical (architecture/design/code), organizational (mission/goals/tasks).

**Step 2: Define strategic layer** ([details](#2-define-strategic-layer))
Establish high-level principles, invariants, and constraints that must hold. These are non-negotiable and guide all lower layers.
**Step 3: Derive tactical layer** ([details](#3-derive-tactical-layer))
Generate approaches/policies/architectures that satisfy strategic constraints. Multiple tactical options may exist; choose based on tradeoffs.
**Step 4: Design operational layer** ([details](#4-design-operational-layer))
Create specific procedures, implementations, or actions that realize tactical choices. This is where execution happens.
**Step 5: Validate consistency across all layers** ([details](#5-validate-consistency-across-all-layers))
Check upward (do ops implement tactics?), downward (can strategy be executed?), and lateral (do parallel choices conflict?) consistency.
**Step 6: Translate between layers for different audiences** ([details](#6-translate-between-layers-for-different-audiences))
Communicate at appropriate abstraction level for each stakeholder. CEO needs strategic view, engineers need operational detail.
**Step 7: Iterate based on feedback from any layer** ([details](#7-iterate-based-on-feedback-from-any-layer))
If operational constraints make tactics infeasible, adjust tactics or strategy. If strategic shift occurs, propagate changes downward.
**Step 8: Document reasoning at each layer** ([details](#8-document-reasoning-at-each-layer))
Write explicit rationale at each layer explaining how it relates to layers above/below. Makes assumptions visible and aids future iteration.
---
## Critical Guardrails
### 1. Maintain Consistency Across Layers
**Danger**: Strategic goals contradict operational reality, or implementation violates principles
**Guardrail**: Regularly check upward, downward, and lateral consistency. Propagate changes bidirectionally (strategy changes → update tactics/ops; operational constraints → update tactics/strategy).
**Red flag**: "Our strategy is X but we actually do Y" signals layer mismatch
### 2. Don't Skip Layers When Communicating
**Danger**: Jumping from 30K to 300 ft confuses audiences, loses context
**Guardrail**: Move through layers sequentially. If explaining to executive, start 30K → 3K (stop there unless asked). If explaining to engineer, provide 30K context first, then dive to 300 ft.
**Test**: Can listener answer "why does this matter?" (links to upper layer) and "how do we do this?" (links to lower layer)
### 3. Each Layer Should Be Independently Useful
**Danger**: Layers that only make sense when combined, not standalone
**Guardrail**: Strategic layer should guide decisions even without seeing operations. Tactical layer should be understandable without code. Operational layer should be executable without re-deriving strategy.
**Principle**: Good layers can be consumed independently by different audiences
### 4. Limit Layers to 3-5 Levels
**Danger**: Too many layers create overhead; too few lose nuance
**Guardrail**: For most domains, 3 layers sufficient (strategy/tactics/operations or architecture/design/code). Complex domains may need 4-5 but rarely more.
**Rule of thumb**: Can you name each layer clearly? If not, you have too many.
### 5. Upper Layers Constrain, Lower Layers Implement
**Danger**: Treating layers as independent rather than hierarchical
**Guardrail**: Strategic layer sets constraints ("must be HIPAA compliant"). Tactical layer chooses approaches within constraints ("encryption + audit logs"). Operational layer implements ("AES-256 + CloudTrail"). Cannot violate upward.
**Anti-pattern**: Operational decision ("skip encryption for speed") violating strategic constraint ("HIPAA compliance")
### 6. Propagate Changes Bidirectionally
**Danger**: Strategic shift without updating tactics/ops, or operational constraint discovered but strategy unchanged
**Guardrail**: **Top-down**: Strategy changes → re-evaluate tactics → adjust operations. **Bottom-up**: Operational constraint → re-evaluate tactics → potentially adjust strategy.
**Example**: Strategy shift to "privacy-first" → Update tactics (end-to-end encryption) → Update ops (implement encryption). Or: Operational constraint (performance) → Tactical adjustment (different approach) → Strategic clarification ("privacy-first within performance constraints")
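The cascade/escalate rule can be made concrete with a small helper: given which layer changed and the direction, it lists which layers to re-evaluate next, in order. The helper name and three-layer list are hypothetical.

```python
LAYERS = ["strategic", "tactical", "operational"]

def layers_to_revisit(changed: str, direction: str) -> list[str]:
    """Layers needing re-evaluation after a change at `changed`.

    direction="down": a strategy shift cascades to lower layers.
    direction="up":   an operational constraint escalates to upper layers.
    """
    i = LAYERS.index(changed)
    return LAYERS[i + 1:] if direction == "down" else LAYERS[:i][::-1]

# Strategy shift to "privacy-first" -> re-evaluate tactics, then operations
assert layers_to_revisit("strategic", "down") == ["tactical", "operational"]
# Operational performance constraint -> re-evaluate tactics, then strategy
assert layers_to_revisit("operational", "up") == ["tactical", "strategic"]
```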
### 7. Make Assumptions Explicit at Each Layer
**Danger**: Implicit assumptions lead to inconsistency when assumptions violated
**Guardrail**: Document assumptions at each layer. Strategic: "Assuming competitive market." Tactical: "Assuming cloud infrastructure." Operational: "Assuming Python 3.9+."
**Benefit**: When assumptions change, know which layers need updating
### 8. Recognize Emergent Properties
**Danger**: Focusing only on designed properties, missing unintended consequences
**Guardrail**: Regularly observe bottom layer, look for emerging patterns at middle layer, consider strategic implications. Emergent properties can invalidate strategic assumptions.
**Example**: Microservices (operational) → Coordination overhead (tactical emergence) → Slower feature delivery (strategic failure if goal was speed)
---
## Quick Reference
### Layer Mapping by Domain
| Domain | Layer 1 (30K ft) | Layer 2 (3K ft) | Layer 3 (300 ft) |
|--------|------------------|-----------------|------------------|
| **Business** | Vision, mission | Strategy, objectives | Tactics, tasks |
| **Product** | Market positioning | Feature roadmap | User stories |
| **Technical** | Architecture principles | System design | Code implementation |
| **Organizational** | Culture, values | Policies, processes | Daily procedures |
### Consistency Check Questions
| Check Type | Question |
|------------|----------|
| **Upward** | Do these operations implement the tactics? Do tactics achieve strategy? |
| **Downward** | Can this strategy be executed with available tactics? Can tactics be implemented operationally? |
| **Lateral** | Do parallel tactical choices contradict each other? Do operational procedures conflict? |
### Translation Hints by Audience
| Audience | Layer | Focus | Metrics |
|----------|-------|-------|---------|
| **CEO / Board** | 30K ft | Why, outcomes, risk | Revenue, market share, strategic risk |
| **VP / Director** | 3K ft | What, approach, resources | Team velocity, roadmap, budget |
| **Manager / Lead** | 300-3K ft | How, execution, timeline | Sprint velocity, milestones, quality |
| **Engineer** | 300 ft | Implementation, details | Code quality, test coverage, performance |
---
## Resources
### Navigation to Resources
- [**Templates**](resources/template.md): Layered reasoning document template, consistency check template, cross-layer communication template
- [**Methodology**](resources/methodology.md): Layer design principles, consistency validation techniques, emergence detection, bidirectional propagation
- [**Rubric**](resources/evaluators/rubric_layered_reasoning.json): Evaluation criteria for layered reasoning quality (10 criteria)
### Related Skills
- **abstraction-concrete-examples**: For moving between abstract and concrete (related but less structured than layers)
- **decomposition-reconstruction**: For breaking down complex systems (complements layered approach)
- **communication-storytelling**: For translating between audiences at different layers
- **adr-architecture**: For documenting architectural decisions across layers
- **alignment-values-north-star**: For strategic layer definition (values → strategy)
---
## Examples in Context
### Example 1: SaaS Product Strategy
**30K (Strategic)**: "Become the easiest CRM for small businesses" (positioning)
**3K (Tactical)**: "Simple UI, 5-minute setup, mobile-first, $20/user pricing, self-serve onboarding"
**300 ft (Operational)**: "React app, OAuth for auth, Stripe for billing, onboarding flow: signup → import contacts → send first email"
**Consistency check**: Does $20 pricing support "easiest" (yes, low barrier)? Does 5-minute setup work with current implementation (measure in practice)? Does mobile-first align with React architecture (yes)?
### Example 2: Technical Architecture
**30K**: "Highly available system with <1% downtime, supports 10× traffic growth"
**3K**: "Multi-region deployment, auto-scaling, circuit breakers, blue-green deployments"
**300 ft**: "AWS multi-AZ, ECS Fargate with target tracking, Istio circuit breakers, CodeDeploy blue-green"
**Emergence**: Observed: cross-region latency 200ms → Tactical adjustment: regional data replication → Strategic clarification: "High availability within regions, eventual consistency across regions"
### Example 3: Organizational Change
**30K**: "Build customer-centric culture where customer feedback drives decisions"
**3K**: "Monthly customer advisory board, NPS surveys after each interaction, customer support KPIs in exec dashboards"
**300 ft**: "Schedule CAB meetings first Monday monthly, automated NPS via Delighted after ticket close, Looker dashboard with CS CSAT by rep"
**Consistency**: Does monthly CAB support "customer-centric" (or too infrequent)? Do support KPIs incentivize right behavior (check for gaming)? Does automation reduce personal touch (potential conflict)?

{
"skill_name": "layered-reasoning",
"version": "1.0",
"criteria": [
{
"name": "Layer Independence",
"description": "Each layer is independently useful without requiring other layers to make sense",
"ratings": {
"1": "Layers tightly coupled, can't understand one without seeing all others",
"3": "Layers mostly independent but some concepts leak between layers",
"5": "Each layer fully modular: strategic useful to executives, tactical to managers, operational to engineers independently"
}
},
{
"name": "Appropriate Number of Layers",
"description": "Uses 3-5 layers optimally; not too few (loses nuance) or too many (overhead)",
"ratings": {
"1": "Too few (<3) or too many (>5) layers; either oversimplified or excessively complex",
"3": "Acceptable number of layers but some redundancy or gaps between levels",
"5": "Optimal 3-5 layers, each clearly named with distinct role and appropriate abstraction gap"
}
},
{
"name": "Upward Consistency",
"description": "Operations implement tactics, tactics achieve strategy (bottom-up validation)",
"ratings": {
"1": "Operations don't implement tactics, or tactics don't achieve strategy; significant gaps or orphans",
"3": "Most operations map to tactics and tactics to strategy, but some orphans or unclear mappings",
"5": "Complete upward consistency: every operation implements a tactic, every tactic achieves strategic principle"
}
},
{
"name": "Downward Consistency",
"description": "Strategy can be executed with chosen tactics, tactics implementable operationally",
"ratings": {
"1": "Strategy infeasible with tactics, or tactics can't be implemented; missing critical tactics/operations",
"3": "Strategy mostly achievable but some gaps in tactics or operations; feasibility concerns",
"5": "Complete downward consistency: strategy achievable with tactics, tactics implementable with available resources/skills"
}
},
{
"name": "Lateral Consistency",
"description": "No contradictions among parallel choices at same layer (e.g., tactics don't conflict)",
"ratings": {
"1": "Multiple contradictions at same layer (e.g., conflicting tactics or incompatible operations)",
"3": "Mostly consistent but some tension between parallel choices requiring tradeoffs",
"5": "Full lateral consistency: all parallel choices compatible or explicitly prioritized with clear tradeoffs"
}
},
{
"name": "Abstraction Gap Size",
"description": "Each layer translates to 3-10 elements at layer below; not too large (confusing) or too small (redundant)",
"ratings": {
"1": "Abstraction gaps inappropriate: jumps too large (>20 elements) or too small (<2 elements)",
"3": "Acceptable gaps but some layers have irregular expansion (e.g., 1→30 or 1→1)",
"5": "Optimal gaps: each strategic element → 3-10 tactical, each tactical → 3-10 operational"
}
},
{
"name": "Bidirectional Propagation",
"description": "Changes propagate both top-down (strategy → ops) and bottom-up (constraints → strategy)",
"ratings": {
"1": "One-way only: strategy changes don't update ops, or operational constraints ignored at strategic layer",
"3": "Bidirectional but incomplete: some changes propagated, others missed or delayed",
"5": "Full bidirectional propagation: strategic changes cascade down, operational constraints escalate up with clarification"
}
},
{
"name": "Emergence Recognition",
"description": "Bottom-up patterns identified and strategic implications recognized",
"ratings": {
"1": "No emergence detection; unintended consequences surprise stakeholders; reactive only",
"3": "Some emergence recognized post-hoc but not systematically monitored; pattern recognition ad-hoc",
"5": "Proactive emergence detection: operational behavior monitored, patterns identified, strategic implications assessed"
}
},
{
"name": "Clear Layer Contracts",
"description": "Explicit dependencies and requirements documented between layers",
"ratings": {
"1": "Implicit contracts; unclear what each layer requires from others or guarantees upward",
"3": "Contracts partially documented but some assumptions implicit or dependencies unclear",
"5": "Explicit contracts: strategic invariants clear, tactical requirements documented, operational guarantees specified"
}
},
{
"name": "Communication Translation",
"description": "Content translated appropriately for different audiences at different abstraction levels",
"ratings": {
"1": "Same content for all audiences; executives get operational details or engineers get only strategy",
"3": "Some translation but inconsistent; occasional abstraction mismatches for audience",
"5": "Perfect translation: executives get strategic view, managers tactical, engineers operational; all maintain causal links"
}
}
],
"guidance_by_type": [
{
"type": "System Architecture",
"focus_areas": ["Layer Independence", "Upward Consistency", "Clear Layer Contracts"],
"target_score": "≥4.0",
"rationale": "Architectural layers must be independently useful and implement each other consistently. Contracts prevent integration failures."
},
{
"type": "Strategic Planning",
"focus_areas": ["Downward Consistency", "Bidirectional Propagation", "Communication Translation"],
"target_score": "≥4.0",
"rationale": "Strategy must be executable (downward) and responsive to operational reality (upward). Translation ensures stakeholder alignment."
},
{
"type": "Organizational Design",
"focus_areas": ["Emergence Recognition", "Bidirectional Propagation", "Lateral Consistency"],
"target_score": "≥3.5",
"rationale": "Org changes create emergent behavior (Conway's Law). Lateral consistency prevents conflicting policies across departments."
},
{
"type": "Technical Explanation",
"focus_areas": ["Communication Translation", "Abstraction Gap Size", "Layer Independence"],
"target_score": "≥4.0",
"rationale": "Explanations at different depths require appropriate abstraction gaps and clear translation without losing coherence."
},
{
"type": "Product Development",
"focus_areas": ["Upward Consistency", "Downward Consistency", "Emergence Recognition"],
"target_score": "≥3.5",
"rationale": "Product vision must translate to executable features (down) and user feedback must inform strategy (up via emergence)."
}
],
"guidance_by_complexity": [
{
"complexity": "Simple (3 layers, single domain)",
"target_score": "≥3.5",
"priority_criteria": ["Upward Consistency", "Downward Consistency", "Appropriate Number of Layers"],
"notes": "Focus on basic consistency checks. Three layers (strategy/tactics/ops) sufficient. Independence less critical for simple domains."
},
{
"complexity": "Moderate (4 layers or cross-functional)",
"target_score": "≥4.0",
"priority_criteria": ["Upward Consistency", "Downward Consistency", "Lateral Consistency", "Communication Translation"],
"notes": "Cross-functional requires strong translation between audiences. Lateral consistency critical when multiple teams involved."
},
{
"complexity": "Complex (5 layers, multi-domain, large org)",
"target_score": "≥4.5",
"priority_criteria": ["All criteria essential", "Bidirectional Propagation", "Emergence Recognition", "Clear Layer Contracts"],
"notes": "Large scale demands formal contracts, systematic emergence detection, and rigorous bidirectional propagation to maintain alignment."
}
],
"common_failure_modes": [
{
"name": "Layer Skipping",
"symptom": "Jump directly from strategy to operations without tactical layer (e.g., 'We need to scale' → 'Deploy Kubernetes')",
"detection": "Gap in reasoning: no intermediate 'what' layer between 'why' (strategy) and 'how' (operations)",
"fix": "Insert tactical layer: 'Scale' (why) → 'Container orchestration' (what) → 'Deploy Kubernetes' (how)"
},
{
"name": "Tight Coupling",
"symptom": "Can't understand strategic layer without seeing code, or operational layer without re-deriving strategy",
"detection": "Stakeholders can't consume single layer independently; requires full context to make sense",
"fix": "Make each layer standalone: strategic principles clear without ops, operations executable without strategy rederivation"
},
{
"name": "Too Many Layers",
"symptom": "6+ layers with redundant distinctions (Vision, Mission, Strategy, Goals, Objectives, Tactics, Tasks, Subtasks...)",
"detection": "People debate which layer something belongs to; layers blur together; can't clearly name each layer's role",
"fix": "Consolidate to 3-5 layers by merging redundant levels (e.g., Goals + Objectives → single Tactical layer)"
},
{
"name": "Orphan Operations",
"symptom": "Operations that don't implement any tactic (e.g., 'Cache user data unencrypted' when tactics are all about security)",
"detection": "Trace operation upward: which tactic does it implement? If answer is 'none', it's an orphan",
"fix": "Either add tactic it implements (revealing implicit strategy) or remove operation (violates strategy)"
},
{
"name": "Infeasible Strategy",
"symptom": "Strategic principle can't be achieved with available tactics/operations (e.g., 'Sub-second latency globally' but no CDN tactics)",
"detection": "Downward trace from strategy: can we execute this with chosen tactics? Do we have operations to implement?",
"fix": "Either add missing tactics/operations or refine strategy to acknowledge constraints ('Sub-second in primary regions')"
},
{
"name": "Lateral Contradictions",
"symptom": "Parallel tactics conflict (e.g., 'Microservices for independence' + 'Shared database for simplicity' → incompatible)",
"detection": "Pairwise comparison of all choices at same layer: do any contradict or compete for resources?",
"fix": "Resolve by prioritizing based on strategic importance or refining to make compatible (regional databases)"
},
{
"name": "One-Way Propagation",
"symptom": "Strategy changes but operations don't update, or operational constraints discovered but strategy unchanged",
"detection": "Strategic shift announced but teams still executing old tactics; or performance issues known but strategy still claims 'sub-second'",
"fix": "Implement bidirectional propagation: cascade strategic changes down, escalate operational constraints up for clarification"
},
{
"name": "Missed Emergence",
      "symptom": "Unintended consequences surprise stakeholders (e.g., microservices slowed delivery due to coordination overhead, per Conway's Law)",
"detection": "Operational behavior doesn't match tactical expectations; patterns emerge that weren't designed",
"fix": "Monitor operational metrics, identify emerging tactical patterns, recognize strategic implications, adjust strategy if needed"
},
{
"name": "Wrong Abstraction for Audience",
"symptom": "Presenting operational details to executives (AWS regions) or only strategic vision to engineers ('we need to scale')",
"detection": "Audience confused, asks for different level of detail, or can't take action based on info provided",
"fix": "Translate to appropriate layer: executives get strategic (why/outcomes), engineers get operational (how/tech stack)"
},
{
"name": "Implicit Layer Contracts",
"symptom": "Unclear what each layer requires from others; integration failures when layers don't meet unstated assumptions",
"detection": "Frequent misalignment discovered late; 'I thought tactical layer X meant Y' disagreements",
"fix": "Document explicit contracts: strategic invariants, tactical requirements, operational guarantees at each layer boundary"
}
],
"overall_guidance": {
"excellent": "Score ≥4.5: Exemplary layered reasoning. Clear, independent layers (3-5) with full consistency (upward/downward/lateral). Bidirectional propagation maintains alignment. Emergence proactively detected. Communication perfectly translated for all audiences.",
"good": "Score 3.5-4.4: Solid layered reasoning. Mostly consistent layers with some minor gaps. Bidirectional propagation exists but could be more systematic. Good translation for primary audiences.",
"needs_improvement": "Score <3.5: Significant gaps. Layer skipping, tight coupling, contradictions, one-way propagation, or missed emergence. Communication confuses audiences with wrong abstraction level.",
"key_principle": "Layered reasoning maintains consistency: lower layers implement upper layers, upper layers constrain lower layers. Good layers are independently useful, appropriately numerous (3-5), and bidirectionally propagate changes."
}
}

# Layered Reasoning: Methodology
Advanced techniques for multi-level reasoning, layer design, consistency validation, emergence detection, and bidirectional change propagation.
---
## 1. Layer Design Principles
### Choosing the Right Number of Layers
**Rule of thumb**: 3-5 layers optimal for most domains
**Too few layers** (1-2):
- **Problem**: Jumps abstraction too quickly, loses nuance
- **Example**: "Vision: Scale globally" → "Code: Deploy to AWS regions" (missing tactical layer: multi-region strategy, data sovereignty)
- **Symptom**: Strategic and operational teams can't communicate; implementation doesn't align with vision
**Too many layers** (6+):
- **Problem**: Excessive overhead, confusion about which layer to use
- **Example**: Vision → Strategy → Goals → Objectives → Tactics → Tasks → Subtasks → Actions (8 layers, redundant)
- **Symptom**: People debate which layer something belongs to; layers blur together
**Optimal**:
- **3 layers**: Strategic (why) → Tactical (what) → Operational (how)
- **4 layers**: Vision (purpose) → Strategy (approach) → Tactics (methods) → Operations (execution)
- **5 layers**: Vision → Strategy → Programs → Projects → Tasks (for large organizations)
**Test**: Can you clearly name each layer and explain its role? If not, simplify.
### Layer Independence (Modularity)
**Principle**: Each layer should be independently useful without requiring other layers
**Good layering** (modular):
- **Strategic**: "Customer privacy first" (guides decisions even without seeing code)
- **Tactical**: "Zero-trust architecture" (understandable without knowing AWS KMS details)
- **Operational**: "Implement AES-256 encryption" (executable without re-deriving strategy)
**Bad layering** (coupled):
- **Strategic**: "Use AES-256" (too operational for strategy)
- **Tactical**: "Deploy to AWS" (missing why: scalability, compliance, etc.)
- **Operational**: "Implement privacy" (too vague without tactical guidance)
**Test**: Show each layer independently to different stakeholders. Strategic layer to CEO → makes sense alone. Operational layer to engineer → executable alone.
### Layer Abstraction Gaps
**Principle**: Each layer should be roughly one abstraction level apart
**Optimal gap**: Each layer translates to 3-10 elements at layer below
**Example**: Good abstraction gap
- **Strategic**: "High availability" (1 principle)
- **Tactical**: "Multi-region, auto-scaling, circuit breakers" (3 tactics)
- **Operational**: Multi-region = "Deploy AWS us-east-1 + eu-west-1 + ap-southeast-1" (3 regions); Auto-scaling = "ECS target tracking on CPU 70%" (1 config); Circuit breakers = "Istio circuit breaker 5xx >50%" (1 config) → Total 5 operational items from 3 tactical
**Too large gap**:
- **Strategic**: "High availability" →
- **Operational**: "Deploy us-east-1, eu-west-1, ap-southeast-1, configure ECS target tracking CPU 70%, configure Istio..." (10+ items, no intermediate)
- **Problem**: Can't understand how strategy maps to operations; no tactical layer to adjust
**Test**: Can you translate each strategic element to 3-10 tactical elements? Can you translate each tactical element to 3-10 operational steps?
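The fan-out test can be automated once per-element expansion counts are recorded. This is a sketch; the function name and the example counts are hypothetical, and the 3-10 band comes from the rule of thumb above.

```python
def gap_warnings(fanout: dict[str, int], low: int = 3, high: int = 10) -> list[str]:
    """Flag elements whose fan-out into the layer below falls outside [low, high]."""
    return [f"{name}: fan-out {n} outside {low}-{high}"
            for name, n in fanout.items() if not low <= n <= high]

# Good gap: "High availability" expands into 3 tactics
assert gap_warnings({"High availability": 3}) == []
# Too large: 12 operational items with no tactical layer in between
assert gap_warnings({"High availability": 12}) == \
    ["High availability: fan-out 12 outside 3-10"]
```

A warning on the low side (fan-out 1) suggests merging redundant layers; on the high side, inserting an intermediate layer.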
---
## 2. Consistency Validation Techniques
### Upward Consistency (Bottom-Up)
**Question**: Do lower layers implement upper layers?
**Validation method**:
1. **Trace operational procedure** → Which tactical approach does it implement?
2. **Aggregate tactical approaches** → Which strategic principle do they achieve?
3. **Check coverage**: Does every operation map to a tactic? Does every tactic map to strategy?
**Example**:
- **Ops**: "Encrypt with AES-256" → Tactical: "End-to-end encryption" → Strategic: "Customer privacy"
- **Ops**: "Deploy Istio mTLS" → Tactical: "Zero-trust architecture" → Strategic: "Customer privacy"
- **Coverage check**: All ops map to tactics ✓, all tactics map to privacy principle ✓
**Gap detection**:
- **Orphan operation**: Operation that doesn't implement any tactic (e.g., "Cache user data unencrypted" contradicts zero-trust)
- **Orphan tactic**: Tactic that doesn't achieve any strategic principle (e.g., "Use GraphQL" doesn't map to "privacy" or "scale")
**Fix**: Remove orphan operations, add missing tactics if operations reveal implicit strategy.
### Downward Consistency (Top-Down)
**Question**: Can upper layers be executed with lower layers?
**Validation method**:
1. **For each strategic principle** → List tactics that would achieve it
2. **For each tactic** → List operations required to implement it
3. **Check feasibility**: Can we actually execute these operations given constraints (budget, time, team skills)?
**Example**:
- **Strategy**: "HIPAA compliance" →
- **Tactics needed**: "Encryption, audit logs, access control" →
- **Operations needed**: "Deploy AWS KMS, enable CloudTrail, implement IAM policies" →
- **Feasibility**: Team has AWS expertise ✓, budget allows ✓, timeline feasible ✓
**Gap detection**:
- **Infeasible tactic**: Tactic can't be implemented operationally (e.g., "Real-time fraud detection" but team lacks ML expertise)
- **Missing tactic**: Strategic principle without sufficient tactics (e.g., "Privacy" but no encryption tactics)
**Fix**: Add missing tactics, adjust strategy if tactics infeasible, hire/train if skill gaps.
### Lateral Consistency (Same-Layer)
**Question**: Do parallel choices at the same layer contradict?
**Validation method**:
1. **List all choices at each layer** (e.g., all tactical approaches)
2. **Pairwise comparison**: For each pair, do they conflict?
3. **Check resource conflicts**: Do they compete for same resources (budget, team, time)?
**Example**: Tactical layer lateral check
- **Tactic A**: "Microservices for scale" vs **Tactic B**: "Monorepo for simplicity"
- **Conflict?** No, microservices (runtime) + monorepo (code organization) compatible
- **Tactic A**: "Multi-region deployment" vs **Tactic C**: "Single database"
- **Conflict?** Yes, multi-region requires a distributed database (latency, sync)
**Resolution**:
- **Compatible**: Keep both (e.g., microservices + monorepo)
- **Incompatible**: Choose one based on strategic priority (e.g., multi-region wins if "availability" > "simplicity")
- **Refine**: Adjust to make compatible (e.g., "Multi-region with regional databases + eventual consistency")
### Formal Consistency Checking
**Dependency graph approach**:
1. **Build dependency graph**:
- Nodes = elements at all layers (strategic principles, tactical approaches, operational procedures)
- Edges = "implements" (upward) or "requires" (downward) relationships
2. **Check properties**:
- **No orphans**: Every node has at least one edge (connects to another layer)
- **No cycles**: Strategic A → Tactical B → Operational C → Tactical D → Strategic A (circular dependency = contradiction)
- **Full path**: Every strategic principle has path to at least one operational procedure
3. **Identify inconsistencies**:
- **Orphan node**: E.g., tactical approach not implementing any strategy
- **Cycle**: E.g., "We need X to implement Y, but Y is required for X"
- **Dead end**: Strategy with no path to operations (can't be executed)
**Example graph**:
```
Strategic: Privacy (S1)
Tactical: Encryption (T1), Access Control (T2)
Operational: AES-256 (O1), IAM policies (O2)
```
- **Check**: S1 → T1 → O1 (complete path ✓), S1 → T2 → O2 (complete path ✓)
- **No orphans** ✓, **No cycles** ✓, **Full coverage** ✓
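The graph check above can be sketched in pure Python. The element names mirror the example; the three check functions are one possible implementation, not a fixed algorithm:

```python
# Sketch: validate a layered dependency graph for orphans, cycles,
# and full strategic→operational coverage. Edges point downward
# ("is implemented by"); all names are illustrative.
edges = {
    "S1: Privacy": ["T1: Encryption", "T2: Access Control"],
    "T1: Encryption": ["O1: AES-256"],
    "T2: Access Control": ["O2: IAM policies"],
    "O1: AES-256": [],
    "O2: IAM policies": [],
}
strategic = ["S1: Privacy"]
operational = {"O1: AES-256", "O2: IAM policies"}

def orphans(edges):
    # A node is an orphan if it has no edge in either direction.
    linked = set()
    for parent, children in edges.items():
        if children:
            linked.add(parent)
        linked.update(children)
    return [n for n in edges if n not in linked]

def has_cycle(edges):
    # Standard DFS with white/gray/black coloring.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}
    def visit(n):
        color[n] = GRAY
        for c in edges.get(n, []):
            if color.get(c) == GRAY or (color.get(c, WHITE) == WHITE and visit(c)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in edges)

def reaches_operations(node):
    # Dead-end check: every strategic node must reach some operation.
    if node in operational:
        return True
    return any(reaches_operations(c) for c in edges.get(node, []))

assert orphans(edges) == []                             # no orphan elements
assert not has_cycle(edges)                             # no circular dependencies
assert all(reaches_operations(s) for s in strategic)    # full path S → O
```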
---
## 3. Emergence Detection
### Bottom-Up Pattern Recognition
**Definition**: Lower-layer interactions create unexpected upper-layer behavior not explicitly designed
**Process**:
1. **Observe operational behavior** (metrics, incidents, user feedback)
2. **Identify patterns** that occur repeatedly (not one-offs)
3. **Aggregate to tactical layer**: What systemic issue causes this pattern?
4. **Recognize strategic implication**: Does this invalidate strategic assumptions?
**Example 1**: Conway's Law emergence
- **Ops observation**: Cross-team features take 3× longer than single-team features
- **Tactical pattern**: Microservices owned by different teams require extensive coordination
- **Strategic implication**: "Org structure determines architecture; current structure slows key features" → Realign teams to product streams, not services
**Example 2**: Performance vs. security tradeoff
- **Ops observation**: Encryption adds 50ms latency, users complain about slowness
- **Tactical pattern**: Security measures consistently hurt performance
- **Strategic implication**: Original strategy "Security + speed" incompatible → Refine: "Security first, optimize critical paths to <100ms"
### Leading Indicators for Emergence
**Monitor these signals** to catch emergence early:
1. **Increasing complexity at operational layer**: More workarounds, special cases, exceptions
- **Meaning**: Tactics may not fit reality; strategic assumptions may be wrong
2. **Frequent tactical adjustments**: Changing approaches every sprint
- **Meaning**: Strategy unclear or infeasible; need strategic clarity
3. **Misalignment between metrics**: Strategic KPI improving but operational satisfaction dropping
- **Example**: Revenue up (strategic) but engineer productivity down (operational) → Hidden cost emerging
4. **Repeated failures of same type**: Same class of incident/bug over and over
- **Meaning**: Tactical approach has systematic flaw; may require strategic shift
### Emergence vs. Noise
**Emergence** (systematic pattern):
- Repeats across multiple contexts
- Persists over time (not transient)
- Has clear causal mechanism at lower layer
**Noise** (random variance):
- One-off occurrence
- Transient (disappears quickly)
- No clear causal mechanism
**Test**: Can you explain the mechanism? ("Microservices cause coordination overhead because teams must sync on interfaces" = emergence). If there is no clear mechanism, it is likely noise.
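The repeats-across-contexts and persists-over-time criteria can be sketched as a rough filter. The thresholds and record format are assumptions; the causal-mechanism judgment remains human:

```python
# Sketch: flag a pattern as candidate emergence if it repeats across
# multiple contexts AND persists over time. Thresholds are illustrative.
from datetime import date

def classify(observations, min_contexts=2, min_span_days=30):
    """observations: list of (context, date) pairs for one recurring pattern."""
    contexts = {ctx for ctx, _ in observations}
    dates = [d for _, d in observations]
    span = (max(dates) - min(dates)).days if dates else 0
    if len(contexts) >= min_contexts and span >= min_span_days:
        return "candidate emergence (now look for a causal mechanism)"
    return "likely noise"

obs = [("team-a", date(2024, 1, 5)), ("team-b", date(2024, 2, 20)),
       ("team-c", date(2024, 3, 1))]
print(classify(obs))  # → candidate emergence (now look for a causal mechanism)
```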
---
## 4. Bidirectional Change Propagation
### Top-Down Propagation (Strategy Changes)
**Scenario**: Strategic shift (market change, new regulation, company pivot)
**Process**:
1. **Document strategic change**: What changed and why?
2. **Identify affected tactics**: Which tactical approaches depended on old strategy?
3. **Re-evaluate tactics**: Do current tactics still achieve new strategy? If not, generate alternatives.
4. **Cascade to operations**: Update operational procedures to implement new tactics.
5. **Validate consistency**: Check upward/downward/lateral consistency with new strategy.
**Example**: Strategic shift from "Speed" to "Trust"
- **Strategic change**: "Move fast and break things" → "Build trust through reliability"
- **Tactical impact**:
- OLD: "Deploy daily, fix issues post-deploy" → NEW: "Staged rollouts, canary testing, rollback plans"
- OLD: "Ship MVP, iterate" → NEW: "Comprehensive testing, beta programs, polish before GA"
- **Operational impact**:
- Update CI/CD: Add pre-deploy tests, canary stages
- Update sprint process: Add QA phase, user acceptance testing
- Update monitoring: Add error budgets, SLO tracking
**Propagation timeline**:
- **Week 1**: Communicate strategic change, get alignment
- **Week 2-3**: Re-evaluate tactics, design new approaches
- **Week 4-8**: Update operational procedures, train teams
- **Ongoing**: Monitor consistency, adjust as needed
### Bottom-Up Propagation (Operational Constraints)
**Scenario**: Operational constraint discovered (technical limitation, resource shortage, performance issue)
**Process**:
1. **Document operational constraint**: What's infeasible and why?
2. **Evaluate tactical impact**: Can we adjust tactics to work around constraint?
3. **If no tactical workaround**: Clarify or adjust strategy to acknowledge constraint.
4. **Communicate upward**: Ensure stakeholders understand strategic implications of operational reality.
**Example**: Performance constraint discovered
- **Operational constraint**: "Encryption adds 50ms latency, exceeds <100ms SLA"
- **Tactical re-evaluation**:
- Option A: Optimize encryption (caching, hardware acceleration) → Still 20ms overhead
- Option B: Selective encryption (only sensitive fields) → Violates "encrypt everything" tactic
- Option C: Lighter encryption (AES-128 instead of AES-256) → Security tradeoff
- **Strategic clarification** (if needed): Original strategy: "<100ms latency for all APIs"
- **Refined strategy**: "<100ms for user-facing APIs, <200ms for internal APIs where security critical"
- **Rationale**: Accept latency cost for security on sensitive paths, optimize user-facing
**Escalation decision tree**:
1. **Can tactical adjustment solve?** (e.g., optimize) → YES: Tactical change only
2. **Tactical adjustment insufficient?** → Escalate to strategic layer
3. **Strategic constraint absolute?** (e.g., compliance non-negotiable) → Accept operational cost or change tactics
4. **Strategic constraint negotiable?** → Refine strategy to acknowledge operational reality
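The escalation tree above can be encoded as a small function. The inputs are boolean judgments made by the team; the return strings are hypothetical labels:

```python
# Sketch: encode the escalation decision tree for operational constraints.
# Inputs are human judgments; names and outcomes are illustrative.
def escalate(tactical_fix_possible, strategic_constraint_absolute):
    if tactical_fix_possible:
        return "tactical change only (e.g., optimize)"
    # Tactical adjustment insufficient: escalate to the strategic layer.
    if strategic_constraint_absolute:
        return "accept operational cost or change tactics"
    return "refine strategy to acknowledge operational reality"

# Encryption-latency example: no tactical fix, latency SLA is negotiable
print(escalate(False, False))  # → refine strategy to acknowledge operational reality
```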
### Change Impact Analysis
**Before propagating any change**, analyze impact:
**Impact dimensions**:
1. **Scope**: How many layers affected? (1 layer = local change, 3 layers = systemic)
2. **Magnitude**: How big are changes at each layer? (minor adjustment vs. complete redesign)
3. **Timeline**: How long to propagate changes fully? (1 week vs. 6 months)
4. **Risk**: What breaks if change executed poorly? (downtime, customer trust, team morale)
**Example**: Impact analysis of "Strategic shift to privacy-first"
- **Scope**: All 3 layers (strategic, tactical, operational)
- **Magnitude**: High (major tactical changes: add encryption, access control; major operational changes: new infrastructure)
- **Timeline**: 6 months (implement encryption Q1, access control Q2, audit Q3)
- **Risk**: High (customer data at risk if done wrong, compliance penalties if incomplete)
- **Decision**: Phased rollout with validation gates at each phase
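The four impact dimensions above can be combined into a rough score to triage changes. The 1-3 scales and verdict thresholds are assumptions, not fixed rules:

```python
# Sketch: score change impact on scope, magnitude, timeline, risk
# (1 = low, 3 = high). Thresholds for the verdict are illustrative.
def impact_verdict(scope, magnitude, timeline, risk):
    total = scope + magnitude + timeline + risk  # each in 1..3
    if total >= 10:
        return "systemic: phased rollout with validation gates"
    if total >= 7:
        return "significant: plan explicitly, monitor closely"
    return "local: routine change process"

# Privacy-first shift: all 3 layers, high magnitude, 6 months, high risk
print(impact_verdict(3, 3, 3, 3))  # → systemic: phased rollout with validation gates
```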
---
## 5. Advanced Topics
### Layer Invariants and Contracts
**Concept**: Each layer establishes "contracts" that lower layers must satisfy
**Strategic layer invariants**:
- Non-negotiable constraints (e.g., "HIPAA compliance", "zero downtime")
- These are **invariants**: never violated regardless of tactical/operational choices
**Tactical layer contracts**:
- Promises to strategic layer (e.g., "Encryption ensures privacy")
- Requirements for operational layer (e.g., "Operations must implement AES-256")
**Operational layer contracts**:
- Guarantees to tactical layer (e.g., "KMS provides AES-256 encryption")
**Validation**: If operational layer can't satisfy tactical contract, either change operations or change tactics (which may require strategic clarification).
### Cross-Cutting Concerns
**Problem**: Some concerns span all layers (logging, security, monitoring)
**Approaches**:
**Option 1: Separate layers for cross-cutting concern**
- **Strategic (Security)**: "Defense in depth"
- **Tactical (Security)**: "WAF, encryption, access control"
- **Operational (Security)**: "Deploy WAF rules, implement RBAC"
- **Pro**: Clear security focus
- **Con**: Parallel structure, coordination overhead
**Option 2: Integrate into each layer**
- **Strategic**: "Privacy-first product" (security embedded)
- **Tactical**: "Zero-trust architecture" (security included in tactics)
- **Operational**: "Implement encryption" (security in operations)
- **Pro**: Unified structure
- **Con**: Security may get diluted across layers
**Recommendation**: Use Option 1 for critical cross-cutting concerns (security, compliance), Option 2 for less critical (logging, monitoring).
### Abstraction Hierarchies vs. Layered Reasoning
**Abstraction hierarchy** (programming):
- Layers hide implementation details (API → library → OS → hardware)
- Lower layers serve upper layers (hardware serves OS, OS serves library)
- **Focus**: Information hiding, modularity
**Layered reasoning** (thinking framework):
- Layers represent abstraction levels (strategy → tactics → operations)
- Lower layers implement upper layers; upper layers constrain lower layers
- **Focus**: Consistency, alignment, translation
**Key difference**: Abstraction hierarchy is **unidirectional** (call downward, hide details upward). Layered reasoning is **bidirectional** (implement downward, feedback upward).
### Formal Specifications at Each Layer
**Strategic layer**: Natural language principles + constraints
- "System must be HIPAA compliant"
- "Support 10× traffic growth"
**Tactical layer**: Architecture diagrams, decision records, policies
- ADR: "We will use microservices for scalability"
- Policy: "All services must implement health checks"
**Operational layer**: Code, runbooks, configuration
- Code: `encrypt(data, key)`
- Runbook: "If service fails health check, restart pod"
**Validation**: Can you trace from code (operational) → policy (tactical) → principle (strategic)?
---
## Common Mistakes and Solutions
| Mistake | Symptom | Solution |
|---------|---------|----------|
| **Skipping layers** | Jump from strategy to code without tactics | Insert tactical layer; design approaches before coding |
| **Layer coupling** | Can't understand one layer without others | Make each layer independently useful with clear contracts |
| **Too many layers** | Confusion about which layer to use, redundancy | Consolidate to 3-5 layers; eliminate redundant levels |
| **Ignoring emergence** | Surprised by unintended consequences | Monitor operational behavior; recognize emerging tactical patterns |
| **One-way propagation** | Strategy changes but operations don't update | Use bidirectional propagation; cascade changes downward |
| **No consistency checks** | Misalignment between layers discovered late | Regular upward/downward/lateral consistency validation |
| **Implicit assumptions** | Assumptions change, layers break | Document assumptions explicitly at each layer |
| **Orphan elements** | Operations/tactics not implementing strategy | Build dependency graph; ensure every element maps upward |

---
# Layered Reasoning Templates
Quick-start templates for structuring multi-level reasoning, checking consistency across layers, and translating between abstraction levels.
---
## Template 1: 3-Layer Reasoning Document
**When to use**: Designing systems, planning strategy, explaining concepts at multiple depths
### Layered Reasoning Template
**Topic/System**: [Name of what you're analyzing]
**Purpose**: [Why this layering matters]
**Date**: [Date]
---
### Layer 1: Strategic (30,000 ft) — WHY
**Core Principles/Invariants**:
1. [Principle 1, e.g., "Customer privacy is non-negotiable"]
2. [Principle 2, e.g., "System must scale to 10× current load"]
3. [Principle 3, e.g., "Developer productivity is paramount"]
**Strategic Constraints**:
- [Constraint 1, e.g., "Must comply with GDPR"]
- [Constraint 2, e.g., "Cannot exceed $X budget"]
- [Constraint 3, e.g., "Launch within 6 months"]
**Assumptions**:
- [Assumption 1, e.g., "Market remains competitive"]
- [Assumption 2, e.g., "Cloud infrastructure available"]
**Success Criteria (Strategic)**:
- [Metric 1, e.g., "Market leader in trust ratings"]
- [Metric 2, e.g., "Support 100M users"]
---
### Layer 2: Tactical (3,000 ft) — WHAT
**Approaches/Architectures** (that satisfy strategic layer):
**Approach 1**: [Name, e.g., "Microservices Architecture"]
- **Satisfies**: [Which strategic principles, e.g., "Scales to 10×, enables team independence"]
- **Tactics**:
- [Tactic 1, e.g., "Service mesh for inter-service communication"]
- [Tactic 2, e.g., "Event-driven architecture for async operations"]
- [Tactic 3, e.g., "API gateway for external requests"]
- **Tradeoffs**: [What's sacrificed, e.g., "Increased operational complexity"]
**Approach 2**: [Name, e.g., "Zero-Trust Security Model"]
- **Satisfies**: [Which strategic principles, e.g., "Customer privacy, GDPR compliance"]
- **Tactics**:
- [Tactic 1, e.g., "End-to-end encryption for all data"]
- [Tactic 2, e.g., "Identity-based access control"]
- [Tactic 3, e.g., "Continuous verification, no implicit trust"]
- **Tradeoffs**: [What's sacrificed, e.g., "Slight performance overhead"]
**Success Criteria (Tactical)**:
- [Metric 1, e.g., "99.9% uptime"]
- [Metric 2, e.g., "API response time <100ms p95"]
---
### Layer 3: Operational (300 ft) — HOW
**Implementation Details** (that realize tactical layer):
**Implementation 1**: [Tactic being implemented, e.g., "Service Mesh"]
- **Technology**: [Specific tools, e.g., "Istio on Kubernetes"]
- **Procedures**:
- [Step 1, e.g., "Deploy sidecar proxies to each pod"]
- [Step 2, e.g., "Configure mutual TLS between services"]
- [Step 3, e.g., "Set up traffic routing rules"]
- **Timeline**: [When, e.g., "Sprint 1-2"]
- **Owner**: [Who, e.g., "Platform team"]
**Implementation 2**: [Tactic being implemented, e.g., "End-to-End Encryption"]
- **Technology**: [Specific tools, e.g., "AWS KMS for key management, AES-256 encryption"]
- **Procedures**:
- [Step 1, e.g., "Generate master key in KMS"]
- [Step 2, e.g., "Encrypt data at rest using KMS data keys"]
- [Step 3, e.g., "Use TLS 1.3 for data in transit"]
- **Timeline**: [When, e.g., "Sprint 3"]
- **Owner**: [Who, e.g., "Security team"]
**Success Criteria (Operational)**:
- [Metric 1, e.g., "100% services behind Istio"]
- [Metric 2, e.g., "All PHI encrypted, verified in audit"]
---
### Consistency Validation
**Upward Consistency** (Do operations implement tactics? Do tactics achieve strategy?):
- ☐ [Check 1]: Does Istio implementation enable microservices architecture? → [Yes/No + Evidence]
- ☐ [Check 2]: Does microservices architecture support 10× scale? → [Yes/No + Evidence]
- ☐ [Check 3]: Does encryption implementation satisfy privacy principle? → [Yes/No + Evidence]
**Downward Consistency** (Can strategy be executed with tactics? Can tactics be implemented?):
- ☐ [Check 1]: Can we achieve GDPR compliance with chosen tactics? → [Yes/No + Gaps]
- ☐ [Check 2]: Can we implement Istio within budget/timeline? → [Yes/No + Constraints]
**Lateral Consistency** (Do parallel choices contradict?):
- ☐ [Check 1]: Does encryption overhead conflict with <100ms latency goal? → [Yes/No + Mitigation]
- ☐ [Check 2]: Does microservices complexity conflict with the 6-month timeline? → [Yes/No + Adjustment]
**Emergent Properties Observed**:
- [Emergence 1, e.g., "Microservices increased cross-team coordination overhead → slowed feature delivery"]
- [Implication, e.g., "Adjust strategy: 'Scale' includes team coordination, not just technical scale"]
---
### Change Propagation Plan
**If Strategic Layer Changes**:
- [Change scenario, e.g., "New regulation requires data residency"] →
- [Tactical impact, e.g., "Need multi-region deployment with geo-replication"] →
- [Operational impact, e.g., "Deploy regional clusters, implement data sovereignty"]
**If Operational Constraint Discovered**:
- [Constraint, e.g., "Istio adds 50ms latency, exceeds <100ms goal"] →
- [Tactical adjustment, e.g., "Switch to Linkerd (lighter sidecar) or optimize routes"] →
- [Strategic clarification, e.g., "Refine: <100ms for critical paths, <200ms for others"]
---
## Template 2: Cross-Layer Communication Plan
**When to use**: Presenting to stakeholders at different levels (board, management, engineers)
### Communication Template
**Core Message**: [Single sentence summarizing what you're communicating]
---
### For Executive Audience (30K ft - Strategic)
**Slide 1: WHY** (Problem/Opportunity)
- **Context**: [Strategic context, e.g., "Market shifting to privacy-first products"]
- **Impact**: [Business impact, e.g., "Losing 20% customers to competitors on trust"]
- **Outcome**: [Desired end state, e.g., "Become market leader in data privacy"]
**Slide 2: WHAT** (Approach)
- **Strategy**: [High-level approach, e.g., "Zero-trust architecture, end-to-end encryption, transparent security"]
- **Investment**: [Resources required, e.g., "$2M, 6 months, 10 engineers"]
- **Risk**: [Key risks, e.g., "Performance impact, timeline risk if encryption complex"]
**Slide 3: OUTCOMES** (Success Metrics)
- **Revenue impact**: [e.g., "$5M ARR from enterprise customers requiring compliance"]
- **Market position**: [e.g., "SOC 2 Type II + GDPR certified within 6 months"]
- **Risk mitigation**: [e.g., "Avoid $XM penalty for non-compliance"]
**Language**: Business outcomes, revenue, market share, strategic risk. Avoid technical details.
---
### For Management Audience (3K ft - Tactical)
**Roadmap Overview**:
- **Q1**: [Tactical milestone, e.g., "Implement encryption at rest + TLS 1.3"]
- **Q2**: [Tactical milestone, e.g., "Deploy zero-trust access controls + audit logging"]
- **Q3**: [Tactical milestone, e.g., "SOC 2 audit + penetration testing"]
**Team & Resources**:
- **Security Team** (5 engineers): Encryption, access control, audit systems
- **Platform Team** (3 engineers): Infrastructure, key management, monitoring
- **Product Team** (2 engineers): User-facing privacy controls
**Dependencies & Risks**:
- **Dependency 1**: [e.g., "Requires AWS KMS setup (1 week)"]
- **Risk 1**: [e.g., "Encryption may slow API response; plan optimization sprint"]
**Success Metrics**:
- [Metric 1, e.g., "100% data encrypted by end Q1"]
- [Metric 2, e.g., "Pass SOC 2 audit Q3"]
**Language**: Roadmaps, team velocity, dependencies, milestones. Some technical detail but focus on execution.
---
### For Engineering Audience (300 ft - Operational)
**Technical Design**:
- **Architecture**: [e.g., "Client → API Gateway (TLS 1.3) → Service Mesh (mTLS) → Encrypted DB"]
- **Components**:
- [Component 1, e.g., "AWS KMS for key management"]
- [Component 2, e.g., "AES-256-GCM for data at rest"]
- [Component 3, e.g., "Istio sidecar proxies for service-to-service encryption"]
**Implementation Steps**:
1. [Step 1, e.g., "Set up AWS KMS, create customer master key"]
2. [Step 2, e.g., "Implement encryption layer in data access library"]
3. [Step 3, e.g., "Deploy Istio with mTLS strict mode"]
4. [Step 4, e.g., "Migrate existing data: encrypt in background job"]
5. [Step 5, e.g., "Update monitoring: track encryption overhead"]
**Code Example** (if relevant):
```python
# Illustrative sketch: encrypt data before storing.
# `kms` and `db` are placeholder clients for your KMS and database.
def store_user_data(user_id, data):
    encrypted_data = kms.encrypt(data, key_id='customer-master-key')
    db.insert(user_id, encrypted_data)
```
**Testing Plan**:
- [Test 1, e.g., "Unit tests for encryption/decryption"]
- [Test 2, e.g., "Integration tests for end-to-end encrypted flow"]
- [Test 3, e.g., "Performance tests: measure latency impact"]
**Language**: Code, architecture diagrams, APIs, specific technologies, testing. Detailed and technical.
---
## Template 3: Consistency Check Matrix
**When to use**: Validating alignment across layers before finalizing decisions
### Consistency Check Template
**System/Decision**: [What you're validating]
---
| Layer Pair | Consistency Question | Status | Evidence/Gaps | Action Required |
|------------|----------------------|--------|---------------|-----------------|
| **Ops → Tactical** | Do operational procedures implement tactical approaches? | ☐ Pass ☐ Fail | [e.g., "Istio implements service mesh as planned"] | [e.g., "None" or "Fix X"] |
| **Tactical → Strategic** | Do tactical approaches achieve strategic principles? | ☐ Pass ☐ Fail | [e.g., "Zero-trust satisfies privacy principle"] | [e.g., "None" or "Adjust Y"] |
| **Strategic → Tactical** | Can strategic goals be achieved with chosen tactics? | ☐ Pass ☐ Fail | [e.g., "Tactics support GDPR compliance"] | [e.g., "None" or "Add Z tactic"] |
| **Tactical → Ops** | Can tactical approaches be implemented operationally? | ☐ Pass ☐ Fail | [e.g., "Team has Istio expertise"] | [e.g., "Hire or train"] |
| **Ops A ↔ Ops B** | Do parallel operational choices conflict? | ☐ Pass ☐ Fail | [e.g., "Encryption + caching compatible"] | [e.g., "Optimize cache encryption"] |
| **Tactical A ↔ Tactical B** | Do parallel tactical approaches contradict? | ☐ Pass ☐ Fail | [e.g., "Microservices + monorepo: no conflict"] | [e.g., "None" or "Choose one"] |
---
**Overall Consistency**: ☐ **Aligned** (all pass) ☐ **Minor Issues** (1-2 fails, fixable) ☐ **Major Issues** (3+ fails, rethink)
**Summary of Gaps**:
1. [Gap 1, e.g., "Encryption overhead conflicts with latency SLA → Need optimization"]
2. [Gap 2, e.g., "No plan for key rotation → Add operational procedure"]
**Remediation Plan**:
1. [Action 1, e.g., "Benchmark encryption overhead, optimize hot paths"]
2. [Action 2, e.g., "Define 90-day key rotation policy, automate in KMS"]
---
## Template 4: Layer Transition Analysis
**When to use**: When changing strategies, discovering operational constraints, or refactoring systems
### Transition Template
**Transition Type**: ☐ **Top-Down** (Strategy changed) ☐ **Bottom-Up** (Operational constraint discovered)
---
### If Top-Down (Strategy Change)
**Strategic Change**: [What changed at strategic layer, e.g., "New regulation requires data residency in EU"]
**Tactical Implications**:
- **Current Tactics**: [e.g., "Single US region deployment"]
- **Required Tactics**: [e.g., "Multi-region deployment with EU data sovereignty"]
- **New Tactics**: [e.g., "Deploy EU cluster, implement regional routing, data replication"]
**Operational Implications**:
- **Current Operations**: [e.g., "Single AWS us-east-1 region"]
- **Required Operations**: [e.g., "AWS eu-west-1 cluster, regional databases, GDPR-compliant data handling"]
- **Migration Plan**: [Timeline and steps, e.g., "Q1: Deploy EU infra, Q2: Migrate EU customers"]
**Cost/Impact**:
- **Resources**: [e.g., "$500K infrastructure, 3 engineers for 3 months"]
- **Risk**: [e.g., "Data sync latency, potential data loss during migration"]
---
### If Bottom-Up (Operational Constraint)
**Operational Constraint**: [What was discovered, e.g., "Istio sidecar adds 50ms latency, exceeds 100ms SLA"]
**Tactical Re-Evaluation**:
- **Current Tactic**: [e.g., "Istio service mesh for microservices"]
- **Options**:
1. **Option A**: [e.g., "Switch to Linkerd (lighter, ~10ms overhead)"]
2. **Option B**: [e.g., "Optimize Istio config, remove unnecessary features"]
3. **Option C**: [e.g., "Selective mesh: only for services needing mTLS"]
- **Recommendation**: [Which option and why]
**Strategic Clarification** (if needed):
- **Original Strategy**: [e.g., "<100ms latency for all APIs"]
- **Refined Strategy**: [e.g., "<100ms for critical paths (user-facing), <200ms for internal APIs"]
- **Rationale**: [e.g., "Security (mTLS) worth 50ms for internal, but optimize user-facing"]
**Decision**: ☐ **Keep Strategy, Adjust Tactics** ☐ **Refine Strategy, Keep Tactics** ☐ **Both Change**
---
## Quick Reference: When to Use Each Template
| Template | Use Case | Output |
|----------|----------|--------|
| **3-Layer Reasoning Document** | Designing systems, planning strategy | Structured doc with strategic/tactical/operational layers + consistency checks |
| **Cross-Layer Communication** | Presenting to different stakeholders | Tailored messaging for exec/management/engineers |
| **Consistency Check Matrix** | Validating alignment before finalizing | Gap analysis with remediation plan |
| **Layer Transition Analysis** | Strategy changes or operational constraints | Impact analysis + migration/adjustment plan |