Initial commit
@@ -0,0 +1,223 @@
{
|
||||
"criteria": [
|
||||
{
|
||||
"name": "Decomposition Completeness & Granularity",
|
||||
"description": "Are all major components identified at appropriate granularity?",
|
||||
"scoring": {
|
||||
"1": "Decomposition too shallow (2-3 components still very complex) or too deep (50+ atomic parts, overwhelming). Major components missing.",
|
||||
"3": "Most major components identified. Granularity mostly appropriate but some components too coarse or too fine. Minor gaps in coverage.",
|
||||
"5": "Complete decomposition. All major components identified. Granularity is appropriate (3-8 components per level, atomic components clearly marked). Decomposition depth justified with rationale."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Decomposition Strategy Alignment",
|
||||
"description": "Is decomposition strategy (functional, structural, data flow, temporal, cost) appropriate for system type and goal?",
|
||||
"scoring": {
|
||||
"1": "Inconsistent strategy (mixing functional/structural randomly). Wrong strategy for system type (e.g., functional decomposition for architectural analysis).",
|
||||
"3": "Strategy is stated and mostly consistent. Reasonable fit for system type but may not be optimal for goal.",
|
||||
"5": "Exemplary strategy selection. Decomposition approach clearly matches system type and goal. Strategy is consistently applied. Alternative strategies considered and rationale provided."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Relationship Mapping Accuracy",
|
||||
"description": "Are component relationships (dependencies, data flow, control flow) correctly identified and documented?",
|
||||
"scoring": {
|
||||
"1": "Critical relationships missing. Relationship types unclear or mislabeled. No distinction between dependency, data flow, control flow.",
|
||||
"3": "Major relationships documented. Types mostly correct. Some implicit or transitive dependencies may be missing. Critical path identified but may have gaps.",
|
||||
"5": "Comprehensive relationship mapping. All critical relationships documented with correct types (dependency, data flow, control, temporal, resource sharing). Critical path clearly identified. Dependency graph is accurate and complete."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Property Measurement Rigor",
|
||||
"description": "Are component properties (latency, cost, complexity, etc.) measured or estimated with documented sources?",
|
||||
"scoring": {
|
||||
"1": "All properties are guesses with no rationale. No data sources. Properties relevant to goal not measured.",
|
||||
"3": "Mix of measured and estimated properties. Some data sources documented. Key properties addressed but may lack precision.",
|
||||
"5": "Rigorous property measurement. Quantitative properties backed by measurements (profiling, logs, benchmarks). Qualitative properties have clear anchors/definitions. All data sources and estimation rationale documented. Focus on goal-relevant properties."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Critical Component Identification",
|
||||
"description": "Are bottlenecks, single points of failure, or highest-impact components correctly identified?",
|
||||
"scoring": {
|
||||
"1": "Critical components not identified or identification is clearly wrong. No analysis of which components drive system behavior.",
|
||||
"3": "Critical components identified but analysis is superficial. Bottleneck stated but not validated with data. Some impact components may be missed.",
|
||||
"5": "Exemplary critical component analysis. Bottlenecks identified with evidence (critical path analysis, property measurements). Single points of failure surfaced. Impact ranking of components is data-driven and validated."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Reconstruction Pattern Selection",
|
||||
"description": "Is reconstruction approach (bottleneck ID, simplification, reordering, parallelization, substitution, etc.) appropriate for goal?",
|
||||
"scoring": {
|
||||
"1": "Reconstruction pattern doesn't match goal (e.g., simplification when goal is performance). No clear reconstruction approach.",
|
||||
"3": "Reconstruction pattern stated and reasonable for goal. Pattern applied but may not be optimal. Alternative approaches not considered.",
|
||||
"5": "Optimal reconstruction pattern selected. Clear rationale for why this pattern over alternatives. Pattern is systematically applied. Multiple patterns combined if appropriate (e.g., reordering + parallelization)."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Recommendation Specificity & Impact",
|
||||
"description": "Are recommendations specific, actionable, and quantify expected impact?",
|
||||
"scoring": {
|
||||
"1": "Recommendations vague ('optimize component B'). No expected impact quantified. Not actionable.",
|
||||
"3": "Recommendations are specific to components but lack implementation detail. Expected impact stated but not quantified or poorly estimated.",
|
||||
"5": "Exemplary recommendations. Specific changes to make (WHAT, HOW, WHY). Expected impact quantified with confidence level ('reduce latency by 800ms, 67% improvement, high confidence'). Implementation approach outlined. Risks and mitigations noted."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Assumption & Constraint Documentation",
|
||||
"description": "Are key assumptions, constraints, and limitations explicitly stated?",
|
||||
"scoring": {
|
||||
"1": "No assumptions documented. Ignores stated constraints. Recommends impossible changes.",
|
||||
"3": "Some assumptions mentioned. Constraints acknowledged but may not be fully respected in recommendations.",
|
||||
"5": "All key assumptions explicitly documented. Constraints clearly stated and respected. Uncertainties flagged (which properties are estimates vs measurements). Recommendations validated against constraints."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Analysis Depth Appropriate to Complexity",
|
||||
"description": "Is analysis depth proportional to system complexity and goal criticality?",
|
||||
"scoring": {
|
||||
"1": "Over-analysis of simple system (50+ components for 3-part system) or under-analysis of complex system (single-level decomposition of multi-tier architecture).",
|
||||
"3": "Analysis depth is reasonable but may go deeper than needed in some areas or miss depth in critical areas.",
|
||||
"5": "Analysis depth perfectly calibrated. Simple systems get lightweight analysis. Complex/critical systems get rigorous decomposition, dependency analysis, and critical path. Depth focused on goal-relevant areas."
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Communication Clarity",
|
||||
"description": "Is decomposition visualizable, analysis clear, and recommendations understandable?",
|
||||
"scoring": {
|
||||
"1": "Decomposition is confusing, inconsistent notation. Analysis findings are unclear. Recommendations lack structure.",
|
||||
"3": "Decomposition is documented but hard to visualize. Analysis findings are present but require effort to understand. Recommendations are structured.",
|
||||
"5": "Exemplary communication. Decomposition is easily visualizable (hierarchy or diagram could be drawn). Analysis findings are evidence-based and clear. Recommendations are well-structured with priority, rationale, impact. Technical level appropriate for audience."
|
||||
}
|
||||
}
|
||||
],
|
||||
"minimum_score": 3.5,
|
||||
"guidance_by_system_type": {
|
||||
"Software Architecture (microservices, APIs, databases)": {
|
||||
"target_score": 4.0,
|
||||
"focus_criteria": [
|
||||
"Decomposition Strategy Alignment",
|
||||
"Relationship Mapping Accuracy",
|
||||
"Critical Component Identification"
|
||||
],
|
||||
"recommended_strategy": "Structural decomposition (services, layers, data stores). Focus on dependency graph, critical path for latency.",
|
||||
"common_pitfalls": [
|
||||
"Missing implicit runtime dependencies (service discovery, config)",
|
||||
"Not decomposing database into query types (reads vs writes)",
|
||||
"Ignoring async patterns (message queues, event streams)"
|
||||
]
|
||||
},
|
||||
"Business Processes (workflows, operations)": {
|
||||
"target_score": 3.8,
|
||||
"focus_criteria": [
|
||||
"Decomposition Completeness & Granularity",
|
||||
"Relationship Mapping Accuracy",
|
||||
"Recommendation Specificity & Impact"
|
||||
],
|
||||
"recommended_strategy": "Functional or temporal decomposition (steps, decision points, handoffs). Focus on cycle time, bottlenecks.",
|
||||
"common_pitfalls": [
|
||||
"Missing decision points and branching logic",
|
||||
"Not identifying manual handoffs (often bottlenecks)",
|
||||
"Ignoring exception/error paths"
|
||||
]
|
||||
},
|
||||
"Performance Optimization (latency, throughput)": {
|
||||
"target_score": 4.2,
|
||||
"focus_criteria": [
|
||||
"Property Measurement Rigor",
|
||||
"Critical Component Identification",
|
||||
"Reconstruction Pattern Selection"
|
||||
],
|
||||
"recommended_strategy": "Data flow or structural decomposition. Measure latency/throughput per component. Critical path analysis (PERT/CPM).",
|
||||
"common_pitfalls": [
|
||||
"Using estimated latencies instead of measured (profiling)",
|
||||
"Missing parallelization opportunities (independent components)",
|
||||
"Optimizing non-critical path components"
|
||||
]
|
||||
},
|
||||
"Cost Optimization (cloud spend, resource allocation)": {
|
||||
"target_score": 4.0,
|
||||
"focus_criteria": [
|
||||
"Decomposition Completeness & Granularity",
|
||||
"Property Measurement Rigor",
|
||||
"Recommendation Specificity & Impact"
|
||||
],
|
||||
"recommended_strategy": "Cost/resource decomposition (by service, resource type, usage pattern). Identify highest cost drivers.",
|
||||
"common_pitfalls": [
|
||||
"Missing indirect costs (maintenance, opportunity cost)",
|
||||
"Not decomposing by usage pattern (peak vs baseline)",
|
||||
"Recommending cost cuts that break functionality"
|
||||
]
|
||||
},
|
||||
"Problem Decomposition (complex tasks)": {
|
||||
"target_score": 3.5,
|
||||
"focus_criteria": [
|
||||
"Decomposition Completeness & Granularity",
|
||||
"Relationship Mapping Accuracy",
|
||||
"Analysis Depth Appropriate to Complexity"
|
||||
],
|
||||
"recommended_strategy": "Functional decomposition (sub-problems, dependencies). Identify blockers and parallelizable work.",
|
||||
"common_pitfalls": [
|
||||
"Decomposition too shallow (still stuck on complex sub-problems)",
|
||||
"Missing dependencies between sub-problems",
|
||||
"Not identifying which sub-problems are blockers"
|
||||
]
|
||||
}
|
||||
},
|
||||
"guidance_by_complexity": {
|
||||
"Simple (3-5 components, clear relationships, straightforward goal)": {
|
||||
"target_score": 3.5,
|
||||
"sufficient_depth": "1-2 level decomposition. Basic relationship mapping. Measured properties where available, estimated otherwise. Simple bottleneck identification."
|
||||
},
|
||||
"Moderate (6-10 components, some hidden dependencies, multi-objective goal)": {
|
||||
"target_score": 3.8,
|
||||
"sufficient_depth": "2-3 level decomposition. Comprehensive relationship mapping with dependency graph. Measured properties for critical components. Critical path analysis. Sensitivity analysis on key assumptions."
|
||||
},
|
||||
"Complex (>10 components, complex interactions, strategic importance)": {
|
||||
"target_score": 4.2,
|
||||
"sufficient_depth": "3+ level hierarchical decomposition. Full dependency graph with SCCs, topological ordering. Rigorous property measurement (profiling, benchmarking). PERT/CPM critical path. Optimization algorithms (greedy, dynamic programming). Failure mode analysis (FMEA)."
|
||||
}
|
||||
},
|
||||
"common_failure_modes": {
|
||||
"1. Decomposition Mismatch": {
|
||||
"symptom": "Using wrong decomposition strategy for system type (e.g., temporal decomposition for architecture)",
|
||||
"detection": "Decomposition feels forced, components don't map cleanly, hard to identify relationships",
|
||||
"fix": "Restart with different strategy. Functional for processes, structural for architecture, data flow for pipelines."
|
||||
},
|
||||
"2. Missing Critical Relationships": {
|
||||
"symptom": "Components seem independent but system has cascading failures or hidden dependencies",
|
||||
"detection": "Recommendations ignore dependency ripple effects, stakeholder says 'but X depends on Y'",
|
||||
"fix": "Trace data and control flow systematically. Validate dependency graph with stakeholders or code analysis tools."
|
||||
},
|
||||
"3. Unmeasured Bottlenecks": {
|
||||
"symptom": "Bottleneck identified by intuition, not data. Or wrong bottleneck due to poor estimates.",
|
||||
"detection": "'Component B seems slow' without measurements. Optimization of B doesn't improve overall system.",
|
||||
"fix": "Profile, measure, benchmark. Build critical path from measured data, not guesses."
|
||||
},
|
||||
"4. Vague Recommendations": {
|
||||
"symptom": "Recommendations like 'optimize component B' or 'improve performance' without specifics",
|
||||
"detection": "Engineer reads recommendation and asks 'how?'. No implementation path forward.",
|
||||
"fix": "Make recommendations concrete: WHAT to change, HOW to change it (approach), WHY (evidence from analysis), IMPACT (quantified)."
|
||||
},
|
||||
"5. Analysis Paralysis": {
|
||||
"symptom": "Decomposition goes 5+ levels deep, 100+ components, no actionable insights",
|
||||
"detection": "Spending weeks on decomposition, no recommendations yet. Stakeholders losing patience.",
|
||||
"fix": "Set decomposition depth limit. Focus on goal-relevant areas. Stop when further decomposition doesn't help recommendations."
|
||||
},
|
||||
"6. Ignoring Constraints": {
|
||||
"symptom": "Recommendations that violate stated constraints (can't change legacy system, budget limit, compliance)",
|
||||
"detection": "Stakeholder rejects all recommendations as 'impossible given our constraints'",
|
||||
"fix": "Document constraints upfront. Validate all recommendations against constraints before finalizing."
|
||||
},
|
||||
"7. No Impact Quantification": {
|
||||
"symptom": "Can't answer 'how much better will this be?' for recommendations",
|
||||
"detection": "Recommendations lack numbers. Can't prioritize by ROI.",
|
||||
"fix": "Estimate expected impact from component properties. 'Removing 1.2s component should reduce total by ~40% (1.2/3.0)'. Provide confidence level."
|
||||
},
|
||||
"8. Over-Optimization of Non-Critical Path": {
|
||||
"symptom": "Optimizing components that aren't bottlenecks, wasting effort",
|
||||
"detection": "After optimization, system performance unchanged or minimal improvement",
|
||||
"fix": "Identify critical path first. Only optimize components on critical path. Measure improvement after each change."
|
||||
}
|
||||
}
|
||||
}
|
||||
skills/decomposition-reconstruction/resources/methodology.md (new file)
@@ -0,0 +1,424 @@
|
||||
# Decomposition & Reconstruction: Advanced Methodology
|
||||
|
||||
## Workflow
|
||||
|
||||
Copy this checklist for complex decomposition scenarios:
|
||||
|
||||
```
|
||||
Advanced Decomposition Progress:
|
||||
- [ ] Step 1: Apply hierarchical decomposition techniques
|
||||
- [ ] Step 2: Build and analyze dependency graphs
|
||||
- [ ] Step 3: Perform critical path analysis
|
||||
- [ ] Step 4: Use advanced property measurement
|
||||
- [ ] Step 5: Apply optimization algorithms
|
||||
```
|
||||
|
||||
**Step 1: Apply hierarchical decomposition techniques** - Multi-level decomposition with consistent abstraction levels. See [1. Hierarchical Decomposition](#1-hierarchical-decomposition).
|
||||
|
||||
**Step 2: Build and analyze dependency graphs** - Visualize and analyze component relationships. See [2. Dependency Graph Analysis](#2-dependency-graph-analysis).
|
||||
|
||||
**Step 3: Perform critical path analysis** - Identify bottlenecks using PERT/CPM. See [3. Critical Path Analysis](#3-critical-path-analysis).
|
||||
|
||||
**Step 4: Use advanced property measurement** - Rigorous measurement and statistical analysis. See [4. Advanced Property Measurement](#4-advanced-property-measurement).
|
||||
|
||||
**Step 5: Apply optimization algorithms** - Systematic reconstruction approaches. See [5. Optimization Algorithms](#5-optimization-algorithms).
|
||||
|
||||
---
|
||||
|
||||
## 1. Hierarchical Decomposition
|
||||
|
||||
### Multi-Level Decomposition Strategy
|
||||
|
||||
Break into levels: L0 (System) → L1 (3-7 subsystems) → L2 (3-7 components each) → L3+ (only if needed). Stop when component is atomic or further breakdown doesn't help goal.
|
||||
|
||||
**Abstraction consistency:** All components at the same level should share the same type of abstraction (e.g., all architectural components — don't mix "API Service" with "user login function").
|
||||
|
||||
**Template:**
|
||||
```
|
||||
System → Subsystem A → Component A.1, A.2, A.3
       → Subsystem B → Component B.1, B.2
       → Subsystem C → Component C.1 (atomic)
|
||||
```
|
||||
|
||||
Document WHY decomposed to this level and WHY stopped.
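
For teams that want the hierarchy in a machine-checkable form, a minimal sketch (the `Component` dataclass and the sample names are hypothetical, not part of the skill's tooling) might look like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    rationale: str = ""          # why we decomposed (or stopped) at this node
    atomic: bool = False         # True when further breakdown wouldn't help the goal
    children: List["Component"] = field(default_factory=list)

    def depth(self) -> int:
        """Depth of the subtree rooted at this component (L0 system = depth 1)."""
        return 1 + max((c.depth() for c in self.children), default=0)

system = Component("System", children=[
    Component("Subsystem A", children=[
        Component("Component A.1", atomic=True),
        Component("Component A.2", atomic=True),
    ]),
    Component("Subsystem C", atomic=True, rationale="Already atomic at L1"),
])
print(system.depth())  # 3 levels: system -> subsystem -> component
```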
|
||||
|
||||
---
|
||||
|
||||
## 2. Dependency Graph Analysis
|
||||
|
||||
### Building Dependency Graphs
|
||||
|
||||
**Nodes:** Components (from decomposition)
|
||||
**Edges:** Relationships (dependency, data flow, control flow, etc.)
|
||||
**Direction:** Arrow shows dependency direction (A → B means A depends on B)
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
Frontend → API Service → Database
                ↓
              Cache
                ↓
        Message Queue
|
||||
```
|
||||
|
||||
### Graph Properties
|
||||
|
||||
**Strongly Connected Components (SCCs):** Circular dependencies (A → B → C → A). Problematic for isolation. Use Tarjan's algorithm.
|
||||
|
||||
**Topological Ordering:** Linear order where edges point forward (only if acyclic). Reveals safe build/deploy order.
|
||||
|
||||
**Critical Path:** Longest weighted path, determines minimum completion time. Bottleneck for optimization.
|
||||
|
||||
### Dependency Analysis
|
||||
|
||||
**Forward (impact):** "If I change X, what breaks?" (BFS over edges pointing into X — the components that depend on X)
**Backward:** "What must work for X to function?" (BFS over edges going out of X — the components X depends on)
**Transitive Reduction:** Remove redundant edges to simplify visualization.
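
A small sketch of these graph analyses over a plain adjacency list — the component names are illustrative, and "A → B" is stored as `B` appearing in `deps["A"]` (A depends on B):

```python
from collections import deque

# A -> B means "A depends on B"
deps = {
    "Frontend": ["API Service"],
    "API Service": ["Database", "Cache"],
    "Cache": [],
    "Database": [],
}

def dependents_of(target):
    """Forward impact: everything that transitively depends on target."""
    reverse = {n: [] for n in deps}
    for a, bs in deps.items():
        for b in bs:
            reverse[b].append(a)
    seen, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for dependent in reverse[node]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

def topological_order():
    """Safe build/deploy order: dependencies before dependents (graph must be acyclic)."""
    order, placed, remaining = [], set(), set(deps)
    while remaining:
        ready = [n for n in remaining if all(d in placed for d in deps[n])]
        if not ready:
            raise ValueError("cycle detected (strongly connected component)")
        for n in sorted(ready):
            order.append(n)
            placed.add(n)
            remaining.discard(n)
    return order

print(topological_order())        # ['Cache', 'Database', 'API Service', 'Frontend']
print(dependents_of("Database"))  # {'API Service', 'Frontend'} — what breaks if the DB changes
```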
|
||||
|
||||
---
|
||||
|
||||
## 3. Critical Path Analysis
|
||||
|
||||
### PERT/CPM (Program Evaluation and Review Technique / Critical Path Method)
|
||||
|
||||
**Use case:** System with sequential stages, need to identify time bottlenecks
|
||||
|
||||
**Inputs:**
|
||||
- Components with estimated duration
|
||||
- Dependencies between components
|
||||
|
||||
**Process:**
|
||||
|
||||
**Step 1: Build dependency graph with durations**
|
||||
|
||||
```
|
||||
A (3h) → B (5h) → D (2h)
|
||||
A (3h) → C (4h) → D (2h)
|
||||
```
|
||||
|
||||
**Step 2: Calculate earliest start time (EST) for each component**
|
||||
|
||||
EST(node) = max(EST(predecessor) + duration(predecessor)) for all predecessors
|
||||
|
||||
**Example:**
|
||||
- EST(A) = 0
|
||||
- EST(B) = EST(A) + duration(A) = 0 + 3 = 3h
|
||||
- EST(C) = EST(A) + duration(A) = 0 + 3 = 3h
|
||||
- EST(D) = max(EST(B) + duration(B), EST(C) + duration(C)) = max(3+5, 3+4) = 8h
|
||||
|
||||
**Step 3: Calculate latest finish time (LFT) working backwards**
|
||||
|
||||
LFT(node) = min(LFT(successor) - duration(successor)) for all successors
|
||||
|
||||
**Example (working backwards from D):**
- LFT(D) = project deadline (say 10h)
- LFT(B) = LFT(D) - duration(D) = 10 - 2 = 8h
- LFT(C) = LFT(D) - duration(D) = 10 - 2 = 8h
- LFT(A) = min(LFT(B) - duration(B), LFT(C) - duration(C)) = min(8 - 5, 8 - 4) = 3h
|
||||
|
||||
**Step 4: Calculate slack (float)**
|
||||
|
||||
Slack(node) = LFT(node) - EST(node) - duration(node)
|
||||
|
||||
**Example:**
- Slack(A) = 3 - 0 - 3 = 0h (critical)
- Slack(B) = 8 - 3 - 5 = 0h (critical)
- Slack(C) = 8 - 3 - 4 = 1h (has float: C can slip 1h without delaying D)
- Slack(D) = 10 - 8 - 2 = 0h (critical)
|
||||
|
||||
**Step 5: Identify critical path**
|
||||
|
||||
Components with zero (or minimum) slack form the critical path.
|
||||
|
||||
**Critical path:** A → B → D (total 10h)
|
||||
|
||||
**Optimization insight:** Only shortening components on the critical path (A, B, or D) reduces total time. Optimizing C (non-critical, 1h of float) won't help until its slack is used up.
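
The forward/backward pass is easy to script; this sketch reproduces the worked example above (durations in hours), assuming the same A/B/C/D structure:

```python
durations = {"A": 3, "B": 5, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
deadline = 10  # project deadline in hours

# Forward pass: earliest start times
est = {}
for node in ["A", "B", "C", "D"]:  # already in dependency order
    est[node] = max((est[p] + durations[p] for p in predecessors[node]), default=0)

# Backward pass: latest finish times
successors = {n: [m for m, ps in predecessors.items() if n in ps] for n in durations}
lft = {}
for node in ["D", "C", "B", "A"]:  # reverse dependency order
    lft[node] = min((lft[s] - durations[s] for s in successors[node]), default=deadline)

slack = {n: lft[n] - est[n] - durations[n] for n in durations}
critical_path = [n for n in ["A", "B", "C", "D"] if slack[n] == 0]

print(est)            # {'A': 0, 'B': 3, 'C': 3, 'D': 8}
print(slack)          # {'A': 0, 'B': 0, 'C': 1, 'D': 0}
print(critical_path)  # ['A', 'B', 'D']
```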
|
||||
|
||||
### Handling Uncertainty (PERT Estimates)
|
||||
|
||||
When durations are uncertain, use three-point estimates:
|
||||
|
||||
- **Optimistic (O):** Best case
|
||||
- **Most Likely (M):** Expected case
|
||||
- **Pessimistic (P):** Worst case
|
||||
|
||||
**Expected duration:** E = (O + 4M + P) / 6
|
||||
|
||||
**Standard deviation:** σ = (P - O) / 6
|
||||
|
||||
**Example:**
|
||||
- Component A: O=2h, M=3h, P=8h
|
||||
- Expected: E = (2 + 4×3 + 8) / 6 = 3.67h
|
||||
- Std dev: σ = (8 - 2) / 6 = 1h
|
||||
|
||||
**Use expected durations for critical path analysis, report confidence intervals**
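
A one-function sketch of the three-point arithmetic, using the numbers from the example:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Return (expected duration, standard deviation) under the PERT beta approximation."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    sigma = (pessimistic - optimistic) / 6
    return expected, sigma

e, s = pert_estimate(2, 3, 8)
print(f"E = {e:.2f}h, sigma = {s:.2f}h")  # E = 3.67h, sigma = 1.00h
```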
|
||||
|
||||
---
|
||||
|
||||
## 4. Advanced Property Measurement
|
||||
|
||||
### Quantitative vs Qualitative Properties
|
||||
|
||||
**Quantitative (measurable):**
|
||||
- Latency (ms), throughput (req/s), cost ($/month), lines of code, error rate (%)
|
||||
- **Measurement:** Use APM tools, profilers, logs, benchmarks
|
||||
- **Reporting:** Mean, median, p95, p99, min, max, std dev
|
||||
|
||||
**Qualitative (subjective):**
|
||||
- Code readability, maintainability, user experience, team morale
|
||||
- **Measurement:** Use rating scales (1-10), comparative ranking, surveys
|
||||
- **Reporting:** Mode, distribution, outliers
|
||||
|
||||
### Statistical Rigor
|
||||
|
||||
**For quantitative measurements:**
|
||||
|
||||
**1. Multiple samples:** Don't rely on single measurement
|
||||
- Run benchmark 10+ times, report distribution
|
||||
- Example: Latency = 250ms ± 50ms (mean ± std dev, n=20)
|
||||
|
||||
**2. Control for confounds:** Isolate what you're measuring
|
||||
- Example: Measure DB query time with same dataset, same load, same hardware
|
||||
|
||||
**3. Statistical significance:** Determine if difference is real or noise
|
||||
- Use t-test or ANOVA to compare means
|
||||
- Report p-value (p < 0.05 typically considered significant)
|
||||
|
||||
**For qualitative measurements:**
|
||||
|
||||
**1. Multiple raters:** Reduce individual bias
|
||||
- Have 3+ people rate complexity independently, average scores
|
||||
|
||||
**2. Calibration:** Define rating scale clearly
|
||||
- Example: Complexity 1="< 50 LOC, no dependencies", 10=">1000 LOC, 20+ dependencies"
|
||||
|
||||
**3. Inter-rater reliability:** Check if raters agree
|
||||
- Calculate Cronbach's alpha or correlation coefficient
|
||||
|
||||
### Performance Profiling Techniques
|
||||
|
||||
**CPU Profiling:**
|
||||
- Identify which components consume most CPU time
|
||||
- Tools: perf, gprof, Chrome DevTools, Xcode Instruments
|
||||
|
||||
**Memory Profiling:**
|
||||
- Identify which components allocate most memory or leak
|
||||
- Tools: valgrind, heaptrack, Chrome DevTools, Instruments
|
||||
|
||||
**I/O Profiling:**
|
||||
- Identify which components perform most disk/network I/O
|
||||
- Tools: iotop, iostat, Network tab in DevTools
|
||||
|
||||
**Tracing:**
|
||||
- Track execution flow through distributed systems
|
||||
- Tools: OpenTelemetry, Jaeger, Zipkin, AWS X-Ray
|
||||
|
||||
**Result:** Component-level resource consumption data for bottleneck analysis
|
||||
|
||||
---
|
||||
|
||||
## 5. Optimization Algorithms
|
||||
|
||||
### Greedy Optimization
|
||||
|
||||
**Approach:** Optimize components in order of highest impact first
|
||||
|
||||
**Algorithm:**
|
||||
1. Measure impact of optimizing each component (reduction in latency, cost, etc.)
|
||||
2. Sort components by impact (descending)
|
||||
3. Optimize highest-impact component
|
||||
4. Re-measure, repeat until goal achieved or diminishing returns
|
||||
|
||||
**Example (latency optimization):**
|
||||
- Components: A (100ms), B (500ms), C (50ms)
|
||||
- Sort by impact: B (500ms), A (100ms), C (50ms)
|
||||
- Optimize B first → Reduce to 200ms → Total latency improved by 300ms
|
||||
- Re-measure, continue
|
||||
|
||||
**Advantage:** Fast, often gets 80% of benefit with 20% of effort
|
||||
**Limitation:** May miss global optimum (e.g., removing B entirely better than optimizing B)
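
A minimal sketch of the greedy loop; the latencies, target, and the halving `optimize` step are placeholders standing in for real measurements and real tuning work:

```python
# Hypothetical measured latencies (ms) and a placeholder "optimize" action
latencies = {"A": 100, "B": 500, "C": 50}
target_total = 300  # goal: total latency under 300 ms

def optimize(component, current):
    """Placeholder: assume tuning cuts a component's latency roughly in half."""
    return current / 2

while sum(latencies.values()) > target_total:
    # pick the component with the highest impact (largest latency) first
    worst = max(latencies, key=latencies.get)
    before = latencies[worst]
    latencies[worst] = optimize(worst, before)
    print(f"Optimized {worst}: {before:.0f} -> {latencies[worst]:.0f} ms "
          f"(total now {sum(latencies.values()):.0f} ms)")
```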
|
||||
|
||||
### Dynamic Programming Approach
|
||||
|
||||
**Approach:** Find optimal decomposition/reconstruction by exploring combinations
|
||||
|
||||
**Use case:** When multiple components interact, greedy may not find best solution
|
||||
|
||||
**Example (budget allocation):**
|
||||
- Budget: $1000/month
|
||||
- Components: A (improves UX, costs $400), B (reduces latency, costs $600), C (adds feature, costs $500)
|
||||
- Constraint: Total cost ≤ $1000
|
||||
- Goal: Maximize value
|
||||
|
||||
**Algorithm:**
|
||||
1. Enumerate combinations and keep the feasible ones: {A}, {B}, {C}, {A+B} ($1000), {A+C} ($900); {B+C} ($1100) exceeds the budget
|
||||
2. Calculate value and cost for each
|
||||
3. Select combination with max value under budget constraint
|
||||
|
||||
**Result:** Optimal combination (may not be greedy choice)
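
For a set this small, brute-force enumeration is enough (a knapsack-style DP would replace it at larger scale); the value scores below are invented purely to make the sketch runnable:

```python
from itertools import combinations

budget = 1000
# (cost $/month, value score) — value scores are made-up placeholders
options = {"A": (400, 5), "B": (600, 8), "C": (500, 6)}

best, best_value = None, -1
for r in range(1, len(options) + 1):
    for combo in combinations(options, r):
        cost = sum(options[o][0] for o in combo)
        value = sum(options[o][1] for o in combo)
        if cost <= budget and value > best_value:
            best, best_value = combo, value

print(best, best_value)  # ('A', 'B') with value 13 under the invented scores
```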
|
||||
|
||||
### Constraint Satisfaction
|
||||
|
||||
**Approach:** Find reconstruction that satisfies all hard constraints
|
||||
|
||||
**Use case:** Multiple constraints (latency < 500ms AND cost < $500/month AND reliability > 99%)
|
||||
|
||||
**Formulation:**
|
||||
- Variables: Component choices (use component A or B? Parallelize or serialize?)
|
||||
- Domains: Possible values for each choice
|
||||
- Constraints: Rules that must be satisfied
|
||||
|
||||
**Algorithm:** Backtracking search, constraint propagation
|
||||
**Tools:** CSP solvers (Z3, MiniZinc)
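
In practice a solver such as Z3 or MiniZinc would do the search; the toy backtracking sketch below only illustrates the variables/domains/constraints framing, with made-up latency and cost models:

```python
# Variables and domains: which tier to use, whether to add a cache, whether to parallelize.
domains = {
    "db_tier":  ["small", "large"],
    "cache":    ["none", "redis"],
    "parallel": [False, True],
}

# Illustrative property models for a candidate assignment (not real measurements)
def latency_ms(a):
    base = 400 if a["db_tier"] == "small" else 250
    base -= 150 if a["cache"] == "redis" else 0
    return base / (2 if a["parallel"] else 1)

def cost_usd(a):
    return (100 if a["db_tier"] == "small" else 400) + (150 if a["cache"] == "redis" else 0)

constraints = [lambda a: latency_ms(a) < 200, lambda a: cost_usd(a) <= 500]

def backtrack(assignment, variables):
    if not variables:  # all variables assigned: check every constraint
        return assignment if all(c(assignment) for c in constraints) else None
    var, rest = variables[0], variables[1:]
    for value in domains[var]:
        result = backtrack({**assignment, var: value}, rest)
        if result is not None:
            return result
    return None

print(backtrack({}, list(domains)))
# {'db_tier': 'small', 'cache': 'redis', 'parallel': True} under the invented models
```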
|
||||
|
||||
### Sensitivity Analysis
|
||||
|
||||
**Goal:** Understand how sensitive reconstruction is to property estimates
|
||||
|
||||
**Process:**
|
||||
1. Build reconstruction based on measured/estimated properties
|
||||
2. Vary each property by ±X% (e.g., ±20%)
|
||||
3. Re-run reconstruction
|
||||
4. Identify which properties most affect outcome
|
||||
|
||||
**Example:**
|
||||
- Baseline: Component A latency = 100ms → Optimize B
|
||||
- Sensitivity: If A latency = 150ms → Optimize A instead
|
||||
- **Conclusion:** Decision is sensitive to A's latency estimate, need better measurement
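
A sketch of the ±20% sweep, using a deliberately simple decision rule ("optimize whichever component has the largest latency"); all numbers are illustrative:

```python
baseline = {"A": 100, "B": 110, "C": 50}   # estimated latencies in ms

def recommendation(latencies):
    """Toy decision rule: optimize the component with the largest latency."""
    return max(latencies, key=latencies.get)

base_choice = recommendation(baseline)
for component in baseline:
    for factor in (0.8, 1.2):  # vary each estimate by ±20%
        perturbed = dict(baseline, **{component: baseline[component] * factor})
        choice = recommendation(perturbed)
        if choice != base_choice:
            print(f"Sensitive: if {component} is off by {factor - 1:+.0%}, "
                  f"recommendation flips from {base_choice} to {choice}")
```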
|
||||
|
||||
---
|
||||
|
||||
## 6. Advanced Reconstruction Patterns
|
||||
|
||||
### Caching & Memoization
|
||||
|
||||
**Pattern:** Add caching layer for frequently accessed components
|
||||
|
||||
**When:** Component is slow, accessed repeatedly, output deterministic
|
||||
|
||||
**Example:** Database query repeated 1000x/sec → Add Redis cache → 95% cache hit rate → 20× latency reduction
|
||||
|
||||
**Trade-offs:** Memory cost, cache invalidation complexity, eventual consistency
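
As a minimal in-process illustration of memoization (distinct from an external cache such as Redis), Python's `functools.lru_cache` wraps a deterministic, repeatedly called function:

```python
import functools, time

@functools.lru_cache(maxsize=1024)
def expensive_lookup(user_id: int) -> str:
    time.sleep(0.1)                 # stand-in for a slow query
    return f"profile-{user_id}"

start = time.perf_counter()
expensive_lookup(42)                # miss: pays the 100 ms cost
expensive_lookup(42)                # hit: returned from the in-memory cache
print(f"{time.perf_counter() - start:.2f}s", expensive_lookup.cache_info())
```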
|
||||
|
||||
### Batch Processing
|
||||
|
||||
**Pattern:** Process items in batches instead of one-at-a-time
|
||||
|
||||
**When:** Per-item overhead is high, latency not critical
|
||||
|
||||
**Example:** Send 1000 individual emails (1s each, total 1000s) → Batch into groups of 100 → Send via batch API (10s per batch, total 100s)
|
||||
|
||||
**Trade-offs:** Increased latency for individual items, complexity in failure handling
|
||||
|
||||
### Asynchronous Processing
|
||||
|
||||
**Pattern:** Decouple components using message queues
|
||||
|
||||
**When:** Component is slow but result not needed immediately
|
||||
|
||||
**Example:** User uploads video → Process synchronously (60s wait) → User unhappy
|
||||
**Reconstruction:** User uploads → Queue processing → User sees "processing" → Email when done
|
||||
|
||||
**Trade-offs:** Complexity (need queue infrastructure), eventual consistency, harder to debug
|
||||
|
||||
### Load Balancing & Sharding
|
||||
|
||||
**Pattern:** Distribute load across multiple instances of a component
|
||||
|
||||
**When:** Component is bottleneck, can be parallelized, load is high
|
||||
|
||||
**Example:** Single DB handles 10K req/s, saturated → Shard by user ID → 10 DBs each handle 1K req/s
|
||||
|
||||
**Trade-offs:** Operational complexity, cross-shard queries expensive, rebalancing cost
|
||||
|
||||
### Circuit Breaker
|
||||
|
||||
**Pattern:** Fail fast when dependent component is down
|
||||
|
||||
**When:** Component depends on unreliable external service
|
||||
|
||||
**Example:** API calls external service → Service is down → API waits 30s per request → API becomes slow
|
||||
**Reconstruction:** Add circuit breaker → Detect failures → Stop calling for 60s → Fail fast (< 1ms)
|
||||
|
||||
**Trade-offs:** Reduced functionality during outage, tuning thresholds (false positives vs negatives)
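
A toy circuit breaker showing the fail-fast behaviour described above; thresholds and the wrapped call are placeholders, not a production implementation:

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_after=60.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after      # seconds to stay open before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")  # < 1 ms, no 30 s wait
            self.opened_at = None           # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()
            raise
```

Callers would wrap the external call (e.g. `breaker.call(fetch_from_external_service)`) and catch the fast failure to degrade gracefully.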
|
||||
|
||||
---
|
||||
|
||||
## 7. Failure Mode & Effects Analysis (FMEA)
|
||||
|
||||
### FMEA Process
|
||||
|
||||
**Goal:** Identify weaknesses and single points of failure in decomposed system
|
||||
|
||||
**Process:**
|
||||
|
||||
**Step 1: List all components**
|
||||
|
||||
**Step 2: For each component, identify failure modes**
|
||||
- How can this component fail? (crash, slow, wrong output, security breach)
|
||||
|
||||
**Step 3: For each failure mode, assess:**
|
||||
- **Severity (S):** Impact if failure occurs (1-10, 10 = catastrophic)
|
||||
- **Occurrence (O):** Likelihood of failure (1-10, 10 = very likely)
|
||||
- **Detection (D):** Ability to detect before impact (1-10, 10 = undetectable)
|
||||
|
||||
**Step 4: Calculate Risk Priority Number (RPN)**
|
||||
RPN = S × O × D
|
||||
|
||||
**Step 5: Prioritize failures by RPN, design mitigations**
|
||||
|
||||
### Example
|
||||
|
||||
| Component | Failure Mode | S | O | D | RPN | Mitigation |
|-----------|--------------|---|---|---|-----|------------|
| Database | Crashes | 9 | 2 | 1 | 18 | Add replica, automatic failover |
| Cache | Stale data | 5 | 6 | 8 | 240 | Reduce TTL, add invalidation |
| API | DDoS attack | 8 | 4 | 3 | 96 | Add rate limiting, WAF |
|
||||
|
||||
**Highest RPN = 240 (Cache stale data)** → Address this first
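
The RPN arithmetic from the table as a short sketch (scores copied from the rows above):

```python
failure_modes = [
    # (component, failure mode, severity, occurrence, detection)
    ("Database", "Crashes",     9, 2, 1),
    ("Cache",    "Stale data",  5, 6, 8),
    ("API",      "DDoS attack", 8, 4, 3),
]

ranked = sorted(failure_modes, key=lambda m: m[2] * m[3] * m[4], reverse=True)
for component, mode, s, o, d in ranked:
    print(f"{component:<8} {mode:<12} RPN = {s * o * d}")
# Cache stale data (RPN 240) comes out on top, matching the table above.
```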
|
||||
|
||||
### Mitigation Strategies
|
||||
|
||||
**Redundancy:** Multiple instances, failover
|
||||
**Monitoring:** Early detection, alerting
|
||||
**Graceful degradation:** Degrade functionality instead of total failure
|
||||
**Rate limiting:** Prevent overload
|
||||
**Input validation:** Prevent bad data cascading
|
||||
**Circuit breakers:** Fail fast when dependencies down
|
||||
|
||||
---
|
||||
|
||||
## 8. Case Study Approach
|
||||
|
||||
### Comparative Analysis
|
||||
|
||||
Compare reconstruction alternatives in table format (Latency, Cost, Time, Risk, Maintainability). Make recommendation with rationale based on trade-offs.
|
||||
|
||||
### Iterative Refinement
|
||||
|
||||
If initial decomposition doesn't reveal insights, refine: go deeper in critical areas, switch decomposition strategy, add missing relationships. Re-run analysis. Stop when further refinement doesn't change recommendations.
|
||||
|
||||
---
|
||||
|
||||
## 9. Tool-Assisted Decomposition
|
||||
|
||||
**Static analysis:** CLOC, SonarQube (dependency graphs, complexity metrics)
|
||||
**Dynamic analysis:** Flame graphs, perf, Chrome DevTools (CPU/memory/I/O), Jaeger/Zipkin (distributed tracing)
|
||||
|
||||
**Workflow:** Static analysis → Dynamic measurement → Manual validation → Combine quantitative + qualitative
|
||||
|
||||
**Caution:** Tools miss runtime dependencies, overestimate coupling, produce overwhelming detail. Use as guide, not truth.
|
||||
|
||||
---
|
||||
|
||||
## 10. Communication & Visualization
|
||||
|
||||
**Diagrams:** Hierarchy trees, dependency graphs (color-code critical path), property heatmaps, before/after comparisons
|
||||
|
||||
**Stakeholder views:**
|
||||
- Executives: 1-page summary, key findings, business impact
|
||||
- Engineers: Detailed breakdown, technical rationale, implementation
|
||||
- Product/Business: UX impact, cost-benefit, timeline
|
||||
|
||||
Adapt depth to audience expertise.
|
||||
skills/decomposition-reconstruction/resources/template.md (new file)
@@ -0,0 +1,394 @@
|
||||
# Decomposition & Reconstruction Template
|
||||
|
||||
## Workflow
|
||||
|
||||
Copy this checklist and track your progress:
|
||||
|
||||
```
|
||||
Decomposition & Reconstruction Progress:
|
||||
- [ ] Step 1: System definition and scoping
|
||||
- [ ] Step 2: Component decomposition
|
||||
- [ ] Step 3: Relationship mapping
|
||||
- [ ] Step 4: Property analysis
|
||||
- [ ] Step 5: Reconstruction and recommendations
|
||||
```
|
||||
|
||||
**Step 1: System definition and scoping** - Define system, goal, boundaries, constraints. See [System Definition](#system-definition).
|
||||
|
||||
**Step 2: Component decomposition** - Break into atomic parts using appropriate strategy. See [Component Decomposition](#component-decomposition).
|
||||
|
||||
**Step 3: Relationship mapping** - Map dependencies, data flow, control flow. See [Relationship Mapping](#relationship-mapping).
|
||||
|
||||
**Step 4: Property analysis** - Measure/estimate component properties, identify critical elements. See [Property Analysis](#property-analysis).
|
||||
|
||||
**Step 5: Reconstruction and recommendations** - Apply reconstruction pattern, deliver recommendations. See [Reconstruction & Recommendations](#reconstruction--recommendations).
|
||||
|
||||
---
|
||||
|
||||
## System Definition
|
||||
|
||||
### Input Questions
|
||||
|
||||
Ask user to clarify:
|
||||
|
||||
**1. System description:**
|
||||
- What system are we analyzing? (Specific name, not vague category)
|
||||
- What does it do? (Purpose, inputs, outputs)
|
||||
- Current state vs desired state?
|
||||
|
||||
**2. Goal:**
|
||||
- What problem needs solving? (Performance, cost, complexity, reliability, redesign)
|
||||
- What would success look like? (Specific, measurable outcome)
|
||||
- Primary objective: Optimize, simplify, understand, or redesign?
|
||||
|
||||
**3. Boundaries:**
|
||||
- What's included in this system? (Components definitely in scope)
|
||||
- What's excluded? (Adjacent systems, dependencies we won't decompose further)
|
||||
- Why these boundaries? (Prevent scope creep)
|
||||
|
||||
**4. Constraints:**
|
||||
- What can't change? (Legacy integrations, regulatory requirements, budget limits)
|
||||
- Time horizon? (Quick analysis vs comprehensive redesign)
|
||||
- Stakeholder priorities? (Speed vs cost vs reliability)
|
||||
|
||||
### System Definition Template
|
||||
|
||||
```markdown
|
||||
## System Definition
|
||||
**Name:** [Specific system name]
|
||||
**Purpose:** [What it does]
|
||||
**Problem:** [Current issue]
|
||||
**Goal:** [Target improvement with success criteria]
|
||||
**Scope:** In: [Components to decompose] | Out: [Excluded systems]
|
||||
**Constraints:** [What can't change, timeline]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Component Decomposition
|
||||
|
||||
### Choose Decomposition Strategy
|
||||
|
||||
Match strategy to system type:
|
||||
|
||||
**Functional Decomposition** (processes, workflows):
|
||||
- Question: "What tasks does this system perform?"
|
||||
- Break down by function or activity
|
||||
- Example: User onboarding → Signup | Email verification | Profile setup | Tutorial
|
||||
|
||||
**Structural Decomposition** (architectures, organizations):
|
||||
- Question: "What are the physical or logical parts?"
|
||||
- Break down by component or module
|
||||
- Example: Microservices app → Auth service | User service | Payment service | Notification service
|
||||
|
||||
**Data Flow Decomposition** (pipelines, ETL):
|
||||
- Question: "How does data transform as it flows?"
|
||||
- Break down by transformation or processing stage
|
||||
- Example: Log processing → Collect | Parse | Filter | Aggregate | Store | Alert
|
||||
|
||||
**Temporal Decomposition** (sequences, journeys):
|
||||
- Question: "What are the stages over time?"
|
||||
- Break down by phase or time period
|
||||
- Example: Sales funnel → Awareness | Consideration | Decision | Purchase | Retention
|
||||
|
||||
**Cost/Resource Decomposition** (budgets, capacity):
|
||||
- Question: "How are resources allocated?"
|
||||
- Break down by cost center or resource type
|
||||
- Example: Team capacity → Development (60%) | Meetings (20%) | Support (15%) | Admin (5%)
|
||||
|
||||
### Decomposition Process
|
||||
|
||||
**Step 1: First-level decomposition**
|
||||
|
||||
Break system into 3-8 major components. If <3, system may be too simple for this analysis. If >8, group related items.
|
||||
|
||||
**Step 2: Determine decomposition depth**
|
||||
|
||||
For each component, ask: "Is further breakdown useful?"
|
||||
- **Yes, decompose further if:**
|
||||
- Component is complex and opaque
|
||||
- Further breakdown reveals optimization opportunities
|
||||
- Component is the bottleneck or high-cost area
|
||||
- **No, stop if:**
|
||||
- Component is atomic (can't meaningfully subdivide)
|
||||
- Further detail doesn't help achieve goal
|
||||
- Component is out of scope
|
||||
|
||||
**Step 3: Document decomposition hierarchy**
|
||||
|
||||
Use indentation or numbering to show levels.
|
||||
|
||||
### Component Decomposition Template
|
||||
|
||||
```markdown
|
||||
## Component Breakdown
|
||||
|
||||
**Strategy:** [Functional / Structural / Data Flow / Temporal / Cost-Resource]
|
||||
|
||||
**Hierarchy:**
|
||||
- **[Component A]:** [Description]
|
||||
- A.1: [Sub-component description]
|
||||
- A.2: [Sub-component description]
|
||||
- **[Component B]:** [Description]
|
||||
- B.1: [Sub-component description]
|
||||
- **[Component C]:** [Description - atomic]
|
||||
|
||||
**Depth Rationale:** [Why decomposed to this level]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Relationship Mapping
|
||||
|
||||
### Relationship Types
|
||||
|
||||
Identify all applicable relationships:
|
||||
|
||||
**1. Dependency:** A requires B to function
|
||||
- Example: Frontend depends on API, API depends on database
|
||||
- Notation: A → B (A depends on B)
|
||||
|
||||
**2. Data flow:** A sends data to B
|
||||
- Example: User input → Validation → Database
|
||||
- Notation: A ⇒ B (data flows from A to B)
|
||||
|
||||
**3. Control flow:** A triggers or controls B
|
||||
- Example: Payment success triggers fulfillment
|
||||
- Notation: A ⊳ B (A triggers B)
|
||||
|
||||
**4. Temporal ordering:** A must happen before B
|
||||
- Example: Authentication before authorization
|
||||
- Notation: A < B (A before B in time)
|
||||
|
||||
**5. Resource sharing:** A and B both use C
|
||||
- Example: Services share database connection pool
|
||||
- Notation: A ← C → B (both use C)
|
||||
|
||||
### Mapping Process
|
||||
|
||||
**Step 1: Pairwise relationship check**
|
||||
|
||||
For each pair of components, ask:
|
||||
- Does A depend on B?
|
||||
- Does data flow from A to B?
|
||||
- Does A trigger B?
|
||||
- Must A happen before B?
|
||||
- Do A and B share a resource?
|
||||
|
||||
**Step 2: Document relationships**
|
||||
|
||||
List all relationships with type and direction.
|
||||
|
||||
**Step 3: Identify critical paths**
|
||||
|
||||
Trace sequences of dependencies from input to output. Longest path = critical path.
|
||||
|
||||
### Relationship Mapping Template
|
||||
|
||||
```markdown
|
||||
## Relationships
|
||||
|
||||
**Dependencies:** [A] → [B] → [C] (A requires B, B requires C)
|
||||
**Data Flows:** [Input] ⇒ [Process] ⇒ [Output]
|
||||
**Control Flows:** [Trigger] ⊳ [Action] ⊳ [Notification]
|
||||
**Temporal:** [Step 1] < [Step 2] < [Step 3]
|
||||
**Resource Sharing:** [A, B] share [Resource C]
|
||||
**Critical Path:** [Start] → [A] → [B] → [C] → [End] (Total: [time/cost])
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Property Analysis
|
||||
|
||||
### Component Properties
|
||||
|
||||
For each component, measure or estimate:
|
||||
|
||||
**Performance properties:**
|
||||
- Latency: Time to complete
|
||||
- Throughput: Capacity (requests/sec, items/hour)
|
||||
- Reliability: Uptime, failure rate
|
||||
- Scalability: Can it handle growth?
|
||||
|
||||
**Cost properties:**
|
||||
- Direct cost: $/month, $/transaction
|
||||
- Indirect cost: Maintenance burden, technical debt
|
||||
- Opportunity cost: What else could we build with these resources?
|
||||
|
||||
**Complexity properties:**
|
||||
- Lines of code, number of dependencies, cyclomatic complexity
|
||||
- Cognitive load: How hard to understand/change?
|
||||
- Coupling: How tightly connected to other components?
|
||||
|
||||
**Other properties (domain-specific):**
|
||||
- Security: Vulnerability surface
|
||||
- Compliance: Regulatory requirements
|
||||
- User experience: Friction points, satisfaction
|
||||
|
||||
### Analysis Techniques
|
||||
|
||||
**Measurement (objective):**
|
||||
- Use profiling tools, logs, metrics dashboards
|
||||
- Benchmark performance, measure latency, count resources
|
||||
- Example: Database query takes 1.2s (measured via APM tool)
|
||||
|
||||
**Estimation (subjective):**
|
||||
- When measurement isn't available, estimate with rationale
|
||||
- Use comparative judgment (high/medium/low or 1-10 scale)
|
||||
- Example: "Component A complexity: 8/10 because 500 LOC, 12 dependencies, no docs"
|
||||
|
||||
**Sensitivity analysis:**
|
||||
- Identify which properties matter most for goal
|
||||
- Focus measurement/estimation on critical properties
|
||||
|
||||
### Property Analysis Template
|
||||
|
||||
```markdown
|
||||
## Component Properties
|
||||
|
||||
| Component | Latency | Cost | Complexity | Reliability | Notes |
|-----------|---------|------|------------|-------------|-------|
| [A] | 500ms | $200/mo | 5/10 | 99.9% | [Notes] |
| [B] | 1.2s | $50/mo | 8/10 | 95% | [Notes] |
|
||||
|
||||
**Data sources:** [Where metrics came from]
|
||||
|
||||
**Critical Components:** [List with impact on goal]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Reconstruction & Recommendations
|
||||
|
||||
### Choose Reconstruction Pattern
|
||||
|
||||
Based on goal and analysis, select approach:
|
||||
|
||||
**Bottleneck Identification:**
|
||||
- Goal: Find limiting factor
|
||||
- Approach: Identify component with highest impact on goal metric
|
||||
- Recommendation: Optimize the bottleneck first
|
||||
|
||||
**Simplification:**
|
||||
- Goal: Reduce complexity
|
||||
- Approach: Question necessity of each component, eliminate low-value parts
|
||||
- Recommendation: Remove or consolidate components
|
||||
|
||||
**Reordering:**
|
||||
- Goal: Improve efficiency through sequencing
|
||||
- Approach: Identify independent components, move earlier or parallelize
|
||||
- Recommendation: Change execution order
|
||||
|
||||
**Parallelization:**
|
||||
- Goal: Increase throughput
|
||||
- Approach: Find independent components, execute concurrently
|
||||
- Recommendation: Run in parallel instead of serial
|
||||
|
||||
**Substitution:**
|
||||
- Goal: Replace underperforming component
|
||||
- Approach: Identify weak component, find better alternative
|
||||
- Recommendation: Swap component
|
||||
|
||||
**Consolidation:**
|
||||
- Goal: Reduce overhead
|
||||
- Approach: Find redundant/overlapping components, merge
|
||||
- Recommendation: Combine similar components
|
||||
|
||||
**Modularization:**
|
||||
- Goal: Improve maintainability
|
||||
- Approach: Identify tight coupling, separate concerns
|
||||
- Recommendation: Extract into independent modules
|
||||
|
||||
### Recommendation Structure
|
||||
|
||||
Each recommendation should include:
|
||||
1. **What:** Specific change to make
|
||||
2. **Why:** Rationale based on analysis
|
||||
3. **Expected impact:** Quantified or estimated benefit
|
||||
4. **Implementation:** High-level approach or next steps
|
||||
5. **Risks:** Potential downsides or considerations
|
||||
|
||||
### Reconstruction Template
|
||||
|
||||
```markdown
|
||||
## Reconstruction
|
||||
|
||||
**Pattern:** [Bottleneck ID / Simplification / Reordering / Parallelization / Substitution / Consolidation / Modularization]
|
||||
|
||||
**Key Findings:**
|
||||
- [Finding 1 with evidence]
|
||||
- [Finding 2 with evidence]
|
||||
|
||||
## Recommendations
|
||||
|
||||
### Priority 1: [Title]
|
||||
**What:** [Specific change]
|
||||
**Why:** [Rationale from analysis]
|
||||
**Impact:** [Quantified improvement, confidence level]
|
||||
**Implementation:** [High-level approach, effort estimate]
|
||||
**Risks:** [Key risks and mitigations]
|
||||
|
||||
### Priority 2: [Title]
|
||||
[Same structure]
|
||||
|
||||
## Summary
|
||||
**Current:** [System as analyzed]
|
||||
**Proposed:** [After recommendations]
|
||||
**Total Impact:** [Goal metric improvement]
|
||||
**Next Steps:** [1. Immediate action, 2. Planning, 3. Execution]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
Before delivering, verify:
|
||||
|
||||
**Decomposition quality:**
|
||||
- [ ] System boundary is clear and justified
|
||||
- [ ] Components are at appropriate granularity (not too coarse, not too fine)
|
||||
- [ ] Decomposition strategy matches system type
|
||||
- [ ] All major components identified
|
||||
- [ ] Decomposition depth is justified (why stopped where we did)
|
||||
|
||||
**Relationship mapping:**
|
||||
- [ ] All critical relationships documented
|
||||
- [ ] Relationship types are clear (dependency vs data flow vs control flow)
|
||||
- [ ] Critical path identified
|
||||
- [ ] Dependencies are accurate (verified with stakeholders if uncertain)
|
||||
|
||||
**Property analysis:**
|
||||
- [ ] Key properties measured or estimated for each component
|
||||
- [ ] Data sources documented (measurement vs estimation)
|
||||
- [ ] Critical components identified (highest impact on goal)
|
||||
- [ ] Analysis focuses on properties relevant to goal
|
||||
|
||||
**Reconstruction & recommendations:**
|
||||
- [ ] Reconstruction pattern matches goal
|
||||
- [ ] Recommendations are specific and actionable
|
||||
- [ ] Expected impact is quantified or estimated
|
||||
- [ ] Rationale ties back to component analysis
|
||||
- [ ] Risks and considerations noted
|
||||
- [ ] Prioritization is clear (Priority 1, 2, 3)
|
||||
|
||||
**Communication:**
|
||||
- [ ] Decomposition is visualizable (hierarchy or diagram could be drawn)
|
||||
- [ ] Analysis findings are clear and evidence-based
|
||||
- [ ] Recommendations have clear expected impact
|
||||
- [ ] Technical level appropriate for audience
|
||||
- [ ] Assumptions and limitations stated
|
||||
|
||||
---
|
||||
|
||||
## Common Pitfalls
|
||||
|
||||
| Pitfall | Fix |
|---------|-----|
| **Decomposition too shallow** (2-3 complex components) | Ask "can this be broken down further?" |
| **Decomposition too deep** (50+ atomic parts) | Group related components, focus on goal-relevant areas |
| **Inconsistent strategy** (mixing functional/structural) | Choose one primary strategy, stick to it |
| **Missing critical relationships** (hidden dependencies) | Trace data/control flow systematically, validate with stakeholders |
| **Unmeasured properties** (all guesses) | Prioritize measurement for critical components |
| **Vague recommendations** ("optimize X") | Specify WHAT, HOW, WHY with evidence from analysis |
| **Ignoring constraints** (impossible suggestions) | Check all recommendations against stated constraints |
| **No impact quantification** ("can't estimate improvement") | Estimate expected impact from component properties |
|
||||