# Methodology Bootstrapping - Overview

**Unified framework for developing software engineering methodologies through systematic observation, empirical validation, and automated enforcement.**

## Philosophy

> The best methodologies are not **designed** but **evolved** through systematic observation, codification, and automation of successful practices.

Traditional methodologies are:

- Theory-driven (based on principles, not data)
- Static (created once, rarely updated)
- Prescriptive (one-size-fits-all)
- Manual (require discipline, no automated validation)

**Methodology Bootstrapping** enables methodologies that are:

- Data-driven (based on empirical observation)
- Dynamic (continuously evolving)
- Adaptive (project-specific)
- Automated (enforced by CI/CD)

## Three-Layer Architecture

The framework integrates three complementary layers:

### Layer 1: Core Framework (OCA Cycle)

- **Observe**: Instrument and collect data
- **Codify**: Extract patterns and document them
- **Automate**: Convert patterns to automated checks
- **Evolve**: Apply the methodology to itself

**Output**: Three-tuple (O, Aₙ, Mₙ)

- O = Task output (code, docs, system)
- Aₙ = Converged agent set (reusable)
- Mₙ = Converged meta-agent (transferable)
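
The iteration that produces this three-tuple can be sketched in Python. This is a minimal illustration of the control flow, not the framework's actual implementation; every name below (`run_oca_cycle`, `evolve`, the agent strings) is a hypothetical stand-in.

```python
def evolve(agents, patterns):
    """Evolve the agent set: specialize only when a gap appears.
    In this stub, one hypothetical 'doc-writer' specialist is added
    on the first pass, after which the set is stable."""
    return agents if "doc-writer" in agents else agents + ["doc-writer"]


def run_oca_cycle(task, agents, meta_agent, max_iterations=10):
    """Iterate Observe -> Codify -> Automate, evolving the agent set,
    until no agent changes. Returns the three-tuple (O, A_n, M_n)."""
    output = None
    for _ in range(max_iterations):
        observations = [f"{agent} worked on {task}" for agent in agents]  # Observe
        patterns = sorted(set(observations))                              # Codify
        output = {"task": task, "automated_checks": patterns}             # Automate
        new_agents = evolve(agents, patterns)                             # Evolve
        stable = new_agents == agents
        agents = new_agents
        if stable:
            break
    # The meta-agent is returned unchanged, mirroring Insight 5 below.
    return output, agents, meta_agent


O, A_n, M_n = run_oca_cycle("add-coverage", ["coder", "tester"], "M0")
```

The loop terminates on stability rather than on a fixed iteration count, which is what makes the converged agent set Aₙ reusable.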

### Layer 2: Scientific Foundation

- Hypothesis formation
- Experimental validation
- Statistical analysis
- Pattern recognition
- Empirical evidence

### Layer 3: Quantitative Evaluation

- **V_instance(s)**: Domain-specific task quality
- **V_meta(s)**: Methodology transferability quality
- Convergence criteria
- Optimization mathematics
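
One way to make these value functions concrete is a weighted average of scored components. The component names, scores, and equal weights below are illustrative assumptions, not values defined by the framework:

```python
def value(components, weights=None):
    """Weighted average of component scores, each in [0, 1].
    With no weights given, all components count equally."""
    weights = weights or {name: 1.0 for name in components}
    total = sum(weights.values())
    return sum(components[name] * weights[name] for name in components) / total


# V_instance: domain-specific task quality (hypothetical components)
v_instance = value({"correctness": 0.9, "coverage": 0.8, "maintainability": 0.7})

# V_meta: methodology transferability quality (hypothetical components)
v_meta = value({"completeness": 0.85, "generality": 0.9, "automation": 0.8})
```

Both layers share the same scoring machinery; only the components differ, which keeps the dual evaluation cheap to compute each iteration.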

## Key Insights

### Insight 1: Dual-Layer Value Functions

Optimizing only task quality (V_instance) produces good code but no reusable methodology.
Optimizing both layers creates **compound value**: good code + transferable methodology.

### Insight 2: Self-Referential Feedback Loop

The methodology can improve itself:

1. Use tools to observe methodology development
2. Extract meta-patterns from methodology creation
3. Codify patterns as methodology improvements
4. Automate methodology validation

This creates a **closed loop**: methodologies optimize methodologies.

### Insight 3: Convergence is Mathematical

The methodology is complete when:

- The system is stable (no agent evolution)
- The dual threshold is met (V_instance ≥ 0.80, V_meta ≥ 0.80)
- Returns are diminishing (ΔV < ε)

No guesswork: the math tells you when you are done.
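
The three criteria translate directly into a predicate. The 0.80 thresholds come from the text above; the ε value is an assumed placeholder:

```python
EPSILON = 0.01  # assumed diminishing-returns threshold; tune per project


def converged(agents_changed, v_instance, v_meta, delta_v):
    """True when the system is stable, both value thresholds are met,
    and the last iteration's improvement fell below epsilon."""
    stable = not agents_changed                               # no agent evolution
    threshold_met = v_instance >= 0.80 and v_meta >= 0.80     # dual threshold
    diminishing = delta_v < EPSILON                           # diminishing returns
    return stable and threshold_met and diminishing


converged(False, 0.82, 0.86, 0.004)   # True: all three criteria hold
converged(False, 0.78, 0.86, 0.004)   # False: V_instance below threshold
```

Because all three conditions are conjunctive, a high value score alone never ends an experiment while agents are still evolving.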

### Insight 4: Agent Specialization Emerges

Don't predetermine agents. Let specialization emerge:

- Start with generic agents (coder, tester, doc-writer)
- Identify gaps during execution
- Create specialized agents only when needed
- Across 8 experiments: 0-5 specialized agents per experiment
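
A minimal sketch of this emergence, with all agent and gap names hypothetical: the generic set is extended only when an observed gap names a capability no current agent covers.

```python
GENERIC_AGENTS = ["coder", "tester", "doc-writer"]


def specialize(agents, observed_gaps):
    """Return the agent set extended with one specialist per uncovered gap;
    with no gaps, the generic set passes through unchanged."""
    result = list(agents)
    for gap in observed_gaps:
        if gap not in result:      # no existing agent covers this capability
            result.append(gap)     # create the specialist on demand
    return result


specialize(GENERIC_AGENTS, [])                     # -> generic set unchanged
specialize(GENERIC_AGENTS, ["ci-pipeline-agent"])  # -> one specialist added
```

Deferring creation until a gap is actually observed is what keeps the per-experiment specialist count low (0-5 in the experiments cited above).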

### Insight 5: Meta-Agent M₀ is Sufficient

Across all 8 experiments, the base Meta-Agent (M₀) never needed evolution:

- M₀ capabilities: observe, plan, execute, reflect, evolve
- Sufficient for all domains tested
- Agent specialization handles domain gaps
- The Meta-Agent handles coordination

## Validated Outcomes

**From 8 experiments** (testing, error recovery, CI/CD, observability, dependency health, knowledge transfer, technical debt, cross-cutting concerns):

- **Success rate**: 100% (8/8 converged)
- **Efficiency**: 4.9 avg iterations, 9.1 avg hours
- **Quality**: V_instance 0.784, V_meta 0.840
- **Transferability**: 70-95%
- **Speedup**: 3-46x vs ad-hoc

## When to Use

**Ideal conditions**:

- Recurring problem requiring a systematic approach
- Methodology needs to be transferable
- Empirical data available for observation
- Automation infrastructure exists (CI/CD)
- Team values data-driven decisions

**Sub-optimal conditions**:

- One-time ad-hoc task
- An established industry standard fully applies
- No data available (greenfield)
- No automation infrastructure
- Team prefers intuition over data

## Prerequisites

**Tools**:

- Session analysis (meta-cc MCP server or equivalent)
- Git repository access
- Code metrics tools (coverage, linters)
- CI/CD platform (GitHub Actions, GitLab CI)
- Markdown editor

**Skills**:

- Basic data analysis (statistics, patterns)
- Software development experience
- Understanding of the scientific method
- Documentation writing

**Time investment**:

- Learning the framework: 4-8 hours
- First experiment: 6-15 hours
- Subsequent experiments: 4-10 hours (with acceleration)

## Success Criteria

| Criterion | Target | Validation |
|-----------|--------|------------|
| Framework understanding | Can explain the OCA cycle | Self-test |
| Dual-layer evaluation | Can calculate V_instance, V_meta | Practice |
| Convergence recognition | Can identify completion | Apply criteria |
| Methodology documentation | Complete docs | Peer review |
| Transferability | ≥85% reusability | Cross-project test |

---

**Next**: Read [observe-codify-automate.md](observe-codify-automate.md) for a detailed explanation of the OCA cycle.