Initial commit

Zhongwei Li
2025-11-30 09:07:22 +08:00
commit fab98d059b
179 changed files with 46209 additions and 0 deletions


@@ -0,0 +1,7 @@
# Iteration Templates
- Use `scripts/new-iteration-doc.sh <num> <title>` to scaffold iteration logs from `.claude/skills/code-refactoring/templates/iteration-template.md`.
- Fill in Observe/Codify/Automate and value scores immediately after running `make metrics-mcp`.
- Link evidence (tests, metrics files) to keep V_meta_completeness ≥ 0.8.
- This practice was established in iteration-3.md and should be repeated for future refactors.


@@ -0,0 +1,37 @@
{
  "pattern_count": 8,
  "patterns": [
    {
      "name": "builder_map_decomposition",
      "description": "Map tool/command identifiers to factory functions to eliminate switch ladders and ease extension (evidence: MCP server Iteration 1)."
    },
    {
      "name": "pipeline_config_struct",
      "description": "Gather shared parameters into immutable config structs so orchestration functions stay linear and testable (evidence: MCP server Iteration 1)."
    },
    {
      "name": "helper_specialization",
      "description": "Push tracing/metrics/error branches into helpers to keep primary logic readable and reuse instrumentation (evidence: MCP server Iteration 1)."
    },
    {
      "name": "jq_pipeline_segmentation",
      "description": "Treat JSONL parsing, jq execution, and serialization as independent helpers to confine failure domains (evidence: MCP server Iteration 2)."
    },
    {
      "name": "automation_first_metrics",
      "description": "Bundle metrics capture in scripts/make targets so every iteration records complexity & coverage automatically (evidence: MCP server Iteration 2, CLI Iteration 3)."
    },
    {
      "name": "documentation_templates",
      "description": "Use standardized iteration templates + generators to maintain BAIME completeness with minimal overhead (evidence: MCP server Iteration 3, CLI Iteration 3)."
    },
    {
      "name": "conversation_turn_builder",
      "description": "Extract user/assistant maps and assemble turns through helper orchestration to control complexity in conversation analytics (evidence: CLI Iteration 4)."
    },
    {
      "name": "prompt_outcome_analyzer",
      "description": "Split prompt outcome evaluation into dedicated helpers (confirmation, errors, deliverables, status) for predictable analytics (evidence: CLI Iteration 4)."
    }
  ]
}


@@ -0,0 +1,9 @@
# Builder Map Decomposition
**Problem**: Command dispatchers with large switch statements cause high cyclomatic complexity and brittle branching (see iterations/iteration-1.md).
**Solution**: Replace the monolithic switch with a map of tool names to builder functions plus shared helpers for defaults. Keep scope flags as separate helpers for readability.
**Outcome**: Cyclomatic complexity dropped from 51 to 3 on `(*ToolExecutor).buildCommand`, with behaviour validated by existing executor tests.
**When to Use**: Any CLI/tool dispatcher with ≥8 branches or duplicated flag wiring.
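
A minimal Go sketch of the resulting shape (the `Command` type, tool names, and `withDefaults` helper are hypothetical stand-ins, not the actual meta-cc executor API):

```go
package executor

import "fmt"

// Command is a hypothetical stand-in for the executor's command value.
type Command struct {
	Name string
	Args []string
}

// builderFunc constructs the command for one tool.
type builderFunc func(args []string) Command

// builders replaces the switch ladder: one entry per tool name.
var builders = map[string]builderFunc{
	"list_tools": func(args []string) Command { return Command{Name: "list", Args: withDefaults(args)} },
	"query_logs": func(args []string) Command { return Command{Name: "query", Args: withDefaults(args)} },
}

// withDefaults centralizes the flag wiring that was duplicated across branches.
func withDefaults(args []string) []string {
	return append([]string{"--output", "jsonl"}, args...)
}

// buildCommand is a lookup plus an error path instead of a multi-branch switch.
func buildCommand(tool string, args []string) (Command, error) {
	build, ok := builders[tool]
	if !ok {
		return Command{}, fmt.Errorf("unknown tool %q", tool)
	}
	return build(args), nil
}
```

Registering a new tool then means adding one map entry rather than another switch arm, which is what keeps the dispatcher's complexity constant as tools accumulate.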


@@ -0,0 +1,9 @@
# Conversation Turn Pipeline
**Problem**: Conversation queries bundled user/assistant extraction, duration math, and output assembly into one 80+ line function, inflating cyclomatic complexity (25) and risking regressions when adding filters.
**Solution**: Extract helpers for user indexing, assistant metrics, turn collection, and timestamp finalization. Each step focuses on a single responsibility, enabling targeted unit tests and reuse across similar commands.
**Evidence**: `cmd/query_conversation.go` (CLI iteration-3) reduced `buildConversationTurns` to a coordinator whose helper functions each stay at cyclomatic complexity ≤6.
**When to Use**: Any CLI/API that pairs multi-role messages into aggregate records (e.g., chat analytics, ticket conversations) where duplicating loops would obscure business rules.
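
A simplified Go sketch of the decomposition, using hypothetical `Message` and `Turn` types in place of the real session structures:

```go
package conversation

import "time"

// Message and Turn are illustrative placeholders for the session types.
type Message struct {
	Role      string
	Text      string
	Timestamp time.Time
}

type Turn struct {
	UserText      string
	AssistantText string
	Duration      time.Duration
}

// indexUserMessages records the position of each user message so replies
// can be paired without nested scans.
func indexUserMessages(msgs []Message) []int {
	var idx []int
	for i, m := range msgs {
		if m.Role == "user" {
			idx = append(idx, i)
		}
	}
	return idx
}

// collectTurn pairs one user message with the first assistant reply after it.
func collectTurn(msgs []Message, userIdx int) (Turn, bool) {
	for j := userIdx + 1; j < len(msgs); j++ {
		if msgs[j].Role == "assistant" {
			return Turn{
				UserText:      msgs[userIdx].Text,
				AssistantText: msgs[j].Text,
				Duration:      msgs[j].Timestamp.Sub(msgs[userIdx].Timestamp),
			}, true
		}
	}
	return Turn{}, false
}

// buildConversationTurns is reduced to a thin coordinator over the helpers.
func buildConversationTurns(msgs []Message) []Turn {
	var turns []Turn
	for _, i := range indexUserMessages(msgs) {
		if t, ok := collectTurn(msgs, i); ok {
			turns = append(turns, t)
		}
	}
	return turns
}
```

Each helper can be unit-tested with a handful of messages, which is what keeps the coordinator's complexity flat as new filters are added.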


@@ -0,0 +1,9 @@
# Prompt Outcome Analyzer
**Problem**: Analytics commands that inspect user prompts often intermingle success detection, error counting, and deliverable extraction within one loop, leading to brittle logic and high cyclomatic complexity.
**Solution**: Break the analysis into helpers that (1) detect user-confirmed success, (2) count tool errors, (3) aggregate deliverables, and (4) finalize status. The orchestration function composes these steps, making behaviour explicit and testable.
**Evidence**: Meta-CC CLI Iteration 4 refactored `analyzePromptOutcome` using this pattern, dropping complexity from 25 to 5 while preserving behaviour across short-mode tests.
**When to Use**: Any Go CLI or service that evaluates multi-step workflows (prompts, tasks, pipelines) and needs to separate signal extraction from aggregation logic.
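
A hedged Go sketch of the four-helper split; `PromptRecord`, `Outcome`, and the confirmation heuristic are illustrative placeholders, not the real meta-cc types:

```go
package analysis

// PromptRecord and Outcome stand in for the CLI's real analytics structures.
type PromptRecord struct {
	UserMessages []string
	ToolErrors   int
	Files        []string
}

type Outcome struct {
	Confirmed    bool
	ErrorCount   int
	Deliverables []string
	Status       string
}

// detectConfirmation reports whether the user explicitly confirmed success.
func detectConfirmation(msgs []string) bool {
	for _, m := range msgs {
		if m == "looks good" || m == "lgtm" {
			return true
		}
	}
	return false
}

// countToolErrors isolates error counting from status decisions.
func countToolErrors(r PromptRecord) int { return r.ToolErrors }

// collectDeliverables aggregates artifacts produced during the prompt.
func collectDeliverables(r PromptRecord) []string { return r.Files }

// finalizeStatus converts the extracted signals into a single status label.
func finalizeStatus(confirmed bool, errs int) string {
	switch {
	case confirmed && errs == 0:
		return "success"
	case errs > 0:
		return "partial"
	default:
		return "unknown"
	}
}

// analyzePromptOutcome composes the helpers; the orchestration stays linear.
func analyzePromptOutcome(r PromptRecord) Outcome {
	confirmed := detectConfirmation(r.UserMessages)
	errs := countToolErrors(r)
	return Outcome{
		Confirmed:    confirmed,
		ErrorCount:   errs,
		Deliverables: collectDeliverables(r),
		Status:       finalizeStatus(confirmed, errs),
	}
}
```

Because each signal is extracted independently, changing the status rules touches only `finalizeStatus`, and each extractor can be exercised in isolation.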


@@ -0,0 +1,7 @@
# Automate Evidence Capture
**Principle**: Every iteration should capture complexity and coverage metrics via a single command to keep BAIME evaluations trustworthy.
**Implementation**: Iteration 2 introduced `scripts/capture-mcp-metrics.sh`, later surfaced through `make metrics-mcp` (iteration-3.md). Running the target emits timestamped gocyclo and coverage reports under `build/methodology/`.
**Benefit**: Raises V_meta_effectiveness by eliminating manual data gathering and preventing stale metrics.


@@ -0,0 +1,5 @@
# Pattern Name
- **Problem**: Describe the recurring issue.
- **Solution**: Summarize the refactoring tactic.
- **Evidence**: Link to iteration documents and metrics.
- **When to Use**: State the preconditions under which the pattern applies.