Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions


@@ -0,0 +1,103 @@
---
name: skill-creator
description: Use when the user has a document (PDF, markdown, book notes, research paper, methodology guide) containing theoretical knowledge or frameworks and wants to convert it into an actionable, reusable skill. Invoke when the user mentions "create a skill from this document", "turn this into a skill", "extract a skill from this file", or when analyzing documents with methodologies, frameworks, processes, or systematic approaches that could be made actionable for future use.
---
# Skill Creator
## Table of Contents
- [Read This First](#read-this-first)
- [Workflow](#workflow)
- [Step 1: Inspectional Reading](#step-1-inspectional-reading)
- [Step 2: Structural Analysis](#step-2-structural-analysis)
- [Step 3: Component Extraction](#step-3-component-extraction)
- [Step 4: Synthesis and Application](#step-4-synthesis-and-application)
- [Step 5: Skill Construction](#step-5-skill-construction)
- [Step 6: Validation and Refinement](#step-6-validation-and-refinement)
---
## Read This First
### What This Skill Does
This skill helps you transform documents containing theoretical knowledge into actionable, reusable skills. It applies the systematic reading methodology from "How to Read a Book" by Mortimer Adler to extract, analyze, and structure knowledge from documents.
### The Process Overview
The skill follows a **six-step progressive reading approach**:
1. **Inspectional Reading** - Quick overview to understand structure and determine if the document contains skill-worthy material
2. **Structural Analysis** - Deep understanding of what the document is about and how it's organized
3. **Component Extraction** - Systematic extraction of actionable components from the content
4. **Synthesis and Application** - Critical evaluation and transformation of theory into practical application
5. **Skill Construction** - Building the actual skill files (SKILL.md, resources, rubric)
6. **Validation and Refinement** - Scoring the skill quality and making improvements
### Why This Approach Works
This methodology prevents common mistakes like:
- Reading entire documents without structure (information overload)
- Missing key concepts by not understanding the overall framework first
- Extracting theory without identifying practical applications
- Creating skills that can't be reused because they're too specific or too vague
### Collaborative Process
**This skill always works collaboratively with you, the user.** At decision points, you'll be presented with options and trade-offs. The final decisions always belong to you. This ensures the skill created matches your needs and mental model.
---
## Workflow
**COPY THIS CHECKLIST** and work through each step:
```
Skill Creation Workflow
- [ ] Step 0: Initialize session workspace
- [ ] Step 1: Inspectional Reading
- [ ] Step 2: Structural Analysis
- [ ] Step 3: Component Extraction
- [ ] Step 4: Synthesis and Application
- [ ] Step 5: Skill Construction
- [ ] Step 6: Validation and Refinement
```
**Step 0: Initialize Session Workspace**
Create working directory and global context file. See [resources/inspectional-reading.md#session-initialization](resources/inspectional-reading.md#session-initialization) for setup commands.
**Step 1: Inspectional Reading**
Skim document systematically, classify type, assess skill-worthiness. Writes to `step-1-output.md`. See [resources/inspectional-reading.md#why-systematic-skimming](resources/inspectional-reading.md#why-systematic-skimming) for skim approach, [resources/inspectional-reading.md#why-document-type-matters](resources/inspectional-reading.md#why-document-type-matters) for classification, [resources/inspectional-reading.md#why-skill-worthiness-check](resources/inspectional-reading.md#why-skill-worthiness-check) for assessment criteria.
**Step 2: Structural Analysis**
Reads `global-context.md` + `step-1-output.md`. Classify content, state unity, enumerate parts, define problems. Writes to `step-2-output.md`. See [resources/structural-analysis.md#why-classify-content](resources/structural-analysis.md#why-classify-content), [resources/structural-analysis.md#why-state-unity](resources/structural-analysis.md#why-state-unity), [resources/structural-analysis.md#why-enumerate-parts](resources/structural-analysis.md#why-enumerate-parts), [resources/structural-analysis.md#why-define-problems](resources/structural-analysis.md#why-define-problems).
**Step 3: Component Extraction**
Reads `global-context.md` + `step-2-output.md`. Choose reading strategy, extract terms/propositions/arguments/solutions section-by-section. Writes to `step-3-output.md`. See [resources/component-extraction.md#why-reading-strategy](resources/component-extraction.md#why-reading-strategy) for strategy selection, [resources/component-extraction.md#section-based-extraction](resources/component-extraction.md#section-based-extraction) for programmatic approach, [resources/component-extraction.md#why-extract-terms](resources/component-extraction.md#why-extract-terms) through [resources/component-extraction.md#why-extract-solutions](resources/component-extraction.md#why-extract-solutions) for what to extract.
**Step 4: Synthesis and Application**
Reads `global-context.md` + `step-3-output.md`. Evaluate completeness, identify applications, transform to actionable steps, define triggers. Writes to `step-4-output.md`. See [resources/synthesis-application.md#why-evaluate-completeness](resources/synthesis-application.md#why-evaluate-completeness), [resources/synthesis-application.md#why-identify-applications](resources/synthesis-application.md#why-identify-applications), [resources/synthesis-application.md#why-transform-to-actions](resources/synthesis-application.md#why-transform-to-actions), [resources/synthesis-application.md#why-define-triggers](resources/synthesis-application.md#why-define-triggers).
**Step 5: Skill Construction**
Reads `global-context.md` + `step-4-output.md`. Determine complexity, plan resources, create SKILL.md and resource files, create rubric. Writes to `step-5-output.md`. See [resources/skill-construction.md#why-complexity-level](resources/skill-construction.md#why-complexity-level), [resources/skill-construction.md#why-plan-resources](resources/skill-construction.md#why-plan-resources), [resources/skill-construction.md#why-skill-md-structure](resources/skill-construction.md#why-skill-md-structure), [resources/skill-construction.md#why-resource-structure](resources/skill-construction.md#why-resource-structure), [resources/skill-construction.md#why-evaluation-rubric](resources/skill-construction.md#why-evaluation-rubric).
**Step 6: Validation and Refinement**
Reads `global-context.md` + `step-5-output.md` + actual skill files. Score using rubric, present analysis, refine based on user decision. Writes to `step-6-output.md`. See [resources/evaluation-rubric.json](resources/evaluation-rubric.json) for criteria.
---
## Notes
- **File-Based Context:** Each step writes output files to avoid context overflow
- **Global Context:** All steps read `global-context.md` for continuity
- **Sequential Dependencies:** Each step reads previous step's output
- **User Collaboration:** Always present findings and get approval at decision points
- **Quality Standards:** Use evaluation rubric (threshold ≥ 3.5) before delivery


@@ -0,0 +1,340 @@
# Component Extraction
This resource supports **Step 3** of the Skill Creator workflow.
**Input files:** `$SESSION_DIR/global-context.md`, `$SESSION_DIR/step-2-output.md`, `$SOURCE_DOC` (section-by-section reading)
**Output files:** `$SESSION_DIR/step-3-output.md`, `$SESSION_DIR/step-3-extraction-workspace.md` (intermediate), updates `global-context.md`
**Stage goal:** Extract all actionable components systematically using the interpretive reading approach.
---
## Why Reading Strategy
### WHY Strategy Selection Matters
Different document sizes and structures need different reading approaches:
- **Too large to read at once:** Context overflow, can't hold everything in memory
- **Too structured for linear reading:** Miss relationships between non-adjacent sections
- **Too dense for single pass:** Need multiple focused extractions
**Mental model:** You wouldn't read a 500-page book the same way you read a 10-page article. Match your reading strategy to the document characteristics.
Without a deliberate strategy: context overflow, missed content, inefficient extraction, fatigue.
### WHAT Strategies Exist
Choose based on document characteristics identified in Steps 1-2:
**Three strategies:**
1. **Section-Based:** Document has clear sections and is under ~50 pages. Read one section, extract, write notes, clear, repeat.
2. **Windowing:** Long document (over ~50 pages) without clear breaks. Read 200-line chunks with a 20-line overlap.
3. **Targeted:** Hybrid content. Based on Step 2, read only the high-value sections identified.
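The windowing arithmetic is easy to get wrong by hand. As a minimal sketch (the 200/20 numbers come from the strategy above; `print_windows` is a hypothetical helper, not part of the workflow's required tooling):

```shell
# Sketch: 200-line windows with a 20-line overlap (step = 200 - 20 = 180).
# Pass the document's total line count; each printed range becomes one read.
print_windows() {
  total=$1
  start=1
  while [ "$start" -le "$total" ]; do
    end=$((start + 199))
    if [ "$end" -gt "$total" ]; then end=$total; fi
    echo "lines $start-$end"
    start=$((start + 180))  # advance by window size minus overlap
  done
}
print_windows 500
```

Each printed range maps to one section-style read in Step 3; the 20-line overlap keeps sentences that straddle a boundary visible in both windows.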
---
### WHAT to Decide
**Present options to user:**
```markdown
## Reading Strategy Options
Based on document analysis:
- Size: [X pages/lines]
- Structure: [Clear sections / Continuous flow]
- Relevant parts: [All / Specific sections]
**Recommended strategy:** [Section-based / Windowing / Targeted]
**Rationale:** [Why this strategy fits]
**Alternative approach:** [If applicable]
Which approach would you like to use?
```
---
## Section-Based Extraction
### WHY Programmatic Approach
Reading the entire document into context:
- Floods context with potentially thousands of lines
- Makes it hard to focus on one section at a time
- Risk of losing extracted content before writing it down
**Solution:** Read one section, extract components, write to intermediate file, clear from context, repeat.
### WHAT to Do
#### Step 1: Read Global Context and Previous Output
```bash
# Read what we know so far
Read("$SESSION_DIR/global-context.md")
Read("$SESSION_DIR/step-2-output.md")
```
From step-2-output, you'll have the list of major parts/sections.
#### Step 2: Initialize Extraction File
```bash
# Create extraction workspace
cat > "$SESSION_DIR/step-3-extraction-workspace.md" << 'EOF'
# Component Extraction Workspace
## Sections Processed
[Mark sections as you complete them]
## Extracted Components
[Append after each section]
EOF
```
#### Step 3: Process Each Section
**For each section from step-2-output:**
```bash
# Example for Section 1
# Read just that section from source document
Read("$SOURCE_DOC", offset=[start_line], limit=[section_length])
# Extract components from this section following the guides below:
# - Extract Terms (see Why Extract Terms section)
# - Extract Propositions (see Why Extract Propositions section)
# - Extract Arguments (see Why Extract Arguments section)
# - Extract Solutions (see Why Extract Solutions section)
# Write extraction notes to workspace
cat >> "$SESSION_DIR/step-3-extraction-workspace.md" << 'EOF'
### Section [X]: [Section Name]
**Terms extracted:**
- [Term 1]: [Definition]
- [Term 2]: [Definition]
**Propositions extracted:**
- [Proposition 1]
- [Proposition 2]
**Arguments extracted:**
- [Argument 1]
**Solutions/Examples:**
- [Example 1]
EOF
# Mark section as complete
echo "- [x] Section [X]: [Name]" >> "$SESSION_DIR/step-3-extraction-workspace.md"
# Clear this section from context (it's now in the file)
# Move to next section
```
**Repeat for all sections.**
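If a line-addressed `Read` tool isn't available, a plain-shell equivalent for pulling one section looks like this (the demo file and the start/length values are illustrative only; real values come from `step-2-output.md`):

```shell
# Sketch: shell equivalent of Read(offset=start, limit=length).
# A generated demo document stands in for $SOURCE_DOC here.
SOURCE_DOC="/tmp/skill-demo-src.txt"
seq 100 | sed 's/^/line /' > "$SOURCE_DOC"   # 100 lines: "line 1".."line 100"
start=10; length=5
end=$((start + length - 1))
sed -n "${start},${end}p" "$SOURCE_DOC"      # with the demo file, prints lines 10-14
```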
#### Step 4: Synthesize Extraction Notes
After all sections processed:
```bash
# Read the extraction workspace
Read("$SESSION_DIR/step-3-extraction-workspace.md")
# Synthesize into final step-3-output
# Combine all terms, remove duplicates
# Combine all propositions, identify core ones
# Combine all arguments, identify workflow sequences
# Combine all solutions/examples
```
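The duplicate-removal pass can be partly mechanical. As a sketch, assuming terms were appended as `- Name: Definition` bullets under `**Terms extracted:**` headings per the workspace template above (the throwaway workspace here is for illustration only):

```shell
# Sketch: collect term bullets from every section and drop exact duplicates.
SESSION_DIR="/tmp/skill-demo-session"
mkdir -p "$SESSION_DIR"
# Miniature two-section workspace for illustration:
cat > "$SESSION_DIR/step-3-extraction-workspace.md" << 'EOF'
**Terms extracted:**
- Alpha: first term
**Propositions extracted:**
- P1
**Terms extracted:**
- Alpha: first term
- Beta: second term
EOF
# Print bullets only while inside a "Terms extracted" block, then dedupe:
awk '/\*\*Terms extracted:\*\*/ {f=1; next} /^\*\*/ {f=0} f && /^- /' \
  "$SESSION_DIR/step-3-extraction-workspace.md" | sort -u
```

Note this only catches exact duplicates; near-duplicates (same concept, different wording) still need manual merging during synthesis.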
**Write final output:** (see end of this file for output template)
---
## Why Extract Terms
### WHY Key Terms Matter
Terms are the building blocks of understanding:
- They're the **specialized vocabulary** of the methodology
- Define the conceptual framework
- Must be understood to apply the skill
- Become the "Key Concepts" section of your skill
**Adler's rule:** "Come to terms with the author by interpreting key words."
**Mental model:** Terms are like the variables in an equation. You can't solve the equation without knowing what the variables mean.
Without term extraction: users misunderstand the skill, can't apply it correctly, confusion about core concepts.
### WHAT to Extract
**Look for:** Defined terms, repeated concepts, technical vocabulary, emphasized terms. **Skip:** Common words, one-off mentions.
**Format per term:** Name, Definition, Context (why it matters), Usage (how used in practice).
**How many:** 5-15 terms. Test: Would users be confused without this term?
---
## Why Extract Propositions
### WHY Propositions Matter
Propositions are the **key assertions** or principles:
- They're the "truths" the author claims
- Form the theoretical foundation
- Often become guidelines or principles in your skill
- Different from process steps (those come from arguments)
**Adler's rule:** "Grasp the author's leading propositions by dealing with important sentences."
**Mental model:** Propositions are like theorems in mathematics - fundamental truths that everything else builds on.
Without proposition extraction: shallow skill that misses underlying principles, can't explain WHY the method works.
### WHAT to Extract
**Look for:** Declarative/principle/causal statements. Signal phrases: "key insight", "research shows", "fundamental principle".
**Format per proposition:** Short title, Statement (one sentence), Evidence (why true), Implication (for practice).
**How many:** 5-10 core propositions that explain why methodology works.
---
## Why Extract Arguments
### WHY Arguments Matter
Arguments are the **logical sequences** that connect premises to conclusions:
- They explain HOW the methodology works
- Often become the step-by-step workflow
- Show dependencies and order
- Reveal decision points
**Adler's rule:** "Know the author's arguments by finding them in sequences of sentences."
**Mental model:** Arguments are like algorithms - step-by-step logic from inputs to outputs.
Without argument extraction: missing the procedural flow, unclear how to apply the methodology, no decision logic.
### WHAT to Extract
**Look for:** If-then sequences, step sequences, causal chains, decision trees. Signal phrases: "the process is", "follow these steps", "this leads to".
**Format per argument:** Name, Premise, Sequence (numbered steps), Conclusion, Decision points.
**Map to workflow:** Sequential → linear steps; Decision-tree → branching; Parallel → optional/modular.
---
## Why Extract Solutions
### WHY Solutions Matter
Solutions are the **practical applications** and outcomes:
- They show what success looks like
- Often become examples in your skill
- Reveal edge cases and variations
- Demonstrate application in different contexts
**Adler's rule:** "Determine which of the problems the author has solved, and which they have not."
**Mental model:** Solutions are the "proof" that the methodology works - concrete instances of application.
Without solution extraction: theoretical skill without practical grounding, unclear what success looks like, no examples.
### WHAT to Extract
**Look for:** Examples, case studies, templates, before/after, success criteria. Types: worked examples, templates, checklists, success indicators.
**Format per solution:** Name, Problem, Application (how methodology applied), Outcome, Key factors, Transferability.
**For templates:** Name, Purpose, Structure outline, How to use (steps), Example if available.
---
## Write Step 3 Output
After completing extraction from all sections and getting user validation, write to output file:
```bash
cat > "$SESSION_DIR/step-3-output.md" << 'EOF'
# Step 3: Component Extraction Output
## Key Terms (5-15)
### [Term 1]
**Definition:** [Clear definition]
**Context:** [Why it matters]
### [Term 2]
...
## Core Propositions (5-10)
1. **[Proposition title]:** [Statement]
- Evidence: [Support]
- Implication: [For practice]
2. **[Proposition 2]:** ...
## Arguments & Sequences
### Argument 1: [Name]
**Premise:** [Starting condition]
**Sequence:**
1. [Step 1]
2. [Step 2]
**Conclusion:** [Result]
**Decision points:** [Choices]
### Argument 2: ...
## Solutions & Examples
### Example 1: [Name]
**Problem:** [Context]
**Application:** [How applied]
**Outcome:** [Result]
### Example 2: ...
## Gaps Identified
- [Gap 1 to address in synthesis]
- [Gap 2]
## User Validation
**Status:** [Approved / Needs revision]
**User notes:** [Feedback]
EOF
```
**Update global context:**
```bash
cat >> "$SESSION_DIR/global-context.md" << 'EOF'
## Step 3 Complete
**Components extracted:** [X terms, Y propositions, Z arguments, W examples]
**Gaps identified:** [List if any]
EOF
```
**Next step:** Step 4 (Synthesis) will read `global-context.md` + `step-3-output.md`.


@@ -0,0 +1,94 @@
{
"criteria": [
{
"name": "Completeness",
"description": "Are all required components present (YAML, TOC, Read This First, Workflow checklist, Step details, Resources, Rubric)?",
"scores": {
"1": "Missing 3+ major components (e.g., no workflow, no resources, no rubric)",
"2": "Missing 1-2 major components or several minor elements",
"3": "All major components present but some minor elements missing (e.g., one step lacks goal statement)",
"4": "All required components present with very minor gaps (e.g., missing one optional section)",
"5": "All components fully present and comprehensive, nothing missing"
}
},
{
"name": "Clarity",
"description": "Are instructions clear, unambiguous, and easy to understand? Is language precise and terminology well-defined?",
"scores": {
"1": "Instructions are confusing, ambiguous, or contradictory. Unclear what to do.",
"2": "Many instructions are vague or require interpretation. Key terms undefined.",
"3": "Most instructions are clear but some ambiguity remains. Key terms mostly defined.",
"4": "Instructions are clear and precise with minor ambiguity in edge cases only.",
"5": "All instructions crystal clear, unambiguous, and precise. All terms well-defined."
}
},
{
"name": "Actionability",
"description": "Can this skill be followed step-by-step? Are steps concrete and executable?",
"scores": {
"1": "Steps are too abstract or theoretical to execute. No clear action guidance.",
"2": "Some steps are actionable but many remain too high-level or vague.",
"3": "Most steps are actionable with concrete actions, but some need more specificity.",
"4": "Nearly all steps provide clear, executable actions with minor gaps.",
"5": "All steps are concrete, specific, and immediately executable. Clear action verbs throughout."
}
},
{
"name": "Structure & Organization",
"description": "Is the skill logically organized? Is navigation easy? Do resource links work correctly?",
"scores": {
"1": "Disorganized, hard to navigate. No clear structure. Broken or missing links.",
"2": "Some organization but flow is unclear. Several broken links or poor organization.",
"3": "Logical organization with minor navigation issues. Most links work correctly.",
"4": "Well-organized and easy to navigate. All links work. Very minor structural issues.",
"5": "Excellently organized with perfect navigation. Logical flow. All links and anchors work perfectly."
}
},
{
"name": "Triggers & When to Use",
"description": "Is it clear WHEN to use this skill? Are triggers specific and recognizable? Does YAML description focus on WHEN not WHAT?",
"scores": {
"1": "No clear triggers. YAML description describes WHAT the skill does, not WHEN to use it.",
"2": "Vague triggers that are hard to recognize. YAML description partially WHEN-focused.",
"3": "Triggers present and somewhat specific. YAML description adequately WHEN-focused.",
"4": "Clear, specific triggers that are easy to recognize. YAML description well WHEN-focused.",
"5": "Excellent, specific triggers with clear examples. YAML description perfectly WHEN-focused with multiple trigger scenarios."
}
},
{
"name": "Resource Quality",
"description": "Do resources follow WHY/WHAT structure? Do WHY sections activate context appropriately? Do WHAT sections provide actionable guidance?",
"scores": {
"1": "Resources don't follow WHY/WHAT structure. Missing critical content.",
"2": "Partial WHY/WHAT structure. WHY sections over-explain or under-explain. WHAT sections vague.",
"3": "Resources follow WHY/WHAT structure adequately. Some sections could be clearer or more focused.",
"4": "Resources follow WHY/WHAT structure well. WHY activates context appropriately. WHAT provides clear guidance.",
"5": "Resources perfectly structured with WHY/WHAT. WHY sections perfectly prime context. WHAT sections provide excellent, specific guidance."
}
},
{
"name": "User Collaboration",
"description": "Are user choice points clearly marked? Are options presented with trade-offs? Does the skill appropriately involve the user in decisions?",
"scores": {
"1": "No user involvement. Skill makes all decisions or lacks decision points entirely.",
"2": "Minimal user involvement. Few choice points. Options not well explained.",
"3": "Some user collaboration. Key choice points marked. Options explained adequately.",
"4": "Good user collaboration. Most choice points marked. Options presented with clear trade-offs.",
"5": "Excellent collaboration. All choice points clearly marked. Options presented with comprehensive trade-off analysis. User empowered to make informed decisions."
}
},
{
"name": "File Size Compliance",
"description": "Are all files under 500 lines? Is content appropriately distributed across files?",
"scores": {
"1": "Multiple files significantly over 500 lines (>600 lines)",
"2": "One or more files moderately over limit (500-600 lines)",
"3": "All files under 500 lines but some are close to limit (450-499)",
"4": "All files comfortably under 500 lines (most under 450)",
"5": "All files well under 500 lines with good content distribution"
}
}
],
"threshold": 3.5,
"passing_note": "Average score must be ≥ 3.5 for skill to be considered complete and ready for use. Any individual criterion scoring below 3 requires revision before delivery. After scoring, present results to user with specific improvement suggestions for any criterion scoring below 4."
}


@@ -0,0 +1,459 @@
# Inspectional Reading
This resource supports **Step 0** and **Step 1** of the Skill Creator workflow.
**Step 0 - Input files:** None (initialization step)
**Step 0 - Output files:** `$SESSION_DIR/global-context.md` (created)
**Step 1 - Input files:** `$SESSION_DIR/global-context.md`, `$SOURCE_DOC` (skim only)
**Step 1 - Output files:** `$SESSION_DIR/step-1-output.md`, updates `global-context.md`
---
## Session Initialization
### WHY File-Based Workflow Matters
Working with documents for skill extraction can flood context with:
- Entire document content (potentially thousands of lines)
- Extracted components accumulating across steps
- Analysis and synthesis notes
**Solution:** Write outputs to files after each step, read only what's needed for current step.
**Mental model:** Each step is a pipeline stage that reads inputs, processes, and writes outputs. Next stage picks up from there.
### WHAT to Set Up
#### Create Session Directory
```bash
# Create timestamped session directory
SESSION_DIR="/tmp/skill-extraction-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$SESSION_DIR"
echo "Session workspace: $SESSION_DIR"
```
#### Initialize Global Context
```bash
# Create global context file (unquoted EOF so $(date) expands;
# bracketed placeholders stay literal)
cat > "$SESSION_DIR/global-context.md" << EOF
# Skill Extraction Global Context
**Source document:** [path will be added in step 1]
**Session started:** $(date)
**Target skill name:** [to be determined]
## Key Information Across Steps
[This file is updated by each step with critical information needed by subsequent steps]
EOF
```
#### Set Document Path
Store the source document path for reference:
```bash
# Set this to your actual document path
SOURCE_DOC="[user-provided-path]"
echo "**Source document:** $SOURCE_DOC" >> "$SESSION_DIR/global-context.md"
```
**You're now ready for Step 1.**
---
## Why Systematic Skimming
### WHY This Matters
Systematic skimming activates the right reading approach before deep engagement. Without it:
- You waste time reading documents that don't contain extractable skills
- You miss the overall structure, making later extraction harder
- You can't estimate the effort required or plan the approach
- You don't know what type of content you're dealing with
**Mental model:** Think of this as reconnaissance before a mission. You need to know the terrain before committing resources.
**Key insight from Adler:** Inspectional reading answers "Is this book worth reading carefully?" and "What kind of book is this?" - both critical before investing analytical reading effort.
### WHAT to Do
Perform the following skimming activities in order:
#### 1. Check Document Metadata & Read Title/Introduction
Get file info, note size and type. Read title, introduction/abstract completely to extract stated purpose and intended audience.
#### 2. Examine TOC/Structure
Read TOC if exists. If not, scan headers to create quick outline. Note major sections, sequence, and depth.
#### 3. Scan Key Elements & End Material
Read first paragraph of major sections, summaries, conclusion/final pages. Note diagrams/tables/callouts. **Time: 10-30 minutes total depending on document size.**
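A few plain-shell commands cover most of the metadata check. A sketch (the generated demo file is hypothetical; point these at your actual `$SOURCE_DOC`):

```shell
# Sketch: quick metadata pass over a markdown document.
DOC="/tmp/skill-demo-doc.md"
printf '# Title\n\nIntro paragraph.\n\n## Part 1\n\nBody.\n' > "$DOC"
wc -l "$DOC"          # size in lines
head -n 5 "$DOC"      # title and opening lines
grep -n '^#' "$DOC"   # quick header outline (markdown headings)
```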
---
## Why Document Type Matters
### WHY Classification Is Essential
Document type determines:
- **Reading strategy:** Code requires different analysis than prose
- **Extraction targets:** Methodologies yield processes; frameworks yield decision structures
- **Skill structure:** Some documents map to linear workflows; others to contextual frameworks
- **Expected completeness:** Research papers have gaps; guidebooks are comprehensive
**Mental model:** You wouldn't use a roadmap the same way you use a cookbook. Different document types serve different purposes and need different extraction approaches.
### WHAT Document Types Exist
After skimming, classify the document into one of these types:
#### Type 1: Methodology / Process Guide
**Characteristics:**
- Sequential steps or phases
- Clear "first do X, then Y" structure
- Process diagrams or flowcharts
- Decision points along a path
**Examples:**
- "How to conduct user research"
- "The scientific method"
- "Agile development process"
**Extraction focus:** Steps, sequence, inputs/outputs, decision criteria
**Skill structure:** Linear workflow with numbered steps
---
#### Type 2: Framework / Mental Model
**Characteristics:**
- Dimensions, axes, or categories
- Principles or heuristics
- Matrices or quadrants
- Conceptual models
**Examples:**
- "Eisenhower decision matrix"
- "Design thinking principles"
- "SWOT analysis framework"
**Extraction focus:** Dimensions, categories, when to apply each, interpretation guide
**Skill structure:** Framework application with decision logic
---
#### Type 3: Tool / Template
**Characteristics:**
- Fill-in-the-blank sections
- Templates or formats
- Checklists
- Structured forms
**Examples:**
- "Business model canvas"
- "User story template"
- "Code review checklist"
**Extraction focus:** Template structure, what goes in each section, usage guidelines
**Skill structure:** Template with completion instructions
---
#### Type 4: Theoretical / Conceptual
**Characteristics:**
- Explains "why" more than "how"
- Research findings
- Principles without procedures
- Conceptual relationships
**Examples:**
- "Cognitive load theory"
- "Growth mindset research"
- "System dynamics principles"
**Extraction focus:** Core concepts, implications, how to apply theory in practice
**Skill structure:** Concept → Application mapping (requires synthesis step)
**Note:** This type needs extra work in Step 4 (Synthesis) to become actionable
---
#### Type 5: Reference / Catalog
**Characteristics:**
- Lists of items, patterns, or examples
- Encyclopedia-like structure
- Lookup-oriented
- No overarching process
**Examples:**
- "Design patterns catalog"
- "Cognitive biases list"
- "API reference"
**Skill-worthiness:** **Usually NOT skill-worthy** - these are references, not methodologies
**Exception:** If the document includes *when/how to choose* among options, extract that decision framework
---
#### Type 6: Hybrid
**Characteristics:**
- Combines multiple types above
- Has both framework and process
- Includes theory and application
**Approach:** Identify which parts map to which types, extract each accordingly
**Example:** "Design thinking" combines a framework (mindsets) with a process (steps) and tools (templates)
---
### WHAT to Decide
Based on document type classification, answer:
1. **Primary type:** Which category best fits this document?
2. **Secondary aspects:** Does it have elements of other types?
3. **Extraction strategy:** What should we focus on extracting?
4. **Skill structure:** What will the resulting skill look like?
**Present to user:** "I've classified this as a [TYPE] document. This means we'll focus on extracting [EXTRACTION TARGETS] and structure the skill as [SKILL STRUCTURE]. Does this match your understanding?"
---
## Why Skill-Worthiness Check
### WHY Not Everything Is Skill-Worthy
Creating a skill has overhead:
- Time to extract, structure, and validate
- Maintenance burden (keeping it updated)
- Cognitive load (another skill to remember exists)
**Only create skills for material that is:**
- Reusable across multiple contexts
- Teachable (can be articulated as steps or principles)
- Non-obvious (provides value beyond common sense)
- Complete enough to be actionable
**Anti-pattern:** Creating skills for one-time information or simple facts that don't need systematic application.
### WHAT Makes Content Skill-Worthy
Evaluate against these criteria:
#### Criterion 1: Teachability
**Question:** Can this be taught as a process, framework, or set of principles?
**Strong signals:**
- Clear steps or stages
- Decision rules or criteria
- Repeatable patterns
- Structured approach
**Weak signals:**
- Purely informational (facts without process)
- Contextual knowledge (only applies in one situation)
- Opinion without methodology
- Single example without generalization
**Decision:** If you can't articulate "Here's how to do this" or "Here's how to think about this," it's not teachable.
---
#### Criterion 2: Generalizability
**Question:** Can this be applied across multiple situations or domains?
**Strong signals:**
- Document shows examples from different domains
- Principles are abstract enough to transfer
- Method doesn't depend on specific tools/context
- Core process remains stable across use cases
**Weak signals:**
- Highly specific to one tool or platform
- Only works in one narrow context
- Requires specific resources you won't have
- Examples are all from the same narrow domain
**Decision:** If it only works in one exact scenario, it's probably not worth a skill.
---
#### Criterion 3: Recurring Problem
**Question:** Is this solving a problem that comes up repeatedly?
**Strong signals:**
- Document addresses common pain points
- You can imagine needing this multiple times
- Problem exists across projects/contexts
- It's not a one-time decision
**Weak signals:**
- One-off decision or task
- Historical information
- Situational advice for rare scenarios
**Decision:** If you'll only use it once, save it as a note instead of a skill.
---
#### Criterion 4: Actionability
**Question:** Does this provide enough detail to actually do something?
**Strong signals:**
- Concrete steps or methods
- Clear decision criteria
- Examples showing application
- Guidance on handling edge cases
**Weak signals:**
- High-level philosophy only
- Vague principles without application
- Aspirational goals without methods
- "You should do X" without explaining how
**Decision:** If the document is all theory with no application guidance, flag this - you'll need to create the application in Step 4.
---
#### Criterion 5: Completeness
**Question:** Is there enough material to create a useful skill?
**Strong signals:**
- Multiple sections or components
- Depth beyond surface level
- Covers multiple aspects (when, how, why)
- Includes examples or case studies
**Weak signals:**
- Single tip or trick
- One-paragraph advice
- Incomplete methodology
- Missing critical steps
**Decision:** If the document is too sparse, it might be better as a reference note than a full skill.
---
### WHAT to Do: Skill-Worthiness Decision
Score the document on each criterion (1-5 scale):
- **5:** Strongly meets criterion
- **4:** Meets criterion well
- **3:** Partially meets criterion
- **2:** Weakly meets criterion
- **1:** Doesn't meet criterion
**Threshold:** If average score ≥ 3.5, proceed with skill extraction
**If score < 3.5, present options to user:**
**Option A: Proceed with Modifications**
- "This document is borderline skill-worthy. We can proceed, but we'll need to supplement it with additional application guidance in Step 4. Should we continue?"
**Option B: Save as Reference**
- "This might be better saved as a reference document rather than a full skill. Would you prefer to extract key insights into a note instead?"
**Option C: Defer Until More Material Available**
- "This document alone isn't sufficient for a skill. Do you have additional related documents we could synthesize together?"
**Present to user:**
```
Skill-Worthiness Assessment:
- Teachability: [score]/5 - [brief rationale]
- Generalizability: [score]/5 - [brief rationale]
- Recurring Problem: [score]/5 - [brief rationale]
- Actionability: [score]/5 - [brief rationale]
- Completeness: [score]/5 - [brief rationale]
Average: [X.X]/5
Recommendation: [Proceed / Modify / Alternative approach]
What would you like to do?
```
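The average-and-threshold arithmetic above is mechanical enough to script; a minimal sketch in shell using awk (the score values are placeholders to be replaced with the actual assessment):

```shell
# Compute the average of the five criterion scores and compare to the 3.5 threshold.
# The scores below are placeholders -- substitute the actual assessment values.
scores="4 3 5 3 4"

avg=$(echo "$scores" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.1f", s / NF }')

if awk -v a="$avg" 'BEGIN { exit !(a >= 3.5) }'; then
  echo "Average $avg/5 - proceed with skill extraction"
else
  echo "Average $avg/5 - below threshold, present options to user"
fi
```

The awk comparison avoids shell integer-only arithmetic, since the average is usually fractional.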
---
## Write Step 1 Output
After completing inspectional reading and getting user approval, write results to file:
```bash
# Write Step 1 output
cat > "$SESSION_DIR/step-1-output.md" << 'EOF'
# Step 1: Inspectional Reading Output
## Document Classification
**Type:** [methodology/framework/tool/theory/reference/hybrid]
**Structure:** [clear sections / continuous flow / mixed]
**Page/line count:** [X]
## Document Overview
**Main topic:** [1-2 sentence summary]
**Key sections identified:**
1. [Section 1]
2. [Section 2]
3. [Section 3]
...
## Skill-Worthiness Assessment
**Scores:**
- Teachability: [X]/5 - [rationale]
- Generalizability: [X]/5 - [rationale]
- Recurring Problem: [X]/5 - [rationale]
- Actionability: [X]/5 - [rationale]
- Completeness: [X]/5 - [rationale]
**Average:** [X.X]/5
**Decision:** [Proceed / Modify / Alternative]
## User Approval
**Status:** [Approved / Rejected / Modified]
**User notes:** [Any specific guidance from user]
EOF
```
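The heredoc above assumes `$SESSION_DIR` already exists. One way to guarantee that at the start of a session is sketched below; the path scheme is an assumed convention, not something the skill mandates:

```shell
# Create the session working directory if it does not exist yet.
# The default path here is an assumed convention -- adjust to taste.
SESSION_DIR="${SESSION_DIR:-$HOME/.skill-creator/session-$(date +%Y%m%d-%H%M%S)}"
mkdir -p "$SESSION_DIR"

# Seed the global context file so later steps can append to it safely.
[ -f "$SESSION_DIR/global-context.md" ] || echo "# Global Context" > "$SESSION_DIR/global-context.md"
```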
**Update global context:**
```bash
# Add key info to global context
cat >> "$SESSION_DIR/global-context.md" << 'EOF'
## Step 1 Complete
**Document type:** [type]
**Skill-worthiness:** [average score]/5
**Approved to proceed:** [Yes/No]
EOF
```
**Next step:** Proceed to Step 2 (Structural Analysis) which will read `global-context.md` + `step-1-output.md`.
# Skill Construction
This resource supports **Step 5** of the Skill Creator workflow.
**Input files:** `$SESSION_DIR/global-context.md`, `$SESSION_DIR/step-4-output.md`
**Output files:** `$SESSION_DIR/step-5-output.md`, actual skill files created in target directory, updates `global-context.md`
**Stage goal:** Build the actual skill files following standard structure.
---
## Why Complexity Level
### WHY Complexity Assessment Matters
Complexity determines skill structure:
- **Simple skills:** SKILL.md only (no resources needed)
- **Moderate skills:** SKILL.md + 1-3 focused resource files
- **Complex skills:** SKILL.md + 4-8 resource files
**Mental model:** Don't build a mansion when a cottage will do. Match structure to content needs.
Over-engineering (too many files for simple content): maintenance burden, navigation difficulty
Under-engineering (too little structure for complex content): bloated files, poor organization
### WHAT Complexity Levels Exist
**Levels:**
**Level 1 - Simple:** 3-5 steps, < 300 lines total → SKILL.md + rubric only
**Level 2 - Moderate:** 5-8 steps, 300-800 lines → SKILL.md + 1-3 resource files + rubric
**Level 3 - Complex:** 8+ steps, 800+ lines → SKILL.md + 4-8 resource files + rubric
---
### WHAT to Decide
**Assess your extracted content:**
**Count:**
- Total workflow steps: [X]
- Major decision points: [X]
- Key concepts to explain: [X]
- Examples needed: [X]
- Total estimated lines: [X]
**Complexity level:** [1 / 2 / 3]
**Rationale:** [Why this level fits]
**Proposed structure:** [List of files]
**Present to user:** "Based on content volume, I recommend [LEVEL] complexity. Structure: [FILES]. Does this make sense?"
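The estimated line total can be measured rather than guessed; a rough sketch, assuming the `$SESSION_DIR/step-*-output.md` files from earlier steps exist:

```shell
# Sum the lines of all step outputs produced so far as a rough size estimate,
# then map the total onto the three complexity levels defined above.
total=$(cat "$SESSION_DIR"/step-*-output.md | wc -l)

if [ "$total" -lt 300 ]; then
  echo "Level 1 (Simple): $total lines -> SKILL.md + rubric only"
elif [ "$total" -le 800 ]; then
  echo "Level 2 (Moderate): $total lines -> SKILL.md + 1-3 resource files + rubric"
else
  echo "Level 3 (Complex): $total lines -> SKILL.md + 4-8 resource files + rubric"
fi
```

Step outputs are a proxy for the finished skill's size, so treat the result as a starting recommendation, not a verdict.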
---
## Why Plan Resources
### WHY Resource Planning Matters
Resource files should be:
- **Focused:** Each file covers one cohesive topic
- **Referenced:** Linked from SKILL.md workflow steps
- **Sized appropriately:** Under 500 lines each
- **WHY/WHAT structured:** Follows standard format
**Mental model:** Resource files are like appendices - referenced when needed, not read linearly.
Poor planning: overlapping content, unclear purpose, navigation difficulty, files too large or too granular.
### WHAT to Plan
#### Grouping Principles
**Group related content into resource files based on:**
**By workflow step** (for complex skills):
- One resource per major step
- Contains WHY/WHAT for all sub-steps
- Example: `inspectional-reading.md` for Step 1
**By topic** (for moderate skills):
- Group related concepts
- Example: `key-concepts.md`, `decision-framework.md`, `examples.md`
**By type** (alternative):
- All principles in one file
- All examples in another
- All templates in another
---
#### Resource File Template
**Each resource file should include:**
```markdown
# [Resource Name]
This resource supports **Step X** of the [Skill Name] workflow.
---
## Why [First Topic]
### WHY This Matters
[Brief explanation activating context, not over-explaining]
### WHAT to Do
[Specific instructions, options with tradeoffs, user choice points]
---
## Why [Second Topic]
### WHY This Matters
...
### WHAT to Do
...
```
---
#### File Naming
**Use descriptive, kebab-case names:**
- ✅ `inspectional-reading.md`
- ✅ `component-extraction.md`
- ✅ `evaluation-rubric.json`
- ❌ `resource1.md`
- ❌ `temp_file.md`

---
### WHAT to Document
```markdown
## Resource Plan
**Complexity level:** [Level 1/2/3]
**Resource files:**
1. **[filename.md]**
- Purpose: [What this covers]
- Linked from: [Which SKILL.md steps]
- Estimated lines: [X]
2. **[filename.md]**
- Purpose: [What this covers]
- Linked from: [Which SKILL.md steps]
- Estimated lines: [X]
3. **evaluation-rubric.json**
- Purpose: Quality scoring
- Linked from: Final validation step
**Total files:** [X]
```
**Present to user:** "Here's the resource structure plan. Any changes needed?"
---
## Why SKILL.md Structure
### WHY Standard Structure Matters
SKILL.md must follow conventions:
- **YAML frontmatter:** For skill system identification and invocation
- **Table of Contents:** For navigation
- **Read This First:** For context and overview
- **Workflow with checklist:** For execution
- **Step details:** With resource links where needed
**Mental model:** SKILL.md is the "main" file - everything starts here.
Without standard structure: skill won't be invoked correctly, users confused about how to start, poor usability.
### WHAT to Include
#### 1. YAML Frontmatter
```yaml
---
name: skill-name
description: Use when [TRIGGER CONDITIONS - focus on WHEN not WHAT]. Invoke when [USER MENTIONS]. Also applies when [SCENARIOS].
---
```
**Critical:** Description focuses on WHEN to use (triggers), not WHAT it does.
**Bad:** `description: Extracts skills from documents`
**Good:** `description: Use when user has a document containing theory or methodology and wants to convert it into a reusable skill`
---
#### 2. Title and Table of Contents
Standard TOC linking to Read This First, Workflow, and each step.
#### 3. Read This First
Includes: What skill does (1-2 sentences), Process overview, Why it works, Collaborative nature.
#### 4. Workflow Section
Must have: **"COPY THIS CHECKLIST"** instruction, followed by checklist with steps and sub-tasks.
#### 5. Step Details
Format: Step heading, Goal statement, Sub-tasks with resource links or inline instructions.
---
### WHAT to Validate
**Check:**
- ✅ YAML frontmatter present and description focuses on WHEN
- ✅ Table of contents complete
- ✅ Read This First section provides context
- ✅ Workflow has explicit copy instruction
- ✅ All steps have goals and sub-tasks
- ✅ Resource links are correct and use anchors
- ✅ File is under 500 lines
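Several of these checks can be automated. A minimal sketch follows; the grep patterns encode the conventions described above and may need adjusting for a particular skill, and the `SKILL.md` path is assumed to be the file under review:

```shell
# Lightweight structural checks on SKILL.md. Patterns reflect the conventions
# described above; substitute the actual path to the file under review.
skill="SKILL.md"

# YAML frontmatter opens the file
head -n 1 "$skill" | grep -qx -- '---' && echo "frontmatter: ok" || echo "frontmatter: MISSING"

# Description is WHEN-focused
grep -q '^description: Use when' "$skill" && echo "description: ok" || echo "description: not WHEN-focused"

# Explicit checklist copy instruction
grep -qi 'COPY THIS CHECKLIST' "$skill" && echo "checklist instruction: ok" || echo "checklist instruction: MISSING"

# Under 500 lines
lines=$(wc -l < "$skill")
[ "$lines" -lt 500 ] && echo "length: $lines lines, ok" || echo "length: $lines lines, TOO LONG"
```

These catch mechanical failures only; the "Read This First provides context" and "resource links use anchors" checks still need a human or LLM pass.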
---
## Why Resource Structure
### WHY WHY/WHAT Format Matters
Resource files follow standard format:
- **WHY sections:** Activate relevant context, explain importance
- **WHAT sections:** Provide specific instructions, present options
**Mental model:** WHY primes the LLM's activation space; WHAT provides execution guidance.
Without WHY: LLM may not activate relevant knowledge, shallow understanding
Without WHAT: Clear intent but unclear execution, user stuck on "now what?"
### WHAT to Include in Resources
#### WHY Section Format
Explains what this accomplishes, why it's important, how it fits in the process. Optional mental model. Keep focused - don't over-explain.
#### WHAT Section Format
Specific instructions in clear steps. If options exist, present each with: when to use, pros, cons, how. Mark user choice points.
---
### WHAT to Validate
**For each resource file, check:**
- ✅ Each major section has WHY and WHAT subsections
- ✅ WHY explains importance without over-explaining
- ✅ WHAT provides concrete, actionable guidance
- ✅ Options presented with trade-offs when applicable
- ✅ User choice points clearly marked
- ✅ File is under 500 lines
---
## Why Evaluation Rubric
### WHY Rubric Matters
The rubric enables:
- **Self-assessment:** LLM can objectively score its work
- **Quality standards:** Clear criteria for success
- **Improvement identification:** Know what needs fixing
- **User transparency:** User sees quality assessment
**Mental model:** Rubric is like a grading rubric for an assignment - objective criteria for evaluation.
Without rubric: no quality control, subjective assessment, missed improvements.
### WHAT to Include
#### JSON Structure
```json
{
"criteria": [
{
"name": "[Criterion Name]",
"description": "[What this measures]",
"scores": {
"1": "[Description of score 1 performance]",
"2": "[Description of score 2 performance]",
"3": "[Description of score 3 performance]",
"4": "[Description of score 4 performance]",
"5": "[Description of score 5 performance]"
}
}
],
"threshold": 3.5,
"passing_note": "Average score must be ≥ 3.5 for skill to be considered complete. Scores below 3 in any category require revision."
}
```
---
#### Standard Criteria
**Recommended criteria for skills:**
1. **Completeness:** Are all required components present?
2. **Clarity:** Are instructions clear and unambiguous?
3. **Actionability:** Can this be followed step-by-step?
4. **Structure:** Is organization logical and navigable?
5. **Examples:** Are sufficient examples provided?
6. **Triggers:** Is "when to use" clearly defined?
**Customize based on skill type** - add domain-specific criteria as needed.
---
#### Scoring Guidelines
**Score scale:**
- **5:** Excellent - exceeds expectations
- **4:** Good - meets all requirements well
- **3:** Adequate - meets minimum requirements
- **2:** Needs improvement - significant gaps
- **1:** Poor - major issues
**Threshold:** Average ≥ 3.5 recommended
---
### WHAT to Validate
**Check rubric:**
- ✅ 4-7 criteria (not too few or too many)
- ✅ Each criterion has clear descriptions for scores 1-5
- ✅ Criteria are measurable (not subjective)
- ✅ Threshold is specified
- ✅ JSON is valid
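JSON validity and the structural checks above are easy to verify mechanically; a sketch using Python's stdlib (`jq` works equally well if available, and the rubric path is an assumed location):

```shell
# Validate the rubric file: well-formed JSON, 4-7 criteria with score
# descriptions 1-5, and a numeric threshold. The path is an assumption.
rubric="resources/evaluation-rubric.json"

python3 - "$rubric" << 'PY'
import json, sys

with open(sys.argv[1]) as f:
    rubric = json.load(f)          # raises if the JSON is malformed

criteria = rubric.get("criteria", [])
assert 4 <= len(criteria) <= 7, f"expected 4-7 criteria, found {len(criteria)}"
for c in criteria:
    missing = {"1", "2", "3", "4", "5"} - set(c.get("scores", {}))
    assert not missing, f"{c.get('name')}: missing score descriptions {sorted(missing)}"
assert isinstance(rubric.get("threshold"), (int, float)), "threshold must be numeric"
print("rubric: ok")
PY
```

Measurability of the criteria themselves still requires judgment; this only guards against structural mistakes.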
---
## Write Step 5 Output
After completing skill construction and verifying files, write to output file:
```bash
cat > "$SESSION_DIR/step-5-output.md" << 'EOF'
# Step 5: Skill Construction Output
## Complexity Level
**Level:** [1/2/3] - [Simple/Moderate/Complex]
**Rationale:** [Why this level]
## Resource Plan
**Files created:**
1. SKILL.md ([X] lines) - Main skill file
2. resources/[filename1].md ([X] lines) - [Purpose]
3. resources/[filename2].md ([X] lines) - [Purpose]
...
N. resources/evaluation-rubric.json - Quality scoring
**Total files:** [X]
## Skill Location
**Path:** [Full path to created skill directory]
## File Verification
**Line counts:**
- ✅ SKILL.md: [X]/500 lines
- ✅ [file1].md: [X]/500 lines
- ✅ [file2].md: [X]/500 lines
...
**Structure checks:**
- ✅ YAML frontmatter present and WHEN-focused
- ✅ Table of contents complete
- ✅ Workflow has copy instruction
- ✅ All resource links valid
- ✅ WHY/WHAT structure followed
- ✅ Rubric JSON valid
## User Validation
**Status:** [Approved / Needs revision]
**User notes:** [Feedback]
EOF
```
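The line counts in the template above can be filled in mechanically; a sketch, assuming `$SKILL_DIR` points at the created skill directory:

```shell
# Report line counts for SKILL.md and every markdown resource file,
# flagging any over the 500-line limit. $SKILL_DIR is an assumed variable.
find "$SKILL_DIR" -name '*.md' | while read -r f; do
  lines=$(wc -l < "$f")
  if [ "$lines" -lt 500 ]; then
    echo "OK   $f: $lines/500 lines"
  else
    echo "OVER $f: $lines/500 lines - split or trim this file"
  fi
done
```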
**Update global context:**
```bash
cat >> "$SESSION_DIR/global-context.md" << 'EOF'
## Step 5 Complete
**Skill created at:** [path]
**Files:** [count]
**All files under 500 lines:** Yes
**Ready for validation:** Yes
EOF
```
**Next step:** Step 6 (Validation) will read `global-context.md` + `step-5-output.md` + actual skill files.
# Structural Analysis
This resource supports **Step 2** of the Skill Creator workflow.
**Input files:** `$SESSION_DIR/global-context.md`, `$SESSION_DIR/step-1-output.md`, `$SOURCE_DOC` (targeted reading)
**Output files:** `$SESSION_DIR/step-2-output.md`, updates `global-context.md`
**Stage goal:** Understand what the document is about as a whole and how its parts relate.
---
## Why Classify Content
### WHY Content Classification Matters
Classification activates the right extraction patterns:
- **Methodologies** need sequential process extraction
- **Frameworks** need dimensional/categorical extraction
- **Tools** need template structure extraction
- **Theories** need concept-to-application mapping
**Mental model:** You read fiction differently than non-fiction. Similarly, you extract differently from processes vs. frameworks.
Without classification, you might force a framework into linear steps (loses nuance) or extract a process as disconnected concepts (loses flow).
### WHAT to Classify
#### Classification Question 1: Practical vs. Theoretical
**Practical content** teaches **how to do something** (action-focused, procedures, methods)
**Theoretical content** teaches **that something is the case** (understanding-focused, principles, explanations)
**Decide:** Is this primarily teaching **how** (practical) or **why/what** (theoretical)?
**If theoretical:** Flag this - you'll need extra synthesis in Step 4 to make it actionable.
---
#### Classification Question 2: Content Structure Type
**Sequential (Methodology/Process):**
- Look for: Numbered steps, phases, "before/after" language
- Extraction focus: Order, dependencies, decision points
**Categorical (Framework/Model):**
- Look for: Dimensions, types, categories, "aspects of" language
- Extraction focus: Categories, definitions, relationships
**Structured (Tool/Template):**
- Look for: Blanks to fill, sections to complete
- Extraction focus: Template structure, what goes where
**Hybrid:**
- Combines multiple types (e.g., Design Thinking has framework + process + tools)
- Extraction focus: Identify boundaries, extract each appropriately
---
#### Classification Question 3: Completeness Level
Rate completeness 1-5:
- **5 = Complete:** Covers when/how/what, includes examples
- **3-4 = Partial:** Missing some aspects, needs gap-filling
- **1-2 = Incomplete:** Sketchy outline, missing critical pieces
**If < 3:** Ask the user whether you should proceed and fill the gaps yourself, or look for additional sources first.
---
### WHAT to Document
```markdown
## Content Classification
**Type:** [Practical / Theoretical / Hybrid]
**Structure:** [Sequential / Categorical / Structured / Hybrid]
**Completeness:** [X/5] - [Brief rationale]
**Implications:**
- [What this means for extraction approach]
- [What skill structure will likely be]
```
**Present to user for validation before proceeding.**
---
## Why State Unity
### WHY Unity Statement Is Critical
The unity statement is your North Star for extraction:
- Prevents scope creep (keeps focus on main theme)
- Guides component selection (only extract what relates to unity)
- Defines skill purpose (becomes core of skill description)
- Enables coherence (everything connects back to this)
**Adler's rule:** "State the unity of the whole book in a single sentence, or at most a few sentences."
Without clear unity: bloated skills, missed central points, unclear purpose.
### WHAT to Extract
Create a one-sentence (or short paragraph) unity statement:
#### Unity Formula
**For practical content:**
"This [document type] teaches how to [VERB] [OBJECT] by [METHOD] in order to [PURPOSE]."
**Example:** "This guide teaches how to conduct user interviews by asking open-ended questions following the TEDW framework in order to discover unmet needs and validate assumptions."
**For theoretical content:**
"This [document type] explains [PHENOMENON] through [FRAMEWORK] to enable [APPLICATION]."
**Example:** "This paper explains cognitive load through information processing theory to enable instructional designers to create more effective learning materials."
---
#### How to Find the Unity
**Look for:**
1. Explicit statements in abstract, introduction, or conclusion
2. "This paper/guide..." statements
3. If not explicit, infer: What question does this answer? What problem does it solve?
**Test your statement:**
- Does it cover the whole document?
- Is it specific enough to be meaningful?
- Would the author agree?
---
### WHAT to Validate
**Present to user:**
```markdown
## Unity Statement
"[Your one-sentence unity statement]"
**Rationale:** [Why this captures the main point]
Does this align with your understanding?
```
---
## Why Enumerate Parts
### WHY Structure Mapping Is Essential
Understanding how parts relate to the whole:
- Reveals organization logic (chronological, categorical, priority-based)
- Shows dependencies (which parts build on others)
- Identifies extraction units (natural boundaries for deep reading)
- Exposes gaps (missing pieces)
- Guides skill structure (major parts often become skill sections)
**Adler's rule:** "Set forth the major parts of the book, and show how these are organized into a whole."
Without structure mapping: linear reading without understanding relationships, redundant extraction, poor skill organization.
### WHAT to Extract
#### Step 1: Identify Major Parts
Look for main section headings, numbered phases, distinct topics, natural breaks.
```markdown
## Major Parts
1. [Part 1 name] - [What it covers]
2. [Part 2 name] - [What it covers]
3. [Part 3 name] - [What it covers]
```
---
#### Step 2: Understand Relationships
**Common patterns:**
**Linear:** Part 1 → Part 2 → Part 3 (sequential, each builds on previous)
**Hub-spoke:** Core concept with multiple aspects exploring different dimensions
**Layered:** Foundation → Building blocks → Advanced applications
**Modular:** Independent parts, use what you need
---
#### Step 3: Map Parts to Unity
For each part:
- How does this contribute to the overall unity?
- Is this essential or supplementary?
```markdown
## Parts → Unity Mapping
**Part 1: [Name]**
- Contribution: [How it supports main theme]
- Essentiality: [Essential / Supporting / Optional]
```
---
#### Step 4: Identify Sub-Structure
For complex documents, go one level deeper (major parts + subsections).
**Example:**
```markdown
1. Introduction
1.1. Problem statement
1.2. Proposed solution
2. Core Framework
2.1. Dimension 1
2.2. Dimension 2
2.3. How dimensions interact
3. Application Process
3.1. Step 1
3.2. Step 2
```
**Note:** Don't go too deep - 2 levels is usually sufficient.
---
### WHAT to Validate
**Present to user:**
```markdown
## Document Structure
[Your hierarchical outline]
**Organizational pattern:** [Linear/Hub-spoke/Layered/Modular]
**Key relationships:** [Major dependencies]
Does this match your understanding?
```
---
## Why Define Problems
### WHY Problem Identification Matters
Understanding the problems being solved:
- Clarifies purpose (why does this methodology exist)
- Identifies use cases (when to apply this skill)
- Reveals gaps (what problems are NOT addressed)
- Frames value (what benefit this provides)
- Guides "When to Use" section (problems = triggers)
**Adler's rule:** "Define the problem or problems the author is trying to solve."
Without problem identification: skills without clear triggers, unclear value proposition, no boundary conditions.
### WHAT to Extract
#### Level 1: Main Problem
**Question:** What is the overarching problem this document addresses?
```markdown
## Main Problem
**Problem:** [One-sentence statement]
**Why it matters:** [Significance]
**Current gaps:** [What's missing in current solutions]
```
---
#### Level 2: Sub-Problems
Map problems to structure:
```markdown
## Sub-Problems by Part
**Part 1: [Name]**
- Problem: [What this part solves]
- Solution approach: [How it solves it]
```
---
#### Level 3: Out-of-Scope Problems
What does this document NOT solve?
```markdown
## Out of Scope
This does NOT solve:
- [Problem 1 not addressed]
- [Problem 2 not addressed]
**Implication:** Users will need [OTHER SKILL] for these.
```
This defines boundaries (when NOT to use).
---
#### Level 4: Problem-Solution Mapping
```markdown
| Problem | Solution Provided | Where Addressed |
|---------|------------------|----------------|
| [Problem 1] | [Solution] | [Part/Section] |
| [Problem 2] | [Solution] | [Part/Section] |
```
This becomes the foundation for "When to Use" section.
---
### WHAT to Validate
**Present to user:**
```markdown
## Problems Being Solved
**Main problem:** [Statement]
**Sub-problems:**
1. [Problem 1] → Solved by [Part/Method]
2. [Problem 2] → Solved by [Part/Method]
**Not addressed:**
- [Out of scope items]
**Implications for "When to Use":** [Draft triggers based on problems]
Is this problem framing accurate?
```
---
## Write Step 2 Output
After completing structural analysis and getting user approval, write to output file:
```bash
cat > "$SESSION_DIR/step-2-output.md" << 'EOF'
# Step 2: Structural Analysis Output
## Content Classification
**Type:** [Practical / Theoretical / Hybrid]
**Structure:** [Sequential / Categorical / Structured / Hybrid]
**Completeness:** [X/5] - [Rationale]
## Unity Statement
"[One-sentence unity statement]"
**Rationale:** [Why this captures the main point]
## Document Structure
[Hierarchical outline of major parts]
1. [Part 1] - [Description]
1.1. [Subsection]
1.2. [Subsection]
2. [Part 2] - [Description]
...
**Organizational pattern:** [Linear/Hub-spoke/Layered/Modular]
## Problems Being Solved
**Main problem:** [One-sentence statement]
**Sub-problems:**
1. [Problem 1] → Solved by [Part/Section]
2. [Problem 2] → Solved by [Part/Section]
**Out of scope:** [What this doesn't address]
## User Validation
**Status:** [Approved / Needs revision]
**User notes:** [Feedback]
EOF
```
**Update global context:**
```bash
cat >> "$SESSION_DIR/global-context.md" << 'EOF'
## Step 2 Complete
**Content type:** [type]
**Unity:** [short version]
**Major parts:** [count]
**Ready for extraction:** Yes
EOF
```
**Next step:** Step 3 (Component Extraction) will read `global-context.md` + `step-2-output.md`.
# Synthesis and Application
This resource supports **Step 4** of the Skill Creator workflow.
**Input files:** `$SESSION_DIR/global-context.md`, `$SESSION_DIR/step-3-output.md`
**Output files:** `$SESSION_DIR/step-4-output.md`, updates `global-context.md`
**Stage goal:** Transform extracted components into actionable, practical guidance.
---
## Why Evaluate Completeness
### WHY Critical Evaluation Matters
Before transforming to application, you must evaluate what you've extracted:
- **Is it logically sound?** Do the arguments make sense?
- **Is it complete?** Are there gaps or missing pieces?
- **Is it consistent?** Do parts contradict each other?
- **Is it practical?** Can this actually be applied?
**Mental model:** You're fact-checking and quality-assuring before building. Bad foundation = bad skill.
**Adler's Critical Stage:** "Is it true? What of it?" - evaluating truth and significance.
Without evaluation: you might create skills based on incomplete or flawed methodologies, perpetuate errors, build unusable workflows.
### WHAT to Evaluate
#### Completeness Check
**Ask for each major component:**
**Terms:**
- Are all key concepts defined?
- Are definitions clear and unambiguous?
- Are there terms used but not defined?
**Propositions:**
- Are claims supported with evidence or reasoning?
- Are there contradictions between propositions?
- Are assumptions stated or hidden?
**Arguments:**
- Are logical sequences complete (no missing steps)?
- Do conclusions follow from premises?
- Are decision criteria specified?
**Solutions:**
- Are examples representative or cherry-picked?
- Do solutions address the stated problems?
- Are edge cases considered?
---
#### Logic Check
**Identify logical issues:**
**Gaps:** "The document jumps from A to C without explaining B"
- **Action:** Note the gap; decide if you can fill it or need user input
**Contradictions:** "Section 2 says X, but Section 5 says not-X"
- **Action:** Flag for user; determine which is correct
**Circular reasoning:** "A is true because of B; B is true because of A"
- **Action:** Identify the actual foundation or note as limitation
**Unsupported claims:** "The author asserts X but provides no evidence"
- **Action:** Note as assumption; decide if acceptable
---
#### Practical Feasibility Check
**Can this actually be used?**
**Resource requirements:**
- Does this require tools/resources users won't have?
- Are time requirements realistic?
**Skill prerequisites:**
- Does this assume knowledge users won't have?
- Are dependencies stated?
**Context constraints:**
- Does this only work in specific contexts?
- Are limitations acknowledged?
---
### WHAT to Document
```markdown
## Completeness Evaluation
**Complete:** [What's well-covered]
**Gaps:** [What's missing]
**Contradictions:** [Any inconsistencies found]
**Assumptions:** [Unstated prerequisites]
**Logical soundness:** [Strong / Moderate / Weak] - [Rationale]
**Practical feasibility:** [High / Medium / Low] - [Rationale]
**Implications for skill creation:**
- [What needs to be added]
- [What needs clarification]
- [What needs user input]
```
**Present to user:** Share evaluation and ask for input on gaps/issues.
---
## Why Identify Applications
### WHY Application Mapping Matters
Extracted theory must connect to real-world use:
- **Triggers skill invocation:** Users need to know WHEN to use this
- **Validates usefulness:** Theory without application is just information
- **Reveals variations:** Different contexts may need different approaches
- **Informs examples:** Concrete scenarios make skills understandable
**Mental model:** A hammer is useful because you know when to use it (nails) and when not to (screws). Same with skills.
**Adler's "What of it?" question:** What does this matter in practice?
Without application mapping: skill sits unused because users don't recognize appropriate situations, unclear value proposition.
### WHAT to Identify
#### Scenario Mapping
**Generate concrete scenarios where this skill applies:**
**Format:**
```markdown
### Scenario: [Descriptive name]
**Context:** [Situation description]
**Problem:** [What needs solving]
**How skill applies:** [Which parts of methodology address this]
**Expected outcome:** [What success looks like]
**Variations:** [How application might differ in sub-contexts]
```
**Aim for:** 3-5 diverse scenarios covering different domains or contexts
---
#### Domain Transfer
**If document examples are domain-specific, identify transfer opportunities:**
**Original domain:** [Where document's examples come from]
**Transfer domains:** [Other areas where this applies]
**Example:**
- **Original:** Reading books analytically
- **Transfer:** Reading research papers, analyzing codebases, understanding documentation, evaluating business reports
**For each transfer:**
- What changes in the application?
- What stays the same?
- Are there domain-specific considerations?
---
#### Use Case Patterns
**Identify recurring patterns of when to use:**
**Pattern types:**
- **Problem-driven:** "Use when you encounter [PROBLEM]"
- **Goal-driven:** "Use when you want to achieve [GOAL]"
- **Context-driven:** "Use when you're in [CONTEXT]"
- **Trigger-driven:** "Use when [EVENT] happens"
**Example patterns:**
```markdown
**Problem-driven:** Use inspectional reading when you have limited time and need to understand a book quickly
**Goal-driven:** Use analytical reading when you want to master complex material above your current level
**Context-driven:** Use syntopical reading when researching a topic requiring multiple sources
**Trigger-driven:** Use critical reading stage after you've fully understood the author's position
```
---
### WHAT to Document
```markdown
## Application Mapping
**Scenarios:**
1. [Scenario 1 with context and application]
2. [Scenario 2]
3. [Scenario 3]
**Domain transfers:**
- Original: [Domain]
- Applicable to: [List of domains]
**Use case patterns:**
- [Pattern type]: [Trigger description]
**Boundary conditions (when NOT to use):**
- [Context where skill doesn't apply]
```
**Present to user:** "Do these scenarios match your intended use cases? Any to add or modify?"
---
## Why Transform to Actions
### WHY Actionable Steps Matter
Theory must become procedure:
- **Users need clear instructions:** "Do this, then this, then this"
- **Reduces cognitive load:** No need to interpret principles on the fly
- **Enables execution:** Can follow steps even without deep theoretical understanding
- **Allows refinement:** Clear steps can be improved iteratively
**Mental model:** Recipe vs. food science. Both are valuable, but recipes get you cooking immediately.
Without transformation to actions: skill remains theoretical, users struggle to apply, high barrier to entry.
### WHAT to Transform
#### From Propositions → Principles/Guidelines
**Propositions** (theoretical claims) become **Principles** (actionable guidance)
**Transformation pattern:**
```markdown
**Proposition:** [Theoretical claim]
**Principle:** [How to apply this in practice]
```
**Example:**
```markdown
**Proposition:** Active reading improves retention more than passive reading.
**Principle:** Take notes and mark important passages while reading to improve retention.
```
---
#### From Arguments → Workflow Steps
**Arguments** (logical sequences) become **Workflow Steps** (procedures)
**Transformation pattern:**
```markdown
**Argument:** [Logical sequence]
**Step X:** [Action to take]
**Input:** [What you need]
**Action:** [What to do]
**Output:** [What you get]
**Decision:** [If applicable]
```
**Example:**
```markdown
**Argument:** Systematic skimming before deep reading saves time by identifying valuable books.
**Step 1: Systematic Skim**
**Input:** Book you're considering reading
**Action:** Read title, TOC, index, first/last paragraphs
**Output:** Understanding of book structure and main points
**Decision:** Is this book worth deep reading? Yes → Step 2; No → Next book
```
---
#### From Solutions → Examples and Templates
**Solutions** (demonstrations) become **Examples** (illustrations) and **Templates** (structures)
**For examples:**
- Show application in specific context
- Include before/after if possible
- Highlight key decision points
**For templates:**
- Extract reusable structure
- Add placeholders
- Provide completion instructions
---
#### Handling Theoretical Content
**If source is theoretical (no inherent procedure):**
**Ask:**
1. What decisions does this theory inform?
2. What would change in practice based on this?
3. What would someone DO differently knowing this?
**Transform:**
```markdown
**Theory:** [Conceptual understanding]
**Application decision framework:**
**When to use:** [Trigger]
**How to apply:** [Action steps informed by theory]
**What to consider:** [Factors from theoretical understanding]
```
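A short illustration of the framework (the theory and actions below are invented for demonstration, not drawn from the source):

```markdown
**Theory:** Retention decays rapidly without review (the forgetting curve).

**Application decision framework:**
**When to use:** After any deep-reading session whose content you need to retain
**How to apply:** Schedule brief reviews at increasing intervals (1 day, 1 week, 1 month)
**What to consider:** Material difficulty, and how soon you will need to apply it
```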
---
### WHAT to Document
```markdown
## Actionable Transformation
**Principles** (from propositions):
1. [Principle 1]
2. [Principle 2]
**Workflow** (from arguments):
**Step 1:** [Action]
- Input: [X]
- Action: [Y]
- Output: [Z]
**Step 2:** [Action]
...
**Examples** (from solutions):
- [Example 1 showing application]
**Templates** (if applicable):
- [Template structure]
**Theoretical foundations** (if source is theoretical):
- Decision framework: [How theory informs practice]
```
**Present to user:** "Does this workflow make sense? Is it actionable as written?"
---
## Why Define Triggers
### WHY When/How Clarity Matters
Users need to know:
- **WHEN:** In what situations should I invoke this skill?
- **HOW:** What's the entry point and overall approach?
**Mental model:** A fire extinguisher has clear labels for WHEN (type of fire) and HOW (pull pin, aim, squeeze). Skills need the same clarity.
Without clear triggers: skills go unused even when appropriate, users uncertain about application, poor skill adoption.
### WHAT to Define
#### When to Use (Triggers)
**Based on the problem-solution mapping from Step 2 and the application scenarios identified earlier in Step 4:**
```markdown
## When to Use
**Use this skill when:**
- [Trigger condition 1]
- [Trigger condition 2]
- [Trigger condition 3]
**Examples of trigger situations:**
- [Concrete situation 1]
- [Concrete situation 2]
**Do NOT use when:**
- [Anti-pattern 1]
- [Anti-pattern 2]
```
**Make triggers specific:**
- ❌ Vague: "Use when you need to understand something"
- ✅ Specific: "Use when you need to extract a methodology from a document and make it reusable"
---
#### How to Use (Entry Point)
**Provide clear entry guidance:**
```markdown
## How to Use This Skill
**Prerequisites:**
- [What you need before starting]
**Typical session flow:**
1. [High-level step 1]
2. [High-level step 2]
3. [High-level step 3]
**Time investment:**
- [Estimated time for typical use]
**Expected outcome:**
- [What you'll have when done]
```
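A filled-in version for reference — the specifics below (time estimate, flow) are illustrative, not prescriptive:

```markdown
## How to Use This Skill

**Prerequisites:**
- The source document available as PDF or markdown

**Typical session flow:**
1. Run inspectional reading and confirm the document is skill-worthy
2. Work through structural analysis, extraction, and synthesis, approving each step's output
3. Review the constructed skill and validate it against a sample task

**Time investment:**
- Roughly 1-2 hours for a medium-length methodology document

**Expected outcome:**
- A reusable skill file with triggers, a workflow, and examples
```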
---
## Write Step 4 Output
After completing synthesis and getting user approval, write the results to the output file:
```bash
cat > "$SESSION_DIR/step-4-output.md" << 'EOF'
# Step 4: Synthesis and Application Output
## Completeness Evaluation
**Complete:** [What's well-covered]
**Gaps:** [What's missing]
**Contradictions:** [Any inconsistencies]
**Logical soundness:** [Strong/Moderate/Weak] - [Rationale]
**Practical feasibility:** [High/Medium/Low] - [Rationale]
## Application Scenarios
1. **Scenario:** [Name]
- Context: [Description]
- How skill applies: [Application]
- Expected outcome: [Result]
2. **Scenario:** [Name]
...
**Domain transfers:** [Original domain → Transfer domains]
## Actionable Workflow
**Principles** (from propositions):
1. [Principle 1]
2. [Principle 2]
**Workflow Steps** (from arguments):
**Step 1:** [Action]
- Input: [X]
- Action: [Do Y]
- Output: [Z]
- Decision: [If applicable]
**Step 2:** [Action]
...
**Examples:** [Application examples]
**Templates:** [If applicable]
## Triggers (When/How to Use)
**When to use:**
- [Trigger condition 1]
- [Trigger condition 2]
**When NOT to use:**
- [Anti-pattern 1]
**How to use:**
- Prerequisites: [What's needed]
- Time investment: [Estimate]
- Expected outcome: [What you'll have]
## User Validation
**Status:** [Approved / Needs revision]
**User notes:** [Feedback]
EOF
```
**Update global context:**
```bash
cat >> "$SESSION_DIR/global-context.md" << 'EOF'
## Step 4 Complete
**Workflow defined:** [X steps]
**Triggers identified:** Yes
**Ready for construction:** Yes
EOF
```
**Next step:** Step 5 (Skill Construction) will read `global-context.md` + `step-4-output.md`.
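Before handing off, it can help to confirm that both files Step 5 depends on actually exist and are non-empty. A minimal sketch, assuming `SESSION_DIR` is set as in the snippets above (the function name is an invention for this example):

```shell
# Returns success only if both Step 5 inputs exist and are non-empty.
step4_ready() {
  [ -s "$1/global-context.md" ] && [ -s "$1/step-4-output.md" ]
}

if step4_ready "$SESSION_DIR"; then
  echo "Step 4 outputs verified; ready for Step 5."
else
  echo "Step 4 outputs missing or empty in: $SESSION_DIR" >&2
fi
```

The `-s` test catches the common failure mode where a heredoc was interrupted and left a zero-byte file behind.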