Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:28:45 +08:00
commit 7f6390019e
8 changed files with 3301 additions and 0 deletions


@@ -0,0 +1,916 @@
---
name: mochi-creator
description: Create evidence-based spaced repetition flashcards using cognitive science principles from Andy Matuschak's research. Use when user wants to create Mochi cards, flashcards, study materials, or mentions learning, memorization, spaced repetition, SRS, Anki-style cards, or knowledge retention. Applies the 5 properties of effective prompts (focused, precise, consistent, tractable, effortful) to ensure cards actually work for long-term retention.
---
# Mochi Creator
## Overview
This skill enables creation and management of flashcards, decks, and templates in Mochi.cards, a spaced repetition learning system. **Critically, this skill applies evidence-based cognitive science principles** to ensure flashcards actually work for long-term retention.
**Core Philosophy**: Writing prompts for spaced repetition is task design. You're creating recurring retrieval tasks for your future self. Effective prompts leverage *retrieval practice* - actively recalling information strengthens memory far more than passive review.
Use this skill to transform content into study materials, organize learning resources into deck hierarchies, and create cards that are focused, precise, consistent, tractable, and effortful.
## Quick Start
### Setup
Before using this skill, set the Mochi API key as an environment variable:
```bash
export MOCHI_API_KEY="your_api_key_here"
```
To obtain an API key:
1. Open the Mochi.cards application
2. Navigate to Account Settings
3. Locate the API Keys section
4. Generate a new API key
### Using the Python Script
The `scripts/mochi_api.py` script provides a complete Python interface to the Mochi API. Import and use it in Python code:
```python
from scripts.mochi_api import MochiAPI
# Initialize the client (reads MOCHI_API_KEY from environment)
api = MochiAPI()
# Create a deck
deck = api.create_deck(name="Python Programming")
# Create a card in that deck
card = api.create_card(
content="# What is a list comprehension?\n---\nA concise way to create lists in Python",
deck_id=deck["id"],
manual_tags=["python", "syntax"]
)
```
Or execute it directly from command line for testing:
```bash
python scripts/mochi_api.py list-decks
python scripts/mochi_api.py create-deck "My Study Deck"
python scripts/mochi_api.py list-cards <deck-id>
```
## The Science of Effective Prompts
**CRITICAL**: Before creating any flashcard, understand what makes prompts effective. Bad prompts waste time and fail to build lasting memory. Great prompts compound learning over years.
### The Five Properties of Effective Prompts
Every prompt you create must satisfy these five properties (based on Andy Matuschak's research):
1. **Focused**: One detail at a time
- ❌ "What are Python decorators, what syntax do they use, and when would you use them?"
- ✅ "What is the primary purpose of Python decorators?" (separate cards for syntax and usage)
2. **Precise**: Specific questions demand specific answers
- ❌ "What's interesting about decorators?"
- ✅ "What Python feature allows decorators to modify function behavior?"
3. **Consistent**: Should produce the same answer each time
- ❌ "Give an example of a decorator" (produces different answers, creates interference)
- ✅ "What is the most common built-in Python decorator?" (consistent: @property or @staticmethod)
- Note: Creative prompts asking for novel answers are advanced and experimental
4. **Tractable**: You should answer correctly ~90% of the time
- If struggling, break down further or add cues
- Too easy? Increase effortfulness (next property)
- Add mnemonic cues in parentheses when helpful
5. **Effortful**: Must require actual memory retrieval, not trivial inference
- ❌ "Is Python a programming language?" (too trivial)
- ✅ "What problem does the @lru_cache decorator solve?" (requires retrieval)
### The "More Than You Think" Rule
**Write 3-5 focused prompts instead of 1 comprehensive prompt.**
This feels unnatural initially. You'll want to economize. Resist this urge.
- Each focused prompt takes only 10-30 seconds across an entire year of review
- Prompts are cheaper than you think
- Coarser prompts don't reduce work - they make learning harder and less reliable
**Example transformation:**
❌ One unfocused prompt:
```
Q: What are the ingredients in chicken stock?
A: Chicken bones, onions, carrots, celery, bay leaves, water
```
✅ Six focused prompts:
```
Q: What protein source forms the base of chicken stock?
A: Chicken bones
Q: What three vegetables form the aromatic base of chicken stock?
A: Onions, carrots, celery (mirepoix)
Q: What herb is traditionally added to chicken stock?
A: Bay leaves
Q: What liquid comprises the majority of chicken stock?
A: Water
Q: What is the French term for the onion-carrot-celery base?
A: Mirepoix
Q: What ratio of vegetables to liquid is typical in stock?
A: Roughly 1:4 (vegetables to water)
```
Notice: Each prompt lights a specific "bulb" in your understanding. The unfocused version leaves bulbs unlit.
### Emotional Connection is Primary
**Only create prompts about material that genuinely matters to you.**
- If creating cards "because you should," stop and reassess
- Connect prompts to your actual creative work and goals
- During review sessions, notice internal "sighs" - flag those cards for revision or deletion
- Delete liberally when emotional connection fades
- Boredom leads to abandonment of the entire system
**Ask yourself**: "Do I actually care about remembering this in six months? Why?"
### Common Anti-Patterns to Avoid
1. **Binary prompts** (yes/no questions)
- ❌ "Is encapsulation important in OOP?"
- ✅ "What benefit does encapsulation provide in object-oriented design?"
2. **Pattern-matching prompts** (answerable by syntax recognition)
- ❌ "In the context of RESTful APIs using HTTP methods with proper authentication headers, what method creates resources?"
- ✅ "What HTTP method creates resources?"
3. **Unfocused prompts** (multiple details)
- ❌ "What are the features, benefits, and drawbacks of Redis?"
- ✅ Create separate prompts for each feature, benefit, and drawback
4. **Vague prompts** (imprecise questions)
- ❌ "Tell me about async/await"
- ✅ "What problem does async/await solve in JavaScript?"
5. **Trivial prompts** (no retrieval required)
- ❌ "What does URL stand for?"
- ✅ "Why do URLs encode spaces as %20 instead of using literal spaces?"
### Quality Validation Checklist
Before creating each card, verify:
- [ ] **Focused**: Tests exactly one detail?
- [ ] **Precise**: Question is specific, answer is unambiguous?
- [ ] **Consistent**: Will produce the same answer each time?
- [ ] **Tractable**: I can answer correctly ~90% of the time?
- [ ] **Effortful**: Requires actual memory retrieval?
- [ ] **Emotional**: I genuinely care about remembering this?
If any checkbox fails, revise before creating the card.
## Core Tasks
### Creating Simple Flashcards
For basic question-and-answer flashcards, create cards with markdown content using `---` to separate card sides.
**Example user requests:**
- "Create a Mochi card about Python decorators"
- "Add a flashcard to my Python deck explaining lambda functions"
- "Make flashcards from these notes"
**Implementation approach:**
1. List existing decks to get deck IDs or create a new deck if needed:
```python
decks = api.list_decks()
# Or create new deck
deck = api.create_deck(name="Python Programming")
deck_id = deck["id"]
```
2. Format content with markdown and side separators:
```python
content = """# What are Python decorators?
---
Functions that modify the behavior of other functions or methods.
They use the @decorator syntax above function definitions.
Example:
    @staticmethod
    def my_function():
        pass
"""
```
3. Create the card with optional tags:
```python
card = api.create_card(
content=content,
deck_id=deck_id,
manual_tags=["python", "functions", "decorators"]
)
```
**Multi-card creation from text:**
When creating multiple cards from a document or conversation:
1. Parse or chunk the content into logical learning units
2. Format each as question/answer or concept/explanation
3. Create cards in a loop, handling each API response
4. Report success/failure for each card created (see `create_cards_from_list` under Batch Operations below)
### Creating Template-Based Cards
For structured, repeatable card formats (vocabulary, definitions, examples), use templates with fields.
**Example user requests:**
- "Create vocabulary flashcards with word, definition, and example"
- "Make a template for programming concepts with name, description, and code example"
- "Use the Basic Flashcard template to create cards"
**Implementation approach:**
1. Create or retrieve a template:
```python
# Create a new template
template = api.create_template(
    name="Vocabulary Card",
    content="# << Word >>\n\n**Definition:** << Definition >>\n\n**Example:** << Example >>",
    fields={
        "word": {
            "id": "word",
            "name": "Word",
            "type": "text",
            "pos": "a"
        },
        "definition": {
            "id": "definition",
            "name": "Definition",
            "type": "text",
            "pos": "b",
            "options": {"multi-line?": True}
        },
        "example": {
            "id": "example",
            "name": "Example",
            "type": "text",
            "pos": "c",
            "options": {"multi-line?": True}
        }
    }
)
```
2. Create cards using the template:
```python
card = api.create_card(
    content="",  # Content can be empty when using fields
    deck_id=deck_id,
    template_id=template["id"],
    fields={
        "word": {
            "id": "word",
            "value": "ephemeral"
        },
        "definition": {
            "id": "definition",
            "value": "Lasting for a very short time; temporary"
        },
        "example": {
            "id": "example",
            "value": "The beauty of cherry blossoms is ephemeral, lasting only a few weeks."
        }
    }
)
```
**Reusing existing templates:**
1. List available templates:
```python
templates = api.list_templates()
for template in templates["docs"]:
print(f"{template['name']}: {template['id']}")
```
2. Retrieve template details to see field structure:
```python
template = api.get_template(template_id)
field_ids = list(template["fields"].keys())
```
3. Create cards matching the template's field structure
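For step 3, a minimal sketch, assuming the retrieved template happens to use `front` and `back` as field IDs (substitute whatever `get_template` actually reports) and that `api`, `template_id`, and `deck_id` are already set up:
```python
template = api.get_template(template_id)

# Values keyed by the template's field IDs; these particular IDs are illustrative
answers = {
    "front": "What is idempotency?",
    "back": "An operation that produces the same result no matter how many times it runs",
}

card = api.create_card(
    content="",  # content comes from the template when fields are supplied
    deck_id=deck_id,
    template_id=template_id,
    fields={
        field_id: {"id": field_id, "value": answers.get(field_id, "")}
        for field_id in template["fields"]
    },
)
```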
### Managing Decks
Organize cards into hierarchical deck structures for better content organization.
**Example user requests:**
- "Create a deck for studying Spanish"
- "Organize these cards into a Python → Data Structures subdeck"
- "List my existing Mochi decks"
**Implementation approach:**
**Creating decks:**
```python
# Top-level deck
deck = api.create_deck(
name="Programming",
sort=1
)
# Nested subdeck
subdeck = api.create_deck(
name="Python",
parent_id=deck["id"],
sort=1
)
```
**Listing decks:**
```python
result = api.list_decks()
for deck in result["docs"]:
parent = f" (under {deck.get('parent-id', 'root')})" if deck.get("parent-id") else ""
print(f"{deck['name']}: {deck['id']}{parent}")
# Handle pagination if needed
if result.get("bookmark"):
next_page = api.list_decks(bookmark=result["bookmark"])
```
**Updating deck properties:**
```python
# Archive a deck
api.update_deck(deck_id, archived=True)
# Change deck display settings
api.update_deck(
deck_id,
cards_view="grid",
sort_by="updated-at",
show_sides=True
)
# Reorganize deck hierarchy
api.update_deck(deck_id, parent_id=new_parent_id)
```
**Deck organization strategies:**
- Use hierarchical structures: Subject → Topic → Subtopic
- Set `sort` field numerically to control deck ordering
- Archive completed decks instead of deleting them
- Use `archived?` to hide decks from active review
### Batch Operations
Create multiple cards efficiently from source materials like notes, documents, or conversations.
**Example user requests:**
- "Turn this conversation into Mochi flashcards"
- "Create cards from these 20 definitions"
- "Import my study notes into Mochi"
**Implementation approach:**
1. Parse source content into individual card items
2. Identify or create target deck
3. Determine if template-based or simple cards are appropriate
4. Create cards in sequence with error handling:
```python
def create_cards_from_list(items, deck_id, template_id=None):
    """Create multiple cards with error handling."""
    results = {"success": [], "failed": []}
    for item in items:
        try:
            if template_id:
                card = api.create_card(
                    content="",
                    deck_id=deck_id,
                    template_id=template_id,
                    fields=item["fields"]
                )
            else:
                card = api.create_card(
                    content=item["content"],
                    deck_id=deck_id,
                    manual_tags=item.get("tags", [])
                )
            results["success"].append(card["id"])
        except Exception as e:
            results["failed"].append({"item": item, "error": str(e)})
    return results
```
5. Report results to user with success count and any errors
**Content extraction strategies:**
- Split text by headers or numbered lists for question/answer pairs (see the sketch after this list)
- Extract key terms and definitions from formatted documents
- Parse conversation history for teaching moments or explanations
- Identify code examples and create cards with syntax and explanation
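A minimal sketch of the first strategy (splitting by level-2 markdown headers), assuming `api` is an initialized `MochiAPI` client and that each `## Header` introduces one question:
```python
import re

def cards_from_markdown(text: str, deck_id: str) -> list:
    """Create one card per '## Header' section: header as question, body as answer."""
    created = []
    sections = re.split(r"^## +", text, flags=re.MULTILINE)
    for section in sections[1:]:  # sections[0] is any preamble before the first header
        header, _, body = section.partition("\n")
        content = f"# {header.strip()}\n---\n{body.strip()}"
        created.append(api.create_card(content=content, deck_id=deck_id))
    return created
```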
## Knowledge-Type Specific Workflows
Different types of knowledge require different prompt strategies. Always apply the 5 properties, but adapt your approach based on what you're learning.
### Factual Knowledge (Simple Facts)
**Characteristics**: Names, dates, definitions, ingredients, components
**Strategy**: Break into atomic units, write more prompts than feels natural
**Example - Learning Recipe Components:**
❌ Poor approach:
```python
api.create_card(
content="# What ingredients are in chocolate chip cookies?\n---\nFlour, butter, sugar, brown sugar, eggs, vanilla, baking soda, salt, chocolate chips",
deck_id=deck_id
)
```
✅ Better approach - Create 5-8 focused cards:
```python
facts = [
    ("What is the primary dry ingredient in chocolate chip cookies?", "Flour"),
    ("What fat is used in chocolate chip cookies?", "Butter"),
    ("What two sweeteners are used in chocolate chip cookies?", "White sugar and brown sugar"),
    ("What provides structure in chocolate chip cookies?", "Eggs"),
    ("What flavoring extract is used in chocolate chip cookies?", "Vanilla"),
    ("What leavening agent makes cookies rise?", "Baking soda"),
    ("What balances sweetness in cookies?", "Salt"),
]
for question, answer in facts:
    api.create_card(
        content=f"# {question}\n---\n{answer}",
        deck_id=deck_id,
        manual_tags=["baking", "cookies", "recipes"]
    )
```
**Key principle**: Each card lights one specific "bulb" of understanding
### Lists (Closed vs Open)
**Closed lists** (fixed members like "7 continents"):
- Use cloze deletion - one card per missing element
- Keep order consistent across cards to build visual memory
- Example: "Africa, Antarctica, Asia, Australia, Europe, North America, __?" → "South America"
**Open lists** (evolving categories like "design patterns"):
- Don't memorize the whole list
- Create prompts linking instances to the category
- Write prompts about patterns within the category
- Example: "What design pattern does the Observer pattern belong to?" → "Behavioral patterns"
**Implementation:**
```python
# Closed list - continents (cloze style: show the list with one element blanked, recall the missing one)
continents = ["Africa", "Antarctica", "Asia", "Australia", "Europe", "North America", "South America"]
for i, continent in enumerate(continents):
    blanked = ", ".join(c if j != i else "___" for j, c in enumerate(continents))
    api.create_card(
        content=f"# The 7 continents: {blanked}\n---\n{continent}",
        deck_id=deck_id,
        pos=chr(97 + i)  # 'a', 'b', 'c', etc. for ordering
    )

# Open list - design patterns
patterns = [
    ("Observer", "Behavioral"),
    ("Factory", "Creational"),
    ("Adapter", "Structural"),
]
for pattern, category in patterns:
    api.create_card(
        content=f"# What category does the {pattern} pattern belong to?\n---\n{category} patterns",
        deck_id=deck_id,
        manual_tags=["design-patterns", category.lower()]
    )
```
### Conceptual Knowledge (Understanding Ideas)
**Characteristics**: Principles, theories, mental models, frameworks
**Strategy**: Use multiple "lenses" to trace the edges of a concept
**The Five Conceptual Lenses:**
1. **Attributes and tendencies**: What's always/sometimes/never true?
2. **Similarities and differences**: How does it relate to adjacent concepts?
3. **Parts and wholes**: Examples, sub-concepts, categories
4. **Causes and effects**: What does it do? When is it used?
5. **Significance and implications**: Why does it matter personally?
**Example - Learning "Dependency Injection":**
```python
concept = "dependency injection"
deck_id = get_or_create_deck("Software Design Patterns")
# Lens 1: Attributes
api.create_card(
content="# What is the core attribute of dependency injection?\n---\nDependencies are provided from outside rather than created internally",
deck_id=deck_id,
manual_tags=["dependency-injection", "attributes"]
)
# Lens 2: Similarities/Differences
api.create_card(
content="# How does dependency injection differ from service locator?\n---\nDI pushes dependencies in, service locator pulls them out",
deck_id=deck_id,
manual_tags=["dependency-injection", "comparison"]
)
# Lens 3: Parts/Wholes
api.create_card(
content="# Give one concrete example of dependency injection\n---\nPassing a database connection to a class constructor instead of creating it inside the class",
deck_id=deck_id,
manual_tags=["dependency-injection", "examples"]
)
# Lens 4: Causes/Effects
api.create_card(
content="# What problem does dependency injection solve?\n---\nMakes code testable by allowing mock dependencies to be injected",
deck_id=deck_id,
manual_tags=["dependency-injection", "benefits"]
)
# Lens 5: Significance
api.create_card(
content="# When would you use dependency injection in your work?\n---\nWhen writing testable APIs that need to swap database implementations or mock external services",
deck_id=deck_id,
manual_tags=["dependency-injection", "application"]
)
```
**Key principle**: Multiple angles create robust understanding resistant to forgetting
### Procedural Knowledge (How to Do Things)
**Characteristics**: Processes, workflows, algorithms, techniques
**Strategy**: Focus on transitions, timing, and rationale (not rote steps)
**Anti-pattern**: Don't create "step 1, step 2, step 3" cards - this encourages rote memorization
**Better approach**: Focus on:
- **Keywords**: Verbs, conditions, heuristics
- **Transitions**: When do you move from step X to Y?
- **Timing**: How long do things take? ("heads-up" information)
- **Rationale**: Why does each step matter?
**Example - Learning "How to Make Sourdough Bread":**
❌ Poor approach (rote steps):
```python
# Don't do this!
api.create_card(
content="# What is step 1 in making sourdough?\n---\nMix flour and water",
deck_id=deck_id
)
api.create_card(
content="# What is step 2 in making sourdough?\n---\nLet it autolyse for 30 minutes",
deck_id=deck_id
)
# ... etc - encourages mindless recitation
```
✅ Better approach (transitions and rationale):
```python
# Focus on transitions
api.create_card(
content="# When do you know the autolyse phase is complete?\n---\nAfter 30-60 minutes when flour is fully hydrated",
deck_id=deck_id,
manual_tags=["sourdough", "transitions"]
)
# Focus on rationale
api.create_card(
content="# Why do you autolyse before adding salt?\n---\nSalt inhibits gluten development; autolyse allows gluten to form first",
deck_id=deck_id,
manual_tags=["sourdough", "rationale"]
)
# Focus on timing/heads-up
api.create_card(
content="# How long does bulk fermentation take for sourdough?\n---\n4-6 hours at room temperature (temperature-dependent)",
deck_id=deck_id,
manual_tags=["sourdough", "timing"]
)
# Focus on conditions
api.create_card(
content="# What indicates sourdough is ready for shaping?\n---\n50-100% volume increase, jiggly texture, small bubbles on surface",
deck_id=deck_id,
manual_tags=["sourdough", "conditions"]
)
```
**Key principle**: Understand the *why* and *when*, not just the *what*
### Salience Prompts (Behavioral Change)
**Purpose**: Keep ideas "top of mind" to drive actual application, not just retention
**Use when**: You want to change behavior or apply knowledge, not just remember facts
**Strategy**: Create prompts around contexts where ideas might be meaningful
**Example - Applying "First Principles Thinking":**
```python
# Context-based application
api.create_card(
content="# What's one situation this week where you could apply first principles thinking?\n---\n(Give an answer specific to your current work context - answer may vary)",
deck_id=deck_id,
manual_tags=["first-principles", "application"],
review_reverse=False # Don't review in reverse
)
# Implication-focused
api.create_card(
content="# What's one assumption you're making in your current project that could be questioned?\n---\n(Identify a specific assumption - answer will vary)",
deck_id=deck_id,
manual_tags=["first-principles", "reflection"]
)
# Creative application
api.create_card(
content="# Describe a way to apply first principles thinking you haven't mentioned before\n---\n(Novel answer each time - leverages generation effect)",
deck_id=deck_id,
manual_tags=["first-principles", "creative"]
)
```
**Key principle**: Extend the "Baader-Meinhof effect" where new knowledge feels salient and you notice it everywhere
**Warning**: Salience prompts are experimental. Standard retrieval prompts have stronger research backing.
### Interactive Card Creation
Guide users through card creation with clarifying questions when details are ambiguous. **Critically, always validate quality before creating cards.**
**Example user requests:**
- "Help me create some flashcards"
- "I want to study biology with Mochi"
**Implementation approach:**
1. **First, establish emotional connection:**
- "What specifically do you want to remember from this material?"
- "How does this connect to your work or goals?"
- "What would success look like in 6 months?"
- If user seems unmotivated or creating cards "because they should," push back gently
2. **Determine information needed:**
- Which deck to add to (list existing or create new)
- What type of knowledge (factual, conceptual, procedural, salience)
- Card format (simple or template-based)
- Content source (manual input, existing notes, conversation)
- Tags and organization preferences
3. **Before creating ANY card, apply quality validation:**
- Check against 5 properties (focused, precise, consistent, tractable, effortful)
- Suggest breaking unfocused prompts into multiple cards
- Identify and fix anti-patterns (binary questions, vague prompts, etc.)
- Ask: "This prompt tests multiple details - should we break it into 3 separate cards?"
4. **Suggest knowledge-type appropriate patterns:**
- For concepts: "I can create cards using the 5 conceptual lenses to give you robust understanding"
- For procedures: "Instead of step-by-step cards, I'll focus on transitions and rationale"
- For facts: "I'll break this into atomic prompts - expect 5-8 cards instead of 1"
5. **Create cards with quality commentary:**
```python
# Show your reasoning
print("Creating focused card: Tests ONE detail (what problem DI solves)")
print("Precise: Asks specifically about testability benefit")
print("Tractable: You should get this right ~90% of the time")
api.create_card(
content="# What problem does dependency injection solve?\n---\nMakes code testable by allowing mock dependencies",
deck_id=deck_id,
manual_tags=["design-patterns", "dependency-injection"]
)
```
6. **Offer iteration and refinement:**
- "I've created 5 cards covering attributes, examples, and significance. Would you like me to add more lenses?"
- "This card might be too difficult - should I add a mnemonic cue?"
- "Should I create salience prompts to help you apply this in your work?"
7. **Flag quality issues proactively:**
- "I notice this prompt is unfocused - it asks about features AND drawbacks. Let me split it."
- "This question is binary (yes/no). Let me rephrase as an open-ended question."
- "This might be too trivial. Let me make it more effortful."
**Quality-First Workflow:**
```python
def create_card_with_validation(question: str, answer: str, deck_id: str) -> None:
    """Always validate before creating."""
    # Check 1: Focused?
    if " and " in question or len(answer.split(",")) > 2:
        print("⚠️ This prompt seems unfocused. Consider breaking into separate cards.")
        return
    # Check 2: Precise?
    vague_words = ["interesting", "important", "good", "tell me about"]
    if any(word in question.lower() for word in vague_words):
        print("⚠️ Question is vague. Be more specific about what you're asking.")
        return
    # Check 3: Binary?
    if question.strip().startswith(("Is ", "Does ", "Can ", "Will ")):
        print("⚠️ Binary question detected. Rephrase as open-ended.")
        return
    # Check 4: Pattern-matchable?
    if len(question) > 200:
        print("⚠️ Question is very long - might be answerable by pattern matching.")
        return
    # Validation passed - create the card
    api.create_card(
        content=f"# {question}\n---\n{answer}",
        deck_id=deck_id
    )
    print("✅ Card created (passed quality checks)")
```
**Remember**: Your job is to help create cards that *work* - not just cards that exist. Push back on poor quality prompts.
## Advanced Features
### Card Positioning
Control card order within decks using the `pos` field with lexicographic sorting:
```python
# Cards sort lexicographically by pos field
card1 = api.create_card(content="First card", deck_id=deck_id, pos="a")
card2 = api.create_card(content="Third card", deck_id=deck_id, pos="c")
# Insert between existing cards
card_between = api.create_card(content="Second card", deck_id=deck_id, pos="b")
```
### Tagging Strategies
Tags can be added inline in content or via `manual_tags`:
```python
# Inline tags in content
content = "# What is Python?\n---\nA programming language #python #programming"
# Manual tags (preferred for programmatic creation)
card = api.create_card(
content="# What is Python?\n---\nA programming language",
deck_id=deck_id,
manual_tags=["python", "programming", "basics"]
)
```
Use manual tags when:
- Creating cards programmatically
- Tags don't fit naturally in content
- Maintaining clean card appearance
- Need to update tags separately from content
### Soft Delete vs Hard Delete
Prefer soft deletion for safety:
```python
# Soft delete (reversible)
from datetime import datetime, timezone
api.update_card(card_id, trashed=datetime.now(timezone.utc).isoformat())
# Undelete
api.update_card(card_id, trashed=None)
# Hard delete (permanent)
api.delete_card(card_id) # Cannot be undone
```
### Pagination Handling
Handle pagination for large collections:
```python
def get_all_cards(deck_id):
    """Retrieve all cards from a deck, handling pagination."""
    all_cards = []
    bookmark = None
    while True:
        result = api.list_cards(deck_id=deck_id, limit=100, bookmark=bookmark)
        all_cards.extend(result["docs"])
        bookmark = result.get("bookmark")
        if not bookmark or not result["docs"]:
            break
    return all_cards
```
## Common Patterns
### Pattern: Topic Extraction
Extract topics from a document and create organized flashcards (a sketch follows the steps):
1. Identify main topics/sections
2. Create a deck for the subject
3. Create subdeck for each major topic
4. Generate cards from content within each topic
5. Tag cards with relevant concepts
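A minimal sketch of steps 2-4, assuming the topics have already been parsed into a dict mapping topic names to question/answer pairs and that `api` is an initialized client:
```python
def build_subject_decks(subject: str, topics: dict) -> None:
    """Create a subject deck, one subdeck per topic, and tagged cards in each subdeck."""
    subject_deck = api.create_deck(name=subject)
    for topic, qa_pairs in topics.items():
        subdeck = api.create_deck(name=topic, parent_id=subject_deck["id"])
        for question, answer in qa_pairs:
            api.create_card(
                content=f"# {question}\n---\n{answer}",
                deck_id=subdeck["id"],
                manual_tags=[subject.lower(), topic.lower()],
            )
```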
### Pattern: Vocabulary Lists
Transform vocabulary lists into flashcards:
1. Create or reuse vocabulary template
2. Parse vocabulary source (spreadsheet, document, etc.)
3. Create cards using template fields
4. Group into appropriate decks by category/difficulty
5. Tag with language and proficiency level
### Pattern: Conversation Capture
Turn teaching moments from conversations into cards:
1. Review conversation history for explanations
2. Identify distinct concepts explained
3. Format as question/answer pairs
4. Create cards in relevant topic deck
5. Tag with context from conversation
## Error Handling
Handle API errors gracefully:
```python
from scripts.mochi_api import MochiAPIError
try:
    card = api.create_card(content=content, deck_id=deck_id)
except MochiAPIError as e:
    # Report specific error to user
    print(f"Failed to create card: {e}")
    # Possibly retry or ask for corrected input
```
Common errors:
- Missing required fields (content, deck-id)
- Invalid deck or template IDs
- Validation failures on field values
- Network connectivity issues
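For transient failures (network issues, occasional server errors), a simple retry wrapper with exponential backoff is one option; validation errors should be fixed rather than retried. A sketch, not part of the bundled client:
```python
import time

def create_card_with_retry(content: str, deck_id: str, attempts: int = 3):
    """Retry card creation with exponential backoff on API errors."""
    for attempt in range(attempts):
        try:
            return api.create_card(content=content, deck_id=deck_id)
        except MochiAPIError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
```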
## Resources
### scripts/mochi_api.py
Complete Python client for the Mochi API. Provides classes and functions for:
- `MochiAPI`: Main client class with methods for all operations
- `create_card()`, `update_card()`, `delete_card()`: Card operations
- `create_deck()`, `update_deck()`, `delete_deck()`: Deck operations
- `create_template()`, `get_template()`, `list_templates()`: Template operations
- `list_cards()`, `list_decks()`: Listing with pagination support
Execute directly for command-line testing or import as a module for programmatic use.
### references/mochi_api_reference.md
Detailed API reference documentation including:
- Complete field type reference for templates
- Deck sort and view options
- Card content markdown syntax
- Positioning and tagging strategies
- Pagination details
- Error handling patterns
- Best practices for API usage
Consult this reference when:
- Creating complex templates with specialized field types
- Implementing advanced sorting or display options
- Handling edge cases or errors
- Optimizing API usage patterns


@@ -0,0 +1,297 @@
# Mochi API Reference
This document provides detailed reference information for the Mochi.cards API.
## Authentication
Mochi uses HTTP Basic Auth with your API key as the username (no password needed).
To get your API key:
1. Open Mochi app
2. Go to Account Settings
3. Find the API Keys section
Set the API key as an environment variable:
```bash
export MOCHI_API_KEY="your_api_key_here"
```
## Base URL
```
https://app.mochi.cards/api/
```
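Putting the authentication scheme and base URL together, a minimal raw-request sketch in Python (the `decks` path is assumed here; the bundled `scripts/mochi_api.py` client wraps these calls for you):
```python
import os
import requests

# Basic Auth: API key as the username, empty password
resp = requests.get(
    "https://app.mochi.cards/api/decks",
    auth=(os.environ["MOCHI_API_KEY"], ""),
)
resp.raise_for_status()
print(resp.json())
```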
## Data Formats
The API supports both JSON and transit+json. This skill uses JSON for simplicity.
## Pagination
List endpoints return paginated results with:
- `docs`: Array of items
- `bookmark`: Cursor for next page (may be present even if no more pages exist)
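A sketch of walking pages with the bundled client; because `bookmark` can be present even when the data is exhausted, stop when `docs` comes back empty:
```python
all_decks = []
bookmark = None
while True:
    page = api.list_decks(bookmark=bookmark)
    if not page["docs"]:
        break  # a bookmark may still be returned even though nothing is left
    all_decks.extend(page["docs"])
    bookmark = page.get("bookmark")
    if not bookmark:
        break
```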
## Cards
### Card Structure
Cards contain:
- `id`: Unique identifier
- `content`: Markdown content
- `deck-id`: Parent deck ID
- `template-id`: Optional template ID
- `fields`: Map of field IDs to values (when using templates)
- `tags`: Set of tags extracted from content
- `manual-tags`: Set of tags added manually
- `archived?`: Boolean indicating archived status
- `trashed?`: Timestamp if trashed (soft delete)
- `review-reverse?`: Whether to review in reverse
- `pos`: Lexicographic position for sorting
- `created-at`: Creation timestamp
- `updated-at`: Last modification timestamp
- `reviews`: Array of review history
- `references`: Set of references to other cards
### Card Content
Card content is markdown with some special features:
- `---` creates a card side separator (for multi-sided cards)
- `@[title](url)` creates references to other cards or external links
- `![](@media/filename.png)` embeds attachments
- `#tag` adds tags inline
- `<< Field name >>` in templates gets replaced by field values
### Positioning Cards
The `pos` field determines card order within a deck using lexicographic sorting:
- If card A has `pos` of `"6"` and card B has `pos` of `"7"`
- To insert between them, use `pos` of `"6V"` (any string between "6" and "7")
- Common pattern: use letters like `"a"`, `"b"`, `"c"` for initial cards, then insert with decimals or additional letters
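A small helper sketch for choosing a position between two existing ones. This is one workable convention, not an official algorithm; it covers simple cases like the `"6"`/`"7"` example above:
```python
def pos_between(left: str, right: str) -> str:
    """Return a pos string that sorts between left and right for simple cases.

    Appending a character to left always sorts after left; "V" sits mid-range,
    leaving room for later insertions on either side (e.g. "6" and "7" -> "6V").
    """
    candidate = left + "V"
    if left < candidate < right:
        return candidate
    raise ValueError("no simple midpoint - renumber positions or extend this helper")
```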
### Card Attachments
Attachments can be added to cards using multipart/form-data:
- POST `/cards/{card-id}/attachments/{filename}`
- DELETE `/cards/{card-id}/attachments/{filename}`
Attachments are referenced in card content using `@media/` syntax.
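A hedged sketch of a raw upload with `requests`; the multipart field name `file` is an assumption, so verify against the official docs if the upload is rejected:
```python
import os
import requests

def upload_attachment(card_id: str, path: str) -> None:
    """POST a file to a card's attachments endpoint, then reference it as @media/<filename>."""
    filename = os.path.basename(path)
    with open(path, "rb") as fh:
        resp = requests.post(
            f"https://app.mochi.cards/api/cards/{card_id}/attachments/{filename}",
            auth=(os.environ["MOCHI_API_KEY"], ""),
            files={"file": (filename, fh)},  # multipart/form-data; field name assumed
        )
    resp.raise_for_status()
```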
## Decks
### Deck Structure
Decks contain:
- `id`: Unique identifier
- `name`: Deck name
- `parent-id`: Optional parent deck for nesting
- `sort`: Numeric sort order (decks sorted by this number)
- `archived?`: Boolean indicating archived status
- `trashed?`: Timestamp if trashed (soft delete)
- `sort-by`: How cards are sorted in the deck
- `cards-view`: How cards are displayed
- `show-sides?`: Whether to show all card sides
- `sort-by-direction`: Boolean to reverse sort order
- `review-reverse?`: Whether to review cards in reverse
### Sort-by Options
How cards are sorted within the deck:
- `none`: Manual ordering (using `pos` field)
- `lexicographically` or `lexigraphically`: Alphabetically by name
- `created-at`: By creation date
- `updated-at`: By last modification date
- `retention-rate-asc`: By retention rate (ascending)
- `interval-length`: By review interval length
### Cards-view Options
How cards are displayed in the deck:
- `list`: Traditional list view
- `grid`: Grid/tile layout
- `note`: Note-taking view
- `column`: Column layout
### Deck Hierarchy
Decks can be nested using the `parent-id` field to create organizational hierarchies.
## Templates
### Template Structure
Templates define card structure using fields:
- `id`: Unique identifier
- `name`: Template name (1-64 characters)
- `content`: Markdown with field placeholders
- `pos`: Lexicographic position for sorting
- `fields`: Map of field definitions
- `style`: Visual styling options
- `options`: Template behavior options
### Field Placeholders
In template content, use `<< Field name >>` to insert field values.
### Field Types
Available field types:
- `text`: Simple text input
- `boolean`: True/false checkbox
- `number`: Numeric input
- `draw`: Drawing/sketch input
- `ai`: AI-generated content
- `speech`: Speech/audio recording
- `image`: Image upload
- `translate`: Translation field
- `transcription`: Audio transcription
- `dictionary`: Dictionary lookup
- `pinyin`: Chinese pinyin
- `furigana`: Japanese reading aid
### Field Definition Structure
Each field is defined as:
```json
{
  "id": "field-id",
  "name": "Display Name",
  "type": "text",
  "pos": "a",
  "content": "Default or instruction text",
  "options": {
    "multi-line?": true,
    "hide-term": false
  }
}
```
### Common Field Options
- `multi-line?`: Allow multi-line text input
- `hide-term`: Hide the term in certain views
- `ai-task`: Instructions for AI-generated fields
- Various type-specific options
### Template Style Options
```json
{
"text-alignment": "left" // or "center", "right"
}
```
### Template Options
```json
{
"show-sides-separately?": false // Show template sides separately during review
}
```
### Example Template
Basic flashcard template:
```json
{
  "name": "Basic Flashcard",
  "content": "# << Front >>\n---\n<< Back >>",
  "fields": {
    "front": {
      "id": "front",
      "name": "Front",
      "type": "text",
      "pos": "a"
    },
    "back": {
      "id": "back",
      "name": "Back",
      "type": "text",
      "pos": "b",
      "options": {
        "multi-line?": true
      }
    }
  }
}
```
## Common Patterns
### Creating Simple Flashcards
For simple flashcards, use markdown content with `---` separator:
```markdown
# Question here
---
Answer here
```
### Creating Template-Based Cards
1. First retrieve or create a template
2. Get the field IDs from the template
3. Create card with field values matching the template structure
### Batch Operations
To create multiple cards:
1. List existing decks to get deck ID
2. Loop through content items
3. Create cards one at a time (API doesn't support batch creation)
### Organizing with Tags
Tags can be added two ways:
1. Inline in content: `#python #programming`
2. Using `manual-tags` field: `["python", "programming"]` (without # prefix)
Manual tags are useful when you want tags separate from content.
### Soft Delete vs Hard Delete
- **Soft delete**: Set `trashed?` to current ISO 8601 timestamp
- **Hard delete**: Use DELETE endpoint (permanent, cannot be undone)
Soft delete is recommended for safety.
## Error Handling
HTTP Status Codes:
- `2xx`: Success
- `4xx`: Client error (missing parameters, validation failure, not found)
- `5xx`: Server error (rare)
Error responses include:
```json
{
"errors": {
"field-name": "Error message for this field"
}
}
```
Or for general errors:
```json
{
"errors": ["Error message"]
}
```
## Rate Limiting
The API documentation doesn't specify rate limits, but follow good practices:
- Don't make excessive requests in short periods
- Implement exponential backoff on errors
- Cache deck/template lists when possible
## Best Practices
1. **Use Templates**: For consistent card structure, create templates
2. **Organize with Decks**: Create hierarchical deck structures
3. **Tag Consistently**: Use consistent tag naming conventions
4. **Soft Delete First**: Use trashed? instead of permanent deletion
5. **Position Strategically**: Use the `pos` field for custom ordering
6. **Validate Content**: Check markdown syntax before creating cards
7. **Handle Pagination**: Always check for and handle `bookmark` in list responses
8. **Store IDs**: Keep track of deck and template IDs for reuse


@@ -0,0 +1,569 @@
# Prompt Design Principles - Deep Dive
This document provides comprehensive background on the cognitive science and research behind effective spaced repetition prompt design, based on Andy Matuschak's research and extensive literature review.
## Table of Contents
- [Core Mechanism: Retrieval Practice](#core-mechanism-retrieval-practice)
- [The Five Properties Explained](#the-five-properties-explained)
- [Knowledge Type Strategies](#knowledge-type-strategies)
- [Cognitive Science Background](#cognitive-science-background)
- [Common Failure Modes](#common-failure-modes)
- [Advanced Techniques](#advanced-techniques)
- [Research References](#research-references)
## Core Mechanism: Retrieval Practice
### What Makes Spaced Repetition Work?
Spaced repetition works through **retrieval practice** - the act of actively recalling information from memory strengthens that memory more effectively than passive review (re-reading).
**Key Research Finding** (Roediger & Karpicke, 2006):
- Students who practiced retrieval remembered 50% more after one week than students who only re-read material
- This effect persisted even when retrieval practice took less total time
- The benefit increased with longer retention intervals
### Why Prompts Matter
When you write a prompt in a spaced repetition system, you are giving your future self a recurring task. **Prompt design is task design.**
A poorly designed prompt creates a recurring task that:
- Doesn't actually strengthen the memory you care about
- Wastes time through false positives (answering without knowing)
- Creates interference through inconsistent retrievals
- Leads to abandonment through boredom
A well-designed prompt creates a recurring task that:
- Precisely targets the knowledge you want to retain
- Builds robust understanding resistant to forgetting
- Takes minimal time (10-30 seconds per year)
- Feels meaningful and connected to your goals
## The Five Properties Explained
### 1. Focused: One Detail at a Time
**Principle**: Each prompt should test exactly one piece of knowledge.
**Why it matters**: When a prompt tests multiple details simultaneously, you may successfully retrieve some but not others. This creates "partial lighting" - some mental "bulbs" light up, others don't. Your brain interprets this as success, but critical knowledge remains unstrengthened.
**The "bulbs" metaphor**: Imagine your full understanding of a concept as a string of light bulbs. Each bulb represents one aspect:
- What it is
- What it does
- When to use it
- How it differs from similar concepts
- Why it matters
An unfocused prompt like "Explain dependency injection" might light some bulbs but leave others dark. You'll feel like you "know" it, but gaps remain.
**Research basis**: Testing effect research (Roediger et al.) shows that retrieval must be specific to be effective. Vague retrievals don't strengthen specific memory traces.
**Practical example**:
❌ Unfocused:
```
Q: What is Redux and how does it work?
A: State management library, uses actions and reducers, maintains single store
```
This tests 3+ concepts:
- What Redux is (category)
- What actions are
- What reducers are
- What the single store principle is
✅ Focused - break into 4 cards:
```
Q: What category of library is Redux?
A: State management library
Q: What two mechanisms does Redux use to update state?
A: Actions (describe changes) and reducers (apply changes)
Q: What is the "single source of truth" principle in Redux?
A: All state lives in one store object
Q: What problem does Redux's unidirectional data flow solve?
A: Makes state changes predictable and debuggable
```
### 2. Precise: Specific Questions, Specific Answers
**Principle**: Questions should be specific about what they're asking for. Answers should be unambiguous.
**Why it matters**: Vague questions elicit vague answers. Vague retrievals are shallow retrievals. Shallow retrievals don't build strong memories.
**The precision spectrum**:
- **Too vague**: "What's important about X?"
- **Better**: "What benefit does X provide?"
- **Best**: "What specific problem does X solve in [context]?"
**Vague language to avoid**:
- "Interesting" - interesting to whom? In what way?
- "Important" - important for what purpose?
- "Good"/"bad" - by what criteria?
- "Tell me about" - what specifically?
- "Describe" - describe which aspect?
**Research basis**: The "transfer appropriate processing" principle (Morris et al., 1977) shows that memory retrieval is most effective when the retrieval context matches the encoding context. Precision in both creates stronger bonds.
**Practical example**:
❌ Vague:
```
Q: What's important about the async/await pattern?
A: Makes asynchronous code easier to read
```
Problems:
- "Important" is subjective
- "Easier to read" compared to what?
- Doesn't test specific understanding
✅ Precise:
```
Q: What syntax does async/await replace for handling promises?
A: Promise.then() chains
Q: What error handling mechanism works with async/await?
A: try/catch blocks (instead of .catch())
Q: What does the 'await' keyword do to promise execution?
A: Pauses function execution until promise resolves
```
### 3. Consistent: Same Answer Each Time
**Principle**: Prompts should produce the same answer on each review (with advanced exceptions for creative prompts).
**Why it matters**: When a prompt can have multiple valid answers, each retrieval strengthens a *different* memory trace. This creates **retrieval-induced forgetting** - recalling one answer actually inhibits other related memories.
**Example of the problem**:
```
Q: Give an example of a design pattern
```
Review 1: "Observer pattern"
Review 2: "Factory pattern"
Review 3: "Singleton pattern"
Each retrieval strengthens a different trace. None becomes reliably accessible. The category "design pattern" becomes associated with whichever example you recalled most recently, inhibiting others.
**Research basis**: Retrieval-induced forgetting (Anderson et al., 1994) shows that retrieving some items from a category inhibits other items in that category.
**How to handle lists and examples**:
For **closed lists** (fixed members):
- Use cloze deletion - one card per missing element
- Keep the same order to build visual "shape" memory
For **open lists** (evolving categories):
- Don't try to memorize the whole list
- Create prompts linking instances to category
- Write prompts about patterns within the category
For **examples**:
- Ask for "the most common example" or "a canonical example"
- Or flip it: "What pattern does Observer implement?" (specific instance → category)
**Creative prompts exception**: Advanced users can write prompts that explicitly ask for novel answers each time. These leverage the "generation effect" but are less well-researched.
### 4. Tractable: ~90% Success Rate
**Principle**: You should be able to answer correctly about 90% of the time.
**Why it matters**:
- Too easy (>95%): Wastes time, no effortful retrieval
- Too hard (<80%): Frustrating, leads to abandonment, creates negative associations
**The Goldilocks zone**: Enough difficulty to require memory retrieval, not so much that you frequently fail.
**How to calibrate**:
If struggling:
1. Break down further into smaller pieces
2. Add mnemonic cues (in parentheses in the answer)
3. Provide more context in the question
4. Link to existing strong memories
If too easy:
1. Remove scaffolding from the question
2. Increase effortfulness (see property 5)
3. Combine with related prompt for slightly broader scope
**Mnemonic cues examples**:
```
Q: What algorithm finds the shortest path in a weighted graph?
A: Dijkstra's algorithm (sounds like "dike-stra" → building dikes along shortest water path)
Q: What design pattern allows object behavior to vary based on internal state?
A: State pattern (literally named for what it does - different states, different behavior)
```
Cues should:
- Appear in parentheses in the answer
- Connect to vivid, memorable associations
- Use visual, emotional, or humorous links
- Relate new knowledge to existing memories
**Research basis**: Desirable difficulties (Bjork, 1994) - optimal learning occurs with moderate challenge. Spaced repetition systems work best when interval scheduling keeps difficulty in the sweet spot.
### 5. Effortful: Requires Actual Retrieval
**Principle**: The prompt must require pulling information from memory, not trivial inference or pattern matching.
**Why it matters**: The retrieval itself is what strengthens memory. If you can answer without retrieving, you're not getting the benefit.
**Common failure modes**:
**Too trivial**:
```
Q: Is Python a programming language?
A: Yes
```
No retrieval required - everyone knows this.
**Pattern-matchable**:
```
Q: In the context of RESTful APIs using HTTP methods with proper authentication headers and JSON payloads, what method is used to create a new resource?
A: POST
```
The question is so specific and long that you can answer by pattern matching ("create" → POST) without actually retrieving understanding of REST principles.
**The right level**:
```
Q: What problem does the POST method solve that GET cannot?
A: Sending data in the request body (GET uses URL parameters)
Q: Why should resource creation use POST instead of PUT?
A: PUT requires knowing the resource ID in advance; POST lets the server assign it
```
These require retrieving actual understanding.
**How to assess effortfulness**:
Ask yourself during review:
- Did I have to think about this?
- Or did I answer automatically/reflexively?
If answering automatically:
- Question might be too easy
- Or you've truly internalized it (good!)
- Check: Can you apply it in a novel context?
**Research basis**: Retrieval effort correlates with learning gains (Bjork & Bjork, 2011). Effort during encoding and retrieval creates stronger, more durable memories.
## Knowledge Type Strategies
### Factual Knowledge
**Definition**: Discrete, well-defined facts - names, dates, definitions, components, ingredients.
**Core strategy**: Break into atomic units. Write more prompts than feels natural.
**Why this works**: Each fact is a separate memory trace. Lumping them together creates the unfocused prompt problem.
**Example transformation**:
❌ One card:
```
Q: What are the SOLID principles?
A: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, Dependency Inversion
```
✅ Five cards:
```
Q: What does the 'S' in SOLID stand for?
A: Single Responsibility
Q: What does the 'O' in SOLID stand for?
A: Open/Closed
Q: What does the 'L' in SOLID stand for?
A: Liskov Substitution
Q: What does the 'I' in SOLID stand for?
A: Interface Segregation
Q: What does the 'D' in SOLID stand for?
A: Dependency Inversion
```
Then create additional cards for what each principle means.
### Conceptual Knowledge
**Definition**: Understanding ideas, principles, theories, mental models.
**Core strategy**: Use multiple "lenses" to trace the edges of a concept.
**The five conceptual lenses**:
1. **Attributes and tendencies**: What's always/sometimes/never true?
2. **Similarities and differences**: How does it relate to adjacent concepts?
3. **Parts and wholes**: What are examples? What are sub-concepts?
4. **Causes and effects**: What does it do? When is it used?
5. **Significance and implications**: Why does it matter to you personally?
**Why this works**: A robust concept is not a single memory - it's a network of related memories. Approaching from multiple angles builds that network.
**Research basis**: Elaborative encoding (Craik & Lockhart, 1972) - deeper, more elaborate processing creates stronger memories. Multiple retrieval routes create redundancy and resilience.
**Example - Understanding "Technical Debt"**:
```
Lens 1 - Attributes:
Q: What's the core attribute of technical debt?
A: Code shortcuts that save time now but cost time later
Lens 2 - Similarities:
Q: How does technical debt differ from bugs?
A: Bugs are unintentional; technical debt is a conscious trade-off
Lens 3 - Parts/Wholes:
Q: Give one concrete example of technical debt
A: Skipping tests to ship faster (will slow down future changes)
Lens 4 - Causes/Effects:
Q: What forces cause teams to accumulate technical debt?
A: Deadline pressure, incomplete understanding, changing requirements
Lens 5 - Significance:
Q: When is taking on technical debt the right choice for your team?
A: When speed to market outweighs future maintenance cost (time-sensitive opportunities)
```
### Procedural Knowledge
**Definition**: How to do things - processes, workflows, algorithms, techniques.
**Core strategy**: Focus on transitions, timing, and rationale. Avoid rote step memorization.
**Why rote steps fail**: Memorizing "step 1, step 2, step 3" encourages mindless recitation without understanding. You can recite the steps but not apply them flexibly.
**Better focuses**:
1. **Transitions**: When do you move from step X to step Y?
2. **Conditions**: How do you know you're ready for the next step?
3. **Rationale**: Why does each step matter?
4. **Timing**: How long do things take? ("heads-up" information)
5. **Heuristics**: Rules of thumb for decision points
**Example - Git Workflow**:
❌ Rote steps:
```
Q: What are the steps to create a feature branch?
A: 1. git checkout main, 2. git pull, 3. git checkout -b feature-name, 4. Make changes, 5. git commit, 6. git push
```
✅ Transitions and rationale:
```
Q: Why pull before creating a feature branch?
A: To start from the latest changes (avoid merge conflicts later)
Q: When is the right time to create a feature branch?
A: Before making any changes (keep main clean)
Q: What's the relationship between commits and pushes?
A: Commit saves locally, push shares with remote (can commit many times before pushing)
Q: How do you know when a feature branch is ready to merge?
A: Tests pass, code reviewed, conflicts resolved
```
## Cognitive Science Background
### Spacing Effect
**Finding**: Distributed practice beats massed practice.
**Application**: Spaced repetition systems automatically schedule reviews at increasing intervals. Your job is to write prompts that make each review meaningful.
**Research**: Ebbinghaus (1885), Cepeda et al. (2006)
### Testing Effect
**Finding**: Retrieval practice is more effective than re-studying.
**Application**: Each prompt review is a retrieval practice session. More prompts = more practice opportunities.
**Research**: Roediger & Karpicke (2006)
### Elaborative Encoding
**Finding**: Deeper processing creates stronger memories.
**Application**: Connect new information to existing knowledge. Use multiple lenses for concepts. Ask "why" not just "what".
**Research**: Craik & Lockhart (1972)
### Generation Effect
**Finding**: You remember better what you generate yourself.
**Application**: Answers should come from your memory, not pattern matching. Creative prompts leverage this explicitly.
**Research**: Slamecka & Graf (1978)
### Retrieval-Induced Forgetting
**Finding**: Retrieving some items from a category inhibits other items.
**Application**: Prompts must produce consistent answers. Variable answers create interference.
**Research**: Anderson et al. (1994)
## Common Failure Modes
### False Positives: Answering Without Knowing
**Problem**: You answer correctly but don't actually have the knowledge.
**Causes**:
1. Pattern matching on question structure
2. Binary questions (50% guess rate)
3. Trivial prompts (no retrieval needed)
4. Recognition instead of recall
**Solutions**:
- Keep questions short and simple
- Use open-ended questions
- Increase effortfulness
- Test application, not just recall
### False Negatives: Knowing But Failing
**Problem**: You have the knowledge but answer incorrectly.
**Causes**:
1. Not enough context to exclude alternative answers
2. Too much provincial context (overfitting to specific examples)
3. Prompt is too hard (needs breaking down)
**Solutions**:
- Include just enough context
- Express general knowledge generally
- Break into smaller pieces
- Add mnemonic cues
### The Sigh: Boredom and Abandonment
**Problem**: Reviewing cards feels like a chore. You abandon the system.
**Causes**:
1. No emotional connection to material
2. Creating cards "because you should"
3. Prompts are trivial or frustrating
4. Material no longer relevant
**Solutions**:
- Only create prompts about things that matter to you
- Connect to actual creative work and goals
- Be alert to internal sighs during review
- Delete liberally when connection fades
- Revise frustrating prompts immediately
## Advanced Techniques
### Salience Prompts
**Purpose**: Keep ideas "top of mind" to drive behavioral change and application.
**How they differ**: Standard prompts build retention. Salience prompts extend the period where knowledge feels salient - where you notice it everywhere.
**Example patterns**:
```
# Context-based
Q: What's one situation this week where you could apply X?
A: (Answer varies based on current context)
# Implication-focused
Q: What's one assumption you're making that X challenges?
A: (Identify specific assumption - varies)
# Creative application
Q: Describe a way to apply X you haven't mentioned before
A: (Novel answer each time)
```
**Warning**: Less well-researched than standard retrieval prompts. Experimental.
**Research basis**: Frequency judgments (Tversky & Kahneman, 1973) - recently encountered concepts feel more common (Baader-Meinhof effect). Salience prompts extend this.
### Interpretation Over Transcription
**Principle**: Don't parrot source material verbatim. Extract transferable principles.
**Why**: Verbatim cards create brittle knowledge that doesn't transfer to new contexts.
**Example**:
❌ Transcription:
```
Q: What does the recipe say about olive oil?
A: "Use 2 tablespoons extra virgin olive oil"
```
✅ Interpretation:
```
Q: What's the typical ratio of olive oil to pasta in aglio e olio?
A: Roughly 2 tablespoons per serving (adjust based on pasta amount)
```
The interpreted version extracts the principle (ratio) rather than the specific quantity.
### Cues and Mnemonics
**When to add cues**: When you're struggling with a prompt that's otherwise well-designed.
**How to add cues**: In parentheses in the answer, using vivid associations.
**Types of associations**:
- Visual (create a mental image)
- Emotional (attach a feeling)
- Humorous (funny sticks)
- Personal (connect to your experience)
**Example**:
```
Q: What algorithm is optimal for finding shortest paths from one source to all other vertices?
A: Dijkstra's algorithm (sounds like "dike-stra" → imagine building dikes along the shortest path to dam flooding from source to all destinations)
```
### Creative Prompts
**Purpose**: Drive application and novel thinking, not just retention.
**Pattern**: Ask for a different answer each time.
**Example**:
```
Q: Explain one way you could apply first principles thinking that you haven't mentioned before
A: (Generate novel answer using current context)
```
**Research status**: Experimental. Leverages generation effect but less proven than standard retrieval prompts.
## Research References
- Anderson, M. C., Bjork, R. A., & Bjork, E. L. (1994). Remembering can cause forgetting: Retrieval dynamics in long-term memory.
- Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings.
- Bjork, R. A., & Bjork, E. L. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning.
- Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis.
- Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research.
- Ebbinghaus, H. (1885). Memory: A contribution to experimental psychology.
- Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing.
- Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention.
- Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomenon.
- Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability.
---
For practical application guidance, see the main SKILL.md file and knowledge_type_templates.md.


@@ -0,0 +1,846 @@
#!/usr/bin/env python3
"""
ABOUTME: Mochi API client for creating and managing flashcards, decks, and templates.
ABOUTME: Provides a Python interface to the Mochi.cards REST API with authentication and error handling.
Dependencies: requests
Install with: pip install requests
"""
import os
import sys
import json
from typing import Dict, List, Optional, Any
from urllib.parse import urljoin
import requests
from requests.auth import HTTPBasicAuth
class MochiAPIError(Exception):
"""Custom exception for Mochi API errors."""
pass
class PromptQualityError(Exception):
"""Exception raised when a prompt fails quality validation."""
pass
def validate_prompt_quality(question: str, answer: str, strict: bool = False) -> Dict[str, Any]:
"""
Validate a prompt against the 5 properties of effective prompts.
Based on Andy Matuschak's research on spaced repetition prompt design.
Args:
question: The question/front of the card
answer: The answer/back of the card
strict: If True, raise exception on validation failure
Returns:
Dict with 'valid' (bool), 'issues' (list), and 'suggestions' (list)
Raises:
PromptQualityError: If strict=True and validation fails
"""
issues = []
suggestions = []
# Check 1: Focused (one detail at a time)
if " and " in question or len(answer.split(",")) > 2:
issues.append("Prompt appears unfocused (tests multiple details)")
suggestions.append("Break into separate cards, one per detail")
# Check 2: Precise (specific, not vague)
vague_words = ["interesting", "important", "good", "bad", "tell me about", "what about"]
if any(word in question.lower() for word in vague_words):
issues.append("Question uses vague language")
suggestions.append("Be specific about what you're asking for")
# Check 3: Consistent (same answer each time)
variable_prompts = ["give an example", "name one", "describe a"]
if any(phrase in question.lower() for phrase in variable_prompts):
# This is OK for creative prompts, but warn
suggestions.append("Note: This prompt may produce variable answers (advanced technique)")
# Check 4: Binary questions (usually poor)
if question.strip().startswith(("Is ", "Does ", "Can ", "Will ", "Do ", "Are ")):
issues.append("Binary question (yes/no) - produces shallow understanding")
suggestions.append("Rephrase as open-ended question starting with What/Why/How/When")
# Check 5: Pattern-matchable (too long, answerable by syntax)
if len(question) > 200:
issues.append("Question is very long - may be answerable by pattern matching")
suggestions.append("Keep questions short and simple")
# Check 6: Trivial (too easy, no retrieval)
    trivial_indicators = [
        "what does", "acronym", "stands for",
        "true or false", "correct"
    ]
if any(indicator in question.lower() for indicator in trivial_indicators) and len(answer) < 20:
suggestions.append("Verify this requires memory retrieval, not trivial knowledge")
valid = len(issues) == 0
result = {
"valid": valid,
"issues": issues,
"suggestions": suggestions
}
if strict and not valid:
error_msg = "Prompt quality validation failed:\n"
error_msg += "\n".join(f" - {issue}" for issue in issues)
error_msg += "\n\nSuggestions:\n"
error_msg += "\n".join(f" - {suggestion}" for suggestion in suggestions)
raise PromptQualityError(error_msg)
return result
def create_conceptual_lens_cards(
api: "MochiAPI",
concept: str,
deck_id: str,
lenses: Optional[Dict[str, str]] = None,
base_tags: Optional[List[str]] = None
) -> List[Dict]:
"""
Create multiple cards for a concept using the 5 conceptual lenses approach.
This creates robust understanding by examining a concept from multiple angles:
- Attributes: What's always/sometimes/never true?
- Similarities: How does it relate to adjacent concepts?
- Parts/Wholes: Examples, sub-concepts, categories
- Causes/Effects: What does it do? When is it used?
- Significance: Why does it matter personally?
Args:
api: MochiAPI instance
concept: The concept to create cards about
deck_id: Target deck ID
        lenses: Dict mapping lens names to answer content for that lens.
            Lenses without an answer are skipped; the default questions only
            supply the card fronts.
base_tags: Common tags for all cards (concept name will be added automatically)
Returns:
List of created card objects
Example:
>>> api = MochiAPI()
>>> cards = create_conceptual_lens_cards(
... api,
... concept="dependency injection",
... deck_id="deck123",
... lenses={
... "attributes": "Dependencies are provided from outside",
... "similarities": "Different from service locator (push vs pull)",
... "parts": "Pass DB connection to constructor instead of creating it",
... "causes": "Makes code testable by allowing mock dependencies",
... "significance": "Essential for writing testable FastAPI endpoints"
... }
... )
"""
if base_tags is None:
base_tags = []
# Add concept as tag
concept_tag = concept.lower().replace(" ", "-")
all_tags = base_tags + [concept_tag]
    # Question stems for each lens; the answers come from the lenses argument
default_questions = {
"attributes": f"What is the core attribute of {concept}?",
"similarities": f"How does {concept} differ from similar concepts?",
"parts": f"Give a concrete example of {concept}",
"causes": f"What problem does {concept} solve?",
"significance": f"When would you use {concept} in your work?"
}
cards = []
for lens_name, question in default_questions.items():
        # Answers come from the caller-supplied lenses mapping; skip lenses with no answer
        answer = (lenses or {}).get(lens_name, "")
        if not answer:
            continue
# Create the card
content = f"# {question}\n---\n{answer}"
card = api.create_card(
content=content,
deck_id=deck_id,
manual_tags=all_tags + [lens_name]
)
cards.append(card)
return cards
def break_into_atomic_prompts(complex_prompt: str, answer: str) -> List[Dict[str, str]]:
"""
Suggest how to break a complex prompt into atomic, focused prompts.
This is a heuristic function that identifies common patterns of unfocused prompts.
Args:
complex_prompt: The unfocused question
answer: The complex answer
Returns:
List of dicts with 'question' and 'answer' keys, or empty list if can't break down
Example:
>>> prompts = break_into_atomic_prompts(
... "What are Python decorators and what syntax do they use?",
... "Functions that modify behavior, use @ syntax"
... )
>>> len(prompts)
2
"""
suggestions = []
# Pattern 1: "What are X and Y" or "What do X and Y"
if " and " in complex_prompt:
parts = complex_prompt.split(" and ")
if len(parts) == 2:
# Try to split answer as well
answer_parts = answer.split(",")
if len(answer_parts) >= 2:
suggestions.append({
"question": parts[0].strip() + "?",
"answer": answer_parts[0].strip()
})
                # Rebuild the second half as a standalone question (capitalized, ending in "?")
                second_question = parts[1].strip().rstrip("?")
                if second_question:
                    suggestions.append({
                        "question": second_question[0].upper() + second_question[1:] + "?",
                        "answer": ", ".join(answer_parts[1:]).strip()
                    })
# Pattern 2: Multiple comma-separated items in answer
if "," in answer:
        items = [item.strip() for item in answer.split(",") if item.strip()]
if len(items) > 2:
# Suggest creating one card per item
for item in items:
suggestions.append({
"question": f"{complex_prompt} (focus on {item.split()[0]})",
"answer": item,
"note": "Break into individual cards, one per item"
})
return suggestions
def create_procedural_cards(
api: "MochiAPI",
procedure_name: str,
deck_id: str,
transitions: Optional[List[Dict[str, str]]] = None,
rationales: Optional[List[Dict[str, str]]] = None,
timings: Optional[List[Dict[str, str]]] = None,
base_tags: Optional[List[str]] = None
) -> List[Dict]:
"""
Create cards for procedural knowledge focusing on transitions, rationales, and timing.
Avoids rote "step 1, step 2" memorization in favor of understanding.
Args:
api: MochiAPI instance
procedure_name: Name of the procedure/process
deck_id: Target deck ID
transitions: List of dicts with 'condition' and 'next_step'
rationales: List of dicts with 'action' and 'reason'
timings: List of dicts with 'phase' and 'duration'
base_tags: Common tags for all cards
Returns:
List of created card objects
Example:
>>> cards = create_procedural_cards(
... api,
... procedure_name="sourdough bread making",
... deck_id="deck123",
... transitions=[
... {"condition": "flour is fully hydrated", "next_step": "add salt"}
... ],
... rationales=[
... {"action": "autolyse before adding salt",
... "reason": "salt inhibits gluten development"}
... ],
... timings=[
... {"phase": "bulk fermentation", "duration": "4-6 hours at room temp"}
... ]
... )
"""
if base_tags is None:
base_tags = []
procedure_tag = procedure_name.lower().replace(" ", "-")
all_tags = base_tags + [procedure_tag]
cards = []
# Create transition cards
if transitions:
for trans in transitions:
content = f"# When do you know it's time to {trans['next_step']} in {procedure_name}?\n---\n{trans['condition']}"
card = api.create_card(
content=content,
deck_id=deck_id,
manual_tags=all_tags + ["transitions"]
)
cards.append(card)
# Create rationale cards
if rationales:
for rat in rationales:
content = f"# Why do you {rat['action']} in {procedure_name}?\n---\n{rat['reason']}"
card = api.create_card(
content=content,
deck_id=deck_id,
manual_tags=all_tags + ["rationale"]
)
cards.append(card)
# Create timing cards
if timings:
for timing in timings:
content = f"# How long does {timing['phase']} take in {procedure_name}?\n---\n{timing['duration']}"
card = api.create_card(
content=content,
deck_id=deck_id,
manual_tags=all_tags + ["timing"]
)
cards.append(card)
return cards
class MochiAPI:
"""Client for interacting with the Mochi.cards API."""
BASE_URL = "https://app.mochi.cards/api/"
def __init__(self, api_key: Optional[str] = None):
"""
Initialize the Mochi API client.
Args:
api_key: Mochi API key. If not provided, reads from MOCHI_API_KEY environment variable.
Raises:
MochiAPIError: If no API key is provided or found in environment.
"""
self.api_key = api_key or os.getenv("MOCHI_API_KEY")
if not self.api_key:
raise MochiAPIError(
"No API key provided. Set MOCHI_API_KEY environment variable or pass api_key parameter."
)
self.auth = HTTPBasicAuth(self.api_key, "")
self.headers = {
"Content-Type": "application/json",
"Accept": "application/json"
}
def _request(self, method: str, endpoint: str, data: Optional[Dict] = None, params: Optional[Dict] = None) -> Any:
"""
Make a request to the Mochi API.
Args:
method: HTTP method (GET, POST, DELETE)
endpoint: API endpoint path
data: Request body data
params: Query parameters
Returns:
Response data as dict or None for DELETE requests
Raises:
MochiAPIError: If the request fails
"""
url = urljoin(self.BASE_URL, endpoint.lstrip("/"))
try:
response = requests.request(
method=method,
url=url,
auth=self.auth,
headers=self.headers,
json=data,
params=params,
timeout=30
)
if response.status_code == 204:
return None
if not response.ok:
error_data = {}
try:
error_data = response.json()
                except ValueError:
                    # Response body is not JSON; fall back to the status code alone
                    pass
error_msg = f"API request failed with status {response.status_code}"
if "errors" in error_data:
error_msg += f": {error_data['errors']}"
raise MochiAPIError(error_msg)
if response.text:
return response.json()
return None
except requests.RequestException as e:
            raise MochiAPIError(f"Network error: {e}") from e
# Card operations
def create_card(
self,
content: str,
deck_id: str,
template_id: Optional[str] = None,
fields: Optional[Dict[str, Dict[str, str]]] = None,
archived: bool = False,
review_reverse: bool = False,
pos: Optional[str] = None,
manual_tags: Optional[List[str]] = None
) -> Dict:
"""
Create a new card.
Args:
content: Markdown content of the card
deck_id: ID of the deck to add the card to
template_id: Optional template ID to use
fields: Optional dict of field IDs to field values
archived: Whether the card is archived
review_reverse: Whether to review in reverse order
pos: Relative position within deck (lexicographic sorting)
manual_tags: List of tags (without # prefix)
Returns:
Created card data
Example:
>>> api = MochiAPI()
>>> card = api.create_card(
... content="# What is Python?\\n---\\nA high-level programming language",
... deck_id="abc123",
... manual_tags=["programming", "python"]
... )
"""
data = {
"content": content,
"deck-id": deck_id,
"archived?": archived,
"review-reverse?": review_reverse
}
if template_id:
data["template-id"] = template_id
if fields:
data["fields"] = fields
if pos:
data["pos"] = pos
if manual_tags:
data["manual-tags"] = manual_tags
return self._request("POST", "/cards/", data=data)
def get_card(self, card_id: str) -> Dict:
"""
Retrieve a card by ID.
Args:
card_id: The card ID
Returns:
Card data
"""
return self._request("GET", f"/cards/{card_id}")
def list_cards(
self,
deck_id: Optional[str] = None,
limit: int = 10,
bookmark: Optional[str] = None
) -> Dict:
"""
List cards with optional filtering and pagination.
Args:
deck_id: Optional deck ID to filter by
limit: Number of cards per page (1-100, default 10)
bookmark: Pagination cursor from previous request
Returns:
Dict with 'docs' (list of cards) and 'bookmark' (for next page)
"""
params = {"limit": limit}
if deck_id:
params["deck-id"] = deck_id
if bookmark:
params["bookmark"] = bookmark
return self._request("GET", "/cards/", params=params)
def update_card(
self,
card_id: str,
content: Optional[str] = None,
deck_id: Optional[str] = None,
template_id: Optional[str] = None,
fields: Optional[Dict[str, Dict[str, str]]] = None,
archived: Optional[bool] = None,
trashed: Optional[str] = None,
review_reverse: Optional[bool] = None,
pos: Optional[str] = None,
manual_tags: Optional[List[str]] = None
) -> Dict:
"""
Update an existing card.
Args:
card_id: The card ID to update
content: New markdown content
deck_id: Move to different deck
template_id: Change template
fields: Update field values
archived: Archive/unarchive
trashed: ISO 8601 timestamp to trash, or None to untrash
review_reverse: Update reverse review setting
pos: Update position
manual_tags: Update tags (replaces existing)
Returns:
Updated card data
"""
data = {}
if content is not None:
data["content"] = content
if deck_id is not None:
data["deck-id"] = deck_id
if template_id is not None:
data["template-id"] = template_id
if fields is not None:
data["fields"] = fields
if archived is not None:
data["archived?"] = archived
if trashed is not None:
data["trashed?"] = trashed
if review_reverse is not None:
data["review-reverse?"] = review_reverse
if pos is not None:
data["pos"] = pos
if manual_tags is not None:
data["manual-tags"] = manual_tags
return self._request("POST", f"/cards/{card_id}", data=data)
def delete_card(self, card_id: str) -> None:
"""
Permanently delete a card.
Warning: This cannot be undone. Consider using update_card with trashed parameter for soft delete.
Args:
card_id: The card ID to delete
"""
self._request("DELETE", f"/cards/{card_id}")
# Deck operations
def create_deck(
self,
name: str,
parent_id: Optional[str] = None,
sort: Optional[int] = None,
archived: bool = False,
sort_by: str = "none",
cards_view: str = "list",
show_sides: bool = True,
sort_by_direction: bool = False,
review_reverse: bool = False
) -> Dict:
"""
Create a new deck.
Args:
name: Deck name
parent_id: Optional parent deck ID for nesting
sort: Numeric sort order
archived: Whether deck is archived
sort_by: How to sort cards (none, lexicographically, created-at, updated-at, etc.)
cards_view: Display mode (list, grid, note, column)
show_sides: Show all sides of cards
sort_by_direction: Reverse sort order
review_reverse: Review cards in reverse
Returns:
Created deck data
"""
data = {
"name": name,
"archived?": archived,
"sort-by": sort_by,
"cards-view": cards_view,
"show-sides?": show_sides,
"sort-by-direction": sort_by_direction,
"review-reverse?": review_reverse
}
if parent_id:
data["parent-id"] = parent_id
if sort is not None:
data["sort"] = sort
return self._request("POST", "/decks/", data=data)
def get_deck(self, deck_id: str) -> Dict:
"""
Retrieve a deck by ID.
Args:
deck_id: The deck ID
Returns:
Deck data
"""
return self._request("GET", f"/decks/{deck_id}")
def list_decks(self, bookmark: Optional[str] = None) -> Dict:
"""
List all decks with pagination.
Args:
bookmark: Pagination cursor from previous request
Returns:
Dict with 'docs' (list of decks) and 'bookmark' (for next page)
"""
params = {}
if bookmark:
params["bookmark"] = bookmark
return self._request("GET", "/decks/", params=params)
def update_deck(
self,
deck_id: str,
name: Optional[str] = None,
parent_id: Optional[str] = None,
sort: Optional[int] = None,
archived: Optional[bool] = None,
trashed: Optional[str] = None,
sort_by: Optional[str] = None,
cards_view: Optional[str] = None,
show_sides: Optional[bool] = None,
sort_by_direction: Optional[bool] = None,
review_reverse: Optional[bool] = None
) -> Dict:
"""
Update an existing deck.
Args:
deck_id: The deck ID to update
name: New name
parent_id: Move to different parent
sort: Update sort order
archived: Archive/unarchive
trashed: ISO 8601 timestamp to trash, or None to untrash
sort_by: Update sort method
cards_view: Update view mode
show_sides: Update show sides setting
sort_by_direction: Update sort direction
review_reverse: Update reverse review setting
Returns:
Updated deck data
"""
data = {}
if name is not None:
data["name"] = name
if parent_id is not None:
data["parent-id"] = parent_id
if sort is not None:
data["sort"] = sort
if archived is not None:
data["archived?"] = archived
if trashed is not None:
data["trashed?"] = trashed
if sort_by is not None:
data["sort-by"] = sort_by
if cards_view is not None:
data["cards-view"] = cards_view
if show_sides is not None:
data["show-sides?"] = show_sides
if sort_by_direction is not None:
data["sort-by-direction"] = sort_by_direction
if review_reverse is not None:
data["review-reverse?"] = review_reverse
return self._request("POST", f"/decks/{deck_id}", data=data)
def delete_deck(self, deck_id: str) -> None:
"""
Permanently delete a deck.
Warning: This cannot be undone. Cards and child decks are NOT deleted.
Consider using update_deck with trashed parameter for soft delete.
Args:
deck_id: The deck ID to delete
"""
self._request("DELETE", f"/decks/{deck_id}")
# Template operations
def create_template(
self,
name: str,
content: str,
fields: Dict[str, Dict[str, Any]],
pos: Optional[str] = None,
style: Optional[Dict[str, str]] = None,
options: Optional[Dict[str, bool]] = None
) -> Dict:
"""
Create a new template.
Args:
name: Template name
content: Markdown content with field placeholders like << Field name >>
fields: Dict of field definitions
pos: Position for sorting
style: Style options (e.g., text-alignment)
options: Template options (e.g., show-sides-separately?)
Returns:
Created template data
Example:
>>> fields = {
... "front": {
... "id": "front",
... "name": "Front",
... "type": "text",
... "pos": "a"
... },
... "back": {
... "id": "back",
... "name": "Back",
... "type": "text",
... "pos": "b",
... "options": {"multi-line?": True}
... }
... }
>>> template = api.create_template(
... name="Basic Flashcard",
... content="# << Front >>\\n---\\n<< Back >>",
... fields=fields
... )
"""
data = {
"name": name,
"content": content,
"fields": fields
}
if pos:
data["pos"] = pos
if style:
data["style"] = style
if options:
data["options"] = options
return self._request("POST", "/templates/", data=data)
def get_template(self, template_id: str) -> Dict:
"""
Retrieve a template by ID.
Args:
template_id: The template ID
Returns:
Template data
"""
return self._request("GET", f"/templates/{template_id}")
def list_templates(self, bookmark: Optional[str] = None) -> Dict:
"""
List all templates with pagination.
Args:
bookmark: Pagination cursor from previous request
Returns:
Dict with 'docs' (list of templates) and 'bookmark' (for next page)
"""
params = {}
if bookmark:
params["bookmark"] = bookmark
return self._request("GET", "/templates/", params=params)
def main():
"""CLI interface for testing the Mochi API."""
if len(sys.argv) < 2:
print("Usage: python mochi_api.py <command> [args...]")
print("\nCommands:")
print(" list-decks - List all decks")
print(" list-cards <deck-id> - List cards in a deck")
print(" create-deck <name> - Create a new deck")
print(" create-card <deck-id> <content> - Create a card")
sys.exit(1)
try:
api = MochiAPI()
command = sys.argv[1]
if command == "list-decks":
result = api.list_decks()
print(json.dumps(result, indent=2))
elif command == "list-cards":
if len(sys.argv) < 3:
print("Error: deck-id required")
sys.exit(1)
result = api.list_cards(deck_id=sys.argv[2])
print(json.dumps(result, indent=2))
elif command == "create-deck":
if len(sys.argv) < 3:
print("Error: deck name required")
sys.exit(1)
result = api.create_deck(name=sys.argv[2])
print(json.dumps(result, indent=2))
elif command == "create-card":
if len(sys.argv) < 4:
print("Error: deck-id and content required")
sys.exit(1)
result = api.create_card(deck_id=sys.argv[2], content=sys.argv[3])
print(json.dumps(result, indent=2))
else:
print(f"Unknown command: {command}")
sys.exit(1)
except MochiAPIError as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()


@@ -0,0 +1,598 @@
# Knowledge Type Templates
This document provides ready-to-use templates for different types of knowledge. Each template includes example code and guidance on when to use it.
## Table of Contents
- [Factual Knowledge Templates](#factual-knowledge-templates)
- [Conceptual Knowledge Templates](#conceptual-knowledge-templates)
- [Procedural Knowledge Templates](#procedural-knowledge-templates)
- [List Templates](#list-templates)
- [Salience Templates](#salience-templates)
## Factual Knowledge Templates
Use when learning: names, dates, definitions, components, ingredients, vocabulary, formulas.
### Template 1: Simple Fact
```python
def create_simple_fact(api, deck_id, question, answer, tags=None):
"""Create a focused factual card."""
return api.create_card(
content=f"# {question}\n---\n{answer}",
deck_id=deck_id,
manual_tags=tags or []
)
# Example usage
create_simple_fact(
api, deck_id,
question="What is the capital of France?",
answer="Paris",
tags=["geography", "europe", "capitals"]
)
```
### Template 2: Definition
```python
def create_definition_card(api, deck_id, term, definition, example=None, tags=None):
"""Create a definition card, optionally with example."""
answer = definition
if example:
answer += f"\n\nExample: {example}"
return api.create_card(
content=f"# What is {term}?\n---\n{answer}",
deck_id=deck_id,
manual_tags=tags or [term.lower().replace(" ", "-")]
)
# Example usage
create_definition_card(
api, deck_id,
term="technical debt",
definition="Code shortcuts that save time now but cost time later",
example="Skipping tests to ship faster",
tags=["software-engineering", "concepts"]
)
```
### Template 3: Component/Ingredient Lists (Atomic)
```python
def create_component_cards(api, deck_id, whole_name, components, tags=None):
"""
    Break a list of components into individual atomic cards.
    Include a distinguishing cue in each component string (e.g., its role),
    since "name one component" prompts otherwise produce variable answers.
    Returns list of created cards.
"""
cards = []
for component in components:
card = api.create_card(
content=f"# Name one component of {whole_name}\n---\n{component}",
deck_id=deck_id,
manual_tags=(tags or []) + [whole_name.lower().replace(" ", "-")]
)
cards.append(card)
return cards
# Example usage
create_component_cards(
api, deck_id,
whole_name="chocolate chip cookies",
components=[
"Flour (primary dry ingredient)",
"Butter (fat)",
"White sugar and brown sugar (sweeteners)",
"Eggs (structure)",
"Vanilla (flavoring)",
"Baking soda (leavening)",
"Salt (balances sweetness)"
],
tags=["baking", "recipes"]
)
```
### Template 4: Comparison/Contrast
```python
def create_comparison_card(api, deck_id, item_a, item_b, difference, tags=None):
"""Create a card comparing two similar things."""
return api.create_card(
content=f"# How does {item_a} differ from {item_b}?\n---\n{difference}",
deck_id=deck_id,
manual_tags=tags or []
)
# Example usage
create_comparison_card(
api, deck_id,
item_a="list",
item_b="tuple",
difference="Lists are mutable (can be changed), tuples are immutable (cannot be changed)",
tags=["python", "data-structures"]
)
```
## Conceptual Knowledge Templates
Use when learning: principles, theories, mental models, frameworks, design patterns.
### Template 1: Five Lenses Approach
```python
def create_concept_cards_five_lenses(
api,
deck_id,
concept,
attributes,
similarities,
parts_examples,
causes_effects,
significance,
tags=None
):
"""
Create 5 cards covering a concept from multiple angles.
Returns list of created cards.
"""
base_tags = (tags or []) + [concept.lower().replace(" ", "-")]
cards = []
# Lens 1: Attributes
cards.append(api.create_card(
content=f"# What is the core attribute of {concept}?\n---\n{attributes}",
deck_id=deck_id,
manual_tags=base_tags + ["attributes"]
))
# Lens 2: Similarities/Differences
cards.append(api.create_card(
content=f"# How does {concept} differ from similar concepts?\n---\n{similarities}",
deck_id=deck_id,
manual_tags=base_tags + ["comparison"]
))
# Lens 3: Parts/Wholes/Examples
cards.append(api.create_card(
content=f"# Give a concrete example of {concept}\n---\n{parts_examples}",
deck_id=deck_id,
manual_tags=base_tags + ["examples"]
))
# Lens 4: Causes/Effects
cards.append(api.create_card(
content=f"# What problem does {concept} solve?\n---\n{causes_effects}",
deck_id=deck_id,
manual_tags=base_tags + ["benefits"]
))
# Lens 5: Significance
cards.append(api.create_card(
content=f"# When would you use {concept} in your work?\n---\n{significance}",
deck_id=deck_id,
manual_tags=base_tags + ["application"]
))
return cards
# Example usage
create_concept_cards_five_lenses(
api, deck_id,
concept="dependency injection",
attributes="Dependencies are provided from outside rather than created internally",
similarities="Different from service locator (DI pushes dependencies in, service locator pulls them out)",
parts_examples="Passing a database connection to a class constructor instead of creating it inside",
causes_effects="Makes code testable by allowing mock dependencies to be injected",
significance="Essential when writing testable FastAPI endpoints that need to swap database implementations",
tags=["design-patterns", "software-architecture"]
)
```
### Template 2: Core Principle
```python
def create_principle_card(api, deck_id, principle_name, explanation, violation_example=None, tags=None):
"""Create a card explaining a principle, optionally with a violation example."""
answer = explanation
if violation_example:
answer += f"\n\nViolation example: {violation_example}"
return api.create_card(
content=f"# What is the {principle_name} principle?\n---\n{answer}",
deck_id=deck_id,
manual_tags=tags or []
)
# Example usage
create_principle_card(
api, deck_id,
principle_name="Single Responsibility",
explanation="A class should have only one reason to change (one responsibility)",
violation_example="A User class that handles both data storage AND email sending",
tags=["solid-principles", "oop"]
)
```
### Template 3: Mental Model
```python
def create_mental_model_card(api, deck_id, model_name, analogy, application, tags=None):
"""Create a card for a mental model using analogy."""
return api.create_card(
content=f"# How does the '{model_name}' mental model work?\n---\n{analogy}\n\nApplication: {application}",
deck_id=deck_id,
manual_tags=tags or []
)
# Example usage
create_mental_model_card(
api, deck_id,
model_name="Second-order thinking",
analogy="Don't just ask 'What happens next?' Ask 'What happens after that?'",
application="Before making a decision, consider not just immediate effects but downstream consequences",
tags=["mental-models", "decision-making"]
)
```
## Procedural Knowledge Templates
Use when learning: processes, workflows, algorithms, techniques, recipes.
### Template 1: Transition Card
```python
def create_transition_card(api, deck_id, procedure, condition, next_step, tags=None):
"""Create a card about when to transition between steps."""
return api.create_card(
content=f"# When do you know it's time to {next_step} in {procedure}?\n---\n{condition}",
deck_id=deck_id,
manual_tags=(tags or []) + [procedure.lower().replace(" ", "-"), "transitions"]
)
# Example usage
create_transition_card(
api, deck_id,
procedure="sourdough bread making",
condition="After 30-60 minutes when flour is fully hydrated",
next_step="add salt",
tags=["baking", "sourdough"]
)
```
### Template 2: Rationale Card
```python
def create_rationale_card(api, deck_id, procedure, action, reason, tags=None):
"""Create a card about WHY a step is done."""
return api.create_card(
content=f"# Why do you {action} in {procedure}?\n---\n{reason}",
deck_id=deck_id,
manual_tags=(tags or []) + [procedure.lower().replace(" ", "-"), "rationale"]
)
# Example usage
create_rationale_card(
api, deck_id,
procedure="git workflow",
action="pull before creating a feature branch",
reason="To start from the latest changes and avoid merge conflicts later",
tags=["git", "version-control"]
)
```
### Template 3: Timing/Duration Card
```python
def create_timing_card(api, deck_id, procedure, phase, duration, tags=None):
"""Create a card about how long something takes (heads-up information)."""
return api.create_card(
content=f"# How long does {phase} take in {procedure}?\n---\n{duration}",
deck_id=deck_id,
manual_tags=(tags or []) + [procedure.lower().replace(" ", "-"), "timing"]
)
# Example usage
create_timing_card(
api, deck_id,
procedure="sourdough bread making",
phase="bulk fermentation",
duration="4-6 hours at room temperature (temperature-dependent)",
tags=["baking", "sourdough"]
)
```
### Template 4: Condition Recognition
```python
def create_condition_card(api, deck_id, procedure, goal, indicators, tags=None):
"""Create a card about recognizing when a condition is met."""
return api.create_card(
content=f"# What indicates {goal} in {procedure}?\n---\n{indicators}",
deck_id=deck_id,
manual_tags=(tags or []) + [procedure.lower().replace(" ", "-"), "conditions"]
)
# Example usage
create_condition_card(
api, deck_id,
procedure="sourdough bread making",
goal="dough is ready for shaping",
indicators="50-100% volume increase, jiggly texture, small bubbles on surface",
tags=["baking", "sourdough"]
)
```
### Template 5: Complete Procedural Set
```python
def create_procedure_cards(
api,
deck_id,
procedure_name,
transitions,
rationales,
timings,
conditions=None,
tags=None
):
"""
Create a complete set of procedural knowledge cards.
Args:
transitions: List of {"condition": str, "next_step": str}
rationales: List of {"action": str, "reason": str}
timings: List of {"phase": str, "duration": str}
conditions: Optional list of {"goal": str, "indicators": str}
Returns:
List of created cards
"""
cards = []
base_tags = tags or []
for trans in transitions:
cards.append(create_transition_card(
api, deck_id, procedure_name,
trans["condition"], trans["next_step"], base_tags
))
for rat in rationales:
cards.append(create_rationale_card(
api, deck_id, procedure_name,
rat["action"], rat["reason"], base_tags
))
for timing in timings:
cards.append(create_timing_card(
api, deck_id, procedure_name,
timing["phase"], timing["duration"], base_tags
))
if conditions:
for cond in conditions:
cards.append(create_condition_card(
api, deck_id, procedure_name,
cond["goal"], cond["indicators"], base_tags
))
return cards
# Example usage
create_procedure_cards(
api, deck_id,
procedure_name="deploying to production",
transitions=[
{"condition": "All tests pass and code is reviewed", "next_step": "merge to main"},
{"condition": "CI/CD pipeline completes successfully", "next_step": "deploy to staging"},
],
rationales=[
{"action": "deploy to staging before production", "reason": "Catch environment-specific issues in a safe environment"},
{"action": "run database migrations before deploying code", "reason": "Ensure schema changes are in place before code expects them"},
],
timings=[
{"phase": "CI/CD pipeline", "duration": "5-10 minutes"},
{"phase": "staging deployment", "duration": "2-3 minutes"},
],
conditions=[
{"goal": "staging is ready for production deployment", "indicators": "Smoke tests pass, no errors in logs, performance is acceptable"},
],
tags=["devops", "deployment"]
)
```
## List Templates
Use when learning: lists of items (continents, design patterns, HTTP methods, etc.)
### Template 1: Closed List (Cloze Deletion)
```python
def create_closed_list_cards(api, deck_id, list_name, items, tags=None):
"""
Create cloze deletion cards for a closed list (fixed members).
One card per item, each showing all others.
"""
cards = []
for i, item in enumerate(items):
others = ", ".join([it for j, it in enumerate(items) if j != i])
cards.append(api.create_card(
content=f"# Name all {len(items)} {list_name}\n---\n{others}, __{item}__",
deck_id=deck_id,
pos=chr(97 + i), # 'a', 'b', 'c'... for consistent ordering
manual_tags=(tags or []) + [list_name.lower().replace(" ", "-")]
))
return cards
# Example usage
create_closed_list_cards(
api, deck_id,
list_name="HTTP methods",
items=["GET", "POST", "PUT", "DELETE", "PATCH"],
tags=["http", "rest-api"]
)
```
### Template 2: Open List (Instance → Category)
```python
def create_open_list_cards(api, deck_id, category, instances, tags=None):
"""
Create cards linking instances to an open category.
Don't try to memorize all members - focus on classification.
"""
cards = []
for instance, details in instances.items():
cards.append(api.create_card(
content=f"# What {category} category does {instance} belong to?\n---\n{details}",
deck_id=deck_id,
manual_tags=(tags or []) + [category.lower().replace(" ", "-")]
))
return cards
# Example usage
create_open_list_cards(
api, deck_id,
category="design pattern",
instances={
"Observer": "Behavioral (defines communication between objects)",
"Factory": "Creational (handles object creation)",
"Adapter": "Structural (adapts interfaces)",
},
tags=["design-patterns"]
)
```
## Salience Templates
Use when: You want to apply knowledge, not just remember it. Keep ideas top-of-mind.
**Warning**: Experimental. Less research backing than standard retrieval prompts.
### Template 1: Context Application
```python
def create_context_application_card(api, deck_id, concept, context_question, tags=None):
"""
Create a card prompting application in current context.
Answer will vary based on what you're working on.
"""
return api.create_card(
content=f"# {context_question}\n---\n(Give an answer specific to your current work context - answer may vary)",
deck_id=deck_id,
manual_tags=(tags or []) + [concept.lower().replace(" ", "-"), "application"],
review_reverse=False
)
# Example usage
create_context_application_card(
api, deck_id,
concept="first principles thinking",
context_question="What's one situation this week where you could apply first principles thinking?",
tags=["mental-models"]
)
```
### Template 2: Implication Prompt
```python
def create_implication_card(api, deck_id, concept, implication_question, tags=None):
"""
Create a card about implications or assumptions.
Drives deeper thinking about the concept's relevance.
"""
return api.create_card(
content=f"# {implication_question}\n---\n(Identify specific implications or assumptions - answer will vary)",
deck_id=deck_id,
manual_tags=(tags or []) + [concept.lower().replace(" ", "-"), "reflection"]
)
# Example usage
create_implication_card(
api, deck_id,
concept="technical debt",
implication_question="What's one assumption you're making in your current project that might be creating technical debt?",
tags=["software-engineering"]
)
```
### Template 3: Creative Generation
```python
def create_creative_generation_card(api, deck_id, concept, generation_prompt, tags=None):
"""
Create a card that asks for a novel answer each time.
Leverages the generation effect.
"""
return api.create_card(
content=f"# {generation_prompt}\n---\n(Novel answer each time - give an answer you haven't given before)",
deck_id=deck_id,
manual_tags=(tags or []) + [concept.lower().replace(" ", "-"), "creative"]
)
# Example usage
create_creative_generation_card(
api, deck_id,
concept="dependency injection",
generation_prompt="Describe a way to apply dependency injection that you haven't mentioned before",
tags=["design-patterns"]
)
```
## Quick Reference: When to Use Which Template
| Knowledge Type | Template | Best For |
|---------------|----------|----------|
| **Facts** | Simple Fact | Names, dates, single facts |
| **Facts** | Definition | Terms and their meanings |
| **Facts** | Component Cards | Lists of ingredients, parts, components |
| **Facts** | Comparison | Distinguishing similar concepts |
| **Concepts** | Five Lenses | Deep understanding of ideas |
| **Concepts** | Core Principle | Design principles, rules |
| **Concepts** | Mental Model | Frameworks for thinking |
| **Procedures** | Transition | When to move between steps |
| **Procedures** | Rationale | Why steps are done |
| **Procedures** | Timing | How long things take |
| **Procedures** | Condition | Recognizing states/readiness |
| **Lists** | Closed List | Fixed members (continents, HTTP methods) |
| **Lists** | Open List | Evolving categories (design patterns) |
| **Application** | Context Application | Applying concepts to work |
| **Application** | Implication | Deeper thinking about relevance |
| **Application** | Creative Generation | Novel applications |
## Validation Workflow
Before creating any card from these templates:
1. **Check focus**: Does it test exactly one detail?
2. **Check precision**: Is the question specific?
3. **Check consistency**: Will it produce the same answer each time? (Unless it's a creative prompt)
4. **Check tractability**: Can you answer correctly ~90% of the time?
5. **Check effort**: Does it require actual memory retrieval?
6. **Check emotion**: Do you actually care about this?
Use the `validate_prompt_quality()` function from mochi_api.py:
```python
from scripts.mochi_api import validate_prompt_quality
result = validate_prompt_quality(
question="What is dependency injection?",
answer="A design pattern where dependencies are provided from outside"
)
if not result["valid"]:
print("Issues found:")
for issue in result["issues"]:
print(f" - {issue}")
print("\nSuggestions:")
for suggestion in result["suggestions"]:
print(f" - {suggestion}")
```
---
For deeper understanding of the principles behind these templates, see:
- SKILL.md - Main skill documentation with workflows
- references/prompt_design_principles.md - Research and cognitive science background