Initial commit

Zhongwei Li
2025-11-29 18:02:18 +08:00
commit 9dede11447
13 changed files with 4585 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,15 @@
{
  "name": "meta",
  "description": "Meta plugin for generating project-specific workflow commands (research, plan, implement, qa) and interactive context gathering (interview)",
  "version": "1.0.0",
  "author": {
    "name": "Bradley Golden",
    "url": "https://github.com/bradleygolden"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}

README.md Normal file

@@ -0,0 +1,3 @@
# meta
Meta plugin for generating project-specific workflow commands (research, plan, implement, qa) and interactive context gathering (interview)

commands/workflow-generator.md Normal file

@@ -0,0 +1,79 @@
---
description: Generate project-specific workflow commands (research, plan, implement, qa)
argument-hint: ""
allowed-tools: Skill
---
# Workflow Generator Command
This command invokes the workflow-generator skill to create a complete set of customized workflow commands for your project.
## What This Does
Generates a set of workflow commands tailored to your project:
- `/research` - Document the codebase and answer questions
- `/plan` - Create detailed implementation plans
- `/implement` - Execute plans with verification
- `/qa` - Validate implementation quality
- `/oneshot` - Run the complete workflow in one command
- `/interview` - Gather context through interactive questioning
## Execution
Invoke the workflow-generator skill from the meta plugin:
```
Execute the workflow-generator skill, which will:
1. Discover project context (tech stack, build tools, structure)
2. Ask customization questions about your project
3. Generate customized workflow commands
4. Create supporting documentation
5. Provide usage instructions
The skill has full autonomy to ask questions using the AskUserQuestion tool and to generate all necessary files.
```
Invoke the skill:
```
Skill(command="workflow-generator")
```
## Important Notes
- **First-time setup**: This command generates the entire workflow system
- **Re-generation**: Run again to regenerate with different settings (will overwrite existing commands)
- **Customization**: After generation, you can manually edit generated commands in `.claude/commands/`
- **No arguments needed**: The skill handles everything interactively
## What Gets Generated
```
.claude/
├── commands/
│   ├── research.md      # Customized research command
│   ├── plan.md          # Customized planning command
│   ├── implement.md     # Customized implementation command
│   ├── qa.md            # Customized QA command
│   ├── oneshot.md       # Complete workflow in one command
│   └── interview.md     # Interactive context gathering
└── [your-docs-location]/    # Documentation directories
    ├── research/
    └── plans/

[WORKFLOWS.md location]      # Complete workflow documentation
                             # (You'll choose where during generation)
```
**Note**: You'll be asked where to save WORKFLOWS.md during generation (options include `.claude/`, project root, `docs/`, etc.)
## After Generation
Once complete, you can use your new commands:
```bash
/research "your question"
/plan "feature description"
/implement "plan-name"
/qa
```
See your WORKFLOWS.md file (location chosen during generation) for full documentation.

plugin.lock.json Normal file

@@ -0,0 +1,81 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:bradleygolden/claude-marketplace-elixir:plugins/meta",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "8ad38fe9a0485ffcb70d669833df4ee5b8ff6dec",
    "treeHash": "05a3f7565c5dadf16336280fa6ff4af0d49d927c1d4c5d83b17a780a08c174f1",
    "generatedAt": "2025-11-28T10:14:23.725969Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "meta",
    "description": "Meta plugin for generating project-specific workflow commands (research, plan, implement, qa) and interactive context gathering (interview)",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "9ab02756a963d74d869d489ba4f5bc459bb2201d9ce7e06faf120455cce0a236"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "61a4de4010fb28165b0e186254236fcdd36d2dd9d11533ef20de954fc4fc46a2"
      },
      {
        "path": "commands/workflow-generator.md",
        "sha256": "414a7fb203154d59d22fa16e1af9856df66884f75ecd32c94c37b0c44a0cb8a7"
      },
      {
        "path": "skills/workflow-generator/SKILL.md",
        "sha256": "8203ad6bec117cfb1b8557a0b7a2e9880d3c2c958f147cc2afdaf320ad88df58"
      },
      {
        "path": "skills/workflow-generator/templates/implement-template.md",
        "sha256": "76867789855bb6509842796ad954adfc3c7fb2d7e281fb9527fd7d8d99afded4"
      },
      {
        "path": "skills/workflow-generator/templates/interview-template.md",
        "sha256": "1f4baed51b4eb816ed6f64560f2613a074c96c73d08897710e6d7e57d03f457c"
      },
      {
        "path": "skills/workflow-generator/templates/finder-agent-template.md",
        "sha256": "19cfb89ac18c475eccdbbcaa9e4d35b0b71b6d55dce6d690a2afe43e7b14353d"
      },
      {
        "path": "skills/workflow-generator/templates/oneshot-template.md",
        "sha256": "1596a484e627bceaff9a6476afe015c7ef16e6e4dd4257a4f2a38022ae4944b5"
      },
      {
        "path": "skills/workflow-generator/templates/analyzer-agent-template.md",
        "sha256": "a0ccd32108fa774a72869ecf2998b92a5cb99f06d45424f85861b2f75ecb0ac3"
      },
      {
        "path": "skills/workflow-generator/templates/research-template.md",
        "sha256": "ea01348125146f489b68bbb5ce1efb78bccbdf5017805563f2593be1a4929be4"
      },
      {
        "path": "skills/workflow-generator/templates/plan-template.md",
        "sha256": "e53ce1c79a03cb3e2bb74a89494bc3c2b6e801d11937d9d4271c004c8151c005"
      },
      {
        "path": "skills/workflow-generator/templates/qa-template.md",
        "sha256": "55f90967c2368a41d46269b9a2336476807311884dcd2f718751e7d700c11c9e"
      }
    ],
    "dirSha256": "05a3f7565c5dadf16336280fa6ff4af0d49d927c1d4c5d83b17a780a08c174f1"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}

skills/workflow-generator/SKILL.md Normal file

@@ -0,0 +1,842 @@
---
name: workflow-generator
description: Generate project-specific workflow commands (research, plan, implement, qa) by asking questions about the project and creating customized commands
allowed-tools: Read, Write, Edit, Glob, Grep, Bash, AskUserQuestion, TodoWrite
---
# Workflow Generator Skill
This skill generates a complete set of project-specific workflow commands by asking questions about your project and creating customized `/research`, `/plan`, `/implement`, `/qa`, `/oneshot`, and `/interview` commands.
## Purpose
Create a standardized workflow system adapted to any project's:
- Tech stack and language
- Build/test commands
- Documentation structure
- Quality gates and validation criteria
- Planning methodology
## Template System
### Template Path Resolution
Templates are stored in `plugins/meta/skills/workflow-generator/templates/`. When this skill uses the Read tool:
- **During marketplace development**: Paths are relative to repository root
- **When plugin is installed**: Claude Code resolves paths relative to plugin installation location
- Template paths in this skill use the format: `plugins/meta/skills/workflow-generator/templates/<template-name>.md`
### Template Syntax
Templates use two types of variable substitution:
**1. Simple Variable Substitution**
Replace `{{VARIABLE}}` with actual values:
```
{{PROJECT_TYPE}} → "Phoenix Application"
{{DOCS_LOCATION}} → ".thoughts"
{{TEST_COMMAND}} → "mix test"
```
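For intuition, here is a minimal Elixir sketch of this kind of substitution (a hypothetical helper, not part of the plugin; it assumes a flat map of variable names to string values):
```elixir
# Hypothetical helper: replace each {{NAME}} occurrence with its value.
defmodule TemplateSub do
  def render(template, vars) do
    Enum.reduce(vars, template, fn {name, value}, acc ->
      String.replace(acc, "{{#{name}}}", value)
    end)
  end
end

TemplateSub.render("Run {{TEST_COMMAND}}", %{"TEST_COMMAND" => "mix test"})
# => "Run mix test"
```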
**2. Handlebars Conditionals** (in some templates)
Templates may use Handlebars syntax for conditional content:
```
{{#if PLANNING_STYLE equals "Detailed phases"}}
Phase-based content
{{/if}}
{{#if PLANNING_STYLE equals "Task checklist"}}
Checklist-based content
{{/if}}
```
**How to handle Handlebars syntax:**
- Handlebars conditionals are preserved in generated commands (not substituted)
- The generated command will contain the conditional logic
- Claude will evaluate conditionals when the command is executed
- This allows generated commands to adapt behavior based on context
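For intuition only, a hypothetical Elixir sketch of how such an `equals` conditional could be resolved against known variable values (the skill itself leaves this evaluation to Claude at command runtime):
```elixir
defmodule ConditionalSketch do
  # Hypothetical: keep the body of {{#if VAR equals "value"}}...{{/if}}
  # when vars[VAR] matches the quoted value; otherwise drop it.
  def resolve(text, vars) do
    Regex.replace(
      ~r/\{\{#if (\w+) equals "([^"]+)"\}\}(.*?)\{\{\/if\}\}/s,
      text,
      fn _whole, var, expected, body ->
        if Map.get(vars, var) == expected, do: body, else: ""
      end
    )
  end
end
```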
## Execution Flow
When invoked, this skill will:
1. **Discover Project Context**
2. **Ask Customization Questions**
3. **Generate Workflow Commands**
4. **Create Supporting Documentation**
5. **Provide Usage Instructions**
---
## Step 1: Discover Project Context
### 1.1 Detect Project Type
Analyze the current directory to understand the project:
```bash
# Check for common project markers
[ -f package.json ] && echo "nodejs"
[ -f mix.exs ] && echo "elixir"
[ -f Cargo.toml ] && echo "rust"
[ -f go.mod ] && echo "go"
{ [ -f setup.py ] || [ -f pyproject.toml ]; } && echo "python"
{ [ -f pom.xml ] || [ -f build.gradle ]; } && echo "java"
```
### 1.2 Check Existing Structure
```bash
# Check for existing .claude directory
ls .claude/commands/*.md 2>/dev/null | wc -l
ls .claude/agents/*.md 2>/dev/null | wc -l
```
### 1.3 Detect Build Tools
Look for common build/test patterns:
- Makefile with test target
- package.json with scripts
- mix.exs with test tasks
- Cargo.toml
- go.mod
---
## Step 2: Ask Customization Questions
Use TodoWrite to track progress:
```
1. [in_progress] Discover project context
2. [pending] Ask customization questions
3. [pending] Generate /research command
4. [pending] Generate /plan command
5. [pending] Generate /implement command
6. [pending] Generate /qa command
7. [pending] Generate /oneshot command
8. [pending] Generate /interview command
9. [pending] Create documentation
10. [pending] Present usage instructions
```
Mark step 1 completed, step 2 in progress.
### Question 1: Elixir Project Type
**Header**: "Project Type"
**Question**: "What type of Elixir project is this?"
**multiSelect**: false
**Options**:
1. Label: "Phoenix Application", Description: "Phoenix web application (full-stack, API, LiveView)"
2. Label: "Library/Package", Description: "Reusable Hex package or library"
3. Label: "CLI/Escript", Description: "Command-line application or escript"
4. Label: "Umbrella Project", Description: "Umbrella project with multiple apps"
### Question 2: Test Strategy
**Header**: "Testing"
**Question**: "How do you run tests?"
**multiSelect**: false
**Options**:
1. Label: "mix test", Description: "Standard Mix test command"
2. Label: "make test", Description: "Using Makefile with test target"
3. Label: "Custom script", Description: "Custom test script or command"
### Question 3: Documentation Location
**Header**: "Docs Location"
**Question**: "Where should workflow documents (research, plans) be saved?"
**multiSelect**: false
**Options**:
1. Label: ".thoughts/", Description: "Hidden .thoughts directory (default pattern)"
2. Label: "docs/", Description: "Standard docs directory"
3. Label: ".claude/thoughts/", Description: "Inside .claude directory"
4. Label: "thoughts/", Description: "Visible thoughts directory"
### Question 4: Quality Tools
**Header**: "Quality Tools"
**Question**: "Which Elixir quality tools do you use?"
**multiSelect**: true
**Options**:
1. Label: "Credo", Description: "Static code analysis with Credo"
2. Label: "Dialyzer", Description: "Type checking with Dialyzer"
3. Label: "Sobelow", Description: "Security scanning for Phoenix apps"
4. Label: "ExDoc", Description: "Documentation validation"
5. Label: "mix_audit", Description: "Dependency security audit"
6. Label: "Format check", Description: "Code formatting validation"
### Question 5: Planning Style
**Header**: "Planning"
**Question**: "How detailed should implementation plans be?"
**multiSelect**: false
**Options**:
1. Label: "Detailed phases", Description: "Break work into numbered phases with specific file changes"
2. Label: "Task checklist", Description: "Simple checklist of tasks to complete"
3. Label: "Milestone-based", Description: "Organize by milestones/deliverables"
### Question 6: WORKFLOWS.md Location
**Header**: "Workflows Doc"
**Question**: "Where should the WORKFLOWS.md documentation file be saved?"
**multiSelect**: false
**Options**:
1. Label: ".claude/WORKFLOWS.md", Description: "Inside .claude directory (recommended)"
2. Label: "WORKFLOWS.md", Description: "Project root directory"
3. Label: "docs/WORKFLOWS.md", Description: "Inside docs directory"
4. Label: "README-WORKFLOWS.md", Description: "Project root with README prefix"
---
## Step 3: Generate /research Command
Mark step 3 as in_progress in TodoWrite.
**Read the template:**
- Use Read tool to read `plugins/meta/skills/workflow-generator/templates/research-template.md`
- This contains the full command structure with placeholders
**Perform variable substitution:**
Replace these variables in the template:
- `{{PROJECT_TYPE}}` → Answer from Question 1 (e.g., "Phoenix Application")
- `{{DOCS_LOCATION}}` → Answer from Question 3 (e.g., ".thoughts")
- `{{PROJECT_TYPE_TAGS}}` → Tags based on project type:
- Phoenix Application → "phoenix, web, elixir"
- Library/Package → "library, hex, elixir"
- CLI/Escript → "cli, escript, elixir"
- Umbrella Project → "umbrella, multi-app, elixir"
- `{{PROJECT_TYPE_SPECIFIC}}` → Project-specific component types for agent prompts:
- Phoenix Application → "Phoenix contexts, controllers, LiveViews, and schemas"
- Library/Package → "public API modules and functions"
- CLI/Escript → "CLI commands, Mix tasks, and escript entry points"
- Umbrella Project → "umbrella apps and shared modules"
**Write customized command:**
- Use Write tool to create `.claude/commands/research.md`
- Content is the template with all variables substituted
Mark step 3 completed in TodoWrite.
---
## Step 4: Generate /plan Command
Mark step 4 as in_progress in TodoWrite.
**Read the template:**
- Use Read tool to read `plugins/meta/skills/workflow-generator/templates/plan-template.md`
- This contains the full command structure with placeholders
**Perform variable substitution:**
Replace these variables in the template:
- `{{PLANNING_STYLE}}` → Answer from Question 5 (e.g., "Detailed phases")
- `{{DOCS_LOCATION}}` → Answer from Question 3 (e.g., ".thoughts")
- `{{TEST_COMMAND}}` → Answer from Question 2, map to actual command:
- "mix test" → "mix test"
- "make test" → "make test"
- "Custom script" → Ask user what command to use
- `{{PROJECT_TYPE}}` → Answer from Question 1
- `{{PROJECT_TYPE_TAGS}}` → Same tags as research command
- `{{QUALITY_TOOLS_CHECKS}}` → Expand based on Question 4 answers (see the sketch after this list):
- If "Credo" selected: Add "- [ ] `mix credo --strict` passes"
- If "Dialyzer" selected: Add "- [ ] `mix dialyzer` passes"
- If "Sobelow" selected: Add "- [ ] `mix sobelow --exit Low` passes"
- If "Format check" selected: Add "- [ ] `mix format --check-formatted` passes"
- If "ExDoc" selected: Add "- [ ] `mix docs` succeeds (no warnings)"
- If "mix_audit" selected: Add "- [ ] `mix deps.audit` passes"
- `{{QUALITY_TOOLS_EXAMPLES}}` → Same expansion as checks but as examples
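A minimal Elixir sketch of this expansion (hypothetical; `selected_tools` stands for the Question 4 answers, and the checklist strings are the ones listed above):
```elixir
# Hypothetical expansion: map each selected quality tool to its checklist line.
checks = %{
  "Credo" => "- [ ] `mix credo --strict` passes",
  "Dialyzer" => "- [ ] `mix dialyzer` passes",
  "Sobelow" => "- [ ] `mix sobelow --exit Low` passes",
  "Format check" => "- [ ] `mix format --check-formatted` passes",
  "ExDoc" => "- [ ] `mix docs` succeeds (no warnings)",
  "mix_audit" => "- [ ] `mix deps.audit` passes"
}

quality_tools_checks =
  selected_tools
  |> Enum.map(&Map.fetch!(checks, &1))
  |> Enum.join("\n")
```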
**Write customized command:**
- Use Write tool to create `.claude/commands/plan.md`
- Content is the template with all variables substituted
Mark step 4 completed in TodoWrite.
---
## Step 5: Generate /implement Command
Mark step 5 as in_progress in TodoWrite.
**Read the template:**
- Use Read tool to read `plugins/meta/skills/workflow-generator/templates/implement-template.md`
- This contains the full command structure with placeholders
**Perform variable substitution:**
Replace these variables in the template:
- `{{DOCS_LOCATION}}` → Answer from Question 3
- `{{TEST_COMMAND}}` → Actual test command from Question 2
- `{{VERIFICATION_COMMANDS}}` → Per-phase verification commands:
```bash
# Always include:
mix compile --warnings-as-errors
{{TEST_COMMAND}}
mix format --check-formatted
# Add if selected in Question 4:
mix credo --strict # if Credo
mix dialyzer # if Dialyzer
```
- `{{FULL_VERIFICATION_SUITE}}` → All quality checks expanded:
- Always: compile, test, format
- Conditionally: Credo, Dialyzer, Sobelow, ExDoc, mix_audit (if selected)
- `{{QUALITY_TOOLS_SUMMARY}}` → Summary line for each enabled tool
- `{{OPTIONAL_QUALITY_CHECKS}}` → Per-phase optional checks if tools enabled
**Write customized command:**
- Use Write tool to create `.claude/commands/implement.md`
- Content is the template with all variables substituted
Mark step 5 completed in TodoWrite.
---
## Step 6: Generate /qa Command
Mark step 6 as in_progress in TodoWrite.
**Read the template:**
- Use Read tool to read `plugins/meta/skills/workflow-generator/templates/qa-template.md`
- This contains the full command structure with placeholders
**Perform variable substitution:**
Replace these variables in the template:
- `{{PROJECT_TYPE}}` → Answer from Question 1
- `{{TEST_COMMAND}}` → Actual test command from Question 2
- `{{DOCS_LOCATION}}` → Answer from Question 3
- `{{PROJECT_TYPE_TAGS}}` → Same tags as research command
- `{{QUALITY_TOOL_COMMANDS}}` → Expand quality tool commands based on Question 4:
```bash
# Add for each selected tool:
mix credo --strict # if Credo
mix dialyzer # if Dialyzer
mix sobelow --exit Low # if Sobelow
mix format --check-formatted # if Format check
mix docs # if ExDoc
mix deps.audit # if mix_audit
```
- `{{QUALITY_TOOLS_RESULTS_SUMMARY}}` → Summary lines for report template
- `{{QUALITY_TOOLS_DETAILED_RESULTS}}` → Detailed result sections for enabled tools
- `{{SUCCESS_CRITERIA_CHECKLIST}}` → Checklist items for each enabled tool
- `{{PROJECT_TYPE_SPECIFIC_OBSERVATIONS}}` → Phoenix/Ecto/OTP observations based on project type
- `{{QUALITY_TOOL_INTEGRATION_GUIDE}}` → Integration guidance for enabled tools
- `{{QUALITY_TOOLS_SUMMARY_DISPLAY}}` → Display format for enabled tools in final output
**Write customized command:**
- Use Write tool to create `.claude/commands/qa.md`
- Content is the template with all variables substituted
Mark step 6 completed in TodoWrite.
---
## Step 7: Generate /oneshot Command
Mark step 7 as in_progress in TodoWrite.
**Read the template:**
- Use Read tool to read `plugins/meta/skills/workflow-generator/templates/oneshot-template.md`
- This contains the full command structure with placeholders
**Perform variable substitution:**
Replace these variables in the template:
- `{{FEATURE_DESCRIPTION}}` → User's feature description (from $ARGUMENTS)
- `{{DOCS_LOCATION}}` → Answer from Question 3
- `{{TEST_COMMAND}}` → Actual test command from Question 2
- `{{PROJECT_TYPE}}` → Answer from Question 1
- `{{QUALITY_TOOLS}}` → Boolean check if any quality tools selected
- `{{QUALITY_TOOLS_SUMMARY}}` → Comma-separated list of enabled tools
- `{{QUALITY_TOOLS_STATUS}}` → Status summary for each tool
- `{{QUALITY_TOOLS_LIST}}` → List format of enabled tools
- `{{QA_PASSED}}` → Conditional variable for success/failure branching
**Write customized command:**
- Use Write tool to create `.claude/commands/oneshot.md`
- Content is the template with all variables substituted
Mark step 7 completed in TodoWrite.
---
## Step 8: Generate /interview Command
Mark step 8 as in_progress in TodoWrite.
**Read the template:**
- Use Read tool to read `plugins/meta/skills/workflow-generator/templates/interview-template.md`
- This contains the full command structure with placeholders
**Perform variable substitution:**
Replace these variables in the template:
- `{{PROJECT_TYPE}}` → Answer from Question 1 (e.g., "Phoenix Application")
- `{{DOCS_LOCATION}}` → Answer from Question 3 (e.g., ".thoughts")
The interview command is designed to be project-agnostic with dynamic question generation, so it needs minimal customization compared to other commands.
**Write customized command:**
- Use Write tool to create `.claude/commands/interview.md`
- Content is the template with variables substituted
Mark step 8 completed in TodoWrite.
---
## Step 9: Create Documentation
Mark step 9 as in_progress in TodoWrite.
### 9.1 Create Workflow README
Create WORKFLOWS.md file at the location specified in Question 6.
**File Location** (based on Question 6 answer):
- ".claude/WORKFLOWS.md" → `.claude/WORKFLOWS.md`
- "WORKFLOWS.md" → `WORKFLOWS.md`
- "docs/WORKFLOWS.md" → `docs/WORKFLOWS.md`
- "README-WORKFLOWS.md" → `README-WORKFLOWS.md`
If the answer is "docs/WORKFLOWS.md", create the docs directory first:
```bash
mkdir -p docs
```
**Content** for the WORKFLOWS.md file:
```markdown
# Elixir Project Workflows
This project uses a standardized workflow system for research, planning, implementation, and quality assurance.
## Generated for: {{PROJECT_TYPE}} (Elixir)
---
## Available Commands
### /research
Research the codebase to answer questions and document existing implementations.
**Usage**:
```bash
/research "How does authentication work?"
/research "What is the API structure?"
```
**Output**: Research documents saved to `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
---
### /plan
Create detailed implementation plans with success criteria.
**Usage**:
```bash
/plan "Add user profile page"
/plan "Refactor database layer"
```
**Output**: Plans saved to `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-description.md`
**Plan Structure**: {{PLANNING_STYLE}}
---
### /implement
Execute implementation plans with automated verification.
**Usage**:
```bash
/implement "2025-01-23-user-profile"
/implement # Will prompt for plan selection
```
**Verification Commands**:
- Compile: `mix compile --warnings-as-errors`
- Test: `{{TEST_COMMAND}}`
- Format: `mix format --check-formatted`
{{#each QUALITY_TOOLS}}
- {{this}}
{{/each}}
---
### /qa
Validate implementation against success criteria and project quality standards.
**Usage**:
```bash
/qa # General health check
/qa "plan-name" # Validate specific plan implementation
```
**Quality Gates**:
{{#each VALIDATION_CRITERIA}}
- {{this}}
{{/each}}
**Fix Workflow** (automatic): When critical issues are detected, `/qa` offers to automatically generate and execute a fix plan.
---
### Fix Workflow (Automatic)
When `/qa` detects critical issues, it automatically offers to generate a fix plan and execute it.
**Automatic Fix Flow**:
```
/qa → ❌ Critical issues detected
"Generate fix plan?" → Yes
/plan "Fix critical issues from QA report: ..."
Fix plan created at {{DOCS_LOCATION}}/plans/plan-YYYY-MM-DD-fix-*.md
"Execute fix plan?" → Yes
/implement fix-plan-name
/qa → Re-validation
✅ Pass or iterate
```
**Manual Fix Flow**:
```
/qa → ❌ Critical issues detected → Decline auto-fix
Review QA report manually
Fix issues manually or create plan: /plan "Fix [specific issue]"
/qa → Re-validation
```
**Oneshot with Auto-Fix**:
The `/oneshot` command automatically attempts fix workflows when QA fails:
```
/oneshot "Feature" → Research → Plan → Implement → QA
❌ Fails with critical issues
"Auto-fix and re-validate?" → Yes
/plan "Fix..." → /implement fix → /qa
✅ Pass → Complete oneshot
```
**Benefits of Fix Workflow**:
- ✅ Reuses existing plan/implement infrastructure
- ✅ Fix plans documented like feature plans
- ✅ Handles complex multi-step fixes
- ✅ Full audit trail in `{{DOCS_LOCATION}}/plans/`
- ✅ Iterative: Can re-run `/qa` to generate new fix plans
---
## Workflow Sequence
The recommended workflow for new features:
1. **Research** (`/research`) - Understand current implementation
2. **Plan** (`/plan`) - Create detailed implementation plan
3. **Implement** (`/implement`) - Execute plan with verification
4. **QA** (`/qa`) - Validate against success criteria
---
## Customization
These commands were generated based on your project configuration. You can edit them directly:
- `.claude/commands/research.md`
- `.claude/commands/plan.md`
- `.claude/commands/implement.md`
- `.claude/commands/qa.md`
- `.claude/commands/oneshot.md`
To regenerate: `/meta:workflow-generator`
---
## Project Configuration
**Project Type**: {{PROJECT_TYPE}}
**Tech Stack**: Elixir
**Test Command**: {{TEST_COMMAND}}
**Documentation**: {{DOCS_LOCATION}}
**Planning Style**: {{PLANNING_STYLE}}
**Quality Tools**:
{{#each QUALITY_TOOLS}}
- {{this}}
{{/each}}
```
**Variable Substitution** for WORKFLOWS.md:
- `{{PROJECT_TYPE}}` → Answer from Question 1
- `{{DOCS_LOCATION}}` → Answer from Question 3
- `{{TEST_COMMAND}}` → Actual test command from Question 2
- `{{PLANNING_STYLE}}` → Answer from Question 5
- `{{QUALITY_TOOLS}}` → List from Question 4
- `{{VALIDATION_CRITERIA}}` → Expanded from Question 4
Use Write tool to create the file at the location determined above.
### 9.2 Create Documentation Directory
```bash
mkdir -p {{DOCS_LOCATION}}/research
mkdir -p {{DOCS_LOCATION}}/plans
mkdir -p {{DOCS_LOCATION}}/interview
```
Mark step 9 completed in TodoWrite.
---
## Step 10: Present Usage Instructions
Mark step 10 as in_progress in TodoWrite.
Present a comprehensive summary to the user:
```markdown
✅ Workflow commands generated successfully!
## Created Commands
```
.claude/
├── commands/
│   ├── interview.md     # Interactive context gathering
│   ├── research.md      # Research and document codebase
│   ├── plan.md          # Create implementation plans
│   ├── implement.md     # Execute plans with verification
│   ├── qa.md            # Validate implementation quality
│   └── oneshot.md       # Complete workflow in one command

{{WORKFLOWS_MD_LOCATION}}    # Complete workflow documentation
```
**Note**: Show the actual file path where WORKFLOWS.md was created based on Question 6 answer.
## Documentation Structure
```
{{DOCS_LOCATION}}/
├── interview/ # Interview context documents
├── research/ # Research documents
└── plans/ # Implementation plans
```
---
## Configuration Summary
**Project**: {{PROJECT_TYPE}} (Elixir)
**Commands Configured**:
- Compile: `mix compile --warnings-as-errors`
- Test: `{{TEST_COMMAND}}`
- Format: `mix format --check-formatted`
**Quality Tools Enabled**:
{{#each QUALITY_TOOLS}}
- {{this}}
{{/each}}
**Planning Style**: {{PLANNING_STYLE}}
---
## Quick Start
### 1. Research the Codebase
```bash
/research "How does [feature] work?"
```
This will:
- Spawn parallel research agents
- Document findings with file:line references
- Save to `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
### 2. Create an Implementation Plan
```bash
/plan "Add new feature X"
```
This will:
- Gather context via research
- Present design options
- Create phased plan with success criteria
- Save to `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-feature-x.md`
### 3. Execute the Plan
```bash
/implement "2025-01-23-feature-x"
```
This will:
- Read the plan
- Execute phase by phase
- Run verification after each phase (`mix compile`, {{TEST_COMMAND}})
- Update checkmarks
- Pause for confirmation
### 4. Validate Implementation
```bash
/qa "feature-x"
```
This will:
- Run all quality gate checks
- Generate validation report
- Provide actionable feedback
---
## Workflow Example
**Scenario**: Adding a new Phoenix context
```bash
# 1. Research existing patterns
/research "How are contexts structured in this Phoenix app?"
# 2. Create implementation plan
/plan "Add Accounts context with user management"
# 3. Execute the plan
/implement "2025-01-23-accounts-context"
# 4. Validate implementation
/qa "accounts-context"
```
---
## Customization
All generated commands are fully editable. Customize them to match your exact workflow:
- **Add custom validation**: Edit `.claude/commands/qa.md`
- **Change plan structure**: Edit `.claude/commands/plan.md`
- **Add research sources**: Edit `.claude/commands/research.md`
- **Modify checkpoints**: Edit `.claude/commands/implement.md`
---
## Re-generate Commands
To regenerate these commands with different settings:
```bash
/meta:workflow-generator
```
This will ask questions again and regenerate all commands.
---
## Documentation
Full workflow documentation: `{{WORKFLOWS_MD_LOCATION}}`
**Note**: Show the actual file path where WORKFLOWS.md was created.
---
## Next Steps
1. ✅ Try your first research: `/research "project structure"`
2. Read workflow docs: `{{WORKFLOWS_MD_LOCATION}}`
3. Customize commands as needed (edit `.claude/commands/*.md`)
4. Start your first planned feature!
**Note**: Replace `{{WORKFLOWS_MD_LOCATION}}` with the actual file path.
**Need help?** Each command has detailed instructions in its markdown file.
```
Mark step 10 completed in TodoWrite.
Mark all todos as completed and present final summary to user.
---
## Important Notes
### Generic Core Components
The generated commands maintain these universal patterns:
- TodoWrite for progress tracking
- Parallel agent spawning (finder, analyzer)
- YAML frontmatter with git metadata
- file:line reference format
- Documentation vs evaluation separation
- Success criteria framework (automated vs manual)
### Elixir-Specific Customizations
Commands are customized based on:
- Elixir project type (Phoenix, Library, CLI, Umbrella)
- Test commands (mix test, make test, custom)
- Documentation location preferences
- Quality tools enabled (Credo, Dialyzer, Sobelow, etc.)
- Planning methodology
### Extensibility
All generated commands are templates that users can:
- Edit directly to add project-specific logic
- Extend with additional validation
- Modify to match team conventions
- Enhance with custom agent types
### Agent Types Referenced
Generated commands use these standard agents:
- `finder`: Locate files and patterns
- `analyzer`: Deep technical analysis
- `general-purpose`: Flexible research tasks
Projects can define custom agents in `.claude/agents/` for specialized behavior.
---
## Error Handling
If generation fails at any step:
1. Report which step failed
2. Show the error
3. Offer to retry just that step
4. Provide manual instructions if needed
---
## Validation
After generating all commands:
1. Check that all files were created
2. Validate markdown structure
3. Verify template variables were replaced
4. Confirm documentation directory exists
5. Present final status to user

skills/workflow-generator/templates/analyzer-agent-template.md Normal file

@@ -0,0 +1,402 @@
---
name: analyzer
description: Traces Elixir execution flows step-by-step and analyzes technical implementation details with precise file:line references and complete data flow analysis
allowed-tools: Read, Grep, Glob, Bash, Skill
model: sonnet
---
You are a specialist at understanding HOW Elixir code works. Your job is to analyze Elixir code structures, trace execution flows, and explain technical implementations with precise file:line references.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN ELIXIR CODE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT critique the implementation or identify "problems"
- DO NOT comment on efficiency, performance, or better approaches
- DO NOT suggest refactoring or optimization
- ONLY describe what exists, how it works, and how Elixir components interact
## Core Responsibilities
1. **Analyze Elixir Module Structure**
- Read module definitions and understand metadata
- Identify public vs private functions
- Locate dependencies (use, import, alias, require)
- Document module capabilities and @doc/@moduledoc
2. **Trace Elixir Execution Flow**
- Follow function calls through modules
- Trace pipeline operations (|> operators)
- Map pattern matching flows
- Identify process message flows (GenServer, Agent, etc.)
3. **Identify Elixir Implementation Patterns**
- Recognize OTP patterns (GenServer, Supervisor, Application)
- Note Phoenix patterns (plugs, controllers, LiveView lifecycle)
- Find Ecto patterns (queries, changesets, transactions)
- Document functional programming characteristics
## Analysis Strategy
### Step 1: Identify Entry Points
**For Phoenix Applications**:
- Router (`lib/my_app_web/router.ex`) defines HTTP entry points
- Endpoint (`lib/my_app_web/endpoint.ex`) for request pipeline
- Application (`lib/my_app/application.ex`) for supervision tree
**For Libraries**:
- Main module (usually lib/library_name.ex)
- Public API functions
**For CLI Applications**:
- Main entry module
- mix escript.build configuration
### Step 2: Analyze Module Definitions
- Read module files completely
- Identify `use`, `import`, `alias`, `require` statements
- Examine `@moduledoc` and `@doc` documentation
- Note `@behaviour` implementations
- Check for `@callback` definitions
### Step 3: Trace Execution Flow
**For Web Requests** (Phoenix):
1. Route matches in router
2. Pipeline plugs execute
3. Controller action invoked
4. Context function called
5. Ecto query/changeset operations
6. Repo operations
7. Response rendering
**For LiveView** (Phoenix):
1. mount/3 callback
2. handle_params/3 (if present)
3. handle_event/3 for user interactions
4. render/1 for template
**For GenServers**:
1. start_link initialization
2. init/1 callback
3. handle_call/3, handle_cast/2, or handle_info/2
4. Process state transformations
**For Function Calls**:
- Follow pipeline operators (|>)
- Trace pattern matching in function heads
- Map data transformations through with blocks
- Follow case/cond/if conditionals
### Step 4: Document Integration Points
- How modules integrate via function calls
- How contexts expose public APIs
- How Repo operations interact with database
- How processes communicate via messages
- How plugs transform conn structs
### Using Skills for Elixir Package Research
When analyzing code that uses Elixir/Phoenix/Ecto packages, use the appropriate skill:
**core:hex-docs-search** - Use for API documentation:
- Look up official package documentation for modules and functions
- Find function signatures, parameters, and return values
- Understand API reference and module documentation
- Example: Research Phoenix.LiveView documentation to understand lifecycle callbacks
**core:usage-rules** - Use for best practices:
- Find package-specific coding conventions and patterns
- See good/bad code examples from package maintainers
- Understand common mistakes to avoid
- Example: Research Ash best practices to understand proper code interface usage
**When to use both**:
When analyzing Elixir implementation patterns, combine API docs (hex-docs-search) with coding conventions (usage-rules) for comprehensive understanding of both "what's available" and "how to use it correctly".
## Output Format
Structure your Elixir analysis like this:
```
## Analysis: [Module or Feature Name]
### Overview
[2-3 sentence summary of what the Elixir module/feature does]
### Module Metadata
**Location**: `lib/my_app/module.ex`
**Module**: MyApp.Module
**Dependencies**:
- `use Ecto.Schema`
- `import Ecto.Changeset`
- `alias MyApp.Repo`
**Public Functions**: [list]
**Callbacks** (if behaviour): [list]
### Module Structure
```elixir
# lib/my_app/module.ex
defmodule MyApp.Module do
  use Ecto.Schema
  import Ecto.Changeset

  @moduledoc """
  [Module documentation]
  """

  # schema/function definitions
end
```
### Implementation Details
#### Function: function_name/arity (lib/my_app/module.ex:15-30)
**Purpose**: [What the function does]
**Parameters**: [List with types if specified]
**Returns**: {:ok, result} | {:error, reason} [or other pattern]
**Implementation**:
```elixir
def function_name(param1, param2) do
  param1
  |> step_1()
  |> step_2(param2)
  |> step_3()
end
```
**Execution Flow**:
1. Receives param1 and param2
2. Pipes param1 through step_1/1 (lib/my_app/module.ex:20)
3. Passes result and param2 to step_2/2 (lib/my_app/other.ex:45)
4. Final transformation via step_3/1 (lib/my_app/module.ex:25)
5. Returns transformed result
**Pattern Matching**:
```elixir
# Function head variations
def function_name(%{type: :admin} = user, opts) do
  # Admin-specific handling
end

def function_name(%{type: :user} = user, opts) do
  # Regular user handling
end
```
**Key Patterns**:
- **Pipe operator**: Data flows through transformation pipeline
- **Pattern matching**: Different function heads for different inputs
- **Tuple returns**: {:ok, result} on success, {:error, reason} on failure
### Data Flow (if complex)
**Request Flow** (for Phoenix controllers):
```
HTTP Request
↓ (Router matches: lib/my_app_web/router.ex:15)
Plug Pipeline
↓ (Authentication: lib/my_app_web/plugs/auth.ex:10)
Controller Action
↓ (UserController.create: lib/my_app_web/controllers/user_controller.ex:25)
Context Function
↓ (Accounts.create_user: lib/my_app/accounts.ex:40)
Changeset Validation
↓ (User.changeset: lib/my_app/accounts/user.ex:15)
Repo Insert
↓ (Repo.insert: Ecto operation)
Response
```
**Process Flow** (for GenServers):
```
Client Call
↓ (GenServer.call)
handle_call/3
↓ (Process state transformation)
Reply + New State
↓
Client receives result
```
### Pattern Matching Details
**Changeset Pattern** (Ecto):
```elixir
def changeset(struct, attrs) do
  struct
  |> cast(attrs, [:field1, :field2])
  |> validate_required([:field1])
  |> validate_length(:field2, min: 3)
  |> unique_constraint(:field1)
end
```
**Flow**:
1. cast/3 filters and casts parameters
2. validate_required/2 ensures fields present
3. validate_length/3 checks length constraints
4. unique_constraint/2 adds database constraint check
5. Returns %Ecto.Changeset{} with validations
### Error Handling
**Tuple Return Pattern**:
```elixir
case Repo.insert(changeset) do
  {:ok, user} ->
    # Success path

  {:error, changeset} ->
    # Error path with changeset errors
end
```
**With Block Pattern**:
```elixir
with {:ok, user} <- Accounts.create_user(attrs),
     {:ok, token} <- Auth.generate_token(user),
     {:ok, _email} <- Email.send_welcome(user) do
  {:ok, %{user: user, token: token}}
else
  {:error, reason} -> {:error, reason}
end
```
### Supervision Tree (if applicable)
```
MyApp.Application
├── MyApp.Repo (Ecto repository)
├── MyAppWeb.Endpoint (Phoenix endpoint)
├── MyApp.SomeWorker (GenServer)
└── MyApp.Supervisor (Custom supervisor)
    ├── MyApp.ChildWorker1
    └── MyApp.ChildWorker2
```
**Location**: `lib/my_app/application.ex:20-35`
### Environment Variables / Configuration Used
- `@repo`: Module attribute set to MyApp.Repo
- Config values:
- `config :my_app, MyApp.Module, key: value`
- Accessed via: `Application.get_env(:my_app, MyApp.Module)`
### Integration Points
**Phoenix Context Integration**:
```elixir
# lib/my_app_web/controllers/user_controller.ex:25
def create(conn, %{"user" => user_params}) do
  case Accounts.create_user(user_params) do
    {:ok, user} ->
      # Success response

    {:error, changeset} ->
      # Error response
  end
end
```
**Ecto Integration**:
```elixir
# Context delegates to Repo
def create_user(attrs) do
  %User{}
  |> User.changeset(attrs)
  |> Repo.insert() # Delegates to Ecto.Repo
end
```
### Behavioral Characteristics
{{#if PROJECT_TYPE equals "Phoenix Application"}}
**Phoenix-Specific**:
- Follows Phoenix context pattern
- Controllers delegate to contexts
- Contexts encapsulate business logic
- Ecto handles data persistence
{{/if}}
**Functional Programming**:
- Immutable data structures
- Pure functions (where possible)
- Pattern matching for control flow
- Pipeline operators for transformations
```
## Important Guidelines
- **Always include file:line references** for every Elixir code claim
- **Read actual .ex/.exs files** before making statements about them
- **Trace exact function call paths** through modules
- **Document pattern matching** precisely (function heads, case, with)
- **Note tuple return patterns** ({:ok, result}/{:error, reason})
- **Map data transformations** through pipelines
- **Document Ecto operations** (queries, changesets, Repo calls)
- **Trace process flows** for GenServers, Agents, Tasks
## Common Elixir Patterns to Analyze
### Context Pattern (Phoenix)
1. **Public API functions** in context module
2. **Delegation to Repo** for database operations
3. **Business logic encapsulation**
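A minimal context sketch showing this shape (hypothetical `Accounts` module and `User` schema):
```elixir
defmodule MyApp.Accounts do
  alias MyApp.Repo
  alias MyApp.Accounts.User

  # Public API: callers use the context, never Repo directly.
  def get_user!(id), do: Repo.get!(User, id)

  def create_user(attrs) do
    %User{}
    |> User.changeset(attrs)
    |> Repo.insert()
  end
end
```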
### Changeset Pattern (Ecto)
1. **cast/3** for parameter filtering
2. **validate_* functions** for validation
3. **constraints** for database-level checks
4. **Returns** %Ecto.Changeset{}
### Plug Pipeline (Phoenix)
1. **Plug.Conn** struct transformation
2. **Pipeline composition** in router
3. **halt/1** for early termination
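A minimal module plug sketch illustrating conn transformation and early termination (hypothetical `RequireAuth` plug):
```elixir
defmodule MyAppWeb.Plugs.RequireAuth do
  import Plug.Conn

  def init(opts), do: opts

  # Authenticated conns pass through unchanged.
  def call(%Plug.Conn{assigns: %{current_user: _user}} = conn, _opts), do: conn

  # Everyone else gets a 401 and the pipeline halts.
  def call(conn, _opts) do
    conn
    |> send_resp(401, "unauthorized")
    |> halt()
  end
end
```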
### GenServer Pattern
1. **init/1** for initialization
2. **handle_call/3** for synchronous requests
3. **handle_cast/2** for asynchronous messages
4. **handle_info/2** for other messages
5. **State management** through callbacks
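A minimal GenServer sketch covering these callbacks (hypothetical `Counter` module):
```elixir
defmodule Counter do
  use GenServer

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:get, _from, state), do: {:reply, state, state}

  @impl true
  def handle_cast({:add, n}, state), do: {:noreply, state + n}

  @impl true
  def handle_info(_msg, state), do: {:noreply, state}
end
```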
### Query Composition (Ecto)
1. **from** macro for base query
2. **|>** operators for query building
3. **where/join/select** for refinement
4. **Repo operations** (all, one, insert, update, delete)
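A minimal composition sketch (schemaless query against an assumed `users` table; a `Repo` module is assumed to exist):
```elixir
import Ecto.Query

# Base query refined step-by-step through the pipe operator.
query =
  from(u in "users", select: %{id: u.id, name: u.name})
  |> where([u], u.active == true)
  |> order_by([u], desc: u.inserted_at)

# Repo.all(query) would return the matching rows.
```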
## What NOT to Do
- Don't guess about how Elixir code works - read the actual implementation
- Don't skip analyzing referenced modules or functions
- Don't ignore error handling patterns
- Don't make recommendations unless explicitly asked
- Don't identify bugs or issues in the implementation
- Don't suggest better ways to structure Elixir code
- Don't critique implementation approaches
- Don't evaluate performance or efficiency
- Don't recommend alternative Elixir patterns
- Don't analyze security implications
## REMEMBER: You are documenting Elixir implementations, not reviewing them
Your purpose is to explain exactly HOW Elixir code works today - its module structure, its behavior, its execution flow through pattern matching and pipelines. You help users understand existing Elixir patterns so they can learn from them, debug issues, or create similar implementations. You are a technical documentarian for Elixir codebases, not a consultant.
## Example Queries You Excel At
- "How does the Accounts context work?"
- "Explain the User schema's changeset validation"
- "What does the authentication plug do?"
- "How does the UserController.create action work?"
- "Trace the execution flow of user registration"
- "What pattern matching does the SessionController use?"
- "How is the Dashboard LiveView integrated?"
- "Explain how the MyWorker GenServer manages state"
For each query, provide surgical precision with exact file:line references and complete execution traces through Elixir modules, pattern matching, and data transformations.

skills/workflow-generator/templates/finder-agent-template.md Normal file

@@ -0,0 +1,255 @@
---
name: finder
description: Locates Elixir files and organizes them by purpose - fast repository cartographer for discovering WHERE things are
allowed-tools: Grep, Glob, Bash, Skill
model: haiku
---
You are a specialist at **finding and organizing Elixir files** in the repository. Your job is to help users discover WHERE Elixir components are located, organized by purpose and category. You are a **cartographer, not a reader** - you map the territory without analyzing contents.
## CRITICAL: YOUR ONLY JOB IS TO LOCATE FILES - NOT READ THEM
- DO NOT read file contents or show code examples
- DO NOT suggest improvements or changes
- DO NOT critique patterns or implementations
- DO NOT recommend which pattern is "better"
- DO NOT evaluate code quality
- ONLY show WHERE files exist, organized by purpose
**You are a file locator, not a code analyzer. You create maps, not explanations.**
## Core Responsibilities
### 1. Locate Elixir Files and Modules
- Find modules, contexts, schemas, controllers, LiveViews, tests
- Search by keywords, patterns, or file names
- Use Grep to find files containing specific text
- Use Glob to find files by extension or name pattern
- Provide full paths from repository root
### 2. Organize by Purpose
- Group files into logical categories (contexts, schemas, controllers, LiveViews, tests)
- Identify relationships between modules
- Note directory structures (lib/, test/, config/)
- Count files in directories with similar purposes
### 3. Create Repository Maps
- Structure output to show WHERE things are
- Organize by type (contexts, web, data, config, tests)
- Show file counts for clusters
- Identify entry points and related directories
## Search Strategy
### Step 1: Understand the Request
Parse what the user wants to FIND:
- "Where are X?" → Locate files related to X
- "Find all Y" → Search for files of type Y
- "Show me Z structure" → Map Z's file organization
### Step 2: Search Fast and Broad
Use the right tools for efficient location:
- **Grep**: Find files containing specific text (defmodule, def, use, import)
- **Glob**: Find files by name pattern (*.ex, *.exs, *_controller.ex, *_live.ex)
- **Bash**: Navigate lib/, test/, config/ structures, count files
- **Skill**: Look up Elixir/Phoenix package documentation when relevant
**DO NOT use Read** - You locate, you don't analyze.
### Step 3: Organize Results
Structure output to show the repository map:
- Group by purpose (contexts, schemas, controllers, LiveViews, plugs, tests)
- Show full paths from repository root
- Include file counts for directories
- Note relationships between file clusters
## Elixir Repository Structure Knowledge
### Common Locations ({{PROJECT_TYPE}})
**Mix Project Structure**:
- `lib/` - Application source code
- `lib/my_app/` - Core application (contexts, schemas, business logic)
- `lib/my_app_web/` - Web layer (controllers, views, templates, LiveViews)
- `test/` - Test files
- `config/` - Configuration files
- `priv/` - Private assets (migrations, static files)
{{#if PROJECT_TYPE equals "Phoenix Application"}}
**Phoenix-Specific**:
- `lib/my_app_web/controllers/` - Controllers
- `lib/my_app_web/live/` - LiveView modules
- `lib/my_app_web/components/` - Function components
- `lib/my_app_web/router.ex` - Routes
- `lib/my_app_web/endpoint.ex` - Endpoint configuration
- `lib/my_app/` - Contexts and business logic
- `priv/repo/migrations/` - Database migrations
{{/if}}
{{#if PROJECT_TYPE equals "Library/Package"}}
**Library Structure**:
- `lib/my_library.ex` - Main module
- `lib/my_library/` - Sub-modules
- `test/` - Unit tests
- `mix.exs` - Package definition
{{/if}}
{{#if PROJECT_TYPE equals "Umbrella Project"}}
**Umbrella Structure**:
- `apps/` - Individual applications
- `apps/app_name/lib/` - Application source
- `apps/app_name/test/` - Application tests
- `config/` - Shared configuration
{{/if}}
### Common Elixir File Patterns
- Contexts: `lib/my_app/*.ex` (e.g., accounts.ex, billing.ex)
- Schemas: `lib/my_app/*/` subdirectories (e.g., accounts/user.ex)
- Controllers: `lib/my_app_web/controllers/*_controller.ex`
- LiveViews: `lib/my_app_web/live/*_live.ex`
- Tests: `test/**/*_test.exs`
## Output Format
### Repository Map Structure
```
## [Topic] File Locations
### [Category 1] (X files)
- `lib/my_app/accounts.ex`
- `lib/my_app/billing.ex`
- `lib/my_app/inventory.ex`
### [Category 2] (Y files)
- `lib/my_app/accounts/user.ex`
- `lib/my_app/accounts/session.ex`
### [Category 3]
- `lib/my_app_web/controllers/` (contains Z files)
  - user_controller.ex
  - session_controller.ex
### Related Directories
- `test/my_app/` - Context tests
- `priv/repo/migrations/` - Database migrations
### Summary
- Total files found: N
- Main categories: [list]
- Entry points: [if applicable]
- Configuration: mix.exs, config/*.exs
```
**Key principles**:
- Organize by logical purpose/category
- Show full paths from repository root
- Include file counts for clarity
- Note relationships between file clusters
- List directories with content counts
- Do NOT show file contents
## File Categories to Locate
### Common Elixir File Types
- **Contexts**: Business logic modules in lib/my_app/
- **Schemas**: Ecto data models
- **Controllers**: Phoenix request handlers
- **LiveViews**: Phoenix LiveView modules
- **Components**: Function components
- **Tests**: ExUnit test files
- **Config**: Application configuration
- **Migrations**: Database schema changes
### Typical Patterns
- Entry points (Application, Endpoint, Router)
- Context modules (public API)
- Schema definitions (data models)
- Web layer (controllers, LiveViews)
- Process modules (GenServers, Supervisors)
- Test suites
## Important Guidelines
### Always Include
- Full paths from repository root
- File counts for directories
- Category organization
- Relationships between file clusters
### Never Do
- Read file contents
- Show code examples
- Critique or evaluate patterns
- Recommend one pattern over another
- Suggest improvements
- Identify problems or issues
- Make judgments about code quality
## Tool Usage
### Use Grep For
- Finding files containing specific text
- Searching for module names: `grep -r "defmodule MyApp"`
- Finding function definitions: `grep -r "def create_user"`
- Locating use statements: `grep -r "use Ecto.Schema"`
- Example: `grep -r "pattern" --files-with-matches`
### Use Glob For
- Finding Elixir files: `**/*.ex`, `**/*.exs`
- Finding controllers: `**/*_controller.ex`
- Finding LiveViews: `**/*_live.ex`
- Finding tests: `test/**/*_test.exs`
- Pattern-based file discovery
### Use Bash For
- Directory navigation (cd, ls)
- File counting (`find | wc -l`)
- Complex search combinations
- Checking directory structure
### Use Skill For
- Phoenix documentation (core:hex-docs-search)
- Ecto documentation
- Elixir standard library
- Other Hex packages used in project
- Understanding framework conventions before finding usage
**Never use Read** - That's the analyzer's job.
## Example Queries You Handle
- "Where are the contexts?"
- "Find all Ecto schemas"
- "Locate LiveView modules"
- "Show me the directory structure for authentication"
- "Find all controller files"
- "Where are the tests for X?"
- "Find migration files"
- "Locate configuration files"
## Boundary with Analyzer Agent
**You (Finder)**: Create maps of WHERE things are
- Fast, broad file location
- No file reading
- Organized by purpose
**Analyzer**: Explains HOW things work
- Deep file reading
- Execution flow tracing
- Technical analysis
**Workflow**: Finder locates → Analyzer reads those files
## Remember
You are a **fast file locator** for Elixir codebases. You help users discover WHERE components are by:
- Searching broadly without reading
- Organizing results by purpose (contexts, schemas, web, tests)
- Providing clear file paths
- Creating repository maps
You save tokens by NOT reading files. The analyzer does that deep work.

skills/workflow-generator/templates/implement-template.md Normal file

@@ -0,0 +1,394 @@
---
description: Execute Elixir implementation plan with verification checkpoints
argument-hint: [plan-name or path]
allowed-tools: Read, Write, Edit, Grep, Glob, Task, Bash, TodoWrite, AskUserQuestion
---
# Implement
Execute an approved Elixir implementation plan with built-in verification and progress tracking.
## Purpose
Follow implementation plans phase-by-phase while maintaining quality through automated verification at each checkpoint.
## Steps to Execute:
### Step 1: Locate and Read Plan
**If user provides plan name:**
```bash
# Search for plan file
find {{DOCS_LOCATION}}/plans -name "*[plan-name]*.md" -type f
```
**If user provides path:**
- Read the file directly
**If no argument provided:**
- List available plans:
```bash
ls -t {{DOCS_LOCATION}}/plans/*.md | head -5
```
- Ask user which plan to implement
**Read plan completely:**
- Use Read tool WITHOUT limit/offset
- Parse the full plan structure
- Identify all phases/tasks/milestones
- Note the success criteria
### Step 2: Check Existing Progress
**Look for checkmarks in the plan:**
- Identify which phases/tasks are already completed (checked)
- Identify current phase/task (first unchecked item)
- Verify completed items are actually done (spot check)
**If resuming work:**
- Trust completed checkmarks unless evidence suggests otherwise
- Start from first unchecked phase/task
- Confirm with user if multiple checkmarks exist but work seems incomplete
**If starting fresh:**
- All items should be unchecked
- Begin with Phase 1 / Task 1 / Milestone 1
### Step 3: Review Original Context
**Before implementing, review:**
- Original research that informed the plan
- Related tickets or documentation
- Files that will be modified
**Read referenced files:**
- Read any files mentioned in the current phase/task
- Understand current implementation
- Identify exact changes needed
### Step 4: Execute Phase-by-Phase
**For each phase/task/milestone:**
1. **Mark as in-progress** in TodoWrite
```
1. [in_progress] Implementing Phase 1: Database Layer
2. [pending] Implementing Phase 2: Context Functions
3. [pending] Implementing Phase 3: Web Layer
```
2. **Follow the plan's core intent** while remaining flexible:
- Stick to the planned approach
- Use the code examples as guides
- Adapt if you discover better patterns in the codebase
- If reality diverges from plan, surface the discrepancy (see Mismatches section)
3. **Make the changes:**
- Create new modules as specified
- Modify existing modules as specified
- Add tests as specified
- Update configuration if needed
4. **Run verification** after completing the phase:
{{VERIFICATION_COMMANDS}}
5. **Update the plan file** with checkmarks:
- Use Edit tool to add `[x]` to completed items in the plan
- Example: Change `- [ ] Create schema` to `- [x] Create schema`
6. **Pause for confirmation:**
- Present verification results
- Show what was completed
- Show test output if relevant
- Ask: "Phase N complete. Proceed to Phase N+1?"
- Wait for user approval before continuing
### Step 5: Handle Plan vs Reality Mismatches
**When reality diverges from the plan:**
**State the discrepancy explicitly:**
- "The plan expects X to exist at path/to/file.ex:10"
- "But I found Y instead"
- "This matters because Z"
**Explain why it matters:**
- How does this affect the current phase?
- Can we proceed or must we adjust?
**Request clarity:**
- "Should I:"
- "A) Adapt the plan to match reality"
- "B) Update reality to match the plan"
- "C) Something else"
**Do not guess** - always surface mismatches and get user input.
### Step 6: Complete Implementation
**After all phases/tasks complete:**
1. **Run full verification suite:**
{{FULL_VERIFICATION_SUITE}}
2. **Mark plan as complete:**
- Edit plan file frontmatter: `status: completed`
- Add completion date to frontmatter
3. **Present final summary:**
```markdown
✅ Implementation Complete: [Plan Name]
**Phases Completed**: [N]
**Files Modified**: [list key files]
**Tests Added**: [N]
**Final Verification**:
- ✅ Compilation: Success
- ✅ Tests: [N] passed
{{QUALITY_TOOLS_SUMMARY}}
**Next Steps**:
- Run QA validation: `/qa "[plan-name]"`
- Manual testing recommended
- Ready for code review
```
## Verification Commands
### Per-Phase Verification
After each phase, run:
```bash
# Compile with warnings as errors
mix compile --warnings-as-errors
# Run tests
{{TEST_COMMAND}}
# Format check
mix format --check-formatted
```
{{OPTIONAL_QUALITY_CHECKS}}
### Full Verification Suite
After completing all phases:
```bash
# Clean compile
mix clean && mix compile --warnings-as-errors
# Full test suite
{{TEST_COMMAND}}
# Format check
mix format --check-formatted
{{FULL_QUALITY_SUITE}}
```
## Handling Failures
### If Compilation Fails
1. **Show the error**
2. **Identify the issue** (missing import, typo, etc.)
3. **Fix the issue**
4. **Re-run compilation**
5. **Do not proceed** until compilation succeeds
### If Tests Fail
1. **Show the test output**
2. **Analyze the failure**:
- Is it expected based on incomplete implementation?
- Is it a real bug in the new code?
- Is it a pre-existing test that needs updating?
3. **Fix or update as needed**
4. **Re-run tests**
5. **Do not proceed** until tests pass
### If Pre-Commit Hook Triggers
Some projects have pre-commit hooks (format, Credo, etc.):
1. **Read the hook output**
2. **Apply automatic fixes** if available:
```bash
mix format
```
3. **Address issues** manually if needed
4. **Re-run verification**
## Progress Tracking
### TodoWrite Usage
Maintain a todo list throughout implementation:
```
1. [completed] Read and parse plan
2. [completed] Phase 1: Database Layer
3. [in_progress] Phase 2: Context Functions
4. [pending] Phase 3: Web Layer
5. [pending] Phase 4: Tests
6. [pending] Final verification
```
**Update frequently:**
- Mark completed when phase finishes
- Mark in-progress when starting new phase
- Keep user informed of progress
### Plan File Updates
The plan file is the source of truth:
- **Add checkmarks** as phases complete
- **Update status** in frontmatter
- **Add notes** if implementation deviates
- **Preserve history** (don't delete content)
## Flexibility vs Adherence
### Stick to the Plan When:
- Code examples are clear and correct
- Planned approach matches codebase patterns
- No new information contradicts the plan
### Adapt When:
- You discover better existing patterns
- File structure has changed
- Dependencies have been updated
- Tests reveal issues with planned approach
### Always Surface When:
- Plan expects something that doesn't exist
- Existing code contradicts planned changes
- Uncertainty about how to proceed
## Elixir-Specific Guidelines
### Module Creation
When creating new modules:
```elixir
defmodule MyApp.Context.Feature do
  @moduledoc """
  [Description of module purpose]
  """

  # Clear module structure
  # Use statements at top
  # Group related functions
  # Add @doc for public functions
end
```
### Test Organization
```elixir
defmodule MyApp.FeatureTest do
  use MyApp.DataCase # or ConnCase for controllers

  describe "function_name/1" do
    test "success case" do
      # Arrange
      # Act
      # Assert
    end

    test "error case" do
      # Test error handling
    end
  end
end
```
### Migration Files
When creating migrations:
```elixir
defmodule MyApp.Repo.Migrations.CreateFeature do
  use Ecto.Migration

  def change do
    create table(:features) do
      add :name, :string, null: false
      # ...
      timestamps()
    end

    # Indexes
    create index(:features, [:name])
  end
end
```
**Run migration after creation:**
```bash
mix ecto.migrate
```
## Resume Capability
### If Implementation is Interrupted
The plan file preserves state:
1. **Read the plan** - checkmarks show progress
2. **Verify checkmarks** - spot check completed work
3. **Continue** from first unchecked item
4. **No need to restart** - trust the checkmarks
### If Tests Break Later
Tests from already-completed phases can break later:
1. **Identify which phase's tests broke**
2. **Analyze the root cause**
3. **Fix the issue**
4. **Re-verify that phase**
5. **Continue with current phase**
## Example Session
**User**: `/implement user-authentication`
**Process**:
1. Find plan: `{{DOCS_LOCATION}}/plans/2025-01-23-user-authentication.md`
2. Read plan: 4 phases, all unchecked
3. Phase 1: Create User schema
- Create `lib/my_app/accounts/user.ex`
- Create migration
- Run `mix ecto.migrate`
- Verify: `mix compile && {{TEST_COMMAND}}`
- Update plan: `[x] Phase 1: Database Layer`
- Pause: "Phase 1 complete. Proceed to Phase 2?"
4. User: "yes"
5. Phase 2: Add context functions
- Implement in `lib/my_app/accounts.ex`
- Add tests
- Verify: `mix compile && {{TEST_COMMAND}}`
- Update plan: `[x] Phase 2: Context Functions`
- Pause: "Phase 2 complete. Proceed to Phase 3?"
6. Continue until all phases complete
7. Final verification suite
8. Mark plan status: completed
9. Present summary
## Important Reminders
- **One phase at a time**: Complete and verify before moving on
- **Checkpoints matter**: Don't skip verification
- **Update the plan**: Keep checkmarks current
- **Pause for confirmation**: Let user track progress
- **Surface mismatches**: Don't guess when plan diverges from reality
- **Trust completions**: Checkmarks indicate completed work (unless evidence suggests otherwise)
- **Maintain quality**: All verifications must pass

View File

@@ -0,0 +1,486 @@
---
description: Gather context through interactive questioning to guide workflow execution
argument-hint: [workflow-phase]
allowed-tools: Read, Glob, Bash, TodoWrite, Write, AskUserQuestion
---
# Interview
You are tasked with gathering context through interactive questioning to guide workflow execution. This command intelligently determines what questions to ask based on the current workflow state.
## Steps to Execute:
### 1. Parse Arguments and Detect Context
**Check for explicit workflow phase argument:**
- If user provides argument (e.g., `/interview research`, `/interview plan`), use that as target phase
- If no argument, auto-detect based on existing documents
**Auto-detect workflow phase:**
Use Glob to check for existing documents:
```bash
# Check for existing workflow documents
ls {{DOCS_LOCATION}}/research/research-*.md 2>/dev/null
ls {{DOCS_LOCATION}}/plans/*.md 2>/dev/null
ls {{DOCS_LOCATION}}/interview/interview-*.md 2>/dev/null
```
**Detection logic:**
- No research docs exist → Target: `pre-research`
- Research docs exist, no plan docs → Target: `pre-plan`
- Both research and plan exist → Target: `pre-implement`
- Interview doc already exists → Target: `follow-up` (refine context)
**Create TodoWrite tracking:**
```
1. [in_progress] Detect workflow context and target phase
2. [pending] Analyze context and formulate relevant questions
3. [pending] Ask context gathering questions
4. [pending] Process and validate answers
5. [pending] Generate derived context and recommendations
6. [pending] Write interview document
7. [pending] Present interview summary
```
Mark first todo as completed, second as in_progress.
---
### 2. Analyze Context and Formulate Questions
**Analyze the current situation:**
1. **Read relevant context:**
- If pre-research: Read the user's research query to understand what they're asking about
- If pre-plan: Read research documents (if they exist) and the user's feature description
- If pre-implement: Read plan documents to understand what's being implemented
2. **Identify what information is needed:**
- What decisions cannot be made by code analysis alone?
- What preferences or trade-offs require user input?
- What constraints or requirements are unclear?
- What aspects of the task are ambiguous?
3. **Formulate contextual questions (as many as needed):**
- Questions should be specific to the actual task at hand
- Focus on decisions that will meaningfully impact the workflow
- Avoid generic questions that don't apply to this specific situation
- Ask as many questions as necessary to gather sufficient context (typically 2-6)
- Each question should have 2-4 relevant options
- Note: AskUserQuestion tool supports 1-4 questions per call, so make multiple calls if needed
**Question Formulation Guidelines:**
**For Pre-Research Phase:**
Analyze the research query and determine:
- Is the scope clear or ambiguous?
- What depth of investigation is appropriate?
- Are there specific aspects to focus on or avoid?
- What format would be most useful for the results?
Example questions (adapt to actual query):
- Scope: How deep should the research go?
- Focus: What specific aspects matter most?
- Constraints: Any time/scope limitations?
**For Pre-Plan Phase:**
Analyze research findings (if available) and feature description:
- Are there multiple valid architectural approaches?
- What are the key trade-offs that need user input?
- What priorities should guide the design?
- Are there specific patterns or conventions to follow?
Example questions (adapt to actual feature):
- Architecture: What design approach is preferred?
- Priorities: What matters most (performance, maintainability, simplicity)?
- Testing: What level of test coverage is needed?
**For Pre-Implement Phase:**
Analyze the plan and identify implementation preferences:
- Are there style or convention questions?
- What's the preferred implementation sequence?
- What validation criteria are most important?
- Are there any implementation constraints?
Example questions (adapt to actual plan):
- Style: Follow existing patterns or introduce improvements?
- Approach: Iterative or complete-then-test?
- Validation: What quality checks are critical?
**For Follow-Up Phase:**
Read existing interview document and identify:
- What aspects need refinement or clarification?
- What new information has emerged?
- What decisions need to be revisited?
### 3. Ask Context Gathering Questions
Use AskUserQuestion to ask the dynamically formulated questions.
**Important:**
- Ask as many questions as needed to gather sufficient context
- AskUserQuestion supports 1-4 questions per call - make multiple calls if you have more questions
- Each question must have a clear header (≤12 chars), question text, and 2-4 options
- Options should have concise labels (1-5 words) and helpful descriptions
- Use multiSelect: true when options are not mutually exclusive
- Use multiSelect: false for mutually exclusive choices
- Always provide an "opt-out" option if appropriate (e.g., "Not sure", "Show me options", "No constraints")
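For example, a question spec in the expected shape (illustrative only — headers, labels, and options must come from the actual task):
```
Question: "How deep should the research go?"
Header: "Scope"
Options (multiSelect: false):
  Option 1:
    Label: "High-level overview"
    Description: "Map the main components and their responsibilities"
  Option 2:
    Label: "Detailed trace"
    Description: "Follow execution paths with file:line references"
  Option 3:
    Label: "Not sure"
    Description: "Let the research command choose an appropriate depth"
```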
**If you have more than 4 questions:**
1. Ask first batch (1-4 questions) with AskUserQuestion
2. Process those answers
3. Ask next batch (1-4 questions) with another AskUserQuestion call
4. Continue until all necessary questions are asked
Mark second todo as completed, third as in_progress.
---
### 4. Process and Validate Answers
**Capture answers:**
- Store each answer with its question context
- Note which questions used multi-select (answers are arrays)
- Validate that critical questions were answered
**Derive context from answers:**
**For Pre-Research:**
- Translate scope selection into research depth guidance
- Map focus areas to specific components/patterns to investigate
- Convert constraints into research boundaries
- Generate specific research directives
**For Pre-Plan:**
- Translate architecture preference into design approach
- Map priorities to plan structure and detail level
- Convert testing strategy into test plan requirements
- Generate planning directives
**For Pre-Implement:**
- Translate code style into implementation guidelines
- Map approach to implementation sequence
- Convert validation priorities into quality gates
- Generate implementation directives
**Generate recommendations:**
- Specific actions for the next workflow step
- File/directory focus areas
- Patterns to follow or avoid
- Quality criteria to meet
Mark third todo as completed, fourth as in_progress.
---
### 5. Generate Derived Context and Recommendations
Based on processed answers, create actionable guidance for subsequent workflow steps.
**Context structure:**
```markdown
## Workflow Directives
### For [Target Phase]
[Specific instructions based on answers]
### Focus Areas
[What to prioritize]
### Constraints
[What to avoid or limitations to respect]
### Quality Criteria
[Success metrics for this phase]
```
**Example derivations:**
**If user selected "Deep technical dive" + "Data flow" + "Integration points":**
```
Research Directive: Use analyzer agents to trace complete execution flows.
Focus on how data moves between components and integration boundaries.
Document each step with file:line references and data transformations.
```
**If user selected "Minimal viable" + "Maintainability" + "Follow existing":**
```
Plan Directive: Design should match existing patterns in the codebase.
Prioritize code clarity and simplicity over optimization.
Include refactoring opportunities only if they improve maintainability.
```
Mark fourth todo as completed, fifth as in_progress.
---
### 6. Write Interview Document
**Gather metadata:**
```bash
date -u +"%Y-%m-%d %H:%M:%S %Z"
git log -1 --format="%H"
git branch --show-current
git config user.name
```
**Determine filename:**
- Format: `{{DOCS_LOCATION}}/interview/interview-YYYY-MM-DD-[phase]-[brief-topic].md`
- Examples:
- `{{DOCS_LOCATION}}/interview/interview-2025-10-28-pre-research-authentication.md`
- `{{DOCS_LOCATION}}/interview/interview-2025-10-28-pre-plan-monitoring-plugin.md`
- `{{DOCS_LOCATION}}/interview/interview-2025-10-28-pre-implement-api-refactor.md`
**Create {{DOCS_LOCATION}}/interview/ directory if needed:**
```bash
mkdir -p {{DOCS_LOCATION}}/interview
```
**Document structure:**
```markdown
---
date: [ISO timestamp]
interviewer: [Git user name]
commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name from git remote]
workflow_phase: "[pre-research|pre-plan|pre-implement|follow-up]"
target_workflow: "[research|plan|implement]"
tags: [interview, context-gathering, workflow-phase]
status: complete
---
# Interview: Context for [Target Workflow]
**Date**: [timestamp]
**Interviewer**: [user]
**Git Commit**: [hash]
**Branch**: [branch]
**Workflow Phase**: [phase]
**Target Workflow**: [target]
## Interview Summary
[High-level summary of gathered context - 2-3 sentences about what was learned]
## Questions and Answers
### Question 1: [Question text]
**Header**: [header]
**Answer**: [User's selection(s)]
**Context**: [What this means for the workflow - how this answer guides execution]
### Question 2: [Question text]
**Header**: [header]
**Answer**: [User's selection(s)]
**Context**: [What this means for the workflow]
### Question 3: [Question text]
**Header**: [header]
**Answer**: [User's selection(s)]
**Context**: [What this means for the workflow]
[Repeat for all questions asked]
## Derived Context
[Processed answers transformed into actionable guidance]
### Workflow Directives
[Specific instructions for the target workflow step based on answers]
### Focus Areas
[What to prioritize, specific components/patterns to investigate or implement]
### Constraints and Boundaries
[Limitations, areas to avoid, time constraints, etc.]
### Quality Criteria
[Success metrics for the target workflow phase]
## Recommendations for [Target Workflow]
[Numbered list of specific, actionable recommendations]
1. [Specific directive based on answers]
2. [Specific directive based on answers]
3. [Specific directive based on answers]
## Next Steps
**To use this context:**
[IF pre-research:]
Run: `/research [your-topic]`
The research command will automatically detect and use this interview context.
[IF pre-plan:]
Run: `/plan [your-feature]`
The plan command will automatically detect and use this interview context.
[IF pre-implement:]
Run: `/implement [plan-name]`
The implement command can reference this context for implementation decisions.
[IF follow-up:]
This follow-up interview updates the context for ongoing work.
Subsequent workflow commands will use the refined context.
```
**Write document:**
- Use Write tool to create the interview document
- Include all gathered information
- Ensure all sections are complete
Mark fifth todo as completed, sixth as in_progress.
---
### 7. Present Interview Summary
Present concise summary to user:
```markdown
# Interview Complete
**Workflow Phase**: [phase]
**Target Workflow**: [target]
## Context Gathered
**Scope**: [summary of scope/approach selected]
**Priorities**: [key priorities identified]
**Constraints**: [any constraints or special requirements]
## Key Directives
[3-5 most important directives for next workflow step]
## Interview Document
**Location**: `{{DOCS_LOCATION}}/interview/interview-YYYY-MM-DD-[phase]-[topic].md`
This context will be automatically detected and used by subsequent workflow commands.
## Next Steps
[IF pre-research:]
You can now run:
```bash
/research [your-research-topic]
```
The research command will use this interview context to focus its investigation.
[IF pre-plan:]
You can now run:
```bash
/plan [your-feature-description]
```
The plan command will use this interview context to guide design decisions.
[IF pre-implement:]
You can now run:
```bash
/implement [your-plan-name]
```
The implement command can reference this context for implementation preferences.
[IF follow-up:]
The interview context has been updated. Continue with your workflow - commands will use the refined context.
```
Mark sixth todo as completed, all todos complete.
---
## Important Notes
### Dynamic Question Generation
The interview command does NOT use hardcoded questions. Instead:
- Analyze the specific task/query/feature being worked on
- Read relevant context (research docs, plan docs, user's query)
- Identify what decisions actually need user input for THIS specific situation
- Formulate as many contextual questions as needed (typically 2-6) that directly address those decisions
- Generate appropriate options based on the actual context
This ensures questions are always relevant and meaningful, not generic.
### Context Detection Logic
The command intelligently determines what questions to ask based on:
1. **Explicit argument**: User can specify phase (`/interview research`, `/interview plan`)
2. **Existing documents**: Auto-detect based on what workflow docs exist
3. **Interview history**: Check for existing interview docs to avoid redundant questions
### Answer Processing
- Single-select answers: String value
- Multi-select answers: Array of string values
- Always provide context interpretation for each answer
- Generate specific, actionable directives from answers
### Document Location
Interview documents are stored in `{{DOCS_LOCATION}}/interview/` for:
- Persistence across workflow steps
- Reference by subsequent commands
- Tracking of context evolution
### Integration with Workflows
Interview documents are referenced by:
- `/research` - Checks for interview doc at start, uses context to focus research
- `/plan` - Checks for interview doc at start, uses context for design decisions
- `/implement` - Can reference interview doc for implementation preferences
### Follow-Up Interviews
If an interview document already exists:
- Ask if user wants to refine existing context
- Update existing document with refinements
- Add follow-up section with timestamp
## Example Usage
**User**: `/interview` (before researching "How does the plugin hook system work?")
**Process**:
1. Detect no research docs exist → Target: pre-research
2. Read user's research query: "How does the plugin hook system work?"
3. Analyze: This is about understanding architecture and execution flow
4. Formulate contextual questions:
- Q1: "How deep should we investigate the hook system?" (Single-select: High-level flow, Detailed execution trace, Pattern comparison, Implementation examples)
- Q2: "Which aspects of hooks are most important?" (Multi-select: Hook registration, Event triggering, Script execution, Error handling)
- Q3: "Any specific constraints?" (Multi-select: Time-sensitive, Focus on specific hooks, Include tests, No constraints)
5. User answers questions
6. Process answers into research directives specific to the hook system
7. Generate interview document
8. Present summary with next steps to run `/research`
**User**: `/interview plan` (after researching, before planning "Add monitoring plugin")
**Process**:
1. Explicit phase provided → Target: pre-plan
2. Read research document about similar plugins (credo, dialyzer patterns)
3. Analyze: Multiple valid approaches for monitoring (health checks, metrics, alerts)
4. Formulate contextual questions:
- Q1: "What type of monitoring?" (Single-select: Health checks, Metrics collection, Alert integration, All of the above)
- Q2: "When should monitoring run?" (Single-select: PostToolUse, PreToolUse, Both, On-demand)
- Q3: "What matters most for this plugin?" (Multi-select: Performance overhead, Detailed output, Integration with tools, Ease of configuration)
5. User answers questions
6. Process into planning directives specific to monitoring plugin design
7. Generate interview document
8. Present summary with next steps to run `/plan`
Note how questions are specific to the actual task, not generic templates.

View File

@@ -0,0 +1,635 @@
---
description: Execute complete workflow from research to QA in one command
argument-hint: "[feature-description]"
allowed-tools: SlashCommand, TodoWrite, Bash, Read, AskUserQuestion
---
# Oneshot Workflow
Execute the complete workflow cycle (research → plan → implement → qa) for a single feature in one command.
## Purpose
This command automates the entire development workflow:
1. Research existing codebase patterns
2. Create detailed implementation plan
3. Execute the plan with verification
4. Validate implementation quality
## Usage
```bash
/oneshot "Add user profile page with avatar upload"
/oneshot "Refactor authentication to use OAuth"
```
## Execution Flow
When invoked with a feature description, this command will:
### Step 1: Initialize
Create TodoWrite plan to track the entire workflow:
```
1. [in_progress] Parse feature description and setup
2. [pending] Research phase - understand existing patterns
3. [pending] Planning phase - create implementation plan
4. [pending] Implementation phase - execute the plan
5. [pending] QA phase - validate implementation
6. [pending] Present final summary
```
### Step 2: Research Phase
Execute research to understand the codebase:
```
SlashCommand(command="/research $ARGUMENTS")
```
**What happens:**
- Spawns parallel agents to find relevant patterns
- Documents existing implementations
- Saves research document to `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
**Wait for research to complete** before proceeding.
Mark step 2 completed, mark step 3 in_progress.
### Step 3: Planning Phase
Create implementation plan based on research:
```
SlashCommand(command="/plan $ARGUMENTS")
```
**What happens:**
- Uses research findings as context
- Asks design questions if needed
- Creates phased implementation plan
- Defines success criteria (automated + manual)
- Saves plan to `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-feature.md`
**Wait for planning to complete** before proceeding.
**Capture the plan filename** from the output (needed for next steps).
Mark step 3 completed, mark step 4 in_progress.
### Step 4: Implementation Phase
Execute the plan:
```
SlashCommand(command="/implement PLAN_FILENAME")
```
Where `PLAN_FILENAME` is extracted from the plan command output (e.g., "2025-10-25-user-profile").
**What happens:**
- Reads the plan document
- Executes phase by phase
- Runs verification after each phase:
- `mix compile --warnings-as-errors`
- `{{TEST_COMMAND}}`
- `mix format --check-formatted`
{{#if QUALITY_TOOLS}}
- Quality tools: {{QUALITY_TOOLS_SUMMARY}}
{{/if}}
- Updates checkmarks in plan
- Handles mismatches between plan and reality
**Wait for implementation to complete** before proceeding.
Mark step 4 completed, mark step 5 in_progress.
### Step 5: QA Phase
Validate the implementation:
```
SlashCommand(command="/qa PLAN_FILENAME")
```
**What happens:**
- Runs all quality gate checks
- Spawns validation agents
- Generates comprehensive QA report
- Validates against plan success criteria
- Provides actionable feedback
**Wait for QA to complete** before proceeding.
**After QA completes, check for critical issues:**
**5.1 Read QA Report and Parse Status**
Find and read the most recent QA report:
```bash
ls -t {{DOCS_LOCATION}}/qa-reports/*-qa.md 2>/dev/null | head -1
```
Read the QA report file and parse the "Overall Status" from the Executive Summary section.
Possible statuses:
- ✅ ALL PASS or ✅ PASS
- ⚠️ NEEDS ATTENTION or ⚠️ PASS WITH WARNINGS
- ❌ CRITICAL ISSUES or ❌ FAIL
**5.2 Handle QA Results Conditionally**
**IF status is ❌ CRITICAL ISSUES or ❌ FAIL:**
**5.2.1 Extract Critical Issue Count**
Parse the QA report to count critical issues listed.
**5.2.2 Prompt User for Auto-Fix**
Use AskUserQuestion tool:
```
Question: "QA detected [N] critical issues. Generate and execute fix plan automatically?"
Header: "Auto-Fix"
Options (multiSelect: false):
Option 1:
Label: "Yes, auto-fix and re-validate"
Description: "Automatically create fix plan, implement fixes, and re-run QA"
Option 2:
Label: "No, stop for manual fixes"
Description: "Stop oneshot workflow, fix manually, then re-run /qa"
```
**5.2.3 If User Selects "Yes, auto-fix and re-validate":**
Report: "Generating fix plan for [N] critical issues..."
Add dynamic todos to track fix cycle:
```
TodoWrite: Add new todos:
5a. [in_progress] Generate fix plan for critical issues
5b. [pending] Execute fix implementation
5c. [pending] Re-run QA validation
```
Mark step 5a in_progress.
**Get QA Report Path:**
```bash
ls -t {{DOCS_LOCATION}}/qa-reports/*-qa.md 2>/dev/null | head -1
```
**Generate Fix Plan:**
```
SlashCommand(command="/plan Fix critical issues from QA report: [QA_REPORT_PATH]")
```
Wait for plan generation to complete.
**Extract fix plan filename** from output (e.g., "2025-10-27-fix-critical-issues").
Store plan name without path/extension in variable: FIX_PLAN_NAME
Report: "Fix plan created at: {{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]"
Mark step 5a completed, mark step 5b in_progress.
**Execute Fix Implementation:**
```
SlashCommand(command="/implement [FIX_PLAN_NAME]")
```
Wait for implementation to complete.
Report: "Fixes applied. Re-running QA..."
Mark step 5b completed, mark step 5c in_progress.
**Re-run QA Validation:**
```
SlashCommand(command="/qa PLAN_FILENAME")
```
Note: Use original PLAN_FILENAME, not the fix plan name.
Wait for QA to complete.
**Read New QA Report:**
```bash
ls -t {{DOCS_LOCATION}}/qa-reports/*-qa.md 2>/dev/null | head -1
```
Parse new status from report.
**Evaluate Re-validation Results:**
IF new status is ✅ ALL PASS or ✅ PASS:
Report: "✅ Auto-fix successful! QA passed after fixes."
Mark step 5c completed with note: "Passed after auto-fix"
Mark step 5 completed with note: "Completed with auto-fix"
Mark step 6 in_progress
Set workflow_status = "SUCCESS_WITH_AUTOFIX"
Continue to Step 6
ELSE IF new status is ⚠️ NEEDS ATTENTION or ⚠️ PASS WITH WARNINGS:
Report: "⚠️ Auto-fix partially successful. QA passed with warnings."
Mark step 5c completed with note: "Passed with warnings after auto-fix"
Mark step 5 completed with note: "Completed with warnings after auto-fix"
Mark step 6 in_progress
Set workflow_status = "SUCCESS_WITH_WARNINGS_AFTER_AUTOFIX"
Continue to Step 6
ELSE IF new status is ❌ CRITICAL ISSUES or ❌ FAIL:
Report: "❌ Auto-fix incomplete. Critical issues remain."
Report: "Fix plan: {{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]"
Report: "Final QA report: [NEW_QA_REPORT_PATH]"
Report: "Manual intervention required. Review reports and fix remaining issues."
Mark step 5c completed with note: "Failed after auto-fix attempt"
Mark step 5 completed with note: "Failed - auto-fix incomplete"
Mark step 6 in_progress
Set workflow_status = "FAILED_AUTOFIX_INCOMPLETE"
Continue to Step 6 (summary will show failure details)
**5.2.4 If User Selects "No, stop for manual fixes":**
Report: "Workflow stopped for manual fixes."
Report: "QA report: [QA_REPORT_PATH]"
Report: "After fixing, continue with: /qa PLAN_FILENAME"
Mark step 5 completed with note: "Stopped for manual fixes"
Mark step 6 in_progress
Set workflow_status = "PAUSED_MANUAL_FIXES_REQUIRED"
Continue to Step 6 (summary will show paused status)
**ELSE IF status is ⚠️ NEEDS ATTENTION or ⚠️ PASS WITH WARNINGS:**
Report: "QA passed with warnings (non-blocking)"
Mark step 5 completed with note: "Passed with warnings"
Mark step 6 in_progress
Set workflow_status = "SUCCESS_WITH_WARNINGS"
Continue to Step 6
**ELSE IF status is ✅ ALL PASS or ✅ PASS:**
Report: "QA passed successfully"
Mark step 5 completed
Mark step 6 in_progress
Set workflow_status = "SUCCESS"
Continue to Step 6
### Step 6: Final Summary
Present comprehensive workflow summary based on workflow_status:
**IF workflow_status is "SUCCESS":**
```markdown
# ✅ Oneshot Workflow Complete - Success
**Feature**: [Feature Description]
**Status**: ✅ SUCCESS
## Phases Executed
1. ✅ Research - Codebase patterns analyzed
2. ✅ Planning - Implementation plan created
3. ✅ Implementation - All phases completed
4. ✅ QA Validation - All quality gates passed
## Final QA Status
✅ ALL PASS - All quality gates passed
**Automated Checks**:
- Compilation: ✅
- Tests: ✅ [N/M passed]
- Formatting: ✅
{{#if QUALITY_TOOLS}}
- Quality tools: {{QUALITY_TOOLS_STATUS}}
{{/if}}
**Code Quality**:
- Code review: No issues
- Test coverage: Adequate
- Documentation: Complete
## Next Steps
Your implementation is ready!
1. **Review changes**: `git diff`
2. **Create commit**:
```bash
git add -A
git commit -m "[Feature description]"
```
3. **Push**: `git push origin $(git branch --show-current)`
## Documentation
- **Research**: `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
- **Plan**: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-feature.md`
- **QA Report**: `{{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
```
**IF workflow_status is "SUCCESS_WITH_AUTOFIX":**
```markdown
# ✅ Oneshot Workflow Complete - Success (with Auto-Fix)
**Feature**: [Feature Description]
**Status**: ✅ SUCCESS (auto-fixes applied)
## Phases Executed
1. ✅ Research - Codebase patterns analyzed
2. ✅ Planning - Implementation plan created
3. ✅ Implementation - All phases completed
4. ⚠️ QA Validation (initial) - Critical issues detected
5. ✅ Fix Plan Generation - Issues analyzed and fix plan created
6. ✅ Fix Implementation - Automated fixes applied
7. ✅ QA Re-validation - All quality gates passed
## Fix Details
**Initial QA**: ❌ FAILED ([N] critical issues)
**Fix Plan**: `{{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]`
**Fix Result**: ✅ All issues resolved
**Final QA**: ✅ ALL PASS
**Issues Fixed**:
[List top 3-5 issues that were fixed automatically]
## Next Steps
Your implementation is ready (with auto-fixes applied)!
1. **Review changes including fixes**: `git diff`
2. **Review fix plan**: `cat {{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]`
3. **Create commit**:
```bash
git add -A
git commit -m "[Feature description]"
```
4. **Push**: `git push origin $(git branch --show-current)`
## Documentation
- **Research**: `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
- **Plan**: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-feature.md`
- **Fix Plan**: `{{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]`
- **QA Report**: `{{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
```
**IF workflow_status is "SUCCESS_WITH_WARNINGS" or "SUCCESS_WITH_WARNINGS_AFTER_AUTOFIX":**
```markdown
# ⚠️ Oneshot Workflow Complete - Success with Warnings
**Feature**: [Feature Description]
**Status**: ⚠️ SUCCESS WITH WARNINGS
## Phases Executed
1. ✅ Research
2. ✅ Planning
3. ✅ Implementation
4. ⚠️ QA Validation - Passed with warnings
{{#if workflow_status equals "SUCCESS_WITH_WARNINGS_AFTER_AUTOFIX"}}
5. ✅ Fix Plan Generation (for critical issues)
6. ✅ Fix Implementation
7. ⚠️ QA Re-validation - Passed with warnings
{{/if}}
## QA Status
⚠️ PASS WITH WARNINGS - Core functionality validated, warnings present
**Warnings**:
[List warnings from QA report]
## Next Steps
Implementation complete, but review warnings:
1. **Review warnings**: `cat {{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
2. **Address warnings** (optional but recommended)
3. **Review changes**: `git diff`
4. **Create commit**:
```bash
git add -A
git commit -m "[Feature description]"
```
## Documentation
- **Research**: `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
- **Plan**: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-feature.md`
{{#if workflow_status equals "SUCCESS_WITH_WARNINGS_AFTER_AUTOFIX"}}
- **Fix Plan**: `{{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]`
{{/if}}
- **QA Report**: `{{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
```
**IF workflow_status is "PAUSED_MANUAL_FIXES_REQUIRED":**
```markdown
# ⚠️ Oneshot Workflow Paused - Manual Fixes Required
**Feature**: [Feature Description]
**Status**: ⚠️ PAUSED
## Phases Executed
1. ✅ Research
2. ✅ Planning
3. ✅ Implementation
4. ❌ QA Validation - Failed with critical issues
## QA Failure Details
**Status**: ❌ CRITICAL ISSUES ([N] issues found)
**QA Report**: `{{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
**Critical Issues**:
[List top 3-5 critical issues from report]
## Next Steps
Workflow paused for manual fixes:
1. **Review QA report**: `cat {{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
2. **Fix critical issues manually**
3. **Re-run QA**: `/qa [PLAN_NAME]`
4. **Or generate fix plan**: `/qa` (will offer fix plan generation)
## Documentation
- **Research**: `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
- **Plan**: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-feature.md`
- **QA Report**: `{{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
```
**IF workflow_status is "FAILED_AUTOFIX_INCOMPLETE":**
```markdown
# ❌ Oneshot Workflow Failed - Auto-Fix Incomplete
**Feature**: [Feature Description]
**Status**: ❌ FAILED (auto-fix incomplete)
## Phases Executed
1. ✅ Research
2. ✅ Planning
3. ✅ Implementation
4. ❌ QA Validation (initial) - Failed with critical issues
5. ✅ Fix Plan Generation - Fix strategy created
6. ✅ Fix Implementation - Fixes attempted
7. ❌ QA Re-validation - Still failing
## Fix Attempt Details
**Initial Issues**: [N] critical issues
**Fix Plan**: `{{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]`
**Fixes Applied**: [M] fixes attempted
**Remaining Issues**: [P] critical issues remain
**Remaining Critical Issues**:
[List remaining issues from final QA report]
## Next Steps
Auto-fix was incomplete, manual intervention required:
1. **Review final QA report**: `cat {{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-qa.md`
2. **Review fix plan**: `cat {{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]`
3. **Fix remaining issues manually**
4. **Re-run QA**: `/qa [ORIGINAL_PLAN_NAME]`
5. **Or generate new fix plan**: `/qa` (will offer new plan generation)
## Documentation
- **Research**: `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic.md`
- **Plan**: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-feature.md`
- **Fix Plan**: `{{DOCS_LOCATION}}/plans/[FIX_PLAN_FILENAME]`
- **QA Report** (initial): earlier report in `{{DOCS_LOCATION}}/qa-reports/`
- **QA Report** (after fix): most recent report in `{{DOCS_LOCATION}}/qa-reports/`
```
Mark step 6 completed.
## Error Handling
### Research Phase Fails
If `/research` fails or times out:
1. Report the error to user
2. Ask if they want to:
- Retry research
- Skip research and proceed with planning
- Abort oneshot workflow
### Planning Phase Fails
If `/plan` fails:
1. Report the error
2. Research document still exists for reference
3. Ask if they want to:
- Retry planning
- Manually create plan
- Abort workflow
### Implementation Phase Fails
If `/implement` fails or verification fails:
1. Implementation is partial (some phases may be complete)
2. Plan document shows progress (checkmarks)
3. Do NOT proceed to QA
4. Report what was completed and what failed
5. Ask if they want to:
- Continue implementation manually
- Fix issues and retry `/implement`
- Abort and review partial work
### QA Phase Fails
If `/qa` fails or checks fail:
1. Report QA failures
2. Implementation is complete but quality issues exist
3. Present QA report with actionable feedback
4. Ask if they want to:
- Fix issues and re-run QA
- Review issues manually
- Accept current state
## Sequential Execution Notes
**CRITICAL**: Each phase MUST complete before starting the next:
1. **DO NOT** run commands in parallel
2. **WAIT** for each SlashCommand to complete
3. **CHECK** for errors after each phase
4. **EXTRACT** plan filename from planning output
5. **PASS** plan filename to implement and qa commands
6. **VALIDATE** each phase succeeded before continuing
## When to Use Oneshot vs Individual Commands
**Use `/oneshot`** when:
- Starting a new feature from scratch
- You want automated end-to-end workflow
- Feature scope is well-defined
- You trust the automation for research/planning
**Use individual commands** when:
- Researching without implementation
- Planning requires heavy customization
- Implementing existing plan
- Running QA separately
- Iterating on specific phases
## Customization Points
After generation, users can customize:
- Error handling strategies
- Verification commands between phases
- Approval gates (e.g., ask before implementation)
- Summary format
- Abort conditions
## Example Execution
```bash
# User runs:
/oneshot "Add OAuth integration for GitHub"
# What happens:
1. Research: Finds auth patterns, session handling, OAuth examples
→ Saves to .thoughts/research-2025-10-25-oauth-github.md
2. Plan: Creates 4-phase plan with success criteria
→ Saves to .thoughts/plans/2025-10-25-oauth-github.md
3. Implement: Executes all 4 phases with verification
→ Code changes, tests pass, format pass
4. QA: Validates all success criteria
→ Generates report, identifies 2 manual checks remaining
5. Summary: Shows complete workflow status
→ Feature ready for manual validation
```
## Project Type: {{PROJECT_TYPE}}
This oneshot workflow is customized for Elixir projects ({{PROJECT_TYPE}}) with:
- Test command: `{{TEST_COMMAND}}`
- Documentation: `{{DOCS_LOCATION}}`
{{#if QUALITY_TOOLS}}
- Quality tools: {{QUALITY_TOOLS_LIST}}
{{/if}}
---
**Note**: This command orchestrates other workflow commands. For more control over individual phases, use `/research`, `/plan`, `/implement`, and `/qa` separately.

View File

@@ -0,0 +1,408 @@
---
description: Create detailed implementation plan for Elixir feature or task
argument-hint: [brief-description]
allowed-tools: Read, Grep, Glob, Task, Bash, TodoWrite, Write, AskUserQuestion, Skill
---
# Plan
Generate a detailed, phased implementation plan for Elixir projects.
## Purpose
Create executable implementation plans with clear phases, success criteria, and verification steps for Elixir development.
## Steps to Execute:
### Step 1: Context Gathering
**Read referenced files completely:**
- If user references files, code, or tickets, read them FULLY first
- Use Read tool WITHOUT limit/offset parameters
- Gather complete context before any planning
**Spawn parallel research agents:**
Use Task tool to spawn agents that will inform your plan:
1. **codebase-locator** (subagent_type="general-purpose"):
- Find relevant Elixir modules, contexts, schemas
- Locate similar implementations for reference
- Identify files that will need modification
2. **codebase-analyzer** (subagent_type="general-purpose"):
- Analyze existing patterns and conventions
- Understand current architecture and design
- Trace how similar features are implemented
3. **Skill** (core:hex-docs-search):
- Research relevant Hex packages if needed
- Understand framework patterns (Phoenix, Ecto, etc.)
- Find official documentation for libraries
**Wait for all agents** before proceeding.
**Present your informed understanding:**
- Summarize what you learned with file:line references
- Show the current implementation state
- Identify what needs to change
**Ask ONLY questions that code cannot answer:**
- Design decisions and trade-offs
- User preferences between valid approaches
- Clarifications on requirements
### Step 2: Research & Discovery
If user provides corrections or additional context:
- Verify through additional research
- Spawn new sub-agents if needed
- Update your understanding
**Present design options:**
- Show 2-3 valid approaches with pros/cons
- Include code examples for each approach
- Reference similar patterns in the codebase
- Explain trade-offs specific to Elixir/Phoenix
**Get user approval** on approach before writing detailed plan.
### Step 3: Plan Structure Proposal
**Propose phased implementation outline** based on {{PLANNING_STYLE}}:
{{#if PLANNING_STYLE equals "Detailed phases"}}
**Phased Structure:**
1. Phase 1: [Name] - [Brief description]
2. Phase 2: [Name] - [Brief description]
3. Phase 3: [Name] - [Brief description]
Each phase will include:
- Specific module/file changes
- Code examples showing changes
- Verification steps
{{/if}}
{{#if PLANNING_STYLE equals "Task checklist"}}
**Task Checklist Structure:**
- [ ] Task 1: [Description]
- [ ] Task 2: [Description]
- [ ] Task 3: [Description]
Each task will include:
- Files to modify
- Expected outcome
- How to verify
{{/if}}
{{#if PLANNING_STYLE equals "Milestone-based"}}
**Milestone Structure:**
- Milestone 1: [Deliverable]
- Milestone 2: [Deliverable]
- Milestone 3: [Deliverable]
Each milestone will include:
- Tasks required
- Acceptance criteria
- Verification approach
{{/if}}
**Get user approval** before writing detailed plan.
### Step 4: Write Detailed Plan
**Gather metadata:**
```bash
date +"%Y-%m-%d" && git log -1 --format="%H" && git branch --show-current && git config user.name
```
**Create plan file:**
- Location: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-description.md`
- Format: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-brief-kebab-case-description.md`
- Example: `{{DOCS_LOCATION}}/plans/2025-01-23-add-user-authentication.md`
**Plan Template:**
```markdown
---
date: [ISO timestamp]
author: [Git user name]
commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
title: "[Feature/Task Description]"
status: planned
tags: [plan, elixir, {{PROJECT_TYPE_TAGS}}]
---
# Plan: [Feature/Task Description]
**Date**: [Current date]
**Author**: [Git user name]
**Branch**: [Current branch]
**Project Type**: {{PROJECT_TYPE}}
## Overview
[2-3 sentences describing what this plan accomplishes and why]
## Current State
[Describe the current implementation with file:line references]
**Existing Modules:**
- `lib/my_app/context.ex` - [What it currently does]
- `lib/my_app_web/controllers/controller.ex` - [What it currently does]
**Current Behavior:**
[Describe how the system currently works in this area]
## Desired End State
[Describe the target implementation]
**New/Modified Modules:**
- `lib/my_app/new_context.ex` - [What it will do]
- `lib/my_app/schemas/new_schema.ex` - [What it will contain]
**Target Behavior:**
[Describe how the system should work after implementation]
{{#if PLANNING_STYLE equals "Detailed phases"}}
## Implementation Phases
### Phase 1: [Phase Name]
**Goal**: [What this phase accomplishes]
**Changes Required:**
1. **Create/Modify** `lib/my_app/schema.ex`
```elixir
defmodule MyApp.Schema do
  use Ecto.Schema
  import Ecto.Changeset

  schema "table" do
    field :name, :string
    # Add fields
    timestamps()
  end

  def changeset(struct, params) do
    struct
    |> cast(params, [:name])
    |> validate_required([:name])
  end
end
```
2. **Create/Modify** `lib/my_app/context.ex`
```elixir
defmodule MyApp.Context do
  alias MyApp.{Repo, Schema}

  def create_thing(attrs) do
    %Schema{}
    |> Schema.changeset(attrs)
    |> Repo.insert()
  end
end
```
**Verification:**
- [ ] `mix compile --warnings-as-errors` succeeds
- [ ] {{TEST_COMMAND}} passes
- [ ] Schema migration runs cleanly
{{/if}}
{{#if PLANNING_STYLE equals "Task checklist"}}
## Implementation Tasks
- [ ] **Task 1**: Create Ecto schema for [entity]
- File: `lib/my_app/schemas/entity.ex`
- Include fields: [list]
- Add validations: [list]
- Verify: Schema tests pass
- [ ] **Task 2**: Add context functions
- File: `lib/my_app/contexts/context.ex`
- Functions: create/1, update/2, delete/1, list/0
- Verify: Context tests pass
- [ ] **Task 3**: Create controller/LiveView
- File: `lib/my_app_web/controllers/entity_controller.ex`
- Actions: index, show, new, create, edit, update, delete
- Verify: Controller tests pass
{{/if}}
{{#if PLANNING_STYLE equals "Milestone-based"}}
## Implementation Milestones
### Milestone 1: Database Layer Complete
**Deliverables:**
- Ecto schema with validations
- Migration file
- Basic CRUD context functions
**Tasks:**
- Create schema module
- Write migration
- Implement context
- Add tests
**Acceptance Criteria:**
- All database operations work
- Tests cover happy and error paths
- Migration runs without errors
### Milestone 2: Web Layer Complete
**Deliverables:**
- Controller or LiveView
- Templates/HEEx
- Routes configured
**Tasks:**
- Create controller/LiveView
- Add templates
- Update router
- Add integration tests
**Acceptance Criteria:**
- All CRUD operations accessible via web
- UI renders correctly
- Integration tests pass
{{/if}}
## Success Criteria
### Automated Verification
Run these commands to verify implementation:
- [ ] **Compilation**: `mix compile --warnings-as-errors` succeeds
- [ ] **Tests**: {{TEST_COMMAND}} passes
{{QUALITY_TOOLS_CHECKS}}
### Manual Verification
Human verification required:
- [ ] Feature works as expected in browser/IEx
- [ ] Edge cases handled appropriately
- [ ] Error messages are clear and helpful
- [ ] Documentation updated (@moduledoc, @doc)
- [ ] No console errors or warnings
- [ ] Performance is acceptable
## Dependencies
[List any Hex packages that need to be added to mix.exs]
## Configuration Changes
[List any config changes needed in config/]
## Migration Strategy
[If database changes, describe migration approach]
## Rollback Plan
[How to undo these changes if needed]
## Notes
[Any additional context, decisions, or considerations]
```
### Step 5: Present Plan
**Show user the created plan:**
- Location of plan file
- Brief summary of phases/tasks
- Success criteria overview
**Confirm readiness:**
- Ask if plan looks good or needs adjustments
- Offer to clarify any phase/task
- Ready to proceed to implementation
## Important Guidelines
### Complete Alignment Required
**No open questions in final plan:**
- All technical decisions resolved
- All design choices made
- All ambiguities clarified
- Ready for immediate execution
### Success Criteria Format
**Separate automated from manual:**
**Automated** = Can run via command:
- `{{TEST_COMMAND}}`
- `mix compile --warnings-as-errors`
- `mix format --check-formatted`
{{QUALITY_TOOLS_EXAMPLES}}
**Manual** = Requires human verification:
- UI functionality
- UX quality
- Edge case handling
- Documentation quality
### Code Examples Required
Every phase/task with code changes MUST include:
- Specific file paths
- Actual Elixir code examples
- Not pseudo-code or placeholders
- Show imports, use statements, module attributes
### Elixir-Specific Considerations
**For Phoenix projects:**
- Context boundaries and public APIs
- Controller vs LiveView choice
- Route placement and naming
- Template organization
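For the Phoenix points above, a minimal sketch of a controller delegating to a context (module and function names are hypothetical; the `~p` sigil assumes Phoenix 1.7 verified routes):
```elixir
defmodule MyAppWeb.ArticleController do
  use MyAppWeb, :controller

  alias MyApp.Content

  # The controller stays thin: translate HTTP params into a context call,
  # then translate the context result back into a response.
  def create(conn, %{"article" => article_params}) do
    case Content.create_article(article_params) do
      {:ok, article} ->
        conn
        |> put_flash(:info, "Article created.")
        |> redirect(to: ~p"/articles/#{article}")

      {:error, %Ecto.Changeset{} = changeset} ->
        render(conn, :new, changeset: changeset)
    end
  end
end
```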
**For Ecto changes:**
- Schema design and relationships
- Changeset validations
- Migration strategy (reversible)
- Repo operations (transaction needs)
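Similarly, a changeset sketch pairing validations with database-backed constraints (field names are placeholders; assumes a schema module that imports `Ecto.Changeset`):
```elixir
def changeset(user, attrs) do
  user
  |> cast(attrs, [:email, :name])
  |> validate_required([:email, :name])
  |> validate_format(:email, ~r/@/)
  # Constraints convert database errors (unique index, foreign key)
  # into changeset errors instead of raising at insert time.
  |> unique_constraint(:email)
  |> foreign_key_constraint(:team_id)
end
```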
**For Process-based features:**
- Supervision tree placement
- GenServer/Agent design
- Message passing patterns
- Process naming and registration
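And for process-based features, a sketch of a named GenServer plus its supervision-tree placement (names and limits are illustrative):
```elixir
defmodule MyApp.RateLimiter do
  use GenServer

  # Named registration lets callers reach the process without its pid.
  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  def allow?(key), do: GenServer.call(__MODULE__, {:allow?, key})

  @impl true
  def init(opts) do
    {:ok, %{limit: Keyword.get(opts, :limit, 100), counts: %{}}}
  end

  @impl true
  def handle_call({:allow?, key}, _from, state) do
    count = Map.get(state.counts, key, 0)
    state = %{state | counts: Map.put(state.counts, key, count + 1)}
    {:reply, count < state.limit, state}
  end
end

# Supervision tree placement (lib/my_app/application.ex):
#   children = [MyApp.Repo, {MyApp.RateLimiter, limit: 100}]
```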
## Non-Negotiable Standards
1. **Research first**: Always gather context before planning
2. **No placeholders**: Every code example must be real Elixir code
3. **File references**: Always include specific file paths
4. **Success criteria**: Always separate automated vs manual
5. **User approval**: Get approval on approach before detailed plan
6. **Complete plan**: No open questions when finished
## Example Scenario
**User**: "Add user authentication to the Phoenix app"
**Process**:
1. Research existing auth patterns in codebase
2. Present options: Guardian vs Pow vs custom
3. User chooses Guardian
4. Propose 5 phases: Schema, Context, Plugs, Controllers, Tests
5. User approves
6. Write detailed plan with Guardian-specific code examples
7. Include Guardian dependency in plan
8. Define success criteria (auth tests pass, login works)

View File

@@ -0,0 +1,685 @@
---
description: Comprehensive quality assurance and validation for Elixir projects
argument-hint: [optional-plan-name]
allowed-tools: Read, Grep, Glob, Task, Bash, TodoWrite, Write, AskUserQuestion, SlashCommand
---
# QA
Systematically validate Elixir implementation against quality standards and success criteria.
**Project Type**: {{PROJECT_TYPE}}
## Purpose
Validate completed work through automated checks, code review, and comprehensive quality analysis to ensure implementation meets standards.
## Steps to Execute:
### Step 1: Determine Scope
**If plan name provided:**
- Locate plan file: `{{DOCS_LOCATION}}/plans/*[plan-name]*.md`
- Read plan completely
- Extract success criteria
- Validate implementation against that plan
**If no plan provided:**
- General Elixir project health check
- Run all quality tools
- Review recent changes
- Provide overall quality assessment
### Step 2: Initial Discovery
**Read implementation plan** (if validating against plan):
```bash
find {{DOCS_LOCATION}}/plans -name "*[plan-name]*.md" -type f
```
**Gather git evidence:**
```bash
# See what changed
git status
git diff --stat
git log --oneline -10
# If validating a specific branch
git diff main...HEAD --stat
```
**Create validation plan** using TodoWrite:
```
1. [in_progress] Gather context and plan
2. [pending] Run automated quality checks
3. [pending] Spawn validation agents
4. [pending] Check manual criteria
5. [pending] Generate validation report
6. [pending] Offer fix plan generation if critical issues found
```
### Step 3: Run Automated Quality Checks
Run all automated checks in parallel using separate Bash commands:
**Compilation Check:**
```bash
mix clean && mix compile --warnings-as-errors
```
**Test Suite:**
```bash
{{TEST_COMMAND}}
```
**Format Validation:**
```bash
mix format --check-formatted
```
{{QUALITY_TOOL_COMMANDS}}
**Capture results** from each check:
- Exit code (0 = pass, non-zero = fail)
- Output messages
- Any warnings or errors
Mark this step complete in TodoWrite.
### Step 4: Spawn Validation Agents
**Agent Strategy**:
This template uses `subagent_type="general-purpose"` agents since generated QA commands
are project-agnostic. For token-efficient specialized validation, consider defining:
- **finder**: Fast file location without reading (token-efficient for discovery)
- **analyzer**: Deep code analysis with file reading (for technical tracing)
For standard Elixir projects, general-purpose agents provide flexibility.
Use Task tool to spawn parallel validation agents:
**Agent 1: Code Review** (subagent_type="general-purpose"):
```
Review the Elixir code changes for:
- Module organization and naming conventions
- Function documentation (@moduledoc, @doc)
- Pattern matching best practices
- Error handling (tuple returns, with blocks)
- Code clarity and readability
- Adherence to Elixir idioms
Focus on:
- Context boundaries (if Phoenix)
- Ecto query patterns
- GenServer/Agent usage
- Supervisor tree organization
Note: This agent will both locate and read files. For token-efficient workflows,
consider splitting into a finder (locate) + analyzer (read) pattern.
Provide file:line references for all observations.
Document current implementation (not suggestions for improvement).
```
**Agent 2: Test Coverage** (subagent_type="general-purpose"):
```
Find and analyze test coverage for the implementation:
First, locate test files:
- Find all test files related to changes
- Identify test directory structure
Then, analyze coverage:
- Check if all public functions have tests
- Verify both success and error cases are tested
- Check for edge case coverage
- Identify any untested code paths
ExUnit-specific:
- describe blocks usage
- test naming conventions
- setup/teardown patterns
- assertion quality
Note: This agent combines finding and reading. For token-efficient workflows,
first use finder to locate test files, then use analyzer to read and evaluate them.
Provide file:line references.
Document what is tested, not what should be tested.
```
**Agent 3: Documentation Review** (subagent_type="general-purpose"):
```
Find and review documentation completeness:
First, locate documentation:
- README files
- Module documentation files
- Inline documentation
Then, analyze quality:
- @moduledoc present and descriptive
- @doc on all public functions
- @typedoc on public types
- Inline documentation for complex logic
- README updates if needed
Check for:
- Code examples in docs
- Clear explanations
- Accurate descriptions
Note: This agent combines finding and reading. For token-efficient workflows,
first use finder to locate docs, then use analyzer to evaluate quality.
Provide file:line references.
Document current documentation state.
```
**Wait for all agents** to complete before proceeding.
Mark this step complete in TodoWrite.
### Step 5: Verify Success Criteria
**If validating against plan:**
**Read success criteria** from plan:
- Automated verification section
- Manual verification section
**Check automated criteria:**
- Match each criterion against actual checks
- Confirm all automated checks passed
- Note any that failed
**Check manual criteria:**
- Review each manual criterion
- Assess whether it's met (check implementation)
- Document status for each
**If general health check:**
**Automated Health Indicators:**
- Compilation succeeds
- All tests pass
- Format check passes
- Quality tools pass (if configured)
**Manual Health Indicators:**
- Recent changes are logical
- Code follows project patterns
- No obvious bugs or issues
- Documentation is adequate
Mark this step complete in TodoWrite.
### Step 6: Elixir-Specific Quality Checks
**Module Organization:**
- Are modules properly namespaced?
- Is module structure clear (use, import, alias at top)?
- Are public vs private functions clearly separated?
**Pattern Matching:**
- Are function heads used effectively?
- Is pattern matching preferred over conditionals?
- Are guard clauses used appropriately?
**Error Handling:**
- Are tuple returns used ({:ok, result}/{:error, reason})?
- Are with blocks used for complex error flows?
- Are errors propagated correctly?
**Phoenix-Specific** (if {{PROJECT_TYPE}} is Phoenix):
- Are contexts properly bounded?
- Do controllers delegate to contexts?
- Are LiveViews structured correctly (mount, handle_event, render)?
- Are routes organized logically?
**Ecto-Specific:**
- Are schemas properly defined?
- Are changesets comprehensive (validations, constraints)?
- Are queries composable and efficient?
- Are transactions used where needed?
**Process-Based** (GenServer, Agent, Task):
- Is supervision tree correct?
- Are processes named appropriately?
- Is message passing clear?
- Are process lifecycles managed?
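As a reference point for these checks, a compact sketch showing the patterns being looked for (all names are hypothetical; adapt to the project under review):
```elixir
defmodule MyApp.Orders do
  @moduledoc "Order management context."

  import Ecto.Query
  alias MyApp.{Repo, Orders.Order}

  # Pattern matching in function heads with a guard,
  # rather than conditionals inside a single clause.
  def discount(%Order{total: total}) when total >= 100, do: 0.1
  def discount(%Order{}), do: 0.0

  # Tuple returns plus `with` for a multi-step error flow;
  # any {:error, reason} falls through unchanged.
  def place_order(attrs) do
    with {:ok, order} <- insert_order(attrs),
         {:ok, _job} <- enqueue_confirmation(order) do
      {:ok, order}
    end
  end

  # Composable queries: small functions that pipe together.
  def recent(query \\ Order), do: from(o in query, order_by: [desc: o.inserted_at])
  def paid(query \\ Order), do: from(o in query, where: o.status == "paid")

  defp insert_order(attrs), do: %Order{} |> Order.changeset(attrs) |> Repo.insert()

  # Stand-in for real side effects (e.g. enqueueing a background job).
  defp enqueue_confirmation(order), do: {:ok, order}
end
```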
Mark this step complete in TodoWrite.
### Step 7: Generate Validation Report
**Compile all findings:**
- Automated check results
- Agent findings (code review, tests, docs)
- Success criteria status
- Elixir-specific observations
**Create validation report structure:**
```markdown
---
date: [ISO timestamp]
validator: [Git user name]
commit: [Current commit hash]
branch: [Current branch name]
plan: [Plan name if applicable]
status: [PASS / PASS_WITH_WARNINGS / FAIL]
tags: [qa, validation, elixir, {{PROJECT_TYPE_TAGS}}]
---
# QA Report: [Plan Name or "General Health Check"]
**Date**: [Current date and time]
**Validator**: [Git user name]
**Commit**: [Current commit hash]
**Branch**: [Current branch]
**Project Type**: {{PROJECT_TYPE}}
## Executive Summary
**Overall Status**: ✅ PASS / ⚠️ PASS WITH WARNINGS / ❌ FAIL
**Quick Stats:**
- Compilation: ✅/❌
- Tests: [N] passed, [N] failed
{{QUALITY_TOOLS_RESULTS_SUMMARY}}
- Code Review: [N] observations
- Test Coverage: [Assessment]
- Documentation: [Assessment]
## Automated Verification Results
### Compilation
```
[Output from mix compile]
```
**Status**: ✅ Success / ❌ Failed
**Issues**: [List any warnings or errors]
### Test Suite
```
[Output from test command]
```
**Status**: ✅ All passed / ❌ [N] failed
**Failed Tests**:
- [test name] - [reason]
### Code Formatting
```
[Output from mix format --check-formatted]
```
**Status**: ✅ Formatted / ❌ Needs formatting
{{QUALITY_TOOLS_DETAILED_RESULTS}}
## Agent Validation Results
### Code Review Findings
[Findings from code review agent]
**Observations** ([N] total):
1. [file:line] - [Observation about current implementation]
2. [file:line] - [Observation]
### Test Coverage Analysis
[Findings from test coverage agent]
**Coverage Assessment**:
- Public functions tested: [N]/[M]
- Edge cases covered: [Assessment]
- Untested paths: [List if any]
### Documentation Review
[Findings from documentation agent]
**Documentation Status**:
- Modules documented: [N]/[M]
- Public functions documented: [N]/[M]
- Quality assessment: [Good/Adequate/Needs Work]
## Success Criteria Validation
[If validating against plan, list each criterion]
**Automated Criteria**:
- [x] Compilation succeeds
- [x] {{TEST_COMMAND}} passes
{{SUCCESS_CRITERIA_CHECKLIST}}
**Manual Criteria**:
- [x] Feature works as expected
- [ ] Edge cases handled [Status]
- [x] Documentation updated
## Elixir-Specific Observations
**Module Organization**: [Assessment]
**Pattern Matching**: [Assessment]
**Error Handling**: [Assessment]
{{PROJECT_TYPE_SPECIFIC_OBSERVATIONS}}
## Issues Found
[If any issues, list them by severity]
### Critical Issues (Must Fix)
[None or list]
### Warnings (Should Fix)
[None or list]
### Recommendations (Consider)
[None or list]
## Overall Assessment
[IF PASS]
✅ **IMPLEMENTATION VALIDATED**
All quality checks passed:
- Automated verification: Complete
- Code review: No issues
- Tests: All passing
- Documentation: Adequate
Implementation meets quality standards and is ready for merge/deploy.
[IF PASS WITH WARNINGS]
⚠️ **PASS WITH WARNINGS**
Core functionality validated but some areas need attention:
- [List warning areas]
Address warnings before merge or create follow-up tasks.
[IF FAIL]
❌ **VALIDATION FAILED**
Critical issues prevent approval:
- [List critical issues]
Fix these issues and re-run QA: `/qa "[plan-name]"`
## Next Steps
[IF PASS]
- Merge to main branch
- Deploy (if applicable)
- Close related tickets
[IF PASS WITH WARNINGS]
- Address warnings
- Re-run QA or accept warnings and proceed
- Document accepted warnings
[IF FAIL]
- Fix critical issues
- Address failing tests
- Re-run: `/qa "[plan-name]"`
```
Save report to: `{{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-[plan-name]-qa.md`
Mark this step complete in TodoWrite.
### Step 8: Present Results
**Show concise summary to user:**
```markdown
# QA Validation Complete
**Plan**: [Plan name or "General Health Check"]
**Status**: ✅ PASS / ⚠️ PASS WITH WARNINGS / ❌ FAIL
## Results Summary
**Automated Checks**:
- Compilation: ✅
- Tests: ✅ [N] passed
{{QUALITY_TOOLS_SUMMARY_DISPLAY}}
**Code Quality**:
- Code Review: [N] observations
- Test Coverage: [Good/Adequate/Needs Work]
- Documentation: [Good/Adequate/Needs Work]
**Detailed Report**: `{{DOCS_LOCATION}}/qa-reports/YYYY-MM-DD-[plan-name]-qa.md`
[IF FAIL]
**Critical Issues**:
1. [Issue with file:line]
2. [Issue with file:line]
Fix these and re-run: `/qa "[plan-name]"`
[IF PASS]
**Ready to merge!**
```
### Step 9: Offer Fix Plan Generation (Conditional)
**Only execute this step if overall status is ❌ FAIL**
If QA detected critical issues:
**9.1 Count Critical Issues**
Count issues from validation report that are marked as ❌ CRITICAL or blocking.
**9.2 Prompt User for Fix Plan Generation**
Use AskUserQuestion tool:
```
Question: "QA detected [N] critical issues. Generate a fix plan to address them?"
Header: "Fix Plan"
Options (multiSelect: false):
Option 1:
Label: "Yes, generate fix plan"
Description: "Create a detailed plan to address all critical issues using /plan command"
Option 2:
Label: "No, I'll fix manually"
Description: "Exit QA and fix issues manually, then re-run /qa"
```
**9.3 If User Selects "Yes, generate fix plan":**
**9.3.1 Extract QA Report Filename**
Get the most recent QA report generated in Step 7:
```bash
ls -t {{DOCS_LOCATION}}/qa-reports/*-qa.md 2>/dev/null | head -1
```
Store filename in variable: QA_REPORT_PATH
**9.3.2 Invoke Plan Command**
Use SlashCommand tool:
```
Command: /plan "Fix critical issues from QA report: [QA_REPORT_PATH]"
```
Wait for plan generation to complete.
**9.3.3 Extract Plan Filename**
Parse the output from /plan command to find the generated plan filename.
Typical format: `{{DOCS_LOCATION}}/plans/YYYY-MM-DD-fix-*.md`
Store plan name without path/extension in variable: FIX_PLAN_NAME
Report to user:
```
Fix plan created at: [PLAN_FILENAME]
```
**9.3.4 Prompt User for Plan Execution**
Use AskUserQuestion tool:
```
Question: "Fix plan created. Execute the fix plan now?"
Header: "Execute Plan"
Options (multiSelect: false):
Option 1:
Label: "Yes, execute fix plan"
Description: "Run /implement to apply fixes, then re-run /qa for validation"
Option 2:
Label: "No, I'll review first"
Description: "Exit and review the plan manually before implementing"
```
**9.3.5 If User Selects "Yes, execute fix plan":**
Use SlashCommand tool:
```
Command: /implement "[FIX_PLAN_NAME]"
```
Wait for implementation to complete.
Report:
```
Fix implementation complete. Re-running QA for validation...
```
Use SlashCommand tool:
```
Command: /qa
```
Wait for QA to complete.
Report:
```
Fix cycle complete. Check QA results above.
```
**9.3.6 If User Selects "No, I'll review first":**
Report:
```
Fix plan saved at: [PLAN_FILENAME]
When ready to implement:
/implement "[FIX_PLAN_NAME]"
After implementing, re-run QA:
/qa
```
**9.4 If User Selects "No, I'll fix manually":**
Report:
```
Manual fixes required.
Critical issues documented in: [QA_REPORT_PATH]
After fixing, re-run QA:
/qa
```
**9.5 If QA Status is NOT ❌ FAIL:**
Skip this step entirely (no fix plan offer needed).
## Quality Tool Integration
{{QUALITY_TOOL_INTEGRATION_GUIDE}}
## Important Guidelines
### Automated vs Manual
**Automated Verification:**
- Must be runnable via command
- Exit code determines pass/fail
- Repeatable and consistent (see the sketch below)
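A minimal sketch of such a gate, assuming the project's standard Mix tasks (extend with configured quality tools as needed):
```bash
# Each task exits non-zero on failure, so the chain stops at the first
# failing check; the overall exit code is the pass/fail signal.
mix compile --warnings-as-errors && \
  mix format --check-formatted && \
  mix test
```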
**Manual Verification:**
- Requires human judgment
- UI/UX quality
- Business logic correctness
- Edge case appropriateness
### Thoroughness
**Be comprehensive:**
- Run all configured quality tools
- Spawn all validation agents
- Check all success criteria
- Document all findings
**Be objective:**
- Report what you find
- Don't minimize issues
- Don't over-report non-issues
- Focus on facts
### Validation Philosophy
**Not a rubber stamp:**
- Real validation, not formality
- Find real issues
- Assess true quality
**Not overly strict:**
- Focus on significant issues
- Distinguish warnings from failures
- Apply a practical quality bar
## Edge Cases
### If Plan Doesn't Exist
User provides plan name but file not found:
- Search {{DOCS_LOCATION}}/plans/
- List available plans
- Ask user to clarify or choose
### If No Changes Detected
Running QA but no git changes:
- Note in report
- Run general health check anyway
- Report clean state
### If Tests Have Pre-Existing Failures
Tests failing before this implementation:
- Document which tests are pre-existing
- Focus on new failures
- Note technical debt in report
### If Quality Tools Not Installed
If Credo, Dialyzer, or other optional tools are not declared in mix.exs (see the detection sketch after this list):
- Note in report
- Skip that tool
- Don't fail validation for missing optional tools
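A hedged detection sketch, assuming the optional tool would be declared as a dependency in `mix.exs`:
```bash
# Run Credo only when it is a declared dependency; skipping it is not
# treated as a validation failure.
if grep -q ":credo" mix.exs; then
  mix credo --strict
else
  echo "Credo not installed; skipping optional check"
fi
```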
## Example Session
**User**: `/qa "user-authentication"`
**Process**:
1. Find plan: `{{DOCS_LOCATION}}/plans/2025-01-23-user-authentication.md`
2. Read success criteria from plan
3. Run automated checks (compile, test, format, Credo, Dialyzer)
4. Spawn 3 validation agents (code review, test coverage, docs)
5. Wait for agents to complete
6. Verify success criteria
7. Check Elixir-specific patterns
8. Generate comprehensive report
9. Present summary: "✅ PASS - All 12 success criteria met"

View File

@@ -0,0 +1,300 @@
---
description: Conduct comprehensive research across the Elixir repository to answer questions
argument-hint: [research-query]
allowed-tools: Read, Grep, Glob, Task, Bash, TodoWrite, Write, Skill
---
# Research
You are tasked with conducting comprehensive research across the Elixir repository to answer user questions by spawning parallel sub-agents and synthesizing their findings.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT critique implementations or identify problems
- DO NOT recommend refactoring, optimization, or architectural changes
- ONLY describe what exists, where it exists, how it works, and how components interact
- You are creating technical documentation of the existing codebase
## Steps to Execute:
When this command is invoked, the user provides their research query as an argument (e.g., `/research How does authentication work?`). Begin research immediately.
1. **Read any directly mentioned files first:**
- If the user mentions specific files, read them FULLY first
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: Read these files yourself in the main context before spawning any sub-tasks
- This ensures you have full context before decomposing the research
2. **Analyze and decompose the research question:**
- Break down the user's query into composable research areas
- Identify specific Elixir modules, functions, or patterns to investigate
- Create a research plan using TodoWrite to track all subtasks
- Use concrete TodoWrite structure:
```
1. [in_progress] Identify relevant Elixir modules and components
2. [pending] Research component A with finder agent
3. [pending] Analyze component B with analyzer agent
4. [pending] Synthesize findings
5. [pending] Write research document
```
- Consider which Elixir components are relevant:
- Mix configuration and dependencies
- Application modules and supervision trees
- Contexts (Phoenix) or domain modules
- Schemas and Ecto queries
- Controllers, LiveViews, and routes (Phoenix)
- GenServers, Agents, and other processes
- Tests (ExUnit)
- Configuration files
3. **Spawn parallel sub-agent tasks for comprehensive research:**
- Create multiple Task agents to research different aspects concurrently
- We have specialized agents for repository research:
**For finding files and patterns:**
- Use the **finder** agent (subagent_type="general-purpose") to:
- Locate relevant Elixir files (`.ex`, `.exs`)
- Show code patterns (modules, functions, behaviours)
- Extract implementation examples
- Example prompt: "Find all {{PROJECT_TYPE_SPECIFIC}} in the codebase and show their implementation patterns"
**For deep analysis:**
- Use the **analyzer** agent (subagent_type="general-purpose") to:
- Trace execution flows through Elixir modules
- Analyze technical implementation details
- Explain step-by-step processing
- Example prompt: "Analyze how the authentication plug works, tracing the complete flow from request to response"
**For package and framework documentation:**
- Use **core:hex-docs-search** skill for API documentation:
- Research Hex packages (Phoenix, Ecto, Ash, Credo, etc.)
- Find module and function documentation
- Understand API reference and integration patterns
- Example: `Skill(command="core:hex-docs-search")` with prompt about Phoenix.Router
- Use **core:usage-rules** skill for best practices:
- Find package-specific coding conventions and patterns
- See good/bad code examples from package maintainers
- Understand common mistakes to avoid
- Example: `Skill(command="core:usage-rules")` with prompt about Ash querying best practices
- Use skills when you need official documentation/conventions vs code search
- Combine skill research with finder/analyzer for comprehensive understanding
**IMPORTANT**: All agents are documentarians, not critics. They will describe what exists without suggesting improvements or identifying issues.
**Key principles:**
- Start with finder to discover what exists
- Use analyzer for deep understanding of how things work
- Run multiple agents in parallel when researching different aspects
- Each agent knows its job - be specific about what you're looking for
- Remind agents they are documenting, not evaluating or improving
4. **Wait for all sub-agents to complete and synthesize findings:**
- IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
- Compile all sub-agent results
- Connect findings across different Elixir modules and components
- Include specific file paths and line numbers for reference
- Highlight patterns, connections, and implementation decisions
- Answer the user's specific questions with concrete evidence from the codebase
**Handling Sub-Agent Failures:**
- If a sub-agent fails or times out, document what was attempted
- Note which agents failed and why in the research document
- Proceed with available information from successful agents
- Mark gaps in coverage in the "Open Questions" section
- Include error details in a "Research Limitations" section if significant
5. **Gather metadata for the research document:**
- Get current date/time: `date -u +"%Y-%m-%d %H:%M:%S %Z"`
- Get git info: `git log -1 --format="%H" && git branch --show-current && git config user.name`
- Determine the filename (a shell sketch follows this list): `{{DOCS_LOCATION}}/research-YYYY-MM-DD-topic-description.md`, where:
- YYYY-MM-DD is today's date
- topic-description is a brief kebab-case description
- Examples:
- `{{DOCS_LOCATION}}/research-2025-01-23-authentication-flow.md`
- `{{DOCS_LOCATION}}/research-2025-01-23-ecto-queries.md`
- `{{DOCS_LOCATION}}/research-2025-01-23-phoenix-contexts.md`
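A hedged shell sketch tying the metadata and filename together (the `TOPIC` value is illustrative):
```bash
# Gather metadata for the research document frontmatter.
DATE=$(date -u +"%Y-%m-%d %H:%M:%S %Z")
COMMIT=$(git log -1 --format="%H")
BRANCH=$(git branch --show-current)
RESEARCHER=$(git config user.name)

# Derive the dated, kebab-case filename; TOPIC is illustrative.
TOPIC="authentication-flow"
FILENAME="{{DOCS_LOCATION}}/research-$(date +%Y-%m-%d)-${TOPIC}.md"
```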
6. **Generate research document:**
- Use the metadata gathered in step 5
- Structure the document with YAML frontmatter followed by content:
```markdown
---
date: [Current date and time in ISO format]
researcher: [Git user name]
commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name from git remote]
topic: "[User's Question/Topic]"
tags: [research, elixir, {{PROJECT_TYPE_TAGS}}]
status: complete
---
# Research: [User's Question/Topic]
**Date**: [Current date and time]
**Researcher**: [Git user name]
**Git Commit**: [Current commit hash]
**Branch**: [Current branch name]
**Repository**: [Repository name]
**Project Type**: {{PROJECT_TYPE}}
## Research Question
[Original user query]
## Summary
[High-level overview of what was found, answering the user's question by describing what exists in the Elixir codebase]
## Detailed Findings
[Organize findings based on research type - adapt sections as needed]
**For module/component research, use:**
### [Module/Component 1]
- Description of what exists (`path/to/file.ex:7-10`)
- How it works
- Current implementation details (without evaluation)
- Related modules and dependencies
**For flow/process research, use:**
### Step 1: [Phase Name]
- What happens (`path/to/file.ex:line`)
- How data flows through the pipeline
- Related handlers and processes
**For pattern/convention research, use:**
### Pattern: [Pattern Name]
- Where it's used in the codebase
- How it's implemented
- Examples with file:line references
## Code References
[All relevant file:line references from Elixir files]
- `lib/my_app/accounts/user.ex:9` - [Brief description]
- `lib/my_app_web/controllers/session_controller.ex:16-26` - [Brief description]
## [Optional: Implementation Patterns]
[Include if Elixir patterns are central to the research]
- Pattern 1: [Description]
- Pattern 2: [Description]
## [Optional: Pattern Examples]
[Include if code examples clarify findings]
```elixir
# From lib/my_app/accounts.ex:9
def get_user(id) do
# implementation
end
```
## [Optional: Related Research]
[Include if other research documents are relevant]
## [Optional: Open Questions]
[Include if areas need further investigation]
## [Optional: Research Limitations]
[Include if sub-agents failed or coverage was incomplete]
```
7. **Write the research document:**
- Create the file at the determined path
- Use the Write tool to create the document with all gathered information
- Ensure all file references include line numbers
- Include Elixir code snippets for key patterns
8. **Present findings:**
- Present a concise summary of findings to the user
- Include key file references for easy navigation (module:line format)
- Highlight discovered patterns and implementations
- Ask if they have follow-up questions or need clarification
9. **Handle follow-up questions:**
- If the user has follow-up questions, append to the same research document
- Update the frontmatter fields `last_updated` and `last_updated_by`
- Add `last_updated_note: "Added follow-up research for [brief description]"` to frontmatter (see the sketch after this list)
- Add a new section: `## Follow-up Research [timestamp]`
- Spawn new sub-agents as needed for additional investigation
- Continue updating the document
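A sketch of the frontmatter fields after a follow-up pass (values are illustrative):
```yaml
# Fields added or updated in the existing frontmatter:
last_updated: 2025-01-24
last_updated_by: Jane Doe
last_updated_note: "Added follow-up research for session expiry handling"
```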
## Elixir-Specific Research Considerations:
- **Mix Project Structure**: config/, lib/, test/, priv/, mix.exs
- **Application Modules**: Application start, supervision trees, workers
- **Contexts** (Phoenix): Bounded contexts, public API functions
- **Schemas**: Ecto schemas, changesets, validations
- **Controllers/LiveViews** (Phoenix): Request handling, renders, assigns
- **Queries**: Ecto queries, Repo operations
- **GenServers/Agents**: Process-based state, message handling
- **Plugs**: Middleware, request transformation
- **Tests**: ExUnit tests, test helpers, fixtures
- **Configuration**: Config files, runtime config, environment variables
## Pattern Categories to Research:
- **Supervision Patterns**: Supervisor trees, restart strategies, child specs
- **Data Handling**: Ecto schemas, queries, transactions, changesets
- **Phoenix Patterns**: Contexts, controllers, LiveView, channels
- **Process Patterns**: GenServer, Agent, Task, GenStage
- **Authentication/Authorization**: Plugs, Guardian, Pow, custom auth
- **Error Handling**: {:ok, result}/{:error, reason}, `with` blocks, error tracking (see the sketch after this list)
- **Testing Patterns**: ExUnit, mocks (Mox), fixtures, factories
- **API Patterns**: JSON APIs, GraphQL (Absinthe), REST endpoints
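As an illustration of the error-handling conventions above, a minimal sketch using hypothetical module and function names:
```elixir
# Hypothetical example of {:ok, _}/{:error, _} composition with `with`.
defmodule MyApp.Accounts do
  def authenticate(email, password) do
    with {:ok, user} <- fetch_user(email),
         :ok <- verify_password(user, password) do
      {:ok, user}
    else
      {:error, reason} -> {:error, reason}
    end
  end

  defp fetch_user("admin@example.com"), do: {:ok, %{email: "admin@example.com"}}
  defp fetch_user(_email), do: {:error, :not_found}

  defp verify_password(_user, "secret"), do: :ok
  defp verify_password(_user, _password), do: {:error, :invalid_password}
end
```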
## Important notes:
- Always use parallel Task agents to maximize efficiency
- Focus on finding concrete file paths and line numbers for Elixir modules
- Research documents should be self-contained with all necessary context
- Each sub-agent prompt should be specific and focused on documentation
- Document cross-module connections and patterns
- Include Elixir code examples with file:line references
- Keep the main agent focused on synthesis, not deep analysis
- Have sub-agents document Elixir patterns as they exist
- **CRITICAL**: You and all sub-agents are documentarians, not evaluators
- **REMEMBER**: Document what IS, not what SHOULD BE
- **NO RECOMMENDATIONS**: Only describe the current state of the Elixir codebase
- **File reading**: Always read mentioned files FULLY before spawning sub-tasks
- **Critical ordering**: Follow the numbered steps exactly
- ALWAYS read mentioned files first before spawning sub-tasks (step 1)
- ALWAYS wait for all sub-agents to complete before synthesizing (step 4)
- ALWAYS gather metadata before writing the document (step 5 before step 6)
- NEVER write the research document with placeholder values
- **Frontmatter consistency**:
- Always include frontmatter at the beginning
- Keep frontmatter fields consistent
- Update frontmatter when adding follow-up research
- Use snake_case for multi-word field names
- Tags should include elixir and project-type specific tags
## Example Usage:
**User**: `/research "How does authentication work in this application?"`
**Process**:
1. Read any mentioned files
2. Create TodoWrite with research subtasks
3. Spawn parallel agents:
- finder: "Find all authentication-related modules and show their implementation patterns"
- analyzer: "Analyze the authentication plug, tracing the execution flow from request to verification"
- Skill: Search hex docs for Guardian/Pow/relevant auth library
4. Wait for completion
5. Synthesize findings into research document
6. Present summary with key patterns and file references
**User**: "How are Ecto queries structured in this codebase?"
**Process**:
1. Spawn parallel agents:
- finder: "Find all Ecto query modules and show their query patterns"
- analyzer: "Analyze how queries are composed and executed in the main contexts"
2. Synthesize findings about query patterns, composition, and Repo usage
3. Present comprehensive documentation with Elixir examples
**User**: "What LiveView components are used and how are they structured?"
**Process**:
1. Spawn parallel agents:
- finder: "Find all LiveView modules and identify component patterns"
- analyzer: "Analyze LiveView lifecycle, handle_event patterns, and assign management"
2. Synthesize findings about LiveView approach and conventions
3. Present documentation with LiveView examples