---
description: Perform research for a specific task and return structured findings (called by issue agent)
category: dev
difficulty: beginner
estimated_time: instant
allowed-tools: WebFetch, WebSearch, Read, Bash
version: 1.0.0
---

# Research Task Agent

Specialized command for performing research on technical topics. Returns structured findings to the calling agent.

## Variables

RESEARCH_QUESTIONS: (required - list of questions to answer)
TASK_CONTEXT: (required - why this research matters)
SUGGESTED_APPROACH: (optional - where to look)

## Workflow

### Step 1: Understand Research Scope

Parse the research questions:
- Primary questions (must answer)
- Secondary questions (nice to answer)
- Context (why it matters)

### Step 2: Identify Information Sources

Based on the research questions, determine sources:
- **Official documentation** (e.g., Apple developer docs, API references)
- **Technical articles** (e.g., developer blogs, Medium)
- **Code examples** (e.g., GitHub, Stack Overflow)
- **Community discussions** (e.g., forums, Reddit)
- **Academic papers** (if deep technical topic)

### Step 3: Gather Information

For each source type:

**Documentation**:
- Use WebFetch for official docs
- Extract key concepts, APIs, limitations
- Note version/compatibility requirements

**Code Examples**:
- Search GitHub for relevant implementations
- Look for patterns and best practices
- Identify common pitfalls

**Community Knowledge**:
- WebSearch for recent discussions
- Find real-world experiences
- Identify gotchas and workarounds

### Step 4: Synthesize Findings

Organize findings by research question. For each question:
- **Answer**: Direct answer if found
- **Details**: Supporting information
- **Sources**: Where the information came from
- **Confidence**: How certain (high/medium/low)
- **Caveats**: Limitations or conditions

### Step 5: Create Recommendations

Based on the findings:
- **Recommended approach**: What to do
- **Rationale**: Why this approach
- **Alternatives**: Backup options
- **Risks**: What to watch out for
- **Next steps**: How to proceed

### Step 6: Return Structured Findings

Output format (returned to the calling agent):

````markdown
## Research Findings for: {TASK_TITLE}

### Question 1: {QUESTION}

**Answer**: {DIRECT_ANSWER}

**Details**: {SUPPORTING_INFORMATION}

**Sources**:
- {SOURCE_1}
- {SOURCE_2}

**Confidence**: High | Medium | Low

**Caveats**: {LIMITATIONS}

---

### Question 2: {QUESTION}

[Same structure]

---

## Recommendations

### Approach
{WHAT_TO_DO}

### Rationale
{WHY}

### Risks
- {RISK_1}: {mitigation}
- {RISK_2}: {mitigation}

### Alternatives
1. {ALTERNATIVE_1}: {when to use}
2. {ALTERNATIVE_2}: {when to use}

## Code Examples

```{language}
{EXAMPLE_CODE}
```

## Open Questions

Unanswered questions:
- {OPEN_Q1}
- {OPEN_Q2}

## References

- [{Title}]({URL})
- [{Title}]({URL})
````

---

## Example Invocation

Called by the issue agent (e.g., `/paz:plan:issue`) when the task type is `research`:

```
Input:
- RESEARCH_QUESTIONS:
  * "What NaturalLanguage framework APIs are available?"
  * "Can NER extract job titles and companies?"
  * "What's the accuracy for career narratives?"
- TASK_CONTEXT: "Stage 1 TELL requires extracting career events from CV text"
- SUGGESTED_APPROACH: "Check Apple docs, test with sample CV text"

Output: Structured findings with answers, code examples, recommendations
```

---
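For this invocation, the returned "Code Examples" section might contain a snippet like the one below. This is a minimal sketch of the kind of finding the agent could return, not a vetted answer: it assumes Apple's NaturalLanguage framework and uses a made-up CV sentence. Note that `NLTagger`'s `.nameType` scheme tags organization and place names but has no built-in tag for job titles, so those would need a custom model or pattern matching.

```swift
import NaturalLanguage

// Hypothetical CV excerpt, used only to illustrate the kind of
// code example this agent might return for the invocation above.
let cvText = "In 2018 I joined Acme Corp in Berlin as a senior iOS engineer."

// The .nameType scheme recognizes personal, place, and organization
// names; job titles are not covered by the built-in tags.
let tagger = NLTagger(tagSchemes: [.nameType])
tagger.string = cvText

let options: NLTagger.Options = [.omitPunctuation, .omitWhitespace, .joinNames]
tagger.enumerateTags(in: cvText.startIndex..<cvText.endIndex,
                     unit: .word,
                     scheme: .nameType,
                     options: options) { tag, range in
    if let tag = tag, [NLTag.organizationName, .placeName].contains(tag) {
        print("\(tag.rawValue): \(cvText[range])")
    }
    return true // keep enumerating
}
```

Whether this level of accuracy holds for full career narratives is exactly the kind of open question the findings should flag.

---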
## Design Principles

1. **Single Responsibility**: Only does research; doesn't write files
2. **Returns Data**: Outputs findings as structured text to the calling agent
3. **Evidence-Based**: All claims are backed by sources
4. **Actionable**: Provides clear recommendations
5. **Honest**: Admits when information is missing or uncertain

---

## Notes

- This agent is typically called by `/paz:plan:issue`, not directly by the user
- If called directly, it still works and prints its findings to the console
- Uses WebFetch for documentation, WebSearch for discussions
- May read local files when researching the internal codebase
- Research is cached naturally by WebFetch (15-minute cache)