Initial commit

Zhongwei Li
2025-11-30 08:41:27 +08:00
commit 1916d21d59
4 changed files with 239 additions and 0 deletions

12
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,12 @@
{
"name": "research-task",
"description": "Deep Dive Research",
"version": "1.0.0",
"author": {
"name": "Matthew Pazaryna",
"email": "[email protected]"
},
"commands": [
"./commands"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# research-task
Deep Dive Research

179
commands/task.md Normal file

@@ -0,0 +1,179 @@
---
description: Perform research for a specific task and return structured findings (called by issue agent)
category: dev
difficulty: beginner
estimated_time: instant
allowed-tools: WebFetch, WebSearch, Read, Bash
version: 1.0.0
---
# Research Task Agent
Specialized command for performing research on technical topics. Returns structured findings to the calling agent.
## Variables
- RESEARCH_QUESTIONS: (required - list of questions to answer)
- TASK_CONTEXT: (required - why this research matters)
- SUGGESTED_APPROACH: (optional - where to look)
## Workflow
### Step 1: Understand Research Scope
Parse the research questions (a structural sketch follows this list):
- Primary questions (must answer)
- Secondary questions (nice to answer)
- Context (why it matters)
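A minimal Python sketch of how the parsed scope might be held in memory; the class and field names are illustrative assumptions, not part of this command's contract:
```python
from dataclasses import dataclass, field

@dataclass
class ResearchScope:
    """Parsed research request (illustrative shape, not a required format)."""
    primary_questions: list[str]                                   # must answer
    secondary_questions: list[str] = field(default_factory=list)   # nice to answer
    context: str = ""                                              # why this research matters
    suggested_approach: str = ""                                   # optional hints on where to look

scope = ResearchScope(
    primary_questions=["What NaturalLanguage framework APIs are available?"],
    secondary_questions=["What's the accuracy for career narratives?"],
    context="Stage 1 TELL requires extracting career events from CV text",
)
```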
### Step 2: Identify Information Sources
Based on the research questions, determine which sources to consult (a routing sketch follows this list):
- **Official documentation** (e.g., Apple developer docs, API references)
- **Technical articles** (e.g., developer blogs, Medium)
- **Code examples** (e.g., GitHub, Stack Overflow)
- **Community discussions** (e.g., forums, Reddit)
- **Academic papers** (if deep technical topic)
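One way this routing could be sketched; the keyword heuristics below are assumptions for illustration only:
```python
def suggest_sources(question: str) -> list[str]:
    """Very rough routing of one question to likely source types (illustrative only)."""
    q = question.lower()
    sources = ["official documentation"]            # always worth checking first
    if any(k in q for k in ("example", "implement", "sample", "how do i")):
        sources.append("code examples (GitHub, Stack Overflow)")
    if any(k in q for k in ("accuracy", "performance", "in practice", "pitfall")):
        sources.append("community discussions (forums, Reddit)")
    if any(k in q for k in ("algorithm", "benchmark", "state of the art")):
        sources.append("academic papers")
    return sources

print(suggest_sources("What's the accuracy for career narratives?"))
# -> ['official documentation', 'community discussions (forums, Reddit)']
```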
### Step 3: Gather Information
For each source type (an illustrative fetch sketch follows these subsections):
**Documentation**:
- Use WebFetch for official docs
- Extract key concepts, APIs, limitations
- Note version/compatibility requirements
**Code Examples**:
- Search GitHub for relevant implementations
- Look for patterns and best practices
- Identify common pitfalls
**Community Knowledge**:
- WebSearch for recent discussions
- Find real-world experiences
- Identify gotchas and workarounds
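As a rough stand-in for the documentation pass (WebFetch and WebSearch are Claude tools, not Python APIs), the sketch below pulls a page with the standard library and keeps its headings as candidate key concepts; the URL would be whatever source Step 2 selected:
```python
from html.parser import HTMLParser
from urllib.request import urlopen

class HeadingCollector(HTMLParser):
    """Collects h1-h3 text as a crude list of a page's key concepts."""
    def __init__(self):
        super().__init__()
        self.in_heading = False
        self.headings: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self.in_heading = False

    def handle_data(self, data):
        if self.in_heading and data.strip():
            self.headings.append(data.strip())

def fetch_key_concepts(url: str) -> list[str]:
    parser = HeadingCollector()
    with urlopen(url) as resp:                       # plain HTTP GET
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    return parser.headings
```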
### Step 4: Synthesize Findings
Organize findings by research question:
For each question (a record sketch follows this list):
- **Answer**: Direct answer if found
- **Details**: Supporting information
- **Sources**: Where information came from
- **Confidence**: How certain (high/medium/low)
- **Caveats**: Limitations or conditions
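A minimal record sketch for one finding, mirroring the fields above (names are illustrative):
```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class Finding:
    """One answered research question, mirroring the fields listed above."""
    question: str
    answer: str                                          # direct answer if found
    details: str = ""                                    # supporting information
    sources: list[str] = field(default_factory=list)     # where the information came from
    confidence: Literal["high", "medium", "low"] = "medium"
    caveats: str = ""                                    # limitations or conditions
```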
### Step 5: Create Recommendations
Based on the findings (a companion sketch follows this list):
- **Recommended approach**: What to do
- **Rationale**: Why this approach
- **Alternatives**: Backup options
- **Risks**: What to watch out for
- **Next steps**: How to proceed
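A companion sketch for the recommendation block, alongside the `Finding` record above (again illustrative only):
```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """The recommendation block assembled from the findings."""
    approach: str                                            # what to do
    rationale: str                                           # why this approach
    alternatives: list[str] = field(default_factory=list)    # backup options
    risks: dict[str, str] = field(default_factory=dict)      # risk -> mitigation
    next_steps: list[str] = field(default_factory=list)      # how to proceed
```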
### Step 6: Return Structured Findings
Output format (returned to the calling agent; a rendering sketch follows the template):
````markdown
## Research Findings for: {TASK_TITLE}
### Question 1: {QUESTION}
**Answer**: {DIRECT_ANSWER}
**Details**:
{SUPPORTING_INFORMATION}
**Sources**:
- {SOURCE_1}
- {SOURCE_2}
**Confidence**: High | Medium | Low
**Caveats**: {LIMITATIONS}
---
### Question 2: {QUESTION}
[Same structure]
---
## Recommendations
### Approach
{WHAT_TO_DO}
### Rationale
{WHY}
### Risks
- {RISK_1}: {mitigation}
- {RISK_2}: {mitigation}
### Alternatives
1. {ALTERNATIVE_1}: {when to use}
2. {ALTERNATIVE_2}: {when to use}
## Code Examples
```{language}
{EXAMPLE_CODE}
```
## Open Questions
Unanswered questions:
- {OPEN_Q1}
- {OPEN_Q2}
## References
- [{Title}]({URL})
- [{Title}]({URL})
````
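A minimal sketch of how `Finding` records from Step 4 could be rendered into this format; it covers only the per-question section and is not the required implementation:
```python
def render_findings(task_title: str, findings: list["Finding"]) -> str:
    """Assemble the per-question part of the report above from Finding records."""
    lines = [f"## Research Findings for: {task_title}", ""]
    for i, f in enumerate(findings, start=1):
        lines += [
            f"### Question {i}: {f.question}",
            f"**Answer**: {f.answer}",
            "**Details**:",
            f.details,
            "**Sources**:",
            *[f"- {s}" for s in f.sources],
            f"**Confidence**: {f.confidence.capitalize()}",
            f"**Caveats**: {f.caveats}",
            "---",
        ]
    return "\n".join(lines)
```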
---
## Example Invocation
Called by the issue agent when the task type is `research`:
```
Input:
- RESEARCH_QUESTIONS:
* "What NaturalLanguage framework APIs are available?"
* "Can NER extract job titles and companies?"
* "What's the accuracy for career narratives?"
- TASK_CONTEXT:
"Stage 1 TELL requires extracting career events from CV text"
- SUGGESTED_APPROACH:
"Check Apple docs, test with sample CV text"
Output:
Structured findings with answers, code examples, recommendations
```
---
## Design Principles
1. **Single Responsibility**: Only does research, doesn't write files
2. **Returns Data**: Outputs findings as structured text to calling agent
3. **Evidence-Based**: All claims backed by sources
4. **Actionable**: Provides clear recommendations
5. **Honest**: Admits when information is not found or is uncertain
---
## Notes
- This agent is typically called by `/paz:plan:issue`, not directly by user
- If called directly, it will still work and output findings to the console
- Uses WebFetch for documentation, WebSearch for discussions
- May read local files when researching an internal codebase
- Research is cached naturally by WebFetch (15-minute cache)

45
plugin.lock.json Normal file

@@ -0,0 +1,45 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:mpazaryna/claude-toolkit:plugins/research-task",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "d51773dd879907815f03d3fa53d8fba98024183f",
"treeHash": "3534693686bcd634e8dc3c7af4b650340eccdab2fa62110b638750173df44ce9",
"generatedAt": "2025-11-28T10:27:11.604893Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "research-task",
"description": "Deep Dive Research",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "4a04b5c9949e45d73274eac59d955b36b16a4f4206ed6e08c505a893d6938ccb"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "2fbb3741d0dc33f7991f93a4b138273510143fc66692e022cedba800d496e0d9"
},
{
"path": "commands/task.md",
"sha256": "2b4bff6a1978b3fa456fd9a7d6944f66c6c39c805611c90b3264688e6581ff96"
}
],
"dirSha256": "3534693686bcd634e8dc3c7af4b650340eccdab2fa62110b638750173df44ce9"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}
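A minimal sketch of how a consumer of this lock file might verify the per-file hashes against a checkout; how `dirSha256`/`treeHash` is aggregated is not specified here, so the sketch stops at individual files:
```python
import hashlib
import json
from pathlib import Path

def verify_files(lock_path: str, plugin_root: str) -> bool:
    """Compare each file's sha256 in plugin.lock.json against the checkout."""
    lock = json.loads(Path(lock_path).read_text())
    ok = True
    for entry in lock["content"]["files"]:
        actual = hashlib.sha256((Path(plugin_root) / entry["path"]).read_bytes()).hexdigest()
        if actual != entry["sha256"]:
            print(f"hash mismatch: {entry['path']}")
            ok = False
    return ok
```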