Initial commit

---

`commands/agent-orchestrator-create-agent.md` (new file, 168 lines)
---
name: agent-orchestrator-create-agent
description: Creates a new specialized orchestrated agent configuration for delegating work to specialized Claude Code sessions
---

# create-orchestrated-agent

You are a specialist in creating orchestrated agent configurations. You use the user's requirements to create agent definition files in the folder `.agent-orchestrator/agents/`.

Read the following instructions from the user:

<instructions>
$ARGUMENTS
</instructions>
## Process

1. **Analyze** the user instructions carefully
2. **Understand** the purpose, domain, and functionality of the orchestrated agent to be created
3. **Clarify** any ambiguities by asking the user specific questions
4. **Generate** a concise, descriptive agent name `$AGENT_NAME` (use kebab-case, e.g., "data-analyzer", "code-reviewer")
5. **Present** the agent name to the user for approval
6. **Create** the agent directory `.agent-orchestrator/agents/$AGENT_NAME/`
7. **Draft** a system prompt that defines the agent's expertise and behavior
8. **Present** the system prompt to the user for approval
9. **Create** the required configuration file and optional system prompt file
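Steps 6 and 9 boil down to creating a directory and writing the configuration file. A minimal sketch (the agent name `code-reviewer` and its description are illustrative examples, not fixed values):

```shell
AGENT_NAME="code-reviewer"  # example name; use the approved $AGENT_NAME

# Step 6: create the agent directory
mkdir -p ".agent-orchestrator/agents/$AGENT_NAME"

# Step 9: write the required configuration file
cat > ".agent-orchestrator/agents/$AGENT_NAME/agent.json" <<'JSON'
{
  "name": "code-reviewer",
  "description": "Reviews code changes for correctness, style, and maintainability"
}
JSON
```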
## File Structure

Each agent is organized in its own directory within `.agent-orchestrator/agents/`. The directory name must match the agent name.

```
.agent-orchestrator/agents/
└── $AGENT_NAME/
    ├── agent.json              # Required: Agent configuration
    ├── agent.system-prompt.md  # Optional: System prompt (discovered by convention)
    └── agent.mcp.json          # Optional: MCP server configuration (discovered by convention)
```
### agent.json (Required)

**Location:** `.agent-orchestrator/agents/$AGENT_NAME/agent.json`

**Purpose:** Defines the agent metadata.

**Schema:**
```json
{
  "name": "string (required) - Agent identifier matching folder name",
  "description": "string (required) - Brief description of agent's purpose and expertise"
}
```

**Example:**
```json
{
  "name": "firstspirit-architect",
  "description": "Specialist in FirstSpirit CMS architecture, templating, and SiteArchitect development"
}
```

**Key Patterns:**
- `name` field must match the folder name
- `description` should be a single clear sentence describing the agent's domain expertise and when to use it; mention the expected input format concisely

---
### agent.system-prompt.md (Optional)

**Location:** `.agent-orchestrator/agents/$AGENT_NAME/agent.system-prompt.md`

**Purpose:** Defines the agent's persona, expertise, behavior, tools, and input/output expectations. This file is discovered by convention - no need to reference it in agent.json.

**Structure Template:**
````markdown
You are a [ROLE/TITLE] with deep expertise in [DOMAIN/TECHNOLOGY].

Your expertise includes:
- [Specific skill or knowledge area 1]
- [Specific skill or knowledge area 2]

[OPTIONAL: **IMPORTANT:** Any critical instructions, tool requirements, or workflow requirements]

[IF APPLICABLE: Instructions for using specific skills or tools]

```
[Skill or tool invocation example]
```

## Workflow Guidelines

When working on [TASK TYPE]:
1. [Step-by-step workflow or best practices]
2. [Key considerations]
3. [Quality standards]
4. [Documentation requirements]
5. [Naming conventions or standards]

## Output format

[Describe expected output format, structure, and any file creation requirements]

## Notes

[IF APPLICABLE: Reference to available tools, documentation, or resources]

Be practical, focus on [QUALITY ATTRIBUTES like maintainability, performance, best practices].
````
**Criteria for Effective System Prompts:**

1. **Role Definition**
   - Clearly state the agent's role and domain of expertise
   - Specify the level of expertise (specialist, architect, analyst, etc.)

2. **Scope of Expertise**
   - List specific capabilities and knowledge areas
   - Include relevant technologies, frameworks, or methodologies
   - Define boundaries of what the agent should/shouldn't do

3. **Tool & Skill Integration**
   - Explicitly mention required tools or skills the agent must use
   - Provide exact invocation syntax
   - Explain when and why to use specific tools

4. **Workflow & Process**
   - Define clear step-by-step processes for common tasks
   - Include best practices and quality standards
   - Specify any required conventions (naming, structure, etc.)

5. **Input Expectations**
   - Describe the format and structure of expected inputs
   - For longer context, specify that the agent should expect file references
   - Keep prompts concise by referencing files rather than embedding large content

6. **Output Requirements**
   - Define the format and structure of expected outputs
   - Outputs should be brief and action-oriented
   - For lengthy results, instruct the agent to create files and provide references
   - Specify any required documentation or handover format

7. **Behavioral Guidelines**
   - Include personality traits (practical, thorough, concise, etc.)
   - Specify communication style
   - Define how to handle ambiguity or missing information
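As an illustration of these criteria, a shortened, hypothetical system prompt for the `firstspirit-architect` agent from the earlier agent.json example might look like:

```markdown
You are a senior FirstSpirit architect with deep expertise in FirstSpirit CMS templating and SiteArchitect development.

Your expertise includes:
- Template development (page, section, and format templates)
- Content modeling and SiteArchitect project structure

When working on template changes:
1. Analyze the existing project structure first
2. Follow the project's naming conventions

## Output format

Return a brief, action-oriented summary; write lengthy results to files and reference them.

Be practical, focus on maintainability and established FirstSpirit best practices.
```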
---
### agent.mcp.json (Optional)

**Location:** `.agent-orchestrator/agents/$AGENT_NAME/agent.mcp.json`

**Purpose:** Configures MCP (Model Context Protocol) server integration for agents that require external tool access. This file is discovered by convention - no need to reference it in agent.json.

**When to Use:**
Only create this file if your agent needs access to specialized tools through MCP servers (e.g., browser automation, database access, API integrations).

**Format:**
Standard MCP server configuration as documented in the MCP specification. The configuration is automatically passed to the Claude CLI via the `--mcp-config` flag when the agent is used.
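As an illustrative sketch only (the server name, command, and package here are hypothetical examples; the authoritative schema is the MCP specification):

```json
{
  "mcpServers": {
    "browser": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```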
**Note:**
Most agents do not require MCP configurations. Only add this file when your agent specifically needs external tool capabilities.

---
## Notes

- Agent names should be descriptive and use kebab-case
- System prompts should be focused and actionable
- Consider reusability: create agents for specific domains, not one-off tasks
- Test the agent with sample inputs before finalizing
---

`commands/agent-orchestrator-create-runtime-report.md` (new file, 141 lines)
# Create Runtime Report from Agent Sessions

Generate a comprehensive runtime analysis report from CLI agent session files, accounting for parallel execution.

## Instructions

Follow these steps to create a runtime report:

### Step 1: Get Wall Clock Time

Get file timestamps to determine when the workflow started and ended:
```bash
stat -f "%Sm %N" -t "%Y-%m-%d %H:%M:%S" .agent-orchestrator/agent-sessions/*.jsonl | sort
```

The earliest timestamp is the start time; the latest is the end time. Note that `stat -f` is the BSD/macOS form; on GNU/Linux, use `stat -c "%y %n"` instead.
### Step 2: Extract All Session Data

Use this Python script to extract timing data from all sessions:
```bash
python3 << 'EOF'
import json
from pathlib import Path

sessions_dir = Path(".agent-orchestrator/agent-sessions")
session_files = sorted(sessions_dir.glob("*.jsonl"))

results = []

for session_file in session_files:
    name = session_file.stem
    with open(session_file, 'r') as f:
        for line in f:
            try:
                data = json.loads(line)
                if data.get('type') == 'result':
                    results.append({
                        'name': name,
                        'duration_ms': data.get('duration_ms', 0),
                        'duration_s': data.get('duration_ms', 0) / 1000,
                        'api_time_ms': data.get('duration_api_ms', 0),
                        'api_time_s': data.get('duration_api_ms', 0) / 1000,
                        'turns': data.get('num_turns', 0),
                        'cost': data.get('total_cost_usd', 0),
                    })
                    break
            except json.JSONDecodeError:
                continue

for result in results:
    print(f"=== {result['name']} ===")
    print(f"Duration: {result['duration_ms']}ms ({result['duration_s']:.1f}s)")
    print(f"API Time: {result['api_time_ms']}ms ({result['api_time_s']:.1f}s)")
    print(f"Turns: {result['turns']}")
    print(f"Cost: ${result['cost']:.5f}")
    print()

print("=== TOTALS ===")
total_duration = sum(r['duration_ms'] for r in results)
total_api_time = sum(r['api_time_ms'] for r in results)
total_cost = sum(r['cost'] for r in results)
total_turns = sum(r['turns'] for r in results)

print(f"Total Duration: {total_duration}ms ({total_duration/1000:.1f}s, {total_duration/60000:.1f}m)")
print(f"Total API Time: {total_api_time}ms ({total_api_time/1000:.1f}s, {total_api_time/60000:.1f}m)")
print(f"Total Turns: {total_turns}")
print(f"Total Cost: ${total_cost:.5f}")
print(f"Number of Sessions: {len(results)}")
EOF
```
### Step 3: Generate Formatted Report

Use the timestamps from Step 1 and the data from Step 2 to create the final report. Update the start/end times and totals with your actual values:
```bash
python3 << 'EOF'
from datetime import datetime

# UPDATE THESE with actual timestamps from Step 1
start_time = datetime.strptime("2025-11-02 17:35:23", "%Y-%m-%d %H:%M:%S")
end_time = datetime.strptime("2025-11-02 18:04:46", "%Y-%m-%d %H:%M:%S")

wall_clock_seconds = (end_time - start_time).total_seconds()
wall_clock_minutes = wall_clock_seconds / 60

# UPDATE THESE with totals from Step 2
total_agent_time_s = 2837.3
total_agent_time_m = 47.3
total_cost = 2.65816
total_turns = 233

efficiency_gain = ((total_agent_time_m - wall_clock_minutes) / total_agent_time_m) * 100

print("# Runtime Analysis Report")
print()
print("## Wall Clock Time")
print(f"- **Start**: {start_time.strftime('%Y-%m-%d %H:%M:%S')}")
print(f"- **End**: {end_time.strftime('%Y-%m-%d %H:%M:%S')}")
print(f"- **Total**: {wall_clock_minutes:.1f} minutes ({wall_clock_seconds:.0f} seconds)")
print()
print("## Overall Summary")
print()
print(f"- **Total Agent Execution Time**: {total_agent_time_s:.1f}s ({total_agent_time_m:.1f} min)")
print(f"- **Actual Wall Clock Time**: {wall_clock_seconds:.0f}s ({wall_clock_minutes:.1f} min)")
print(f"- **Total Cost**: ${total_cost:.5f}")
print(f"- **Total Conversation Turns**: {total_turns}")
print(f"- **Efficiency Gain**: {efficiency_gain:.1f}% faster due to parallel execution")
print(f"- **Time Saved**: {total_agent_time_m - wall_clock_minutes:.1f} minutes")
EOF
```
## Report Output Structure

The report should include:

1. **Wall Clock Time**: Start, end, and total elapsed time
2. **Phase-by-Phase Breakdown**: Individual session metrics with completion timestamps
3. **Overall Summary**:
   - Total agent execution time (sum of all sessions)
   - Actual wall clock time (real elapsed time)
   - Total cost and conversation turns
   - Efficiency gain percentage from parallel execution
   - Time saved by running agents in parallel
## Key Metrics

- **Wall Clock Time**: Real-world time from start to finish
- **Agent Execution Time**: Sum of all individual agent runtimes
- **Efficiency Gain**: Percentage reduction due to parallel execution
- **Time Saved**: Difference between sequential and parallel execution
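To make the arithmetic behind these metrics concrete, here is the computation using the example numbers from Step 3 (47.3 minutes of summed agent time against roughly 29.4 minutes of wall clock time):

```python
# Illustrative numbers, matching the Step 3 example
total_agent_time_m = 47.3  # sum of all individual agent runtimes (minutes)
wall_clock_m = 29.4        # real elapsed time (minutes)

# Percentage reduction achieved by running agents in parallel
efficiency_gain = (total_agent_time_m - wall_clock_m) / total_agent_time_m * 100
time_saved = total_agent_time_m - wall_clock_m

print(f"Efficiency gain: {efficiency_gain:.1f}%")  # → 37.8%
print(f"Time saved: {time_saved:.1f} min")         # → 17.9 min
```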
## Notes

- Use a Python heredoc (`<< 'EOF'`) to avoid bash quoting issues
- The script automatically handles missing or incomplete session files
- An efficiency gain > 0% indicates agents ran in parallel
- Update the timestamps and totals in Step 3 with actual values from Steps 1 and 2
---

`commands/agent-orchestrator-extract-token-usage.md` (new file, 134 lines)
# Extract Token Usage Statistics from Agent Sessions

Extract context window sizes and token usage statistics from CLI agent session files.

## Context Window Analysis

This script calculates the **context window size** for each agent by summing `input_tokens + cache_creation_input_tokens` across all invocations. This represents the fresh token budget used by each agent.
```bash
python3 << 'EOF'
import json
from pathlib import Path

sessions_dir = Path(".agent-orchestrator/agent-sessions")
results = []

for session_file in sorted(sessions_dir.glob("*.jsonl")):
    agent_name = session_file.stem
    total_fresh_tokens = 0
    num_invocations = 0
    invocation_details = []

    with open(session_file, 'r') as f:
        for line in f:
            if '"type":"result"' not in line:
                continue
            try:
                data = json.loads(line)
                if data.get('type') == 'result':
                    num_invocations += 1
                    usage = data.get('usage', {})
                    input_tokens = usage.get('input_tokens', 0)
                    cache_creation = usage.get('cache_creation_input_tokens', 0)
                    fresh_tokens = input_tokens + cache_creation
                    total_fresh_tokens += fresh_tokens
                    invocation_details.append({
                        'num': num_invocations,
                        'input': input_tokens,
                        'cache_creation': cache_creation,
                        'fresh_tokens': fresh_tokens
                    })
            except json.JSONDecodeError:
                continue

    if num_invocations > 0:
        results.append({
            'name': agent_name,
            'total_fresh_tokens': total_fresh_tokens,
            'num_invocations': num_invocations,
            'invocations': invocation_details
        })

# Detailed per-agent breakdown
print("=" * 80)
print("CONTEXT WINDOW ANALYSIS: FRESH TOKENS (input + cache_creation)")
print("=" * 80)
print()

for result in results:
    print(f"=== {result['name']} ===")
    print(f"Number of Invocations: {result['num_invocations']}")

    if result['num_invocations'] > 1:
        print("\nPer-Invocation Breakdown:")
        for inv in result['invocations']:
            print(f"  Invocation #{inv['num']}:")
            print(f"    Input tokens:          {inv['input']:>10,}")
            print(f"    Cache creation tokens: {inv['cache_creation']:>10,}")
            print(f"    Fresh tokens total:    {inv['fresh_tokens']:>10,}")

    print(f"\nTotal Fresh Tokens (Context Window): {result['total_fresh_tokens']:>10,}")
    print()

# Summary table
print("=" * 80)
print("SUMMARY TABLE")
print("=" * 80)
print()
print("| Agent | Invocations | Total Fresh Tokens (Context) |")
print("|-------|------------:|-----------------------------:|")

for result in results:
    print(f"| {result['name']} | {result['num_invocations']} | "
          f"{result['total_fresh_tokens']:,} |")

print()

# Aggregate statistics
print("=" * 80)
print("AGGREGATE STATISTICS")
print("=" * 80)
total_agents = len(results)
total_invocations = sum(r['num_invocations'] for r in results)
total_fresh_tokens = sum(r['total_fresh_tokens'] for r in results)
avg_fresh_tokens = total_fresh_tokens / total_agents if total_agents > 0 else 0
max_fresh_tokens = max(r['total_fresh_tokens'] for r in results) if results else 0
min_fresh_tokens = min(r['total_fresh_tokens'] for r in results) if results else 0

print(f"Total Agents: {total_agents}")
print(f"Total Invocations: {total_invocations}")
print(f"Total Fresh Tokens: {total_fresh_tokens:,}")
print(f"Average Fresh Tokens/Agent: {avg_fresh_tokens:,.0f}")
print(f"Maximum Fresh Tokens: {max_fresh_tokens:,}")
print(f"Minimum Fresh Tokens: {min_fresh_tokens:,}")
print()
EOF
```
## Key Metrics

- **Fresh Tokens (Context Window)**: `input_tokens + cache_creation_input_tokens`
  - Represents the actual context window size used by the agent
  - Accumulated across all invocations if an agent was called multiple times
  - Excludes cached reads (which don't count toward new context)

- **Invocations**: Number of times an agent was executed
  - Agents may be invoked multiple times if they need additional iterations
  - Context window is summed across all invocations for total agent workload
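The fresh-token arithmetic for a single result record looks like this (the record below is hypothetical; the field names follow the `usage` object read by the script above):

```python
# Hypothetical usage object from one result line of a session JSONL file
usage = {
    "input_tokens": 1_200,
    "cache_creation_input_tokens": 45_000,
    "cache_read_input_tokens": 180_000,  # cached reads: excluded from fresh tokens
}

# Fresh tokens = new context consumed by this invocation
fresh_tokens = usage["input_tokens"] + usage["cache_creation_input_tokens"]
print(fresh_tokens)  # → 46200
```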
## Assessment Guidelines

Use these context window sizes to determine if tasks were appropriately sized:

- **< 50K tokens**: Well-sized task, plenty of headroom
- **50K - 100K tokens**: Moderate task size, comfortable range
- **100K - 150K tokens**: Large task, approaching upper limits
- **> 150K tokens**: Very large task, consider splitting into smaller agents
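These bands can be expressed as a small helper (the function name and labels are illustrative, not part of any existing tooling):

```python
def assess_context_size(fresh_tokens: int) -> str:
    """Classify a context window size per the assessment guideline bands."""
    if fresh_tokens < 50_000:
        return "well-sized: plenty of headroom"
    if fresh_tokens < 100_000:
        return "moderate: comfortable range"
    if fresh_tokens <= 150_000:
        return "large: approaching upper limits"
    return "very large: consider splitting into smaller agents"

print(assess_context_size(46_200))  # → well-sized: plenty of headroom
```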
## Notes

- The script efficiently reads only result lines from the JSONL files
- It handles multiple invocations per agent automatically
- All calculations handle division by zero safely
- Output includes both a detailed breakdown and a summary table
---

`commands/agent-orchestrator-init.md` (new file, 11 lines)
---
name: agent-orchestrator-init
description: Primes the agent to orchestrate other agents using the Agent Orchestrator skill.
---

You are an agent orchestrator. You can create and manage specialized agents to perform specific tasks, delegate work to them, and use multiple agents to collaborate on complex tasks.

Your initial task:

* Load the skill agent-orchestrator.
* Figure out which agents are available - don't return them, just figure it out.

Answer only with "READY" when you have loaded the skill and figured out which agents are available.