Initial commit

.claude-plugin/plugin.json (Normal file, 19 lines)
@@ -0,0 +1,19 @@
{
  "name": "betty-framework",
  "description": "Betty Framework is RiskExec's system for structured, auditable AI-assisted engineering.\nWhere Claude Code provides the runtime, Betty adds methodology, orchestration, and governance—\nturning raw agent capability into a repeatable, enterprise-grade engineering discipline.\n",
  "version": "1.0.0",
  "author": {
    "name": "RiskExec",
    "email": "platform@riskexec.com",
    "url": "https://github.com/epieczko/betty"
  },
  "skills": [
    "./skills"
  ],
  "agents": [
    "./agents"
  ],
  "commands": [
    "./commands"
  ]
}

README.md (Normal file, 5 lines)
@@ -0,0 +1,5 @@
# betty-framework

Betty Framework is RiskExec's system for structured, auditable AI-assisted engineering.
Where Claude Code provides the runtime, Betty adds methodology, orchestration, and governance—
turning raw agent capability into a repeatable, enterprise-grade engineering discipline.

agents/README.md (Normal file, 203 lines)
@@ -0,0 +1,203 @@
# Betty Framework Agents

## ⚙️ **Integration Note: Claude Code Plugin System**

**Betty agents are Claude Code plugins.** You do not invoke agents via standalone CLI commands (`betty` or direct Python scripts). Instead:

- **Claude Code serves as the execution environment** for all agent invocation
- Each agent is registered through its `agent.yaml` manifest
- Agents become automatically discoverable and executable through Claude Code's natural language interface
- All routing, validation, and execution are handled by Claude Code via MCP (Model Context Protocol)

**No separate installation step is needed** beyond plugin registration in your Claude Code environment.

---

This directory contains agent manifests for the Betty Framework.

## What are Agents?

Agents are intelligent orchestrators that compose skills with reasoning, context awareness, and error recovery. Unlike workflows (which follow fixed sequential steps) or skills (which execute atomic operations), agents can:

- **Reason** about requirements and choose appropriate strategies
- **Iterate** based on feedback and validation results
- **Recover** from errors with intelligent retry logic
- **Adapt** their approach based on context

## Directory Structure

Each agent has its own directory containing:
```
agents/
├── <agent-name>/
│   ├── agent.yaml     # Agent manifest (required)
│   ├── README.md      # Documentation (auto-generated)
│   └── tests/         # Agent behavior tests (optional)
│       └── test_agent.py
```

## Creating an Agent

### Using meta.agent (Recommended)

**Via Claude Code:**
```
"Use meta.agent to create a my.agent that does [description],
with capabilities [list], using skills [skill.one, skill.two],
and iterative reasoning mode"
```

**Direct execution (development/testing):**
```bash
cat > /tmp/my_agent.md <<'EOF'
# Name: my.agent
# Purpose: What your agent does
# Capabilities: First capability, Second capability
# Skills: skill.one, skill.two
# Reasoning: iterative
EOF
python agents/meta.agent/meta_agent.py /tmp/my_agent.md
```

### Manual Creation

1. Create agent directory:
```bash
mkdir -p agents/my.agent
```

2. Create agent manifest (`agents/my.agent/agent.yaml`):
```yaml
name: my.agent
version: 0.1.0
description: "What your agent does"

capabilities:
  - First capability
  - Second capability

skills_available:
  - skill.one
  - skill.two

reasoning_mode: iterative  # or oneshot

status: draft
```

3. Validate and register:

**Via Claude Code:**
```
"Use agent.define to validate agents/my.agent/agent.yaml"
```

**Direct execution (development/testing):**
```bash
python skills/agent.define/agent_define.py agents/my.agent/agent.yaml
```

## Agent Manifest Schema

### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Unique identifier (e.g., `api.designer`) |
| `version` | string | Semantic version (e.g., `0.1.0`) |
| `description` | string | Human-readable purpose statement |
| `capabilities` | array[string] | List of what the agent can do |
| `skills_available` | array[string] | Skills the agent can orchestrate |
| `reasoning_mode` | enum | `iterative` or `oneshot` |

### Optional Fields

| Field | Type | Description |
|-------|------|-------------|
| `status` | enum | `draft`, `active`, `deprecated`, `archived` |
| `context_requirements` | object | Structured context the agent needs |
| `workflow_pattern` | string | Narrative description of reasoning process |
| `example_task` | string | Concrete usage example |
| `error_handling` | object | Retry strategies and failure handling |
| `output` | object | Expected success/failure outputs |
| `tags` | array[string] | Categorization tags |
| `dependencies` | array[string] | Other agents or schemas |

## Reasoning Modes

### Iterative
Agent can retry with feedback, refine based on errors, and improve incrementally.

**Use for:**
- Validation loops (API design with validation feedback)
- Refinement tasks (code optimization)
- Error correction (fixing compilation errors)

**Example:**
```yaml
reasoning_mode: iterative

error_handling:
  max_retries: 3
  on_validation_failure: "Analyze errors, refine spec, retry"
```

### Oneshot
Agent executes once without retry.

**Use for:**
- Analysis and reporting (compatibility checks)
- Deterministic transformations (code generation)
- Tasks where retry doesn't help (documentation)

**Example:**
```yaml
reasoning_mode: oneshot

output:
  success:
    - Analysis report
  failure:
    - Error details
```
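
To make the two modes concrete, here is a minimal sketch of how a runner might branch on `reasoning_mode`. It is illustrative only: `run_skill` and `refine_context` are hypothetical helpers, not part of Betty's runtime.

```python
# Minimal sketch; run_skill and refine_context are hypothetical helpers,
# not functions from the Betty codebase.
def run_agent(manifest: dict, context: dict) -> dict:
    retries = manifest.get("error_handling", {}).get("max_retries", 3)

    if manifest["reasoning_mode"] == "oneshot":
        # Execute exactly once and return whatever came back.
        return run_skill(manifest["skills_available"][0], context)

    # iterative: retry with feedback until validation passes or retries run out
    result = {}
    for _ in range(retries):
        result = run_skill(manifest["skills_available"][0], context)
        if result.get("valid"):
            break
        context = refine_context(context, result.get("errors", []))
    return result
```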

## Example Agents

See the documentation for example agent manifests:
- [API Designer](../docs/agent-schema-reference.md#example-iterative-refinement-agent) - Iterative API design
- [Compliance Checker](../docs/agent-schema-reference.md#example-multi-domain-agent) - Multi-domain compliance

## Validation

All agent manifests are automatically validated for:
- Required fields presence
- Name format (`^[a-z][a-z0-9._-]*$`)
- Version format (semantic versioning)
- Reasoning mode enum (`iterative` or `oneshot`)
- Skill references (all skills must exist in skill registry)
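
These checks are straightforward to reproduce in a pre-commit hook or CI job. The following is a rough sketch assuming only PyYAML is installed; the canonical implementation is the `agent.define` skill:

```python
import re
import yaml

NAME_RE = re.compile(r"^[a-z][a-z0-9._-]*$")
SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")
REQUIRED = ["name", "version", "description", "capabilities",
            "skills_available", "reasoning_mode"]

def validate_manifest(path: str, known_skills: set) -> list:
    """Return a list of validation errors; an empty list means the manifest passed."""
    with open(path) as f:
        manifest = yaml.safe_load(f)
    errors = [f"missing required field: {k}" for k in REQUIRED if k not in manifest]
    if errors:
        return errors
    if not NAME_RE.match(manifest["name"]):
        errors.append(f"invalid name: {manifest['name']}")
    if not SEMVER_RE.match(str(manifest["version"])):
        errors.append(f"version is not semantic: {manifest['version']}")
    if manifest["reasoning_mode"] not in ("iterative", "oneshot"):
        errors.append(f"unknown reasoning_mode: {manifest['reasoning_mode']}")
    errors += [f"unregistered skill: {s}"
               for s in manifest["skills_available"] if s not in known_skills]
    return errors
```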

## Registry

Validated agents are registered in `/registry/agents.json`:
```json
{
  "registry_version": "1.0.0",
  "generated_at": "2025-10-23T00:00:00Z",
  "agents": [
    {
      "name": "api.designer",
      "version": "0.1.0",
      "description": "Design RESTful APIs...",
      "reasoning_mode": "iterative",
      "skills_available": ["api.define", "api.validate"],
      "status": "draft"
    }
  ]
}
```
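
Because the registry is plain JSON, other tooling can discover agents in a few lines. A small sketch using only the fields shown above:

```python
import json

def find_agents(registry_path: str, skill: str) -> list:
    """Return names of registered agents that can orchestrate the given skill."""
    with open(registry_path) as f:
        registry = json.load(f)
    return [agent["name"] for agent in registry["agents"]
            if skill in agent.get("skills_available", [])]

# Example: find_agents("registry/agents.json", "api.validate")
# -> ["api.designer", ...]
```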

## See Also

- [Agent Schema Reference](../docs/agent-schema-reference.md) - Complete field reference
- [Betty Architecture](../docs/betty-architecture.md) - Five-layer architecture
- [Agent Implementation Plan](../docs/agent-define-implementation-plan.md) - Implementation details

agents/ai.orchestrator/README.md (Normal file, 45 lines)
@@ -0,0 +1,45 @@
# Ai.Orchestrator Agent

Orchestrates AI/ML workflows including model training, evaluation, and deployment

## Purpose

This orchestrator agent coordinates complex AI workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.

## Capabilities

- Coordinate meta-agent creation and composition
- Manage skill and agent generation workflows
- Orchestrate AI-powered automation
- Handle agent compatibility and optimization
- Coordinate marketplace publishing

## Available Skills

- `agent.compose`
- `agent.define`
- `agent.run`
- `generate.docs`
- `generate.marketplace`
- `meta.compatibility`

## Usage

This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries

## Status

**Generated**: Auto-generated from taxonomy gap analysis

## Next Steps

- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices

agents/ai.orchestrator/agent.yaml (Normal file, 55 lines)
@@ -0,0 +1,55 @@
name: ai.orchestrator
version: 0.1.0
description: Orchestrates AI/ML workflows including model training, evaluation, and
  deployment
capabilities:
- Coordinate meta-agent creation and composition
- Manage skill and agent generation workflows
- Orchestrate AI-powered automation
- Handle agent compatibility and optimization
- Coordinate marketplace publishing
skills_available:
- agent.compose
- agent.define
- agent.run
- generate.docs
- generate.marketplace
- meta.compatibility
reasoning_mode: iterative
tags:
- ai
- orchestration
- meta
- automation
workflow_pattern: '1. Analyze incoming request and requirements

  2. Identify relevant ai skills and workflows

  3. Compose multi-step execution plan

  4. Execute skills in coordinated sequence

  5. Validate intermediate results

  6. Handle errors and retry as needed

  7. Return comprehensive results'
example_task: "Input: \"Complete ai workflow from start to finish\"\n\nAgent will:\n\
  1. Break down the task into stages\n2. Select appropriate skills for each stage\n\
  3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n4. Monitor\
  \ progress and handle failures\n5. Generate comprehensive reports"
error_handling:
  timeout_seconds: 300
  retry_strategy: exponential_backoff
  max_retries: 3
output:
  success:
  - Ai workflow results
  - Execution logs and metrics
  - Validation reports
  - Generated artifacts
  failure:
  - Error details and stack traces
  - Partial results (if available)
  - Remediation suggestions
status: generated

agents/api.analyzer/README.md (Normal file, 302 lines)
@@ -0,0 +1,302 @@
# api.analyzer Agent

## Purpose

**api.analyzer** is a specialized agent that analyzes API specifications for backward compatibility and breaking changes between versions.

This agent provides detailed compatibility reports, identifies breaking vs non-breaking changes, and suggests migration paths for consumers when breaking changes are unavoidable.

## Behavior

- **Reasoning Mode**: `oneshot` – The agent executes once without retries, as compatibility analysis is deterministic
- **Capabilities**:
  - Detect breaking changes between API versions
  - Generate detailed compatibility reports
  - Identify removed or modified endpoints
  - Suggest migration paths for breaking changes
  - Validate API evolution best practices

## Skills Used

The agent has access to the following skills:

| Skill | Purpose |
|-------|---------|
| `api.compatibility` | Compares two API spec versions and detects breaking changes |
| `api.validate` | Validates individual specs for well-formedness |

## Workflow Pattern

The agent follows this straightforward pattern:

```
1. Load old and new API specifications
2. Run comprehensive compatibility analysis
3. Categorize changes as breaking or non-breaking
4. Generate detailed report with migration recommendations
5. Return results (no retry needed)
```
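
Step 3 is essentially a structural diff of the two parsed specs. Below is a simplified sketch of one rule (removed endpoints), assuming both specs are already loaded as dicts; the full rule set lives in the `api.compatibility` skill:

```python
def removed_endpoints(old_spec: dict, new_spec: dict) -> list:
    """Flag operations present in the old spec but missing from the new one.

    Removing an endpoint always breaks existing consumers, so every hit
    is reported as a high-severity breaking change.
    """
    breaking = []
    for path, operations in old_spec.get("paths", {}).items():
        new_operations = new_spec.get("paths", {}).get(path, {})
        for method in operations:
            if method not in new_operations:
                breaking.append({
                    "type": "endpoint_removed",
                    "endpoint": f"{method.upper()} {path}",
                    "severity": "high",
                })
    return breaking
```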

## Manifest Fields (Quick Reference)

```yaml
name: api.analyzer
version: 0.1.0
reasoning_mode: oneshot
skills_available:
  - api.compatibility
  - api.validate
status: draft
```

## Usage

This agent is invoked through a command or workflow:

### Via Slash Command

```bash
# Assuming /api-compatibility command is registered to use this agent
/api-compatibility specs/user-service-v1.yaml specs/user-service-v2.yaml
```

### Via Workflow

Include the agent in a workflow YAML:

```yaml
# workflows/check_api_compatibility.yaml
steps:
  - agent: api.analyzer
    input:
      old_spec_path: "specs/user-service-v1.0.0.yaml"
      new_spec_path: "specs/user-service-v2.0.0.yaml"
      fail_on_breaking: true
```

## Context Requirements

The agent expects the following context:

| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `old_spec_path` | string | Path to the previous/old API specification | `"specs/api-v1.0.0.yaml"` |
| `new_spec_path` | string | Path to the new/updated API specification | `"specs/api-v2.0.0.yaml"` |
| `fail_on_breaking` | boolean | Whether to fail (exit non-zero) if breaking changes detected | `true` or `false` |

## Example Task

**Input**:
```
"Compare user-service v1.0.0 with v2.0.0 for breaking changes"
```

**Agent execution**:

1. **Load both specifications**:
   - Old: `specs/user-service-v1.0.0.openapi.yaml`
   - New: `specs/user-service-v2.0.0.openapi.yaml`

2. **Analyze endpoint changes**:
   - ✅ **Added**: `GET /users/{id}/preferences` (non-breaking)
   - ❌ **Removed**: `DELETE /users/{id}/avatar` (breaking)
   - ⚠️ **Modified**: `POST /users` now requires additional field `email_verified` (breaking)

3. **Check for breaking schema changes**:
   - ❌ Removed property: `User.phoneNumber` (breaking)
   - ✅ Added optional property: `User.preferences` (non-breaking)
   - ❌ Changed property type: `User.age` from `integer` to `string` (breaking)

4. **Identify parameter or response format changes**:
   - ❌ Query parameter `filter` changed from optional to required in `GET /users` (breaking)
   - ✅ Response includes new optional field `User.last_login` (non-breaking)

5. **Generate compatibility report**:
   ```json
   {
     "compatible": false,
     "breaking_changes": [
       {
         "type": "endpoint_removed",
         "endpoint": "DELETE /users/{id}/avatar",
         "severity": "high",
         "migration": "Use PUT /users/{id} with avatar=null instead"
       },
       {
         "type": "required_field_added",
         "location": "POST /users request body",
         "field": "email_verified",
         "severity": "high",
         "migration": "Clients must now provide email_verified field"
       },
       {
         "type": "property_removed",
         "schema": "User",
         "property": "phoneNumber",
         "severity": "medium",
         "migration": "Use new phone_contacts array instead"
       },
       {
         "type": "type_changed",
         "schema": "User",
         "property": "age",
         "old_type": "integer",
         "new_type": "string",
         "severity": "high",
         "migration": "Convert age to string format"
       }
     ],
     "non_breaking_changes": [
       {
         "type": "endpoint_added",
         "endpoint": "GET /users/{id}/preferences"
       },
       {
         "type": "optional_field_added",
         "schema": "User",
         "property": "preferences"
       },
       {
         "type": "response_field_added",
         "endpoint": "GET /users/{id}",
         "field": "last_login"
       }
     ],
     "change_summary": {
       "breaking": 4,
       "additions": 2,
       "modifications": 2,
       "removals": 2
     }
   }
   ```

6. **Provide migration recommendations**:
   ```markdown
   ## Migration Guide: v1.0.0 → v2.0.0

   ### Breaking Changes

   1. **Removed endpoint: DELETE /users/{id}/avatar**
      - **Impact**: High - Clients using this endpoint will fail
      - **Migration**: Use PUT /users/{id} with avatar=null instead
      - **Effort**: Low

   2. **New required field: email_verified in POST /users**
      - **Impact**: High - All user creation requests must include this field
      - **Migration**: Update client code to provide email_verified boolean
      - **Effort**: Medium

   3. **Property removed: User.phoneNumber**
      - **Impact**: Medium - Clients reading this field will get undefined
      - **Migration**: Use User.phone_contacts array instead
      - **Effort**: Medium

   4. **Type changed: User.age (integer → string)**
      - **Impact**: High - Type mismatch will cause deserialization errors
      - **Migration**: Update models to use string type and convert existing data
      - **Effort**: High

   ### Recommended Approach
   1. Implement migration layer for 2 versions
   2. Communicate breaking changes to consumers 30 days in advance
   3. Provide backward-compatible endpoints during transition period
   4. Monitor usage of deprecated endpoints
   ```

## Error Handling

| Scenario | Timeout | Behavior |
|----------|---------|----------|
| Spec load failure | N/A | Return error with file path details |
| Comparison failure | N/A | Return partial analysis with error context |
| Timeout | 120 seconds | Fails after 2 minutes |

### On Success

```json
{
  "status": "success",
  "outputs": {
    "compatibility_report": {
      "compatible": false,
      "breaking_changes": [...],
      "non_breaking_changes": [...],
      "change_summary": {...}
    },
    "migration_recommendations": "...",
    "api_diff_visualization": "..."
  }
}
```

### On Failure

```json
{
  "status": "failed",
  "error_details": {
    "error": "Failed to load old spec",
    "file_path": "specs/user-service-v1.0.0.yaml",
    "details": "File not found"
  },
  "partial_analysis": null,
  "suggested_fixes": [
    "Verify file path exists",
    "Check file permissions"
  ]
}
```

## Use Cases

### 1. Pre-Release Validation

Run before releasing a new API version to ensure backward compatibility:

```yaml
# workflows/validate_release.yaml
steps:
  - agent: api.analyzer
    input:
      old_spec_path: "specs/production/api-v1.yaml"
      new_spec_path: "specs/staging/api-v2.yaml"
      fail_on_breaking: true
```

### 2. Continuous Integration

Integrate into CI/CD to prevent accidental breaking changes:

```yaml
# .github/workflows/api-check.yml
- name: Check API Compatibility
  run: |
    # Agent runs via workflow.compose
    python skills/workflow.compose/workflow_compose.py \
      workflows/check_api_compatibility.yaml
```

### 3. Documentation Generation

Generate migration guides automatically:

```bash
# Use agent output to create migration documentation
/api-compatibility old.yaml new.yaml > migration-guide.md
```

## Status

**Draft** – This agent is under development and not yet marked active in the registry.

## Related Documentation

- [Agents Overview](../../docs/betty-architecture.md#layer-2-agents-reasoning-layer) – Understanding agents in Betty's architecture
- [Agent Schema Reference](../../docs/agent-schema-reference.md) – Agent manifest fields and structure
- [api.compatibility SKILL.md](../../skills/api.compatibility/SKILL.md) – Underlying compatibility check skill
- [API-Driven Development](../../docs/api-driven-development.md) – Full API workflow including compatibility checks

## Version History

- **0.1.0** (Oct 2025) – Initial draft implementation with oneshot analysis pattern

agents/api.analyzer/agent.yaml (Normal file, 65 lines)
@@ -0,0 +1,65 @@
name: api.analyzer
version: 0.1.0
description: "Analyze API specifications for backward compatibility and breaking changes"

capabilities:
  - Detect breaking changes between API versions
  - Generate detailed compatibility reports
  - Identify removed or modified endpoints
  - Suggest migration paths for breaking changes
  - Validate API evolution best practices

skills_available:
  - api.compatibility
  - api.validate

reasoning_mode: oneshot

context_requirements:
  old_spec_path: string
  new_spec_path: string
  fail_on_breaking: boolean

workflow_pattern: |
  1. Load old and new API specifications
  2. Run comprehensive compatibility analysis
  3. Categorize changes as breaking or non-breaking
  4. Generate detailed report with migration recommendations
  5. Return results (no retry needed)

example_task: |
  Input: "Compare user-service v1.0.0 with v2.0.0 for breaking changes"

  Agent will:
  1. Load both specifications
  2. Analyze endpoint changes (additions, removals, modifications)
  3. Check for breaking schema changes
  4. Identify parameter or response format changes
  5. Generate compatibility report
  6. Provide migration recommendations

error_handling:
  timeout_seconds: 120
  on_spec_load_failure: "Return error with file path details"
  on_comparison_failure: "Return partial analysis with error context"

output:
  success:
    - Compatibility report (JSON)
    - Breaking changes list
    - Non-breaking changes list
    - Migration recommendations
    - API diff visualization
  failure:
    - Error details
    - Partial analysis (if available)
    - Suggested fixes

status: draft

tags:
  - api
  - analysis
  - compatibility
  - versioning
  - oneshot

agents/api.architect/README.md (Normal file, 50 lines)
@@ -0,0 +1,50 @@
# Api.Architect Agent

## Purpose

An agent that designs comprehensive REST APIs and validates them against best practices. Takes API requirements as input and produces validated OpenAPI specifications with generated data models ready for implementation.

## Skills

This agent uses the following skills:

- `workflow.validate`
- `api.validate`
- `api.define`

## Artifact Flow

### Consumes

- `API requirements`
- `Domain constraints and business rules`

### Produces

- `openapi-spec`
- `api-models`
- `validation-report`

## Example Use Cases

- Design a RESTful API for an e-commerce platform with products, orders, and customers
- Create an API for a task management system with projects, tasks, and user assignments
- Design a multi-tenant SaaS API with proper authentication and authorization

## Usage

```bash
# Activate the agent
/agent api.architect

# Or invoke directly
betty agent run api.architect --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*

agents/api.architect/agent.yaml (Normal file, 36 lines)
@@ -0,0 +1,36 @@
name: api.architect
version: 0.1.0
description: An agent that designs comprehensive REST APIs and validates them against
  best practices. Takes API requirements as input and produces validated OpenAPI specifications
  with generated data models ready for implementation.
status: draft
reasoning_mode: iterative
capabilities:
- Translate API requirements into detailed OpenAPI specifications
- Validate API designs against organizational standards and linting rules
- Generate reference data models to accelerate implementation
skills_available:
- workflow.validate
- api.validate
- api.define
permissions: []
artifact_metadata:
  consumes:
  - type: API requirements
    description: Input artifact of type API requirements
  - type: Domain constraints and business rules
    description: Input artifact of type Domain constraints and business rules
  produces:
  - type: openapi-spec
    schema: schemas/openapi-spec.json
    file_pattern: '*.openapi.yaml'
    content_type: application/yaml
    description: OpenAPI 3.0+ specification
  - type: api-models
    file_pattern: '*.{py,ts,go}'
    description: Generated API data models
  - type: validation-report
    schema: schemas/validation-report.json
    file_pattern: '*.validation.json'
    content_type: application/json
    description: Structured validation results

agents/api.designer/README.md (Normal file, 217 lines)
@@ -0,0 +1,217 @@
# api.designer Agent

## Purpose

**api.designer** is an intelligent agent that orchestrates the API design process from natural language requirements to validated, production-ready OpenAPI specifications with generated models.

This agent uses iterative refinement to create APIs that comply with enterprise guidelines (Zalando by default), automatically fixing validation errors and ensuring best practices.

## Behavior

- **Reasoning Mode**: `iterative` – The agent retries on validation failures, refining the spec until it passes all checks
- **Capabilities**:
  - Design RESTful APIs from natural language requirements
  - Apply Zalando guidelines automatically (or other guideline sets)
  - Generate OpenAPI 3.1 specs with best practices
  - Iteratively refine based on validation feedback
  - Handle AsyncAPI for event-driven architectures

## Skills Used

The agent has access to the following skills and uses them in sequence:

| Skill | Purpose |
|-------|---------|
| `api.define` | Scaffolds initial OpenAPI spec from service name and requirements |
| `api.validate` | Validates spec against enterprise guidelines (Zalando, Google, Microsoft) |
| `api.generate-models` | Generates type-safe models in target languages (TypeScript, Python, etc.) |
| `api.compatibility` | Checks for breaking changes when updating existing APIs |

## Workflow Pattern

The agent follows this iterative pattern:

```
1. Analyze requirements and domain context
2. Draft OpenAPI spec following guidelines
3. Run validation (api.validate)
4. If validation fails:
   - Analyze errors
   - Refine spec
   - Re-validate
   - Repeat until passing (max 3 retries)
5. Generate models for target languages
6. Verify generated models compile
```
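
In pseudo-Python, the loop in steps 3 and 4 looks roughly like this; `draft_spec`, `validate_spec`, and `refine_spec` are placeholders for `api.define`, `api.validate`, and the agent's own reasoning step, not real Betty functions:

```python
# Sketch of the validate-refine loop; the three helpers are placeholders.
def design_api(requirements: str, max_retries: int = 3) -> dict:
    spec = draft_spec(requirements)                  # step 2: initial draft
    for _ in range(max_retries):
        report = validate_spec(spec)                 # step 3: guideline checks
        if report["valid"]:
            return spec                              # proceed to model generation
        spec = refine_spec(spec, report["errors"])   # step 4: fix reported issues
    raise RuntimeError("spec still failing validation after retries")
```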

## Manifest Fields (Quick Reference)

```yaml
name: api.designer
version: 0.1.0
reasoning_mode: iterative
skills_available:
  - api.define
  - api.validate
  - api.generate-models
  - api.compatibility
status: draft
```

## Usage

This agent is invoked through a command or workflow:

### Via Slash Command

```bash
# Assuming /api-design command is registered to use this agent
/api-design user-service
```

The command passes the service name to the agent, which then:
1. Uses `api.define` to create initial spec
2. Validates with `api.validate`
3. Fixes any validation errors iteratively
4. Generates models with `api.generate-models`

### Via Workflow

Include the agent in a workflow YAML:

```yaml
# workflows/design_api.yaml
steps:
  - agent: api.designer
    input:
      service_name: "user-service"
      guidelines: "zalando"
      languages: ["typescript", "python"]
```

## Context Requirements

The agent expects the following context:

| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `guidelines` | string | Which API guidelines to follow | `"zalando"`, `"google"`, `"microsoft"` |
| `domain` | string | Business domain for API design | `"user-management"`, `"e-commerce"` |
| `existing_apis` | list | Related APIs to maintain consistency with | `["auth-service", "notification-service"]` |
| `strict_mode` | boolean | Whether to treat warnings as errors | `true` or `false` |

## Example Task

**Input**:
```
"Create API for user management with CRUD operations,
authentication via JWT, and email verification workflow"
```

**Agent execution**:

1. **Draft OpenAPI spec** with proper resource paths:
   - `POST /users` - Create user
   - `GET /users/{id}` - Get user
   - `PUT /users/{id}` - Update user
   - `DELETE /users/{id}` - Delete user
   - `POST /users/{id}/verify-email` - Email verification

2. **Apply Zalando guidelines**:
   - Use snake_case for property names
   - Include problem JSON error responses
   - Add required headers (X-Request-ID, etc.)
   - Define proper HTTP status codes

3. **Validate spec** using `api.validate`:
   ```bash
   python skills/api.validate/api_validate.py specs/user-service.yaml zalando
   ```

4. **Fix validation issues** (if any):
   - Missing required headers → Add to spec
   - Incorrect naming → Convert to snake_case
   - Missing error schemas → Add problem JSON schemas

5. **Generate models** using Modelina:
   ```bash
   python skills/api.generate-models/modelina_generate.py \
     specs/user-service.yaml typescript src/models/typescript
   python skills/api.generate-models/modelina_generate.py \
     specs/user-service.yaml python src/models/python
   ```

6. **Verify models compile**:
   - TypeScript: `tsc --noEmit`
   - Python: `mypy --strict`

## Error Handling

| Scenario | Max Retries | Behavior |
|----------|-------------|----------|
| Validation failure | 3 | Analyze errors, refine spec, retry |
| Model generation failure | 3 | Try alternative Modelina configurations |
| Compilation failure | 3 | Adjust spec to fix type issues |
| Timeout | N/A | Fails after 300 seconds (5 minutes) |

### On Success

```json
{
  "status": "success",
  "outputs": {
    "spec_path": "specs/user-service.openapi.yaml",
    "validation_report": {
      "valid": true,
      "errors": [],
      "warnings": []
    },
    "generated_models": {
      "typescript": ["src/models/typescript/User.ts", "src/models/typescript/UserResponse.ts"],
      "python": ["src/models/python/user.py", "src/models/python/user_response.py"]
    },
    "dependency_graph": "..."
  }
}
```

### On Failure

```json
{
  "status": "failed",
  "error_analysis": {
    "step": "validation",
    "attempts": 3,
    "last_error": "Missing required header: X-Request-ID in all endpoints"
  },
  "partial_spec": "specs/user-service.openapi.yaml",
  "suggested_fixes": [
    "Add X-Request-ID header to all operations",
    "See Zalando guidelines: https://..."
  ]
}
```

## Status

**Draft** – This agent is under development and not yet marked active in the registry. Current goals for next version:

- [ ] Improve prompt engineering for better initial API designs
- [ ] Add more robust error handling for iterative loops
- [ ] Support for more guideline sets (Google, Microsoft)
- [ ] Better context injection from existing APIs
- [ ] Automatic testing of generated models

## Related Documentation

- [Agents Overview](../../docs/betty-architecture.md#layer-2-agents-reasoning-layer) – Understanding agents in Betty's architecture
- [Agent Schema Reference](../../docs/agent-schema-reference.md) – Agent manifest fields and structure
- [API-Driven Development](../../docs/api-driven-development.md) – Full API design workflow using Betty
- [api.define SKILL.md](../../skills/api.define/SKILL.md) – Skill for creating API specs
- [api.validate SKILL.md](../../skills/api.validate/SKILL.md) – Skill for validating specs
- [api.generate-models SKILL.md](../../skills/api.generate-models/SKILL.md) – Skill for generating models

## Version History

- **0.1.0** (Oct 2025) – Initial draft implementation with iterative refinement pattern

agents/api.designer/agent.yaml (Normal file, 78 lines)
@@ -0,0 +1,78 @@
name: api.designer
version: 0.1.0
description: "Design RESTful APIs following enterprise guidelines with iterative refinement"

capabilities:
  - Design RESTful APIs from natural language requirements
  - Apply Zalando guidelines automatically
  - Generate OpenAPI 3.1 specs with best practices
  - Iteratively refine based on validation feedback
  - Handle AsyncAPI for event-driven architectures

skills_available:
  - api.define
  - api.validate
  - api.generatemodels
  - api.compatibility

reasoning_mode: iterative

context_requirements:
  guidelines: string
  domain: string
  existing_apis: list
  strict_mode: boolean

workflow_pattern: |
  1. Analyze requirements and domain context
  2. Draft OpenAPI spec following guidelines
  3. Run validation (api.validate)
  4. If validation fails:
     - Analyze errors
     - Refine spec
     - Re-validate
     - Repeat until passing
  5. Generate models for target languages
  6. Verify generated models compile

example_task: |
  Input: "Create API for user management with CRUD operations,
  authentication via JWT, and email verification workflow"

  Agent will:
  1. Draft OpenAPI spec with proper resource paths (/users, /users/{id})
  2. Apply Zalando guidelines (snake_case, problem JSON, etc.)
  3. Validate spec against Zally rules
  4. Fix issues (e.g., add required headers, fix naming)
  5. Generate TypeScript and Python models via Modelina
  6. Verify models compile in sample projects

error_handling:
  max_retries: 3
  on_validation_failure: "Analyze errors, refine spec, retry"
  on_generation_failure: "Try alternative Modelina configurations"
  on_compilation_failure: "Adjust spec to fix type issues"
  timeout_seconds: 300

output:
  success:
    - OpenAPI spec (validated)
    - Generated models (compiled)
    - Validation report
    - Dependency graph
  failure:
    - Error analysis
    - Partial spec
    - Suggested fixes

status: draft

tags:
  - api
  - design
  - openapi
  - zalando
  - iterative

dependencies:
  - context.schema

agents/api.orchestrator/README.md (Normal file, 44 lines)
@@ -0,0 +1,44 @@
# Api.Orchestrator Agent

Orchestrates complete API lifecycle from design through testing and deployment

## Purpose

This orchestrator agent coordinates complex API workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.

## Capabilities

- Coordinate API design, validation, and compatibility checking
- Manage API generation and model creation workflows
- Orchestrate testing and quality assurance
- Handle API versioning and documentation
- Coordinate deployment and publishing

## Available Skills

- `api.define`
- `api.validate`
- `api.compatibility`
- `api.generatemodels`
- `api.test`

## Usage

This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries

## Status

**Generated**: Auto-generated from taxonomy gap analysis

## Next Steps

- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices

agents/api.orchestrator/agent.yaml (Normal file, 53 lines)
@@ -0,0 +1,53 @@
name: api.orchestrator
version: 0.1.0
description: Orchestrates complete API lifecycle from design through testing and deployment
capabilities:
- Coordinate API design, validation, and compatibility checking
- Manage API generation and model creation workflows
- Orchestrate testing and quality assurance
- Handle API versioning and documentation
- Coordinate deployment and publishing
skills_available:
- api.define
- api.validate
- api.compatibility
- api.generatemodels
- api.test
reasoning_mode: iterative
tags:
- api
- orchestration
- workflow
- lifecycle
workflow_pattern: '1. Analyze incoming request and requirements

  2. Identify relevant api skills and workflows

  3. Compose multi-step execution plan

  4. Execute skills in coordinated sequence

  5. Validate intermediate results

  6. Handle errors and retry as needed

  7. Return comprehensive results'
example_task: "Input: \"Complete api workflow from start to finish\"\n\nAgent will:\n\
  1. Break down the task into stages\n2. Select appropriate skills for each stage\n\
  3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n4. Monitor\
  \ progress and handle failures\n5. Generate comprehensive reports"
error_handling:
  timeout_seconds: 300
  retry_strategy: exponential_backoff
  max_retries: 3
output:
  success:
  - Api workflow results
  - Execution logs and metrics
  - Validation reports
  - Generated artifacts
  failure:
  - Error details and stack traces
  - Partial results (if available)
  - Remediation suggestions
status: generated

agents/code.reviewer/README.md (Normal file, 48 lines)
@@ -0,0 +1,48 @@
# Code.Reviewer Agent

## Purpose

Analyzes code changes and provides comprehensive feedback on code quality, security vulnerabilities, performance issues, and adherence to best practices.

## Skills

This agent uses the following skills:

- `code.format`
- `test.workflow.integration`
- `policy.enforce`

## Artifact Flow

### Consumes

- `code-diff`
- `coding-standards`

### Produces

- `review-report`
- `suggestion-list`
- `static-analysis`
- `security-scan`
- `style-check`
- `List of issues found with line numbers`
- `Severity and category for each issue`
- `Suggested fixes with code examples`
- `Overall code quality score`
- `Compliance status with coding standards`

## Usage

```bash
# Activate the agent
/agent code.reviewer

# Or invoke directly
betty agent run code.reviewer --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*

agents/code.reviewer/agent.yaml (Normal file, 42 lines)
@@ -0,0 +1,42 @@
name: code.reviewer
version: 0.1.0
description: Analyzes code changes and provides comprehensive feedback on code quality,
  security vulnerabilities, performance issues, and adherence to best practices.
status: draft
reasoning_mode: iterative
capabilities:
- Review diffs for quality, security, and maintainability concerns
- Generate prioritized issue lists with remediation guidance
- Summarize overall code health and compliance with standards
skills_available:
- code.format
- test.workflow.integration
- policy.enforce
permissions: []
artifact_metadata:
  consumes:
  - type: code-diff
    description: Input artifact of type code-diff
  - type: coding-standards
    description: Input artifact of type coding-standards
  produces:
  - type: review-report
    description: Output artifact of type review-report
  - type: suggestion-list
    description: Output artifact of type suggestion-list
  - type: static-analysis
    description: Output artifact of type static-analysis
  - type: security-scan
    description: Output artifact of type security-scan
  - type: style-check
    description: Output artifact of type style-check
  - type: List of issues found with line numbers
    description: Output artifact of type List of issues found with line numbers
  - type: Severity and category for each issue
    description: Output artifact of type Severity and category for each issue
  - type: Suggested fixes with code examples
    description: Output artifact of type Suggested fixes with code examples
  - type: Overall code quality score
    description: Output artifact of type Overall code quality score
  - type: Compliance status with coding standards
    description: Output artifact of type Compliance status with coding standards

agents/data.architect/README.md (Normal file, 70 lines)
@@ -0,0 +1,70 @@
# Data.Architect Agent

## Purpose

Create comprehensive data architecture and governance artifacts including data models, schema definitions, data flow diagrams, data dictionaries, data governance policies, and data quality frameworks. Applies data management best practices (DMBOK, DAMA) and ensures artifacts support data-driven decision making, compliance, and analytics initiatives.

## Skills

This agent uses the following skills:

- `artifact.create`
- `artifact.validate`
- `artifact.review`

## Artifact Flow

### Consumes

- `Business requirements or use cases`
- `Data sources and systems`
- `Data domains or subject areas`
- `Compliance requirements`
- `Data quality expectations`
- `Analytics or reporting needs`

### Produces

- `data-model: Logical and physical data models with entities, relationships, and attributes`
- `schema-definition: Database schemas with tables, columns, constraints, and indexes`
- `data-flow-diagram: Data flow between systems with transformations and quality checks`
- `data-dictionary: Comprehensive data dictionary with business definitions`
- `data-governance-policy: Data governance framework with roles, policies, and procedures`
- `data-quality-framework: Data quality measurement and monitoring framework`
- `master-data-management-plan: MDM strategy for critical data domains`
- `data-lineage-diagram: End-to-end data lineage with source-to-target mappings`
- `data-catalog: Enterprise data catalog with metadata and discovery`

## Example Use Cases

- Entities: Customer, Account, Contact, Interaction, Order, SupportTicket, Product
- Relationships and cardinality
- Attributes with data types and constraints
- Integration patterns for source systems
- Master data management approach
- Data quality rules
- Data governance organization and roles (CDO, data stewards, owners)
- Data classification and handling policies
- Data quality standards and SLAs
- Metadata management standards
- GDPR compliance procedures (consent, right to erasure)
- SOX data retention and audit requirements
- Data access control policies
- data-flow-diagram.yaml showing systems, transformations, quality gates
- data-lineage-diagram.yaml with source-to-target mappings
- data-quality-framework.yaml with validation rules and monitoring

## Usage

```bash
# Activate the agent
/agent data.architect

# Or invoke directly
betty agent run data.architect --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*

agents/data.architect/agent.yaml (Normal file, 66 lines)
@@ -0,0 +1,66 @@
name: data.architect
version: 0.1.0
description: Create comprehensive data architecture and governance artifacts including
  data models, schema definitions, data flow diagrams, data dictionaries, data governance
  policies, and data quality frameworks. Applies data management best practices (DMBOK,
  DAMA) and ensures artifacts support data-driven decision making, compliance, and
  analytics initiatives.
status: draft
reasoning_mode: iterative
capabilities:
- Design logical and physical data architectures to support analytics strategies
- Define governance policies and quality controls for critical data assets
- Produce documentation that aligns stakeholders on data flows and ownership
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
  consumes:
  - type: Business requirements or use cases
    description: Input artifact of type Business requirements or use cases
  - type: Data sources and systems
    description: Input artifact of type Data sources and systems
  - type: Data domains or subject areas
    description: Input artifact of type Data domains or subject areas
  - type: Compliance requirements
    description: Input artifact of type Compliance requirements
  - type: Data quality expectations
    description: Input artifact of type Data quality expectations
  - type: Analytics or reporting needs
    description: Input artifact of type Analytics or reporting needs
  produces:
  - type: 'data-model: Logical and physical data models with entities, relationships,
      and attributes'
    description: 'Output artifact of type data-model: Logical and physical data models
      with entities, relationships, and attributes'
  - type: 'schema-definition: Database schemas with tables, columns, constraints,
      and indexes'
    description: 'Output artifact of type schema-definition: Database schemas with
      tables, columns, constraints, and indexes'
  - type: 'data-flow-diagram: Data flow between systems with transformations and quality
      checks'
    description: 'Output artifact of type data-flow-diagram: Data flow between systems
      with transformations and quality checks'
  - type: 'data-dictionary: Comprehensive data dictionary with business definitions'
    description: 'Output artifact of type data-dictionary: Comprehensive data dictionary
      with business definitions'
  - type: 'data-governance-policy: Data governance framework with roles, policies,
      and procedures'
    description: 'Output artifact of type data-governance-policy: Data governance
      framework with roles, policies, and procedures'
  - type: 'data-quality-framework: Data quality measurement and monitoring framework'
    description: 'Output artifact of type data-quality-framework: Data quality measurement
      and monitoring framework'
  - type: 'master-data-management-plan: MDM strategy for critical data domains'
    description: 'Output artifact of type master-data-management-plan: MDM strategy
      for critical data domains'
  - type: 'data-lineage-diagram: End-to-end data lineage with source-to-target mappings'
    description: 'Output artifact of type data-lineage-diagram: End-to-end data lineage
      with source-to-target mappings'
  - type: 'data-catalog: Enterprise data catalog with metadata and discovery'
    description: 'Output artifact of type data-catalog: Enterprise data catalog with
      metadata and discovery'

agents/data.orchestrator/README.md (Normal file, 42 lines)
@@ -0,0 +1,42 @@
# Data.Orchestrator Agent

Orchestrates data workflows including transformation, validation, and quality assurance

## Purpose

This orchestrator agent coordinates complex data workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.

## Capabilities

- Coordinate data transformation pipelines
- Manage data validation and quality checks
- Orchestrate data migration workflows
- Handle data governance and compliance
- Coordinate analytics and reporting

## Available Skills

- `data.transform`
- `workflow.validate`
- `workflow.compose`

## Usage

This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries

## Status

**Generated**: Auto-generated from taxonomy gap analysis

## Next Steps

- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices

agents/data.orchestrator/agent.yaml (Normal file, 52 lines)
@@ -0,0 +1,52 @@
name: data.orchestrator
version: 0.1.0
description: Orchestrates data workflows including transformation, validation, and
  quality assurance
capabilities:
- Coordinate data transformation pipelines
- Manage data validation and quality checks
- Orchestrate data migration workflows
- Handle data governance and compliance
- Coordinate analytics and reporting
skills_available:
- data.transform
- workflow.validate
- workflow.compose
reasoning_mode: iterative
tags:
- data
- orchestration
- workflow
- etl
workflow_pattern: '1. Analyze incoming request and requirements

  2. Identify relevant data skills and workflows

  3. Compose multi-step execution plan

  4. Execute skills in coordinated sequence

  5. Validate intermediate results

  6. Handle errors and retry as needed

  7. Return comprehensive results'
example_task: "Input: \"Complete data workflow from start to finish\"\n\nAgent will:\n\
  1. Break down the task into stages\n2. Select appropriate skills for each stage\n\
  3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n4. Monitor\
  \ progress and handle failures\n5. Generate comprehensive reports"
error_handling:
  timeout_seconds: 300
  retry_strategy: exponential_backoff
  max_retries: 3
output:
  success:
  - Data workflow results
  - Execution logs and metrics
  - Validation reports
  - Generated artifacts
  failure:
  - Error details and stack traces
  - Partial results (if available)
  - Remediation suggestions
status: generated

agents/data.validator/README.md (Normal file, 55 lines)
@@ -0,0 +1,55 @@
# Data.Validator Agent

## Purpose

Validates data files against schemas, business rules, and data quality standards. Ensures data integrity, completeness, and compliance.
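
For the structural layer of this validation, a minimal sketch using the `jsonschema` package; the skills listed below provide the full pipeline, so this only illustrates the idea:

```python
import json

from jsonschema import Draft7Validator  # pip install jsonschema

def structural_errors(data_path: str, schema_path: str) -> list:
    """Return one dict per schema violation, with a JSON path and message."""
    with open(data_path) as f:
        data = json.load(f)
    with open(schema_path) as f:
        schema = json.load(f)
    validator = Draft7Validator(schema)
    return [
        {"path": "/".join(map(str, err.absolute_path)), "message": err.message}
        for err in validator.iter_errors(data)
    ]
```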

## Skills

This agent uses the following skills:

- `workflow.validate`
- `api.validate`

## Artifact Flow

### Consumes

- `data-file`
- `schema-definition`
- `validation-rules`

### Produces

- `validation-report`
- `data-quality-metrics`
- `data.validatejson`
- `schema.validate`
- `data.profile`
- `Structural: Schema and format validation`
- `Semantic: Business rule validation`
- `Statistical: Data quality profiling`
- `Validation status`
- `List of violations with severity`
- `Data quality score`
- `Statistics`
- `Recommendations for fixing issues`
- `Compliance status with standards`

## Usage

```bash
# Activate the agent
/agent data.validator

# Or invoke directly
betty agent run data.validator --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
54
agents/data.validator/agent.yaml
Normal file
@@ -0,0 +1,54 @@
name: data.validator
version: 0.1.0
description: Validates data files against schemas, business rules, and data quality
  standards. Ensures data integrity, completeness, and compliance.
status: draft
reasoning_mode: iterative
capabilities:
- Validate datasets against structural and semantic rules
- Generate detailed issue reports with remediation recommendations
- Track quality metrics and highlight compliance gaps
skills_available:
- workflow.validate
- api.validate
permissions: []
artifact_metadata:
  consumes:
  - type: data-file
    description: Input artifact of type data-file
  - type: schema-definition
    description: Input artifact of type schema-definition
  - type: validation-rules
    description: Input artifact of type validation-rules
  produces:
  - type: validation-report
    schema: schemas/validation-report.json
    file_pattern: '*.validation.json'
    content_type: application/json
    description: Structured validation results
  - type: data-quality-metrics
    description: Output artifact of type data-quality-metrics
  - type: data.validatejson
    description: Output artifact of type data.validatejson
  - type: schema.validate
    description: Output artifact of type schema.validate
  - type: data.profile
    description: Output artifact of type data.profile
  - type: 'Structural: Schema and format validation'
    description: 'Output artifact of type Structural: Schema and format validation'
  - type: 'Semantic: Business rule validation'
    description: 'Output artifact of type Semantic: Business rule validation'
  - type: 'Statistical: Data quality profiling'
    description: 'Output artifact of type Statistical: Data quality profiling'
  - type: Validation status
    description: Output artifact of type Validation status
  - type: List of violations with severity
    description: Output artifact of type List of violations with severity
  - type: Data quality score
    description: Output artifact of type Data quality score
  - type: Statistics
    description: Output artifact of type Statistics
  - type: Recommendations for fixing issues
    description: Output artifact of type Recommendations for fixing issues
  - type: Compliance status with standards
    description: Output artifact of type Compliance status with standards
79
agents/deployment.engineer/README.md
Normal file
@@ -0,0 +1,79 @@
# Deployment.Engineer Agent

## Purpose

Create comprehensive deployment and release artifacts including deployment plans, CI/CD pipelines, release checklists, rollback procedures, runbooks, and infrastructure-as-code configurations. Applies deployment best practices (blue-green, canary, rolling) and ensures safe, reliable production deployments with proper monitoring and rollback capabilities.

## Skills

This agent uses the following skills (per its agent.yaml):

- `artifact.create`
- `artifact.validate`
- `artifact.review`

## Artifact Flow

### Consumes

- `Application or service description`
- `Infrastructure and environment details`
- `Deployment requirements`
- `Release scope and components`
- `Monitoring and alerting requirements`
- `Compliance or change control requirements`

### Produces

- `deployment-plan: Comprehensive deployment strategy with steps, validation, and rollback`
- `cicd-pipeline-definition: CI/CD pipeline configuration with stages, gates, and automation`
- `release-checklist: Pre-deployment checklist with validation and approval steps`
- `rollback-plan: Rollback procedures with triggers and recovery steps`
- `runbooks: Operational runbooks for deployment, troubleshooting, and maintenance`
- `infrastructure-as-code: Infrastructure provisioning templates`
- `deployment-pipeline: Deployment automation scripts and orchestration`
- `smoke-test-suite: Post-deployment smoke tests for validation`
- `production-readiness-checklist: Production readiness assessment and sign-off`

## Example Use Cases

**Deployment plan:**

- Deployment strategy (blue-green with traffic shifting)
- Pre-deployment checklist (backups, capacity validation)
- Deployment sequence and dependencies
- Health checks and validation gates
- Traffic migration steps (0% → 10% → 50% → 100%)
- Rollback triggers and procedures (see the sketch after this list)
- Post-deployment validation
- Monitoring and alerting configuration
- Communication plan

**CI/CD pipeline:**

- Build stage (npm install, compile, bundle)
- Test stage (unit tests, integration tests, coverage gate 80%)
- Security stage (SAST, dependency scanning, OWASP check)
- Deploy to staging (automated)
- Smoke tests and integration tests in staging
- Manual approval gate for production
- Deploy to production (blue-green)
- Post-deployment validation
- Slack/email notifications

**Operational runbooks:**

- Deployment runbook (step-by-step deployment procedures)
- Scaling runbook (horizontal and vertical scaling procedures)
- Troubleshooting runbook (common issues and resolution)
- Incident response runbook (incident classification and escalation)
- Disaster recovery runbook (backup and restore procedures)
- Database maintenance runbook (schema changes, backups)
- Each runbook includes: prerequisites, steps, validation, rollback
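
A minimal sketch of the rollback-trigger idea, assuming hypothetical `get_error_rate`, `set_traffic`, and `rollback` callables; the thresholds are illustrative:

```python
TRAFFIC_STEPS = [0.10, 0.50, 1.00]   # 10% -> 50% -> 100% migration
ERROR_RATE_THRESHOLD = 0.01          # roll back above 1% errors

def shift_traffic(get_error_rate, set_traffic, rollback):
    """Advance a blue-green traffic migration, rolling back on bad health."""
    for fraction in TRAFFIC_STEPS:
        set_traffic(fraction)
        if get_error_rate(window_seconds=300) > ERROR_RATE_THRESHOLD:
            rollback()               # trigger: error-rate breach
            return False
    return True                      # migration completed
```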
## Usage

```bash
# Activate the agent
/agent deployment.engineer

# Or invoke directly
betty agent run deployment.engineer --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
65
agents/deployment.engineer/agent.yaml
Normal file
@@ -0,0 +1,65 @@
name: deployment.engineer
version: 0.1.0
description: Create comprehensive deployment and release artifacts including deployment
  plans, CI/CD pipelines, release checklists, rollback procedures, runbooks, and infrastructure-as-code
  configurations. Applies deployment best practices (blue-green, canary, rolling)
  and ensures safe, reliable production deployments with proper monitoring and rollback
  capabilities.
status: draft
reasoning_mode: iterative
capabilities:
- Design deployment strategies with rollback and validation procedures
- Automate delivery pipelines and operational runbooks
- Coordinate release governance, approvals, and compliance requirements
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
  consumes:
  - type: Application or service description
    description: Input artifact of type Application or service description
  - type: Infrastructure and environment details
    description: Input artifact of type Infrastructure and environment details
  - type: Deployment requirements
    description: Input artifact of type Deployment requirements
  - type: Release scope and components
    description: Input artifact of type Release scope and components
  - type: Monitoring and alerting requirements
    description: Input artifact of type Monitoring and alerting requirements
  - type: Compliance or change control requirements
    description: Input artifact of type Compliance or change control requirements
  produces:
  - type: 'deployment-plan: Comprehensive deployment strategy with steps, validation,
      and rollback'
    description: 'Output artifact of type deployment-plan: Comprehensive deployment
      strategy with steps, validation, and rollback'
  - type: 'cicd-pipeline-definition: CI/CD pipeline configuration with stages, gates,
      and automation'
    description: 'Output artifact of type cicd-pipeline-definition: CI/CD pipeline
      configuration with stages, gates, and automation'
  - type: 'release-checklist: Pre-deployment checklist with validation and approval
      steps'
    description: 'Output artifact of type release-checklist: Pre-deployment checklist
      with validation and approval steps'
  - type: 'rollback-plan: Rollback procedures with triggers and recovery steps'
    description: 'Output artifact of type rollback-plan: Rollback procedures with
      triggers and recovery steps'
  - type: 'runbooks: Operational runbooks for deployment, troubleshooting, and maintenance'
    description: 'Output artifact of type runbooks: Operational runbooks for deployment,
      troubleshooting, and maintenance'
  - type: 'infrastructure-as-code: Infrastructure provisioning templates'
    description: 'Output artifact of type infrastructure-as-code: Infrastructure provisioning
      templates'
  - type: 'deployment-pipeline: Deployment automation scripts and orchestration'
    description: 'Output artifact of type deployment-pipeline: Deployment automation
      scripts and orchestration'
  - type: 'smoke-test-suite: Post-deployment smoke tests for validation'
    description: 'Output artifact of type smoke-test-suite: Post-deployment smoke
      tests for validation'
  - type: 'production-readiness-checklist: Production readiness assessment and sign-off'
    description: 'Output artifact of type production-readiness-checklist: Production
      readiness assessment and sign-off'
52
agents/file.processor/README.md
Normal file
@@ -0,0 +1,52 @@
# File.Processor Agent

## Purpose

Processes files through various transformations including format conversion, compression, encryption, and batch operations.

## Skills

This agent uses the following skills (per its agent.yaml):

- `file.compare`
- `workflow.orchestrate`
- `build.optimize`

## Artifact Flow

### Consumes

- `file-list`
- `transformation-config`

### Produces

- `processed-files`
- `processing-report`

The agent description also references supporting skills (`file.convert`, `file.compress`, `file.encrypt`, `batch.processor`) and three processing modes (sketched below):

- Sequential: Process files one by one
- Parallel: Process multiple files concurrently
- Pipeline: Chain multiple transformations

The processing report includes:

- Files processed successfully
- Files that failed with error details
- Processing time and performance metrics
- Storage space saved
- Transformation details for each file
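
A minimal sketch of the three modes, assuming a hypothetical `transform(path, config)` helper; the real skills named above are not invoked here:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def sequential(paths, transform, config):
    return [transform(p, config) for p in paths]          # one file at a time

def parallel(paths, transform, config, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:  # concurrent files
        return list(pool.map(lambda p: transform(p, config), paths))

def pipeline(path, transforms):
    # Chain multiple transformations: each stage's output feeds the next.
    return reduce(lambda data, stage: stage(data), transforms, path)
```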
## Usage

```bash
# Activate the agent
/agent file.processor

# Or invoke directly
betty agent run file.processor --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
50
agents/file.processor/agent.yaml
Normal file
@@ -0,0 +1,50 @@
name: file.processor
version: 0.1.0
description: Processes files through various transformations including format conversion,
  compression, encryption, and batch operations.
status: draft
reasoning_mode: oneshot
capabilities:
- Execute configurable pipelines of file transformations
- Optimize files through compression and format conversion workflows
- Apply encryption and verification steps with detailed reporting
skills_available:
- file.compare
- workflow.orchestrate
- build.optimize
permissions: []
artifact_metadata:
  consumes:
  - type: file-list
    description: Input artifact of type file-list
  - type: transformation-config
    description: Input artifact of type transformation-config
  produces:
  - type: processed-files
    description: Output artifact of type processed-files
  - type: processing-report
    description: Output artifact of type processing-report
  - type: file.convert
    description: Output artifact of type file.convert
  - type: file.compress
    description: Output artifact of type file.compress
  - type: file.encrypt
    description: Output artifact of type file.encrypt
  - type: batch.processor
    description: Output artifact of type batch.processor
  - type: 'Sequential: Process files one by one'
    description: 'Output artifact of type Sequential: Process files one by one'
  - type: 'Parallel: Process multiple files concurrently'
    description: 'Output artifact of type Parallel: Process multiple files concurrently'
  - type: 'Pipeline: Chain multiple transformations'
    description: 'Output artifact of type Pipeline: Chain multiple transformations'
  - type: Files processed successfully
    description: Output artifact of type Files processed successfully
  - type: Files that failed with error details
    description: Output artifact of type Files that failed with error details
  - type: Processing time and performance metrics
    description: Output artifact of type Processing time and performance metrics
  - type: Storage space saved
    description: Output artifact of type Storage space saved
  - type: Transformation details for each file
    description: Output artifact of type Transformation details for each file
73
agents/governance.manager/README.md
Normal file
@@ -0,0 +1,73 @@
# Governance.Manager Agent

## Purpose

Create comprehensive program and project governance artifacts including project charters, RAID logs (Risks, Assumptions, Issues, Decisions), decision logs, governance frameworks, compliance matrices, and steering committee artifacts. Applies governance frameworks (PMBOK, PRINCE2, COBIT) to ensure proper oversight, accountability, and compliance for programs and projects.

## Skills

This agent uses the following skills (per its agent.yaml):

- `artifact.create`
- `artifact.validate`
- `artifact.review`

## Artifact Flow

### Consumes

- `Program or project description`
- `Stakeholders and governance structure`
- `Objectives and success criteria`
- `Compliance or regulatory requirements`
- `Risks, issues, and assumptions`
- `Decisions to be documented`

### Produces

- `project-charter: Project charter with authority, scope, objectives, and success criteria`
- `raid-log: Comprehensive RAID log`
- `decision-log: Decision register with context, options, rationale, and outcomes`
- `governance-framework: Governance structure with roles, committees, and decision rights`
- `compliance-matrix: Compliance mapping to regulatory and policy requirements`
- `stakeholder-analysis: Stakeholder analysis with power/interest grid and engagement strategy`
- `steering-committee-report: Executive steering committee reporting pack`
- `change-control-process: Change management and approval workflow`
- `benefits-realization-plan: Benefits tracking and realization framework`

## Example Use Cases

**Project charter:**

- Project purpose and business justification
- Scope and deliverables (migrate 50 applications to AWS)
- Objectives and success criteria (90% cost reduction, zero downtime)
- Authority and decision rights
- Governance structure (steering committee, PMO oversight)
- Budget and resource allocation
- Assumptions and constraints
- Approval signatures

**RAID log** (see the sketch after this list):

- Risks: 15-20 identified risks with impact, probability, mitigation
- Assumptions: Business continuity, vendor SLAs, budget availability
- Issues: Current issues with severity, owner, and resolution plan
- Decisions: Key decisions with rationale and stakeholder approval
- Cross-references to related artifacts

**Governance framework:**

- Governance structure (executive steering, program board, workstream leads)
- Decision-making authority and escalation paths
- Meeting cadence and reporting requirements
- RACI matrix for key decisions and deliverables
- Compliance and risk management processes
- Change control and approval workflow
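
A minimal sketch of a single RAID log entry; the field names are assumed from the bullets above rather than taken from a published schema:

```python
raid_entry = {
    "category": "risk",                       # risk | assumption | issue | decision
    "id": "R-001",
    "summary": "Vendor SLA may not cover peak migration windows",
    "impact": "high",
    "probability": "medium",
    "mitigation": "Negotiate temporary SLA uplift before wave 2",
    "owner": "PMO",
    "related_artifacts": ["project-charter", "decision-log"],  # cross-references
}
```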
## Usage

```bash
# Activate the agent
/agent governance.manager

# Or invoke directly
betty agent run governance.manager --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
64
agents/governance.manager/agent.yaml
Normal file
@@ -0,0 +1,64 @@
name: governance.manager
version: 0.1.0
description: Create comprehensive program and project governance artifacts including
  project charters, RAID logs (Risks, Assumptions, Issues, Decisions), decision logs,
  governance frameworks, compliance matrices, and steering committee artifacts. Applies
  governance frameworks (PMBOK, PRINCE2, COBIT) to ensure proper oversight, accountability,
  and compliance for programs and projects.
status: draft
reasoning_mode: iterative
capabilities:
- Establish governance structures and stakeholder engagement plans
- Maintain comprehensive RAID and decision logs for executive visibility
- Ensure compliance with regulatory and organizational policy requirements
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
  consumes:
  - type: Program or project description
    description: Input artifact of type Program or project description
  - type: Stakeholders and governance structure
    description: Input artifact of type Stakeholders and governance structure
  - type: Objectives and success criteria
    description: Input artifact of type Objectives and success criteria
  - type: Compliance or regulatory requirements
    description: Input artifact of type Compliance or regulatory requirements
  - type: Risks, issues, and assumptions
    description: Input artifact of type Risks, issues, and assumptions
  - type: Decisions to be documented
    description: Input artifact of type Decisions to be documented
  produces:
  - type: 'project-charter: Project charter with authority, scope, objectives, and
      success criteria'
    description: 'Output artifact of type project-charter: Project charter with authority,
      scope, objectives, and success criteria'
  - type: 'raid-log: Comprehensive RAID log'
    description: 'Output artifact of type raid-log: Comprehensive RAID log'
  - type: 'decision-log: Decision register with context, options, rationale, and outcomes'
    description: 'Output artifact of type decision-log: Decision register with context,
      options, rationale, and outcomes'
  - type: 'governance-framework: Governance structure with roles, committees, and
      decision rights'
    description: 'Output artifact of type governance-framework: Governance structure
      with roles, committees, and decision rights'
  - type: 'compliance-matrix: Compliance mapping to regulatory and policy requirements'
    description: 'Output artifact of type compliance-matrix: Compliance mapping to
      regulatory and policy requirements'
  - type: 'stakeholder-analysis: Stakeholder analysis with power/interest grid and
      engagement strategy'
    description: 'Output artifact of type stakeholder-analysis: Stakeholder analysis
      with power/interest grid and engagement strategy'
  - type: 'steering-committee-report: Executive steering committee reporting pack'
    description: 'Output artifact of type steering-committee-report: Executive steering
      committee reporting pack'
  - type: 'change-control-process: Change management and approval workflow'
    description: 'Output artifact of type change-control-process: Change management
      and approval workflow'
  - type: 'benefits-realization-plan: Benefits tracking and realization framework'
    description: 'Output artifact of type benefits-realization-plan: Benefits tracking
      and realization framework'
325
agents/meta.agent/README.md
Normal file
@@ -0,0 +1,325 @@
# meta.agent - Agent Creator

The meta-agent that creates other agents through skill composition.

## Overview

**meta.agent** transforms natural language descriptions into complete, functional agents with proper skill composition, artifact metadata, and documentation.

**What it produces:**
- Complete `agent.yaml` with recommended skills
- Auto-generated `README.md` documentation
- Proper artifact metadata (produces/consumes)
- Inferred permissions from skills

## Quick Start

### 1. Create an Agent Description

Create a Markdown file describing your agent:

```markdown
# Name: api.architect

# Purpose:
An agent that designs comprehensive REST APIs and validates them
against best practices.

# Inputs:
- API requirements

# Outputs:
- openapi-spec
- validation-report
- api-models

# Examples:
- Design a RESTful API for an e-commerce platform
- Create an API for a task management system
```

### 2. Run meta.agent

```bash
python3 agents/meta.agent/meta_agent.py examples/api_architect_description.md
```

### 3. Output

```
✨ Agent 'api.architect' created successfully!

📄 Agent definition: agents/api.architect/agent.yaml
📖 Documentation: agents/api.architect/README.md

🔧 Skills: api.define, api.validate, workflow.validate
```

## Usage

### Basic Creation

```bash
# Create agent from Markdown description
python3 agents/meta.agent/meta_agent.py path/to/agent_description.md

# Create agent from JSON description
python3 agents/meta.agent/meta_agent.py path/to/agent_description.json

# Specify output directory
python3 agents/meta.agent/meta_agent.py description.md -o agents/my-agent

# Skip validation
python3 agents/meta.agent/meta_agent.py description.md --no-validate
```

### Description Format

**Markdown Format:**

```markdown
# Name: agent-name

# Purpose:
Detailed description of what the agent does...

# Inputs:
- artifact-type-1
- artifact-type-2

# Outputs:
- artifact-type-3
- artifact-type-4

# Constraints:
(Optional) Any constraints or requirements...

# Examples:
- Example use case 1
- Example use case 2
```

**JSON Format:**

```json
{
  "name": "agent-name",
  "purpose": "Detailed description...",
  "inputs": ["artifact-type-1", "artifact-type-2"],
  "outputs": ["artifact-type-3", "artifact-type-4"],
  "examples": ["Example 1", "Example 2"]
}
```

## What meta.agent Creates

### 1. agent.yaml

Complete agent definition with:
- **Recommended skills** - Uses `agent.compose` to find compatible skills
- **Artifact metadata** - Proper produces/consumes declarations
- **Permissions** - Inferred from selected skills
- **Description** - Professional formatting

Example output:
```yaml
name: api.architect
description: Designs and validates REST APIs against best practices
skills_available:
  - api.define
  - api.validate
permissions:
  - filesystem:read
  - filesystem:write
artifact_metadata:
  consumes:
    - type: api-requirements
  produces:
    - type: openapi-spec
      schema: schemas/openapi-spec.json
    - type: validation-report
      schema: schemas/validation-report.json
```

### 2. README.md

Auto-generated documentation with:
- Agent purpose and capabilities
- Skills used with rationale
- Artifact flow (inputs/outputs)
- Example use cases
- Usage instructions
- "Created by meta.agent" attribution

## How It Works

1. **Parse Description** - Reads Markdown or JSON
2. **Find Skills** - Uses `agent.compose` to recommend compatible skills
3. **Generate Metadata** - Uses `artifact.define` for artifact contracts
4. **Infer Permissions** - Analyzes required skills
5. **Create Files** - Generates agent.yaml and README.md
6. **Validate** - Ensures proper structure and compatibility (see the programmatic sketch below)
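
The same flow can be driven programmatically through the `AgentCreator` class defined in `meta_agent.py` (included later in this commit); a minimal sketch, assuming `agents/meta.agent` is importable:

```python
from meta_agent import AgentCreator  # assumes agents/meta.agent is on sys.path

creator = AgentCreator(registry_path="registry/skills.json")
result = creator.create_agent("examples/api_architect_description.md")
# create_agent returns paths and metadata for the generated files
print(result["agent_yaml"], result["readme"], result["skills"])
```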
## Integration with Other Meta-Agents

### With meta.compatibility

After creating an agent, use `meta.compatibility` to analyze it:

```bash
# Create agent
python3 agents/meta.agent/meta_agent.py description.md

# Analyze compatibility
python3 agents/meta.compatibility/meta_compatibility.py analyze api.architect
```

### With meta.suggest

Get suggestions after creating an agent:

```bash
python3 agents/meta.suggest/meta_suggest.py \
  --context meta.agent \
  --artifacts agents/api.architect/agent.yaml
```

## Common Workflows

### Workflow 1: Create and Analyze

```bash
# Step 1: Create agent
python3 agents/meta.agent/meta_agent.py examples/my_agent.md

# Step 2: Analyze compatibility
python3 agents/meta.compatibility/meta_compatibility.py find-compatible my-agent

# Step 3: Test the agent
# (Manual testing or agent.run)
```

### Workflow 2: Create Multiple Agents

```bash
# Create several agents
for desc in examples/*_agent_description.md; do
  python3 agents/meta.agent/meta_agent.py "$desc"
done

# Analyze the ecosystem
python3 agents/meta.compatibility/meta_compatibility.py list-all
```

## Artifact Types

### Consumes

- **agent-description** - Natural language agent requirements
  - Format: Markdown or JSON
  - Pattern: `**/agent_description.md`

### Produces

- **agent-definition** - Complete agent.yaml
  - Format: YAML
  - Pattern: `agents/*/agent.yaml`
  - Schema: `schemas/agent-definition.json`

- **agent-documentation** - Auto-generated README
  - Format: Markdown
  - Pattern: `agents/*/README.md`

## Tips & Best Practices

### Writing Good Descriptions

✅ **Good:**
- Clear, specific purpose
- Well-defined inputs and outputs
- Concrete examples
- Specific artifact types

❌ **Avoid:**
- Vague purpose ("does stuff")
- Generic inputs ("data")
- No examples
- Unclear artifact types

### Choosing Artifact Types

Use existing artifact types when possible:
- `openapi-spec` for API specifications
- `validation-report` for validation results
- `workflow-definition` for workflows

If you need a new type, create it with `meta.artifact` first.

### Skill Selection

meta.agent uses keyword matching to find skills:
- "api" → finds api.define, api.validate
- "validate" → finds validation skills
- "agent" → finds agent.compose, meta.agent

Be descriptive in your purpose statement to get better skill recommendations.
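
A minimal illustration of the keyword-matching idea; this is a sketch, not the actual `agent.compose` implementation:

```python
def match_skills(purpose: str, registry_skills: list[dict]) -> list[str]:
    """Recommend skills whose name segments appear in the purpose text."""
    text = purpose.lower()
    matches = []
    for skill in registry_skills:
        name = skill.get("name", "")                        # e.g. "api.define"
        if any(part and part in text for part in name.split(".")):
            matches.append(name)
    return matches

# match_skills("Design and validate REST APIs",
#              [{"name": "api.define"}, {"name": "file.compress"}])
# -> ["api.define"]
```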
## Troubleshooting

### Agent name conflicts

```
Error: Agent 'api.architect' already exists
```

**Solution:** Choose a different name or remove the existing agent directory.

### No skills recommended

```
Warning: No skills found for agent purpose
```

**Solutions:**
- Make purpose more specific
- Mention artifact types explicitly
- Check if relevant skills exist in registry

### Missing artifact types

```
Warning: Artifact type 'my-artifact' not in known registry
```

**Solution:** Create the artifact type with `meta.artifact` first:
```bash
python3 agents/meta.artifact/meta_artifact.py create artifact_description.md
```

## Examples

See `examples/` directory for sample agent descriptions:
- `api_architect_description.md` - API design and validation agent
- (Add more as you create them)

## Architecture

meta.agent is part of the meta-agent ecosystem:

```
meta.agent
├─ Uses: agent.compose (find skills)
├─ Uses: artifact.define (generate metadata)
├─ Produces: agent.yaml + README.md
└─ Works with: meta.compatibility, meta.suggest
```

## Related Documentation

- [META_AGENTS.md](../../docs/META_AGENTS.md) - Complete meta-agent architecture
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [agent-description schema](../../schemas/agent-description.json) - JSON schema

## Created By

Part of the Betty Framework meta-agent ecosystem.
93
agents/meta.agent/agent.yaml
Normal file
@@ -0,0 +1,93 @@
name: meta.agent
version: 0.1.0
description: |
  Meta-agent that creates other agents by composing skills based on natural
  language descriptions. Transforms natural language descriptions into complete,
  functional agents.

  meta.agent analyzes agent requirements, recommends compatible skills using artifact
  metadata, generates complete agent definitions, and produces documentation.

artifact_metadata:
  consumes:
    - type: agent-description
      file_pattern: "**/agent_description.md"
      content_type: "text/markdown"
      description: "Natural language description of agent purpose and requirements"

  produces:
    - type: agent-definition
      file_pattern: "agents/*/agent.yaml"
      content_type: "application/yaml"
      schema: "schemas/agent-definition.json"
      description: "Complete agent configuration with skills and metadata"

    - type: agent-documentation
      file_pattern: "agents/*/README.md"
      content_type: "text/markdown"
      description: "Human-readable agent documentation"

status: draft
reasoning_mode: iterative
capabilities:
- Analyze agent requirements and identify compatible skills and capabilities
- Generate complete agent manifests, documentation, and supporting assets
- Validate registry consistency before registering new agents
skills_available:
  - agent.compose     # Find compatible skills based on requirements
  - artifact.define   # Generate artifact metadata for the new agent
  - registry.update   # Validate and register generated agents

permissions:
  - filesystem:read
  - filesystem:write

system_prompt: |
  You are meta.agent, the meta-agent that creates other agents by composing skills.

  Your purpose is to transform natural language descriptions into complete, functional agents
  with proper skill composition, artifact metadata, and documentation.

  ## Your Workflow

  1. **Parse Requirements** - Understand what the agent needs to do
     - Extract purpose, inputs, outputs, and constraints
     - Identify required artifacts and permissions

  2. **Compose Skills** - Use agent.compose to find compatible skills
     - Analyze artifact flows (what's produced and consumed)
     - Ensure no gaps in the artifact chain
     - Consider permission requirements

  3. **Generate Metadata** - Use artifact.define for proper artifact contracts
     - Define what artifacts the agent consumes
     - Define what artifacts the agent produces
     - Include schemas and file patterns

  4. **Create Agent Definition** - Write agent.yaml
     - Name, description, skills_available
     - Artifact metadata (consumes/produces)
     - Permissions
     - System prompt (optional but recommended)

  5. **Document** - Generate comprehensive README.md
     - Agent purpose and use cases
     - Required inputs and expected outputs
     - Example usage
     - Artifact flow diagram

  6. **Validate** (optional) - Use registry.certify
     - Check agent definition is valid
     - Verify skill compatibility
     - Ensure artifact contracts are sound

  ## Principles

  - **Artifact-First Design**: Ensure clean artifact flows with no gaps
  - **Minimal Skill Sets**: Only include skills the agent actually needs
  - **Clear Documentation**: Make the agent's purpose immediately obvious
  - **Convention Adherence**: Follow Betty Framework standards
  - **Composability**: Design agents that work well with other agents

  When creating an agent, think like an architect: What does it consume? What does it
  produce? What skills enable that transformation? How do artifacts flow through the system?
558
agents/meta.agent/meta_agent.py
Executable file
@@ -0,0 +1,558 @@
#!/usr/bin/env python3
"""
meta.agent - Meta-agent that creates other agents

Transforms natural language descriptions into complete, functional agents
by composing skills and generating proper artifact metadata.
"""

import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional

# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)

# Import skill modules directly
agent_compose_path = Path(parent_dir) / "skills" / "agent.compose"
artifact_define_path = Path(parent_dir) / "skills" / "artifact.define"

sys.path.insert(0, str(agent_compose_path))
sys.path.insert(0, str(artifact_define_path))

import agent_compose
import artifact_define

# Import traceability system
from betty.traceability import get_tracer, RequirementInfo


class AgentCreator:
    """Creates agents from natural language descriptions"""

    def __init__(self, registry_path: str = "registry/skills.json"):
        """Initialize with registry path"""
        self.registry_path = Path(registry_path)
        self.registry = self._load_registry()

    def _load_registry(self) -> Dict[str, Any]:
        """Load skills registry"""
        if not self.registry_path.exists():
            raise FileNotFoundError(f"Registry not found: {self.registry_path}")

        with open(self.registry_path) as f:
            return json.load(f)

    def parse_description(self, description_path: str) -> Dict[str, Any]:
        """
        Parse agent description from Markdown or JSON file

        Args:
            description_path: Path to agent_description.md or agent_description.json

        Returns:
            Parsed description with name, purpose, inputs, outputs, constraints
        """
        path = Path(description_path)

        if not path.exists():
            raise FileNotFoundError(f"Description not found: {description_path}")

        # Handle JSON format
        if path.suffix == ".json":
            with open(path) as f:
                return json.load(f)

        # Handle Markdown format
        with open(path) as f:
            content = f.read()

        # Parse Markdown sections
        description = {
            "name": "",
            "purpose": "",
            "inputs": [],
            "outputs": [],
            "constraints": {},
            "examples": []
        }

        current_section = None
        for line in content.split('\n'):
            line = line.strip()

            # Section headers
            if line.startswith('# Name:'):
                description["name"] = line.replace('# Name:', '').strip()
            elif line.startswith('# Purpose:'):
                current_section = "purpose"
            elif line.startswith('# Inputs:'):
                current_section = "inputs"
            elif line.startswith('# Outputs:'):
                current_section = "outputs"
            elif line.startswith('# Constraints:'):
                current_section = "constraints"
            elif line.startswith('# Examples:'):
                current_section = "examples"
            elif line and not line.startswith('#'):
                # Content for current section
                if current_section == "purpose":
                    description["purpose"] += line + " "
                elif current_section == "inputs" and line.startswith('-'):
                    # Extract artifact type (before parentheses or description)
                    artifact = line[1:].strip()
                    # Remove anything in parentheses and any extra description
                    if '(' in artifact:
                        artifact = artifact.split('(')[0].strip()
                    description["inputs"].append(artifact)
                elif current_section == "outputs" and line.startswith('-'):
                    # Extract artifact type (before parentheses or description)
                    artifact = line[1:].strip()
                    # Remove anything in parentheses and any extra description
                    if '(' in artifact:
                        artifact = artifact.split('(')[0].strip()
                    description["outputs"].append(artifact)
                elif current_section == "examples" and line.startswith('-'):
                    description["examples"].append(line[1:].strip())

        description["purpose"] = description["purpose"].strip()
        return description

    def find_compatible_skills(
        self,
        purpose: str,
        required_artifacts: Optional[List[str]] = None
    ) -> Dict[str, Any]:
        """
        Find compatible skills for agent purpose

        Args:
            purpose: Natural language description of agent purpose
            required_artifacts: List of artifact types the agent needs

        Returns:
            Dictionary with recommended skills and rationale
        """
        return agent_compose.find_skills_for_purpose(
            self.registry,
            purpose,
            required_artifacts
        )

    def generate_artifact_metadata(
        self,
        inputs: List[str],
        outputs: List[str]
    ) -> Dict[str, Any]:
        """
        Generate artifact metadata from inputs/outputs

        Args:
            inputs: List of input artifact types
            outputs: List of output artifact types

        Returns:
            Artifact metadata structure
        """
        metadata = {}

        if inputs:
            metadata["consumes"] = []
            for input_type in inputs:
                artifact_def = artifact_define.get_artifact_definition(input_type)
                if artifact_def:
                    metadata["consumes"].append(artifact_def)
                else:
                    # Create basic definition
                    metadata["consumes"].append({
                        "type": input_type,
                        "description": f"Input artifact of type {input_type}"
                    })

        if outputs:
            metadata["produces"] = []
            for output_type in outputs:
                artifact_def = artifact_define.get_artifact_definition(output_type)
                if artifact_def:
                    metadata["produces"].append(artifact_def)
                else:
                    # Create basic definition
                    metadata["produces"].append({
                        "type": output_type,
                        "description": f"Output artifact of type {output_type}"
                    })

        return metadata

    def infer_permissions(self, skills: List[str]) -> List[str]:
        """
        Infer required permissions from skills

        Args:
            skills: List of skill names

        Returns:
            List of required permissions
        """
        permissions = set()
        skills_list = self.registry.get("skills", [])

        for skill_name in skills:
            # Find skill in registry
            skill = next(
                (s for s in skills_list if s.get("name") == skill_name),
                None
            )

            if skill and "permissions" in skill:
                for perm in skill["permissions"]:
                    permissions.add(perm)

        return sorted(list(permissions))

    def generate_agent_yaml(
        self,
        name: str,
        description: str,
        skills: List[str],
        artifact_metadata: Dict[str, Any],
        permissions: List[str],
        system_prompt: Optional[str] = None
    ) -> str:
        """
        Generate agent.yaml content

        Args:
            name: Agent name
            description: Agent description
            skills: List of skill names
            artifact_metadata: Artifact metadata structure
            permissions: List of permissions
            system_prompt: Optional system prompt

        Returns:
            YAML content as string
        """
        agent_def = {
            "name": name,
            "description": description,
            "skills_available": skills,
            "permissions": permissions
        }

        if artifact_metadata:
            agent_def["artifact_metadata"] = artifact_metadata

        if system_prompt:
            agent_def["system_prompt"] = system_prompt

        return yaml.dump(
            agent_def,
            default_flow_style=False,
            sort_keys=False,
            allow_unicode=True
        )

    def generate_readme(
        self,
        name: str,
        purpose: str,
        skills: List[str],
        inputs: List[str],
        outputs: List[str],
        examples: List[str]
    ) -> str:
        """
        Generate README.md content

        Args:
            name: Agent name
            purpose: Agent purpose
            skills: List of skill names
            inputs: Input artifacts
            outputs: Output artifacts
            examples: Example use cases

        Returns:
            Markdown content
        """
        readme = f"""# {name.title()} Agent

## Purpose

{purpose}

## Skills

This agent uses the following skills:

"""
        for skill in skills:
            readme += f"- `{skill}`\n"

        if inputs or outputs:
            readme += "\n## Artifact Flow\n\n"

            if inputs:
                readme += "### Consumes\n\n"
                for inp in inputs:
                    readme += f"- `{inp}`\n"
                readme += "\n"

            if outputs:
                readme += "### Produces\n\n"
                for out in outputs:
                    readme += f"- `{out}`\n"
                readme += "\n"

        if examples:
            readme += "## Example Use Cases\n\n"
            for example in examples:
                readme += f"- {example}\n"
            readme += "\n"

        readme += """## Usage

```bash
# Activate the agent
/agent {name}

# Or invoke directly
betty agent run {name} --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
""".format(name=name)

        return readme

    def create_agent(
        self,
        description_path: str,
        output_dir: Optional[str] = None,
        validate: bool = True,
        requirement: Optional[RequirementInfo] = None
    ) -> Dict[str, str]:
        """
        Create a complete agent from description

        Args:
            description_path: Path to agent description file
            output_dir: Output directory (default: agents/{name}/)
            validate: Whether to validate with registry.certify
            requirement: Requirement information for traceability (optional)

        Returns:
            Dictionary with paths to created files
        """
        # NOTE: the `validate` flag (surfaced via --no-validate) is accepted,
        # but the registry.certify validation step is not invoked in this version.

        # Parse description
        desc = self.parse_description(description_path)
        name = desc["name"]

        if not name:
            raise ValueError("Agent name is required")

        # Determine output directory
        if not output_dir:
            output_dir = f"agents/{name}"

        output_path = Path(output_dir)
        output_path.mkdir(parents=True, exist_ok=True)

        # Find compatible skills
        skill_recommendations = self.find_compatible_skills(
            desc["purpose"],
            desc.get("inputs", []) + desc.get("outputs", [])
        )

        skills = skill_recommendations.get("recommended_skills", [])

        # Generate artifact metadata
        artifact_metadata = self.generate_artifact_metadata(
            desc.get("inputs", []),
            desc.get("outputs", [])
        )

        # Infer permissions
        permissions = self.infer_permissions(skills)

        # Generate agent.yaml
        agent_yaml_content = self.generate_agent_yaml(
            name=name,
            description=desc["purpose"],
            skills=skills,
            artifact_metadata=artifact_metadata,
            permissions=permissions
        )

        agent_yaml_path = output_path / "agent.yaml"
        with open(agent_yaml_path, 'w') as f:
            f.write(agent_yaml_content)

        # Generate README.md
        readme_content = self.generate_readme(
            name=name,
            purpose=desc["purpose"],
            skills=skills,
            inputs=desc.get("inputs", []),
            outputs=desc.get("outputs", []),
            examples=desc.get("examples", [])
        )

        readme_path = output_path / "README.md"
        with open(readme_path, 'w') as f:
            f.write(readme_content)

        # Log traceability if requirement provided
        trace_id = None
        if requirement:
            try:
                tracer = get_tracer()
                trace_id = tracer.log_creation(
                    component_id=name,
                    component_name=name.replace(".", " ").title(),
                    component_type="agent",
                    component_version="0.1.0",
                    component_file_path=str(agent_yaml_path),
                    input_source_path=description_path,
                    created_by_tool="meta.agent",
                    created_by_version="0.1.0",
                    requirement=requirement,
                    tags=["agent", "auto-generated"],
                    project="Betty Framework"
                )

                # Log validation check
                tracer.log_verification(
                    component_id=name,
                    check_type="validation",
                    tool="meta.agent",
                    result="passed",
                    details={
                        "checks_performed": [
                            {"name": "agent_structure", "status": "passed"},
                            {"name": "artifact_metadata", "status": "passed"},
                            {"name": "skills_compatibility", "status": "passed", "message": f"{len(skills)} compatible skills found"}
                        ]
                    }
                )
            except Exception as e:
                print(f"⚠️  Warning: Could not log traceability: {e}")

        result = {
            "agent_yaml": str(agent_yaml_path),
            "readme": str(readme_path),
            "name": name,
            "skills": skills,
            "rationale": skill_recommendations.get("rationale", "")
        }

        if trace_id:
            result["trace_id"] = trace_id

        return result


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="meta.agent - Create agents from natural language descriptions"
    )
    parser.add_argument(
        "description",
        help="Path to agent description file (.md or .json)"
    )
    parser.add_argument(
        "-o", "--output",
        help="Output directory (default: agents/{name}/)"
    )
    parser.add_argument(
        "--no-validate",
        action="store_true",
        help="Skip validation step"
    )

    # Traceability arguments
    parser.add_argument(
        "--requirement-id",
        help="Requirement identifier for traceability (e.g., REQ-2025-001)"
    )
    parser.add_argument(
        "--requirement-description",
        help="What this agent is meant to accomplish"
    )
    parser.add_argument(
        "--requirement-source",
        help="Source document or system (e.g., requirements/Q1-2025.md)"
    )
    parser.add_argument(
        "--issue-id",
        help="Issue tracking ID (e.g., JIRA-123)"
    )
    parser.add_argument(
        "--requested-by",
        help="Who requested this requirement"
    )
    parser.add_argument(
        "--rationale",
        help="Why this component is needed"
    )

    args = parser.parse_args()

    # Create requirement info if provided
    requirement = None
    if args.requirement_id and args.requirement_description:
        requirement = RequirementInfo(
            id=args.requirement_id,
            description=args.requirement_description,
            source=args.requirement_source,
            issue_id=args.issue_id,
            requested_by=args.requested_by,
            rationale=args.rationale
        )

    # Create agent
    creator = AgentCreator()

    print(f"🔮 meta.agent creating agent from {args.description}...")

    try:
        result = creator.create_agent(
            args.description,
            output_dir=args.output,
            validate=not args.no_validate,
            requirement=requirement
        )

        print(f"\n✨ Agent '{result['name']}' created successfully!\n")
        print(f"📄 Agent definition: {result['agent_yaml']}")
        print(f"📖 Documentation: {result['readme']}\n")
        print(f"🔧 Skills: {', '.join(result['skills'])}\n")

        if result.get("rationale"):
            print(f"💡 Rationale:\n{result['rationale']}\n")

        if result.get("trace_id"):
            print(f"📝 Traceability: {result['trace_id']}")
            print(f"   View trace: python3 betty/trace_cli.py show {result['name']}\n")

    except Exception as e:
        print(f"\n❌ Error creating agent: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
372
agents/meta.artifact/README.md
Normal file
@@ -0,0 +1,372 @@
# meta.artifact - The Artifact Standards Authority
|
||||
|
||||
THE single source of truth for all artifact type definitions in Betty Framework.
|
||||
|
||||
## Overview
|
||||
|
||||
**meta.artifact** manages the complete lifecycle of artifact types - from definition to documentation to registration. All artifact types MUST be created through meta.artifact. No ad-hoc definitions are permitted.
|
||||
|
||||
**What it does:**
|
||||
- Defines new artifact types from descriptions
|
||||
- Generates JSON schemas with validation rules
|
||||
- Updates ARTIFACT_STANDARDS.md automatically
|
||||
- Registers types in KNOWN_ARTIFACT_TYPES
|
||||
- Validates uniqueness and prevents conflicts
|
||||
|
||||
## Quick Start
|
||||
|
||||
### 1. Create Artifact Description
|
||||
|
||||
```markdown
|
||||
# Name: optimization-report
|
||||
|
||||
# Purpose:
|
||||
Performance and security optimization recommendations for APIs
|
||||
|
||||
# Format: JSON
|
||||
|
||||
# File Pattern: *.optimization.json
|
||||
|
||||
# Schema Properties:
|
||||
- optimizations (array): List of optimization recommendations
|
||||
- severity (string): Severity level
|
||||
- analyzed_artifact (string): Reference to analyzed artifact
|
||||
|
||||
# Required Fields:
|
||||
- optimizations
|
||||
- severity
|
||||
- analyzed_artifact
|
||||
|
||||
# Producers:
|
||||
- api.optimize
|
||||
|
||||
# Consumers:
|
||||
- api.implement
|
||||
- report.generate
|
||||
```
|
||||
|
||||
### 2. Create Artifact Type
|
||||
|
||||
```bash
|
||||
python3 agents/meta.artifact/meta_artifact.py create examples/optimization_report_artifact.md
|
||||
```
|
||||
|
||||
### 3. Output
|
||||
|
||||
```
|
||||
✨ Artifact type 'optimization-report' created successfully!
|
||||
|
||||
📄 Created files:
|
||||
- schemas/optimization-report.json
|
||||
|
||||
📝 Updated files:
|
||||
- docs/ARTIFACT_STANDARDS.md
|
||||
- skills/artifact.define/artifact_define.py
|
||||
|
||||
✅ Artifact type 'optimization-report' is now registered
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Create New Artifact Type
|
||||
|
||||
```bash
|
||||
# From Markdown description
|
||||
python3 agents/meta.artifact/meta_artifact.py create artifact_description.md
|
||||
|
||||
# From JSON description
|
||||
python3 agents/meta.artifact/meta_artifact.py create artifact_description.json
|
||||
|
||||
# Force overwrite if exists
|
||||
python3 agents/meta.artifact/meta_artifact.py create artifact_description.md --force
|
||||
```
|
||||
|
||||
### Check if Artifact Exists
|
||||
|
||||
```bash
|
||||
python3 agents/meta.artifact/meta_artifact.py check optimization-report
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
✅ Artifact type 'optimization-report' exists
|
||||
Location: docs/ARTIFACT_STANDARDS.md
|
||||
```
|
||||
|
||||
## What meta.artifact Creates
|
||||
|
||||
### 1. JSON Schema (schemas/*.json)
|
||||
|
||||
Complete JSON Schema Draft 07 schema with:
|
||||
- Properties from description
|
||||
- Required fields
|
||||
- Type validation
|
||||
- Descriptions
|
||||
|
||||
Example:
|
||||
```json
|
||||
{
|
||||
"$schema": "http://json-schema.org/draft-07/schema#",
|
||||
"title": "Optimization Report",
|
||||
"description": "Performance and security recommendations...",
|
||||
"type": "object",
|
||||
"required": ["optimizations", "severity", "analyzed_artifact"],
|
||||
"properties": {
|
||||
"optimizations": {
|
||||
"type": "array",
|
||||
"description": "List of optimization recommendations"
|
||||
},
|
||||
...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Documentation Section (docs/ARTIFACT_STANDARDS.md)
|
||||
|
||||
Adds complete section with:
|
||||
- Artifact number and title
|
||||
- Description
|
||||
- Convention (file pattern, format, content type)
|
||||
- Schema reference
|
||||
- Producers and consumers
|
||||
- Related types
|
||||
|
||||
### 3. Registry Entry (skills/artifact.define/artifact_define.py)
|
||||
|
||||
Adds to KNOWN_ARTIFACT_TYPES:
|
||||
```python
|
||||
"optimization-report": {
|
||||
"schema": "schemas/optimization-report.json",
|
||||
"file_pattern": "*.optimization.json",
|
||||
"content_type": "application/json",
|
||||
"description": "Performance and security optimization recommendations..."
|
||||
}
|
||||
```
|
||||
|
||||
## Description Format
|
||||
|
||||
### Markdown Format
|
||||
|
||||
```markdown
|
||||
# Name: artifact-type-name
|
||||
|
||||
# Purpose:
|
||||
Detailed description of what this artifact represents...
|
||||
|
||||
# Format: JSON | YAML | Markdown | Python | etc.
|
||||
|
||||
# File Pattern: *.artifact-type.ext
|
||||
|
||||
# Content Type: application/json (optional, inferred from format)
|
||||
|
||||
# Schema Properties:
|
||||
- property_name (type): Description
|
||||
- another_property (array): Description
|
||||
|
||||
# Required Fields:
|
||||
- property_name
|
||||
- another_property
|
||||
|
||||
# Producers:
|
||||
- skill.that.produces
|
||||
- agent.that.produces
|
||||
|
||||
# Consumers:
|
||||
- skill.that.consumes
|
||||
- agent.that.consumes
|
||||
|
||||
# Related Types:
|
||||
- related-artifact-1
|
||||
- related-artifact-2
|
||||
|
||||
# Validation Rules:
|
||||
- Custom rule 1
|
||||
- Custom rule 2
|
||||
```
|
||||
|
||||
### JSON Format
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "artifact-type-name",
|
||||
"purpose": "Description...",
|
||||
"format": "JSON",
|
||||
"file_pattern": "*.artifact-type.json",
|
||||
"schema_properties": {
|
||||
"field1": {"type": "string", "description": "..."},
|
||||
"field2": {"type": "array", "description": "..."}
|
||||
},
|
||||
"required_fields": ["field1"],
|
||||
"producers": ["producer.skill"],
|
||||
"consumers": ["consumer.skill"]
|
||||
}
|
||||
```
|
||||
|
||||
## Governance Rules
|
||||
|
||||
meta.artifact enforces these rules:
|
||||
|
||||
1. **Uniqueness** - Each artifact type must have a unique name
|
||||
2. **Clarity** - Names must be descriptive (e.g., "openapi-spec" not "spec")
|
||||
3. **Consistency** - Must use kebab-case (lowercase with hyphens)
|
||||
4. **Documentation** - Every type must be fully documented
|
||||
5. **Schemas** - Every type should have a JSON schema (if applicable)
|
||||
6. **No Conflicts** - Checks for naming conflicts before creating
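
In code, rules 1, 3, and 6 reduce to a regex check plus an existence lookup. A minimal sketch mirroring the guards in `agents/meta.artifact/meta_artifact.py`:

```python
import re

name = "optimization-report"

# Rule 3 (Consistency): kebab-case only
if not re.match(r"^[a-z0-9-]+$", name):
    raise ValueError(f"Artifact name must be kebab-case (lowercase with hyphens): {name}")

# Rules 1 and 6 (Uniqueness / No Conflicts): refuse to redefine without --force
exists, location = ArtifactAuthority().check_existence(name)
if exists:
    raise ValueError(f"Artifact type '{name}' already exists at: {location}")
```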
|
||||
|
||||
## Workflow
|
||||
|
||||
```
|
||||
Developer creates artifact_description.md
|
||||
↓
|
||||
meta.artifact validates name and format
|
||||
↓
|
||||
Checks if type already exists
|
||||
↓
|
||||
Generates JSON Schema
|
||||
↓
|
||||
Updates ARTIFACT_STANDARDS.md
|
||||
↓
|
||||
Adds to KNOWN_ARTIFACT_TYPES
|
||||
↓
|
||||
Validates all files
|
||||
↓
|
||||
Type is now registered and usable
|
||||
```
|
||||
|
||||
## Integration
|
||||
|
||||
### With meta.agent
|
||||
|
||||
When meta.agent needs a new artifact type:
|
||||
|
||||
```bash
|
||||
# 1. Define the artifact type
|
||||
python3 agents/meta.artifact/meta_artifact.py create my_artifact.md
|
||||
|
||||
# 2. Create agent that uses it
|
||||
python3 agents/meta.agent/meta_agent.py agent_description.md
|
||||
```
|
||||
|
||||
### With meta.suggest
|
||||
|
||||
meta.suggest will recommend creating artifact types for gaps:
|
||||
|
||||
```bash
|
||||
python3 agents/meta.suggest/meta_suggest.py --analyze-project
|
||||
```
|
||||
|
||||
Output includes:
|
||||
```
|
||||
💡 Suggestions:
|
||||
1. Create agent/skill to produce 'missing-artifact'
|
||||
```
|
||||
|
||||
## Existing Artifact Types
|
||||
|
||||
Check `docs/ARTIFACT_STANDARDS.md` for all registered types:
|
||||
|
||||
- `openapi-spec` - OpenAPI specifications
|
||||
- `validation-report` - Validation results
|
||||
- `workflow-definition` - Betty workflows
|
||||
- `hook-config` - Claude Code hooks
|
||||
- `api-models` - Generated data models
|
||||
- `agent-description` - Agent requirements
|
||||
- `agent-definition` - Agent configurations
|
||||
- `agent-documentation` - Agent READMEs
|
||||
- `optimization-report` - Optimization recommendations
|
||||
- `compatibility-graph` - Agent relationships
|
||||
- `pipeline-suggestion` - Multi-agent workflows
|
||||
- `suggestion-report` - Next-step recommendations
|
||||
|
||||
## Tips & Best Practices
|
||||
|
||||
### Naming Artifact Types
|
||||
|
||||
✅ **Good:**
|
||||
- `validation-report` (clear, descriptive)
|
||||
- `openapi-spec` (standard term)
|
||||
- `optimization-report` (action + result)
|
||||
|
||||
❌ **Avoid:**
|
||||
- `report` (too generic)
|
||||
- `validationReport` (should be kebab-case)
|
||||
- `val-rep` (opaque abbreviation)
|
||||
|
||||
### Writing Descriptions
|
||||
|
||||
Be comprehensive:
|
||||
- Explain what the artifact represents
|
||||
- Include all important properties
|
||||
- Document producers and consumers
|
||||
- Add related types for discoverability
|
||||
|
||||
### Schema Properties
|
||||
|
||||
Be specific about types:
|
||||
- Use JSON Schema types: string, number, integer, boolean, array, object
|
||||
- Add descriptions for every property
|
||||
- Mark required fields
|
||||
- Consider validation rules
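
The generator emits only `type` and `description` per property; richer constraints must be added by hand-editing the generated schema. A hypothetical hardened property (the `severity` name and enum values are illustrative):

```json
"severity": {
  "type": "string",
  "description": "Overall severity of the findings",
  "enum": ["low", "medium", "high", "critical"]
}
```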
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Name already exists
|
||||
|
||||
```
|
||||
Error: Artifact type 'my-artifact' already exists at: docs/ARTIFACT_STANDARDS.md
|
||||
```
|
||||
|
||||
**Solutions:**
|
||||
1. Use `--force` to overwrite (careful!)
|
||||
2. Choose a different name
|
||||
3. Use the existing type if appropriate
|
||||
|
||||
### Invalid name format
|
||||
|
||||
```
|
||||
Error: Artifact name must be kebab-case (lowercase with hyphens): MyArtifact
|
||||
```
|
||||
|
||||
**Solution:** Use lowercase with hyphens: `my-artifact`
|
||||
|
||||
### Missing schema properties
|
||||
|
||||
If your artifact is JSON/YAML but has no schema properties, meta.artifact will still create a basic schema. Add properties for better validation.
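
Based on the schema generator in `meta_artifact.py`, the basic fallback is metadata only, with no `properties` or `required` constraints:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "My Artifact",
  "description": "Whatever you wrote under # Purpose:",
  "type": "object"
}
```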
|
||||
|
||||
## Architecture
|
||||
|
||||
meta.artifact is THE authority in the meta-agent ecosystem:
|
||||
|
||||
```
|
||||
meta.artifact (Authority)
|
||||
├─ Manages: All artifact type definitions
|
||||
├─ Updates: ARTIFACT_STANDARDS.md
|
||||
├─ Registers: KNOWN_ARTIFACT_TYPES
|
||||
├─ Used by: meta.agent, meta.skill, all agents
|
||||
└─ Governance: Single source of truth
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
See `examples/` for artifact descriptions:
|
||||
- `optimization_report_artifact.md`
|
||||
- `compatibility_graph_artifact.md`
|
||||
- `pipeline_suggestion_artifact.md`
|
||||
- `suggestion_report_artifact.md`
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Complete artifact documentation
|
||||
- [artifact-type-description schema](../../schemas/artifact-type-description.json)
|
||||
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
|
||||
|
||||
## Philosophy
|
||||
|
||||
**Single Source of Truth** - All artifact definitions flow through meta.artifact. This ensures:
|
||||
- Consistency across the framework
|
||||
- Proper documentation
|
||||
- Schema validation
|
||||
- No conflicts
|
||||
- Discoverability
|
||||
|
||||
When in doubt, ask meta.artifact.
|
||||
144
agents/meta.artifact/agent.yaml
Normal file
@@ -0,0 +1,144 @@
|
||||
name: meta.artifact
|
||||
version: 0.1.0
|
||||
description: |
|
||||
The artifact standards authority - THE single source of truth for all
|
||||
artifact type definitions in Betty Framework.
|
||||
|
||||
This meta-agent manages the complete lifecycle of artifact types:
|
||||
- Defines new artifact types with JSON schemas
|
||||
- Updates ARTIFACT_STANDARDS.md documentation
|
||||
- Registers types in the artifact registry
|
||||
- Validates artifact compatibility across the system
|
||||
- Ensures consistency and prevents conflicts
|
||||
|
||||
All artifact types MUST be registered through meta.artifact before use.
|
||||
No ad-hoc artifact definitions are permitted.
|
||||
|
||||
artifact_metadata:
|
||||
consumes:
|
||||
- type: artifact-type-description
|
||||
file_pattern: "**/artifact_type_description.md"
|
||||
content_type: "text/markdown"
|
||||
description: "Natural language description of a new artifact type"
|
||||
schema: "schemas/artifact-type-description.json"
|
||||
|
||||
produces:
|
||||
- type: artifact-schema
|
||||
file_pattern: "schemas/*.json"
|
||||
content_type: "application/json"
|
||||
schema: "http://json-schema.org/draft-07/schema#"
|
||||
description: "JSON Schema for validating artifact instances"
|
||||
|
||||
- type: artifact-documentation
|
||||
file_pattern: "docs/ARTIFACT_STANDARDS.md"
|
||||
content_type: "text/markdown"
|
||||
description: "Updated artifact standards documentation"
|
||||
|
||||
- type: artifact-registry-entry
|
||||
file_pattern: "skills/artifact.define/artifact_define.py"
|
||||
content_type: "text/x-python"
|
||||
description: "Updated KNOWN_ARTIFACT_TYPES registry"
|
||||
|
||||
status: draft
|
||||
reasoning_mode: iterative
|
||||
capabilities:
|
||||
- Curate and register canonical artifact type definitions and schemas
|
||||
- Synchronize documentation with changes to artifact standards
|
||||
- Validate artifact compatibility across registries and manifests
|
||||
skills_available:
|
||||
- artifact.define # Use existing artifact definitions
|
||||
- registry.update # Register or amend artifact metadata
|
||||
- registry.query # Inspect existing registry entries
|
||||
|
||||
permissions:
|
||||
- filesystem:read
|
||||
- filesystem:write
|
||||
|
||||
system_prompt: |
|
||||
You are meta.artifact, the artifact standards authority for Betty Framework.
|
||||
|
||||
You are THE single source of truth for artifact type definitions. All artifact
|
||||
types flow through you - no exceptions.
|
||||
|
||||
## Your Responsibilities
|
||||
|
||||
1. **Define New Artifact Types**
|
||||
- Parse artifact type descriptions
|
||||
- Validate uniqueness (check if type already exists)
|
||||
- Create JSON schemas with proper validation rules
|
||||
- Generate comprehensive documentation
|
||||
- Register in KNOWN_ARTIFACT_TYPES
|
||||
|
||||
2. **Maintain Standards Documentation**
|
||||
- Update docs/ARTIFACT_STANDARDS.md with new types
|
||||
- Include file patterns, schemas, producers, consumers
|
||||
- Provide clear examples
|
||||
- Keep Quick Reference table up to date
|
||||
|
||||
3. **Validate Compatibility**
|
||||
- Check if artifact types can work together
|
||||
- Verify producer/consumer contracts
|
||||
- Ensure no naming conflicts
|
||||
- Validate schema consistency
|
||||
|
||||
4. **Registry Management**
|
||||
- Update skills/artifact.define/artifact_define.py
|
||||
- Add to KNOWN_ARTIFACT_TYPES dictionary
|
||||
- Include all metadata (schema, file_pattern, content_type, description)
|
||||
|
||||
## Workflow for New Artifact Type
|
||||
|
||||
1. **Check Existence**
|
||||
- Search ARTIFACT_STANDARDS.md for similar types
|
||||
- Check KNOWN_ARTIFACT_TYPES registry
|
||||
- Suggest existing type if appropriate
|
||||
|
||||
2. **Generate JSON Schema**
|
||||
- Create schemas/{type-name}.json
|
||||
- Include proper validation rules
|
||||
- Use JSON Schema Draft 07
|
||||
- Add description, examples, required fields
|
||||
|
||||
3. **Update Documentation**
|
||||
- Add new section to ARTIFACT_STANDARDS.md
|
||||
- Follow existing format (Description, Convention, Schema, Producers, Consumers)
|
||||
- Update Quick Reference table
|
||||
|
||||
4. **Update Registry**
|
||||
- Add entry to KNOWN_ARTIFACT_TYPES in artifact_define.py
|
||||
- Include: schema, file_pattern, content_type, description
|
||||
|
||||
5. **Validate**
|
||||
- Ensure all files are properly formatted
|
||||
- Check for syntax errors
|
||||
- Validate schema is valid JSON Schema
|
||||
|
||||
## Governance Rules
|
||||
|
||||
- **Uniqueness**: Each artifact type must have a unique name
|
||||
- **Clarity**: Names should be descriptive (e.g., "openapi-spec" not "spec")
|
||||
- **Consistency**: Follow kebab-case naming (lowercase with hyphens)
|
||||
- **Documentation**: Every type must be fully documented
|
||||
- **Schemas**: Every type should have a JSON schema (if applicable)
|
||||
- **No Conflicts**: Check for naming conflicts before creating
|
||||
|
||||
## Example Workflow
|
||||
|
||||
User provides artifact_type_description.md:
|
||||
```
|
||||
# Name: optimization-report
|
||||
# Purpose: API optimization recommendations
|
||||
# Format: JSON
|
||||
# Producers: api.optimize
|
||||
# Consumers: api.implement
|
||||
```
|
||||
|
||||
You:
|
||||
1. Check if "optimization-report" exists → it doesn't
|
||||
2. Generate schemas/optimization-report.json
|
||||
3. Update ARTIFACT_STANDARDS.md with new section
|
||||
4. Add to KNOWN_ARTIFACT_TYPES
|
||||
5. Return summary of changes
|
||||
|
||||
Remember: You are the guardian of artifact standards. Be thorough, be consistent,
|
||||
be the single source of truth.
|
||||
526
agents/meta.artifact/meta_artifact.py
Executable file
@@ -0,0 +1,526 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
meta.artifact - The Artifact Standards Authority
|
||||
|
||||
THE single source of truth for all artifact type definitions in Betty Framework.
|
||||
Manages schemas, documentation, and registry for all artifact types.
|
||||
"""
|
||||
|
||||
import json
import sys
import re
from pathlib import Path
from typing import Dict, List, Any, Optional, Tuple
|
||||
|
||||
# Add parent directory to path for imports
|
||||
parent_dir = str(Path(__file__).parent.parent.parent)
|
||||
sys.path.insert(0, parent_dir)
|
||||
|
||||
|
||||
class ArtifactAuthority:
|
||||
"""The artifact standards authority - manages all artifact type definitions"""
|
||||
|
||||
def __init__(self, base_dir: str = "."):
|
||||
"""Initialize with base directory"""
|
||||
self.base_dir = Path(base_dir)
|
||||
self.standards_doc = self.base_dir / "docs" / "ARTIFACT_STANDARDS.md"
|
||||
self.schemas_dir = self.base_dir / "schemas"
|
||||
self.artifact_define = self.base_dir / "skills" / "artifact.define" / "artifact_define.py"
|
||||
|
||||
def parse_description(self, description_path: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Parse artifact type description from Markdown or JSON file
|
||||
|
||||
Args:
|
||||
description_path: Path to artifact_type_description.md or .json
|
||||
|
||||
Returns:
|
||||
Parsed description with all artifact metadata
|
||||
"""
|
||||
path = Path(description_path)
|
||||
|
||||
if not path.exists():
|
||||
raise FileNotFoundError(f"Description not found: {description_path}")
|
||||
|
||||
        # Handle JSON format
        if path.suffix == ".json":
            with open(path) as f:
                data = json.load(f)
            # Backfill optional keys so downstream lookups behave like the Markdown path
            for key, default in [("format", ""), ("content_type", ""), ("file_pattern", ""),
                                 ("schema_properties", {}), ("required_fields", []),
                                 ("producers", []), ("consumers", []), ("related_types", [])]:
                data.setdefault(key, default)
            return data
|
||||
|
||||
# Handle Markdown format
|
||||
with open(path) as f:
|
||||
content = f.read()
|
||||
|
||||
# Parse Markdown sections
|
||||
description = {
|
||||
"name": "",
|
||||
"purpose": "",
|
||||
"format": "",
|
||||
"file_pattern": "",
|
||||
"content_type": "",
|
||||
"schema_properties": {},
|
||||
"required_fields": [],
|
||||
"producers": [],
|
||||
"consumers": [],
|
||||
"examples": [],
|
||||
"validation_rules": [],
|
||||
"related_types": []
|
||||
}
|
||||
|
||||
current_section = None
|
||||
for line in content.split('\n'):
|
||||
line = line.strip()
|
||||
|
||||
# Section headers
|
||||
if line.startswith('# Name:'):
|
||||
description["name"] = line.replace('# Name:', '').strip()
|
||||
elif line.startswith('# Purpose:'):
|
||||
current_section = "purpose"
|
||||
elif line.startswith('# Format:'):
|
||||
description["format"] = line.replace('# Format:', '').strip()
|
||||
elif line.startswith('# File Pattern:'):
|
||||
description["file_pattern"] = line.replace('# File Pattern:', '').strip()
|
||||
elif line.startswith('# Content Type:'):
|
||||
description["content_type"] = line.replace('# Content Type:', '').strip()
|
||||
elif line.startswith('# Schema Properties:'):
|
||||
current_section = "schema_properties"
|
||||
elif line.startswith('# Required Fields:'):
|
||||
current_section = "required_fields"
|
||||
elif line.startswith('# Producers:'):
|
||||
current_section = "producers"
|
||||
elif line.startswith('# Consumers:'):
|
||||
current_section = "consumers"
|
||||
elif line.startswith('# Examples:'):
|
||||
current_section = "examples"
|
||||
elif line.startswith('# Validation Rules:'):
|
||||
current_section = "validation_rules"
|
||||
elif line.startswith('# Related Types:'):
|
||||
current_section = "related_types"
|
||||
elif line and not line.startswith('#'):
|
||||
# Content for current section
|
||||
if current_section == "purpose":
|
||||
description["purpose"] += line + " "
|
||||
elif current_section in ["producers", "consumers", "required_fields",
|
||||
"validation_rules", "related_types"] and line.startswith('-'):
|
||||
description[current_section].append(line[1:].strip())
|
||||
elif current_section == "schema_properties" and line.startswith('-'):
|
||||
# Parse property definitions like: "- optimizations (array): List of optimizations"
|
||||
match = re.match(r'-\s+(\w+)\s+\((\w+)\):\s*(.+)', line)
|
||||
if match:
|
||||
prop_name, prop_type, prop_desc = match.groups()
|
||||
description["schema_properties"][prop_name] = {
|
||||
"type": prop_type,
|
||||
"description": prop_desc
|
||||
}
|
||||
|
||||
description["purpose"] = description["purpose"].strip()
|
||||
|
||||
# Infer content_type from format if not specified
|
||||
if not description["content_type"] and description["format"]:
|
||||
format_to_mime = {
|
||||
"JSON": "application/json",
|
||||
"YAML": "application/yaml",
|
||||
"Markdown": "text/markdown",
|
||||
"Python": "text/x-python",
|
||||
"TypeScript": "text/x-typescript",
|
||||
"Go": "text/x-go",
|
||||
"Text": "text/plain"
|
||||
}
|
||||
description["content_type"] = format_to_mime.get(description["format"], "")
|
||||
|
||||
return description
|
||||
|
||||
def check_existence(self, artifact_name: str) -> Tuple[bool, Optional[str]]:
|
||||
"""
|
||||
Check if artifact type already exists
|
||||
|
||||
Args:
|
||||
artifact_name: Name of artifact type to check
|
||||
|
||||
Returns:
|
||||
Tuple of (exists: bool, location: Optional[str])
|
||||
"""
|
||||
# Check in ARTIFACT_STANDARDS.md
|
||||
if self.standards_doc.exists():
|
||||
with open(self.standards_doc) as f:
|
||||
content = f.read()
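            # Heuristic substring match on the backticked or parenthesized name;
            # this can false-positive when a type name appears in another type's
            # prose, so a hit means "probably already defined" rather than proof.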
|
||||
if f"`{artifact_name}`" in content or f"({artifact_name})" in content:
|
||||
return True, str(self.standards_doc)
|
||||
|
||||
# Check in schemas directory
|
||||
schema_file = self.schemas_dir / f"{artifact_name}.json"
|
||||
if schema_file.exists():
|
||||
return True, str(schema_file)
|
||||
|
||||
# Check in KNOWN_ARTIFACT_TYPES
|
||||
if self.artifact_define.exists():
|
||||
with open(self.artifact_define) as f:
|
||||
content = f.read()
|
||||
if f'"{artifact_name}"' in content:
|
||||
return True, str(self.artifact_define)
|
||||
|
||||
return False, None
|
||||
|
||||
def generate_json_schema(
|
||||
self,
|
||||
artifact_desc: Dict[str, Any]
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Generate JSON Schema from artifact description
|
||||
|
||||
Args:
|
||||
artifact_desc: Parsed artifact description
|
||||
|
||||
Returns:
|
||||
JSON Schema dictionary
|
||||
"""
|
||||
schema = {
|
||||
"$schema": "http://json-schema.org/draft-07/schema#",
|
||||
"title": artifact_desc["name"].replace("-", " ").title(),
|
||||
"description": artifact_desc["purpose"],
|
||||
"type": "object"
|
||||
}
|
||||
|
||||
# Add required fields
|
||||
if artifact_desc.get("required_fields"):
|
||||
schema["required"] = artifact_desc["required_fields"]
|
||||
|
||||
# Add properties from schema_properties
|
||||
if artifact_desc.get("schema_properties"):
|
||||
schema["properties"] = {}
|
||||
for prop_name, prop_info in artifact_desc["schema_properties"].items():
|
||||
prop_schema = {}
|
||||
|
||||
# Map simple types to JSON Schema types
|
||||
type_mapping = {
|
||||
"string": "string",
|
||||
"number": "number",
|
||||
"integer": "integer",
|
||||
"boolean": "boolean",
|
||||
"array": "array",
|
||||
"object": "object"
|
||||
}
|
||||
|
||||
prop_type = prop_info.get("type", "string").lower()
|
||||
prop_schema["type"] = type_mapping.get(prop_type, "string")
|
||||
|
||||
if "description" in prop_info:
|
||||
prop_schema["description"] = prop_info["description"]
|
||||
|
||||
schema["properties"][prop_name] = prop_schema
|
||||
|
||||
# Add examples if provided
|
||||
if artifact_desc.get("examples"):
|
||||
schema["examples"] = artifact_desc["examples"]
|
||||
|
||||
return schema
|
||||
|
||||
def update_standards_doc(self, artifact_desc: Dict[str, Any]) -> None:
|
||||
"""
|
||||
Update ARTIFACT_STANDARDS.md with new artifact type
|
||||
|
||||
Args:
|
||||
artifact_desc: Parsed artifact description
|
||||
"""
|
||||
if not self.standards_doc.exists():
|
||||
raise FileNotFoundError(f"Standards document not found: {self.standards_doc}")
|
||||
|
||||
with open(self.standards_doc) as f:
|
||||
content = f.read()
|
||||
|
||||
# Find the "## Artifact Types" section
|
||||
artifact_types_match = re.search(r'## Artifact Types\n', content)
|
||||
if not artifact_types_match:
|
||||
raise ValueError("Could not find '## Artifact Types' section in standards doc")
|
||||
|
||||
# Find where to insert (before "## Artifact Metadata Schema" or at end)
|
||||
insert_before_match = re.search(r'\n## Artifact Metadata Schema\n', content)
|
||||
|
||||
# Generate new section
|
||||
artifact_name = artifact_desc["name"]
|
||||
section_number = self._get_next_artifact_number(content)
|
||||
|
||||
new_section = f"""
|
||||
### {section_number}. {artifact_name.replace('-', ' ').title()} (`{artifact_name}`)
|
||||
|
||||
**Description:** {artifact_desc["purpose"]}
|
||||
|
||||
**Convention:**
|
||||
- File pattern: `{artifact_desc.get("file_pattern", f"*.{artifact_name}.{artifact_desc['format'].lower()}")}`
|
||||
- Format: {artifact_desc["format"]}
|
||||
"""
|
||||
|
||||
if artifact_desc.get("content_type"):
|
||||
new_section += f"- Content type: {artifact_desc['content_type']}\n"
|
||||
|
||||
if artifact_desc.get("schema_properties"):
|
||||
new_section += f"\n**Schema:** `schemas/{artifact_name}.json`\n"
|
||||
|
||||
if artifact_desc.get("producers"):
|
||||
new_section += "\n**Produced by:**\n"
|
||||
for producer in artifact_desc["producers"]:
|
||||
new_section += f"- `{producer}`\n"
|
||||
|
||||
if artifact_desc.get("consumers"):
|
||||
new_section += "\n**Consumed by:**\n"
|
||||
for consumer in artifact_desc["consumers"]:
|
||||
new_section += f"- `{consumer}`\n"
|
||||
|
||||
if artifact_desc.get("related_types"):
|
||||
new_section += "\n**Related types:**\n"
|
||||
for related in artifact_desc["related_types"]:
|
||||
new_section += f"- `{related}`\n"
|
||||
|
||||
new_section += "\n---\n"
|
||||
|
||||
# Insert the new section
|
||||
if insert_before_match:
|
||||
insert_pos = insert_before_match.start()
|
||||
else:
|
||||
insert_pos = len(content)
|
||||
|
||||
updated_content = content[:insert_pos] + new_section + content[insert_pos:]
|
||||
|
||||
# Update Quick Reference table
|
||||
updated_content = self._update_quick_reference(updated_content, artifact_desc)
|
||||
|
||||
# Write back
|
||||
with open(self.standards_doc, 'w') as f:
|
||||
f.write(updated_content)
|
||||
|
||||
def _get_next_artifact_number(self, standards_content: str) -> int:
|
||||
"""Get the next artifact type number for documentation"""
|
||||
# Find all artifact type sections like "### 1. ", "### 2. ", etc.
|
||||
matches = re.findall(r'### (\d+)\. .+? \(`[\w-]+`\)', standards_content)
|
||||
if matches:
|
||||
return max(int(m) for m in matches) + 1
|
||||
return 1
|
||||
|
||||
def _update_quick_reference(
|
||||
self,
|
||||
content: str,
|
||||
artifact_desc: Dict[str, Any]
|
||||
) -> str:
|
||||
"""Update the Quick Reference table with new artifact type"""
|
||||
# Find the Quick Reference table
|
||||
table_match = re.search(
|
||||
r'\| Artifact Type \| File Pattern \| Schema \| Producers \| Consumers \|.*?\n\|.*?\n((?:\|.*?\n)*)',
|
||||
content,
|
||||
re.DOTALL
|
||||
)
|
||||
|
||||
if not table_match:
|
||||
return content
|
||||
|
||||
artifact_name = artifact_desc["name"]
|
||||
file_pattern = artifact_desc.get("file_pattern", f"*.{artifact_name}.{artifact_desc['format'].lower()}")
|
||||
schema = f"schemas/{artifact_name}.json" if artifact_desc.get("schema_properties") else "-"
|
||||
producers = ", ".join(artifact_desc.get("producers", [])) or "-"
|
||||
consumers = ", ".join(artifact_desc.get("consumers", [])) or "-"
|
||||
|
||||
new_row = f"| {artifact_name} | {file_pattern} | {schema} | {producers} | {consumers} |\n"
|
||||
|
||||
# Insert before the end of the table
|
||||
table_end = table_match.end()
|
||||
return content[:table_end] + new_row + content[table_end:]
|
||||
|
||||
def update_registry(self, artifact_desc: Dict[str, Any]) -> None:
|
||||
"""
|
||||
Update KNOWN_ARTIFACT_TYPES in artifact_define.py
|
||||
|
||||
Args:
|
||||
artifact_desc: Parsed artifact description
|
||||
"""
|
||||
if not self.artifact_define.exists():
|
||||
raise FileNotFoundError(f"Artifact registry not found: {self.artifact_define}")
|
||||
|
||||
with open(self.artifact_define) as f:
|
||||
content = f.read()
|
||||
|
||||
# Find KNOWN_ARTIFACT_TYPES dictionary
|
||||
match = re.search(r'KNOWN_ARTIFACT_TYPES = \{', content)
|
||||
if not match:
|
||||
raise ValueError("Could not find KNOWN_ARTIFACT_TYPES in artifact_define.py")
|
||||
|
||||
artifact_name = artifact_desc["name"]
|
||||
|
||||
# Generate new entry
|
||||
entry = f' "{artifact_name}": {{\n'
|
||||
|
||||
if artifact_desc.get("schema_properties"):
|
||||
entry += f' "schema": "schemas/{artifact_name}.json",\n'
|
||||
|
||||
file_pattern = artifact_desc.get("file_pattern")
|
||||
if file_pattern:
|
||||
entry += f' "file_pattern": "{file_pattern}",\n'
|
||||
|
||||
if artifact_desc.get("content_type"):
|
||||
entry += f' "content_type": "{artifact_desc["content_type"]}",\n'
|
||||
|
||||
entry += f' "description": "{artifact_desc["purpose"]}"\n'
|
||||
entry += ' },\n'
|
||||
|
||||
        # Find the closing brace of KNOWN_ARTIFACT_TYPES: the first top-level
        # "\n}" after the dictionary opens
        closing_brace = re.search(r'\n\}', content[match.end():])
        if not closing_brace:
            raise ValueError("Could not find closing brace of KNOWN_ARTIFACT_TYPES")
        insert_pos = match.end() + closing_brace.start()

        # Insert the new entry just before the closing brace
        updated_content = content[:insert_pos] + entry + content[insert_pos:]
|
||||
|
||||
# Write back
|
||||
with open(self.artifact_define, 'w') as f:
|
||||
f.write(updated_content)
|
||||
|
||||
def create_artifact_type(
|
||||
self,
|
||||
description_path: str,
|
||||
force: bool = False
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Create a new artifact type from description
|
||||
|
||||
Args:
|
||||
description_path: Path to artifact description file
|
||||
force: Force creation even if type exists
|
||||
|
||||
Returns:
|
||||
Summary of created files and changes
|
||||
"""
|
||||
# Parse description
|
||||
artifact_desc = self.parse_description(description_path)
|
||||
artifact_name = artifact_desc["name"]
|
||||
|
||||
# Validate name format (kebab-case)
|
||||
if not re.match(r'^[a-z0-9-]+$', artifact_name):
|
||||
raise ValueError(
|
||||
f"Artifact name must be kebab-case (lowercase with hyphens): {artifact_name}"
|
||||
)
|
||||
|
||||
# Check existence
|
||||
exists, location = self.check_existence(artifact_name)
|
||||
if exists and not force:
|
||||
raise ValueError(
|
||||
f"Artifact type '{artifact_name}' already exists at: {location}\n"
|
||||
f"Use --force to overwrite."
|
||||
)
|
||||
|
||||
result = {
|
||||
"artifact_name": artifact_name,
|
||||
"created_files": [],
|
||||
"updated_files": [],
|
||||
"errors": []
|
||||
}
|
||||
|
||||
# Generate and save JSON schema (if applicable)
|
||||
if artifact_desc.get("schema_properties") or artifact_desc["format"] in ["JSON", "YAML"]:
|
||||
schema = self.generate_json_schema(artifact_desc)
|
||||
schema_file = self.schemas_dir / f"{artifact_name}.json"
|
||||
|
||||
self.schemas_dir.mkdir(parents=True, exist_ok=True)
|
||||
with open(schema_file, 'w') as f:
|
||||
json.dump(schema, f, indent=2)
|
||||
|
||||
result["created_files"].append(str(schema_file))
|
||||
|
||||
# Update ARTIFACT_STANDARDS.md
|
||||
try:
|
||||
self.update_standards_doc(artifact_desc)
|
||||
result["updated_files"].append(str(self.standards_doc))
|
||||
except Exception as e:
|
||||
result["errors"].append(f"Failed to update standards doc: {e}")
|
||||
|
||||
# Update artifact registry
|
||||
try:
|
||||
self.update_registry(artifact_desc)
|
||||
result["updated_files"].append(str(self.artifact_define))
|
||||
except Exception as e:
|
||||
result["errors"].append(f"Failed to update registry: {e}")
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def main():
|
||||
"""CLI entry point"""
|
||||
import argparse
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description="meta.artifact - The Artifact Standards Authority"
|
||||
)
|
||||
|
||||
subparsers = parser.add_subparsers(dest='command', help='Commands')
|
||||
|
||||
# Create command
|
||||
create_parser = subparsers.add_parser('create', help='Create new artifact type')
|
||||
create_parser.add_argument(
|
||||
"description",
|
||||
help="Path to artifact type description file (.md or .json)"
|
||||
)
|
||||
create_parser.add_argument(
|
||||
"--force",
|
||||
action="store_true",
|
||||
help="Force creation even if type exists"
|
||||
)
|
||||
|
||||
# Check command
|
||||
check_parser = subparsers.add_parser('check', help='Check if artifact type exists')
|
||||
check_parser.add_argument("name", help="Artifact type name")
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if not args.command:
|
||||
parser.print_help()
|
||||
sys.exit(1)
|
||||
|
||||
authority = ArtifactAuthority()
|
||||
|
||||
if args.command == 'create':
|
||||
print(f"🏛️ meta.artifact - Creating artifact type from {args.description}")
|
||||
|
||||
try:
|
||||
result = authority.create_artifact_type(args.description, force=args.force)
|
||||
|
||||
print(f"\n✨ Artifact type '{result['artifact_name']}' created successfully!\n")
|
||||
|
||||
if result["created_files"]:
|
||||
print("📄 Created files:")
|
||||
for file in result["created_files"]:
|
||||
print(f" - {file}")
|
||||
|
||||
if result["updated_files"]:
|
||||
print("\n📝 Updated files:")
|
||||
for file in result["updated_files"]:
|
||||
print(f" - {file}")
|
||||
|
||||
if result["errors"]:
|
||||
print("\n⚠️ Warnings:")
|
||||
for error in result["errors"]:
|
||||
print(f" - {error}")
|
||||
|
||||
print(f"\n✅ Artifact type '{result['artifact_name']}' is now registered")
|
||||
print(" All agents and skills can now use this artifact type.")
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n❌ Error creating artifact type: {e}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
elif args.command == 'check':
|
||||
exists, location = authority.check_existence(args.name)
|
||||
|
||||
if exists:
|
||||
print(f"✅ Artifact type '{args.name}' exists")
|
||||
print(f" Location: {location}")
|
||||
else:
|
||||
print(f"❌ Artifact type '{args.name}' does not exist")
|
||||
print(f" Use 'meta.artifact create' to define it")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
457
agents/meta.command/README.md
Normal file
@@ -0,0 +1,457 @@
|
||||
# meta.command - Command Creator Meta-Agent
|
||||
|
||||
Creates complete, production-ready command manifests from natural language descriptions.
|
||||
|
||||
## Purpose
|
||||
|
||||
The `meta.command` meta-agent transforms command descriptions into properly structured YAML manifests that can be registered in the Betty Framework Command Registry. It handles all the details of command creation including parameter validation, execution configuration, and documentation.
|
||||
|
||||
## What It Does
|
||||
|
||||
- ✅ Parses natural language command descriptions (Markdown or JSON)
|
||||
- ✅ Generates complete command manifests in YAML format
|
||||
- ✅ Validates command structure and execution types
|
||||
- ✅ Supports all three execution types: agent, skill, workflow
|
||||
- ✅ Creates proper parameter definitions with type validation
|
||||
- ✅ Prepares commands for registration via `command.define` skill
|
||||
- ✅ Supports traceability tracking
|
||||
|
||||
## Usage
|
||||
|
||||
```bash
|
||||
python3 agents/meta.command/meta_command.py <description_file>
|
||||
```
|
||||
|
||||
### With Traceability
|
||||
|
||||
```bash
|
||||
python3 agents/meta.command/meta_command.py examples/api_validate_command.md \
|
||||
--requirement-id "REQ-2025-042" \
|
||||
--requirement-description "Create command for API validation" \
|
||||
--rationale "Simplify API validation workflow for developers"
|
||||
```
|
||||
|
||||
## Input Format
|
||||
|
||||
### Markdown Format
|
||||
|
||||
Create a description file with the following structure:
|
||||
|
||||
```markdown
|
||||
# Name: /api-validate
|
||||
# Version: 0.1.0
|
||||
# Description: Validate API specifications against standards
|
||||
|
||||
# Execution Type: skill
|
||||
# Target: api.validate
|
||||
|
||||
# Parameters:
|
||||
- spec_file: string (required) - Path to API specification file
|
||||
- format: enum (optional, default=openapi, values=[openapi,asyncapi,grpc]) - API specification format
|
||||
- strict: boolean (optional, default=true) - Enable strict validation mode
|
||||
|
||||
# Execution Context:
|
||||
- format: json
|
||||
- timeout: 300
|
||||
|
||||
# Status: active
|
||||
|
||||
# Tags: api, validation, quality
|
||||
```
|
||||
|
||||
### JSON Format
|
||||
|
||||
Alternatively, use JSON:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "/api-validate",
|
||||
"version": "0.1.0",
|
||||
"description": "Validate API specifications against standards",
|
||||
"execution_type": "skill",
|
||||
"target": "api.validate",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "spec_file",
|
||||
"type": "string",
|
||||
"required": true,
|
||||
"description": "Path to API specification file"
|
||||
},
|
||||
{
|
||||
"name": "format",
|
||||
"type": "enum",
|
||||
"values": ["openapi", "asyncapi", "grpc"],
|
||||
"default": "openapi",
|
||||
"description": "API specification format"
|
||||
}
|
||||
],
|
||||
"execution_context": {
|
||||
"format": "json",
|
||||
"timeout": 300
|
||||
},
|
||||
"status": "active",
|
||||
"tags": ["api", "validation", "quality"]
|
||||
}
|
||||
```
|
||||
|
||||
## Command Execution Types
|
||||
|
||||
### 1. Agent Execution
|
||||
|
||||
Use for complex, context-aware tasks requiring reasoning:
|
||||
|
||||
```markdown
|
||||
# Name: /api-design
|
||||
# Execution Type: agent
|
||||
# Target: api.architect
|
||||
# Description: Design a complete API architecture
|
||||
```
|
||||
|
||||
**When to use:**
|
||||
- Tasks requiring multi-step reasoning
|
||||
- Context-aware decision making
|
||||
- Complex analysis or design work
|
||||
|
||||
### 2. Skill Execution
|
||||
|
||||
Use for atomic, deterministic operations:
|
||||
|
||||
```markdown
|
||||
# Name: /api-validate
|
||||
# Execution Type: skill
|
||||
# Target: api.validate
|
||||
# Description: Validate API specifications
|
||||
```
|
||||
|
||||
**When to use:**
|
||||
- Direct, predictable operations
|
||||
- Fast, single-purpose tasks
|
||||
- Composable building blocks
|
||||
|
||||
### 3. Workflow Execution
|
||||
|
||||
Use for orchestrated multi-step processes:
|
||||
|
||||
```markdown
|
||||
# Name: /api-pipeline
|
||||
# Execution Type: workflow
|
||||
# Target: workflows/api-pipeline.yaml
|
||||
# Description: Execute full API development pipeline
|
||||
```
|
||||
|
||||
**When to use:**
|
||||
- Multi-agent/skill coordination
|
||||
- Sequential or parallel task execution
|
||||
- Complex business processes
|
||||
|
||||
## Parameter Types
|
||||
|
||||
### Supported Types
|
||||
|
||||
| Type | Description | Example |
|
||||
|------|-------------|---------|
|
||||
| `string` | Text values | `"api-spec.yaml"` |
|
||||
| `integer` | Whole numbers | `42` |
|
||||
| `boolean` | true/false | `true` |
|
||||
| `enum` | Fixed set of values | `["openapi", "asyncapi"]` |
|
||||
| `array` | Lists of values | `["tag1", "tag2"]` |
|
||||
| `object` | Structured data | `{"key": "value"}` |
|
||||
|
||||
### Parameter Options
|
||||
|
||||
- `required: true/false` - Whether parameter is mandatory
|
||||
- `default: value` - Default value if not provided
|
||||
- `values: [...]` - Allowed values (for enum type)
|
||||
- `description: "..."` - What the parameter does
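
None of the examples below exercise `array` or `object`, so here is a hypothetical manifest fragment using both (the parameter names `tags` and `overrides` are illustrative, not part of any existing command):

```yaml
parameters:
  - name: tags
    type: array
    required: false
    description: Labels to attach to the generated report
  - name: overrides
    type: object
    required: false
    description: Rule-level settings, keyed by rule ID
```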
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Simple Validation Command
|
||||
|
||||
**Input:** `examples/api-validate-cmd.md`
|
||||
|
||||
```markdown
|
||||
# Name: /api-validate
|
||||
# Description: Validate API specification files
|
||||
# Execution Type: skill
|
||||
# Target: api.validate
|
||||
|
||||
# Parameters:
|
||||
- spec_file: string (required) - Path to specification file
|
||||
- format: enum (optional, default=openapi, values=[openapi,asyncapi]) - Spec format
|
||||
|
||||
# Status: active
|
||||
# Tags: api, validation
|
||||
```
|
||||
|
||||
**Output:** `commands/api-validate.yaml`
|
||||
|
||||
```yaml
|
||||
name: /api-validate
|
||||
version: 0.1.0
|
||||
description: Validate API specification files
|
||||
parameters:
|
||||
- name: spec_file
|
||||
type: string
|
||||
required: true
|
||||
description: Path to specification file
|
||||
- name: format
|
||||
type: enum
|
||||
values:
|
||||
- openapi
|
||||
- asyncapi
|
||||
default: openapi
|
||||
description: Spec format
|
||||
execution:
|
||||
type: skill
|
||||
target: api.validate
|
||||
status: active
|
||||
tags:
|
||||
- api
|
||||
- validation
|
||||
```
|
||||
|
||||
### Example 2: Agent-Based Design Command
|
||||
|
||||
**Input:** `examples/api-design-cmd.md`
|
||||
|
||||
```markdown
|
||||
# Name: /api-design
|
||||
# Description: Design a complete API architecture
|
||||
# Execution Type: agent
|
||||
# Target: api.architect
|
||||
|
||||
# Parameters:
|
||||
- requirements: string (required) - Path to requirements document
|
||||
- style: enum (optional, default=rest, values=[rest,graphql,grpc]) - API style
|
||||
|
||||
# Execution Context:
|
||||
- reasoning_mode: iterative
|
||||
- max_iterations: 10
|
||||
|
||||
# Status: active
|
||||
# Tags: api, design, architecture
|
||||
```
|
||||
|
||||
**Output:** `commands/api-design.yaml`
|
||||
|
||||
```yaml
|
||||
name: /api-design
|
||||
version: 0.1.0
|
||||
description: Design a complete API architecture
|
||||
parameters:
|
||||
- name: requirements
|
||||
type: string
|
||||
required: true
|
||||
description: Path to requirements document
|
||||
- name: style
|
||||
type: enum
|
||||
values:
|
||||
- rest
|
||||
- graphql
|
||||
- grpc
|
||||
default: rest
|
||||
description: API style
|
||||
execution:
|
||||
type: agent
|
||||
target: api.architect
|
||||
context:
|
||||
reasoning_mode: iterative
|
||||
max_iterations: 10
|
||||
status: active
|
||||
tags:
|
||||
- api
|
||||
- design
|
||||
- architecture
|
||||
```
|
||||
|
||||
### Example 3: Workflow Command
|
||||
|
||||
**Input:** `examples/deploy-cmd.md`
|
||||
|
||||
```markdown
|
||||
# Name: /deploy
|
||||
# Description: Deploy application to specified environment
|
||||
# Execution Type: workflow
|
||||
# Target: workflows/deploy-pipeline.yaml
|
||||
|
||||
# Parameters:
|
||||
- environment: enum (required, values=[dev,staging,production]) - Target environment
|
||||
- version: string (required) - Version to deploy
|
||||
- skip_tests: boolean (optional, default=false) - Skip test execution
|
||||
|
||||
# Status: draft
|
||||
# Tags: deployment, devops
|
||||
```
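
Example 3 shows only the input; applying the same conversion rules as Examples 1 and 2, the generated manifest (`commands/deploy.yaml`) should come out roughly like this:

```yaml
name: /deploy
version: 0.1.0
description: Deploy application to specified environment
parameters:
  - name: environment
    type: enum
    values:
      - dev
      - staging
      - production
    required: true
    description: Target environment
  - name: version
    type: string
    required: true
    description: Version to deploy
  - name: skip_tests
    type: boolean
    default: false
    description: Skip test execution
execution:
  type: workflow
  target: workflows/deploy-pipeline.yaml
status: draft
tags:
  - deployment
  - devops
```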
|
||||
|
||||
## Output
|
||||
|
||||
The meta-agent creates:
|
||||
|
||||
1. **Command Manifest** - Complete YAML file in `commands/` directory
|
||||
2. **Console Output** - Summary of created command
|
||||
3. **Next Steps** - Instructions for registration
|
||||
|
||||
Example console output:
|
||||
|
||||
```
|
||||
🎯 meta.command - Creating command from examples/api-validate-cmd.md
|
||||
|
||||
✨ Command '/api-validate' created successfully!
|
||||
|
||||
📄 Created file:
|
||||
- commands/api-validate.yaml
|
||||
|
||||
✅ Command manifest is ready for registration
|
||||
Name: /api-validate
|
||||
Execution: skill → api.validate
|
||||
Status: active
|
||||
|
||||
📝 Next steps:
|
||||
1. Review the manifest: cat commands/api-validate.yaml
|
||||
2. Register command: python3 skills/command.define/command_define.py commands/api-validate.yaml
|
||||
3. Verify in registry: cat registry/commands.json
|
||||
```
|
||||
|
||||
## Integration with command.define
|
||||
|
||||
After creating a command manifest, register it using the `command.define` skill:
|
||||
|
||||
```bash
|
||||
# Register the command
|
||||
python3 skills/command.define/command_define.py commands/api-validate.yaml
|
||||
|
||||
# Verify registration
|
||||
cat registry/commands.json
|
||||
```
|
||||
|
||||
The `command.define` skill will:
|
||||
- Validate the manifest structure
|
||||
- Check that the execution target exists
|
||||
- Add the command to the Command Registry
|
||||
- Make the command available for use
|
||||
|
||||
## Artifact Flow
|
||||
|
||||
```
|
||||
┌──────────────────────────┐
|
||||
│ Command Description │
|
||||
│ (Markdown or JSON) │
|
||||
└──────────┬───────────────┘
|
||||
│ consumes
|
||||
▼
|
||||
┌──────────────┐
|
||||
│ meta.command │
|
||||
└──────┬───────┘
|
||||
│ produces
|
||||
▼
|
||||
┌──────────────────────────┐
|
||||
│ Command Manifest (YAML) │
|
||||
│ commands/*.yaml │
|
||||
└──────────┬───────────────┘
|
||||
│
|
||||
▼
|
||||
┌──────────────┐
|
||||
│command.define│
|
||||
│ (skill) │
|
||||
└──────┬───────┘
|
||||
│
|
||||
▼
|
||||
┌──────────────────────────┐
|
||||
│ Commands Registry │
|
||||
│ registry/commands.json │
|
||||
└──────────────────────────┘
|
||||
```
|
||||
|
||||
## Command Naming Conventions
|
||||
|
||||
- ✅ Must start with `/` (e.g., `/api-validate`)
|
||||
- ✅ Use kebab-case for multi-word commands (e.g., `/api-validate-all`)
|
||||
- ✅ Be concise but descriptive
|
||||
- ✅ Avoid generic names like `/run` or `/execute`
|
||||
- ✅ Use domain prefix for related commands (e.g., `/api-*`, `/db-*`)
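
These conventions are easy to enforce mechanically. A small sketch (hypothetical helper, not part of meta.command, which currently only auto-prefixes the leading `/`):

```python
import re

# Leading slash, then kebab-case segments: /api-validate, /db-migrate-all
COMMAND_NAME_RE = re.compile(r"^/[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_command_name(name: str) -> bool:
    return bool(COMMAND_NAME_RE.match(name))
```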
|
||||
|
||||
## Validation
|
||||
|
||||
The meta-agent validates:
|
||||
|
||||
- ✅ Required fields present (name, description, execution_type, target)
|
||||
- ✅ Valid execution type (agent, skill, workflow)
|
||||
- ✅ Command name starts with `/`
|
||||
- ✅ Parameter types are valid
|
||||
- ✅ Enum parameters have values defined
|
||||
- ✅ Version follows semantic versioning
|
||||
- ✅ Status is valid (draft, active, deprecated, archived)
|
||||
|
||||
## Error Handling
|
||||
|
||||
Common errors and solutions:
|
||||
|
||||
**Missing required fields:**
|
||||
```
|
||||
❌ Error: Missing required fields: execution_type, target
|
||||
```
|
||||
→ Add all required fields to your description
|
||||
|
||||
**Invalid execution type:**
|
||||
```
|
||||
❌ Error: Invalid execution type: service. Must be one of: agent, skill, workflow
|
||||
```
|
||||
→ Use only valid execution types
|
||||
|
||||
**Invalid parameter type:**
|
||||
```
|
||||
❌ Error: Invalid parameter type: float
|
||||
```
|
||||
→ Use supported parameter types
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Clear Descriptions** - Write concise, actionable command descriptions
|
||||
2. **Proper Parameters** - Define all parameters with types and validation
|
||||
3. **Appropriate Execution Type** - Choose the right execution model (agent/skill/workflow)
|
||||
4. **Meaningful Tags** - Add relevant tags for discoverability
|
||||
5. **Version Management** - Start with 0.1.0, increment appropriately
|
||||
6. **Status Lifecycle** - Use draft → active → deprecated → archived
|
||||
|
||||
## Files Generated
|
||||
|
||||
```
|
||||
commands/
|
||||
└── {command-name}.yaml # Command manifest
|
||||
```
|
||||
|
||||
## Integration with Meta-Agents
|
||||
|
||||
The `meta.command` agent works alongside:
|
||||
|
||||
- **meta.skill** - Create skills that commands can execute
|
||||
- **meta.agent** - Create agents that commands can delegate to
|
||||
- **meta.artifact** - Define artifact types for command I/O
|
||||
- **meta.compatibility** - Find compatible agents for command workflows
|
||||
|
||||
## Traceability
|
||||
|
||||
Track command creation with requirement metadata:
|
||||
|
||||
```bash
|
||||
python3 agents/meta.command/meta_command.py examples/api-validate-cmd.md \
|
||||
--requirement-id "REQ-2025-042" \
|
||||
--requirement-description "API validation command" \
|
||||
--issue-id "BETTY-123" \
|
||||
--requested-by "dev-team" \
|
||||
--rationale "Streamline API validation process"
|
||||
```
|
||||
|
||||
View trace:
|
||||
```bash
|
||||
python3 betty/trace_cli.py show command.api_validate
|
||||
```
|
||||
|
||||
## See Also
|
||||
|
||||
- **command.define skill** - Register command manifests
|
||||
- **meta.skill** - Create skills for command execution
|
||||
- **meta.agent** - Create agents for command delegation
|
||||
- **Command Registry** - `registry/commands.json`
|
||||
- **Command Infrastructure** - `docs/COMMAND_HOOK_INFRASTRUCTURE.md`
|
||||
211
agents/meta.command/agent.yaml
Normal file
@@ -0,0 +1,211 @@
|
||||
name: meta.command
|
||||
version: 0.1.0
|
||||
description: |
|
||||
Creates complete command manifests from natural language descriptions.
|
||||
|
||||
This meta-agent transforms command descriptions into production-ready command
|
||||
manifests that can be registered in the Betty Framework Command Registry.
|
||||
|
||||
Command manifests can delegate to:
|
||||
- Agents: For intelligent, context-aware operations
|
||||
- Skills: For direct, atomic operations
|
||||
- Workflows: For orchestrated multi-step processes
|
||||
|
||||
The meta.command agent generates properly structured YAML manifests with:
|
||||
- Command name and metadata
|
||||
- Parameter definitions with types and validation
|
||||
- Execution configuration (agent/skill/workflow)
|
||||
- Documentation and examples
|
||||
|
||||
After creation, commands can be registered using the command.define skill.
|
||||
|
||||
artifact_metadata:
|
||||
consumes:
|
||||
- type: command-description
|
||||
file_pattern: "**/command_description.md"
|
||||
content_type: "text/markdown"
|
||||
description: "Natural language description of command requirements"
|
||||
schema: "schemas/command-description.json"
|
||||
|
||||
produces:
|
||||
- type: command-manifest
|
||||
file_pattern: "commands/*.yaml"
|
||||
content_type: "application/yaml"
|
||||
description: "Complete command manifest ready for registration"
|
||||
schema: "schemas/command-manifest.json"
|
||||
|
||||
- type: command-documentation
|
||||
file_pattern: "commands/*/README.md"
|
||||
content_type: "text/markdown"
|
||||
description: "Command documentation with usage examples"
|
||||
|
||||
status: draft
|
||||
reasoning_mode: iterative
|
||||
capabilities:
|
||||
- Transform natural language specifications into validated command manifests
|
||||
- Recommend appropriate execution targets across agents, skills, and workflows
|
||||
- Produce documentation and registration-ready assets for new commands
|
||||
skills_available:
|
||||
- command.define # Register command in registry
|
||||
- artifact.define # Generate artifact metadata
|
||||
|
||||
permissions:
|
||||
- filesystem:read
|
||||
- filesystem:write
|
||||
|
||||
system_prompt: |
|
||||
You are meta.command, the command creator for Betty Framework.
|
||||
|
||||
Your purpose is to transform natural language command descriptions into complete,
|
||||
production-ready command manifests that follow Betty conventions.
|
||||
|
||||
## Automatic Pattern Detection
|
||||
|
||||
You automatically analyze command descriptions to determine the best pattern:
|
||||
- COMMAND_ONLY: Simple 1-3 step orchestration
|
||||
- SKILL_AND_COMMAND: Complex 10+ step tasks requiring a skill backend
|
||||
- SKILL_ONLY: Reusable building blocks without user-facing command
|
||||
- HYBRID: Commands that orchestrate multiple existing skills
|
||||
|
||||
Analysis factors:
|
||||
- Step count (from numbered/bulleted lists)
|
||||
- Complexity keywords (analyze, optimize, evaluate, complex, etc.)
|
||||
- Autonomy requirements (intelligent, adaptive, sophisticated, etc.)
|
||||
- Reusability indicators (composable, shared, library, etc.)
|
||||
|
||||
When you detect high complexity or autonomy needs, you recommend creating
the skill first, then the command wrapper.
|
||||
|
||||
## Your Workflow
|
||||
|
||||
1. **Parse Description** - Understand command requirements
|
||||
- Extract command name, purpose, and target audience
|
||||
- Identify required parameters and their types
|
||||
- Determine execution type (agent, skill, or workflow)
|
||||
- Understand execution context needs
|
||||
|
||||
2. **Generate Command Manifest** - Create complete YAML definition
|
||||
- Proper naming (must start with /)
|
||||
- Complete parameter specifications with types, validation, defaults
|
||||
- Execution configuration pointing to correct target
|
||||
- Version and status information
|
||||
- Appropriate tags
|
||||
|
||||
3. **Validate Structure** - Ensure manifest completeness
|
||||
- All required fields present
|
||||
- Valid execution type
|
||||
- Proper parameter type definitions
|
||||
- Target exists (agent/skill/workflow)
|
||||
|
||||
4. **Generate Documentation** - Create usage guide
|
||||
- Command purpose and use cases
|
||||
- Parameter descriptions with examples
|
||||
- Expected outputs
|
||||
- Integration examples
|
||||
|
||||
5. **Ready for Registration** - Prepare for command.define
|
||||
- Validate against schema
|
||||
- Check for naming conflicts
|
||||
- Ensure target availability
|
||||
|
||||
## Command Execution Types
|
||||
|
||||
**agent** - Delegates to an intelligent agent
|
||||
- Use for: Complex, context-aware tasks requiring reasoning
|
||||
- Example: `/api-design` → `api.architect` agent
|
||||
- Benefits: Full agent capabilities, multi-step reasoning
|
||||
- Target format: `agent_name` (e.g., "api.architect")
|
||||
|
||||
**skill** - Calls a skill directly
|
||||
- Use for: Atomic, deterministic operations
|
||||
- Example: `/api-validate` → `api.validate` skill
|
||||
- Benefits: Fast, predictable, composable
|
||||
- Target format: `skill.name` (e.g., "api.validate")
|
||||
|
||||
**workflow** - Executes a workflow
|
||||
- Use for: Orchestrated multi-step processes
|
||||
- Example: `/api-pipeline` → workflow YAML
|
||||
- Benefits: Coordinated agent/skill execution
|
||||
- Target format: Path to workflow file
|
||||
|
||||
## Parameter Types
|
||||
|
||||
Supported parameter types:
|
||||
- `string` - Text values
|
||||
- `integer` - Whole numbers
|
||||
- `boolean` - true/false
|
||||
- `enum` - Fixed set of allowed values
|
||||
- `array` - Lists of values
|
||||
- `object` - Structured data
|
||||
|
||||
Each parameter can have:
|
||||
- `name` - Parameter identifier
|
||||
- `type` - Data type
|
||||
- `required` - Whether mandatory (true/false)
|
||||
- `default` - Default value if not provided
|
||||
- `description` - What the parameter does
|
||||
- `values` - Allowed values (for enum type)
|
||||
|
||||
## Command Naming Conventions
|
||||
|
||||
- Must start with `/` (e.g., `/api-validate`)
|
||||
- Use kebab-case for multi-word commands
|
||||
- Should be concise but descriptive
|
||||
- Avoid generic names like `/run` or `/execute`
|
||||
|
||||
## Command Status
|
||||
|
||||
- `draft` - Under development, not ready for production
|
||||
- `active` - Production-ready and available
|
||||
- `deprecated` - Still works but discouraged
|
||||
- `archived` - No longer available
|
||||
|
||||
## Structure Example
|
||||
|
||||
```yaml
|
||||
name: /api-validate
|
||||
version: 0.1.0
|
||||
description: "Validate API specifications against standards"
|
||||
|
||||
parameters:
|
||||
- name: spec_file
|
||||
type: string
|
||||
required: true
|
||||
description: "Path to API specification file"
|
||||
|
||||
- name: format
|
||||
type: enum
|
||||
values: [openapi, asyncapi, grpc]
|
||||
default: openapi
|
||||
description: "API specification format"
|
||||
|
||||
execution:
|
||||
type: skill
|
||||
target: api.validate
|
||||
context:
|
||||
format: json
|
||||
|
||||
status: active
|
||||
|
||||
tags: [api, validation, quality]
|
||||
```
|
||||
|
||||
## Quality Standards
|
||||
|
||||
- ✅ Follows Betty command conventions
|
||||
- ✅ Proper parameter definitions with validation
|
||||
- ✅ Correct execution type and target
|
||||
- ✅ Clear, actionable descriptions
|
||||
- ✅ Appropriate status and tags
|
||||
- ✅ Ready for command.define registration
|
||||
|
||||
## Integration with command.define
|
||||
|
||||
After generating the command manifest, users should:
|
||||
1. Review the generated YAML file
|
||||
2. Test the command locally
|
||||
3. Register using: `python3 skills/command.define/command_define.py <manifest.yaml>`
|
||||
4. Verify registration in `registry/commands.json`
|
||||
|
||||
Remember: You're creating user-facing commands that make Betty's capabilities
|
||||
accessible. Make commands intuitive, well-documented, and easy to use.
|
||||
761
agents/meta.command/meta_command.py
Executable file
@@ -0,0 +1,761 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
meta.command - Command Creator Meta-Agent
|
||||
|
||||
Generates command manifests from natural language descriptions.
|
||||
|
||||
Usage:
|
||||
python3 agents/meta.command/meta_command.py <command_description_file>
|
||||
|
||||
Examples:
|
||||
python3 agents/meta.command/meta_command.py examples/api_validate_command.md
|
||||
python3 agents/meta.command/meta_command.py examples/deploy_command.json
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
import yaml
|
||||
import re
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Any, Optional
|
||||
|
||||
# Add parent directory to path
|
||||
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))
|
||||
|
||||
from betty.config import (
|
||||
BASE_DIR,
|
||||
COMMANDS_REGISTRY_FILE,
|
||||
)
|
||||
from betty.enums import CommandExecutionType, CommandStatus
|
||||
from betty.logging_utils import setup_logger
|
||||
from betty.traceability import get_tracer, RequirementInfo
|
||||
|
||||
logger = setup_logger(__name__)
|
||||
|
||||
# Import artifact validation from artifact.define skill
|
||||
try:
|
||||
import importlib.util
|
||||
artifact_define_path = Path(__file__).parent.parent.parent / "skills" / "artifact.define" / "artifact_define.py"
|
||||
spec = importlib.util.spec_from_file_location("artifact_define", artifact_define_path)
|
||||
artifact_define_module = importlib.util.module_from_spec(spec)
|
||||
spec.loader.exec_module(artifact_define_module)
|
||||
|
||||
validate_artifact_type = artifact_define_module.validate_artifact_type
|
||||
KNOWN_ARTIFACT_TYPES = artifact_define_module.KNOWN_ARTIFACT_TYPES
|
||||
ARTIFACT_VALIDATION_AVAILABLE = True
|
||||
except Exception as e:
    ARTIFACT_VALIDATION_AVAILABLE = False
    logger.warning(f"artifact.define not importable; artifact validation disabled: {e}")
|
||||
|
||||
|
||||
class CommandCreator:
|
||||
"""Creates command manifests from descriptions"""
|
||||
|
||||
VALID_EXECUTION_TYPES = ["agent", "skill", "workflow"]
|
||||
VALID_STATUSES = ["draft", "active", "deprecated", "archived"]
|
||||
VALID_PARAMETER_TYPES = ["string", "integer", "boolean", "enum", "array", "object"]
|
||||
|
||||
# Keywords for complexity analysis
|
||||
AUTONOMY_KEYWORDS = [
|
||||
"analyze", "optimize", "decide", "evaluate", "assess",
|
||||
"complex", "multi-step", "autonomous", "intelligent",
|
||||
"adaptive", "sophisticated", "advanced", "comprehensive"
|
||||
]
|
||||
|
||||
REUSABILITY_KEYWORDS = [
|
||||
"reusable", "composable", "building block", "library",
|
||||
"utility", "helper", "shared", "common", "core"
|
||||
]
|
||||
|
||||
def __init__(self, base_dir: str = BASE_DIR):
|
||||
"""Initialize command creator"""
|
||||
self.base_dir = Path(base_dir)
|
||||
self.commands_dir = self.base_dir / "commands"
|
||||
|
||||
    def parse_description(self, description_path: str) -> Dict[str, Any]:
        """
        Parse command description from Markdown or JSON file

        Args:
            description_path: Path to description file

        Returns:
            Dict with command configuration
        """
        path = Path(description_path)

        if not path.exists():
            raise FileNotFoundError(f"Description file not found: {description_path}")

        # Read file
        content = path.read_text()

        # Try JSON first
        if path.suffix == ".json":
            return json.loads(content)

        # Parse Markdown format
        cmd_desc = {}

        # Extract fields using regex patterns
        patterns = {
            "name": r"#\s*Name:\s*(.+)",
            "version": r"#\s*Version:\s*(.+)",
            "description": r"#\s*Description:\s*(.+)",
            "execution_type": r"#\s*Execution\s*Type:\s*(.+)",
            "target": r"#\s*Target:\s*(.+)",
            "status": r"#\s*Status:\s*(.+)",
        }

        for field, pattern in patterns.items():
            match = re.search(pattern, content, re.IGNORECASE)
            if match:
                value = match.group(1).strip()
                cmd_desc[field] = value

        # Parse parameters section
        params_section = re.search(
            r"#\s*Parameters:\s*\n(.*?)(?=\n#|\Z)",
            content,
            re.DOTALL | re.IGNORECASE
        )

        if params_section:
            cmd_desc["parameters"] = self._parse_parameters(params_section.group(1))

        # Parse tags
        tags_match = re.search(r"#\s*Tags:\s*(.+)", content, re.IGNORECASE)
        if tags_match:
            tags_str = tags_match.group(1).strip()
            # Parse comma-separated or bracket-enclosed tags
            if tags_str.startswith("[") and tags_str.endswith("]"):
                tags_str = tags_str[1:-1]
            cmd_desc["tags"] = [t.strip() for t in tags_str.split(",")]

        # Parse execution context
        context_section = re.search(
            r"#\s*Execution\s*Context:\s*\n(.*?)(?=\n#|\Z)",
            content,
            re.DOTALL | re.IGNORECASE
        )
        if context_section:
            cmd_desc["execution_context"] = self._parse_context(context_section.group(1))

        # Parse artifact metadata sections
        produces_section = re.search(
            r"#\s*Produces\s*Artifacts:\s*\n(.*?)(?=\n#|\Z)",
            content,
            re.DOTALL | re.IGNORECASE
        )
        if produces_section:
            cmd_desc["artifact_produces"] = self._parse_artifact_list(produces_section.group(1))

        consumes_section = re.search(
            r"#\s*Consumes\s*Artifacts:\s*\n(.*?)(?=\n#|\Z)",
            content,
            re.DOTALL | re.IGNORECASE
        )
        if consumes_section:
            cmd_desc["artifact_consumes"] = self._parse_artifact_list(consumes_section.group(1))

        # Validate required fields
        required = ["name", "description", "execution_type", "target"]
        missing = [f for f in required if f not in cmd_desc]
        if missing:
            raise ValueError(f"Missing required fields: {', '.join(missing)}")

        # Validate execution type
        if cmd_desc["execution_type"].lower() not in self.VALID_EXECUTION_TYPES:
            raise ValueError(
                f"Invalid execution type: {cmd_desc['execution_type']}. "
                f"Must be one of: {', '.join(self.VALID_EXECUTION_TYPES)}"
            )

        # Ensure command name starts with /
        if not cmd_desc["name"].startswith("/"):
            cmd_desc["name"] = "/" + cmd_desc["name"]

        # Set defaults
        if "version" not in cmd_desc:
            cmd_desc["version"] = "0.1.0"
        if "status" not in cmd_desc:
            cmd_desc["status"] = "draft"
        if "parameters" not in cmd_desc:
            cmd_desc["parameters"] = []

        return cmd_desc

    def _parse_parameters(self, params_text: str) -> List[Dict[str, Any]]:
        """Parse parameters from markdown text"""
        parameters = []

        # Match parameter blocks
        # Format: - name: type (required/optional) - description
        param_pattern = r"-\s+(\w+):\s+(\w+)(?:\s+\(([^)]+)\))?\s+-\s+(.+?)(?=\n-|\n#|\Z)"
        matches = re.finditer(param_pattern, params_text, re.DOTALL)

        for match in matches:
            name, param_type, modifiers, description = match.groups()

            param = {
                "name": name.strip(),
                "type": param_type.strip(),
                "description": description.strip()
            }

            # Parse modifiers (required, optional, default=value)
            if modifiers:
                modifiers = modifiers.lower()
                param["required"] = "required" in modifiers

                # Extract default value
                default_match = re.search(r"default[=:]\s*([^,\s]+)", modifiers)
                if default_match:
                    default_val = default_match.group(1)
                    # Convert types
                    if param_type == "integer":
                        default_val = int(default_val)
                    elif param_type == "boolean":
                        default_val = default_val.lower() in ("true", "yes", "1")
                    param["default"] = default_val

                # Extract enum values
                values_match = re.search(r"values[=:]\s*\[([^\]]+)\]", modifiers)
                if values_match:
                    param["values"] = [v.strip() for v in values_match.group(1).split(",")]

            parameters.append(param)

        return parameters

    def _parse_context(self, context_text: str) -> Dict[str, Any]:
        """Parse execution context from markdown text"""
        context = {}

        # Simple key: value parsing
        for line in context_text.split("\n"):
            line = line.strip()
            if not line or line.startswith("#"):
                continue

            match = re.match(r"-\s*(\w+):\s*(.+)", line)
            if match:
                key, value = match.groups()
                # Try to parse as JSON for complex values
                try:
                    context[key] = json.loads(value)
                except (json.JSONDecodeError, ValueError):
                    context[key] = value.strip()

        return context

    def _parse_artifact_list(self, artifact_text: str) -> List[str]:
        """Parse artifact list from markdown text"""
        artifacts = []

        for line in artifact_text.split("\n"):
            line = line.strip()
            if not line or line.startswith("#"):
                continue

            # Match lines starting with - or *
            match = re.match(r"[-*]\s*`?([a-z0-9-]+)`?", line)
            if match:
                artifacts.append(match.group(1))

        return artifacts

    def analyze_complexity(self, cmd_desc: Dict[str, Any], full_content: str = "") -> Dict[str, Any]:
        """
        Analyze command complexity and recommend pattern

        Args:
            cmd_desc: Parsed command description
            full_content: Full description file content for analysis

        Returns:
            Dict with complexity analysis and pattern recommendation
        """
        analysis = {
            "step_count": 0,
            "complexity": "low",
            "autonomy_level": "none",
            "reusability": "low",
            "recommended_pattern": "COMMAND_ONLY",
            "should_create_skill": False,
            "reasoning": []
        }

        # Count steps from description
        # Look for numbered lists, bullet points, or explicit step mentions
        step_patterns = [
            r"^\s*\d+\.\s+",   # Numbered lists
            r"^\s*[-*]\s+",    # Bullet points
            r"\bstep\s+\d+\b", # Explicit "step N"
        ]

        lines = full_content.split("\n")
        step_count = 0
        for line in lines:
            for pattern in step_patterns:
                if re.search(pattern, line, re.IGNORECASE):
                    step_count += 1
                    break

        analysis["step_count"] = step_count

        # Analyze content for keywords
        content_lower = full_content.lower()
        desc_lower = cmd_desc.get("description", "").lower()
        combined = content_lower + " " + desc_lower

        # Check autonomy keywords
        autonomy_matches = [kw for kw in self.AUTONOMY_KEYWORDS if kw in combined]
        if len(autonomy_matches) >= 3:
            analysis["autonomy_level"] = "high"
        elif len(autonomy_matches) >= 1:
            analysis["autonomy_level"] = "medium"
        else:
            analysis["autonomy_level"] = "low"

        # Check reusability keywords
        reusability_matches = [kw for kw in self.REUSABILITY_KEYWORDS if kw in combined]
        if len(reusability_matches) >= 2:
            analysis["reusability"] = "high"
        elif len(reusability_matches) >= 1:
            analysis["reusability"] = "medium"

        # Determine complexity
        if step_count >= 10:
            analysis["complexity"] = "high"
        elif step_count >= 4:
            analysis["complexity"] = "medium"
        else:
            analysis["complexity"] = "low"

        # Estimate lines of logic (rough heuristic)
        instruction_lines = sum(1 for line in lines if line.strip() and not line.strip().startswith("#"))
        if instruction_lines > 50:
            analysis["complexity"] = "high"

        # Decide pattern based on decision tree
        if step_count >= 10 or analysis["complexity"] == "high":
            analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
            analysis["should_create_skill"] = True
            analysis["reasoning"].append(f"High complexity: {step_count} steps detected")

        elif analysis["autonomy_level"] == "high":
            analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
            analysis["should_create_skill"] = True
            analysis["reasoning"].append(f"High autonomy: matched keywords {autonomy_matches[:3]}")

        elif analysis["reusability"] == "high":
            if step_count <= 3:
                analysis["recommended_pattern"] = "SKILL_ONLY"
                analysis["should_create_skill"] = True
                analysis["reasoning"].append("High reusability but low complexity: create skill only")
            else:
                analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
                analysis["should_create_skill"] = True
                analysis["reasoning"].append(f"High reusability with {step_count} steps: create both")

        elif 4 <= step_count <= 9:
            # Medium complexity - could go either way
            if analysis["autonomy_level"] == "medium":
                analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
                analysis["should_create_skill"] = True
                analysis["reasoning"].append(f"Medium complexity ({step_count} steps) with some autonomy needs")
            else:
                analysis["recommended_pattern"] = "COMMAND_ONLY"
                analysis["reasoning"].append(f"Medium complexity ({step_count} steps) but simple logic: inline is fine")

        else:
            # Low complexity - command only
            analysis["recommended_pattern"] = "COMMAND_ONLY"
            analysis["reasoning"].append(f"Low complexity ({step_count} steps): inline orchestration is sufficient")

        # Check if execution type already specifies skill
        if cmd_desc.get("execution_type") == "skill":
            analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
            analysis["should_create_skill"] = True
            analysis["reasoning"].append("Execution type explicitly set to 'skill'")

        return analysis

    def generate_command_manifest(self, cmd_desc: Dict[str, Any]) -> str:
        """
        Generate command manifest YAML

        Args:
            cmd_desc: Parsed command description

        Returns:
            YAML string
        """
        manifest = {
            "name": cmd_desc["name"],
            "version": cmd_desc["version"],
            "description": cmd_desc["description"]
        }

        # Add parameters if present
        if cmd_desc.get("parameters"):
            manifest["parameters"] = cmd_desc["parameters"]

        # Add execution configuration
        execution = {
            "type": cmd_desc["execution_type"],
            "target": cmd_desc["target"]
        }

        if cmd_desc.get("execution_context"):
            execution["context"] = cmd_desc["execution_context"]

        manifest["execution"] = execution

        # Add status
        manifest["status"] = cmd_desc.get("status", "draft")

        # Add tags if present
        if cmd_desc.get("tags"):
            manifest["tags"] = cmd_desc["tags"]

        # Add artifact metadata if present
        if cmd_desc.get("artifact_produces") or cmd_desc.get("artifact_consumes"):
            artifact_metadata = {}

            if cmd_desc.get("artifact_produces"):
                artifact_metadata["produces"] = [
                    {"type": art_type} for art_type in cmd_desc["artifact_produces"]
                ]

            if cmd_desc.get("artifact_consumes"):
                artifact_metadata["consumes"] = [
                    {"type": art_type, "required": True}
                    for art_type in cmd_desc["artifact_consumes"]
                ]

            manifest["artifact_metadata"] = artifact_metadata

        return yaml.dump(manifest, default_flow_style=False, sort_keys=False)

    def validate_artifacts(self, cmd_desc: Dict[str, Any]) -> List[str]:
        """
        Validate that artifact types exist in the known registry.

        Args:
            cmd_desc: Parsed command description

        Returns:
            List of warning messages
        """
        warnings = []

        if not ARTIFACT_VALIDATION_AVAILABLE:
            warnings.append(
                "Artifact validation skipped: artifact.define skill not available"
            )
            return warnings

        # Validate produced artifacts
        for artifact_type in cmd_desc.get("artifact_produces", []):
            is_valid, warning = validate_artifact_type(artifact_type)
            if not is_valid and warning:
                warnings.append(f"Produces: {warning}")

        # Validate consumed artifacts
        for artifact_type in cmd_desc.get("artifact_consumes", []):
            is_valid, warning = validate_artifact_type(artifact_type)
            if not is_valid and warning:
                warnings.append(f"Consumes: {warning}")

        return warnings

    def validate_target(self, cmd_desc: Dict[str, Any]) -> List[str]:
        """
        Validate that the target skill or agent exists.

        Args:
            cmd_desc: Parsed command description

        Returns:
            List of warning messages
        """
        warnings = []
        execution_type = cmd_desc.get("execution_type", "").lower()
        target = cmd_desc.get("target", "")

        if execution_type == "skill":
            # Check if skill exists in registry or skills directory
            skill_registry = self.base_dir / "registry" / "skills.json"
            skill_dir = self.base_dir / "skills" / target.replace(".", "/")

            skill_exists = False
            if skill_registry.exists():
                try:
                    with open(skill_registry) as f:
                        registry = json.load(f)
                    if target in registry.get("skills", {}):
                        skill_exists = True
                except Exception:
                    pass

            if not skill_exists and not skill_dir.exists():
                warnings.append(
                    f"Target skill '{target}' not found in registry or skills directory. "
                    f"You may need to create it using meta.skill first."
                )

        elif execution_type == "agent":
            # Check if agent exists in agents directory
            agent_dir = self.base_dir / "agents" / target
            if not agent_dir.exists():
                warnings.append(
                    f"Target agent '{target}' not found in agents directory. "
                    f"You may need to create it using meta.agent first."
                )

        return warnings

    def create_command(
        self,
        description_path: str,
        requirement: Optional[RequirementInfo] = None
    ) -> Dict[str, Any]:
        """
        Create command manifest from description file

        Args:
            description_path: Path to description file
            requirement: Optional requirement information for traceability

        Returns:
            Dict with creation results
        """
        try:
            print(f"🎯 meta.command - Creating command from {description_path}\n")

            # Read full content for analysis
            with open(description_path, 'r') as f:
                full_content = f.read()

            # Parse description
            cmd_desc = self.parse_description(description_path)

            # Validate artifacts
            artifact_warnings = self.validate_artifacts(cmd_desc)
            if artifact_warnings:
                print("\n⚠️  Artifact Validation Warnings:")
                for warning in artifact_warnings:
                    print(f"   {warning}")
                print()

            # Validate target skill/agent
            target_warnings = self.validate_target(cmd_desc)
            if target_warnings:
                print("\n⚠️  Target Validation Warnings:")
                for warning in target_warnings:
                    print(f"   {warning}")
                print()

            # Analyze complexity and recommend pattern
            analysis = self.analyze_complexity(cmd_desc, full_content)

            # Display analysis
            print("📊 Complexity Analysis:")
            print(f"   Steps detected: {analysis['step_count']}")
            print(f"   Complexity: {analysis['complexity']}")
            print(f"   Autonomy level: {analysis['autonomy_level']}")
            print(f"   Reusability: {analysis['reusability']}")
            print(f"\n💡 Recommended Pattern: {analysis['recommended_pattern']}")
            for reason in analysis['reasoning']:
                print(f"   • {reason}")
            print()

            # Generate manifest YAML
            manifest_yaml = self.generate_command_manifest(cmd_desc)

            # Ensure commands directory exists
            self.commands_dir.mkdir(parents=True, exist_ok=True)

            # Determine output filename
            # Remove leading / and replace spaces/special chars with hyphens
            filename = cmd_desc["name"].lstrip("/").replace(" ", "-").lower()
            filename = re.sub(r"[^a-z0-9-]", "", filename)
            manifest_file = self.commands_dir / f"{filename}.yaml"

            # Write manifest file
            manifest_file.write_text(manifest_yaml)

            print(f"✨ Command '{cmd_desc['name']}' created successfully!\n")
            print("📄 Created file:")
            print(f"   - {manifest_file}\n")
            print("✅ Command manifest is ready for registration")
            print(f"   Name: {cmd_desc['name']}")
            print(f"   Execution: {cmd_desc['execution_type']} → {cmd_desc['target']}")
            print(f"   Status: {cmd_desc.get('status', 'draft')}\n")

            # Display skill creation recommendation if needed
            if analysis['should_create_skill']:
                print("⚠️  RECOMMENDATION: Create the skill first!")
                print(f"   Pattern: {analysis['recommended_pattern']}")
                print(f"\n   This command delegates to a skill ({cmd_desc['target']}),")
                print("   but that skill may not exist yet.\n")
                print("   Suggested workflow:")
                print("   1. Create skill: python3 agents/meta.skill/meta_skill.py <skill-description.md>")
                print(f"      - Skill should implement: {cmd_desc['target']}")
                print("      - Include all complex logic from the command description")
                print(f"   2. Test skill: python3 skills/{cmd_desc['target'].replace('.', '/')}/{cmd_desc['target'].replace('.', '_')}.py")
                print(f"   3. Review this command manifest: cat {manifest_file}")
                print(f"   4. Register command: python3 skills/command.define/command_define.py {manifest_file}")
                print("   5. Verify in registry: cat registry/commands.json")
                print("\n   See docs/SKILL_COMMAND_DECISION_TREE.md for pattern details\n")
            else:
                print("📝 Next steps:")
                print(f"   1. Review the manifest: cat {manifest_file}")
                print(f"   2. Register command: python3 skills/command.define/command_define.py {manifest_file}")
                print("   3. Verify in registry: cat registry/commands.json")

            result = {
                "ok": True,
                "status": "success",
                "command_name": cmd_desc["name"],
                "manifest_file": str(manifest_file),
                "complexity_analysis": analysis,
                "artifact_warnings": artifact_warnings,
                "target_warnings": target_warnings
            }

            # Log traceability if requirement provided
            trace_id = None
            if requirement:
                try:
                    tracer = get_tracer()

                    # Create component ID from command name
                    component_id = f"command.{filename.replace('-', '_')}"

                    trace_id = tracer.log_creation(
                        component_id=component_id,
                        component_name=cmd_desc["name"],
                        component_type="command",
                        component_version=cmd_desc["version"],
                        component_file_path=str(manifest_file),
                        input_source_path=description_path,
                        created_by_tool="meta.command",
                        created_by_version="0.1.0",
                        requirement=requirement,
                        tags=["command", "auto-generated"] + cmd_desc.get("tags", []),
                        project="Betty Framework"
                    )

                    # Log validation check
                    validation_details = {
                        "checks_performed": [
                            {"name": "command_structure", "status": "passed"},
                            {"name": "execution_type_validation", "status": "passed",
                             "message": f"Valid execution type: {cmd_desc['execution_type']}"},
                            {"name": "name_validation", "status": "passed",
                             "message": f"Command name follows convention: {cmd_desc['name']}"}
                        ]
                    }

                    # Check parameters
                    if cmd_desc.get("parameters"):
                        validation_details["checks_performed"].append({
                            "name": "parameters_validation",
                            "status": "passed",
                            "message": f"Validated {len(cmd_desc['parameters'])} parameters"
                        })

                    tracer.log_verification(
                        component_id=component_id,
                        check_type="validation",
                        tool="meta.command",
                        result="passed",
                        details=validation_details
                    )

                    result["trace_id"] = trace_id
                    result["component_id"] = component_id

                except Exception as e:
                    print(f"⚠️  Warning: Could not log traceability: {e}")

            return result

        except Exception as e:
            print(f"❌ Error creating command: {e}")
            logger.error(f"Error creating command: {e}", exc_info=True)
            return {
                "ok": False,
                "status": "failed",
                "error": str(e)
            }


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="meta.command - Create command manifests from descriptions"
    )
    parser.add_argument(
        "description",
        help="Path to command description file (.md or .json)"
    )

    # Traceability arguments
    parser.add_argument(
        "--requirement-id",
        help="Requirement identifier (e.g., REQ-2025-001)"
    )
    parser.add_argument(
        "--requirement-description",
        help="What this command accomplishes"
    )
    parser.add_argument(
        "--requirement-source",
        help="Source document"
    )
    parser.add_argument(
        "--issue-id",
        help="Issue tracking ID (e.g., JIRA-123)"
    )
    parser.add_argument(
        "--requested-by",
        help="Who requested this"
    )
    parser.add_argument(
        "--rationale",
        help="Why this is needed"
    )

    args = parser.parse_args()

    # Create requirement info if provided
    requirement = None
    if args.requirement_id and args.requirement_description:
        requirement = RequirementInfo(
            id=args.requirement_id,
            description=args.requirement_description,
            source=args.requirement_source,
            issue_id=args.issue_id,
            requested_by=args.requested_by,
            rationale=args.rationale
        )

    creator = CommandCreator()
    result = creator.create_command(args.description, requirement=requirement)

    # Display traceability info if available
    if result.get("trace_id"):
        print(f"\n📝 Traceability: {result['trace_id']}")
        print(f"   View trace: python3 betty/trace_cli.py show {result['component_id']}")

    sys.exit(0 if result.get("ok") else 1)


if __name__ == "__main__":
    main()
469
agents/meta.compatibility/README.md
Normal file
@@ -0,0 +1,469 @@
# meta.compatibility - Agent Compatibility Analyzer

Analyzes agent compatibility and discovers multi-agent workflows based on artifact flows.

## Overview

**meta.compatibility** helps Claude discover which agents can work together by analyzing what artifacts they produce and consume. It enables intelligent multi-agent orchestration by suggesting compatible combinations and detecting pipeline gaps.

**What it does:**
- Scans all agents and extracts artifact metadata
- Builds compatibility maps (who produces/consumes what)
- Finds compatible agents based on artifact flows
- Suggests multi-agent pipelines for goals
- Generates complete compatibility graphs
- Detects gaps (consumed but not produced artifacts)

## Quick Start

### Find Compatible Agents

```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent
```

Output:
```
Agent: meta.agent
Produces: agent-definition, agent-documentation
Consumes: agent-description

✅ Can feed outputs to (1 agents):
   • meta.compatibility (via agent-definition)

⚠️ Gaps (1):
   • agent-description: No agents produce 'agent-description' (required by meta.agent)
```

### Suggest Pipeline

```bash
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create and analyze an agent"
```

Output:
```
📋 Pipeline 1: meta.agent Pipeline
   Pipeline starting with meta.agent
   Steps:
     1. meta.agent - Meta-agent that creates other agents...
     2. meta.compatibility - Analyzes agent and skill compatibility...
```

### Analyze Agent

```bash
python3 agents/meta.compatibility/meta_compatibility.py analyze meta.agent
```

### List All Compatibility

```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all
```

Output:
```
Total Agents: 7
Total Artifact Types: 16
Total Relationships: 3

⚠️ Global Gaps (5):
   • agent-description: Consumed by 1 agents but no producers
   ...
```

## Commands

### find-compatible

Find agents compatible with a specific agent.

```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible AGENT_NAME [--format json|yaml|text]
```

**Shows:**
- What the agent produces
- What the agent consumes
- Agents that can consume its outputs
- Agents that can provide its inputs
- Gaps (missing producers)

### suggest-pipeline

Suggest a multi-agent pipeline for a goal.

```bash
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "GOAL" [--artifacts TYPE1 TYPE2...] [--format json|yaml|text]
```

**Examples:**
```bash
# Suggest pipeline for goal
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Design and validate APIs"

# With required artifacts
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Process data" --artifacts openapi-spec validation-report
```

**Shows:**
- Suggested pipelines (ranked)
- Steps in each pipeline
- Artifact flows between agents
- Whether the pipeline is complete (no gaps)

### analyze

Complete compatibility analysis for one agent.

```bash
python3 agents/meta.compatibility/meta_compatibility.py analyze AGENT_NAME [--format json|yaml|text]
```

**Shows:**
- Full compatibility report
- Compatible agents (upstream/downstream)
- Suggested workflows
- Gaps and warnings

### list-all

Generate the complete compatibility graph for all agents.

```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all [--format json|yaml|text]
```

**Shows:**
- All agents in the system
- All relationships
- All artifact types
- Global gaps
- Statistics

## Output Formats

### Text (default)

Human-readable output with emojis and formatting.

### JSON

Machine-readable JSON for programmatic use.

```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent --format json > meta_agent_compatibility.json
```

### YAML

YAML format for configuration or documentation.

```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all --format yaml > compatibility_graph.yaml
```

## How It Works

### 1. Agent Scanning

Scans the `agents/` directory for all `agent.yaml` files:

```python
for agent_dir in agents_dir.iterdir():
    agent_yaml = agent_dir / "agent.yaml"
    if agent_yaml.exists():
        agent_def = yaml.safe_load(agent_yaml.read_text())  # Load and parse agent definition
```

### 2. Artifact Extraction

Extracts `artifact_metadata` from each agent:

```yaml
artifact_metadata:
  produces:
    - type: openapi-spec
  consumes:
    - type: api-requirements
```

### 3. Compatibility Mapping

Builds a map of artifact types to producers/consumers:

```
openapi-spec:
  producers: [api.define, api.architect]
  consumers: [api.validate, api.code-generator]
```

### 4. Relationship Discovery

For each agent:
- Find agents that can consume its outputs
- Find agents that can provide its inputs
- Detect gaps (missing producers)

### 5. Pipeline Suggestion

Uses keyword matching and artifact analysis (see the sketch after this list):
- Match goal keywords to agent names/descriptions
- Build pipeline from artifact flows
- Rank by completeness and length
- Return top suggestions
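
A minimal sketch of steps 4-5 combined, assuming a prebuilt compatibility map (the map contents here are illustrative; the full logic lives in `find_compatible()` and `suggest_pipeline()` in `meta_compatibility.py`):

```python
# Toy compatibility map: artifact type -> who produces/consumes it
compat = {
    "openapi-spec": {"producers": ["api.architect"], "consumers": ["api.validator"]},
}

def downstream(agent, produces):
    """Agents that can consume any artifact this agent produces (step 4)."""
    return [
        (consumer, artifact)
        for artifact in produces
        for consumer in compat.get(artifact, {}).get("consumers", [])
        if consumer != agent
    ]

# Step 5 chains these matches into candidate pipelines and ranks them
print(downstream("api.architect", ["openapi-spec"]))
# -> [('api.validator', 'openapi-spec')]
```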
## Integration

### With meta.agent

After creating an agent, analyze its compatibility:

```bash
# Create agent
python3 agents/meta.agent/meta_agent.py description.md

# Analyze compatibility
python3 agents/meta.compatibility/meta_compatibility.py analyze new-agent

# Find who can work with it
python3 agents/meta.compatibility/meta_compatibility.py find-compatible new-agent
```

### With meta.suggest

meta.suggest uses meta.compatibility to make recommendations:

```bash
python3 agents/meta.suggest/meta_suggest.py --context meta.agent
```

Internally calls meta.compatibility to find next steps.

## Common Workflows

### Workflow 1: Understand Agent Ecosystem

```bash
# See all compatibility
python3 agents/meta.compatibility/meta_compatibility.py list-all

# Analyze each agent
for agent in meta.agent meta.artifact meta.compatibility meta.suggest; do
  echo "=== $agent ==="
  python3 agents/meta.compatibility/meta_compatibility.py analyze $agent
done
```

### Workflow 2: Build Multi-Agent Pipeline

```bash
# Suggest pipeline
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create and test an agent"

# Get JSON for workflow automation
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "My goal" --format json > pipeline.json
```

### Workflow 3: Find Gaps

```bash
# Find global gaps
python3 agents/meta.compatibility/meta_compatibility.py list-all | grep "Gaps:"

# Analyze specific agent gaps
python3 agents/meta.compatibility/meta_compatibility.py find-compatible api.architect
```

## Artifact Types

### Consumes

- **agent-definition** - Agent configurations
  - Pattern: `agents/*/agent.yaml`

- **registry-data** - Skills and agents registry
  - Pattern: `registry/*.json`

### Produces

- **compatibility-graph** - Agent relationship maps
  - Pattern: `*.compatibility.json`
  - Schema: `schemas/compatibility-graph.json`

- **pipeline-suggestion** - Multi-agent workflows
  - Pattern: `*.pipeline.json`
  - Schema: `schemas/pipeline-suggestion.json`

## Understanding Output

### Can Feed To

Agents that can consume this agent's outputs.

```
✅ Can feed outputs to (2 agents):
   • api.validator (via openapi-spec)
   • api.code-generator (via openapi-spec)
```

Means:
- api.architect produces openapi-spec
- Both api.validator and api.code-generator consume openapi-spec
- You can run: api.architect → api.validator
- Or: api.architect → api.code-generator

### Can Receive From

Agents that can provide this agent's inputs.

```
⬅️ Can receive inputs from (1 agents):
   • api.requirements-analyzer (via api-requirements)
```

Means:
- api.architect needs api-requirements
- api.requirements-analyzer produces api-requirements
- You can run: api.requirements-analyzer → api.architect

### Gaps

Missing artifacts in the ecosystem.

```
⚠️ Gaps (1):
   • agent-description: No agents produce 'agent-description'
```

Means:
- meta.agent needs agent-description input
- No agent produces it (it's user-provided)
- This is expected for user inputs

### Complete vs Incomplete Pipelines

**Complete Pipeline:**
```
Complete: ✅ Yes
```
All consumed artifacts are produced by pipeline steps.

**Incomplete Pipeline:**
```
Complete: ❌ No
Gaps: agent-description, registry-data
```
Some consumed artifacts aren't produced. Requires user input or additional agents.
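
The completeness flag is a simple set difference over the pipeline's steps; this mirrors the check in `_build_pipeline_from_agent()` (the artifact names below are illustrative):

```python
# Union everything the steps produce and consume, then diff
all_produces = {"agent-definition", "compatibility-graph"}
all_consumes = {"agent-definition", "agent-description"}

gaps = all_consumes - all_produces
print(gaps)  # {'agent-description'} -> pipeline is incomplete
```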
## Tips & Best Practices

### Finding Compatible Agents

Use specific artifact types:
```bash
# Instead of a generic goal
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Process stuff"

# Use specific artifacts
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Validate API" --artifacts openapi-spec
```

### Understanding Gaps

Not all gaps are problems:
- **User inputs** (agent-description, api-requirements) - Expected
- **Missing producers** for internal artifacts - Need new agents/skills

### Building Pipelines

Start with compatibility analysis (see the sketch after this list):
1. Understand what each agent needs/produces
2. Find compatible combinations
3. Build pipeline step-by-step
4. Validate no gaps exist (or gaps are user inputs)
|
||||
|
||||
### Agent not found
|
||||
|
||||
```
|
||||
Error: Agent 'my-agent' not found
|
||||
```
|
||||
|
||||
**Solutions:**
|
||||
- Check agent exists in `agents/` directory
|
||||
- Ensure `agent.yaml` exists
|
||||
- Verify agent name in agent.yaml matches
|
||||
|
||||
### No compatible agents found
|
||||
|
||||
```
|
||||
Can feed outputs to (0 agents)
|
||||
Can receive inputs from (0 agents)
|
||||
```
|
||||
|
||||
**Causes:**
|
||||
- Agent is isolated (no shared artifact types)
|
||||
- Agent uses custom artifact types
|
||||
- No other agents exist yet
|
||||
|
||||
**Solutions:**
|
||||
- Create agents with compatible artifact types
|
||||
- Use standard artifact types
|
||||
- Check artifact_metadata is properly defined
|
||||
|
||||
### Empty pipeline suggestions
|
||||
|
||||
```
|
||||
Error: Could not determine relevant agents for goal
|
||||
```
|
||||
|
||||
**Solutions:**
|
||||
- Be more specific in goal description
|
||||
- Mention artifact types explicitly
|
||||
- Use `--artifacts` flag
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
meta.compatibility
|
||||
├─ Scans: agents/ directory
|
||||
├─ Analyzes: artifact_metadata
|
||||
├─ Builds: compatibility maps
|
||||
├─ Produces: compatibility graphs
|
||||
└─ Used by: meta.suggest, Claude
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
See test runs:
|
||||
```bash
|
||||
# Example 1: Find compatible agents
|
||||
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent
|
||||
|
||||
# Example 2: Suggest pipeline
|
||||
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create agent and check compatibility"
|
||||
|
||||
# Example 3: Full analysis
|
||||
python3 agents/meta.compatibility/meta_compatibility.py analyze api.architect
|
||||
|
||||
# Example 4: Export to JSON
|
||||
python3 agents/meta.compatibility/meta_compatibility.py list-all --format json > graph.json
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
|
||||
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
|
||||
- [compatibility-graph schema](../../schemas/compatibility-graph.json)
|
||||
- [pipeline-suggestion schema](../../schemas/pipeline-suggestion.json)
|
||||
|
||||
## How Claude Uses This
|
||||
|
||||
Claude can:
|
||||
1. **Discover capabilities** - "What agents can work with openapi-spec?"
|
||||
2. **Build workflows** - "How do I design and validate an API?"
|
||||
3. **Make decisions** - "What should I run next?"
|
||||
4. **Detect gaps** - "What's missing from the ecosystem?"
|
||||
|
||||
meta.compatibility enables autonomous multi-agent orchestration!
|
||||
130
agents/meta.compatibility/agent.yaml
Normal file
@@ -0,0 +1,130 @@
name: meta.compatibility
version: 0.1.0
description: |
  Analyzes agent and skill compatibility to discover multi-agent workflows.

  This meta-agent helps Claude discover which agents can work together by
  analyzing artifact flows - what agents produce and what others consume.

  Enables intelligent orchestration by suggesting compatible agent combinations
  and detecting potential pipeline gaps.

artifact_metadata:
  consumes:
    - type: agent-definition
      file_pattern: "agents/*/agent.yaml"
      description: "Agent definitions to analyze for compatibility"

    - type: registry-data
      file_pattern: "registry/*.json"
      description: "Skills and agents registry"

  produces:
    - type: compatibility-graph
      file_pattern: "*.compatibility.json"
      content_type: "application/json"
      schema: "schemas/compatibility-graph.json"
      description: "Agent relationship graph showing artifact flows"

    - type: pipeline-suggestion
      file_pattern: "*.pipeline.json"
      content_type: "application/json"
      schema: "schemas/pipeline-suggestion.json"
      description: "Suggested multi-agent workflows"

status: draft
reasoning_mode: iterative
capabilities:
  - Build compatibility graphs that connect agent inputs and outputs
  - Recommend orchestrated workflows that minimize gaps and conflicts
  - Surface registry insights to guide creation of missing capabilities
skills_available:
  - agent.compose     # Analyze artifact flows
  - artifact.define   # Understand artifact types

permissions:
  - filesystem:read

system_prompt: |
  You are meta.compatibility, the agent compatibility analyzer.

  Your purpose is to help Claude discover which agents work together by
  analyzing what artifacts they produce and consume.

  ## Your Responsibilities

  1. **Analyze Compatibility**
     - Scan all agent definitions
     - Extract artifact metadata (produces/consumes)
     - Find matching artifact types
     - Identify compatible agent pairs

  2. **Suggest Pipelines**
     - Recommend multi-agent workflows
     - Ensure artifact flow is complete (no gaps)
     - Prioritize common use cases
     - Provide clear rationale

  3. **Detect Gaps**
     - Find consumed artifacts that aren't produced
     - Identify missing agents in pipelines
     - Suggest what needs to be created

  4. **Generate Compatibility Graphs**
     - Visual representation of agent relationships
     - Show artifact flows between agents
     - Highlight compatible combinations

  ## Commands You Support

  **Find Compatible Agents:**
  ```bash
  /meta/compatibility find-compatible api.architect
  ```
  Returns agents that can consume api.architect's outputs.

  **Suggest Pipeline:**
  ```bash
  /meta/compatibility suggest-pipeline "Design and implement an API"
  ```
  Returns a multi-agent workflow for the task.

  **Analyze Agent:**
  ```bash
  /meta/compatibility analyze api.architect
  ```
  Returns a full compatibility analysis for one agent.

  **List All Compatibility:**
  ```bash
  /meta/compatibility list-all
  ```
  Returns the complete compatibility graph for all agents.

  ## Analysis Criteria

  Two agents are compatible if:
  - Agent A produces artifact type X
  - Agent B consumes artifact type X
  - The artifact schemas are compatible

  ## Pipeline Suggestion Criteria

  A good pipeline:
  - Has no gaps (all consumed artifacts are produced)
  - Follows a logical workflow order
  - Matches the user's stated goal
  - Uses minimal agents (efficiency)
  - Includes validation steps when appropriate

  ## Output Format

  Always provide:
  - **Compatible agents**: List with rationale
  - **Artifact flows**: What flows between agents
  - **Suggested pipelines**: Step-by-step workflows
  - **Gaps**: Any missing artifacts or agents
  - **Confidence**: How confident you are in the suggestions

  Remember: You enable intelligent orchestration by making compatibility
  discoverable. Help Claude make smart choices about which agents to use together.
698
agents/meta.compatibility/meta_compatibility.py
Executable file
@@ -0,0 +1,698 @@
#!/usr/bin/env python3
"""
meta.compatibility - Agent Compatibility Analyzer

Analyzes agent and skill compatibility to discover multi-agent workflows.
Helps Claude orchestrate by showing which agents can work together.
"""

import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional, Set, Tuple
from collections import defaultdict

# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)

from betty.provenance import compute_hash, get_provenance_logger
from betty.config import REGISTRY_FILE, REGISTRY_DIR

class CompatibilityAnalyzer:
    """Analyzes agent compatibility based on artifact flows"""

    def __init__(self, base_dir: str = "."):
        """Initialize with base directory"""
        self.base_dir = Path(base_dir)
        self.agents_dir = self.base_dir / "agents"
        self.agents = {}  # name -> agent definition
        self.compatibility_map = {}  # artifact_type -> {producers: [], consumers: []}

    def scan_agents(self) -> Dict[str, Any]:
        """
        Scan agents directory and load all agent definitions

        Returns:
            Dictionary of agent_name -> agent_definition
        """
        self.agents = {}

        if not self.agents_dir.exists():
            return self.agents

        for agent_dir in self.agents_dir.iterdir():
            if agent_dir.is_dir():
                agent_yaml = agent_dir / "agent.yaml"
                if agent_yaml.exists():
                    with open(agent_yaml) as f:
                        agent_def = yaml.safe_load(f)
                        if agent_def and "name" in agent_def:
                            self.agents[agent_def["name"]] = agent_def

        return self.agents

    def extract_artifacts(self, agent_def: Dict[str, Any]) -> Tuple[Set[str], Set[str]]:
        """
        Extract artifact types from agent definition

        Args:
            agent_def: Agent definition dictionary

        Returns:
            Tuple of (produces_set, consumes_set)
        """
        produces = set()
        consumes = set()

        artifact_metadata = agent_def.get("artifact_metadata", {})

        # Extract produced artifacts
        for artifact in artifact_metadata.get("produces", []):
            if isinstance(artifact, dict) and "type" in artifact:
                produces.add(artifact["type"])
            elif isinstance(artifact, str):
                produces.add(artifact)

        # Extract consumed artifacts
        for artifact in artifact_metadata.get("consumes", []):
            if isinstance(artifact, dict) and "type" in artifact:
                consumes.add(artifact["type"])
            elif isinstance(artifact, str):
                consumes.add(artifact)

        return produces, consumes

    def build_compatibility_map(self) -> Dict[str, Dict[str, List[str]]]:
        """
        Build map of artifact types to producers/consumers

        Returns:
            Dictionary mapping artifact_type -> {producers: [], consumers: []}
        """
        self.compatibility_map = defaultdict(lambda: {"producers": [], "consumers": []})

        for agent_name, agent_def in self.agents.items():
            produces, consumes = self.extract_artifacts(agent_def)

            for artifact_type in produces:
                self.compatibility_map[artifact_type]["producers"].append(agent_name)

            for artifact_type in consumes:
                self.compatibility_map[artifact_type]["consumers"].append(agent_name)

        return dict(self.compatibility_map)

    def find_compatible(self, agent_name: str) -> Dict[str, Any]:
        """
        Find agents compatible with the specified agent

        Args:
            agent_name: Name of agent to analyze

        Returns:
            Dictionary with compatible agents and rationale
        """
        if agent_name not in self.agents:
            return {
                "error": f"Agent '{agent_name}' not found",
                "available_agents": list(self.agents.keys())
            }

        agent_def = self.agents[agent_name]
        produces, consumes = self.extract_artifacts(agent_def)

        result = {
            "agent": agent_name,
            "produces": list(produces),
            "consumes": list(consumes),
            "can_feed_to": [],  # Agents that can consume this agent's outputs
            "can_receive_from": [],  # Agents that can provide this agent's inputs
            "gaps": []  # Missing artifacts
        }

        # Find agents that can consume this agent's outputs
        for artifact_type in produces:
            consumers = self.compatibility_map.get(artifact_type, {}).get("consumers", [])
            for consumer in consumers:
                if consumer != agent_name:
                    result["can_feed_to"].append({
                        "agent": consumer,
                        "artifact": artifact_type,
                        "rationale": f"{agent_name} produces '{artifact_type}' which {consumer} consumes"
                    })

        # Find agents that can provide this agent's inputs
        for artifact_type in consumes:
            producers = self.compatibility_map.get(artifact_type, {}).get("producers", [])
            if not producers:
                result["gaps"].append({
                    "artifact": artifact_type,
                    "issue": f"No agents produce '{artifact_type}' (required by {agent_name})",
                    "severity": "high"
                })
            else:
                for producer in producers:
                    if producer != agent_name:
                        result["can_receive_from"].append({
                            "agent": producer,
                            "artifact": artifact_type,
                            "rationale": f"{producer} produces '{artifact_type}' which {agent_name} needs"
                        })

        return result

    def suggest_pipeline(self, goal: str, required_artifacts: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Suggest multi-agent pipeline for a goal

        Args:
            goal: Natural language description of what to accomplish
            required_artifacts: Optional list of artifact types needed

        Returns:
            Suggested pipeline with steps and rationale
        """
        # Simple keyword matching for now (can be enhanced with ML later)
        goal_lower = goal.lower()

        keywords_to_agents = {
            "api": ["api.architect", "meta.agent"],
            "design api": ["api.architect"],
            "validate": ["api.architect"],
            "create agent": ["meta.agent"],
            "agent": ["meta.agent"],
            "artifact": ["meta.artifact"],
            "optimize": [],  # No optimizer yet, but we have the artifact type
        }

        # Find relevant agents
        relevant_agents = set()
        for keyword, agents in keywords_to_agents.items():
            if keyword in goal_lower:
                relevant_agents.update([a for a in agents if a in self.agents])

        if not relevant_agents and required_artifacts:
            # Find agents that produce the required artifacts
            for artifact_type in required_artifacts:
                producers = self.compatibility_map.get(artifact_type, {}).get("producers", [])
                relevant_agents.update(producers)

        if not relevant_agents:
            return {
                "error": "Could not determine relevant agents for goal",
                "suggestion": "Try being more specific or mention required artifact types",
                "goal": goal
            }

        # Build pipeline by analyzing artifact flows
        pipelines = []

        for start_agent in relevant_agents:
            pipeline = self._build_pipeline_from_agent(start_agent, goal)
            if pipeline:
                pipelines.append(pipeline)

        # Rank pipelines: more steps covered first, then higher confidence
        pipelines.sort(key=lambda p: (
            -len(p.get("steps", [])),      # Prefer pipelines that cover more steps
            -p.get("confidence_score", 0)  # Higher confidence first
        ))

        if not pipelines:
            return {
                "error": "Could not build complete pipeline",
                "relevant_agents": list(relevant_agents),
                "goal": goal
            }

        return {
            "goal": goal,
            "pipelines": pipelines[:3],  # Top 3 suggestions
            "confidence": "medium" if len(pipelines) > 1 else "low"
        }

    def _build_pipeline_from_agent(self, start_agent: str, goal: str) -> Optional[Dict[str, Any]]:
        """
        Build a pipeline starting from a specific agent

        Args:
            start_agent: Agent to start pipeline from
            goal: Goal description

        Returns:
            Pipeline dictionary or None
        """
        if start_agent not in self.agents:
            return None

        agent_def = self.agents[start_agent]
        produces, consumes = self.extract_artifacts(agent_def)

        pipeline = {
            "name": f"{start_agent.title()} Pipeline",
            "description": f"Pipeline starting with {start_agent}",
            "steps": [
                {
                    "step": 1,
                    "agent": start_agent,
                    "description": agent_def.get("description", "").split("\n")[0],
                    "produces": list(produces),
                    "consumes": list(consumes)
                }
            ],
            "artifact_flow": [],
            "confidence_score": 0.5
        }

        # Try to add compatible next steps
        compatibility = self.find_compatible(start_agent)

        for compatible in compatibility.get("can_feed_to", [])[:2]:  # Max 2 next steps
            next_agent = compatible["agent"]
            if next_agent in self.agents:
                next_def = self.agents[next_agent]
                next_produces, next_consumes = self.extract_artifacts(next_def)

                pipeline["steps"].append({
                    "step": len(pipeline["steps"]) + 1,
                    "agent": next_agent,
                    "description": next_def.get("description", "").split("\n")[0],
                    "produces": list(next_produces),
                    "consumes": list(next_consumes)
                })

                pipeline["artifact_flow"].append({
                    "from": start_agent,
                    "to": next_agent,
                    "artifact": compatible["artifact"]
                })

                pipeline["confidence_score"] += 0.2

        # Calculate if pipeline has gaps
        all_produces = set()
        all_consumes = set()
        for step in pipeline["steps"]:
            all_produces.update(step.get("produces", []))
            all_consumes.update(step.get("consumes", []))

        gaps = all_consumes - all_produces
        if not gaps:
            pipeline["confidence_score"] += 0.3
            pipeline["complete"] = True
        else:
            pipeline["complete"] = False
            pipeline["gaps"] = list(gaps)

        return pipeline

    def generate_compatibility_graph(self) -> Dict[str, Any]:
        """
        Generate complete compatibility graph for all agents

        Returns:
            Compatibility graph structure
        """
        graph = {
            "agents": [],
            "relationships": [],
            "artifact_types": [],
            "gaps": [],
            "metadata": {
                "total_agents": len(self.agents),
                "total_artifact_types": len(self.compatibility_map)
            }
        }

        # Add agents
        for agent_name, agent_def in self.agents.items():
            produces, consumes = self.extract_artifacts(agent_def)

            graph["agents"].append({
                "name": agent_name,
                "description": agent_def.get("description", "").split("\n")[0],
                "produces": list(produces),
                "consumes": list(consumes)
            })

        # Add relationships
        for agent_name in self.agents:
            compatibility = self.find_compatible(agent_name)

            for compatible in compatibility.get("can_feed_to", []):
                graph["relationships"].append({
                    "from": agent_name,
                    "to": compatible["agent"],
                    "artifact": compatible["artifact"],
                    "type": "produces_for"
                })

        # Add artifact types
        for artifact_type, info in self.compatibility_map.items():
            graph["artifact_types"].append({
                "type": artifact_type,
                "producers": info["producers"],
                "consumers": info["consumers"],
                "producer_count": len(info["producers"]),
                "consumer_count": len(info["consumers"])
            })

        # Find global gaps
        for artifact_type, info in self.compatibility_map.items():
            if not info["producers"] and info["consumers"]:
                graph["gaps"].append({
                    "artifact": artifact_type,
                    "issue": f"Consumed by {len(info['consumers'])} agents but no producers",
                    "consumers": info["consumers"],
                    "severity": "high"
                })

        return graph

    def analyze_agent(self, agent_name: str) -> Dict[str, Any]:
        """
        Complete compatibility analysis for one agent

        Args:
            agent_name: Name of agent to analyze

        Returns:
            Comprehensive analysis
        """
        compatibility = self.find_compatible(agent_name)

        if "error" in compatibility:
            return compatibility

        # Add suggested workflows
        workflows = []

        # Workflow 1: As a starting point
        if compatibility["can_feed_to"]:
            workflow = {
                "name": f"Start with {agent_name}",
                "description": f"Use {agent_name} as the first step",
                "agents": [agent_name] + [c["agent"] for c in compatibility["can_feed_to"][:2]]
            }
            workflows.append(workflow)

        # Workflow 2: As a middle step
        if compatibility["can_receive_from"] and compatibility["can_feed_to"]:
            workflow = {
                "name": f"{agent_name} in pipeline",
                "description": f"Use {agent_name} as a processing step",
                "agents": [
                    compatibility["can_receive_from"][0]["agent"],
                    agent_name,
                    compatibility["can_feed_to"][0]["agent"]
                ]
            }
            workflows.append(workflow)

        compatibility["suggested_workflows"] = workflows

        return compatibility

    def verify_registry_integrity(self) -> Dict[str, Any]:
        """
        Verify integrity of registry files using provenance hashes.

        Returns:
            Dictionary with verification results
        """
        provenance = get_provenance_logger()

        results = {
            "verified": [],
            "failed": [],
            "missing": [],
            "summary": {
                "total_checked": 0,
                "verified_count": 0,
                "failed_count": 0,
                "missing_count": 0
            }
        }

        # List of registry files to verify
        registry_files = [
            ("skills.json", REGISTRY_FILE),
            ("agents.json", str(Path(REGISTRY_DIR) / "agents.json")),
            ("workflow_history.json", str(Path(REGISTRY_DIR) / "workflow_history.json")),
        ]

        for artifact_id, file_path in registry_files:
            results["summary"]["total_checked"] += 1

            # Check if file exists
            if not os.path.exists(file_path):
                results["missing"].append({
                    "artifact": artifact_id,
                    "path": file_path,
                    "reason": "File does not exist"
                })
                results["summary"]["missing_count"] += 1
                continue

            try:
                # Load the registry file
                with open(file_path, 'r') as f:
                    content = json.load(f)

                # Get stored hash from file (if present)
                stored_hash = content.get("content_hash")

                # Remove content_hash field to compute original hash
                content_without_hash = {k: v for k, v in content.items() if k != "content_hash"}

                # Compute current hash
                current_hash = compute_hash(content_without_hash)

                # Get latest hash from provenance log
                latest_provenance_hash = provenance.get_latest_hash(artifact_id)

                # Verify
                if stored_hash and stored_hash == current_hash:
                    # Hash matches what's in the file;
                    # also check against provenance log
                    if latest_provenance_hash:
                        provenance_match = (stored_hash == latest_provenance_hash)
                    else:
                        provenance_match = None

                    results["verified"].append({
                        "artifact": artifact_id,
                        "path": file_path,
                        "hash": current_hash[:16] + "...",
                        "stored_hash_valid": True,
                        "provenance_logged": latest_provenance_hash is not None,
                        "provenance_match": provenance_match
                    })
                    results["summary"]["verified_count"] += 1

                elif stored_hash and stored_hash != current_hash:
                    # Hash mismatch - file may have been modified
                    results["failed"].append({
                        "artifact": artifact_id,
                        "path": file_path,
                        "reason": "Content hash mismatch",
                        "stored_hash": stored_hash[:16] + "...",
                        "computed_hash": current_hash[:16] + "...",
                        "severity": "high"
                    })
                    results["summary"]["failed_count"] += 1

                else:
                    # No hash stored in file
                    results["missing"].append({
                        "artifact": artifact_id,
                        "path": file_path,
                        "reason": "No content_hash field in file",
                        "computed_hash": current_hash[:16] + "...",
                        "provenance_available": latest_provenance_hash is not None
                    })
                    results["summary"]["missing_count"] += 1

            except Exception as e:
                results["failed"].append({
                    "artifact": artifact_id,
                    "path": file_path,
                    "reason": f"Verification error: {str(e)}",
                    "severity": "high"
                })
                results["summary"]["failed_count"] += 1

        return results


def main():
|
||||
"""CLI entry point"""
|
||||
import argparse
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description="meta.compatibility - Agent Compatibility Analyzer"
|
||||
)
|
||||
|
||||
subparsers = parser.add_subparsers(dest='command', help='Commands')
|
||||
|
||||
# Find compatible command
|
||||
find_parser = subparsers.add_parser('find-compatible', help='Find compatible agents')
|
||||
find_parser.add_argument("agent", help="Agent name to analyze")
|
||||
|
||||
# Suggest pipeline command
|
||||
suggest_parser = subparsers.add_parser('suggest-pipeline', help='Suggest multi-agent pipeline')
|
||||
suggest_parser.add_argument("goal", help="Goal description")
|
||||
suggest_parser.add_argument("--artifacts", nargs="+", help="Required artifact types")
|
||||
|
||||
# Analyze command
|
||||
analyze_parser = subparsers.add_parser('analyze', help='Analyze agent compatibility')
|
||||
analyze_parser.add_argument("agent", help="Agent name to analyze")
|
||||
|
||||
# List all command
|
||||
list_parser = subparsers.add_parser('list-all', help='List all compatibility')
|
||||
|
||||
# Verify integrity command
|
||||
verify_parser = subparsers.add_parser('verify-integrity', help='Verify registry integrity using provenance hashes')
|
||||
|
||||
# Output format
|
||||
parser.add_argument(
|
||||
"--format",
|
||||
choices=["json", "yaml", "text"],
|
||||
default="text",
|
||||
help="Output format"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if not args.command:
|
||||
parser.print_help()
|
||||
sys.exit(1)
|
||||
|
||||
analyzer = CompatibilityAnalyzer()
|
||||
analyzer.scan_agents()
|
||||
analyzer.build_compatibility_map()
|
||||
|
||||
result = None
|
||||
|
||||
if args.command == 'find-compatible':
|
||||
print(f"🔍 Finding agents compatible with '{args.agent}'...\n")
|
||||
result = analyzer.find_compatible(args.agent)
|
||||
|
||||
if args.format == "text" and "error" not in result:
|
||||
print(f"Agent: {result['agent']}")
|
||||
print(f"Produces: {', '.join(result['produces']) if result['produces'] else 'none'}")
|
||||
print(f"Consumes: {', '.join(result['consumes']) if result['consumes'] else 'none'}")
|
||||
|
||||
if result['can_feed_to']:
|
||||
print(f"\n✅ Can feed outputs to ({len(result['can_feed_to'])} agents):")
|
||||
for comp in result['can_feed_to']:
|
||||
print(f" • {comp['agent']} (via {comp['artifact']})")
|
||||
|
||||
if result['can_receive_from']:
|
||||
print(f"\n⬅️ Can receive inputs from ({len(result['can_receive_from'])} agents):")
|
||||
for comp in result['can_receive_from']:
|
||||
print(f" • {comp['agent']} (via {comp['artifact']})")
|
||||
|
||||
if result['gaps']:
|
||||
print(f"\n⚠️ Gaps ({len(result['gaps'])}):")
|
||||
for gap in result['gaps']:
|
||||
print(f" • {gap['artifact']}: {gap['issue']}")
|
||||
|
||||
elif args.command == 'suggest-pipeline':
|
||||
print(f"💡 Suggesting pipeline for: {args.goal}\n")
|
||||
result = analyzer.suggest_pipeline(args.goal, args.artifacts)
|
||||
|
||||
if args.format == "text" and "pipelines" in result:
|
||||
for i, pipeline in enumerate(result["pipelines"], 1):
|
||||
print(f"\n📋 Pipeline {i}: {pipeline['name']}")
|
||||
print(f" {pipeline['description']}")
|
||||
print(f" Complete: {'✅ Yes' if pipeline.get('complete', False) else '❌ No'}")
|
||||
print(f" Steps:")
|
||||
for step in pipeline['steps']:
|
||||
print(f" {step['step']}. {step['agent']} - {step['description'][:60]}...")
|
||||
|
||||
if pipeline.get('gaps'):
|
||||
print(f" Gaps: {', '.join(pipeline['gaps'])}")
|
||||
|
||||
elif args.command == 'analyze':
|
||||
print(f"📊 Analyzing '{args.agent}'...\n")
|
||||
result = analyzer.analyze_agent(args.agent)
|
||||
|
||||
if args.format == "text" and "error" not in result:
|
||||
print(f"Agent: {result['agent']}")
|
||||
print(f"Produces: {', '.join(result['produces']) if result['produces'] else 'none'}")
|
||||
print(f"Consumes: {', '.join(result['consumes']) if result['consumes'] else 'none'}")
|
||||
|
||||
if result.get('suggested_workflows'):
|
||||
print(f"\n🔄 Suggested Workflows:")
|
||||
for workflow in result['suggested_workflows']:
|
||||
print(f"\n {workflow['name']}")
|
||||
print(f" {workflow['description']}")
|
||||
print(f" Pipeline: {' → '.join(workflow['agents'])}")
|
||||
|
||||
elif args.command == 'list-all':
|
||||
print("🗺️ Generating complete compatibility graph...\n")
|
||||
result = analyzer.generate_compatibility_graph()
|
||||
|
||||
if args.format == "text":
|
||||
print(f"Total Agents: {result['metadata']['total_agents']}")
|
||||
print(f"Total Artifact Types: {result['metadata']['total_artifact_types']}")
|
||||
print(f"Total Relationships: {len(result['relationships'])}")
|
||||
|
||||
if result['gaps']:
|
||||
print(f"\n⚠️ Global Gaps ({len(result['gaps'])}):")
|
||||
for gap in result['gaps']:
|
||||
print(f" • {gap['artifact']}: {gap['issue']}")
|
||||
|
||||
elif args.command == 'verify-integrity':
|
||||
print("🔐 Verifying registry integrity using provenance hashes...\n")
|
||||
result = analyzer.verify_registry_integrity()
|
||||
|
||||
if args.format == "text":
|
||||
summary = result['summary']
|
||||
print(f"Total Checked: {summary['total_checked']}")
|
||||
print(f"✅ Verified: {summary['verified_count']}")
|
||||
print(f"❌ Failed: {summary['failed_count']}")
|
||||
print(f"⚠️ Missing Hash: {summary['missing_count']}")
|
||||
|
||||
if result['verified']:
|
||||
print(f"\n✅ Verified Artifacts ({len(result['verified'])}):")
|
||||
for item in result['verified']:
|
||||
print(f" • {item['artifact']}: {item['hash']}")
|
||||
if item.get('provenance_logged'):
|
||||
match_status = "✓" if item.get('provenance_match') else "✗"
|
||||
print(f" Provenance: {match_status}")
|
||||
|
||||
if result['failed']:
|
||||
print(f"\n❌ Failed Verifications ({len(result['failed'])}):")
|
||||
for item in result['failed']:
|
||||
print(f" • {item['artifact']}: {item['reason']}")
|
||||
if 'stored_hash' in item:
|
||||
print(f" Expected: {item['stored_hash']}")
|
||||
print(f" Computed: {item['computed_hash']}")
|
||||
|
||||
if result['missing']:
|
||||
print(f"\n⚠️ Missing Hashes ({len(result['missing'])}):")
|
||||
for item in result['missing']:
|
||||
print(f" • {item['artifact']}: {item['reason']}")
|
||||
|
||||
# Output result
|
||||
if result:
|
||||
if args.format == "json":
|
||||
print(json.dumps(result, indent=2))
|
||||
elif args.format == "yaml":
|
||||
print(yaml.dump(result, default_flow_style=False))
|
||||
elif "error" in result:
|
||||
print(f"\n❌ Error: {result['error']}")
|
||||
if "suggestion" in result:
|
||||
print(f"💡 {result['suggestion']}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
247
agents/meta.config.router/README.md
Normal file
@@ -0,0 +1,247 @@
# Agent: meta.config.router

## Purpose

Configure Claude Code Router for Betty to support multi-model LLM routing across environments. This agent creates or previews a `config.json` file at `~/.claude-code-router/config.json` with model providers, routing profiles, and audit metadata.

## Version

0.1.0

## Status

active

## Reasoning Mode

oneshot

## Capabilities

- Generate multi-model LLM router configurations
- Validate router configuration inputs for correctness
- Apply configurations to the filesystem with audit trails
- Support multiple output modes (preview, file, both)
- Work across local, cloud, and CI environments
- Ensure deterministic and portable configurations

## Skills Available

- `config.validate.router` - Validates router configuration inputs
- `config.generate.router` - Generates router configuration JSON
- `audit.log` - Records audit events for configuration changes

## Inputs

### llm_backends (required)
- **Type**: List of objects
- **Description**: Backend provider configurations
- **Schema**:
```json
[
  {
    "name": "string (e.g., openrouter, ollama, claude)",
    "api_base_url": "string (API endpoint URL)",
    "api_key": "string (optional for local providers)",
    "models": ["string (model identifiers)"]
  }
]
```

### routing_rules (required)
- **Type**: Dictionary
- **Description**: Mapping of Claude routing contexts to provider/model pairs
- **Contexts**: default, think, background, longContext
- **Schema**:
```json
{
  "default": { "provider": "string", "model": "string" },
  "think": { "provider": "string", "model": "string" },
  "background": { "provider": "string", "model": "string" },
  "longContext": { "provider": "string", "model": "string" }
}
```

### output_mode (optional)
- **Type**: enum
- **Values**: "preview" | "file" | "both"
- **Default**: "preview"
- **Description**: Output mode for configuration

### apply_config (optional)
- **Type**: boolean
- **Default**: false
- **Description**: Write config to disk if true

### metadata (optional)
- **Type**: object
- **Description**: Optional audit metadata (initiator, environment, etc.)

## Outputs

### routing_config
- **Type**: object
- **Description**: Rendered router config as JSON

### write_status
- **Type**: string
- **Values**: "success" | "skipped" | "error"
- **Description**: Status of file write operation

### audit_id
- **Type**: string
- **Description**: Unique trace ID for configuration event
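### Example

The three outputs above are returned together as a single result object. An illustrative shape only, with placeholder values (see Example Output below for a full `routing_config`):

```json
{
  "routing_config": { "version": "1.0.0", "backends": [], "routing": {} },
  "write_status": "success",
  "audit_id": "7f3c2a1e-9b4d-4c8a-a2f1-0e5d6c7b8a90"
}
```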
## Behavior

1. Validates inputs via `config.validate.router`
2. Constructs a valid router config using `config.generate.router`
3. If `apply_config=true` and `output_mode≠preview`, writes the config to: `~/.claude-code-router/config.json`
4. Outputs the JSON config regardless of write action
5. Logs an audit record via `audit.log` (see the sample entry below) with:
   - timestamp
   - initiator
   - hash of input
   - environment fingerprint
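A sample audit-log entry as recorded by `audit.log`. Values are illustrative; the field set mirrors the agent's audit record:

```json
{
  "audit_id": "7f3c2a1e-9b4d-4c8a-a2f1-0e5d6c7b8a90",
  "timestamp": "2025-11-01T12:34:56Z",
  "agent": "meta.config.router",
  "version": "0.1.0",
  "action": "router_config_generated",
  "write_status": "success",
  "input_hash": "a1b2c3d4e5f60718",
  "environment": "local",
  "initiator": "user@example.com",
  "metadata": {}
}
```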
## Usage Example

```bash
# Preview configuration (no file write)
/meta/config.router --routing_config_path=router-config.yaml

# Apply configuration to disk
/meta/config.router --routing_config_path=router-config.yaml --apply_config=true

# Both preview and write
/meta/config.router --routing_config_path=router-config.yaml --apply_config=true --output_mode=both
```

## Example Input (YAML)

```yaml
llm_backends:
  - name: openrouter
    api_base_url: https://openrouter.ai/api/v1
    api_key: ${OPENROUTER_API_KEY}
    models:
      - anthropic/claude-3.5-sonnet
      - openai/gpt-4

  - name: ollama
    api_base_url: http://localhost:11434/v1
    models:
      - llama3.1:70b
      - codellama:34b

routing_rules:
  default:
    provider: openrouter
    model: anthropic/claude-3.5-sonnet

  think:
    provider: openrouter
    model: anthropic/claude-3.5-sonnet

  background:
    provider: ollama
    model: llama3.1:70b

  longContext:
    provider: openrouter
    model: anthropic/claude-3.5-sonnet

metadata:
  initiator: user@example.com
  environment: production
  purpose: Multi-model routing for development
```

## Example Output

```json
{
  "version": "1.0.0",
  "generated_at": "2025-11-01T12:34:56Z",
  "backends": [
    {
      "name": "openrouter",
      "api_base_url": "https://openrouter.ai/api/v1",
      "api_key": "${OPENROUTER_API_KEY}",
      "models": [
        "anthropic/claude-3.5-sonnet",
        "openai/gpt-4"
      ]
    },
    {
      "name": "ollama",
      "api_base_url": "http://localhost:11434/v1",
      "models": [
        "llama3.1:70b",
        "codellama:34b"
      ]
    }
  ],
  "routing": {
    "default": {
      "provider": "openrouter",
      "model": "anthropic/claude-3.5-sonnet"
    },
    "think": {
      "provider": "openrouter",
      "model": "anthropic/claude-3.5-sonnet"
    },
    "background": {
      "provider": "ollama",
      "model": "llama3.1:70b"
    },
    "longContext": {
      "provider": "openrouter",
      "model": "anthropic/claude-3.5-sonnet"
    }
  },
  "metadata": {
    "generated_by": "meta.config.router",
    "schema_version": "1.0.0",
    "initiator": "user@example.com",
    "environment": "production",
    "purpose": "Multi-model routing for development"
  }
}
```

## Permissions

- `filesystem:read` - Read router config input files
- `filesystem:write` - Write config to ~/.claude-code-router/config.json

## Artifacts

### Consumes
- `router-config-input` - User-provided router configuration inputs

### Produces
- `llm-router-config` - Complete Claude Code Router configuration file
- `audit-log-entry` - Audit trail entry for configuration events

## Tags

llm, router, configuration, meta, infra, openrouter, claude, ollama, multi-model

## Environments

- local
- cloud
- ci

## Requires Human Approval

false

## Notes

- The config is deterministic and portable across environments
- API keys can use environment variable substitution (e.g., ${OPENROUTER_API_KEY})
- Local providers (localhost/127.0.0.1) don't require API keys
- All configuration changes are audited for traceability
- The agent supports preview mode to verify configuration before applying
92
agents/meta.config.router/agent.yaml
Normal file
@@ -0,0 +1,92 @@
name: meta.config.router
version: 0.1.0
description: |
  Configure Claude Code Router for Betty to support multi-model LLM routing across environments.
  This agent creates or previews a config.json file at ~/.claude-code-router/config.json with
  model providers, routing profiles, and audit metadata. Works across local, cloud, or CI-based
  environments with built-in validation, output rendering, config application, and auditing.
status: active
reasoning_mode: oneshot

capabilities:
  - Generate multi-model LLM router configurations
  - Validate router configuration inputs for correctness
  - Apply configurations to filesystem with audit trails
  - Support multiple output modes (preview, file, both)
  - Work across local, cloud, and CI environments
  - Ensure deterministic and portable configurations

skills_available:
  - config.validate.router
  - config.generate.router
  - audit.log

permissions:
  - filesystem:read
  - filesystem:write

artifact_metadata:
  consumes:
    - type: router-config-input
      description: User-provided router configuration inputs (backends, routing rules, metadata)
      file_pattern: "*-router-input.{json,yaml}"
      content_type: application/json
      required: true

  produces:
    - type: llm-router-config
      description: Complete Claude Code Router configuration file
      file_pattern: "config.json"
      content_type: application/json
      schema: schemas/router-config.json

    - type: audit-log-entry
      description: Audit trail entry for configuration events
      file_pattern: "audit_log.json"
      content_type: application/json

system_prompt: |
  You are the meta.config.router agent for the Betty Framework.

  Your responsibilities:
  1. Validate router configuration inputs using config.validate.router
  2. Generate valid router config JSON using config.generate.router
  3. Write config to ~/.claude-code-router/config.json when apply_config=true
  4. Provide preview, file write, or both modes based on output_mode
  5. Log audit records with timestamp, initiator, and environment fingerprint

  Inputs you expect:
  - llm_backends: List of provider configs (name, api_base_url, api_key, models)
  - routing_rules: Mapping of routing contexts (default, think, background, longContext)
  - output_mode: "preview" | "file" | "both" (default: preview)
  - apply_config: boolean (write to disk if true)
  - metadata: Optional audit metadata (initiator, environment, etc.)

  Outputs you generate:
  - routing_config: Complete router configuration JSON
  - write_status: "success" | "skipped" | "error"
  - audit_id: Unique trace ID for the configuration event

  Workflow:
  1. Call config.validate.router with llm_backends and routing_rules
  2. If validation fails, return errors and exit
  3. Call config.generate.router to create the config JSON
  4. If apply_config=true and output_mode≠preview, write to ~/.claude-code-router/config.json
  5. Call audit.log to record the configuration event
  6. Return config, write status, and audit ID

  Environment awareness:
  - Detect local vs cloud vs CI environment
  - Adjust file paths accordingly
  - Include environment fingerprint in audit metadata

tags:
  - llm
  - router
  - configuration
  - meta
  - infra
  - openrouter
  - claude
  - ollama
  - multi-model
323
agents/meta.config.router/meta_config_router.py
Executable file
@@ -0,0 +1,323 @@
#!/usr/bin/env python3
"""
Agent: meta.config.router
Configure Claude Code Router for multi-model LLM support
"""

import json
import sys
import os
import hashlib
import uuid
from datetime import datetime
from pathlib import Path
from typing import Dict, Any, List, Optional
import subprocess
import yaml


class MetaConfigRouter:
    """Configure Claude Code Router for Betty Framework"""

    def __init__(self):
        self.betty_root = Path(__file__).parent.parent.parent
        self.skills_root = self.betty_root / "skills"
        self.audit_log_path = self.betty_root / "registry" / "audit_log.json"

    def run(
        self,
        routing_config_path: str,
        apply_config: bool = False,
        output_mode: str = "preview"
    ) -> Dict[str, Any]:
        """
        Main execution method

        Args:
            routing_config_path: Path to router config input file (YAML or JSON)
            apply_config: Whether to write config to disk
            output_mode: "preview" | "file" | "both"

        Returns:
            Result with routing_config, write_status, and audit_id
        """
        print("🔧 meta.config.router v0.1.0")
        print(f"📋 Config input: {routing_config_path}")
        print(f"📝 Output mode: {output_mode}")
        print(f"💾 Apply config: {apply_config}")
        print()

        # Load input config
        config_input = self._load_config_input(routing_config_path)

        # Extract inputs
        llm_backends = config_input.get("llm_backends", [])
        routing_rules = config_input.get("routing_rules", {})
        config_options = config_input.get("config_options", {})
        metadata = config_input.get("metadata", {})  # For audit logging only

        # Step 1: Validate inputs
        print("🔍 Validating router configuration...")
        validation_result = self._validate_config(llm_backends, routing_rules)

        if not validation_result["valid"]:
            print("❌ Validation failed:")
            for error in validation_result["errors"]:
                print(f"  - {error}")
            return {
                "success": False,
                "errors": validation_result["errors"],
                "warnings": validation_result["warnings"]
            }

        if validation_result["warnings"]:
            print("⚠️ Warnings:")
            for warning in validation_result["warnings"]:
                print(f"  - {warning}")

        print("✅ Validation passed")
        print()

        # Step 2: Generate router config
        print("🏗️ Generating router configuration...")
        router_config = self._generate_config(
            llm_backends,
            routing_rules,
            config_options
        )
        print("✅ Configuration generated")
        print()

        # Step 3: Write config if requested
        write_status = "skipped"
        config_path = None

        if apply_config and output_mode != "preview":
            print("💾 Writing configuration to disk...")
            config_path, write_status = self._write_config(router_config)

            if write_status == "success":
                print(f"✅ Configuration written to: {config_path}")
            else:
                print("❌ Failed to write configuration")
            print()

        # Step 4: Log audit record
        print("📝 Logging audit record...")
        audit_id = self._log_audit(
            config_input=config_input,
            write_status=write_status,
            metadata=metadata
        )
        print(f"✅ Audit ID: {audit_id}")
        print()

        # Step 5: Output results
        result = {
            "success": True,
            "routing_config": router_config,
            "write_status": write_status,
            "audit_id": audit_id
        }

        if config_path:
            result["config_path"] = str(config_path)

        # Display preview if requested
        if output_mode in ["preview", "both"]:
            print("📄 Router Configuration Preview:")
            print("─" * 80)
            print(json.dumps(router_config, indent=2))
            print("─" * 80)
            print()

        return result

    def _load_config_input(self, config_path: str) -> Dict[str, Any]:
        """Load router config input from YAML or JSON file"""
        path = Path(config_path)

        if not path.exists():
            raise FileNotFoundError(f"Config file not found: {config_path}")

        with open(path, 'r') as f:
            if path.suffix in ['.yaml', '.yml']:
                return yaml.safe_load(f)
            else:
                return json.load(f)

    def _validate_config(
        self,
        llm_backends: List[Dict[str, Any]],
        routing_rules: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Validate router configuration using config.validate.router skill"""
        validator_script = self.skills_root / "config.validate.router" / "validate_router.py"

        config_json = json.dumps({
            "llm_backends": llm_backends,
            "routing_rules": routing_rules
        })

        try:
            result = subprocess.run(
                [sys.executable, str(validator_script), config_json],
                capture_output=True,
                text=True,
                check=False
            )

            return json.loads(result.stdout)
        except Exception as e:
            return {
                "valid": False,
                "errors": [f"Validation error: {e}"],
                "warnings": []
            }

    def _generate_config(
        self,
        llm_backends: List[Dict[str, Any]],
        routing_rules: Dict[str, Any],
        config_options: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Generate router configuration using config.generate.router skill"""
        generator_script = self.skills_root / "config.generate.router" / "generate_router.py"

        input_json = json.dumps({
            "llm_backends": llm_backends,
            "routing_rules": routing_rules,
            "config_options": config_options
        })

        try:
            result = subprocess.run(
                [sys.executable, str(generator_script), input_json],
                capture_output=True,
                text=True,
                check=True
            )

            return json.loads(result.stdout)
        except Exception as e:
            raise RuntimeError(f"Config generation failed: {e}")

    def _write_config(self, router_config: Dict[str, Any]) -> tuple[Optional[Path], str]:
        """Write router config to ~/.claude-code-router/config.json"""
        try:
            config_dir = Path.home() / ".claude-code-router"
            config_dir.mkdir(parents=True, exist_ok=True)

            config_path = config_dir / "config.json"

            with open(config_path, 'w') as f:
                json.dump(router_config, f, indent=2)

            return config_path, "success"
        except Exception as e:
            print(f"Error writing config: {e}")
            return None, "error"

    def _log_audit(
        self,
        config_input: Dict[str, Any],
        write_status: str,
        metadata: Dict[str, Any]
    ) -> str:
        """Log audit record for configuration event"""
        audit_id = str(uuid.uuid4())

        # Calculate hash of input
        input_hash = hashlib.sha256(
            json.dumps(config_input, sort_keys=True).encode()
        ).hexdigest()[:16]

        audit_entry = {
            "audit_id": audit_id,
            "timestamp": datetime.utcnow().isoformat() + "Z",
            "agent": "meta.config.router",
            "version": "0.1.0",
            "action": "router_config_generated",
            "write_status": write_status,
            "input_hash": input_hash,
            "environment": self._detect_environment(),
            "initiator": metadata.get("initiator", "unknown"),
            "metadata": metadata
        }

        # Append to audit log
        try:
            if self.audit_log_path.exists():
                with open(self.audit_log_path, 'r') as f:
                    audit_log = json.load(f)
            else:
                audit_log = []

            audit_log.append(audit_entry)

            with open(self.audit_log_path, 'w') as f:
                json.dump(audit_log, f, indent=2)
        except Exception as e:
            print(f"Warning: Failed to write audit log: {e}")

        return audit_id

    def _detect_environment(self) -> str:
        """Detect execution environment (local, cloud, ci)"""
        if os.getenv("CI"):
            return "ci"
        elif os.getenv("CLOUD_ENV"):
            return "cloud"
        else:
            return "local"


def main():
    """CLI entrypoint"""
    if len(sys.argv) < 2:
        print("Usage: meta_config_router.py <routing_config_path> [--apply_config] [--output_mode=<mode>]")
        print()
        print("Arguments:")
        print("  routing_config_path   Path to router config input file (YAML or JSON)")
        print("  --apply_config        Write config to ~/.claude-code-router/config.json")
        print("  --output_mode=MODE    Output mode: preview, file, or both (default: preview)")
        sys.exit(1)

    # Parse arguments
    routing_config_path = sys.argv[1]
    apply_config = "--apply_config" in sys.argv or "--apply-config" in sys.argv
    output_mode = "preview"

    for arg in sys.argv[2:]:
        if arg.startswith("--output_mode=") or arg.startswith("--output-mode="):
            output_mode = arg.split("=")[1]

    # Run agent
    agent = MetaConfigRouter()
    try:
        result = agent.run(
            routing_config_path=routing_config_path,
            apply_config=apply_config,
            output_mode=output_mode
        )

        if result["success"]:
            print("✅ meta.config.router completed successfully")
            print(f"📋 Audit ID: {result['audit_id']}")
            print(f"💾 Write status: {result['write_status']}")
            sys.exit(0)
        else:
            print("❌ meta.config.router failed")
            for error in result.get("errors", []):
                print(f"  - {error}")
            sys.exit(1)

    except Exception as e:
        print(f"❌ Error: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(1)


if __name__ == "__main__":
    main()
374
agents/meta.create/README.md
Normal file
@@ -0,0 +1,374 @@
# meta.create - Component Creation Orchestrator

The intelligent orchestrator for creating Betty skills, commands, and agents from natural language descriptions.

## Purpose

`meta.create` is the primary entry point for creating Betty components. It automatically:

- **Detects** what type of component you're describing (skill, command, agent, or combination)
- **Checks** inventory to avoid duplicates
- **Analyzes** complexity to determine the optimal creation pattern
- **Creates** components in dependency order
- **Validates** compatibility and identifies gaps
- **Recommends** next steps for completion

## Why Use meta.create?

Instead of manually running multiple meta-agents (`meta.skill`, `meta.command`, `meta.agent`, `meta.compatibility`), `meta.create` orchestrates everything for you in the right order.

### Before meta.create:
```bash
# Manual workflow - you had to know the order and check everything
python3 agents/meta.command/meta_command.py description.md
# Check if it recommends creating a skill...
python3 agents/meta.skill/meta_skill.py skill_description.md
python3 agents/meta.agent/meta_agent.py agent_description.md
python3 agents/meta.compatibility/meta_compatibility.py analyze my.agent
# Check for gaps, create missing skills...
```

### With meta.create:
```bash
# One command does it all
python3 agents/meta.create/meta_create.py description.md
```

## How It Works

### Step 1: Analysis
Parses your description to determine:
- Is this a skill? command? agent?
- What artifacts are involved?
- What's the complexity level?

### Step 2: Duplicate Check
Queries registries to find existing components (see the sketch after this list):
- Prevents recreating existing skills
- Shows what you can reuse
- Skips unnecessary work
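A minimal sketch of what this step does internally, using the `registry.query` skill (the component name is illustrative):

```python
import registry_query  # skills/registry.query/registry_query.py

# Exact-match lookup; `registry` is one of "skills", "commands", or "agents"
result = registry_query.query_registry(registry="skills", name="api.validate", fuzzy=False)

if result.get("ok") and result.get("details", {}).get("matching_entries", 0) > 0:
    print("Component already exists - reuse it instead of recreating")
```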
### Step 3: Creation Planning
Uses `meta.command` complexity analysis to determine the pattern (see the sketch after this list):
- **COMMAND_ONLY**: Simple inline logic (1-3 steps)
- **SKILL_ONLY**: Reusable utility without a command
- **SKILL_AND_COMMAND**: Complex logic in a skill + a command wrapper
- **AGENT**: Multi-skill orchestration
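A minimal sketch of this analysis as meta.create runs it, delegating to `meta.command` (the file path and `base_dir` value are illustrative):

```python
import meta_command  # agents/meta.command/meta_command.py

creator = meta_command.CommandCreator(base_dir=".")  # normally the Betty root
cmd_desc = creator.parse_description("description.md")

with open("description.md") as f:
    analysis = creator.analyze_complexity(cmd_desc, f.read())

print(analysis["recommended_pattern"])  # e.g. "SKILL_AND_COMMAND"
print(analysis["should_create_skill"])  # True when the logic belongs in a skill
```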
### Step 4: Component Creation
Creates components in dependency order:
1. **Skills first** (using `meta.skill`)
2. **Commands second** (using `meta.command`)
3. **Agents last** (using `meta.agent` with skill composition)

### Step 5: Compatibility Validation
For agents, runs `meta.compatibility` (see the sketch after this list) to:
- Find compatible agent pipelines
- Identify artifact gaps
- Suggest workflows
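The same analysis can also be run standalone; a minimal sketch (the agent name is illustrative):

```python
import meta_compatibility  # agents/meta.compatibility/meta_compatibility.py

analyzer = meta_compatibility.CompatibilityAnalyzer()
analyzer.scan_agents()
analyzer.build_compatibility_map()

report = analyzer.analyze_agent("api.validator")
print(report["can_feed_to"])          # agents that can consume this agent's outputs
print(report["gaps"])                 # artifacts consumed but never produced
print(report["suggested_workflows"])  # candidate pipelines featuring this agent
```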
### Step 6: Recommendations
Provides actionable next steps:
- Missing skills to create
- Compatibility issues to fix
- Integration opportunities

## Usage

### Basic Usage

```bash
python3 agents/meta.create/meta_create.py <description.md>
```

### Create Skill and Command

```bash
python3 agents/meta.create/meta_create.py examples/api_validate.md
```

If `api_validate.md` describes a complex command, meta.create will:
1. Analyze complexity → detects SKILL_AND_COMMAND pattern
2. Create the skill first
3. Create the command that uses the skill
4. Report what was created

### Create Agent with Dependencies

```bash
python3 agents/meta.create/meta_create.py examples/api_agent.md
```

meta.create will:
1. Detect it's an agent description
2. Check for required skills (reuse existing)
3. Create missing skills if needed
4. Create the agent with proper skill composition
5. Validate compatibility with other agents
6. Report gaps and recommendations

### Auto-Fill Gaps

```bash
python3 agents/meta.create/meta_create.py description.md --auto-fill-gaps
```

Automatically creates missing skills to fill compatibility gaps.

### Skip Duplicate Check

```bash
python3 agents/meta.create/meta_create.py description.md --skip-duplicate-check
```

Force creation even if components exist (useful for updates).

### Output Formats

```bash
# Human-readable text (default)
python3 agents/meta.create/meta_create.py description.md

# JSON output for automation
python3 agents/meta.create/meta_create.py description.md --output-format json

# YAML output
python3 agents/meta.create/meta_create.py description.md --output-format yaml
```

### With Traceability

```bash
python3 agents/meta.create/meta_create.py description.md \
  --requirement-id REQ-2025-042 \
  --requirement-description "Create API validation agent" \
  --issue-id JIRA-1234 \
  --requested-by "Product Team"
```

## Description File Format

Your description file can be Markdown or JSON. meta.create detects the type automatically.

### Example: Skill Description

```markdown
# Name: data.validate

# Type: skill

# Purpose:
Validate data against JSON schemas with detailed error reporting

# Inputs:
- data (JSON object to validate)
- schema (JSON schema for validation)

# Outputs:
- validation_result (validation report with errors)

# Produces Artifacts:
- validation.report

# Consumes Artifacts:
- data.json
- schema.json
```

### Example: Command Description

```markdown
# Name: /validate-api

# Type: command

# Description:
Validate API responses against OpenAPI schemas

# Execution Type: skill

# Target: api.validate

# Parameters:
- endpoint: string (required) - API endpoint to validate
- schema: string (required) - Path to OpenAPI schema
```

### Example: Agent Description

```markdown
# Name: api.validator

# Type: agent

# Purpose:
Comprehensive API testing and validation agent

# Inputs:
- api.spec

# Outputs:
- validation.report
- test.results

# Examples:
- Validate all API endpoints against OpenAPI spec
- Generate test cases from schema
```

## What Gets Created

### For Skills
- `skills/{name}/skill.yaml` - Skill configuration
- `skills/{name}/{name}.py` - Python implementation stub
- `skills/{name}/test_{name}.py` - pytest test template
- `skills/{name}/README.md` - Documentation

### For Commands
- `commands/{name}.yaml` - Command manifest
- Recommendations for skill creation if needed

### For Agents
- `agents/{name}/agent.yaml` - Agent configuration
- `agents/{name}/README.md` - Documentation with usage examples
- Compatibility analysis report

## Output Report

meta.create provides a comprehensive report:

```
🎯 meta.create - Orchestrating component creation from description.md

📋 Step 1: Analyzing description...
   Detected types: Skill=True, Command=True, Agent=False

🔍 Step 2: Checking for existing components...
   ✅ No duplicates found

🛠️ Step 3: Creating components...
   📊 Analyzing command complexity...
      Recommended pattern: SKILL_AND_COMMAND
      Should create skill: True

   🔧 Creating skill...
   ✅ Skill 'api.validate' created

   📜 Creating command...
   ✅ Command '/validate-api' created

================================================================================
✨ CREATION SUMMARY
================================================================================

✅ Created 2 component(s):
   • SKILL: api.validate
   • COMMAND: /validate-api

================================================================================
```

## Integration with Other Meta-Agents

meta.create uses:
- **meta.command** - Complexity analysis and command generation
- **meta.skill** - Skill creation with full package
- **meta.agent** - Agent creation with skill composition
- **meta.compatibility** - Compatibility validation and gap detection
- **registry.query** - Duplicate checking
- **agent.compose** - Skill recommendation for agents

## Decision Tree

```
          Description Input
                 ↓
             Parse Type
                 ↓
      ┌──────────┴──────────┐
      ↓                     ↓
   Command?               Agent?
      ↓                     ↓
   Analyze             Find Skills
   Complexity               ↓
      ↓                Create Missing
   SKILL_ONLY             Skills
   COMMAND_ONLY             ↓
   SKILL_AND_COMMAND   Create Agent
      ↓                     ↓
   Create Skill      Validate Compat
      ↓                     ↓
   Create Command     Report Gaps
      ↓                     ↓
    Done               Recommend
```

## Examples

### Example 1: Simple Command

```bash
# description.md specifies a simple 2-step command
python3 agents/meta.create/meta_create.py description.md
# Result: Creates COMMAND_ONLY (inline logic is sufficient)
```

### Example 2: Complex Command

```bash
# description.md specifies 10+ step validation logic
python3 agents/meta.create/meta_create.py description.md
# Result: Creates SKILL_AND_COMMAND (skill has logic, command delegates)
```

### Example 3: Multi-Agent System

```bash
# description.md describes an orchestration agent
python3 agents/meta.create/meta_create.py description.md
# Result:
# - Creates agent with existing skills
# - Validates compatibility
# - Reports: "Can receive from api.architect, can feed to report.generator"
# - Suggests pipeline workflows
```

## Benefits

✅ **Intelligent** - Automatically determines the optimal creation pattern
✅ **Safe** - Checks for duplicates, prevents overwrites
✅ **Complete** - Creates all necessary components in order
✅ **Validated** - Runs compatibility checks automatically
✅ **Traceable** - Supports requirement tracking
✅ **Informative** - Provides detailed reports and recommendations

## Next Steps

After using meta.create:

1. **Review** created files
2. **Implement** TODO sections in generated code
3. **Test** with pytest
4. **Register** components (manual or use `skill.register`, etc.)
5. **Use** in your Betty workflows

## Troubleshooting

**Q: meta.create says component already exists**
A: Use `--skip-duplicate-check` to override, or rename your component

**Q: Compatibility gaps reported**
A: Use `--auto-fill-gaps` or manually create the missing skills

**Q: Wrong pattern detected**
A: Add an explicit `# Type: skill` or `# Type: command` marker to your description

## Related Documentation

- [META_AGENTS.md](../../docs/META_AGENTS.md) - Overview of all meta-agents
- [SKILL_COMMAND_DECISION_TREE.md](../../docs/SKILL_COMMAND_DECISION_TREE.md) - Pattern decision logic
- [ARTIFACTS.md](../../docs/ARTIFACTS.md) - Artifact metadata system

---

*Created by the Betty Framework Meta-Agent System*
80
agents/meta.create/agent.yaml
Normal file
@@ -0,0 +1,80 @@
name: meta.create
version: 0.1.0
description: |
  Orchestrator meta-agent that intelligently creates skills, commands, and agents.

  Capabilities:
  - Detects component type from description
  - Checks inventory for duplicates
  - Analyzes complexity and determines creation pattern
  - Creates skills, commands, and agents in proper order
  - Validates compatibility using meta.compatibility
  - Identifies gaps and provides recommendations
  - Supports auto-filling missing dependencies

  This is the primary entry point for creating Betty components from natural
  language descriptions.

status: draft
reasoning_mode: iterative
capabilities:
  - Diagnose component needs and recommend skills, commands, or agents to create
  - Generate scaffolding for new framework components with proper metadata
  - Coordinate validation steps to ensure compatibility before registration

skills_available:
  - registry.query
  - agent.compose

permissions:
  - filesystem:read
  - filesystem:write
  - registry:read
  - registry:write

artifact_metadata:
  consumes:
    - type: component.description
      description: Natural language description of component to create
      format: markdown or JSON
      required: true

  produces:
    - type: skill.definition
      description: Complete skill package with YAML, implementation, tests
      optional: true

    - type: command.manifest
      description: Command manifest in YAML format
      optional: true

    - type: agent.definition
      description: Agent configuration with skill composition
      optional: true

    - type: compatibility.report
      description: Compatibility analysis showing agent relationships and gaps
      optional: true

tags:
  - meta
  - orchestration
  - creation
  - automation

system_prompt: |
  You are meta.create, the intelligent orchestrator for creating Betty components.

  Your responsibilities:
  1. Analyze component descriptions to determine type (skill/command/agent)
  2. Check registries to avoid creating duplicates
  3. Determine optimal creation pattern using complexity analysis
  4. Create components in dependency order (skills → commands → agents)
  5. Validate agent compatibility and identify gaps
  6. Provide actionable recommendations for completion

  Always prioritize:
  - Reusing existing components over creating new ones
  - Creating building blocks (skills) before orchestrators (agents)
  - Validating compatibility to ensure smooth agent pipelines
  - Providing clear feedback about what was created and why
555
agents/meta.create/meta_create.py
Normal file
@@ -0,0 +1,555 @@
#!/usr/bin/env python3
"""
meta.create - Orchestrator Meta-Agent

Intelligently orchestrates the creation of skills, commands, and agents.
Checks inventory, determines what needs to be created, validates compatibility,
and fills gaps automatically.

This is the main entry point for creating Betty components from descriptions.
"""

import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional, Set, Tuple
from datetime import datetime

# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)

# Import other meta agents by adding their paths
meta_command_path = Path(parent_dir) / "agents" / "meta.command"
meta_skill_path = Path(parent_dir) / "agents" / "meta.skill"
meta_agent_path = Path(parent_dir) / "agents" / "meta.agent"
meta_compatibility_path = Path(parent_dir) / "agents" / "meta.compatibility"
registry_query_path = Path(parent_dir) / "skills" / "registry.query"

sys.path.insert(0, str(meta_command_path))
sys.path.insert(0, str(meta_skill_path))
sys.path.insert(0, str(meta_agent_path))
sys.path.insert(0, str(meta_compatibility_path))
sys.path.insert(0, str(registry_query_path))

import meta_command
import meta_skill
import meta_agent
import meta_compatibility
import registry_query

from betty.config import BASE_DIR
from betty.logging_utils import setup_logger
from betty.traceability import get_tracer, RequirementInfo

logger = setup_logger(__name__)


class ComponentCreator:
    """Orchestrates the creation of skills, commands, and agents"""

    def __init__(self, base_dir: str = BASE_DIR):
        """Initialize orchestrator"""
        self.base_dir = Path(base_dir)
        self.created_components = []
        self.compatibility_analyzer = None

    def check_duplicate(self, component_type: str, name: str) -> Optional[Dict[str, Any]]:
        """
        Check if a component already exists in registry

        Args:
            component_type: 'skills', 'commands', or 'agents'
            name: Component name to check

        Returns:
            Existing component info if found, None otherwise
        """
        try:
            result = registry_query.query_registry(
                registry=component_type,
                name=name,
                fuzzy=False
            )

            if result.get("ok") and result.get("details", {}).get("matching_entries", 0) > 0:
                matches = result["details"]["results"]
                # Check for exact match
                for match in matches:
                    if match["name"] == name:
                        return match
            return None

        except Exception as e:
            logger.warning(f"Error checking duplicate for {name}: {e}")
            return None

    def parse_description_type(self, description_path: str) -> Dict[str, Any]:
        """
        Determine what type of component is being described

        Args:
            description_path: Path to description file

        Returns:
            Dict with component_type and parsed metadata
        """
        path = Path(description_path)
        content = path.read_text()

        # Try to determine type from content
        result = {
            "is_skill": False,
            "is_command": False,
            "is_agent": False,
            "path": str(path)
        }

        content_lower = content.lower()

        # Check for skill indicators
        if any(x in content_lower for x in ["# produces artifacts:", "# consumes artifacts:",
                                            "skill.yaml", "artifact_metadata"]):
            result["is_skill"] = True

        # Check for command indicators
        if any(x in content_lower for x in ["# execution type:", "# parameters:",
                                            "command manifest"]):
            result["is_command"] = True

        # Check for agent indicators
        if any(x in content_lower for x in ["# skills:", "skills_available",
                                            "agent purpose", "multi-step", "orchestrat"]):
            result["is_agent"] = True

        # If ambiguous, look at explicit markers
        if "# type: skill" in content_lower:
            result["is_skill"] = True
            result["is_command"] = False
            result["is_agent"] = False
        elif "# type: command" in content_lower:
            result["is_command"] = True
            result["is_skill"] = False
            result["is_agent"] = False
        elif "# type: agent" in content_lower:
            result["is_agent"] = True
            result["is_skill"] = False
            result["is_command"] = False

        return result

    def create_skill(
        self,
        description_path: str,
        requirement: Optional[RequirementInfo] = None
    ) -> Dict[str, Any]:
        """
        Create a skill using meta.skill

        Args:
            description_path: Path to skill description
            requirement: Optional requirement info

        Returns:
            Creation result
        """
        logger.info(f"Creating skill from {description_path}")

        creator = meta_skill.SkillCreator(base_dir=str(self.base_dir))
        result = creator.create_skill(description_path, requirement=requirement)

        self.created_components.append({
            "type": "skill",
            "name": result.get("skill_name"),
            "files": result.get("created_files", []),
            "trace_id": result.get("trace_id")
        })

        return result

    def create_command(
        self,
        description_path: str,
        requirement: Optional[RequirementInfo] = None
    ) -> Dict[str, Any]:
        """
        Create a command using meta.command

        Args:
            description_path: Path to command description
            requirement: Optional requirement info

        Returns:
            Creation result with complexity analysis
        """
        logger.info(f"Creating command from {description_path}")

        creator = meta_command.CommandCreator(base_dir=str(self.base_dir))
        result = creator.create_command(description_path, requirement=requirement)

        self.created_components.append({
            "type": "command",
            "name": result.get("command_name"),
            "manifest": result.get("manifest_file"),
            "analysis": result.get("complexity_analysis"),
            "trace_id": result.get("trace_id")
        })

        return result

    def create_agent(
        self,
        description_path: str,
        requirement: Optional[RequirementInfo] = None
    ) -> Dict[str, Any]:
        """
        Create an agent using meta.agent

        Args:
            description_path: Path to agent description
            requirement: Optional requirement info

        Returns:
            Creation result
        """
        logger.info(f"Creating agent from {description_path}")

        creator = meta_agent.AgentCreator(
            registry_path=str(self.base_dir / "registry" / "skills.json")
        )
        result = creator.create_agent(description_path, requirement=requirement)

        self.created_components.append({
            "type": "agent",
            "name": result.get("name"),
            "files": [result.get("agent_yaml"), result.get("readme")],
            "skills": result.get("skills", []),
            "trace_id": result.get("trace_id")
        })

        return result

    def validate_compatibility(self, agent_name: str) -> Dict[str, Any]:
        """
        Validate agent compatibility using meta.compatibility

        Args:
            agent_name: Name of agent to validate

        Returns:
            Compatibility analysis
        """
        logger.info(f"Validating compatibility for {agent_name}")

        if not self.compatibility_analyzer:
            self.compatibility_analyzer = meta_compatibility.CompatibilityAnalyzer(
                base_dir=str(self.base_dir)
            )
            self.compatibility_analyzer.scan_agents()
            self.compatibility_analyzer.build_compatibility_map()

        return self.compatibility_analyzer.analyze_agent(agent_name)

    def orchestrate_creation(
        self,
        description_path: str,
        auto_fill_gaps: bool = False,
        check_duplicates: bool = True,
        requirement: Optional[RequirementInfo] = None
    ) -> Dict[str, Any]:
        """
        Main orchestration method that intelligently creates components

        Args:
            description_path: Path to description file
            auto_fill_gaps: Whether to automatically create missing dependencies
            check_duplicates: Whether to check for existing components
            requirement: Optional requirement info for traceability

        Returns:
            Comprehensive creation report
        """
        print(f"🎯 meta.create - Orchestrating component creation from {description_path}\n")

        report = {
            "ok": True,
            "description_path": description_path,
            "component_type": None,
            "created_components": [],
            "skipped_components": [],
            "compatibility_analysis": None,
            "gaps": [],
            "recommendations": [],
            "errors": []
        }

        try:
            # Step 1: Determine what's being described
            print("📋 Step 1: Analyzing description...")
            desc_type = self.parse_description_type(description_path)
            print(f"   Detected types: Skill={desc_type['is_skill']}, "
                  f"Command={desc_type['is_command']}, Agent={desc_type['is_agent']}\n")

            # Step 2: Check for duplicates if requested
            if check_duplicates:
                print("🔍 Step 2: Checking for existing components...")

                # Parse name from description
                content = Path(description_path).read_text()
                name_match = None
                for line in content.split('\n'):
                    if line.strip().startswith('# Name:'):
                        name_match = line.replace('# Name:', '').strip()
                        break

                if name_match:
                    # Check all registries
                    for comp_type in ['skills', 'commands', 'agents']:
                        existing = self.check_duplicate(comp_type, name_match)
                        if existing:
                            print(f"   ⚠️ Found existing {comp_type[:-1]}: {name_match}")
                            report["skipped_components"].append({
                                "type": comp_type[:-1],
                                "name": name_match,
                                "reason": "Already exists",
                                "existing": existing
                            })

                if not report["skipped_components"]:
                    print("   ✅ No duplicates found\n")
                else:
                    print()

            # Step 3: Create components based on type
            print("🛠️ Step 3: Creating components...\n")

            # If it's a command, analyze complexity first
            if desc_type["is_command"]:
                print("   📊 Analyzing command complexity...")
                creator = meta_command.CommandCreator(base_dir=str(self.base_dir))

                # Read content for analysis
                with open(description_path) as f:
                    full_content = f.read()

                cmd_desc = creator.parse_description(description_path)
                analysis = creator.analyze_complexity(cmd_desc, full_content)

                print(f"      Recommended pattern: {analysis['recommended_pattern']}")
                print(f"      Should create skill: {analysis['should_create_skill']}\n")

                # Update desc_type based on analysis
                if analysis['should_create_skill']:
                    desc_type['is_skill'] = True

            # Create skill first if needed
            if desc_type["is_skill"]:
                print("   🔧 Creating skill...")
                skill_result = self.create_skill(description_path, requirement)

                if skill_result.get("errors"):
                    report["errors"].extend(skill_result["errors"])
                    print("   ⚠️ Skill creation had warnings\n")
                else:
                    print(f"   ✅ Skill '{skill_result['skill_name']}' created\n")
                    report["created_components"].append({
                        "type": "skill",
                        "name": skill_result["skill_name"],
                        "files": skill_result.get("created_files", [])
                    })

            # Create command if needed
            if desc_type["is_command"]:
                print("   📜 Creating command...")
                command_result = self.create_command(description_path, requirement)

                if command_result.get("ok"):
                    print(f"   ✅ Command '{command_result['command_name']}' created\n")
                    report["created_components"].append({
                        "type": "command",
                        "name": command_result["command_name"],
                        "manifest": command_result.get("manifest_file"),
                        "pattern": command_result.get("complexity_analysis", {}).get("recommended_pattern")
                    })
                else:
                    report["errors"].append(f"Command creation failed: {command_result.get('error')}")
                    print("   ❌ Command creation failed\n")

            # Create agent if needed
            if desc_type["is_agent"]:
                print("   🤖 Creating agent...")
                agent_result = self.create_agent(description_path, requirement)

                print(f"   ✅ Agent '{agent_result['name']}' created")
                print(f"      Skills: {', '.join(agent_result.get('skills', []))}\n")

                report["created_components"].append({
                    "type": "agent",
                    "name": agent_result["name"],
                    "files": [agent_result.get("agent_yaml"), agent_result.get("readme")],
                    "skills": agent_result.get("skills", [])
                })

                # Step 4: Validate compatibility for agents
                print("🔬 Step 4: Validating compatibility...\n")
                compatibility = self.validate_compatibility(agent_result["name"])

                if "error" not in compatibility:
                    report["compatibility_analysis"] = compatibility

                    # Check for gaps
                    gaps = compatibility.get("gaps", [])
                    if gaps:
                        print(f"   ⚠️ Found {len(gaps)} gap(s):")
                        for gap in gaps:
                            print(f"      • {gap['artifact']}: {gap['issue']}")
                            report["gaps"].append(gap)
                        print()

                        # Add recommendations
                        for gap in gaps:
                            report["recommendations"].append(
                                f"Create skill to produce '{gap['artifact']}' artifact"
                            )
                    else:
                        print("   ✅ No compatibility gaps found\n")

                    # Show compatible agents
                    if compatibility.get("can_feed_to"):
                        print(f"   ➡️ Can feed to {len(compatibility['can_feed_to'])} agent(s)")
                    if compatibility.get("can_receive_from"):
|
||||
print(f" ⬅️ Can receive from {len(compatibility['can_receive_from'])} agent(s)")
|
||||
print()
|
||||
|
||||
# Step 5: Auto-fill gaps if requested
|
||||
if auto_fill_gaps and report["gaps"]:
|
||||
print("🔧 Step 5: Auto-filling gaps...\n")
|
||||
for gap in report["gaps"]:
|
||||
print(f" TODO: Auto-create skill for '{gap['artifact']}'")
|
||||
# TODO: Implement auto-gap-filling
|
||||
print()
|
||||
|
||||
# Final summary
|
||||
print("=" * 80)
|
||||
print("✨ CREATION SUMMARY")
|
||||
print("=" * 80)
|
||||
|
||||
if report["created_components"]:
|
||||
print(f"\n✅ Created {len(report['created_components'])} component(s):")
|
||||
for comp in report["created_components"]:
|
||||
print(f" • {comp['type'].upper()}: {comp['name']}")
|
||||
|
||||
if report["skipped_components"]:
|
||||
print(f"\n⏭️ Skipped {len(report['skipped_components'])} component(s) (already exist):")
|
||||
for comp in report["skipped_components"]:
|
||||
print(f" • {comp['type'].upper()}: {comp['name']}")
|
||||
|
||||
if report["gaps"]:
|
||||
print(f"\n⚠️ Found {len(report['gaps'])} compatibility gap(s)")
|
||||
|
||||
if report["recommendations"]:
|
||||
print("\n💡 Recommendations:")
|
||||
for rec in report["recommendations"]:
|
||||
print(f" • {rec}")
|
||||
|
||||
print("\n" + "=" * 80 + "\n")
|
||||
|
||||
return report
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error during orchestration: {e}", exc_info=True)
|
||||
report["ok"] = False
|
||||
report["errors"].append(str(e))
|
||||
print(f"\n❌ Error: {e}\n")
|
||||
return report
|
||||
|
||||
|
||||
def main():
|
||||
"""CLI entry point"""
|
||||
import argparse
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description="meta.create - Intelligent component creation orchestrator"
|
||||
)
|
||||
parser.add_argument(
|
||||
"description",
|
||||
help="Path to component description file (.md or .json)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--auto-fill-gaps",
|
||||
action="store_true",
|
||||
help="Automatically create missing dependencies"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--skip-duplicate-check",
|
||||
action="store_true",
|
||||
help="Skip checking for existing components"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output-format",
|
||||
choices=["json", "yaml", "text"],
|
||||
default="text",
|
||||
help="Output format for final report"
|
||||
)
|
||||
|
||||
# Traceability arguments
|
||||
parser.add_argument(
|
||||
"--requirement-id",
|
||||
help="Requirement identifier (e.g., REQ-2025-001)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--requirement-description",
|
||||
help="What this component accomplishes"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--requirement-source",
|
||||
help="Source document"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--issue-id",
|
||||
help="Issue tracking ID (e.g., JIRA-123)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--requested-by",
|
||||
help="Who requested this"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--rationale",
|
||||
help="Why this is needed"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Create requirement info if provided
|
||||
requirement = None
|
||||
if args.requirement_id and args.requirement_description:
|
||||
requirement = RequirementInfo(
|
||||
id=args.requirement_id,
|
||||
description=args.requirement_description,
|
||||
source=args.requirement_source,
|
||||
issue_id=args.issue_id,
|
||||
requested_by=args.requested_by,
|
||||
rationale=args.rationale
|
||||
)
|
||||
|
||||
orchestrator = ComponentCreator()
|
||||
result = orchestrator.orchestrate_creation(
|
||||
description_path=args.description,
|
||||
auto_fill_gaps=args.auto_fill_gaps,
|
||||
check_duplicates=not args.skip_duplicate_check,
|
||||
requirement=requirement
|
||||
)
|
||||
|
||||
# Output final report in requested format
|
||||
if args.output_format == "json":
|
||||
print(json.dumps(result, indent=2))
|
||||
elif args.output_format == "yaml":
|
||||
print(yaml.dump(result, default_flow_style=False))
|
||||
|
||||
sys.exit(0 if result.get("ok") else 1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
442
agents/meta.hook/README.md
Normal file
@@ -0,0 +1,442 @@
|
||||
# meta.hook - Hook Creator Meta-Agent
|
||||
|
||||
Generates Claude Code hooks from natural language descriptions.
|
||||
|
||||
## Overview
|
||||
|
||||
**meta.hook** is a meta-agent that creates Claude Code hooks from simple description files. It generates hook configurations that execute commands in response to events like tool calls, errors, or user interactions.
|
||||
|
||||
**What it does:**
|
||||
- Parses hook descriptions (Markdown or JSON)
|
||||
- Generates `.claude/hooks.yaml` configurations
|
||||
- Validates event types and hook structure
|
||||
- Manages hook lifecycle (create, update, enable/disable)
|
||||
- Supports tool-specific filtering
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Create a Hook
|
||||
|
||||
```bash
|
||||
python3 agents/meta.hook/meta_hook.py examples/my_hook.md
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
🪝 meta.hook - Creating hook from examples/my_hook.md
|
||||
|
||||
✨ Hook 'pre-commit-lint' created successfully!
|
||||
|
||||
📄 Created/updated file:
|
||||
- .claude/hooks.yaml
|
||||
|
||||
✅ Hook 'pre-commit-lint' is ready to use
|
||||
Event: before-tool-call
|
||||
Command: npm run lint
|
||||
```
|
||||
|
||||
### Hook Description Format
|
||||
|
||||
Create a Markdown file:
|
||||
|
||||
```markdown
|
||||
# Name: pre-commit-lint
|
||||
|
||||
# Event: before-tool-call
|
||||
|
||||
# Tool Filter: git
|
||||
|
||||
# Description: Run linter before git commits
|
||||
|
||||
# Command: npm run lint
|
||||
|
||||
# Timeout: 30000
|
||||
|
||||
# Enabled: true
|
||||
```
|
||||
|
||||
Or use JSON format:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "pre-commit-lint",
|
||||
"event": "before-tool-call",
|
||||
"tool_filter": "git",
|
||||
"description": "Run linter before git commits",
|
||||
"command": "npm run lint",
|
||||
"timeout": 30000,
|
||||
"enabled": true
|
||||
}
|
||||
```
|
||||
|
||||
## Event Types
|
||||
|
||||
Supported Claude Code events:
|
||||
|
||||
- **before-tool-call** - Before any tool is executed
|
||||
- **after-tool-call** - After any tool completes
|
||||
- **on-error** - When a tool call fails
|
||||
- **user-prompt-submit** - When user submits a prompt
|
||||
- **assistant-response** - After assistant responds
|
||||
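The same validation runs before anything is written to disk. A minimal sketch of the check (the list mirrors `VALID_EVENTS` in `meta_hook.py`; the helper name is illustrative):

```python
# Mirrors VALID_EVENTS in agents/meta.hook/meta_hook.py
VALID_EVENTS = [
    "before-tool-call",
    "after-tool-call",
    "on-error",
    "user-prompt-submit",
    "assistant-response",
]

def validate_event(event: str) -> None:
    # Fail fast so a typo like "before-toolcall" never reaches hooks.yaml
    if event not in VALID_EVENTS:
        raise ValueError(
            f"Invalid event type: {event}. "
            f"Must be one of: {', '.join(VALID_EVENTS)}"
        )

validate_event("on-error")  # passes silently
```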
|
||||
## Generated Structure
|
||||
|
||||
meta.hook generates or updates `.claude/hooks.yaml`:
|
||||
|
||||
```yaml
|
||||
hooks:
|
||||
- name: pre-commit-lint
|
||||
event: before-tool-call
|
||||
command: npm run lint
|
||||
description: Run linter before git commits
|
||||
enabled: true
|
||||
tool_filter: git
|
||||
timeout: 30000
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Example 1: Pre-commit Linting
|
||||
|
||||
**Description file** (`lint_hook.md`):
|
||||
|
||||
```markdown
|
||||
# Name: pre-commit-lint
|
||||
|
||||
# Event: before-tool-call
|
||||
|
||||
# Tool Filter: git
|
||||
|
||||
# Description: Run linter before git commits to ensure code quality
|
||||
|
||||
# Command: npm run lint
|
||||
|
||||
# Timeout: 30000
|
||||
```
|
||||
|
||||
**Create hook:**
|
||||
|
||||
```bash
|
||||
python3 agents/meta.hook/meta_hook.py lint_hook.md
|
||||
```
|
||||
|
||||
### Example 2: Post-deployment Notification
|
||||
|
||||
**Description file** (`deploy_notify.json`):
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "deploy-notify",
|
||||
"event": "after-tool-call",
|
||||
"tool_filter": "deploy",
|
||||
"description": "Send notification after deployment",
|
||||
"command": "./scripts/notify-team.sh",
|
||||
"timeout": 10000
|
||||
}
|
||||
```
|
||||
|
||||
**Create hook:**
|
||||
|
||||
```bash
|
||||
python3 agents/meta.hook/meta_hook.py deploy_notify.json
|
||||
```
|
||||
|
||||
### Example 3: Error Logging
|
||||
|
||||
**Description file** (`error_logger.md`):
|
||||
|
||||
```markdown
|
||||
# Name: error-logger
|
||||
|
||||
# Event: on-error
|
||||
|
||||
# Description: Log errors to monitoring system
|
||||
|
||||
# Command: ./scripts/log-error.sh "{error}" "{tool}"
|
||||
|
||||
# Timeout: 5000
|
||||
|
||||
# Enabled: true
|
||||
```
|
||||
|
||||
**Create hook:**
|
||||
|
||||
```bash
|
||||
python3 agents/meta.hook/meta_hook.py error_logger.md
|
||||
```
|
||||
|
||||
## Hook Parameters
|
||||
|
||||
### Required
|
||||
|
||||
- **name** - Unique hook identifier
|
||||
- **event** - Trigger event type
|
||||
- **command** - Shell command to execute
|
||||
|
||||
### Optional
|
||||
|
||||
- **description** - What the hook does
|
||||
- **tool_filter** - Only trigger for specific tools (e.g., "git", "npm", "docker")
|
||||
- **enabled** - Whether hook is active (default: true)
|
||||
- **timeout** - Command timeout in milliseconds (default: none)
|
||||
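Only `name`, `event`, and `command` must appear in a description; the generator fills in `enabled: true` when it is omitted and emits the remaining fields only if they are present. A condensed sketch of that defaulting logic (see `generate_hooks_yaml` in `meta_hook.py`):

```python
from typing import Any, Dict

def build_hook_config(desc: Dict[str, Any]) -> Dict[str, Any]:
    # Required fields come through as-is
    config = {"name": desc["name"], "event": desc["event"], "command": desc["command"]}
    # enabled defaults to True; other optional fields appear only if provided
    config["enabled"] = desc.get("enabled", True)
    for key in ("description", "tool_filter", "timeout"):
        if key in desc:
            config[key] = desc[key]
    return config

print(build_hook_config({"name": "lint", "event": "before-tool-call", "command": "npm run lint"}))
```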
|
||||
## Tool Filters
|
||||
|
||||
Restrict hooks to specific tools:
|
||||
|
||||
```markdown
|
||||
# Tool Filter: git
|
||||
```
|
||||
|
||||
This hook only triggers for git-related tool calls.
|
||||
|
||||
Common tool filters:
|
||||
- `git` - Git operations
|
||||
- `npm` - NPM commands
|
||||
- `docker` - Docker commands
|
||||
- `python` - Python execution
|
||||
- `bash` - Shell commands
|
||||
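The exact matching semantics live inside Claude Code, but as a rough, illustrative model a hook fires when it is enabled, its event matches, and its tool filter (if any) matches the tool being invoked:

```python
from typing import Optional

def should_fire(hook: dict, event: str, tool: Optional[str]) -> bool:
    """Illustrative predicate only; real matching happens inside Claude Code."""
    if not hook.get("enabled", True):
        return False
    if hook["event"] != event:
        return False
    tool_filter = hook.get("tool_filter")
    # No filter means the hook runs for every tool
    return tool_filter is None or tool_filter == tool

hook = {"name": "pre-commit-lint", "event": "before-tool-call", "tool_filter": "git"}
print(should_fire(hook, "before-tool-call", "git"))  # True
print(should_fire(hook, "before-tool-call", "npm"))  # False
```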
|
||||
## Managing Hooks
|
||||
|
||||
### Update Existing Hook
|
||||
|
||||
Run meta.hook with the same hook name to update:
|
||||
|
||||
```bash
|
||||
python3 agents/meta.hook/meta_hook.py updated_hook.md
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
⚠️ Warning: Hook 'pre-commit-lint' already exists, updating...
|
||||
✨ Hook 'pre-commit-lint' created successfully!
|
||||
```
|
||||
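Updates are replace-by-name: the existing `hooks` list is loaded, any entry with the same name is dropped, and the new definition is appended. A condensed sketch of the logic in `meta_hook.py`:

```python
import yaml
from pathlib import Path

def upsert_hook(hooks_file: Path, new_hook: dict) -> None:
    # Load the existing config, tolerating a missing or empty file
    existing = yaml.safe_load(hooks_file.read_text()) if hooks_file.exists() else None
    if not isinstance(existing, dict) or not isinstance(existing.get("hooks"), list):
        existing = {"hooks": []}
    # Replace-by-name: drop any old entry that shares the new hook's name
    existing["hooks"] = [
        h for h in existing["hooks"]
        if not (isinstance(h, dict) and h.get("name") == new_hook["name"])
    ]
    existing["hooks"].append(new_hook)
    hooks_file.write_text(yaml.dump(existing, default_flow_style=False, sort_keys=False))
```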
|
||||
### Disable Hook
|
||||
|
||||
Set `Enabled: false` in description:
|
||||
|
||||
```markdown
|
||||
# Name: my-hook
|
||||
# Event: before-tool-call
|
||||
# Command: echo "test"
|
||||
# Enabled: false
|
||||
```
|
||||
|
||||
### Multiple Hooks
|
||||
|
||||
Create multiple hook descriptions and run meta.hook for each:
|
||||
|
||||
```bash
|
||||
for hook in hooks/*.md; do
|
||||
python3 agents/meta.hook/meta_hook.py "$hook"
|
||||
done
|
||||
```
|
||||
|
||||
## Integration
|
||||
|
||||
### With Claude Code
|
||||
|
||||
Hooks are automatically loaded by Claude Code from `.claude/hooks.yaml`.
|
||||
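Because the configuration is plain YAML, you can inspect what will be loaded with a few lines of Python (a convenience sketch, not part of meta.hook):

```python
import yaml
from pathlib import Path

# List the hooks Claude Code will load, with their trigger events
config = yaml.safe_load(Path(".claude/hooks.yaml").read_text()) or {}
for hook in config.get("hooks", []):
    status = "enabled" if hook.get("enabled", True) else "disabled"
    print(f"{hook['name']}: {hook['event']} -> {hook['command']} ({status})")
```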
|
||||
### With meta.agent
|
||||
|
||||
Create agents that use hooks:
|
||||
|
||||
```yaml
|
||||
name: ci.agent
|
||||
description: Continuous integration agent
|
||||
# Hooks will trigger during agent execution
|
||||
```
|
||||
|
||||
## Artifact Types
|
||||
|
||||
### Consumes
|
||||
|
||||
- **hook-description** - Natural language hook requirements
|
||||
- Pattern: `**/hook_description.md`
|
||||
- Format: Markdown or JSON
|
||||
|
||||
### Produces
|
||||
|
||||
- **hook-config** - Claude Code hook configuration
|
||||
- Pattern: `.claude/hooks.yaml`
|
||||
- Schema: `schemas/hook-config.json`
|
||||
|
||||
## Common Workflows
|
||||
|
||||
### Workflow 1: Create and Test Hook
|
||||
|
||||
```bash
|
||||
# 1. Create hook description
|
||||
cat > my_hook.md <<EOF
|
||||
# Name: test-runner
|
||||
# Event: after-tool-call
|
||||
# Tool Filter: git
|
||||
# Description: Run tests after git push
|
||||
# Command: npm test
|
||||
EOF
|
||||
|
||||
# 2. Generate hook
|
||||
python3 agents/meta.hook/meta_hook.py my_hook.md
|
||||
|
||||
# 3. Test hook (trigger the event)
|
||||
git add .
|
||||
git commit -m "test"
|
||||
```
|
||||
|
||||
### Workflow 2: Create Pre-commit Workflow
|
||||
|
||||
```bash
|
||||
# Create linting hook
|
||||
cat > lint_hook.md <<EOF
|
||||
# Name: lint
|
||||
# Event: before-tool-call
|
||||
# Tool Filter: git
|
||||
# Command: npm run lint
|
||||
EOF
|
||||
|
||||
python3 agents/meta.hook/meta_hook.py lint_hook.md
|
||||
|
||||
# Create test hook
|
||||
cat > test_hook.md <<EOF
|
||||
# Name: test
|
||||
# Event: before-tool-call
|
||||
# Tool Filter: git
|
||||
# Command: npm test
|
||||
EOF
|
||||
|
||||
python3 agents/meta.hook/meta_hook.py test_hook.md
|
||||
```
|
||||
|
||||
### Workflow 3: Error Monitoring
|
||||
|
||||
```bash
|
||||
# Create error notification hook
|
||||
cat > error_notify.md <<EOF
|
||||
# Name: error-notify
|
||||
# Event: on-error
|
||||
# Description: Send error notifications
|
||||
# Command: ./scripts/notify.sh
|
||||
# Timeout: 5000
|
||||
EOF
|
||||
|
||||
python3 agents/meta.hook/meta_hook.py error_notify.md
|
||||
```
|
||||
|
||||
## Tips & Best Practices
|
||||
|
||||
### Command Design
|
||||
|
||||
**Use explicit script paths, not bare command names:**
|
||||
```markdown
|
||||
# Good
|
||||
# Command: ./scripts/lint.sh
|
||||
|
||||
# Bad
|
||||
# Command: lint.sh
|
||||
```
|
||||
|
||||
**Set appropriate timeouts:**
|
||||
```markdown
|
||||
# Fast operations: 5-10 seconds
|
||||
# Timeout: 10000
|
||||
|
||||
# Longer operations: 30-60 seconds
|
||||
# Timeout: 60000
|
||||
```
|
||||
|
||||
**Handle errors gracefully:**
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# In your hook script
|
||||
set -e # Exit on error
|
||||
trap 'echo "Hook failed"' ERR
|
||||
```
|
||||
|
||||
### Tool Filters
|
||||
|
||||
Be specific with tool filters to avoid unnecessary executions:
|
||||
|
||||
```markdown
|
||||
# Specific
|
||||
# Tool Filter: git
|
||||
|
||||
# Too broad
|
||||
# (no tool filter - runs for ALL tools)
|
||||
```
|
||||
|
||||
### Testing Hooks
|
||||
|
||||
Test hooks before enabling:
|
||||
|
||||
```markdown
|
||||
# Enabled: false
|
||||
```
|
||||
|
||||
Then manually test the command, and enable once verified.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Hook not triggering
|
||||
|
||||
**Check event type:**
|
||||
```bash
|
||||
# Verify event is correct in .claude/hooks.yaml
|
||||
cat .claude/hooks.yaml
|
||||
```
|
||||
|
||||
**Check tool filter:**
|
||||
```markdown
|
||||
# If using tool filter, ensure it matches the tool being called
|
||||
# Tool Filter: git
|
||||
```
|
||||
|
||||
### Command fails
|
||||
|
||||
**Check command path:**
|
||||
```bash
|
||||
# Test command manually
|
||||
npm run lint
|
||||
|
||||
# If fails, fix path or installation
|
||||
```
|
||||
|
||||
**Check timeout:**
|
||||
```markdown
|
||||
# Increase timeout for slow commands
|
||||
# Timeout: 60000
|
||||
```
|
||||
|
||||
### Hook already exists warning
|
||||
|
||||
This is normal when updating hooks. The old version is replaced with the new one.
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
meta.hook
|
||||
├─ Input: hook-description (Markdown/JSON)
|
||||
├─ Parser: extract name, event, command, filters
|
||||
├─ Generator: create/update hooks.yaml
|
||||
├─ Validator: check event types and structure
|
||||
└─ Output: .claude/hooks.yaml configuration
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
|
||||
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
|
||||
- [hook-description schema](../../schemas/hook-description.json)
|
||||
- [hook-config schema](../../schemas/hook-config.json)
|
||||
|
||||
## How Claude Uses This
|
||||
|
||||
Claude can:
|
||||
1. **Create hooks on demand** - "Create a pre-commit linting hook"
|
||||
2. **Automate workflows** - "Add error logging for all failures"
|
||||
3. **Build CI/CD pipelines** - "Create hooks for test, lint, and deploy"
|
||||
4. **Monitor executions** - "Add notification hooks for important events"
|
||||
|
||||
meta.hook enables powerful event-driven automation in Claude Code!
|
||||
64
agents/meta.hook/agent.yaml
Normal file
@@ -0,0 +1,64 @@
|
||||
name: meta.hook
|
||||
version: 0.1.0
|
||||
description: Hook creator meta-agent that generates Claude Code hooks from descriptions
|
||||
status: draft
|
||||
reasoning_mode: iterative
|
||||
capabilities:
|
||||
- Translate natural language specifications into validated hook manifests
|
||||
- Recommend appropriate hook events, commands, and execution patterns
|
||||
- Simulate and document hook behavior for developer adoption
|
||||
type: meta-agent
|
||||
skills_available:
|
||||
- hook.define
|
||||
- hook.register
|
||||
- hook.simulate
|
||||
|
||||
artifact_metadata:
|
||||
consumes:
|
||||
- type: hook-description
|
||||
required: true
|
||||
produces:
|
||||
- type: hook-config
|
||||
|
||||
system_prompt: |
|
||||
You are meta.hook, a specialized meta-agent that creates Claude Code hooks from natural language descriptions.
|
||||
|
||||
Your role:
|
||||
1. Parse hook descriptions (Markdown or JSON format)
|
||||
2. Generate hook configurations (.claude/hooks.yaml)
|
||||
3. Validate hook names and event types
|
||||
4. Document hook usage
|
||||
|
||||
Hook description format:
|
||||
- Name: Descriptive hook identifier
|
||||
- Event: Trigger event (before-tool-call, after-tool-call, on-error, etc.)
|
||||
- Description: What the hook does
|
||||
- Command: Shell command to execute
|
||||
- Tool Filter (optional): Only trigger for specific tools
|
||||
- Enabled (optional): Whether hook is active (default: true)
|
||||
|
||||
Generated hooks.yaml format:
|
||||
```yaml
|
||||
hooks:
|
||||
- name: hook-name
|
||||
event: trigger-event
|
||||
description: What it does
|
||||
command: shell command
|
||||
enabled: true
|
||||
tool_filter: tool-name # optional
|
||||
timeout: 30000 # optional, in milliseconds
|
||||
```
|
||||
|
||||
Event types:
|
||||
- before-tool-call: Before any tool is called
|
||||
- after-tool-call: After any tool completes
|
||||
- on-error: When a tool call fails
|
||||
- user-prompt-submit: When user submits a prompt
|
||||
- assistant-response: After assistant responds
|
||||
|
||||
Always:
|
||||
- Validate event types
|
||||
- Provide clear descriptions
|
||||
- Set reasonable timeouts
|
||||
- Document tool filters
|
||||
- Include usage examples in generated documentation
|
||||
349
agents/meta.hook/meta_hook.py
Executable file
@@ -0,0 +1,349 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
meta.hook - Hook Creator Meta-Agent
|
||||
|
||||
Generates Claude Code hooks from natural language descriptions.
|
||||
|
||||
Usage:
|
||||
python3 agents/meta.hook/meta_hook.py <hook_description_file>
|
||||
|
||||
Examples:
|
||||
python3 agents/meta.hook/meta_hook.py examples/lint_hook.md
|
||||
python3 agents/meta.hook/meta_hook.py examples/notify_hook.json
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
import yaml
|
||||
import re
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Any, Optional
|
||||
|
||||
# Add parent directory to path
|
||||
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))
|
||||
|
||||
from betty.config import BASE_DIR
|
||||
from betty.logging_utils import setup_logger
|
||||
from betty.traceability import get_tracer, RequirementInfo
|
||||
|
||||
logger = setup_logger(__name__)
|
||||
|
||||
|
||||
class HookCreator:
|
||||
"""Creates Claude Code hooks from descriptions"""
|
||||
|
||||
VALID_EVENTS = [
|
||||
"before-tool-call",
|
||||
"after-tool-call",
|
||||
"on-error",
|
||||
"user-prompt-submit",
|
||||
"assistant-response"
|
||||
]
|
||||
|
||||
def __init__(self, base_dir: str = BASE_DIR):
|
||||
"""Initialize hook creator"""
|
||||
self.base_dir = Path(base_dir)
|
||||
self.hooks_dir = self.base_dir / ".claude"
|
||||
|
||||
def parse_description(self, description_path: str) -> Dict[str, Any]:
|
||||
"""
|
||||
Parse hook description from Markdown or JSON file
|
||||
|
||||
Args:
|
||||
description_path: Path to description file
|
||||
|
||||
Returns:
|
||||
Dict with hook configuration
|
||||
"""
|
||||
path = Path(description_path)
|
||||
|
||||
if not path.exists():
|
||||
raise FileNotFoundError(f"Description file not found: {description_path}")
|
||||
|
||||
# Read file
|
||||
content = path.read_text()
|
||||
|
||||
# JSON descriptions load directly; Markdown ones are parsed below.
# Both paths share the required-field and event validation at the end.
hook_desc = json.loads(content) if path.suffix == ".json" else {}
|
||||
|
||||
# Parse Markdown format
|
||||
|
||||
|
||||
# Extract fields
|
||||
patterns = {
|
||||
"name": r"#\s*Name:\s*(.+)",
|
||||
"event": r"#\s*Event:\s*(.+)",
|
||||
"description": r"#\s*Description:\s*(.+)",
|
||||
"command": r"#\s*Command:\s*(.+)",
|
||||
"tool_filter": r"#\s*Tool\s*Filter:\s*(.+)",
|
||||
"enabled": r"#\s*Enabled:\s*(.+)",
|
||||
"timeout": r"#\s*Timeout:\s*(\d+)"
|
||||
}
|
||||
|
||||
for field, pattern in patterns.items():
|
||||
match = re.search(pattern, content, re.IGNORECASE)
|
||||
if match:
|
||||
value = match.group(1).strip()
|
||||
|
||||
# Convert types
|
||||
if field == "enabled":
|
||||
value = value.lower() in ("true", "yes", "1")
|
||||
elif field == "timeout":
|
||||
value = int(value)
|
||||
|
||||
hook_desc[field] = value
|
||||
|
||||
# Validate required fields
|
||||
required = ["name", "event", "command"]
|
||||
missing = [f for f in required if f not in hook_desc]
|
||||
if missing:
|
||||
raise ValueError(f"Missing required fields: {', '.join(missing)}")
|
||||
|
||||
# Validate event type
|
||||
if hook_desc["event"] not in self.VALID_EVENTS:
|
||||
raise ValueError(
|
||||
f"Invalid event type: {hook_desc['event']}. "
|
||||
f"Must be one of: {', '.join(self.VALID_EVENTS)}"
|
||||
)
|
||||
|
||||
return hook_desc
|
||||
|
||||
def generate_hooks_yaml(self, hook_desc: Dict[str, Any]) -> str:
|
||||
"""
|
||||
Generate hooks.yaml configuration
|
||||
|
||||
Args:
|
||||
hook_desc: Parsed hook description
|
||||
|
||||
Returns:
|
||||
YAML string
|
||||
"""
|
||||
hook_config = {
|
||||
"name": hook_desc["name"],
|
||||
"event": hook_desc["event"],
|
||||
"command": hook_desc["command"]
|
||||
}
|
||||
|
||||
# Add optional fields
|
||||
if "description" in hook_desc:
|
||||
hook_config["description"] = hook_desc["description"]
|
||||
|
||||
if "enabled" in hook_desc:
|
||||
hook_config["enabled"] = hook_desc["enabled"]
|
||||
else:
|
||||
hook_config["enabled"] = True
|
||||
|
||||
if "tool_filter" in hook_desc:
|
||||
hook_config["tool_filter"] = hook_desc["tool_filter"]
|
||||
|
||||
if "timeout" in hook_desc:
|
||||
hook_config["timeout"] = hook_desc["timeout"]
|
||||
|
||||
# Wrap in hooks array
|
||||
hooks_yaml = {"hooks": [hook_config]}
|
||||
|
||||
return yaml.dump(hooks_yaml, default_flow_style=False, sort_keys=False)
|
||||
|
||||
def create_hook(
|
||||
self,
|
||||
description_path: str,
|
||||
requirement: Optional[RequirementInfo] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Create hook from description file
|
||||
|
||||
Args:
|
||||
description_path: Path to description file
|
||||
requirement: Optional requirement information for traceability
|
||||
|
||||
Returns:
|
||||
Dict with creation results
|
||||
"""
|
||||
try:
|
||||
print(f"🪝 meta.hook - Creating hook from {description_path}\n")
|
||||
|
||||
# Parse description
|
||||
hook_desc = self.parse_description(description_path)
|
||||
|
||||
# Generate hooks.yaml
|
||||
hooks_yaml = self.generate_hooks_yaml(hook_desc)
|
||||
|
||||
# Ensure .claude directory exists
|
||||
self.hooks_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Write hooks.yaml (or append if exists)
|
||||
hooks_file = self.hooks_dir / "hooks.yaml"
|
||||
|
||||
if hooks_file.exists():
|
||||
# Load existing hooks
|
||||
existing = yaml.safe_load(hooks_file.read_text())
|
||||
if not existing or not isinstance(existing, dict):
|
||||
existing = {"hooks": []}
|
||||
if "hooks" not in existing:
|
||||
existing["hooks"] = []
|
||||
if not isinstance(existing["hooks"], list):
|
||||
existing["hooks"] = []
|
||||
|
||||
# Add new hook
|
||||
new_hook = yaml.safe_load(hooks_yaml)["hooks"][0]
|
||||
|
||||
# Check for duplicate
|
||||
hook_names = [h.get("name") for h in existing["hooks"] if isinstance(h, dict)]
|
||||
if new_hook["name"] in hook_names:
|
||||
print(f"⚠️ Warning: Hook '{new_hook['name']}' already exists, updating...")
|
||||
# Remove old version
|
||||
existing["hooks"] = [h for h in existing["hooks"] if h["name"] != new_hook["name"]]
|
||||
|
||||
existing["hooks"].append(new_hook)
|
||||
hooks_yaml = yaml.dump(existing, default_flow_style=False, sort_keys=False)
|
||||
|
||||
# Write file
|
||||
hooks_file.write_text(hooks_yaml)
|
||||
|
||||
print(f"✨ Hook '{hook_desc['name']}' created successfully!\n")
|
||||
print(f"📄 Created/updated file:")
|
||||
print(f" - {hooks_file}\n")
|
||||
print(f"✅ Hook '{hook_desc['name']}' is ready to use")
|
||||
print(f" Event: {hook_desc['event']}")
|
||||
print(f" Command: {hook_desc['command']}")
|
||||
|
||||
result = {
|
||||
"ok": True,
|
||||
"status": "success",
|
||||
"hook_name": hook_desc["name"],
|
||||
"hooks_file": str(hooks_file)
|
||||
}
|
||||
|
||||
# Log traceability if requirement provided
|
||||
trace_id = None
|
||||
if requirement:
|
||||
try:
|
||||
tracer = get_tracer()
|
||||
|
||||
# Create component ID from hook name
|
||||
component_id = f"hook.{hook_desc['name'].replace('-', '_')}"
|
||||
|
||||
trace_id = tracer.log_creation(
|
||||
component_id=component_id,
|
||||
component_name=hook_desc["name"],
|
||||
component_type="hook",
|
||||
component_version="0.1.0",
|
||||
component_file_path=str(hooks_file),
|
||||
input_source_path=description_path,
|
||||
created_by_tool="meta.hook",
|
||||
created_by_version="0.1.0",
|
||||
requirement=requirement,
|
||||
tags=["hook", "auto-generated", hook_desc["event"]],
|
||||
project="Betty Framework"
|
||||
)
|
||||
|
||||
# Log validation check
|
||||
validation_details = {
|
||||
"checks_performed": [
|
||||
{"name": "hook_structure", "status": "passed"},
|
||||
{"name": "event_validation", "status": "passed",
|
||||
"message": f"Valid event type: {hook_desc['event']}"}
|
||||
]
|
||||
}
|
||||
|
||||
# Check for tool filter
|
||||
if hook_desc.get("tool_filter"):
|
||||
validation_details["checks_performed"].append({
|
||||
"name": "tool_filter_validation",
|
||||
"status": "passed",
|
||||
"message": f"Tool filter: {hook_desc['tool_filter']}"
|
||||
})
|
||||
|
||||
tracer.log_verification(
|
||||
component_id=component_id,
|
||||
check_type="validation",
|
||||
tool="meta.hook",
|
||||
result="passed",
|
||||
details=validation_details
|
||||
)
|
||||
|
||||
result["trace_id"] = trace_id
|
||||
result["component_id"] = component_id
|
||||
|
||||
except Exception as e:
|
||||
print(f"⚠️ Warning: Could not log traceability: {e}")
|
||||
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ Error creating hook: {e}")
|
||||
logger.error(f"Error creating hook: {e}", exc_info=True)
|
||||
return {
|
||||
"ok": False,
|
||||
"status": "failed",
|
||||
"error": str(e)
|
||||
}
|
||||
|
||||
|
||||
def main():
|
||||
"""CLI entry point"""
|
||||
import argparse
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description="meta.hook - Create hooks from descriptions"
|
||||
)
|
||||
parser.add_argument(
|
||||
"description",
|
||||
help="Path to hook description file (.md or .json)"
|
||||
)
|
||||
|
||||
# Traceability arguments
|
||||
parser.add_argument(
|
||||
"--requirement-id",
|
||||
help="Requirement identifier (e.g., REQ-2025-001)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--requirement-description",
|
||||
help="What this hook accomplishes"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--requirement-source",
|
||||
help="Source document"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--issue-id",
|
||||
help="Issue tracking ID (e.g., JIRA-123)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--requested-by",
|
||||
help="Who requested this"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--rationale",
|
||||
help="Why this is needed"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Create requirement info if provided
|
||||
requirement = None
|
||||
if args.requirement_id and args.requirement_description:
|
||||
requirement = RequirementInfo(
|
||||
id=args.requirement_id,
|
||||
description=args.requirement_description,
|
||||
source=args.requirement_source,
|
||||
issue_id=args.issue_id,
|
||||
requested_by=args.requested_by,
|
||||
rationale=args.rationale
|
||||
)
|
||||
|
||||
creator = HookCreator()
|
||||
result = creator.create_hook(args.description, requirement=requirement)
|
||||
|
||||
# Display traceability info if available
|
||||
if result.get("trace_id"):
|
||||
print(f"\n📝 Traceability: {result['trace_id']}")
|
||||
print(f" View trace: python3 betty/trace_cli.py show {result['component_id']}")
|
||||
|
||||
sys.exit(0 if result.get("ok") else 1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
647
agents/meta.skill/README.md
Normal file
@@ -0,0 +1,647 @@
|
||||
# meta.skill - Skill Creator Meta-Agent
|
||||
|
||||
Generates complete Betty skills from natural language descriptions.
|
||||
|
||||
## Overview
|
||||
|
||||
**meta.skill** is a meta-agent that creates fully functional skills from simple description files. It generates skill definitions, Python implementations, tests, and documentation, following Betty Framework conventions.
|
||||
|
||||
**What it does:**
|
||||
- Parses skill descriptions (Markdown or JSON)
|
||||
- Generates `skill.yaml` configurations
|
||||
- Creates Python implementation stubs
|
||||
- Generates test templates
|
||||
- Creates comprehensive README documentation
|
||||
- Validates skill names and structure
|
||||
- Registers artifact metadata
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Create a Skill
|
||||
|
||||
```bash
|
||||
# Create skill from description
|
||||
python3 agents/meta.skill/meta_skill.py examples/my_skill_description.md
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
🛠️ meta.skill - Creating skill from examples/my_skill_description.md
|
||||
|
||||
✨ Skill 'data.transform' created successfully!
|
||||
|
||||
📄 Created files:
|
||||
- skills/data.transform/skill.yaml
|
||||
- skills/data.transform/data_transform.py
|
||||
- skills/data.transform/test_data_transform.py
|
||||
- skills/data.transform/README.md
|
||||
|
||||
✅ Skill 'data.transform' is ready to use
|
||||
Add to agent skills_available to use it.
|
||||
```
|
||||
|
||||
### Skill Description Format
|
||||
|
||||
Create a Markdown file with this structure:
|
||||
|
||||
```markdown
|
||||
# Name: domain.action
|
||||
|
||||
# Purpose:
|
||||
Brief description of what the skill does
|
||||
|
||||
# Inputs:
|
||||
- input_parameter_1
|
||||
- input_parameter_2 (optional)
|
||||
|
||||
# Outputs:
|
||||
- output_file_1.json
|
||||
- output_file_2.yaml
|
||||
|
||||
# Permissions:
|
||||
- filesystem:read
|
||||
- filesystem:write
|
||||
|
||||
# Produces Artifacts:
|
||||
- artifact-type-1
|
||||
- artifact-type-2
|
||||
|
||||
# Consumes Artifacts:
|
||||
- artifact-type-3
|
||||
|
||||
# Implementation Notes:
|
||||
Detailed guidance for implementing the skill logic
|
||||
```
|
||||
|
||||
Or use JSON format:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "domain.action",
|
||||
"purpose": "Brief description",
|
||||
"inputs": ["param1", "param2"],
|
||||
"outputs": ["output.json"],
|
||||
"permissions": ["filesystem:read"],
|
||||
"artifact_produces": ["artifact-type-1"],
|
||||
"artifact_consumes": ["artifact-type-2"],
|
||||
"implementation_notes": "Implementation guidance"
|
||||
}
|
||||
```
|
||||
|
||||
## Generated Structure
|
||||
|
||||
For a skill named `data.transform`, meta.skill generates:
|
||||
|
||||
```
|
||||
skills/data.transform/
|
||||
├── skill.yaml # Skill configuration
|
||||
├── data_transform.py # Python implementation
|
||||
├── test_data_transform.py # Test suite
|
||||
└── README.md # Documentation
|
||||
```
|
||||
|
||||
### skill.yaml
|
||||
|
||||
Complete skill configuration following Betty conventions:
|
||||
|
||||
```yaml
|
||||
name: data.transform
|
||||
version: 0.1.0
|
||||
description: Transform data between formats
|
||||
inputs:
|
||||
- input_file
|
||||
- output_format
|
||||
outputs:
|
||||
- transformed_data.json
|
||||
status: active
|
||||
permissions:
|
||||
- filesystem:read
|
||||
- filesystem:write
|
||||
entrypoints:
|
||||
- command: /data/transform
|
||||
handler: data_transform.py
|
||||
runtime: python
|
||||
description: Transform data between formats
|
||||
artifact_metadata:
|
||||
produces:
|
||||
- type: transformed-data
|
||||
consumes:
|
||||
- type: raw-data
|
||||
```
|
||||
|
||||
### Implementation Stub
|
||||
|
||||
Python implementation with:
|
||||
- Proper imports and logging
|
||||
- Class structure
|
||||
- execute() method with typed parameters
|
||||
- CLI entry point with argparse
|
||||
- Error handling
|
||||
- Output formatting (JSON/YAML)
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
data.transform - Transform data between formats
|
||||
|
||||
Generated by meta.skill
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
import yaml
|
||||
from pathlib import Path
|
||||
from typing import Dict, List, Any, Optional
|
||||
|
||||
from betty.config import BASE_DIR
|
||||
from betty.logging_utils import setup_logger
|
||||
|
||||
logger = setup_logger(__name__)
|
||||
|
||||
|
||||
class DataTransform:
|
||||
"""Transform data between formats"""
|
||||
|
||||
def __init__(self, base_dir: str = BASE_DIR):
|
||||
"""Initialize skill"""
|
||||
self.base_dir = Path(base_dir)
|
||||
|
||||
def execute(self, input_file: Optional[str] = None,
|
||||
output_format: Optional[str] = None) -> Dict[str, Any]:
|
||||
"""Execute the skill"""
|
||||
try:
|
||||
logger.info("Executing data.transform...")
|
||||
|
||||
# TODO: Implement skill logic here
|
||||
# Implementation notes: [your notes here]
|
||||
|
||||
result = {
|
||||
"ok": True,
|
||||
"status": "success",
|
||||
"message": "Skill executed successfully"
|
||||
}
|
||||
|
||||
logger.info("Skill completed successfully")
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error executing skill: {e}")
|
||||
return {
|
||||
"ok": False,
|
||||
"status": "failed",
|
||||
"error": str(e)
|
||||
}
|
||||
|
||||
|
||||
def main():
|
||||
"""CLI entry point"""
|
||||
import argparse
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Transform data between formats"
|
||||
)
|
||||
|
||||
parser.add_argument("--input-file", help="input_file")
|
||||
parser.add_argument("--output-format", help="output_format")
|
||||
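# Report format for this CLI's printed result (distinct from the skill's own output_format input)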
parser.add_argument(
|
||||
"--output-format",
|
||||
choices=["json", "yaml"],
|
||||
default="json",
|
||||
help="Output format"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
skill = DataTransform()
|
||||
result = skill.execute(
|
||||
input_file=args.input_file,
|
||||
output_format=args.output_format,
|
||||
)
|
||||
|
||||
if args.output_format == "json":
|
||||
print(json.dumps(result, indent=2))
|
||||
else:
|
||||
print(yaml.dump(result, default_flow_style=False))
|
||||
|
||||
sys.exit(0 if result.get("ok") else 1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
```
|
||||
|
||||
### Test Template
|
||||
|
||||
pytest-based test suite:
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
"""Tests for data.transform"""
|
||||
|
||||
import pytest
|
||||
import sys
|
||||
import os
|
||||
from pathlib import Path
|
||||
|
||||
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))
|
||||
|
||||
from skills.data_transform import data_transform
|
||||
|
||||
|
||||
class TestDataTransform:
|
||||
"""Tests for DataTransform"""
|
||||
|
||||
def setup_method(self):
|
||||
"""Setup test fixtures"""
|
||||
self.skill = data_transform.DataTransform()
|
||||
|
||||
def test_initialization(self):
|
||||
"""Test skill initializes correctly"""
|
||||
assert self.skill is not None
|
||||
assert self.skill.base_dir is not None
|
||||
|
||||
def test_execute_basic(self):
|
||||
"""Test basic execution"""
|
||||
result = self.skill.execute()
|
||||
assert result is not None
|
||||
assert "ok" in result
|
||||
assert "status" in result
|
||||
|
||||
def test_execute_success(self):
|
||||
"""Test successful execution"""
|
||||
result = self.skill.execute()
|
||||
assert result["ok"] is True
|
||||
assert result["status"] == "success"
|
||||
|
||||
# TODO: Add more specific tests
|
||||
|
||||
|
||||
def test_cli_help(capsys):
|
||||
"""Test CLI help message"""
|
||||
sys.argv = ["data_transform.py", "--help"]
|
||||
|
||||
with pytest.raises(SystemExit) as exc_info:
|
||||
data_transform.main()
|
||||
|
||||
assert exc_info.value.code == 0
|
||||
```
|
||||
|
||||
## Skill Naming Convention
|
||||
|
||||
Skills must follow the `domain.action` format:
|
||||
- **domain**: Category (e.g., `data`, `api`, `file`, `text`)
|
||||
- **action**: Operation (e.g., `validate`, `transform`, `parse`)
|
||||
- Use only lowercase letters and numbers (no hyphens, underscores, or special characters)
|
||||
|
||||
Valid examples:
|
||||
- ✅ `data.validate`
|
||||
- ✅ `api.test`
|
||||
- ✅ `file.compress`
|
||||
- ✅ `text.summarize`
|
||||
|
||||
Invalid examples:
|
||||
- ❌ `data.validate-json` (hyphen not allowed)
|
||||
- ❌ `data_validate` (underscore not allowed)
|
||||
- ❌ `DataValidate` (uppercase not allowed)
|
||||
- ❌ `validate` (missing domain)
|
||||
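A compact way to express the rule (illustrative; the pattern also accepts the longer dotted names used elsewhere in the framework, such as `artifact.validate.types`):

```python
import re

# One or more lowercase alphanumeric segments separated by dots, at least two segments
SKILL_NAME_RE = re.compile(r"^[a-z0-9]+(\.[a-z0-9]+)+$")

for name in ["data.validate", "artifact.validate.types", "data.validate-json", "data_validate", "validate"]:
    verdict = "valid" if SKILL_NAME_RE.fullmatch(name) else "invalid"
    print(f"{name}: {verdict}")
```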
|
||||
## Usage Examples
|
||||
|
||||
### Example 1: JSON Validator
|
||||
|
||||
**Description file** (`json_validator.md`):
|
||||
|
||||
```markdown
|
||||
# Name: data.validatejson
|
||||
|
||||
# Purpose:
|
||||
Validates JSON files against JSON Schema definitions
|
||||
|
||||
# Inputs:
|
||||
- json_file_path
|
||||
- schema_file_path (optional)
|
||||
|
||||
# Outputs:
|
||||
- validation_result.json
|
||||
|
||||
# Permissions:
|
||||
- filesystem:read
|
||||
|
||||
# Produces Artifacts:
|
||||
- validation-report
|
||||
|
||||
# Implementation Notes:
|
||||
Use Python's jsonschema library for validation
|
||||
```
|
||||
|
||||
**Create skill:**
|
||||
|
||||
```bash
|
||||
python3 agents/meta.skill/meta_skill.py json_validator.md
|
||||
```
|
||||
|
||||
### Example 2: API Tester
|
||||
|
||||
**Description file** (`api_tester.json`):
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "api.test",
|
||||
"purpose": "Test API endpoints and generate reports",
|
||||
"inputs": ["openapi_spec_path", "base_url"],
|
||||
"outputs": ["test_results.json"],
|
||||
"permissions": ["network:http"],
|
||||
"artifact_produces": ["test-report"],
|
||||
"artifact_consumes": ["openapi-spec"],
|
||||
"implementation_notes": "Use requests library to test each endpoint"
|
||||
}
|
||||
```
|
||||
|
||||
**Create skill:**
|
||||
|
||||
```bash
|
||||
python3 agents/meta.skill/meta_skill.py api_tester.json
|
||||
```
|
||||
|
||||
### Example 3: File Compressor
|
||||
|
||||
**Description file** (`file_compressor.md`):
|
||||
|
||||
```markdown
|
||||
# Name: file.compress
|
||||
|
||||
# Purpose:
|
||||
Compress files using various algorithms
|
||||
|
||||
# Inputs:
|
||||
- input_path
|
||||
- compression_type (gzip, zip, tar.gz)
|
||||
|
||||
# Outputs:
|
||||
- compressed_file
|
||||
|
||||
# Permissions:
|
||||
- filesystem:read
|
||||
- filesystem:write
|
||||
|
||||
# Implementation Notes:
|
||||
Support gzip, zip, and tar.gz formats using Python standard library
|
||||
```
|
||||
|
||||
**Create skill:**
|
||||
|
||||
```bash
|
||||
python3 agents/meta.skill/meta_skill.py file_compressor.md
|
||||
```
|
||||
|
||||
## Integration
|
||||
|
||||
### With meta.agent
|
||||
|
||||
Create an agent that uses the skill:
|
||||
|
||||
```yaml
|
||||
name: data.validator
|
||||
description: Data validation agent
|
||||
skills_available:
|
||||
- data.validatejson # Skill created by meta.skill
|
||||
```
|
||||
|
||||
### With plugin.sync
|
||||
|
||||
Sync skills to plugin format:
|
||||
|
||||
```bash
|
||||
python3 skills/plugin.sync/plugin_sync.py
|
||||
```
|
||||
|
||||
This converts `skill.yaml` to commands in `.claude-plugin/plugin.yaml`.
|
||||
|
||||
## Artifact Types
|
||||
|
||||
### Consumes
|
||||
|
||||
- **skill-description** - Natural language skill requirements
|
||||
- Pattern: `**/skill_description.md`
|
||||
- Format: Markdown or JSON
|
||||
|
||||
### Produces
|
||||
|
||||
- **skill-definition** - Complete skill configuration
|
||||
- Pattern: `skills/*/skill.yaml`
|
||||
- Schema: `schemas/skill-definition.json`
|
||||
|
||||
- **skill-implementation** - Python implementation code
|
||||
- Pattern: `skills/*/[skill_module].py`
|
||||
|
||||
- **skill-tests** - Test suite
|
||||
- Pattern: `skills/*/test_[skill_module].py`
|
||||
|
||||
- **skill-documentation** - README documentation
|
||||
- Pattern: `skills/*/README.md`
|
||||
|
||||
## Common Workflows
|
||||
|
||||
### Workflow 1: Create and Test Skill
|
||||
|
||||
```bash
|
||||
# 1. Create skill description
|
||||
cat > my_skill.md <<EOF
|
||||
# Name: data.parse
|
||||
# Purpose: Parse structured data from text
|
||||
# Inputs:
|
||||
- input_text
|
||||
# Outputs:
|
||||
- parsed_data.json
|
||||
# Permissions:
|
||||
- filesystem:write
|
||||
EOF
|
||||
|
||||
# 2. Generate skill
|
||||
python3 agents/meta.skill/meta_skill.py my_skill.md
|
||||
|
||||
# 3. Implement logic (edit the generated file)
|
||||
vim skills/data.parse/data_parse.py
|
||||
|
||||
# 4. Run tests
|
||||
pytest skills/data.parse/test_data_parse.py -v
|
||||
|
||||
# 5. Test CLI
|
||||
python3 skills/data.parse/data_parse.py --help
|
||||
```
|
||||
|
||||
### Workflow 2: Create Skill for Agent
|
||||
|
||||
```bash
|
||||
# 1. Create skill
|
||||
python3 agents/meta.skill/meta_skill.py api_analyzer_skill.md
|
||||
|
||||
# 2. Add to agent
|
||||
echo " - api.analyze" >> agents/api.agent/agent.yaml
|
||||
|
||||
# 3. Sync to plugin
|
||||
python3 skills/plugin.sync/plugin_sync.py
|
||||
```
|
||||
|
||||
### Workflow 3: Batch Create Skills
|
||||
|
||||
```bash
|
||||
# Create multiple skills
|
||||
for desc in skills_to_create/*.md; do
|
||||
echo "Creating skill from $desc..."
|
||||
python3 agents/meta.skill/meta_skill.py "$desc"
|
||||
done
|
||||
```
|
||||
|
||||
## Tips & Best Practices
|
||||
|
||||
### Skill Descriptions
|
||||
|
||||
**Be specific about purpose:**
|
||||
```markdown
|
||||
# Good
|
||||
# Purpose: Validate JSON against JSON Schema Draft 07
|
||||
|
||||
# Bad
|
||||
# Purpose: Validate stuff
|
||||
```
|
||||
|
||||
**Include implementation notes:**
|
||||
```markdown
|
||||
# Implementation Notes:
|
||||
Use the jsonschema library. Support Draft 07 schemas.
|
||||
Provide detailed error messages with line numbers.
|
||||
```
|
||||
|
||||
**Specify optional parameters:**
|
||||
```markdown
|
||||
# Inputs:
|
||||
- required_param
|
||||
- optional_param (optional)
|
||||
- another_optional (optional, defaults to 'value')
|
||||
```
|
||||
|
||||
### Parameter Naming
|
||||
|
||||
Parameters are automatically sanitized:
|
||||
- Special characters removed (except `-`, `_`, spaces)
|
||||
- Converted to lowercase
|
||||
- Spaces and hyphens become underscores
|
||||
|
||||
Example conversions:
|
||||
- `"Schema File Path (optional)"` → `schema_file_path_optional`
|
||||
- `"API-Key"` → `api_key`
|
||||
- `"Input Data"` → `input_data`
|
||||
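The conversion can be pictured as follows (a sketch consistent with the examples above; the actual implementation lives in `meta_skill.py`):

```python
import re

def sanitize_param(name: str) -> str:
    # Keep letters, digits, hyphens, underscores, and spaces; drop everything else
    cleaned = re.sub(r"[^A-Za-z0-9 _-]", "", name)
    # Lowercase, then turn runs of spaces and hyphens into underscores
    cleaned = re.sub(r"[ -]+", "_", cleaned.lower())
    return cleaned.strip("_")

print(sanitize_param("Schema File Path (optional)"))  # schema_file_path_optional
print(sanitize_param("API-Key"))                      # api_key
print(sanitize_param("Input Data"))                   # input_data
```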
|
||||
### Implementation Strategy
|
||||
|
||||
1. **Generate skeleton first** - Let meta.skill create structure
|
||||
2. **Implement gradually** - Add logic to `execute()` method
|
||||
3. **Test incrementally** - Run tests after each change
|
||||
4. **Update documentation** - Keep README current
|
||||
|
||||
### Artifact Metadata
|
||||
|
||||
Always specify artifact types for interoperability:
|
||||
|
||||
```markdown
|
||||
# Produces Artifacts:
|
||||
- openapi-spec
|
||||
- validation-report
|
||||
|
||||
# Consumes Artifacts:
|
||||
- api-requirements
|
||||
```
|
||||
|
||||
This enables:
|
||||
- Agent discovery via meta.compatibility
|
||||
- Pipeline suggestions via meta.suggest
|
||||
- Workflow orchestration
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Invalid skill name
|
||||
|
||||
```
|
||||
Error: Skill name must be in domain.action format: my-skill
|
||||
```
|
||||
|
||||
**Solution:** Use format `domain.action` with only alphanumeric characters:
|
||||
```markdown
|
||||
# Wrong: my-skill, my_skill, MySkill
|
||||
# Right: data.transform, api.validate
|
||||
```
|
||||
|
||||
### Skill already exists
|
||||
|
||||
```
|
||||
Error: Skill directory already exists: skills/data.validate
|
||||
```
|
||||
|
||||
**Solution:** Remove existing skill or use different name:
|
||||
```bash
|
||||
rm -rf skills/data.validate
|
||||
```
|
||||
|
||||
### Import errors in generated code
|
||||
|
||||
```
|
||||
ModuleNotFoundError: No module named 'betty.config'
|
||||
```
|
||||
|
||||
**Solution:** Ensure Betty framework is in Python path:
|
||||
```bash
|
||||
export PYTHONPATH="${PYTHONPATH}:/home/user/betty"
|
||||
```
|
||||
|
||||
### Test failures
|
||||
|
||||
```
|
||||
ModuleNotFoundError: No module named 'skills.data_validate'
|
||||
```
|
||||
|
||||
**Solution:** Run tests from Betty root directory:
|
||||
```bash
|
||||
cd /home/user/betty
|
||||
pytest skills/data.validate/test_data_validate.py -v
|
||||
```
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
meta.skill
|
||||
├─ Input: skill-description (Markdown/JSON)
|
||||
├─ Parser: extract name, purpose, inputs, outputs
|
||||
├─ Generator: create skill.yaml, Python, tests, README
|
||||
├─ Validator: check naming conventions
|
||||
└─ Output: Complete skill directory structure
|
||||
```
|
||||
|
||||
## Next Steps
|
||||
|
||||
After creating a skill with meta.skill:
|
||||
|
||||
1. **Implement logic** - Add functionality to `execute()` method
|
||||
2. **Write tests** - Expand test coverage beyond basic tests
|
||||
3. **Add to agent** - Include in agent's `skills_available`
|
||||
4. **Sync to plugin** - Run plugin.sync to update plugin.yaml
|
||||
5. **Test integration** - Verify skill works in agent context
|
||||
6. **Document usage** - Update README with examples
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
|
||||
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
|
||||
- [skill-description schema](../../schemas/skill-description.json)
|
||||
- [skill-definition schema](../../schemas/skill-definition.json)
|
||||
|
||||
## How Claude Uses This
|
||||
|
||||
Claude can:
|
||||
1. **Create skills on demand** - "Create a skill that validates YAML files"
|
||||
2. **Extend agent capabilities** - "Add a JSON validator skill to this agent"
|
||||
3. **Build skill libraries** - "Create skills for all common data operations"
|
||||
4. **Prototype quickly** - Test ideas by generating skill scaffolds
|
||||
|
||||
meta.skill enables rapid skill development and agent expansion!
|
||||
306
agents/meta.skill/agent.yaml
Normal file
@@ -0,0 +1,306 @@
|
||||
name: meta.skill
|
||||
version: 0.4.0
|
||||
description: |
|
||||
Creates complete, functional skills from natural language descriptions.
|
||||
|
||||
This meta-agent transforms skill descriptions into production-ready skills with:
|
||||
- Complete skill.yaml definition with validated artifact types
|
||||
- Artifact flow analysis showing producers/consumers
|
||||
- Production-quality Python implementation with type hints
|
||||
- Comprehensive test templates
|
||||
- Complete documentation with examples
|
||||
- Dependency validation
|
||||
- Registry registration with artifact_metadata
|
||||
- Discoverability verification
|
||||
|
||||
Ensures skills follow Betty Framework conventions and are ready for use in agents.
|
||||
|
||||
Version 0.4.0 adds artifact flow analysis, improved code templates with
|
||||
type hints parsed from skill.yaml, and dependency validation.
|
||||
|
||||
artifact_metadata:
|
||||
consumes:
|
||||
- type: skill-description
|
||||
file_pattern: "**/skill_description.md"
|
||||
content_type: "text/markdown"
|
||||
description: "Natural language description of skill requirements"
|
||||
schema: "schemas/skill-description.json"
|
||||
|
||||
produces:
|
||||
- type: skill-definition
|
||||
file_pattern: "skills/*/skill.yaml"
|
||||
content_type: "application/yaml"
|
||||
schema: "schemas/skill-definition.json"
|
||||
description: "Complete skill configuration"
|
||||
|
||||
- type: skill-implementation
|
||||
file_pattern: "skills/*/*.py"
|
||||
content_type: "text/x-python"
|
||||
description: "Python implementation with proper structure"
|
||||
|
||||
- type: skill-tests
|
||||
file_pattern: "skills/*/test_*.py"
|
||||
content_type: "text/x-python"
|
||||
description: "Test template with example tests"
|
||||
|
||||
- type: skill-documentation
|
||||
file_pattern: "skills/*/SKILL.md"
|
||||
content_type: "text/markdown"
|
||||
description: "Skill documentation and usage guide"
|
||||
|
||||
status: draft
|
||||
reasoning_mode: iterative
|
||||
capabilities:
|
||||
- Convert skill concepts into production-ready packages with tests and docs
|
||||
- Ensure generated skills follow registry, artifact, and permission conventions
|
||||
- Coordinate registration and documentation updates for new skills
|
||||
skills_available:
|
||||
- skill.create
|
||||
- skill.define
|
||||
- artifact.define # Generate artifact metadata
|
||||
- artifact.validate.types # Validate artifact types against registry
|
||||
|
||||
permissions:
|
||||
- filesystem:read
|
||||
- filesystem:write
|
||||
|
||||
system_prompt: |
|
||||
You are meta.skill, the skill creator for Betty Framework.
|
||||
|
||||
Your purpose is to transform natural language skill descriptions into complete,
|
||||
production-ready skills that follow Betty conventions.
|
||||
|
||||
## Your Workflow
|
||||
|
||||
1. **Parse Description** - Understand skill requirements
|
||||
- Extract name, purpose, inputs, outputs
|
||||
- Identify artifact types in produces/consumes sections
|
||||
- Identify required permissions
|
||||
- Understand implementation requirements
|
||||
|
||||
2. **Validate Artifact Types** - CRITICAL: Verify before generating skill.yaml
|
||||
- Extract ALL artifact types from skill description (produces + consumes sections)
|
||||
- Call artifact.validate.types skill:
|
||||
```bash
|
||||
python3 skills/artifact.validate.types/artifact_validate_types.py \
|
||||
--artifact_types '["threat-model", "data-flow-diagrams", "architecture-overview"]' \
|
||||
--check_schemas true \
|
||||
--suggest_alternatives true \
|
||||
--max_suggestions 3
|
||||
```
|
||||
- Parse validation results:
|
||||
```json
|
||||
{
|
||||
"all_valid": true/false,
|
||||
"validation_results": {
|
||||
"threat-model": {
|
||||
"valid": true,
|
||||
"file_pattern": "*.threat-model.yaml",
|
||||
"content_type": "application/yaml",
|
||||
"schema": "schemas/artifacts/threat-model-schema.json"
|
||||
}
|
||||
},
|
||||
"invalid_types": ["data-flow-diagram"],
|
||||
"suggestions": {
|
||||
"data-flow-diagram": [
|
||||
{"type": "data-flow-diagrams", "reason": "Plural form", "confidence": "high"}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
- If all_valid == false:
|
||||
→ Display invalid_types and suggestions to user
|
||||
→ Example: "❌ Artifact type 'data-flow-diagram' not found. Did you mean 'data-flow-diagrams' (plural, high confidence)?"
|
||||
→ ASK USER to confirm correct types or provide alternatives
|
||||
→ HALT skill creation until artifact types are validated
|
||||
- If all_valid == true:
|
||||
→ Store validated metadata (file_pattern, content_type, schema) for each type
|
||||
→ Use this exact metadata in Step 3 when generating skill.yaml
|
||||
|
||||
3. **Analyze Artifact Flow** - Understand skill's place in ecosystem
|
||||
- For each artifact type the skill produces:
|
||||
→ Search registry for skills that consume this type
|
||||
→ Report: "✅ {artifact_type} will be consumed by: {consuming_skills}"
|
||||
→ If no consumers: "⚠️ {artifact_type} has no consumers yet - consider creating skills that use it"
|
||||
- For each artifact type the skill consumes:
|
||||
→ Search registry for skills that produce this type
|
||||
→ Report: "✅ {artifact_type} produced by: {producing_skills}"
|
||||
→ If no producers: "❌ {artifact_type} has no producers - user must provide manually or create producer skill first"
|
||||
- Warn about gaps in artifact flow
|
||||
- Suggest related skills to create for complete workflow
|
||||
|
||||
4. **Generate skill.yaml** - Create complete definition with VALIDATED artifact metadata
   - name: Proper naming (domain.action format)
   - version: Semantic versioning (e.g., "0.1.0")
   - description: Clear description of what the skill does
   - inputs: List of input parameters (use empty list [] if none)
   - outputs: List of output parameters (use empty list [] if none)
   - status: One of "draft", "active", or "deprecated"
   - Artifact metadata (produces/consumes)
   - Permissions
   - Entrypoints with parameters

5. **Generate Implementation** - Create production-quality Python stub
   - **Parse skill.yaml inputs** to generate a proper argparse CLI:
     ```python
     # For each input in skill.yaml:
     parser.add_argument(
         '--{input.name}',
         type={map_type(input.type)},  # string→str, number→int, boolean→bool, array→list
         required={input.required},
         default={input.default if not required},
         help="{input.description}"
     )
     ```
   - **Generate the function signature** with type hints from inputs/outputs:
     ```python
     def validate_artifact_types(
         artifact_types: List[str],
         check_schemas: bool = True,
         suggest_alternatives: bool = True
     ) -> Dict[str, Any]:
         \"\"\"
         {skill.description}

         Args:
             artifact_types: {input.description from skill.yaml}
             check_schemas: {input.description from skill.yaml}
             ...

         Returns:
             {output descriptions from skill.yaml}
         \"\"\"
     ```
   - **Include an implementation pattern** based on skill type:
     - Validation skills: load data → validate → return results
     - Generator skills: gather inputs → process → save output
     - Transform skills: load input → transform → save output
   - **Add comprehensive error handling**:
     ```python
     except FileNotFoundError as e:
         logger.error(str(e))
         print(json.dumps({"ok": False, "error": str(e)}, indent=2))
         sys.exit(1)
     ```
   - **JSON output structure** matching skill.yaml outputs:
     ```python
     result = {
         "{output1.name}": value1,  # From skill.yaml outputs
         "{output2.name}": value2,
         "ok": True,
         "status": "success"
     }
     print(json.dumps(result, indent=2))
     ```
   - Add proper logging setup
   - Include a module docstring with a usage example

6. **Generate Tests** - Create test template
   - Unit test structure
   - Example test cases
   - Fixtures
   - Assertions

7. **Generate Documentation** - Create SKILL.md
   - Purpose and usage
   - Input/output examples
   - Integration with agents
   - Artifact flow (from Step 3 analysis)
   - Must include a markdown header starting with #

8. **Validate Dependencies** - Check Python packages
   - For each dependency in skill.yaml:
     → Verify the package exists on PyPI (if possible)
     → Check for known naming issues (e.g., "yaml" vs "pyyaml"; see the sketch below)
     → Warn about version conflicts with existing skills
   - Suggest an installation command: `pip install {dependencies}`
   - If dependencies are missing, warn but don't block
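One illustrative way to catch the "yaml" vs "pyyaml" class of problem is a small alias
table mapping import names to PyPI distribution names; the table below is an example,
not a list Betty actually maintains:

```python
# Hypothetical alias table: import name -> PyPI distribution name
IMPORT_TO_PYPI = {"yaml": "pyyaml", "cv2": "opencv-python", "PIL": "pillow"}

def pip_name(dependency: str) -> str:
    """Translate an import-style dependency name into the name pip expects."""
    return IMPORT_TO_PYPI.get(dependency, dependency)

# pip_name("yaml") -> "pyyaml", so the suggested command is `pip install pyyaml`
```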
9. **Register Skill** - Update registry
   - Call registry.update with skill manifest path
   - Verify skill appears in registry with artifact_metadata
   - Confirm skill is discoverable via artifact types

10. **Verify Discoverability** - Final validation
    - Check skill exists in registry/skills.json (a quick check is sketched below)
    - Verify artifact_metadata is complete
    - Test that agent.compose can discover skill by artifact type
    - Confirm artifact flow is complete (from Step 3)
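A quick discoverability check can be as small as the sketch below, again assuming the
registry maps skill names to manifests with an `artifact_metadata` block:

```python
import json

def is_discoverable(skill_name: str, registry_path: str = "registry/skills.json") -> bool:
    """True if the skill is registered and carries artifact metadata."""
    with open(registry_path) as f:
        registry = json.load(f)
    manifest = registry.get(skill_name)
    return bool(manifest and manifest.get("artifact_metadata"))
```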
## Conventions

**Naming:**
- Skills: `domain.action` (e.g., `api.validate`, `workflow.compose`)
- Use lowercase with dots
- The action should be an imperative verb (validated as sketched below)
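meta.skill enforces this naming with a simple regex (the same pattern used in
meta_skill.py):

```python
import re

def is_valid_skill_name(name: str) -> bool:
    """Exactly one dot, lowercase alphanumerics on both sides, e.g. "api.validate"."""
    return re.match(r'^[a-z0-9]+\.[a-z0-9]+$', name) is not None
```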
**Structure:**
```
skills/domain.action/
├── skill.yaml (definition)
├── domain_action.py (implementation)
├── test_domain_action.py (tests)
└── SKILL.md (docs)
```

**Artifact Metadata:**
- Always define what the skill produces/consumes
- Use registered artifact types from meta.artifact
- Include schemas when applicable

**Implementation:**
- Follow Python best practices
- Include proper error handling
- Add logging
- CLI with argparse
- JSON output for results (a minimal skeleton follows)
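A minimal skeleton that satisfies all of the implementation conventions above
(illustrative only; generated skills flesh out the body):

```python
#!/usr/bin/env python3
import argparse
import json
import logging
import sys

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("domain.action")

def main() -> None:
    parser = argparse.ArgumentParser(description="Example Betty skill CLI")
    parser.add_argument("--input-file", help="Example input parameter")
    args = parser.parse_args()
    try:
        logger.info("Executing domain.action...")
        result = {"ok": True, "status": "success"}  # placeholder skill logic
    except Exception as e:
        logger.error(str(e))
        result = {"ok": False, "error": str(e)}
    print(json.dumps(result, indent=2))
    sys.exit(0 if result["ok"] else 1)

if __name__ == "__main__":
    main()
```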
## Quality Standards

- ✅ Follows Betty conventions (domain.action naming, proper structure)
- ✅ All required fields in skill.yaml: name, version, description, inputs, outputs, status
- ✅ Artifact types VALIDATED against registry before generation
- ✅ Artifact flow ANALYZED (producers/consumers identified)
- ✅ Production-quality code with type hints and comprehensive docstrings
- ✅ Proper CLI generated from skill.yaml inputs (no TODO placeholders)
- ✅ JSON output structure matches skill.yaml outputs
- ✅ Dependencies VALIDATED and installation command provided
- ✅ Comprehensive test template with fixtures
- ✅ SKILL.md with markdown header, examples, and artifact flow
- ✅ Registered in registry with complete artifact_metadata
- ✅ Passes Pydantic validation
- ✅ Discoverable via agent.compose by artifact type

## Error Handling & Recovery

**Artifact Type Not Found:**
- Search registry/artifact_types.json for similar names
- Check for close variants: singular/plural forms and related names (e.g., data-model vs logical-data-model)
- Suggest alternatives, as sketched below: "Did you mean: 'data-flow-diagrams', 'dataflow-diagram'?"
- ASK USER to confirm or provide correct type
- DO NOT proceed with invalid artifact types
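The real matching is done by artifact.validate.types; purely as an illustration, a
near-miss search over the known type names could look like this:

```python
import difflib

def suggest_types(unknown: str, known_types: list, n: int = 3) -> list:
    """Return up to n close matches for an unknown artifact type name."""
    return difflib.get_close_matches(unknown, known_types, n=n, cutoff=0.6)

# suggest_types("data-flow-diagram", ["data-flow-diagrams", "threat-model"])
# -> ["data-flow-diagrams"]
```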
**File Pattern Mismatch:**
- Use exact file_pattern from registry
- Warn user if description specifies a different pattern
- Document correct pattern in skill.yaml comments

**Schema File Missing:**
- Warn: "Schema file schemas/artifacts/X-schema.json not found"
- Ask if schema should be: (a) created, (b) omitted, (c) ignored
- Continue with warning but don't block skill creation

**Registry Update Fails:**
- Report specific error from registry.update
- Check whether it's a version conflict or a validation issue
- Provide manual registration command as fallback
- Log issue for framework team

**Duplicate Skill Name:**
- Check existing version in registry
- Offer to: (a) version bump, (b) rename skill, (c) cancel
- Require explicit user confirmation before overwriting

Remember: You're creating building blocks for agents. Make skills
composable, well-documented, and easy to use. ALWAYS validate artifact
types before generating skill.yaml!
791
agents/meta.skill/meta_skill.py
Executable file
@@ -0,0 +1,791 @@
#!/usr/bin/env python3
"""
meta.skill - Skill Creator

Creates complete, functional skills from natural language descriptions.
Generates skill.yaml, implementation stub, tests, and documentation.
"""

import json
import yaml
import sys
import os
import re
from pathlib import Path
from typing import Dict, List, Any, Optional
from datetime import datetime

# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)

from betty.traceability import get_tracer, RequirementInfo

# Import artifact validation from artifact.define skill
try:
    import importlib.util
    artifact_define_path = Path(__file__).parent.parent.parent / "skills" / "artifact.define" / "artifact_define.py"
    spec = importlib.util.spec_from_file_location("artifact_define", artifact_define_path)
    artifact_define_module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(artifact_define_module)

    validate_artifact_type = artifact_define_module.validate_artifact_type
    KNOWN_ARTIFACT_TYPES = artifact_define_module.KNOWN_ARTIFACT_TYPES
    ARTIFACT_VALIDATION_AVAILABLE = True
except Exception as e:
    ARTIFACT_VALIDATION_AVAILABLE = False


class SkillCreator:
    """Creates skills from natural language descriptions"""

    def __init__(self, base_dir: str = "."):
        """Initialize with base directory"""
        self.base_dir = Path(base_dir)
        self.skills_dir = self.base_dir / "skills"
        self.registry_path = self.base_dir / "registry" / "skills.json"

    def parse_description(self, description_path: str) -> Dict[str, Any]:
        """
        Parse skill description from Markdown or JSON file

        Args:
            description_path: Path to skill_description.md or .json

        Returns:
            Parsed description with skill metadata
        """
        path = Path(description_path)

        if not path.exists():
            raise FileNotFoundError(f"Description not found: {description_path}")

        # Handle JSON format
        if path.suffix == ".json":
            with open(path) as f:
                return json.load(f)

        # Handle Markdown format
        with open(path) as f:
            content = f.read()

        # Parse Markdown sections
        description = {
            "name": "",
            "purpose": "",
            "inputs": [],
            "outputs": [],
            "permissions": [],
            "implementation_notes": "",
            "examples": [],
            "artifact_produces": [],
            "artifact_consumes": []
        }

        current_section = None
        for line in content.split('\n'):
            line_stripped = line.strip()

            # Section headers
            if line_stripped.startswith('# Name:'):
                description["name"] = line_stripped.replace('# Name:', '').strip()
            elif line_stripped.startswith('# Purpose:'):
                current_section = "purpose"
            elif line_stripped.startswith('# Inputs:'):
                current_section = "inputs"
            elif line_stripped.startswith('# Outputs:'):
                current_section = "outputs"
            elif line_stripped.startswith('# Permissions:'):
                current_section = "permissions"
            elif line_stripped.startswith('# Implementation Notes:'):
                current_section = "implementation_notes"
            elif line_stripped.startswith('# Examples:'):
                current_section = "examples"
            elif line_stripped.startswith('# Produces Artifacts:'):
                current_section = "artifact_produces"
            elif line_stripped.startswith('# Consumes Artifacts:'):
                current_section = "artifact_consumes"
            elif line_stripped and not line_stripped.startswith('#'):
                # Content for current section
                if current_section == "purpose":
                    description["purpose"] += line_stripped + " "
                elif current_section == "implementation_notes":
                    description["implementation_notes"] += line_stripped + " "
                elif current_section in ["inputs", "outputs", "permissions",
                                         "examples", "artifact_produces",
                                         "artifact_consumes"] and line_stripped.startswith('-'):
                    description[current_section].append(line_stripped[1:].strip())

        description["purpose"] = description["purpose"].strip()
        description["implementation_notes"] = description["implementation_notes"].strip()

        return description

    def generate_skill_yaml(self, skill_desc: Dict[str, Any]) -> str:
        """
        Generate skill.yaml content

        Args:
            skill_desc: Parsed skill description

        Returns:
            YAML content as string
        """
        skill_name = skill_desc["name"]

        # Convert skill.name to skill_name format for handler
        handler_name = skill_name.replace('.', '_') + ".py"

        skill_def = {
            "name": skill_name,
            "version": "0.1.0",
            "description": skill_desc["purpose"],
            "inputs": skill_desc.get("inputs", []),
            "outputs": skill_desc.get("outputs", []),
            "status": "active",
            "permissions": skill_desc.get("permissions", ["filesystem:read"]),
            "entrypoints": [
                {
                    "command": f"/{skill_name.replace('.', '/')}",
                    "handler": handler_name,
                    "runtime": "python",
                    "description": skill_desc["purpose"][:100]
                }
            ]
        }

        # Add artifact metadata if specified
        if skill_desc.get("artifact_produces") or skill_desc.get("artifact_consumes"):
            artifact_metadata = {}

            if skill_desc.get("artifact_produces"):
                artifact_metadata["produces"] = [
                    {"type": art_type} for art_type in skill_desc["artifact_produces"]
                ]

            if skill_desc.get("artifact_consumes"):
                artifact_metadata["consumes"] = [
                    {"type": art_type, "required": True}
                    for art_type in skill_desc["artifact_consumes"]
                ]

            skill_def["artifact_metadata"] = artifact_metadata

        return yaml.dump(skill_def, default_flow_style=False, sort_keys=False)

    def generate_implementation(self, skill_desc: Dict[str, Any]) -> str:
        """
        Generate Python implementation stub

        Args:
            skill_desc: Parsed skill description

        Returns:
            Python code as string
        """
        skill_name = skill_desc["name"]
        module_name = skill_name.replace('.', '_')
        class_name = ''.join(word.capitalize() for word in skill_name.split('.'))

        implementation = f'''#!/usr/bin/env python3
"""
{skill_name} - {skill_desc["purpose"]}

Generated by meta.skill with Betty Framework certification
"""

import os
import sys
import json
import yaml
from pathlib import Path
from typing import Dict, List, Any, Optional

# Add parent directory to path for imports
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))

from betty.config import BASE_DIR
from betty.logging_utils import setup_logger
from betty.certification import certified_skill

logger = setup_logger(__name__)


class {class_name}:
    """
    {skill_desc["purpose"]}
    """

    def __init__(self, base_dir: str = BASE_DIR):
        """Initialize skill"""
        self.base_dir = Path(base_dir)

    @certified_skill("{skill_name}")
    def execute(self'''

        # Add input parameters
        if skill_desc.get("inputs"):
            for inp in skill_desc["inputs"]:
                # Sanitize parameter names - remove special characters, keep only alphanumeric and underscores
                param_name = ''.join(c if c.isalnum() or c in ' -_' else '' for c in inp.lower())
                param_name = param_name.replace(' ', '_').replace('-', '_')
                implementation += f', {param_name}: Optional[str] = None'

        implementation += f''') -> Dict[str, Any]:
        """
        Execute the skill

        Returns:
            Dict with execution results
        """
        try:
            logger.info("Executing {skill_name}...")

            # TODO: Implement skill logic here
'''

        if skill_desc.get("implementation_notes"):
            implementation += f'''
            # Implementation notes:
            # {skill_desc["implementation_notes"]}
'''

        # Escape the purpose string for Python string literal
        escaped_purpose = skill_desc['purpose'].replace('"', '\\"')

        implementation += f'''
            # Placeholder implementation
            result = {{
                "ok": True,
                "status": "success",
                "message": "Skill executed successfully"
            }}

            logger.info("Skill completed successfully")
            return result

        except Exception as e:
            logger.error(f"Error executing skill: {{e}}")
            return {{
                "ok": False,
                "status": "failed",
                "error": str(e)
            }}


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="{escaped_purpose}"
    )
'''

        # Add CLI arguments for inputs
        if skill_desc.get("inputs"):
            for inp in skill_desc["inputs"]:
                # Sanitize parameter names - remove special characters
                param_name = ''.join(c if c.isalnum() or c in ' -_' else '' for c in inp.lower())
                param_name = param_name.replace(' ', '_').replace('-', '_')
                implementation += f'''
    parser.add_argument(
        "--{param_name.replace('_', '-')}",
        help="{inp}"
    )'''

        implementation += f'''
    parser.add_argument(
        "--output-format",
        choices=["json", "yaml"],
        default="json",
        help="Output format"
    )

    args = parser.parse_args()

    # Create skill instance
    skill = {class_name}()

    # Execute skill
    result = skill.execute('''

        if skill_desc.get("inputs"):
            for inp in skill_desc["inputs"]:
                # Sanitize parameter names - remove special characters
                param_name = ''.join(c if c.isalnum() or c in ' -_' else '' for c in inp.lower())
                param_name = param_name.replace(' ', '_').replace('-', '_')
                implementation += f'''
        {param_name}=args.{param_name},'''

        implementation += '''
    )

    # Output result
    if args.output_format == "json":
        print(json.dumps(result, indent=2))
    else:
        print(yaml.dump(result, default_flow_style=False))

    # Exit with appropriate code
    sys.exit(0 if result.get("ok") else 1)


if __name__ == "__main__":
    main()
'''

        return implementation

    def generate_tests(self, skill_desc: Dict[str, Any]) -> str:
        """
        Generate test template

        Args:
            skill_desc: Parsed skill description

        Returns:
            Python test code as string
        """
        skill_name = skill_desc["name"]
        module_name = skill_name.replace('.', '_')
        class_name = ''.join(word.capitalize() for word in skill_name.split('.'))

        tests = f'''#!/usr/bin/env python3
"""
Tests for {skill_name}

Generated by meta.skill
"""

import pytest
import sys
import os
from pathlib import Path

# Add parent directory to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))

from skills.{skill_name.replace('.', '_')} import {module_name}


class Test{class_name}:
    """Tests for {class_name}"""

    def setup_method(self):
        """Setup test fixtures"""
        self.skill = {module_name}.{class_name}()

    def test_initialization(self):
        """Test skill initializes correctly"""
        assert self.skill is not None
        assert self.skill.base_dir is not None

    def test_execute_basic(self):
        """Test basic execution"""
        result = self.skill.execute()

        assert result is not None
        assert "ok" in result
        assert "status" in result

    def test_execute_success(self):
        """Test successful execution"""
        result = self.skill.execute()

        assert result["ok"] is True
        assert result["status"] == "success"

    # TODO: Add more specific tests based on skill functionality


def test_cli_help(capsys):
    """Test CLI help message"""
    sys.argv = ["{module_name}.py", "--help"]

    with pytest.raises(SystemExit) as exc_info:
        {module_name}.main()

    assert exc_info.value.code == 0
    captured = capsys.readouterr()
    assert "{skill_desc['purpose'][:50]}" in captured.out


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
'''

        return tests

    def generate_skill_md(self, skill_desc: Dict[str, Any]) -> str:
        """
        Generate SKILL.md

        Args:
            skill_desc: Parsed skill description

        Returns:
            Markdown content as string
        """
        skill_name = skill_desc["name"]

        readme = f'''# {skill_name}

{skill_desc["purpose"]}

## Overview

**Purpose:** {skill_desc["purpose"]}

**Command:** `/{skill_name.replace('.', '/')}`

## Usage

### Basic Usage

```bash
python3 skills/{skill_name.replace('.', '/')}/{skill_name.replace('.', '_')}.py
```

### With Arguments

```bash
python3 skills/{skill_name.replace('.', '/')}/{skill_name.replace('.', '_')}.py \\
'''

        if skill_desc.get("inputs"):
            for inp in skill_desc["inputs"]:
                # Match the CLI flag naming used in generate_implementation
                param_name = inp.lower().replace(' ', '_').replace('_', '-')
                readme += f'  --{param_name} "value" \\\n'

        readme += '  --output-format json\n```\n\n'

        if skill_desc.get("inputs"):
            readme += "## Inputs\n\n"
            for inp in skill_desc["inputs"]:
                readme += f"- **{inp}**\n"
            readme += "\n"

        if skill_desc.get("outputs"):
            readme += "## Outputs\n\n"
            for out in skill_desc["outputs"]:
                readme += f"- **{out}**\n"
            readme += "\n"

        if skill_desc.get("artifact_consumes") or skill_desc.get("artifact_produces"):
            readme += "## Artifact Metadata\n\n"

            if skill_desc.get("artifact_consumes"):
                readme += "### Consumes\n\n"
                for art in skill_desc["artifact_consumes"]:
                    readme += f"- `{art}`\n"
                readme += "\n"

            if skill_desc.get("artifact_produces"):
                readme += "### Produces\n\n"
                for art in skill_desc["artifact_produces"]:
                    readme += f"- `{art}`\n"
                readme += "\n"

        if skill_desc.get("examples"):
            readme += "## Examples\n\n"
            for example in skill_desc["examples"]:
                readme += f"- {example}\n"
            readme += "\n"

        if skill_desc.get("permissions"):
            readme += "## Permissions\n\n"
            for perm in skill_desc["permissions"]:
                readme += f"- `{perm}`\n"
            readme += "\n"

        if skill_desc.get("implementation_notes"):
            readme += "## Implementation Notes\n\n"
            readme += f"{skill_desc['implementation_notes']}\n\n"

        readme += f'''## Integration

This skill can be used in agents by including it in `skills_available`:

```yaml
name: my.agent
skills_available:
  - {skill_name}
```

## Testing

Run tests with:

```bash
pytest skills/{skill_name.replace('.', '/')}/test_{skill_name.replace('.', '_')}.py -v
```

## Created By

This skill was generated by **meta.skill**, the skill creator meta-agent.

---

*Part of the Betty Framework*
'''

        return readme

    def validate_artifacts(self, skill_desc: Dict[str, Any]) -> List[str]:
        """
        Validate that artifact types exist in the known registry.

        Args:
            skill_desc: Parsed skill description

        Returns:
            List of warning messages
        """
        warnings = []

        if not ARTIFACT_VALIDATION_AVAILABLE:
            warnings.append(
                "Artifact validation skipped: artifact.define skill not available"
            )
            return warnings

        # Validate produced artifacts
        for artifact_type in skill_desc.get("artifact_produces", []):
            is_valid, warning = validate_artifact_type(artifact_type)
            if not is_valid and warning:
                warnings.append(f"Produces: {warning}")

        # Validate consumed artifacts
        for artifact_type in skill_desc.get("artifact_consumes", []):
            is_valid, warning = validate_artifact_type(artifact_type)
            if not is_valid and warning:
                warnings.append(f"Consumes: {warning}")

        return warnings

    def create_skill(
        self,
        description_path: str,
        output_dir: Optional[str] = None,
        requirement: Optional[RequirementInfo] = None
    ) -> Dict[str, Any]:
        """
        Create a complete skill from description

        Args:
            description_path: Path to skill description file
            output_dir: Output directory (default: skills/{name}/)
            requirement: Optional requirement information for traceability

        Returns:
            Summary of created files
        """
        # Parse description
        skill_desc = self.parse_description(description_path)
        skill_name = skill_desc["name"]

        if not skill_name:
            raise ValueError("Skill name is required")

        # Validate name format (domain.action)
        if not re.match(r'^[a-z0-9]+\.[a-z0-9]+$', skill_name):
            raise ValueError(
                f"Skill name must be in domain.action format: {skill_name}"
            )

        # Validate artifact types
        artifact_warnings = self.validate_artifacts(skill_desc)
        if artifact_warnings:
            print("\n⚠️  Artifact Validation Warnings:")
            for warning in artifact_warnings:
                print(f"  {warning}")
            print()

        # Determine output directory
        if not output_dir:
            output_dir = f"skills/{skill_name}"

        output_path = Path(output_dir)
        output_path.mkdir(parents=True, exist_ok=True)

        result = {
            "skill_name": skill_name,
            "created_files": [],
            "errors": [],
            "artifact_warnings": artifact_warnings
        }

        # Generate and save skill.yaml
        skill_yaml_content = self.generate_skill_yaml(skill_desc)
        skill_yaml_path = output_path / "skill.yaml"
        with open(skill_yaml_path, 'w') as f:
            f.write(skill_yaml_content)
        result["created_files"].append(str(skill_yaml_path))

        # Generate and save implementation
        impl_content = self.generate_implementation(skill_desc)
        impl_path = output_path / f"{skill_name.replace('.', '_')}.py"
        with open(impl_path, 'w') as f:
            f.write(impl_content)
        os.chmod(impl_path, 0o755)  # Make executable
        result["created_files"].append(str(impl_path))

        # Generate and save tests
        tests_content = self.generate_tests(skill_desc)
        tests_path = output_path / f"test_{skill_name.replace('.', '_')}.py"
        with open(tests_path, 'w') as f:
            f.write(tests_content)
        result["created_files"].append(str(tests_path))

        # Generate and save SKILL.md
        skill_md_content = self.generate_skill_md(skill_desc)
        skill_md_path = output_path / "SKILL.md"
        with open(skill_md_path, 'w') as f:
            f.write(skill_md_content)
        result["created_files"].append(str(skill_md_path))

        # Log traceability if requirement provided
        trace_id = None
        if requirement:
            try:
                tracer = get_tracer()
                trace_id = tracer.log_creation(
                    component_id=skill_name,
                    component_name=skill_name.replace(".", " ").title(),
                    component_type="skill",
                    component_version="0.1.0",
                    component_file_path=str(skill_yaml_path),
                    input_source_path=description_path,
                    created_by_tool="meta.skill",
                    created_by_version="0.1.0",
                    requirement=requirement,
                    tags=["skill", "auto-generated"],
                    project="Betty Framework"
                )

                # Log validation check
                validation_details = {
                    "checks_performed": [
                        {"name": "skill_structure", "status": "passed"},
                        {"name": "artifact_metadata", "status": "passed"}
                    ]
                }

                # Check for artifact metadata
                if skill_desc.get("artifact_produces") or skill_desc.get("artifact_consumes"):
                    validation_details["checks_performed"].append({
                        "name": "artifact_metadata_completeness",
                        "status": "passed",
                        "message": f"Produces: {len(skill_desc.get('artifact_produces', []))}, Consumes: {len(skill_desc.get('artifact_consumes', []))}"
                    })

                tracer.log_verification(
                    component_id=skill_name,
                    check_type="validation",
                    tool="meta.skill",
                    result="passed",
                    details=validation_details
                )

                result["trace_id"] = trace_id

            except Exception as e:
                print(f"⚠️  Warning: Could not log traceability: {e}")

        return result


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="meta.skill - Create skills from descriptions"
    )
    parser.add_argument(
        "description",
        help="Path to skill description file (.md or .json)"
    )
    parser.add_argument(
        "-o", "--output",
        help="Output directory (default: skills/{name}/)"
    )

    # Traceability arguments
    parser.add_argument(
        "--requirement-id",
        help="Requirement identifier (e.g., REQ-2025-001)"
    )
    parser.add_argument(
        "--requirement-description",
        help="What this skill accomplishes"
    )
    parser.add_argument(
        "--requirement-source",
        help="Source document"
    )
    parser.add_argument(
        "--issue-id",
        help="Issue tracking ID (e.g., JIRA-123)"
    )
    parser.add_argument(
        "--requested-by",
        help="Who requested this"
    )
    parser.add_argument(
        "--rationale",
        help="Why this is needed"
    )

    args = parser.parse_args()

    # Create requirement info if provided
    requirement = None
    if args.requirement_id and args.requirement_description:
        requirement = RequirementInfo(
            id=args.requirement_id,
            description=args.requirement_description,
            source=args.requirement_source,
            issue_id=args.issue_id,
            requested_by=args.requested_by,
            rationale=args.rationale
        )

    creator = SkillCreator()

    print(f"🛠️  meta.skill - Creating skill from {args.description}")

    try:
        result = creator.create_skill(
            args.description,
            output_dir=args.output,
            requirement=requirement
        )

        print(f"\n✨ Skill '{result['skill_name']}' created successfully!\n")

        if result["created_files"]:
            print("📄 Created files:")
            for file in result["created_files"]:
                print(f"  - {file}")

        if result["errors"]:
            print("\n⚠️  Warnings:")
            for error in result["errors"]:
                print(f"  - {error}")

        if result.get("trace_id"):
            print(f"\n📝 Traceability: {result['trace_id']}")
            print(f"   View trace: python3 betty/trace_cli.py show {result['skill_name']}")

        print(f"\n✅ Skill '{result['skill_name']}' is ready to use")
        print("   Add to agent skills_available to use it.")

    except Exception as e:
        print(f"\n❌ Error creating skill: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
510
agents/meta.suggest/README.md
Normal file
@@ -0,0 +1,510 @@
# meta.suggest - Context-Aware Next-Step Recommender

Helps Claude decide what to do next after an agent completes by analyzing context and suggesting compatible next steps.

## Overview

**meta.suggest** provides intelligent "what's next" recommendations by analyzing what just happened, what artifacts were produced, and what agents are compatible. It works with meta.compatibility to enable smart multi-agent orchestration.

**What it does:**
- Analyzes context (what agent ran, what artifacts were produced)
- Uses meta.compatibility to find compatible next steps
- Provides ranked suggestions with clear rationale
- Considers project state and user goals
- Detects warnings (gaps, isolated agents)
- Suggests project-wide improvements

## Quick Start

### Suggest Next Steps

```bash
python3 agents/meta.suggest/meta_suggest.py \
  --context meta.agent \
  --artifacts agents/api.architect/agent.yaml
```

Output:
```
Context: meta.agent
Produced: agent-definition

🌟 Primary Suggestion:
  Process with meta.compatibility
  Rationale: meta.agent produces 'agent-definition' which meta.compatibility consumes
  Priority: high

🔄 Alternatives:
  1. Test the created artifact
     Verify the artifact works as expected

  2. Analyze compatibility
     Understand what agents can work with meta.agent's outputs
```

### Analyze Project

```bash
python3 agents/meta.suggest/meta_suggest.py --analyze-project
```

Output:
```
📊 Project Analysis:
  Total Agents: 7
  Total Artifacts: 16
  Relationships: 3
  Gaps: 5

💡 Suggestions (6):
  1. Create agent/skill to produce 'agent-description'
     Consumed by 1 agents but no producers
     Priority: medium
  ...
```

## Commands

### Suggest After Agent Runs

```bash
python3 agents/meta.suggest/meta_suggest.py \
  --context AGENT_NAME \
  [--artifacts FILE1 FILE2...] \
  [--goal "USER_GOAL"] \
  [--format json|text]
```

**Parameters:**
- `--context` - Agent that just ran
- `--artifacts` - Artifact files that were produced (optional)
- `--goal` - User's goal for better suggestions (optional)
- `--format` - Output format (text or json)

**Examples:**
```bash
# After meta.agent creates agent
python3 agents/meta.suggest/meta_suggest.py \
  --context meta.agent \
  --artifacts agents/my-agent/agent.yaml

# After meta.artifact creates artifact type
python3 agents/meta.suggest/meta_suggest.py \
  --context meta.artifact \
  --artifacts schemas/my-artifact.json

# With user goal
python3 agents/meta.suggest/meta_suggest.py \
  --context meta.agent \
  --goal "Create and validate API design agent"
```

### Analyze Project

```bash
python3 agents/meta.suggest/meta_suggest.py --analyze-project [--format json|text]
```

Analyzes the entire agent ecosystem and suggests improvements:
- Agents to create
- Gaps to fill
- Documentation needs
- Ecosystem health

## How It Works

### 1. Context Analysis

Determines what just happened:
- Which agent ran
- What artifacts were produced
- What artifact types are involved

### 2. Compatibility Check

Uses meta.compatibility to find (see the sketch after this list):
- Agents that can consume the produced artifacts
- Agents that are compatible downstream
- Potential pipeline steps
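Concretely, this step is a thin wrapper over meta.compatibility's analyzer; a minimal
sketch of the call sequence, mirroring meta_suggest.py (the path insertion is how that
module loads its dotted-directory neighbor):

```python
import sys

sys.path.insert(0, "agents/meta.compatibility")
import meta_compatibility

analyzer = meta_compatibility.CompatibilityAnalyzer(".")
analyzer.scan_agents()
analyzer.build_compatibility_map()

# Compatible downstream agents for whatever just ran
compatibility = analyzer.find_compatible("meta.agent")
for entry in compatibility.get("can_feed_to", []):
    print(entry["agent"], "-", entry["rationale"])
```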
### 3. Suggestion Generation

Creates suggestions based on:
- Compatible agents (high priority)
- Validation/testing options (medium priority)
- Gap-filling needs (low priority if applicable)

### 4. Ranking

Ranks suggestions by:
- Priority level (high > medium > low)
- Automation (automated > manual)
- Relevance to user goal

### 5. Warning Generation

Detects potential issues:
- Gaps in required artifacts
- Isolated agents (no compatible partners)
- Failed validations

## Suggestion Types

### 1. Process with Compatible Agent

```
🌟 Primary Suggestion:
  Process with api.validator
  Rationale: api.architect produces 'openapi-spec' which api.validator consumes
```

Automatically suggests running compatible agents.

### 2. Validate/Test Artifact

```
Test the created artifact
Rationale: Verify the artifact works as expected
```

Suggests testing when creation-type agents run.

### 3. Analyze Compatibility

```
Analyze compatibility
Rationale: Understand what agents can work with meta.agent's outputs
Command: python3 agents/meta.compatibility/meta_compatibility.py analyze meta.agent
```

Suggests understanding the ecosystem.

### 4. Fill Gaps

```
Create producer for 'agent-description'
Rationale: No agents produce 'agent-description' (required by meta.agent)
```

Suggests creating missing components.

## Output Structure

### Text Format

```
Context: AGENT_NAME
Produced: artifact-type-1, artifact-type-2

🌟 Primary Suggestion:
  ACTION
  Rationale: WHY
  [Command: HOW (if automated)]
  Priority: LEVEL

🔄 Alternatives:
  1. ACTION
     Rationale: WHY

⚠️ Warnings:
  • WARNING_MESSAGE
```

### JSON Format

```json
{
  "context": {
    "agent": "meta.agent",
    "artifacts_produced": ["agents/my-agent/agent.yaml"],
    "artifact_types": ["agent-definition"],
    "timestamp": "2025-10-24T..."
  },
  "suggestions": [
    {
      "action": "Process with meta.compatibility",
      "agent": "meta.compatibility",
      "rationale": "...",
      "priority": "high",
      "command": "..."
    }
  ],
  "primary_suggestion": {...},
  "alternatives": [...],
  "warnings": [...]
}
```

## Integration

### With meta.compatibility

meta.suggest uses meta.compatibility for discovery:

```python
# Internal call (simplified; see meta_suggest.py for the real wiring)
analyzer = meta_compatibility.CompatibilityAnalyzer(".")
analyzer.scan_agents()
analyzer.build_compatibility_map()
compatibility = analyzer.find_compatible(agent_name)

# Use compatible agents for suggestions
for compatible in compatibility.get("can_feed_to", []):
    suggest(f"Process with {compatible['agent']}")  # suggest() = emit a recommendation
```

### With Claude

Claude can call meta.suggest after any agent:

```
User: Create an API design agent
Claude: *runs meta.agent*
Claude: *calls meta.suggest --context meta.agent*
Claude: I've created the agent. Would you like me to:
  1. Analyze its compatibility
  2. Test it
  3. Add documentation
```

### In Workflows

Use in shell scripts:

```bash
#!/bin/bash
# Create and analyze agent

# Step 1: Create agent
python3 agents/meta.agent/meta_agent.py description.md

# Step 2: Get suggestions
SUGGESTIONS=$(python3 agents/meta.suggest/meta_suggest.py \
  --context meta.agent \
  --format json)

# Step 3: Extract primary suggestion
PRIMARY=$(echo "$SUGGESTIONS" | jq -r '.primary_suggestion.command')

# Step 4: Run it
eval "$PRIMARY"
```

## Common Workflows

### Workflow 1: Agent Creation Pipeline

```bash
# Create agent
python3 agents/meta.agent/meta_agent.py my_agent.md

# Get suggestions
python3 agents/meta.suggest/meta_suggest.py \
  --context meta.agent \
  --artifacts agents/my-agent/agent.yaml

# Follow primary suggestion
python3 agents/meta.compatibility/meta_compatibility.py analyze my-agent
```

### Workflow 2: Continuous Improvement

```bash
# Analyze project
python3 agents/meta.suggest/meta_suggest.py --analyze-project > improvements.txt

# Review suggestions
cat improvements.txt

# Implement top suggestions
# (create missing agents, fill gaps, etc.)
```

### Workflow 3: Goal-Oriented Orchestration

```bash
# Define goal
GOAL="Design, validate, and implement an API"

# Get suggestions for goal
python3 agents/meta.suggest/meta_suggest.py \
  --goal "$GOAL" \
  --format json > pipeline.json

# Execute suggested pipeline
# (extract steps from pipeline.json and run)
```

## Artifact Types

### Consumes

- **compatibility-graph** - Agent compatibility information
  - From: meta.compatibility

- **agent-definition** - Agent that just ran
  - Pattern: `agents/*/agent.yaml`

### Produces

- **suggestion-report** - Next-step recommendations
  - Pattern: `*.suggestions.json`
  - Schema: `schemas/suggestion-report.json`

## Understanding Suggestions

### Priority Levels

**High** - Should probably do this
- Compatible agent waiting
- Validation needed
- Next logical step

**Medium** - Good to do
- Analyze compatibility
- Understand ecosystem
- Non-critical validation

**Low** - Nice to have
- Fill gaps
- Documentation
- Future improvements

### Automated vs Manual

**Automated** - Has a command to run
```
Command: python3 agents/meta.compatibility/...
```

**Manual** - Requires user action
```
(No command - manual action required)
```

### Rationale

Always includes a "why" for each suggestion:
```
Rationale: meta.agent produces 'agent-definition' which meta.compatibility consumes
```

Helps Claude and users understand the reasoning.

## Tips & Best Practices

### Providing Context

More context = better suggestions:

✅ **Good:**
```bash
--context meta.agent \
--artifacts agents/my-agent/agent.yaml \
--goal "Create and validate agent"
```

❌ **Minimal:**
```bash
--context meta.agent
```

### Interpreting Warnings

**Gaps warning:**
```
⚠️ meta.agent requires artifacts that aren't produced by any agent
```
This is often expected for user inputs. Not always a problem.

**Isolated warning:**
```
⚠️ my-agent has no compatible agents
```
This suggests the agent uses non-standard artifact types, or that no other agents exist yet.

### Using Suggestions

1. **Review primary suggestion first** - Usually the best option
2. **Consider alternatives** - May be better for your specific case
3. **Check warnings** - Understand potential issues
4. **Verify commands** - Review before running automated suggestions

## Troubleshooting

### No suggestions returned

```
Error: Could not determine relevant agents for goal
```

**Causes:**
- Agent has no compatible downstream agents
- Artifact types are all user-provided inputs
- No other agents in ecosystem

**Solutions:**
- Create more agents
- Use standard artifact types
- Check agent artifact_metadata

### Incorrect suggestions

If suggestions don't make sense:
- Verify agent artifact_metadata is correct
- Check meta.compatibility output directly
- Ensure artifact types are registered

### Empty project analysis

```
Total Agents: 0
```

**Cause:** No agents found in `agents/` directory

**Solution:** Create agents using meta.agent or manually

## Architecture

```
meta.suggest
├─ Uses: meta.compatibility (discovery)
├─ Analyzes: context and artifacts
├─ Produces: ranked suggestions
└─ Helps: Claude make decisions
```

## Examples

```bash
# Example 1: After creating agent
python3 agents/meta.agent/meta_agent.py examples/api_architect_description.md
python3 agents/meta.suggest/meta_suggest.py --context meta.agent

# Example 2: After creating artifact type
python3 agents/meta.artifact/meta_artifact.py create artifact.md
python3 agents/meta.suggest/meta_suggest.py --context meta.artifact

# Example 3: Project health check
python3 agents/meta.suggest/meta_suggest.py --analyze-project

# Example 4: Export to JSON
python3 agents/meta.suggest/meta_suggest.py \
  --context meta.agent \
  --format json > suggestions.json
```

## Related Documentation

- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
- [meta.compatibility README](../meta.compatibility/README.md) - Compatibility analyzer
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [suggestion-report schema](../../schemas/suggestion-report.json)

## How Claude Uses This

After any agent completes:
1. Claude calls meta.suggest with context
2. Reviews suggestions and rationale
3. Presents options to user or auto-executes
4. Makes intelligent orchestration decisions

meta.suggest is Claude's assistant for "what's next" decisions!
127
agents/meta.suggest/agent.yaml
Normal file
@@ -0,0 +1,127 @@
name: meta.suggest
version: 0.1.0
description: |
  Context-aware next-step recommender that helps Claude decide what to do next
  after an agent completes.

  Analyzes current context, produced artifacts, and project state to suggest
  compatible agents and workflows. Works with meta.compatibility to provide
  intelligent orchestration recommendations.

artifact_metadata:
  consumes:
    - type: compatibility-graph
      description: "Agent compatibility information from meta.compatibility"

    - type: agent-definition
      description: "Agent that just ran"

  produces:
    - type: suggestion-report
      file_pattern: "*.suggestions.json"
      content_type: "application/json"
      schema: "schemas/suggestion-report.json"
      description: "Context-aware recommendations for next steps"

status: draft
reasoning_mode: iterative
capabilities:
  - Analyze produced artifacts to understand project context
  - Recommend next agents or workflows with supporting rationale
  - Highlight gaps and dependencies to maintain delivery momentum
skills_available:
  - meta.compatibility  # Analyze compatibility
  - artifact.define     # Understand artifacts

permissions:
  - filesystem:read

system_prompt: |
  You are meta.suggest, the context-aware next-step recommender.

  After an agent completes its work, you help Claude decide what to do next by
  analyzing what artifacts were produced and suggesting compatible next steps.

  ## Your Responsibilities

  1. **Analyze Context**
     - What agent just ran?
     - What artifacts were produced?
     - What's the current project state?
     - What might the user want to do next?

  2. **Suggest Next Steps**
     - Use meta.compatibility to find compatible agents
     - Rank suggestions by relevance and usefulness
     - Provide clear rationale for each suggestion
     - Consider common workflows

  3. **Be Smart About Context**
     - If validation failed, don't suggest proceeding
     - If artifacts were created, suggest consumers
     - If gaps were detected, suggest filling them
     - Consider the user's likely goals

  ## Commands You Support

  **Suggest next steps after agent ran:**
  ```bash
  /meta/suggest --context meta.agent --artifacts agents/my-agent/agent.yaml
  ```

  **Analyze project and suggest:**
  ```bash
  /meta/suggest --analyze-project
  ```

  **Suggest for specific goal:**
  ```bash
  /meta/suggest --goal "Design and implement an API"
  ```

  ## Suggestion Criteria

  Good suggestions:
  - Are relevant to what just happened
  - Use artifacts that were produced
  - Follow logical workflow order
  - Provide clear value to the user
  - Include validation/quality checks when appropriate

  Bad suggestions:
  - Suggest proceeding after failures
  - Ignore produced artifacts
  - Suggest irrelevant agents
  - Don't explain why

  ## Output Format

  Always provide:
  - **Primary suggestion**: Best next step with strong rationale
  - **Alternative suggestions**: 2-3 other options
  - **Rationale**: Why each suggestion makes sense
  - **Artifacts needed**: What inputs each option requires
  - **Expected outcome**: What each option will produce

  ## Example Interactions

  **Context:** meta.agent just created agent.yaml for new agent

  **Suggestions:**
  1. Validate the agent (meta.compatibility analyze)
     - Rationale: Ensure agent has proper artifact compatibility
     - Needs: agent.yaml (already produced)
     - Produces: compatibility analysis

  2. Test the agent (agent.run)
     - Rationale: See if agent works as expected
     - Needs: agent.yaml + test inputs
     - Produces: execution results

  3. Document the agent (manual)
     - Rationale: Add examples and usage guide
     - Needs: Understanding of agent purpose
     - Produces: Enhanced README.md

  Remember: You're Claude's assistant for orchestration. Help Claude make
  smart decisions about what to do next based on context and compatibility.
371
agents/meta.suggest/meta_suggest.py
Executable file
@@ -0,0 +1,371 @@
#!/usr/bin/env python3
"""
meta.suggest - Context-Aware Next-Step Recommender

Helps Claude decide what to do next after an agent completes by analyzing
context and suggesting compatible next steps.
"""

import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional, Set
from datetime import datetime

# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)

# Import meta.compatibility
meta_comp_path = parent_dir + "/agents/meta.compatibility"
sys.path.insert(0, meta_comp_path)
import meta_compatibility


class SuggestionEngine:
    """Context-aware suggestion engine"""

    def __init__(self, base_dir: str = "."):
        """Initialize with base directory"""
        self.base_dir = Path(base_dir)
        self.compatibility_analyzer = meta_compatibility.CompatibilityAnalyzer(base_dir)
        self.compatibility_analyzer.scan_agents()
        self.compatibility_analyzer.build_compatibility_map()

    def suggest_next_steps(
        self,
        context_agent: str,
        artifacts_produced: Optional[List[str]] = None,
        goal: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Suggest next steps based on context

        Args:
            context_agent: Agent that just ran
            artifacts_produced: List of artifact file paths produced
            goal: Optional user goal

        Returns:
            Suggestion report with recommendations
        """
        # Get compatibility info for the agent
        compatibility = self.compatibility_analyzer.find_compatible(context_agent)

        if "error" in compatibility:
            return {
                "error": compatibility["error"],
                "context": {
                    "agent": context_agent,
                    "artifacts": artifacts_produced or []
                }
            }

        # Determine artifact types produced
        artifact_types = set()
        if artifacts_produced:
            artifact_types = self._infer_artifact_types(artifacts_produced)
        else:
            artifact_types = set(compatibility.get("produces", []))

        suggestions = []

        # Suggestion 1: Validate/analyze what was created
        if context_agent not in ["meta.compatibility", "meta.suggest"]:
            suggestions.append({
                "action": "Analyze compatibility",
                "agent": "meta.compatibility",
                "command": f"python3 agents/meta.compatibility/meta_compatibility.py analyze {context_agent}",
                "rationale": f"Understand what agents can work with {context_agent}'s outputs",
                "artifacts_needed": [],
                "produces": ["compatibility-graph"],
                "priority": "medium",
                "estimated_duration": "< 1 minute"
            })

        # Suggestion 2: Use compatible agents
        can_feed_to = compatibility.get("can_feed_to", [])
        for compatible in can_feed_to[:3]:  # Top 3
            next_agent = compatible["agent"]
            artifact = compatible["artifact"]

            suggestions.append({
                "action": f"Process with {next_agent}",
                "agent": next_agent,
                "rationale": compatible["rationale"],
                "artifacts_needed": [artifact],
                "produces": self._get_agent_produces(next_agent),
                "priority": "high",
                "estimated_duration": "varies"
            })

        # Suggestion 3: If agent created something, suggest testing/validation
        if artifact_types and context_agent in ["meta.agent", "meta.artifact"]:
            suggestions.append({
                "action": "Test the created artifact",
                "rationale": "Verify the artifact works as expected",
                "artifacts_needed": list(artifact_types),
                "priority": "high",
                "manual": True
            })

        # Suggestion 4: If gaps exist, suggest filling them
        gaps = compatibility.get("gaps", [])
        if gaps:
            for gap in gaps[:2]:  # Top 2 gaps
                suggestions.append({
                    "action": f"Create producer for '{gap['artifact']}'",
                    "rationale": gap["issue"],
                    "severity": gap.get("severity", "medium"),
                    "priority": "low",
                    "manual": True
                })

        # Rank suggestions
        suggestions = self._rank_suggestions(suggestions, goal)

        # Build report
        report = {
            "context": {
                "agent": context_agent,
                "artifacts_produced": artifacts_produced or [],
                "artifact_types": list(artifact_types),
                "timestamp": datetime.now().isoformat()
            },
            "suggestions": suggestions,
            "primary_suggestion": suggestions[0] if suggestions else None,
            "alternatives": suggestions[1:4] if len(suggestions) > 1 else [],
            "warnings": self._generate_warnings(context_agent, compatibility, gaps)
        }

        return report

    def _infer_artifact_types(self, artifact_paths: List[str]) -> Set[str]:
        """Infer artifact types from file paths"""
        types = set()

        for path in artifact_paths:
            path_lower = path.lower()

            # Pattern matching
            if ".openapi." in path_lower:
                types.add("openapi-spec")
            elif "agent.yaml" in path_lower:
                types.add("agent-definition")
            elif "readme.md" in path_lower:
                if "agent" in path_lower:
                    types.add("agent-documentation")
            elif ".validation." in path_lower:
                types.add("validation-report")
            elif ".optimization." in path_lower:
                types.add("optimization-report")
            elif ".compatibility." in path_lower:
                types.add("compatibility-graph")
            elif ".pipeline." in path_lower:
                types.add("pipeline-suggestion")
            elif ".workflow." in path_lower:
                types.add("workflow-definition")

        return types

    def _get_agent_produces(self, agent_name: str) -> List[str]:
        """Get what an agent produces"""
        if agent_name in self.compatibility_analyzer.agents:
            agent_def = self.compatibility_analyzer.agents[agent_name]
            produces, _ = self.compatibility_analyzer.extract_artifacts(agent_def)
            return list(produces)
        return []

    def _rank_suggestions(self, suggestions: List[Dict], goal: Optional[str] = None) -> List[Dict]:
        """Rank suggestions by relevance"""
        priority_order = {"high": 3, "medium": 2, "low": 1}

        # Sort by priority, then by manual (auto first)
        return sorted(
            suggestions,
            key=lambda s: (
                -priority_order.get(s.get("priority", "medium"), 2),
                s.get("manual", False)  # Auto suggestions first
            )
        )

    def _generate_warnings(
        self,
        agent: str,
        compatibility: Dict,
        gaps: List[Dict]
    ) -> List[Dict]:
        """Generate warnings based on context"""
        warnings = []

        # Warn about gaps
        if gaps:
            warnings.append({
                "type": "gaps",
                "message": f"{agent} requires artifacts that aren't produced by any agent",
                "details": [g["artifact"] for g in gaps],
                "severity": "medium"
            })

        # Warn if no compatible agents
        if not compatibility.get("can_feed_to") and not compatibility.get("can_receive_from"):
            warnings.append({
                "type": "isolated",
                "message": f"{agent} has no compatible agents",
                "details": "This agent can't be used in multi-agent pipelines",
                "severity": "low"
            })

        return warnings

    def analyze_project(self) -> Dict[str, Any]:
        """Analyze entire project and suggest improvements"""
        # Generate compatibility graph
        graph = self.compatibility_analyzer.generate_compatibility_graph()

        suggestions = []

        # Suggest filling gaps
        for gap in graph.get("gaps", []):
            suggestions.append({
                "action": f"Create agent/skill to produce '{gap['artifact']}'",
                "rationale": gap["issue"],
                "priority": "medium",
                "impact": f"Enables {len(gap.get('consumers', []))} agents"
            })

        # Suggest creating more agents if few exist
        if graph["metadata"]["total_agents"] < 5:
            suggestions.append({
                "action": "Create more agents using meta.agent",
                "rationale": "Expand agent ecosystem for more capabilities",
                "priority": "low"
            })

        # Suggest documentation if gaps exist
        if graph.get("gaps"):
            suggestions.append({
                "action": "Document artifact standards for gaps",
                "rationale": "Clarify requirements for missing artifacts",
                "priority": "low"
            })

        return {
            "project_analysis": {
                "total_agents": graph["metadata"]["total_agents"],
                "total_artifacts": graph["metadata"]["total_artifact_types"],
                "total_relationships": len(graph["relationships"]),
                "total_gaps": len(graph["gaps"])
            },
            "suggestions": suggestions,
            "gaps": graph["gaps"],
            "timestamp": datetime.now().isoformat()
        }


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="meta.suggest - Context-Aware Next-Step Recommender"
    )

    parser.add_argument(
        "--context",
        help="Agent that just ran"
    )
    parser.add_argument(
        "--artifacts",
        nargs="+",
        help="Artifacts that were produced"
    )
    parser.add_argument(
        "--goal",
        help="User's goal (for better suggestions)"
    )
    parser.add_argument(
        "--analyze-project",
        action="store_true",
        help="Analyze entire project and suggest improvements"
    )
    parser.add_argument(
        "--format",
        choices=["json", "text"],
        default="text",
        help="Output format"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
engine = SuggestionEngine()
|
||||
|
||||
if args.analyze_project:
|
||||
print("🔍 Analyzing project...\n")
|
||||
result = engine.analyze_project()
|
||||
|
||||
if args.format == "text":
|
||||
print(f"📊 Project Analysis:")
|
||||
print(f" Total Agents: {result['project_analysis']['total_agents']}")
|
||||
print(f" Total Artifacts: {result['project_analysis']['total_artifacts']}")
|
||||
print(f" Relationships: {result['project_analysis']['total_relationships']}")
|
||||
print(f" Gaps: {result['project_analysis']['total_gaps']}")
|
||||
|
||||
if result.get("suggestions"):
|
||||
print(f"\n💡 Suggestions ({len(result['suggestions'])}):")
|
||||
for i, suggestion in enumerate(result["suggestions"], 1):
|
||||
print(f"\n {i}. {suggestion['action']}")
|
||||
print(f" {suggestion['rationale']}")
|
||||
print(f" Priority: {suggestion['priority']}")
|
||||
|
||||
else:
|
||||
print(json.dumps(result, indent=2))
|
||||
|
||||
elif args.context:
|
||||
print(f"💡 Suggesting next steps after '{args.context}'...\n")
|
||||
result = engine.suggest_next_steps(
|
||||
args.context,
|
||||
args.artifacts,
|
||||
args.goal
|
||||
)
|
||||
|
||||
if args.format == "text":
|
||||
if "error" in result:
|
||||
print(f"❌ Error: {result['error']}")
|
||||
return
|
||||
|
||||
print(f"Context: {result['context']['agent']}")
|
||||
if result['context']['artifact_types']:
|
||||
print(f"Produced: {', '.join(result['context']['artifact_types'])}")
|
||||
|
||||
if result.get("primary_suggestion"):
|
||||
print(f"\n🌟 Primary Suggestion:")
|
||||
ps = result["primary_suggestion"]
|
||||
print(f" {ps['action']}")
|
||||
print(f" Rationale: {ps['rationale']}")
|
||||
if not ps.get("manual"):
|
||||
print(f" Command: {ps.get('command', 'N/A')}")
|
||||
print(f" Priority: {ps['priority']}")
|
||||
|
||||
if result.get("alternatives"):
|
||||
print(f"\n🔄 Alternatives:")
|
||||
for i, alt in enumerate(result["alternatives"], 1):
|
||||
print(f"\n {i}. {alt['action']}")
|
||||
print(f" {alt['rationale']}")
|
||||
|
||||
if result.get("warnings"):
|
||||
print(f"\n⚠️ Warnings:")
|
||||
for warning in result["warnings"]:
|
||||
print(f" • {warning['message']}")
|
||||
|
||||
else:
|
||||
print(json.dumps(result, indent=2))
|
||||
|
||||
else:
|
||||
parser.print_help()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
46
agents/operations.orchestrator/README.md
Normal file
@@ -0,0 +1,46 @@
# Operations.Orchestrator Agent

Orchestrates operational workflows including builds, deployments, and monitoring

## Purpose

This orchestrator agent coordinates complex operations workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.

## Capabilities

- Coordinate build and deployment pipelines
- Manage infrastructure as code workflows
- Orchestrate monitoring and alerting
- Handle incident response and remediation
- Coordinate release management

## Available Skills

- `build.optimize`
- `workflow.orchestrate`
- `workflow.compose`
- `workflow.validate`
- `git.createpr`
- `git.cleanupbranches`
- `telemetry.capture`

## Usage

This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
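
A typical invocation through Claude Code might look like this (the service name and sequencing are illustrative):

```
"Use operations.orchestrator to optimize the build for payment-service,
open a PR with the changes, and capture telemetry for the run"
```

For a request like this, the agent would select `build.optimize`, `git.createpr`, and `telemetry.capture` from its available skills and sequence them with validation between steps.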

## Status

**Generated**: Auto-generated from taxonomy gap analysis

## Next Steps

- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices
56
agents/operations.orchestrator/agent.yaml
Normal file
@@ -0,0 +1,56 @@
name: operations.orchestrator
version: 0.1.0
description: Orchestrates operational workflows including builds, deployments, and
  monitoring
capabilities:
- Coordinate build and deployment pipelines
- Manage infrastructure as code workflows
- Orchestrate monitoring and alerting
- Handle incident response and remediation
- Coordinate release management
skills_available:
- build.optimize
- workflow.orchestrate
- workflow.compose
- workflow.validate
- git.createpr
- git.cleanupbranches
- telemetry.capture
reasoning_mode: iterative
tags:
- operations
- orchestration
- devops
- deployment
workflow_pattern: '1. Analyze incoming request and requirements

  2. Identify relevant operations skills and workflows

  3. Compose multi-step execution plan

  4. Execute skills in coordinated sequence

  5. Validate intermediate results

  6. Handle errors and retry as needed

  7. Return comprehensive results'
example_task: "Input: \"Complete operations workflow from start to finish\"\n\nAgent\
  \ will:\n1. Break down the task into stages\n2. Select appropriate skills for each\
  \ stage\n3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n\
  4. Monitor progress and handle failures\n5. Generate comprehensive reports"
error_handling:
  timeout_seconds: 300
  retry_strategy: exponential_backoff
  max_retries: 3
output:
  success:
  - Operations workflow results
  - Execution logs and metrics
  - Validation reports
  - Generated artifacts
  failure:
  - Error details and stack traces
  - Partial results (if available)
  - Remediation suggestions
status: generated
72
agents/security.architect/README.md
Normal file
@@ -0,0 +1,72 @@
# Security.Architect Agent

## Purpose

Create comprehensive security architecture and assessment artifacts including threat models, security architecture diagrams, penetration testing reports, vulnerability management plans, and incident response plans. Applies security frameworks (STRIDE, NIST, ISO 27001, OWASP) and creates artifacts ready for security review and compliance audit.

## Skills

This agent uses the following skills:

- `artifact.create`
- `artifact.validate`
- `artifact.review`

## Artifact Flow

### Consumes

- `System or application description`
- `Architecture components and data flows`
- `Security requirements or compliance needs`
- `Assets and data classification`
- `Existing security controls`
- `Threat intelligence or vulnerability data`

### Produces

- `threat-model: STRIDE-based threat model with attack vectors, risk scoring, and security controls`
- `security-architecture-diagram: Security architecture with trust boundaries, security zones, and control points`
- `penetration-testing-report: Penetration test findings with CVSS scores and remediation recommendations`
- `vulnerability-management-plan: Vulnerability management program with policies and procedures`
- `incident-response-plan: Incident response playbook with roles, procedures, and escalation`
- `security-assessment: Security posture assessment against frameworks`
- `zero-trust-design: Zero trust architecture design with identity, device, and data controls`
- `compliance-matrix: Compliance mapping to regulatory requirements`

## Example Use Cases

- System description with components (API gateway, tokenization service, payment processor)
- Trust boundaries (external, DMZ, internal)
- Asset inventory (credit card data, transaction records)
- STRIDE threat catalog with 15-20 threats
- Security controls mapped to each threat
- Residual risk assessment
- PCI-DSS compliance mapping
- Network segmentation and security zones
- Identity and access management (IAM) controls
- Data encryption (at rest and in transit)
- Tenant isolation mechanisms
- Logging and monitoring infrastructure
- Compliance controls for SOC 2
- Incident classification and severity levels
- Response team roles and responsibilities
- Incident response procedures by type
- Communication and escalation protocols
- Forensics and evidence collection
- Post-incident review process

## Usage

```bash
# Activate the agent
/agent security.architect

# Or invoke directly
betty agent run security.architect --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
65
agents/security.architect/agent.yaml
Normal file
@@ -0,0 +1,65 @@
name: security.architect
version: 0.1.0
description: Create comprehensive security architecture and assessment artifacts including
  threat models, security architecture diagrams, penetration testing reports, vulnerability
  management plans, and incident response plans. Applies security frameworks (STRIDE,
  NIST, ISO 27001, OWASP) and creates artifacts ready for security review and compliance
  audit.
status: draft
reasoning_mode: iterative
capabilities:
- Perform structured threat modeling and control gap assessments
- Produce security architecture and testing documentation for reviews
- Recommend remediation and governance improvements for security programs
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
  consumes:
  - type: System or application description
    description: Input artifact of type System or application description
  - type: Architecture components and data flows
    description: Input artifact of type Architecture components and data flows
  - type: Security requirements or compliance needs
    description: Input artifact of type Security requirements or compliance needs
  - type: Assets and data classification
    description: Input artifact of type Assets and data classification
  - type: Existing security controls
    description: Input artifact of type Existing security controls
  - type: Threat intelligence or vulnerability data
    description: Input artifact of type Threat intelligence or vulnerability data
  produces:
  - type: 'threat-model: STRIDE-based threat model with attack vectors, risk scoring,
      and security controls'
    description: 'Output artifact of type threat-model: STRIDE-based threat model
      with attack vectors, risk scoring, and security controls'
  - type: 'security-architecture-diagram: Security architecture with trust boundaries,
      security zones, and control points'
    description: 'Output artifact of type security-architecture-diagram: Security
      architecture with trust boundaries, security zones, and control points'
  - type: 'penetration-testing-report: Penetration test findings with CVSS scores
      and remediation recommendations'
    description: 'Output artifact of type penetration-testing-report: Penetration
      test findings with CVSS scores and remediation recommendations'
  - type: 'vulnerability-management-plan: Vulnerability management program with policies
      and procedures'
    description: 'Output artifact of type vulnerability-management-plan: Vulnerability
      management program with policies and procedures'
  - type: 'incident-response-plan: Incident response playbook with roles, procedures,
      and escalation'
    description: 'Output artifact of type incident-response-plan: Incident response
      playbook with roles, procedures, and escalation'
  - type: 'security-assessment: Security posture assessment against frameworks'
    description: 'Output artifact of type security-assessment: Security posture assessment
      against frameworks'
  - type: 'zero-trust-design: Zero trust architecture design with identity, device,
      and data controls'
    description: 'Output artifact of type zero-trust-design: Zero trust architecture
      design with identity, device, and data controls'
  - type: 'compliance-matrix: Compliance mapping to regulatory requirements'
    description: 'Output artifact of type compliance-matrix: Compliance mapping to
      regulatory requirements'
43
agents/security.orchestrator/README.md
Normal file
@@ -0,0 +1,43 @@
# Security.Orchestrator Agent

Orchestrates security workflows including audits, compliance checks, and vulnerability management

## Purpose

This orchestrator agent coordinates complex security workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.

## Capabilities

- Coordinate security audits and assessments
- Manage compliance validation workflows
- Orchestrate vulnerability scanning and remediation
- Handle security documentation generation
- Coordinate access control and policy enforcement

## Available Skills

- `policy.enforce`
- `artifact.validate`
- `artifact.review`
- `audit.log`

## Usage

This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
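
A typical invocation through Claude Code might look like this (the audit scope is illustrative):

```
"Use security.orchestrator to run a compliance validation pass over the artifacts
in ./specs, review the findings, and record the results in the audit log"
```

For a request like this, the agent would map the stages to `policy.enforce`, `artifact.review`, and `audit.log`, validating intermediate results before logging.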

## Status

**Generated**: Auto-generated from taxonomy gap analysis

## Next Steps

- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices
53
agents/security.orchestrator/agent.yaml
Normal file
@@ -0,0 +1,53 @@
name: security.orchestrator
version: 0.1.0
description: Orchestrates security workflows including audits, compliance checks,
  and vulnerability management
capabilities:
- Coordinate security audits and assessments
- Manage compliance validation workflows
- Orchestrate vulnerability scanning and remediation
- Handle security documentation generation
- Coordinate access control and policy enforcement
skills_available:
- policy.enforce
- artifact.validate
- artifact.review
- audit.log
reasoning_mode: iterative
tags:
- security
- orchestration
- compliance
- audit
workflow_pattern: '1. Analyze incoming request and requirements

  2. Identify relevant security skills and workflows

  3. Compose multi-step execution plan

  4. Execute skills in coordinated sequence

  5. Validate intermediate results

  6. Handle errors and retry as needed

  7. Return comprehensive results'
example_task: "Input: \"Complete security workflow from start to finish\"\n\nAgent\
  \ will:\n1. Break down the task into stages\n2. Select appropriate skills for each\
  \ stage\n3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n\
  4. Monitor progress and handle failures\n5. Generate comprehensive reports"
error_handling:
  timeout_seconds: 300
  retry_strategy: exponential_backoff
  max_retries: 3
output:
  success:
  - Security workflow results
  - Execution logs and metrics
  - Validation reports
  - Generated artifacts
  failure:
  - Error details and stack traces
  - Partial results (if available)
  - Remediation suggestions
status: generated
71
agents/strategy.architect/README.md
Normal file
@@ -0,0 +1,71 @@
# Strategy.Architect Agent

## Purpose

Create comprehensive business strategy and planning artifacts including business cases, portfolio roadmaps, market analyses, competitive assessments, and strategic planning documents. Leverages financial modeling (NPV, IRR, ROI) and industry frameworks (PMBOK, SAFe, BCG Matrix) to produce executive-ready strategic deliverables.

## Skills

This agent uses the following skills:

- `artifact.create`
- `artifact.validate`
- `artifact.review`

## Artifact Flow

### Consumes

- `Initiative or project description`
- `Problem statement or opportunity`
- `Target business outcomes`
- `Budget range or financial constraints`
- `Market research data`
- `Competitive intelligence`
- `Stakeholder requirements`

### Produces

- `business-case: Comprehensive business justification with financial analysis, ROI model, risk assessment, and recommendation`
- `portfolio-roadmap: Strategic multi-initiative roadmap with timeline, dependencies, and resource allocation`
- `market-analysis: Market opportunity assessment with sizing, trends, and target segments`
- `competitive-analysis: Competitive landscape analysis with positioning and differentiation`
- `feasibility-study: Technical and business feasibility assessment`
- `strategic-plan: Multi-year strategic planning document with objectives and key results`
- `value-proposition-canvas: Customer value proposition and fit analysis`
- `roi-model: Financial return on investment model with multi-year projections`

## Example Use Cases

- Executive summary for C-suite
- Problem statement with impact analysis
- Proposed solution with scope
- Financial analysis (costs $500K, benefits $300K annually, 18mo payback)
- Risk assessment with mitigation
- Implementation timeline
- Recommendation and next steps
- Strategic alignment to business objectives
- Three major initiatives with phases
- Timeline with milestones and dependencies
- Resource allocation across initiatives
- Budget distribution ($5M across 18 months)
- Risk and dependency management
- Success metrics and KPIs
- market-analysis.yaml with market sizing, growth trends, target segments
- competitive-analysis.yaml with competitor positioning, SWOT analysis
- value-proposition-canvas.yaml with customer jobs, pains, gains
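
The financial bullets above lend themselves to a quick sanity check. Here is a minimal sketch of the underlying payback and NPV math (pure Python; the figures are the illustrative ones from the business-case example, and note that a simple undiscounted payback on a $500K cost and $300K/year benefit works out to 20 months, so the 18-month figure above presumably reflects additional assumptions such as a benefit ramp-up):

```python
from typing import List

def npv(rate: float, cashflows: List[float]) -> float:
    """Net present value of annual cashflows, starting at year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

def payback_months(cost: float, monthly_benefit: float) -> float:
    """Simple (undiscounted) payback period in months."""
    return cost / monthly_benefit

# Illustrative: $500K upfront cost, $300K/year benefit over three years
print(f"NPV @ 10%: ${npv(0.10, [-500_000, 300_000, 300_000, 300_000]):,.0f}")  # ≈ $246,056
print(f"Payback: {payback_months(500_000, 300_000 / 12):.0f} months")          # 20 months
```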

## Usage

```bash
# Activate the agent
/agent strategy.architect

# Or invoke directly
betty agent run strategy.architect --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
65
agents/strategy.architect/agent.yaml
Normal file
@@ -0,0 +1,65 @@
name: strategy.architect
version: 0.1.0
description: Create comprehensive business strategy and planning artifacts including
  business cases, portfolio roadmaps, market analyses, competitive assessments, and
  strategic planning documents. Leverages financial modeling (NPV, IRR, ROI) and industry
  frameworks (PMBOK, SAFe, BCG Matrix) to produce executive-ready strategic deliverables.
status: draft
reasoning_mode: iterative
capabilities:
- Build financial models and strategic roadmaps aligned to business objectives
- Analyze market and competitive data to inform executive decisions
- Produce governance-ready artifacts with risks, dependencies, and recommendations
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
  consumes:
  - type: Initiative or project description
    description: Input artifact of type Initiative or project description
  - type: Problem statement or opportunity
    description: Input artifact of type Problem statement or opportunity
  - type: Target business outcomes
    description: Input artifact of type Target business outcomes
  - type: Budget range or financial constraints
    description: Input artifact of type Budget range or financial constraints
  - type: Market research data
    description: Input artifact of type Market research data
  - type: Competitive intelligence
    description: Input artifact of type Competitive intelligence
  - type: Stakeholder requirements
    description: Input artifact of type Stakeholder requirements
  produces:
  - type: 'business-case: Comprehensive business justification with financial analysis,
      ROI model, risk assessment, and recommendation'
    description: 'Output artifact of type business-case: Comprehensive business justification
      with financial analysis, ROI model, risk assessment, and recommendation'
  - type: 'portfolio-roadmap: Strategic multi-initiative roadmap with timeline, dependencies,
      and resource allocation'
    description: 'Output artifact of type portfolio-roadmap: Strategic multi-initiative
      roadmap with timeline, dependencies, and resource allocation'
  - type: 'market-analysis: Market opportunity assessment with sizing, trends, and
      target segments'
    description: 'Output artifact of type market-analysis: Market opportunity assessment
      with sizing, trends, and target segments'
  - type: 'competitive-analysis: Competitive landscape analysis with positioning and
      differentiation'
    description: 'Output artifact of type competitive-analysis: Competitive landscape
      analysis with positioning and differentiation'
  - type: 'feasibility-study: Technical and business feasibility assessment'
    description: 'Output artifact of type feasibility-study: Technical and business
      feasibility assessment'
  - type: 'strategic-plan: Multi-year strategic planning document with objectives
      and key results'
    description: 'Output artifact of type strategic-plan: Multi-year strategic planning
      document with objectives and key results'
  - type: 'value-proposition-canvas: Customer value proposition and fit analysis'
    description: 'Output artifact of type value-proposition-canvas: Customer value
      proposition and fit analysis'
  - type: 'roi-model: Financial return on investment model with multi-year projections'
    description: 'Output artifact of type roi-model: Financial return on investment
      model with multi-year projections'
74
agents/test.engineer/README.md
Normal file
@@ -0,0 +1,74 @@
# Test.Engineer Agent

## Purpose

Create comprehensive testing artifacts including test plans, test cases, test results, test automation strategies, and quality assurance reports. Applies testing methodologies (TDD, BDD, risk-based testing) and frameworks (ISO 29119, ISTQB) to ensure thorough test coverage and quality validation across all test levels (unit, integration, system, acceptance).

## Skills

This agent uses the following skills:

- `artifact.create`
- `artifact.validate`
- `artifact.review`

## Artifact Flow

### Consumes

- `Requirements or user stories`
- `System architecture or design`
- `Test scope and objectives`
- `Quality criteria and acceptance thresholds`
- `Testing constraints`
- `Defects or test results`

### Produces

- `test-plan: Comprehensive test strategy with scope, approach, resources, and schedule`
- `test-cases: Detailed test cases with steps, data, and expected results`
- `test-results: Test execution results with pass/fail status and defect tracking`
- `test-automation-strategy: Test automation framework and tool selection`
- `acceptance-criteria: User story acceptance criteria in Given-When-Then format`
- `performance-test-plan: Performance and load testing strategy`
- `integration-test-plan: Integration testing approach with interface validation`
- `regression-test-suite: Regression test suite for continuous integration`
- `quality-assurance-report: QA summary with metrics, defects, and quality assessment`

## Example Use Cases

- Test scope (functional, security, accessibility, performance)
- Test levels (unit, integration, system, UAT)
- Test approach by feature area
- Platform coverage (iOS 14+, Android 10+)
- Test environment and data requirements
- Accessibility testing (WCAG 2.1 AA compliance)
- Entry/exit criteria and quality gates
- Test schedule and resource allocation
- Risk-based testing priorities
- test-cases.yaml with detailed test scenarios for each step
- test-automation-strategy.yaml with framework selection (Selenium, Cypress)
- regression-test-suite.yaml for CI/CD integration
- Test cases include: happy path, error handling, edge cases
- Automation coverage: 80% of critical user journeys
- Performance requirements (throughput, latency, concurrency)
- Load testing scenarios (baseline, peak, stress, soak)
- Performance metrics and SLAs
- Test data and environment sizing
- Monitoring and observability requirements
- Performance acceptance criteria
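
As a sketch of how a Given-When-Then acceptance criterion maps onto an executable test, consider the following (the `checkout` module, `Cart`, and `PaymentDeclined` are hypothetical names, for illustration only):

```python
# test_checkout.py — illustrative only; the checkout module is hypothetical
import pytest
from checkout import Cart, PaymentDeclined

def test_declined_card_preserves_cart():
    # Given a cart with one item and a card that will be declined
    cart = Cart(items=["sku-123"])

    # When the customer submits payment with the declined test card
    with pytest.raises(PaymentDeclined):
        cart.pay(card="4000-0000-0000-0002")

    # Then the cart contents are preserved for retry
    assert cart.items == ["sku-123"]
```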

## Usage

```bash
# Activate the agent
/agent test.engineer

# Or invoke directly
betty agent run test.engineer --input <path>
```

## Created By

This agent was created by **meta.agent**, the meta-agent for creating agents.

---

*Part of the Betty Framework*
65
agents/test.engineer/agent.yaml
Normal file
@@ -0,0 +1,65 @@
name: test.engineer
version: 0.1.0
description: Create comprehensive testing artifacts including test plans, test cases,
  test results, test automation strategies, and quality assurance reports. Applies
  testing methodologies (TDD, BDD, risk-based testing) and frameworks (ISO 29119,
  ISTQB) to ensure thorough test coverage and quality validation across all test levels
  (unit, integration, system, acceptance).
status: draft
reasoning_mode: iterative
capabilities:
- Develop comprehensive test strategies across multiple levels and techniques
- Produce reusable automation assets and coverage reporting
- Analyze defect data to recommend quality improvements
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
  consumes:
  - type: Requirements or user stories
    description: Input artifact of type Requirements or user stories
  - type: System architecture or design
    description: Input artifact of type System architecture or design
  - type: Test scope and objectives
    description: Input artifact of type Test scope and objectives
  - type: Quality criteria and acceptance thresholds
    description: Input artifact of type Quality criteria and acceptance thresholds
  - type: Testing constraints
    description: Input artifact of type Testing constraints
  - type: Defects or test results
    description: Input artifact of type Defects or test results
  produces:
  - type: 'test-plan: Comprehensive test strategy with scope, approach, resources,
      and schedule'
    description: 'Output artifact of type test-plan: Comprehensive test strategy with
      scope, approach, resources, and schedule'
  - type: 'test-cases: Detailed test cases with steps, data, and expected results'
    description: 'Output artifact of type test-cases: Detailed test cases with steps,
      data, and expected results'
  - type: 'test-results: Test execution results with pass/fail status and defect tracking'
    description: 'Output artifact of type test-results: Test execution results with
      pass/fail status and defect tracking'
  - type: 'test-automation-strategy: Test automation framework and tool selection'
    description: 'Output artifact of type test-automation-strategy: Test automation
      framework and tool selection'
  - type: 'acceptance-criteria: User story acceptance criteria in Given-When-Then
      format'
    description: 'Output artifact of type acceptance-criteria: User story acceptance
      criteria in Given-When-Then format'
  - type: 'performance-test-plan: Performance and load testing strategy'
    description: 'Output artifact of type performance-test-plan: Performance and load
      testing strategy'
  - type: 'integration-test-plan: Integration testing approach with interface validation'
    description: 'Output artifact of type integration-test-plan: Integration testing
      approach with interface validation'
  - type: 'regression-test-suite: Regression test suite for continuous integration'
    description: 'Output artifact of type regression-test-suite: Regression test suite
      for continuous integration'
  - type: 'quality-assurance-report: QA summary with metrics, defects, and quality
      assessment'
    description: 'Output artifact of type quality-assurance-report: QA summary with
      metrics, defects, and quality assessment'
42
agents/testing.orchestrator/README.md
Normal file
@@ -0,0 +1,42 @@
# Testing.Orchestrator Agent

Orchestrates testing workflows across unit, integration, and end-to-end tests

## Purpose

This orchestrator agent coordinates complex testing workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.

## Capabilities

- Coordinate test planning and design
- Manage test execution and reporting
- Orchestrate quality assurance workflows
- Handle test data generation and management
- Coordinate continuous testing pipelines

## Available Skills

- `test.example`
- `workflow.validate`
- `api.test`

## Usage

This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
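
A typical invocation through Claude Code might look like this (the workflow name and endpoint are illustrative):

```
"Use testing.orchestrator to validate the checkout workflow definition and
run the API test suite against the staging endpoint"
```

For a request like this, the agent would sequence `workflow.validate` and `api.test` and report consolidated results.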

## Status

**Generated**: Auto-generated from taxonomy gap analysis

## Next Steps

- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices
52
agents/testing.orchestrator/agent.yaml
Normal file
@@ -0,0 +1,52 @@
name: testing.orchestrator
version: 0.1.0
description: Orchestrates testing workflows across unit, integration, and end-to-end
  tests
capabilities:
- Coordinate test planning and design
- Manage test execution and reporting
- Orchestrate quality assurance workflows
- Handle test data generation and management
- Coordinate continuous testing pipelines
skills_available:
- test.example
- workflow.validate
- api.test
reasoning_mode: iterative
tags:
- testing
- orchestration
- qa
- quality
workflow_pattern: '1. Analyze incoming request and requirements

  2. Identify relevant testing skills and workflows

  3. Compose multi-step execution plan

  4. Execute skills in coordinated sequence

  5. Validate intermediate results

  6. Handle errors and retry as needed

  7. Return comprehensive results'
example_task: "Input: \"Complete testing workflow from start to finish\"\n\nAgent\
  \ will:\n1. Break down the task into stages\n2. Select appropriate skills for each\
  \ stage\n3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n\
  4. Monitor progress and handle failures\n5. Generate comprehensive reports"
error_handling:
  timeout_seconds: 300
  retry_strategy: exponential_backoff
  max_retries: 3
output:
  success:
  - Testing workflow results
  - Execution logs and metrics
  - Validation reports
  - Generated artifacts
  failure:
  - Error details and stack traces
  - Partial results (if available)
  - Remediation suggestions
status: generated
81
commands/meta-config-router.yaml
Normal file
@@ -0,0 +1,81 @@
name: meta-config-router
version: 0.1.0
description: "Configure Claude Code Router for multi-model LLM support (OpenAI, Claude, Ollama, etc.)"
status: active

execution:
  type: agent
  target: meta.config.router
  context:
    mode: oneshot

parameters:
  - name: routing_config_path
    type: string
    required: true
    description: "Path to YAML or JSON input file containing router configuration"

  - name: apply_config
    type: boolean
    required: false
    default: false
    description: "Write configuration to ~/.claude-code-router/config.json (default: false)"

  - name: output_mode
    type: string
    required: false
    default: "preview"
    enum:
      - preview
      - file
      - both
    description: "Output mode: preview (show only), file (write only), or both (default: preview)"

permissions:
  - filesystem:read
  - filesystem:write

tags:
  - llm
  - router
  - configuration
  - meta
  - infra
  - openrouter
  - claude
  - ollama
  - multi-model

artifact_metadata:
  consumes:
    - type: router-config-input
      description: Router configuration input file (YAML or JSON)
      file_pattern: "*-router-input.{json,yaml,yml}"
      content_type: application/yaml

  produces:
    - type: llm-router-config
      description: Complete Claude Code Router configuration
      file_pattern: "config.json"
      content_type: application/json

    - type: audit-log-entry
      description: Audit trail entry for configuration events
      file_pattern: "audit_log.json"
      content_type: application/json

examples:
  - name: Preview configuration (no file write)
    command: /meta-config-router --routing_config_path=examples/router-config.yaml

  - name: Apply configuration to disk
    command: /meta-config-router --routing_config_path=examples/router-config.yaml --apply_config

  - name: Both preview and write
    command: /meta-config-router --routing_config_path=examples/router-config.yaml --apply_config --output_mode=both

notes:
  - "API keys can use environment variable substitution (e.g., ${OPENROUTER_API_KEY})"
  - "Local providers (localhost/127.0.0.1) don't require API keys"
  - "All configuration changes are audited for traceability"
  - "Preview mode allows verification before applying changes"
25
commands/optimize-build.yaml
Normal file
@@ -0,0 +1,25 @@
name: /optimize-build
version: 0.1.0
description: Optimize build processes and speed
parameters:
- name: project_path
  type: string
  description: Path to project root directory
  required: false
  default: .
- name: format
  type: enum
  description: Output format
  required: false
  default: human
  values:
  - human
  - json
execution:
  type: skill
  target: build.optimize
status: active
tags:
- build
- optimization
- performance
1225
plugin.lock.json
Normal file
File diff suppressed because it is too large
312
skills/README.md
Normal file
@@ -0,0 +1,312 @@
# Betty Framework Skills

## ⚙️ **Integration Note: Claude Code Plugin System**

**Betty skills are Claude Code plugins.** You do not invoke skills via standalone CLI commands (`betty` or direct Python scripts). Instead:

- **Claude Code serves as the execution environment** for all skill execution
- Each skill is registered through its `skill.yaml` manifest
- Skills become automatically discoverable and executable through Claude Code's natural language interface
- All routing, validation, and execution is handled by Claude Code via MCP (Model Context Protocol)

**No separate installation step is needed** beyond plugin registration in your Claude Code environment.

---

This directory contains skill manifests and implementations for the Betty Framework.

## What are Skills?

Skills are **atomic, composable building blocks** that execute specific operations. Unlike agents (which orchestrate multiple skills with reasoning) or workflows (which follow fixed sequential steps), skills are:

- **Atomic** — Each skill does one thing well
- **Composable** — Skills can be combined into complex workflows
- **Auditable** — Every execution is logged with inputs, outputs, and provenance
- **Type-safe** — Inputs and outputs are validated against schemas

## Directory Structure

Each skill has its own directory containing:
```
skills/
├── <skill-name>/
│   ├── skill.yaml          # Skill manifest (required)
│   ├── SKILL.md            # Documentation (auto-generated)
│   ├── <skill_name>.py     # Implementation handler (required)
│   ├── requirements.txt    # Python dependencies (optional)
│   └── tests/              # Skill tests (optional)
│       └── test_skill.py
```

## Creating a Skill

### Using meta.skill (Recommended)

**Via Claude Code:**
```
"Use meta.skill to create a custom.processor skill that processes custom data formats,
accepts raw-data and config as inputs, and outputs processed-data"
```

**Direct execution (development/testing):**
```bash
cat > /tmp/my_skill.md <<'EOF'
# Name: custom.processor
# Purpose: Process custom data formats
# Inputs: raw-data, config
# Outputs: processed-data
# Dependencies: python-processing-tools
EOF
python agents/meta.skill/meta_skill.py /tmp/my_skill.md
```

### Manual Creation

1. Create skill directory:
   ```bash
   mkdir -p skills/custom.processor
   ```

2. Create skill manifest (`skills/custom.processor/skill.yaml`):
   ```yaml
   name: custom.processor
   version: 0.1.0
   description: "Process custom data formats"

   inputs:
     - name: raw-data
       type: file
       description: "Input data file"
       required: true
     - name: config
       type: object
       description: "Processing configuration"
       required: false

   outputs:
     - name: processed-data
       type: file
       description: "Processed output file"

   dependencies:
     - python-processing-tools

   status: draft
   ```

3. Implement the handler (`skills/custom.processor/custom_processor.py`):
   ```python
   #!/usr/bin/env python3
   """Custom data processor skill implementation."""

   import sys
   from pathlib import Path

   def main():
       if len(sys.argv) < 2:
           print("Usage: custom_processor.py <raw-data> [config]")
           sys.exit(1)

       raw_data = Path(sys.argv[1])
       config = sys.argv[2] if len(sys.argv) > 2 else None

       # Your processing logic here
       print(f"Processing {raw_data} with config {config}")

   if __name__ == "__main__":
       main()
   ```

4. Validate and register:

   **Via Claude Code:**
   ```
   "Use skill.define to validate skills/custom.processor/skill.yaml,
   then use registry.update to register it"
   ```

   **Direct execution (development/testing):**
   ```bash
   python skills/skill.define/skill_define.py skills/custom.processor/skill.yaml
   python skills/registry.update/registry_update.py skills/custom.processor/skill.yaml
   ```

## Skill Manifest Schema

### Required Fields

| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Unique identifier (e.g., `api.validate`) |
| `version` | string | Semantic version (e.g., `0.1.0`) |
| `description` | string | Human-readable purpose statement |
| `inputs` | array[object] | Input parameters and their types |
| `outputs` | array[object] | Output artifacts and their types |

### Optional Fields

| Field | Type | Description |
|-------|------|-------------|
| `status` | enum | `draft`, `active`, `deprecated`, `archived` |
| `dependencies` | array[string] | External tools or libraries required |
| `tags` | array[string] | Categorization tags |
| `examples` | array[object] | Usage examples |
| `error_handling` | object | Error handling strategies |

## Skill Categories

### Foundation Skills
- **skill.create** — Generate new skill scaffolding
- **skill.define** — Validate skill manifests
- **registry.update** — Update component registries
- **workflow.compose** — Chain skills into workflows

### API Development Skills
- **api.define** — Create API specifications
- **api.validate** — Validate specs against guidelines
- **api.generate-models** — Generate type-safe models
- **api.compatibility** — Detect breaking changes

### Governance Skills
- **audit.log** — Record audit events
- **policy.enforce** — Validate against policies
- **telemetry.capture** — Capture usage metrics
- **registry.query** — Query component registry

### Infrastructure Skills
- **agent.define** — Validate agent manifests
- **agent.run** — Execute agents
- **plugin.build** — Bundle plugins
- **plugin.sync** — Sync plugin manifests

### Documentation Skills
- **docs.sync.readme** — Regenerate README files
- **generate.docs** — Auto-generate documentation
- **docs.validate.skill_docs** — Validate documentation completeness

## Using Skills

### Via Claude Code (Recommended)

Simply ask Claude to execute the skill by name:

```
"Use api.validate to check specs/user-service.openapi.yaml against Zalando guidelines"

"Use artifact.create to create a threat-model artifact named payment-system-threats"

"Use registry.query to find all skills in the api category"
```

### Direct Execution (Development/Testing)

For development and testing, you can invoke skill handlers directly:

```bash
python skills/api.validate/api_validate.py specs/user-service.openapi.yaml

python skills/artifact.create/artifact_create.py \
  threat-model \
  "Payment processing system" \
  ./artifacts/threat-model.yaml

python skills/registry.query/registry_query.py --category api
```

## Validation

All skill manifests are automatically validated for:
- Required fields presence
- Name format (`^[a-z][a-z0-9._-]*$`)
- Version format (semantic versioning)
- Input/output schema correctness
- Dependency declarations
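
The name check, for instance, reduces to a single regular-expression match. A minimal sketch (the pattern is the one stated above; the helper name is ours):

```python
import re

NAME_PATTERN = re.compile(r"^[a-z][a-z0-9._-]*$")

def is_valid_skill_name(name: str) -> bool:
    """Return True if the name satisfies the manifest naming rule."""
    return bool(NAME_PATTERN.match(name))

assert is_valid_skill_name("api.validate")
assert not is_valid_skill_name("API.Validate")  # uppercase is rejected
```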

## Registry

Validated skills are registered in `/registry/skills.json`:
```json
{
  "registry_version": "1.0.0",
  "generated_at": "2025-10-26T00:00:00Z",
  "skills": [
    {
      "name": "api.validate",
      "version": "0.1.0",
      "description": "Validate API specs against guidelines",
      "inputs": [...],
      "outputs": [...],
      "status": "active"
    }
  ]
}
```

## Composing Skills into Workflows

Skills can be chained together using the `workflow.compose` skill:

**Via Claude Code:**
```
"Use workflow.compose to create a workflow that:
1. Uses api.define to create a spec
2. Uses api.validate to check it
3. Uses api.generate-models to create TypeScript models"
```

**Workflow YAML definition:**
```yaml
name: api-development-workflow
version: 0.1.0

steps:
  - skill: api.define
    inputs:
      service_name: "user-service"
    outputs:
      spec_path: "${OUTPUT_DIR}/user-service.openapi.yaml"

  - skill: api.validate
    inputs:
      spec_path: "${steps[0].outputs.spec_path}"
    outputs:
      validation_report: "${OUTPUT_DIR}/validation-report.json"

  - skill: api.generate-models
    inputs:
      spec_path: "${steps[0].outputs.spec_path}"
      language: "typescript"
    outputs:
      models_dir: "${OUTPUT_DIR}/models/"
```

## Testing Skills

Skills should include comprehensive tests:

```python
# tests/test_custom_processor.py
import pytest
from skills.custom_processor import custom_processor

def test_processor_with_valid_input():
    result = custom_processor.process("test-data.json", {"format": "json"})
    assert result.success
    assert result.output_path.exists()

def test_processor_with_invalid_input():
    with pytest.raises(ValueError):
        custom_processor.process("nonexistent.json")
```

Run tests:
```bash
pytest tests/test_custom_processor.py
```

## See Also

- [Main README](../README.md) — Framework overview
- [Agents README](../agents/README.md) — Skill orchestration
- [Skills Framework](../docs/skills-framework.md) — Complete skill taxonomy
- [Betty Architecture](../docs/betty-architecture.md) — Five-layer architecture
0
skills/__init__.py
Normal file
1
skills/agent.compose/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
329
skills/agent.compose/agent_compose.py
Normal file
@@ -0,0 +1,329 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
agent_compose.py - Recommend skills for Betty agents based on purpose
|
||||
|
||||
Analyzes skill artifact metadata to suggest compatible skill combinations.
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
import json
|
||||
import yaml
|
||||
from typing import Dict, Any, List, Optional, Set
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
from betty.config import BASE_DIR
|
||||
from betty.logging_utils import setup_logger
|
||||
|
||||
logger = setup_logger(__name__)
|
||||
|
||||
|
||||
def load_registry() -> Dict[str, Any]:
|
||||
"""Load skills registry."""
|
||||
registry_path = os.path.join(BASE_DIR, "registry", "skills.json")
|
||||
with open(registry_path) as f:
|
||||
return json.load(f)
|
||||
|
||||
|
||||
def extract_artifact_metadata(skill: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""
|
||||
Extract artifact metadata from a skill.
|
||||
|
||||
Returns:
|
||||
Dict with 'produces' and 'consumes' sets
|
||||
"""
|
||||
metadata = skill.get("artifact_metadata", {})
|
||||
return {
|
||||
"produces": set(a.get("type") for a in metadata.get("produces", [])),
|
||||
"consumes": set(a.get("type") for a in metadata.get("consumes", []))
|
||||
}
|
||||
|
||||
|
||||
def find_skills_by_artifacts(
|
||||
registry: Dict[str, Any],
|
||||
produces: Optional[List[str]] = None,
|
||||
consumes: Optional[List[str]] = None
|
||||
) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Find skills that produce or consume specific artifacts.
|
||||
|
||||
Args:
|
||||
registry: Skills registry
|
||||
produces: Artifact types to produce
|
||||
consumes: Artifact types to consume
|
||||
|
||||
Returns:
|
||||
List of matching skills with metadata
|
||||
"""
|
||||
skills = registry.get("skills", [])
|
||||
matches = []
|
||||
|
||||
for skill in skills:
|
||||
if skill.get("status") != "active":
|
||||
continue
|
||||
|
||||
artifacts = extract_artifact_metadata(skill)
|
||||
|
||||
# Check if skill produces required artifacts
|
||||
produces_match = not produces or any(
|
||||
artifact in artifacts["produces"] for artifact in produces
|
||||
)
|
||||
|
||||
# Check if skill consumes specified artifacts
|
||||
consumes_match = not consumes or any(
|
||||
artifact in artifacts["consumes"] for artifact in consumes
|
||||
)
|
||||
|
||||
if produces_match or consumes_match:
|
||||
matches.append({
|
||||
"name": skill["name"],
|
||||
"description": skill.get("description", ""),
|
||||
"produces": list(artifacts["produces"]),
|
||||
"consumes": list(artifacts["consumes"]),
|
||||
"tags": skill.get("tags", [])
|
||||
})
|
||||
|
||||
return matches


def find_skills_for_purpose(
    registry: Dict[str, Any],
    purpose: str,
    required_artifacts: Optional[List[str]] = None
) -> Dict[str, Any]:
    """
    Find skills for agent purpose (alias for recommend_skills_for_purpose).

    Args:
        registry: Skills registry (for compatibility, currently unused)
        purpose: Description of agent purpose
        required_artifacts: Artifact types agent needs to work with

    Returns:
        Recommendation result with skills and rationale
    """
    return recommend_skills_for_purpose(purpose, required_artifacts)


def recommend_skills_for_purpose(
    agent_purpose: str,
    required_artifacts: Optional[List[str]] = None
) -> Dict[str, Any]:
    """
    Recommend skills based on agent purpose and required artifacts.

    Args:
        agent_purpose: Description of agent purpose
        required_artifacts: Artifact types agent needs to work with

    Returns:
        Recommendation result with skills and rationale
    """
    registry = load_registry()
    recommended = []
    rationale = {}

    # Keyword matching for purpose
    purpose_lower = agent_purpose.lower()
    keywords = {
        "api": ["api.define", "api.validate", "api.generate-models", "api.compatibility"],
        "workflow": ["workflow.validate", "workflow.compose"],
        "hook": ["hook.define"],
        "validate": ["api.validate", "workflow.validate"],
        "design": ["api.define"],
    }

    # Find skills by keywords
    matched_by_keyword = set()
    for keyword, skill_names in keywords.items():
        if keyword in purpose_lower:
            matched_by_keyword.update(skill_names)

    # Find skills by required artifacts
    matched_by_artifacts = set()
    if required_artifacts:
        artifact_skills = find_skills_by_artifacts(
            registry,
            produces=required_artifacts,
            consumes=required_artifacts
        )
        matched_by_artifacts.update(s["name"] for s in artifact_skills)

    # Combine matches
    all_matches = matched_by_keyword | matched_by_artifacts

    # Build recommendation with rationale
    skills = registry.get("skills", [])
    for skill in skills:
        skill_name = skill.get("name")

        if skill_name in all_matches:
            reasons = []

            if skill_name in matched_by_keyword:
                reasons.append("Purpose matches skill capabilities")

            artifacts = extract_artifact_metadata(skill)
            if required_artifacts:
                produces_match = artifacts["produces"] & set(required_artifacts)
                consumes_match = artifacts["consumes"] & set(required_artifacts)

                if produces_match:
                    reasons.append(f"Produces: {', '.join(produces_match)}")
                if consumes_match:
                    reasons.append(f"Consumes: {', '.join(consumes_match)}")

            recommended.append(skill_name)
            rationale[skill_name] = {
                "description": skill.get("description", ""),
                "reasons": reasons,
                "produces": list(artifacts["produces"]),
                "consumes": list(artifacts["consumes"])
            }

    return {
        "recommended_skills": recommended,
        "rationale": rationale,
        "total_recommended": len(recommended)
    }


def analyze_artifact_flow(skills_metadata: List[Dict[str, Any]]) -> Dict[str, Any]:
    """
    Analyze artifact flow between recommended skills.

    Args:
        skills_metadata: List of skill metadata

    Returns:
        Flow analysis showing how artifacts move between skills
    """
    all_produces = set()
    all_consumes = set()
    flows = []

    for skill in skills_metadata:
        produces = set(skill.get("produces", []))
        consumes = set(skill.get("consumes", []))

        all_produces.update(produces)
        all_consumes.update(consumes)

        for artifact in produces:
            consumers = [
                s["name"] for s in skills_metadata
                if artifact in s.get("consumes", [])
            ]
            if consumers:
                flows.append({
                    "artifact": artifact,
                    "producer": skill["name"],
                    "consumers": consumers
                })

    # Find gaps (consumed but not produced)
    gaps = all_consumes - all_produces

    return {
        "flows": flows,
        "gaps": list(gaps),
        "fully_covered": len(gaps) == 0
    }
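

# Illustrative example (a documentation sketch, not a shipped test): two skills
# whose artifacts chain cleanly yield one flow and no gaps.
#
#   analyze_artifact_flow([
#       {"name": "api.define", "produces": ["openapi-spec"], "consumes": []},
#       {"name": "api.validate", "produces": [], "consumes": ["openapi-spec"]},
#   ])
#   => {"flows": [{"artifact": "openapi-spec", "producer": "api.define",
#                  "consumers": ["api.validate"]}],
#       "gaps": [], "fully_covered": True}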


def main():
    """CLI entry point."""
    import argparse

    parser = argparse.ArgumentParser(
        description="Recommend skills for a Betty agent"
    )
    parser.add_argument(
        "agent_purpose",
        help="Description of what the agent should do"
    )
    parser.add_argument(
        "--required-artifacts",
        nargs="+",
        help="Artifact types the agent needs to work with"
    )
    parser.add_argument(
        "--output-format",
        choices=["yaml", "json", "markdown"],
        default="yaml",
        help="Output format"
    )

    args = parser.parse_args()

    logger.info(f"Finding skills for agent purpose: {args.agent_purpose}")

    try:
        # Get recommendations
        result = recommend_skills_for_purpose(
            args.agent_purpose,
            args.required_artifacts
        )

        # Analyze artifact flow (tag each rationale entry with its skill name first)
        for skill_name, metadata in result["rationale"].items():
            metadata["name"] = skill_name
        skills_metadata = list(result["rationale"].values())

        flow_analysis = analyze_artifact_flow(skills_metadata)
        result["artifact_flow"] = flow_analysis

        # Format output
        if args.output_format == "yaml":
            print("\n# Recommended Skills for Agent\n")
            print(f"# Purpose: {args.agent_purpose}\n")
            print("skills_available:")
            for skill in result["recommended_skills"]:
                print(f"  - {skill}")

            print("\n# Rationale:")
            for skill_name, rationale in result["rationale"].items():
                print(f"\n# {skill_name}:")
                print(f"#   {rationale['description']}")
                for reason in rationale["reasons"]:
                    print(f"#   - {reason}")

        elif args.output_format == "markdown":
            print(f"\n## Recommended Skills for: {args.agent_purpose}\n")
            print("### Skills\n")
            for skill in result["recommended_skills"]:
                rationale = result["rationale"][skill]
                print(f"**{skill}**")
                print(f"- {rationale['description']}")
                for reason in rationale["reasons"]:
                    print(f"  - {reason}")
                print()

        else:  # json
            print(json.dumps(result, indent=2))

        # Show warnings for gaps
        if flow_analysis["gaps"]:
            logger.warning("\n⚠️  Artifact gaps detected:")
            for gap in flow_analysis["gaps"]:
                logger.warning(f"  - '{gap}' is consumed but not produced")
            logger.warning("  Consider adding skills that produce these artifacts")

        logger.info(f"\n✅ Recommended {result['total_recommended']} skills")

        sys.exit(0)

    except Exception as e:
        logger.error(f"Failed to compose agent: {e}")
        result = {
            "ok": False,
            "status": "failed",
            "error": str(e)
        }
        print(json.dumps(result, indent=2))
        sys.exit(1)


if __name__ == "__main__":
    main()
102
skills/agent.compose/skill.yaml
Normal file
@@ -0,0 +1,102 @@
name: agent.compose
version: 0.1.0
description: >
  Recommend skills for a Betty agent based on its purpose and responsibilities.
  Analyzes artifact flows, ensures skill compatibility, and suggests optimal
  skill combinations for agent definitions.

inputs:
  - name: agent_purpose
    type: string
    required: true
    description: Description of what the agent should do (e.g., "Design and validate APIs")

  - name: required_artifacts
    type: array
    required: false
    description: Artifact types the agent needs to work with (e.g., ["openapi-spec"])

  - name: output_format
    type: string
    required: false
    default: yaml
    description: Output format (yaml, json, or markdown)

  - name: include_rationale
    type: boolean
    required: false
    default: true
    description: Include explanation of why each skill was recommended

outputs:
  - name: recommended_skills
    type: array
    description: List of recommended skill names

  - name: skills_with_rationale
    type: object
    description: Skills with explanation of why they were recommended

  - name: artifact_flow
    type: object
    description: Diagram showing how artifacts flow between recommended skills

  - name: compatibility_report
    type: object
    description: Validation that recommended skills work together

dependencies:
  - registry.query

entrypoints:
  - command: /agent/compose
    handler: agent_compose.py
    runtime: python
    description: >
      Recommend skills for an agent based on its purpose. Analyzes the registry
      to find skills that produce/consume compatible artifacts, ensures no gaps
      in artifact flow, and suggests optimal skill combinations.
    parameters:
      - name: agent_purpose
        type: string
        required: true
        description: What the agent should do
      - name: required_artifacts
        type: array
        required: false
        description: Artifact types to work with
      - name: output_format
        type: string
        required: false
        default: yaml
        description: Output format (yaml, json, markdown)
      - name: include_rationale
        type: boolean
        required: false
        default: true
        description: Include explanations
    permissions:
      - filesystem:read

status: active

tags:
  - agents
  - composition
  - artifacts
  - scaffolding
  - interoperability
  - layer3

# This skill's own artifact metadata
artifact_metadata:
  produces:
    - type: agent-skill-recommendation
      description: Recommended skills list with compatibility analysis for agent definitions
      file_pattern: "agent-skills-recommendation.{yaml,json}"
      content_type: application/yaml

  consumes:
    - type: registry-data
      description: Betty Framework registry containing skills and their artifact metadata
      required: true
376
skills/agent.define/SKILL.md
Normal file
@@ -0,0 +1,376 @@
# agent.define Skill

Validates and registers agent manifests for the Betty Framework.

## Purpose

The `agent.define` skill is the Layer 2 (Reasoning Layer) equivalent of `skill.define`. It validates agent manifests (`agent.yaml`) for schema compliance, verifies skill references, and updates the central Agent Registry.

## Capabilities

- Validate agent manifest structure and required fields
- Verify agent name and version formats
- Validate reasoning mode enum values
- Check that all referenced skills exist in the skill registry
- Ensure capabilities and skills lists are non-empty
- Validate status lifecycle values
- Register valid agents in `/registry/agents.json`
- Update existing agent entries with new versions

## Usage

### Command Line

```bash
python skills/agent.define/agent_define.py <path_to_agent.yaml>
```

### Arguments

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| `manifest_path` | string | Yes | Path to the agent.yaml file to validate |

### Exit Codes

- `0`: Validation succeeded and agent was registered
- `1`: Validation failed or registration error

## Validation Rules

### Required Fields

All agent manifests must include:

| Field | Type | Validation |
|-------|------|------------|
| `name` | string | Must match `^[a-z][a-z0-9._-]*$` |
| `version` | string | Must follow semantic versioning |
| `description` | string | Non-empty string (1-200 chars recommended) |
| `capabilities` | array[string] | Must contain at least one item |
| `skills_available` | array[string] | Must contain at least one item; all skills must exist in the registry |
| `reasoning_mode` | enum | Must be `iterative` or `oneshot` |

### Optional Fields

| Field | Type | Default | Validation |
|-------|------|---------|------------|
| `status` | enum | `draft` | Must be `draft`, `active`, `deprecated`, or `archived` |
| `context_requirements` | object | `{}` | Any valid object |
| `workflow_pattern` | string | `null` | Any string |
| `example_task` | string | `null` | Any string |
| `error_handling` | object | `{}` | Any valid object |
| `output` | object | `{}` | Any valid object |
| `tags` | array[string] | `[]` | Array of strings |
| `dependencies` | array[string] | `[]` | Array of strings |

### Name Format

Agent names must:
- Start with a lowercase letter
- Contain only lowercase letters, numbers, dots, hyphens, underscores
- Follow the pattern: `<domain>.<action>`

**Valid**: `api.designer`, `compliance.checker`, `data-migrator`
**Invalid**: `ApiDesigner`, `1agent`, `agent_name`

### Version Format

Versions must follow semantic versioning: `MAJOR.MINOR.PATCH[-prerelease]`

**Valid**: `0.1.0`, `1.0.0`, `2.3.1-beta`, `1.0.0-rc.1`
**Invalid**: `1.0`, `v1.0.0`, `1.0.0.0`

### Reasoning Mode

Must be one of:
- `iterative`: Agent can retry with feedback and refine based on errors
- `oneshot`: Agent executes once without retry

### Skills Validation

All skills in `skills_available` must exist in the skill registry (`/registry/skills.json`).

## Response Format

### Success Response

```json
{
  "ok": true,
  "status": "success",
  "errors": [],
  "path": "agents/api.designer/agent.yaml",
  "details": {
    "valid": true,
    "errors": [],
    "path": "agents/api.designer/agent.yaml",
    "manifest": {
      "name": "api.designer",
      "version": "0.1.0",
      "description": "Design RESTful APIs...",
      "capabilities": [...],
      "skills_available": [...],
      "reasoning_mode": "iterative"
    },
    "status": "registered",
    "registry_updated": true
  }
}
```

### Failure Response

```json
{
  "ok": false,
  "status": "failed",
  "errors": [
    "Missing required fields: capabilities, skills_available",
    "Invalid reasoning_mode: 'hybrid'. Must be one of: iterative, oneshot"
  ],
  "path": "agents/bad.agent/agent.yaml",
  "details": {
    "valid": false,
    "errors": [
      "Missing required fields: capabilities, skills_available",
      "Invalid reasoning_mode: 'hybrid'. Must be one of: iterative, oneshot"
    ],
    "path": "agents/bad.agent/agent.yaml"
  }
}
```

## Examples

### Example 1: Validate Iterative Agent

**Agent Manifest** (`agents/api.designer/agent.yaml`):
```yaml
name: api.designer
version: 0.1.0
description: "Design RESTful APIs following enterprise guidelines"

capabilities:
  - Design RESTful APIs from requirements
  - Apply Zalando guidelines automatically
  - Generate OpenAPI 3.1 specs
  - Iteratively refine based on validation feedback

skills_available:
  - api.define
  - api.validate
  - api.generate-models

reasoning_mode: iterative

status: draft

tags:
  - api
  - design
  - openapi
```

**Command**:
```bash
python skills/agent.define/agent_define.py agents/api.designer/agent.yaml
```

**Output**:
```json
{
  "ok": true,
  "status": "success",
  "errors": [],
  "path": "agents/api.designer/agent.yaml",
  "details": {
    "valid": true,
    "status": "registered",
    "registry_updated": true
  }
}
```

### Example 2: Validation Errors

**Agent Manifest** (`agents/bad.agent/agent.yaml`):
```yaml
name: BadAgent          # Invalid: must be lowercase
version: 1.0            # Invalid: must be semver
description: "Test agent"
capabilities: []        # Invalid: must have at least one
skills_available:
  - nonexistent.skill   # Invalid: skill doesn't exist
reasoning_mode: hybrid  # Invalid: must be iterative or oneshot
```

**Command**:
```bash
python skills/agent.define/agent_define.py agents/bad.agent/agent.yaml
```

**Output**:
```json
{
  "ok": false,
  "status": "failed",
  "errors": [
    "Invalid name: Invalid agent name: 'BadAgent'. Must start with lowercase letter...",
    "Invalid version: Invalid version: '1.0'. Must follow semantic versioning...",
    "Invalid reasoning_mode: Invalid reasoning_mode: 'hybrid'. Must be one of: iterative, oneshot",
    "capabilities must contain at least one item",
    "Skills not found in registry: nonexistent.skill"
  ],
  "path": "agents/bad.agent/agent.yaml"
}
```

### Example 3: Oneshot Agent

**Agent Manifest** (`agents/api.analyzer/agent.yaml`):
```yaml
name: api.analyzer
version: 0.1.0
description: "Analyze API specifications for compatibility"

capabilities:
  - Detect breaking changes between API versions
  - Generate compatibility reports
  - Suggest migration paths

skills_available:
  - api.compatibility

reasoning_mode: oneshot

output:
  success:
    - Compatibility report
    - Breaking changes list
  failure:
    - Error analysis

status: active

tags:
  - api
  - analysis
  - compatibility
```

**Command**:
```bash
python skills/agent.define/agent_define.py agents/api.analyzer/agent.yaml
```

**Result**: Agent validated and registered successfully.

## Integration

### With Registry

The skill automatically updates `/registry/agents.json`:

```json
{
  "registry_version": "1.0.0",
  "generated_at": "2025-10-23T10:30:00Z",
  "agents": [
    {
      "name": "api.designer",
      "version": "0.1.0",
      "description": "Design RESTful APIs following enterprise guidelines",
      "reasoning_mode": "iterative",
      "skills_available": ["api.define", "api.validate", "api.generate-models"],
      "capabilities": ["Design RESTful APIs from requirements", ...],
      "status": "draft",
      "tags": ["api", "design", "openapi"],
      "dependencies": []
    }
  ]
}
```

### With Other Skills

- **Depends on**: `skill.define` (for skill registry validation)
- **Used by**: Future `command.define` skill (to register commands that invoke agents)
- **Complements**: `workflow.compose` (agents orchestrate skills; workflows execute fixed sequences)

## Common Errors

### Missing Skills in Registry

**Error**:
```
Skills not found in registry: api.nonexistent, data.missing
```

**Solution**: Ensure all skills in `skills_available` are registered in `/registry/skills.json`. Check skill names for typos.

### Invalid Reasoning Mode

**Error**:
```
Invalid reasoning_mode: 'hybrid'. Must be one of: iterative, oneshot
```

**Solution**: Use `iterative` for agents that retry with feedback, or `oneshot` for deterministic execution.

### Empty Capabilities

**Error**:
```
capabilities must contain at least one item
```

**Solution**: Add at least one capability string describing what the agent can do.

### Invalid Name Format

**Error**:
```
Invalid agent name: 'API-Designer'. Must start with lowercase letter...
```

**Solution**: Use lowercase names following the pattern `<domain>.<action>` (e.g., `api.designer`).

## Development

### Testing

Create test agent manifests in `/agents/test/`:

```bash
# Create test directory
mkdir -p agents/test.agent

# Create minimal test manifest
cat > agents/test.agent/agent.yaml << EOF
name: test.agent
version: 0.1.0
description: "Test agent"
capabilities:
  - Test capability
skills_available:
  - skill.define
reasoning_mode: oneshot
status: draft
EOF

# Validate
python skills/agent.define/agent_define.py agents/test.agent/agent.yaml
```

### Registry Location

- Skill registry: `/registry/skills.json` (read for validation)
- Agent registry: `/registry/agents.json` (updated by this skill)

## See Also

- [Agent Schema Reference](../../docs/agent-schema-reference.md) - Complete field specifications
- [Betty Architecture](../../docs/betty-architecture.md) - Five-layer architecture overview
- [Agent Implementation Plan](../../docs/agent-define-implementation-plan.md) - Implementation details
- `/agents/README.md` - Agent directory documentation
1
skills/agent.define/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
419
skills/agent.define/agent_define.py
Executable file
@@ -0,0 +1,419 @@
#!/usr/bin/env python3
"""
agent_define.py – Implementation of the agent.define Skill
Validates agent manifests (agent.yaml) and registers them in the Agent Registry.
"""

import os
import sys
import json
import yaml
from typing import Dict, Any, List, Optional
from datetime import datetime, timezone
from pydantic import ValidationError as PydanticValidationError

from betty.config import (
    BASE_DIR,
    REQUIRED_AGENT_FIELDS,
    AGENTS_REGISTRY_FILE,
    REGISTRY_FILE,
)
from betty.enums import AgentStatus, ReasoningMode
from betty.validation import (
    validate_path,
    validate_manifest_fields,
    validate_agent_name,
    validate_version,
    validate_reasoning_mode,
    validate_skills_exist
)
from betty.logging_utils import setup_logger
from betty.errors import AgentValidationError, AgentRegistryError, format_error_response
from betty.models import AgentManifest
from betty.file_utils import atomic_write_json

logger = setup_logger(__name__)


def build_response(ok: bool, path: str, errors: Optional[List[str]] = None, details: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    """
    Build standardized response dictionary.

    Args:
        ok: Whether operation succeeded
        path: Path to agent manifest
        errors: List of error messages
        details: Additional details

    Returns:
        Response dictionary
    """
    response: Dict[str, Any] = {
        "ok": ok,
        "status": "success" if ok else "failed",
        "errors": errors or [],
        "path": path,
    }

    if details is not None:
        response["details"] = details

    return response


def load_agent_manifest(path: str) -> Dict[str, Any]:
    """
    Load and parse an agent manifest from a YAML file.

    Args:
        path: Path to agent manifest file

    Returns:
        Parsed manifest dictionary

    Raises:
        AgentValidationError: If manifest cannot be loaded or parsed
    """
    try:
        with open(path) as f:
            manifest = yaml.safe_load(f)
        return manifest
    except FileNotFoundError:
        raise AgentValidationError(f"Manifest file not found: {path}")
    except yaml.YAMLError as e:
        raise AgentValidationError(f"Failed to parse YAML: {e}")


def load_skill_registry() -> Dict[str, Any]:
    """
    Load skill registry for validation.

    Returns:
        Skill registry dictionary

    Raises:
        AgentValidationError: If registry cannot be loaded
    """
    try:
        with open(REGISTRY_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        raise AgentValidationError(f"Skill registry not found: {REGISTRY_FILE}")
    except json.JSONDecodeError as e:
        raise AgentValidationError(f"Failed to parse skill registry: {e}")


def validate_agent_schema(manifest: Dict[str, Any]) -> List[str]:
    """
    Validate agent manifest using the Pydantic schema.

    Args:
        manifest: Agent manifest dictionary

    Returns:
        List of validation errors (empty if valid)
    """
    errors: List[str] = []

    try:
        AgentManifest.model_validate(manifest)
        logger.info("Pydantic schema validation passed for agent manifest")
    except PydanticValidationError as exc:
        logger.warning("Pydantic schema validation failed for agent manifest")
        for error in exc.errors():
            field = ".".join(str(loc) for loc in error["loc"])
            message = error["msg"]
            error_type = error["type"]
            errors.append(f"Schema validation error at '{field}': {message} (type: {error_type})")

    return errors


def validate_manifest(path: str) -> Dict[str, Any]:
    """
    Validate that an agent manifest meets all requirements.

    Validation checks:
    1. Required fields are present
    2. Name format is valid
    3. Version format is valid
    4. Reasoning mode is valid
    5. All referenced skills exist in the skill registry
    6. Capabilities list is non-empty
    7. Skills list is non-empty

    Args:
        path: Path to agent manifest file

    Returns:
        Dictionary with validation results:
        - valid: Boolean indicating if manifest is valid
        - errors: List of validation errors (if any)
        - manifest: The parsed manifest (if valid)
        - path: Path to the manifest file
    """
    validate_path(path, must_exist=True)

    logger.info(f"Validating agent manifest: {path}")

    errors = []

    # Load manifest
    try:
        manifest = load_agent_manifest(path)
    except AgentValidationError as e:
        return {
            "valid": False,
            "errors": [str(e)],
            "path": path
        }

    # Check required fields first so high-level issues are reported clearly
    missing = validate_manifest_fields(manifest, REQUIRED_AGENT_FIELDS)
    if missing:
        missing_message = f"Missing required fields: {', '.join(missing)}"
        errors.append(missing_message)
        logger.warning(f"Missing required fields: {missing}")

    # Validate with Pydantic schema while continuing custom validation
    schema_errors = validate_agent_schema(manifest)
    errors.extend(schema_errors)

    name = manifest.get("name")
    if name is not None:
        try:
            validate_agent_name(name)
        except Exception as e:
            errors.append(f"Invalid name: {str(e)}")
            logger.warning(f"Invalid name: {e}")

    version = manifest.get("version")
    if version is not None:
        try:
            validate_version(version)
        except Exception as e:
            errors.append(f"Invalid version: {str(e)}")
            logger.warning(f"Invalid version: {e}")

    reasoning_mode = manifest.get("reasoning_mode")
    if reasoning_mode is not None:
        try:
            validate_reasoning_mode(reasoning_mode)
        except Exception as e:
            errors.append(f"Invalid reasoning_mode: {str(e)}")
            logger.warning(f"Invalid reasoning_mode: {e}")
    elif "reasoning_mode" not in missing:
        errors.append("reasoning_mode must be provided")
        logger.warning("Reasoning mode missing")

    # Validate capabilities is non-empty
    capabilities = manifest.get("capabilities", [])
    if not capabilities:
        errors.append("capabilities must contain at least one item")
        logger.warning("Empty capabilities list")

    # Validate skills_available is non-empty
    skills_available = manifest.get("skills_available", [])
    if not skills_available:
        errors.append("skills_available must contain at least one item")
        logger.warning("Empty skills_available list")

    # Validate all skills exist in the skill registry
    if skills_available:
        try:
            skill_registry = load_skill_registry()
            missing_skills = validate_skills_exist(skills_available, skill_registry)
            if missing_skills:
                errors.append(f"Skills not found in registry: {', '.join(missing_skills)}")
                logger.warning(f"Missing skills: {missing_skills}")
        except AgentValidationError as e:
            errors.append(f"Could not validate skills: {str(e)}")
            logger.error(f"Skill validation error: {e}")

    # Validate status if present
    if "status" in manifest:
        valid_statuses = [s.value for s in AgentStatus]
        if manifest["status"] not in valid_statuses:
            errors.append(f"Invalid status: '{manifest['status']}'. Must be one of: {', '.join(valid_statuses)}")
            logger.warning(f"Invalid status: {manifest['status']}")

    if errors:
        logger.warning(f"Validation failed with {len(errors)} error(s)")
        return {
            "valid": False,
            "errors": errors,
            "path": path
        }

    logger.info("✅ Agent manifest validation passed")
    return {
        "valid": True,
        "errors": [],
        "path": path,
        "manifest": manifest
    }


def load_agent_registry() -> Dict[str, Any]:
    """
    Load existing agent registry.

    Returns:
        Agent registry dictionary, or a new empty registry if the file doesn't exist
    """
    if not os.path.exists(AGENTS_REGISTRY_FILE):
        logger.info("Agent registry not found, creating new registry")
        return {
            "registry_version": "1.0.0",
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "agents": []
        }

    try:
        with open(AGENTS_REGISTRY_FILE) as f:
            registry = json.load(f)
        logger.info(f"Loaded agent registry with {len(registry.get('agents', []))} agent(s)")
        return registry
    except json.JSONDecodeError as e:
        raise AgentRegistryError(f"Failed to parse agent registry: {e}")


def update_agent_registry(manifest: Dict[str, Any]) -> bool:
    """
    Add or update an agent in the agent registry.

    Args:
        manifest: Validated agent manifest

    Returns:
        True if the registry was updated successfully

    Raises:
        AgentRegistryError: If the registry update fails
    """
    logger.info(f"Updating agent registry for: {manifest['name']}")

    # Load existing registry
    registry = load_agent_registry()

    # Create registry entry
    entry = {
        "name": manifest["name"],
        "version": manifest["version"],
        "description": manifest["description"],
        "reasoning_mode": manifest["reasoning_mode"],
        "skills_available": manifest["skills_available"],
        "capabilities": manifest.get("capabilities", []),
        "status": manifest.get("status", "draft"),
        "tags": manifest.get("tags", []),
        "dependencies": manifest.get("dependencies", [])
    }

    # Check if agent already exists
    agents = registry.get("agents", [])
    existing_index = None
    for i, agent in enumerate(agents):
        if agent["name"] == manifest["name"]:
            existing_index = i
            break

    if existing_index is not None:
        # Update existing agent
        agents[existing_index] = entry
        logger.info(f"Updated existing agent: {manifest['name']}")
    else:
        # Add new agent
        agents.append(entry)
        logger.info(f"Added new agent: {manifest['name']}")

    registry["agents"] = agents
    registry["generated_at"] = datetime.now(timezone.utc).isoformat()

    # Write registry back to disk atomically
    try:
        atomic_write_json(AGENTS_REGISTRY_FILE, registry)
        logger.info("Agent registry updated successfully")
        return True
    except Exception as e:
        raise AgentRegistryError(f"Failed to write agent registry: {e}")


def main():
    """Main CLI entry point."""
    if len(sys.argv) < 2:
        message = "Usage: agent_define.py <path_to_agent.yaml>"
        response = build_response(
            False,
            path="",
            errors=[message],
            details={"error": {"error": "UsageError", "message": message, "details": {}}},
        )
        print(json.dumps(response, indent=2))
        sys.exit(1)

    path = sys.argv[1]

    try:
        # Validate manifest
        validation = validate_manifest(path)
        details = dict(validation)

        if validation.get("valid"):
            # Update registry
            try:
                registry_updated = update_agent_registry(validation["manifest"])
                details["status"] = "registered"
                details["registry_updated"] = registry_updated
            except AgentRegistryError as e:
                logger.error(f"Registry update failed: {e}")
                details["status"] = "validated"
                details["registry_updated"] = False
                details["registry_error"] = str(e)
        else:
            # Check if there are schema validation errors
            has_schema_errors = any("Schema validation error" in err for err in validation.get("errors", []))
            if has_schema_errors:
                details["error"] = {
                    "type": "SchemaError",
                    "error": "SchemaError",
                    "message": "Agent manifest schema validation failed",
                    "details": {"errors": validation.get("errors", [])}
                }

        # Build response
        response = build_response(
            bool(validation.get("valid")),
            path=path,
            errors=validation.get("errors", []),
            details=details,
        )
        print(json.dumps(response, indent=2))
        sys.exit(0 if response["ok"] else 1)

    except AgentValidationError as e:
        logger.error(str(e))
        error_info = format_error_response(e)
        response = build_response(
            False,
            path=path,
            errors=[error_info.get("message", str(e))],
            details={"error": error_info},
        )
        print(json.dumps(response, indent=2))
        sys.exit(1)
    except Exception as e:
        logger.error(f"Unexpected error: {e}")
        error_info = format_error_response(e, include_traceback=True)
        response = build_response(
            False,
            path=path,
            errors=[error_info.get("message", str(e))],
            details={"error": error_info},
        )
        print(json.dumps(response, indent=2))
        sys.exit(1)


if __name__ == "__main__":
    main()
45
skills/agent.define/skill.yaml
Normal file
@@ -0,0 +1,45 @@
name: agent.define
version: 0.1.0
description: >
  Validates and registers agent manifests for the Betty Framework.
  Ensures schema compliance, validates skill references, and updates the Agent Registry.

inputs:
  - name: manifest_path
    type: string
    required: true
    description: Path to the agent.yaml file to validate

outputs:
  - name: validation_result
    type: object
    description: Validation results including errors and warnings
  - name: registry_updated
    type: boolean
    description: Whether agent was successfully registered

dependencies:
  - skill.define

status: active

entrypoints:
  - command: /agent/define
    handler: agent_define.py
    runtime: python
    description: >
      Validate an agent manifest and register it in the Agent Registry.
    parameters:
      - name: manifest_path
        type: string
        required: true
        description: Path to the agent.yaml file to validate
    permissions:
      - filesystem:read
      - filesystem:write

tags:
  - agents
  - validation
  - registry
  - layer2
585
skills/agent.run/SKILL.md
Normal file
@@ -0,0 +1,585 @@
# agent.run

**Version:** 0.1.0
**Status:** Active
**Tags:** agents, execution, claude-api, orchestration, layer2

## Overview

The `agent.run` skill executes registered Betty agents by orchestrating the complete agent lifecycle: loading manifests, generating Claude-friendly prompts, invoking the Claude API (or simulating it), executing planned skills, and logging all results.

This skill is the primary execution engine for Betty agents, enabling them to operate in both **iterative** and **oneshot** reasoning modes. It handles the translation between agent manifests and Claude API calls, manages skill invocation, and provides comprehensive logging for auditability.

## Features

- ✅ Load agent manifests from path or agent name
- ✅ Generate Claude-optimized system prompts with capabilities and workflow patterns
- ✅ Optional Claude API integration (with mock fallback for development)
- ✅ Support for both iterative and oneshot reasoning modes
- ✅ Skill selection and execution orchestration
- ✅ Comprehensive execution logging to `agent_logs/<agent>_<timestamp>.json`
- ✅ Structured JSON output for programmatic integration
- ✅ Error handling with detailed diagnostics
- ✅ Validation of agent manifests and available skills

## Usage

### Command Line

```bash
# Execute agent by name
python skills/agent.run/agent_run.py api.designer

# Execute with task context
python skills/agent.run/agent_run.py api.designer "Design a REST API for user management"

# Execute from manifest path
python skills/agent.run/agent_run.py agents/api.designer/agent.yaml "Create authentication API"

# Execute without saving logs
python skills/agent.run/agent_run.py api.designer "Design API" --no-save-log
```

### As a Skill (Programmatic)

```python
import sys
import os
sys.path.insert(0, os.path.abspath("./"))

from skills.agent.run.agent_run import run_agent

# Execute agent
result = run_agent(
    agent_path="api.designer",
    task_context="Design a REST API for user management with authentication",
    save_log=True
)

if result["ok"]:
    print("Agent executed successfully!")
    print(f"Skills invoked: {result['details']['summary']['skills_executed']}")
    print(f"Log saved to: {result['details']['log_path']}")
else:
    print(f"Execution failed: {result['errors']}")
```

### Via Claude Code Plugin

```bash
# Using the Betty plugin command
/agent/run api.designer "Design authentication API"

# With full path
/agent/run agents/api.designer/agent.yaml "Create user management endpoints"
```

## Input Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `agent_path` | string | Yes | - | Path to agent.yaml or agent name (e.g., `api.designer`) |
| `task_context` | string | No | None | Task or query to provide to the agent |
| `save_log` | boolean | No | true | Whether to save execution log to disk |

## Output Schema

```json
{
  "ok": true,
  "status": "success",
  "timestamp": "2025-10-23T14:30:00Z",
  "errors": [],
  "details": {
    "timestamp": "2025-10-23T14:30:00Z",
    "agent": {
      "name": "api.designer",
      "version": "0.1.0",
      "description": "Design RESTful APIs...",
      "reasoning_mode": "iterative",
      "status": "active"
    },
    "task_context": "Design a REST API for user management",
    "prompt": "You are api.designer, a specialized Betty Framework agent...",
    "skills_available": [
      {
        "name": "api.define",
        "description": "Create OpenAPI specifications",
        "status": "active"
      }
    ],
    "missing_skills": [],
    "claude_response": {
      "analysis": "I will design a comprehensive user management API...",
      "skills_to_invoke": [
        {
          "skill": "api.define",
          "purpose": "Create initial OpenAPI spec",
          "inputs": {"guidelines": "zalando"},
          "order": 1
        }
      ],
      "reasoning": "Following API design workflow pattern"
    },
    "execution_results": [
      {
        "skill": "api.define",
        "purpose": "Create initial OpenAPI spec",
        "status": "simulated",
        "timestamp": "2025-10-23T14:30:05Z",
        "output": {
          "success": true,
          "note": "Simulated execution of api.define"
        }
      }
    ],
    "summary": {
      "skills_planned": 3,
      "skills_executed": 3,
      "success": true
    },
    "log_path": "/home/user/betty/agent_logs/api.designer_20251023_143000.json"
  }
}
```

## Reasoning Modes

### Oneshot Mode

In **oneshot** mode, the agent analyzes the complete task and plans all skill invocations upfront in a single pass. The execution follows the predetermined plan without dynamic adjustment.

**Best for:**
- Well-defined tasks with predictable workflows
- Tasks where all steps can be determined in advance
- Performance-critical scenarios requiring minimal API calls

**Example Agent:**
```yaml
name: api.generator
reasoning_mode: oneshot
workflow_pattern: |
  1. Define API structure
  2. Validate specification
  3. Generate models
```

### Iterative Mode

In **iterative** mode, the agent analyzes results after each skill invocation and dynamically determines the next steps. It can retry failed operations, adjust its approach based on feedback, or invoke additional skills as needed.

**Best for:**
- Complex tasks requiring adaptive decision-making
- Tasks with validation/refinement loops
- Scenarios where results influence subsequent steps

**Example Agent:**
```yaml
name: api.designer
reasoning_mode: iterative
workflow_pattern: |
  1. Analyze requirements
  2. Draft OpenAPI spec
  3. Validate (if it fails, refine and retry)
  4. Generate models
```

## Examples

### Example 1: Execute API Designer

```bash
python skills/agent.run/agent_run.py api.designer \
  "Create a REST API for managing blog posts with CRUD operations"
```

**Output:**
```
================================================================================
AGENT EXECUTION: api.designer
================================================================================

Agent: api.designer v0.1.0
Mode: iterative
Status: active

Task: Create a REST API for managing blog posts with CRUD operations

--------------------------------------------------------------------------------
CLAUDE RESPONSE:
--------------------------------------------------------------------------------
{
  "analysis": "I will design a RESTful API following best practices...",
  "skills_to_invoke": [
    {
      "skill": "api.define",
      "purpose": "Create initial OpenAPI specification",
      "inputs": {"guidelines": "zalando", "format": "openapi-3.1"},
      "order": 1
    },
    {
      "skill": "api.validate",
      "purpose": "Validate the specification for compliance",
      "inputs": {"strict_mode": true},
      "order": 2
    }
  ]
}

--------------------------------------------------------------------------------
EXECUTION RESULTS:
--------------------------------------------------------------------------------

✓ api.define
  Purpose: Create initial OpenAPI specification
  Status: simulated

✓ api.validate
  Purpose: Validate the specification for compliance
  Status: simulated

📝 Log saved to: /home/user/betty/agent_logs/api.designer_20251023_143000.json

================================================================================
EXECUTION COMPLETE
================================================================================
```

### Example 2: Execute with Direct Path

```bash
python skills/agent.run/agent_run.py \
  agents/api.analyzer/agent.yaml \
  "Analyze this OpenAPI spec for compatibility issues"
```

### Example 3: Execute Without Logging

```bash
python skills/agent.run/agent_run.py api.designer \
  "Design authentication API" \
  --no-save-log
```

### Example 4: Programmatic Integration

```python
from skills.agent.run.agent_run import run_agent, load_agent_manifest

# Load and inspect agent before running
manifest = load_agent_manifest("api.designer")
print(f"Agent capabilities: {manifest['capabilities']}")

# Execute with custom context
result = run_agent(
    agent_path="api.designer",
    task_context="Design GraphQL API for e-commerce",
    save_log=True
)

if result["ok"]:
    # Access execution details
    claude_response = result["details"]["claude_response"]
    execution_results = result["details"]["execution_results"]

    print(f"Claude planned {len(claude_response['skills_to_invoke'])} skills")
    print(f"Executed {len(execution_results)} skills")

    # Check individual skill results
    for exec_result in execution_results:
        print(f"  - {exec_result['skill']}: {exec_result['status']}")
```

## Agent Manifest Requirements

For `agent.run` to successfully execute an agent, the agent manifest must include:

### Required Fields

```yaml
name: agent.name           # Must match pattern ^[a-z][a-z0-9._-]*$
version: 0.1.0             # Semantic version
description: "..."         # Clear description
capabilities:              # List of capabilities
  - "Capability 1"
  - "Capability 2"
skills_available:          # List of Betty skills
  - skill.name.1
  - skill.name.2
reasoning_mode: iterative  # 'iterative' or 'oneshot'
```

### Recommended Fields

```yaml
workflow_pattern: |        # Recommended workflow steps
  1. Step 1
  2. Step 2
  3. Step 3

context_requirements:      # Optional context hints
  guidelines: string
  domain: string

error_handling:            # Error handling config
  max_retries: 3
  timeout_seconds: 300

status: active             # Agent status (draft/active/deprecated)
tags:                      # Categorization tags
  - tag1
  - tag2
```

## Claude API Integration

The skill supports both real Claude API calls and mock simulation:

### Real API Mode (Production)

Set the `ANTHROPIC_API_KEY` environment variable:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
python skills/agent.run/agent_run.py api.designer "Design API"
```

The skill will (see the sketch after this list):
1. Detect the API key
2. Use the Anthropic Python SDK
3. Call Claude 3.5 Sonnet with the constructed prompt
4. Parse the structured JSON response
5. Execute the skills based on Claude's plan
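
Roughly, the real-API path follows this call pattern (an illustrative sketch, not the skill's exact code; the model identifier and token limit here are assumptions):

```python
import os
import json
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model id
    max_tokens=1024,
    system="You are api.designer, a specialized Betty Framework agent...",
    messages=[{"role": "user", "content": "Design a REST API for user management"}],
)

# The agent expects a structured JSON plan it can execute skill by skill.
plan = json.loads(message.content[0].text)
for step in sorted(plan["skills_to_invoke"], key=lambda s: s["order"]):
    print(step["skill"], "→", step["purpose"])
```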

### Mock Mode (Development)

Without an API key, the skill generates intelligent mock responses:

```bash
python skills/agent.run/agent_run.py api.designer "Design API"
```

The skill will:
1. Detect that no API key is set
2. Generate plausible skill selections based on the agent type
3. Simulate Claude's reasoning
4. Execute skills with simulated outputs

## Execution Logging

All agent executions are logged to `agent_logs/<agent>_<timestamp>.json` with:

- **Timestamp**: ISO 8601 UTC timestamp
- **Agent Info**: Name, version, description, mode, status
- **Task Context**: User-provided task or query
- **Prompt**: Complete Claude system prompt
- **Skills Available**: Registered skills with metadata
- **Missing Skills**: Skills referenced but not found
- **Claude Response**: Full API response or mock
- **Execution Results**: Output from each skill invocation
- **Summary**: Counts, success status, timing

### Log File Structure

```json
{
  "timestamp": "2025-10-23T14:30:00Z",
  "agent": { /* agent metadata */ },
  "task_context": "Design API for...",
  "prompt": "You are api.designer...",
  "skills_available": [ /* skill info */ ],
  "missing_skills": [],
  "claude_response": { /* Claude's plan */ },
  "execution_results": [ /* skill outputs */ ],
  "summary": {
    "skills_planned": 3,
    "skills_executed": 3,
    "success": true
  }
}
```

### Accessing Logs

```bash
# View latest log for an agent
cat agent_logs/api.designer_latest.json | jq '.'

# View specific execution
cat agent_logs/api.designer_20251023_143000.json | jq '.summary'

# List all logs for an agent
ls -lt agent_logs/api.designer_*.json
```

## Error Handling

### Common Errors

**Agent Not Found**
```json
{
  "ok": false,
  "status": "failed",
  "errors": ["Agent not found: my.agent"],
  "details": {
    "error": {
      "type": "BettyError",
      "message": "Agent not found: my.agent",
      "details": {
        "agent_path": "my.agent",
        "expected_path": "/home/user/betty/agents/my.agent/agent.yaml",
        "suggestion": "Use 'betty agent list' to see available agents"
      }
    }
  }
}
```

**Invalid Agent Manifest**
```json
{
  "ok": false,
  "errors": ["Agent manifest missing required fields: reasoning_mode, capabilities"],
  "details": {
    "error": {
      "type": "BettyError",
      "details": {
        "missing_fields": ["reasoning_mode", "capabilities"]
      }
    }
  }
}
```

**Skill Not Found**
- Execution continues, but missing skills are logged in the `missing_skills` array
- A warning is logged for each missing skill
- The agent may not function as intended if critical skills are missing

### Debugging Tips

1. **Check agent manifest**: Validate with `betty agent validate <agent_path>`
2. **Verify skills**: Ensure all `skills_available` are registered
3. **Review logs**: Check `agent_logs/<agent>_latest.json` for details
4. **Enable debug logging**: Set `BETTY_LOG_LEVEL=DEBUG` (see the example after this list)
5. **Test with mock mode**: Remove the API key to test workflow logic
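
For instance, a typical debug pass combines tips 3 and 4 (both commands appear earlier in this document):

```bash
# Re-run with verbose logging, then inspect the latest log's summary
BETTY_LOG_LEVEL=DEBUG python skills/agent.run/agent_run.py api.designer "Design API"
cat agent_logs/api.designer_latest.json | jq '.summary'
```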

## Best Practices

### 1. Agent Design

- Define clear, specific capabilities in agent manifests
- Choose the appropriate reasoning mode for the task complexity
- Provide detailed workflow patterns to guide Claude
- Include context requirements for optimal prompts

### 2. Task Context

- Provide specific, actionable task descriptions
- Include relevant domain context when needed
- Reference specific requirements or constraints
- Use examples to clarify ambiguous requests

### 3. Logging

- Keep logs enabled for production (default: `save_log=true`)
- Review logs regularly for debugging and auditing
- Archive old logs periodically to manage disk space
- Use log summaries to track agent performance

### 4. Error Recovery

- In iterative mode, agents can retry failed skills
- Review error details in logs for root cause analysis
- Validate agent manifests before deployment
- Test with mock mode before using real API calls

### 5. Performance

- Use oneshot mode for predictable, fast execution
- Cache agent manifests when running repeatedly
- Monitor Claude API usage and costs
- Consider skill execution time when designing workflows

## Integration with Betty Framework

### Skill Dependencies

`agent.run` depends on:
- **agent.define**: For creating agent manifests
- **Skill registry**: For validating available skills
- **Betty configuration**: For paths and settings

### Plugin Integration

The skill is registered in `plugin.yaml` as:
```yaml
- name: agent/run
  description: Execute a registered Betty agent
  handler:
    runtime: python
    script: skills/agent.run/agent_run.py
  parameters:
    - name: agent_path
      type: string
      required: true
```

This enables Claude Code to invoke agents directly:
```
User: "Run the API designer agent to create a user management API"
Claude: [Invokes /agent/run api.designer "create user management API"]
```

## Related Skills

- **agent.define** - Create and register new agent manifests
- **agent.validate** - Validate agent manifests before execution
- **run.agent** - Legacy simulation tool (read-only, no execution)
- **skill.define** - Register skills that agents can invoke
- **hook.simulate** - Test hooks before registration

## Changelog

### v0.1.0 (2025-10-23)
- Initial implementation
- Support for iterative and oneshot reasoning modes
- Claude API integration with mock fallback
- Execution logging to agent_logs/
- Comprehensive error handling
- CLI and programmatic interfaces
- Plugin integration for Claude Code

## Future Enhancements

Planned features for future versions:

- **v0.2.0**:
  - Real Claude API integration (currently mocked)
  - Skill execution (currently simulated)
  - Iterative feedback loops
  - Performance metrics

- **v0.3.0**:
  - Agent context persistence
  - Multi-agent orchestration
  - Streaming responses
  - Parallel skill execution

- **v0.4.0**:
  - Agent memory and learning
  - Custom LLM backends
  - Agent marketplace integration
  - A/B testing framework

## License

Part of the Betty Framework. See the project LICENSE for details.

## Support

For issues, questions, or contributions:
- GitHub: [Betty Framework Repository]
- Documentation: `/docs/skills/agent.run.md`
- Examples: `/examples/agents/`
1
skills/agent.run/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
756
skills/agent.run/agent_run.py
Normal file
@@ -0,0 +1,756 @@
|
||||
#!/usr/bin/env python3
"""
agent_run.py – Implementation of the agent.run Skill

Executes a registered Betty agent by loading its manifest, constructing a Claude-friendly
prompt, invoking the Claude API (or simulating it), and logging execution results.

This skill supports both iterative and oneshot reasoning modes and can execute
skills based on the agent's workflow pattern.
"""
import os
import sys
import yaml
import json
from typing import Dict, Any, List, Optional
from datetime import datetime, timezone
from pathlib import Path


from betty.config import (
    AGENTS_DIR, AGENTS_REGISTRY_FILE, REGISTRY_FILE,
    get_agent_manifest_path, get_skill_manifest_path,
    BETTY_HOME
)
from betty.validation import validate_path
from betty.logging_utils import setup_logger
from betty.errors import BettyError, format_error_response
from betty.telemetry_capture import capture_skill_execution, capture_audit_entry
from utils.telemetry_utils import capture_telemetry

logger = setup_logger(__name__)

# Agent logs directory
AGENT_LOGS_DIR = os.path.join(BETTY_HOME, "agent_logs")


def build_response(
    ok: bool,
    errors: Optional[List[str]] = None,
    details: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
    """
    Build a standardized response.

    Args:
        ok: Whether the operation was successful
        errors: List of error messages
        details: Additional details to include

    Returns:
        Standardized response dictionary
    """
    response: Dict[str, Any] = {
        "ok": ok,
        "status": "success" if ok else "failed",
        "errors": errors or [],
        "timestamp": datetime.now(timezone.utc).isoformat()
    }
    if details is not None:
        response["details"] = details
    return response


def load_agent_manifest(agent_path: str) -> Dict[str, Any]:
    """
    Load an agent manifest from a path or agent name.

    Args:
        agent_path: Path to agent.yaml or agent name (e.g., api.designer)

    Returns:
        Agent manifest dictionary

    Raises:
        BettyError: If the agent cannot be loaded or is invalid
    """
    # Check if it's a direct path to agent.yaml
    if os.path.exists(agent_path) and agent_path.endswith('.yaml'):
        manifest_path = agent_path
    # Otherwise treat it as an agent name
    else:
        manifest_path = get_agent_manifest_path(agent_path)
        if not os.path.exists(manifest_path):
            raise BettyError(
                f"Agent not found: {agent_path}",
                details={
                    "agent_path": agent_path,
                    "expected_path": manifest_path,
                    "suggestion": "Use 'betty agent list' to see available agents"
                }
            )

    try:
        with open(manifest_path) as f:
            manifest = yaml.safe_load(f)

        if not isinstance(manifest, dict):
            raise BettyError("Agent manifest must be a dictionary")

        # Validate required fields
        required_fields = ["name", "version", "description", "capabilities",
                           "skills_available", "reasoning_mode"]
        missing = [f for f in required_fields if f not in manifest]
        if missing:
            raise BettyError(
                f"Agent manifest missing required fields: {', '.join(missing)}",
                details={"missing_fields": missing}
            )

        return manifest
    except yaml.YAMLError as e:
        raise BettyError(f"Invalid YAML in agent manifest: {e}")


def load_skill_registry() -> Dict[str, Any]:
    """
    Load the skills registry.

    Returns:
        Skills registry dictionary
    """
    try:
        with open(REGISTRY_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        logger.warning(f"Skills registry not found: {REGISTRY_FILE}")
        return {"skills": []}
    except json.JSONDecodeError as e:
        raise BettyError(f"Invalid JSON in skills registry: {e}")


def get_skill_info(skill_name: str, registry: Dict[str, Any]) -> Optional[Dict[str, Any]]:
    """
    Get skill information from the registry.

    Args:
        skill_name: Name of the skill
        registry: Skills registry

    Returns:
        Skill info dictionary, or None if not found
    """
    for skill in registry.get("skills", []):
        if skill.get("name") == skill_name:
            return skill
    return None


def construct_agent_prompt(
    agent_manifest: Dict[str, Any],
    task_context: Optional[str] = None
) -> str:
    """
    Construct a Claude-friendly prompt for the agent.

    Args:
        agent_manifest: Agent manifest dictionary
        task_context: User-provided task or query

    Returns:
        Constructed system prompt string suitable for the Claude API
    """
    agent_name = agent_manifest.get("name", "unknown")
    description = agent_manifest.get("description", "")
    capabilities = agent_manifest.get("capabilities", [])
    skills_available = agent_manifest.get("skills_available", [])
    reasoning_mode = agent_manifest.get("reasoning_mode", "oneshot")
    workflow_pattern = agent_manifest.get("workflow_pattern", "")
    context_requirements = agent_manifest.get("context_requirements", {})

    # Build system prompt
    prompt = f"""You are {agent_name}, a specialized Betty Framework agent.

## AGENT DESCRIPTION
{description}

## CAPABILITIES
You have the following capabilities:
"""
    for cap in capabilities:
        prompt += f" • {cap}\n"

    prompt += f"""
## REASONING MODE
{reasoning_mode.upper()}: """

    if reasoning_mode == "iterative":
        prompt += """You will analyze results from each skill invocation and determine
the next steps dynamically. You may retry failed operations or adjust your
approach based on feedback."""
    else:
        prompt += """You will plan and execute all necessary skills in a single pass.
Analyze the task completely before determining the sequence of skill invocations."""

    prompt += """

## AVAILABLE SKILLS
You have access to the following Betty skills:
"""
    for skill in skills_available:
        prompt += f" • {skill}\n"

    if workflow_pattern:
        prompt += f"""
## RECOMMENDED WORKFLOW
{workflow_pattern}
"""

    if context_requirements:
        prompt += """
## CONTEXT REQUIREMENTS
The following context may be required for optimal performance:
"""
        for key, value_type in context_requirements.items():
            prompt += f" • {key}: {value_type}\n"

    if task_context:
        prompt += f"""
## TASK
{task_context}

## INSTRUCTIONS
Analyze the task above and respond with a JSON object describing your execution plan:

{{
  "analysis": "Brief analysis of the task",
  "skills_to_invoke": [
    {{
      "skill": "skill.name",
      "purpose": "Why this skill is needed",
      "inputs": {{"param": "value"}},
      "order": 1
    }}
  ],
  "reasoning": "Explanation of your approach"
}}

Select skills from your available skills list and arrange them according to the
workflow pattern. Ensure the sequence makes logical sense for accomplishing the task.
"""
    else:
        prompt += """
## READY STATE
You are initialized and ready to accept tasks. When given a task, you will:
1. Analyze the requirements
2. Select appropriate skills from your available skills
3. Determine the execution order based on your workflow pattern
4. Provide a structured execution plan
"""

    return prompt


def call_claude_api(prompt: str, agent_name: str) -> Dict[str, Any]:
    """
    Call the Claude API with the constructed prompt.

    Currently simulates the API call. In production, this would:
    1. Use the Anthropic API client
    2. Send the prompt with appropriate parameters
    3. Parse the structured response

    Args:
        prompt: The constructed system prompt
        agent_name: Name of the agent (for context)

    Returns:
        Claude's response (currently mocked)
    """
    # Check if we have ANTHROPIC_API_KEY in the environment
    api_key = os.environ.get("ANTHROPIC_API_KEY")

    if api_key:
        logger.info("Anthropic API key found - would call real API")
        # TODO: Implement actual API call
        # from anthropic import Anthropic
        # client = Anthropic(api_key=api_key)
        # response = client.messages.create(
        #     model="claude-3-5-sonnet-20241022",
        #     max_tokens=4096,
        #     system=prompt,
        #     messages=[{"role": "user", "content": "Execute the task"}]
        # )
        # return parse_claude_response(response)
    else:
        logger.info("No API key found - using mock response")

    return generate_mock_response(prompt, agent_name)


def generate_mock_response(prompt: str, agent_name: str) -> Dict[str, Any]:
    """
    Generate a mock Claude response for simulation.

    Args:
        prompt: The system prompt
        agent_name: Name of the agent

    Returns:
        Mock response dictionary
    """
    # Extract the task from the prompt if present
    task_section = ""
    if "## TASK" in prompt:
        task_start = prompt.index("## TASK")
        task_end = prompt.index("## INSTRUCTIONS") if "## INSTRUCTIONS" in prompt else len(prompt)
        task_section = prompt[task_start:task_end].replace("## TASK", "").strip()

    # Generate plausible skill selections based on the agent name
    skills_to_invoke = []

    if "api.designer" in agent_name:
        skills_to_invoke = [
            {
                "skill": "api.define",
                "purpose": "Create initial OpenAPI specification from requirements",
                "inputs": {"guidelines": "zalando", "format": "openapi-3.1"},
                "order": 1
            },
            {
                "skill": "api.validate",
                "purpose": "Validate the generated specification for compliance",
                "inputs": {"strict_mode": True},
                "order": 2
            },
            {
                "skill": "api.generate-models",
                "purpose": "Generate type-safe models from validated spec",
                "inputs": {"language": "typescript", "framework": "zod"},
                "order": 3
            }
        ]
    elif "api.analyzer" in agent_name:
        skills_to_invoke = [
            {
                "skill": "api.validate",
                "purpose": "Analyze API specification for issues and best practices",
                "inputs": {"include_warnings": True},
                "order": 1
            },
            {
                "skill": "api.compatibility",
                "purpose": "Check compatibility with existing APIs",
                "inputs": {"check_breaking_changes": True},
                "order": 2
            }
        ]
    else:
        # Generic response - extract skills from the prompt
        if "AVAILABLE SKILLS" in prompt:
            skills_section_start = prompt.index("AVAILABLE SKILLS")
            skills_section_end = prompt.index("##", skills_section_start + 10) if prompt.count("##", skills_section_start) > 0 else len(prompt)
            skills_text = prompt[skills_section_start:skills_section_end]

            import re
            skill_names = re.findall(r'• (\S+)', skills_text)

            for i, skill_name in enumerate(skill_names[:3], 1):
                skills_to_invoke.append({
                    "skill": skill_name,
                    "purpose": f"Execute {skill_name} as part of agent workflow",
                    "inputs": {},
                    "order": i
                })

    response = {
        "analysis": f"As {agent_name}, I will approach this task using my available skills in a structured sequence.",
        "skills_to_invoke": skills_to_invoke,
        "reasoning": "Selected skills follow the agent's workflow pattern and capabilities.",
        "mode": "simulated",
        "note": "This is a mock response. In production, the Claude API would provide real analysis."
    }

    return response


def execute_skills(
    skills_plan: List[Dict[str, Any]],
    reasoning_mode: str
) -> List[Dict[str, Any]]:
    """
    Execute the planned skills (currently simulated).

    In production, this would:
    1. For each skill in the plan:
       - Load the skill manifest
       - Prepare inputs
       - Execute the skill handler
       - Capture output
    2. In iterative mode: analyze results and potentially invoke more skills

    Args:
        skills_plan: List of skills to invoke with their inputs
        reasoning_mode: 'iterative' or 'oneshot'

    Returns:
        List of execution results
    """
    results = []

    for skill_info in skills_plan:
        execution_result = {
            "skill": skill_info.get("skill"),
            "purpose": skill_info.get("purpose"),
            "status": "simulated",
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "output": {
                "note": f"Simulated execution of {skill_info.get('skill')}",
                "inputs": skill_info.get("inputs", {}),
                "success": True
            }
        }

        results.append(execution_result)

        # In iterative mode, we might make decisions based on results
        if reasoning_mode == "iterative":
            execution_result["iterative_note"] = (
                "In iterative mode, the agent would analyze this result "
                "and potentially invoke additional skills or retry."
            )

    return results


def save_execution_log(
    agent_name: str,
    execution_data: Dict[str, Any]
) -> str:
    """
    Save an execution log to agent_logs/<agent>_<timestamp>.json.

    Args:
        agent_name: Name of the agent
        execution_data: Complete execution data to log

    Returns:
        Path to the saved log file
    """
    # Ensure the logs directory exists
    os.makedirs(AGENT_LOGS_DIR, exist_ok=True)

    # Generate log filename with timestamp
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    log_filename = f"{agent_name}_{timestamp}.json"
    log_path = os.path.join(AGENT_LOGS_DIR, log_filename)

    # Also maintain a "latest" symlink
    latest_path = os.path.join(AGENT_LOGS_DIR, f"{agent_name}_latest.json")

    try:
        with open(log_path, 'w') as f:
            json.dump(execution_data, f, indent=2)

        # Create/update the latest symlink
        if os.path.exists(latest_path):
            os.remove(latest_path)
        os.symlink(os.path.basename(log_path), latest_path)

        logger.info(f"Execution log saved to {log_path}")
        return log_path
    except Exception as e:
        logger.error(f"Failed to save execution log: {e}")
        raise BettyError(f"Failed to save execution log: {e}")


def run_agent(
    agent_path: str,
    task_context: Optional[str] = None,
    save_log: bool = True
) -> Dict[str, Any]:
    """
    Execute a Betty agent.

    Args:
        agent_path: Path to agent manifest or agent name
        task_context: User-provided task or query
        save_log: Whether to save the execution log to disk

    Returns:
        Execution result dictionary
    """
    logger.info(f"Running agent: {agent_path}")

    # Track execution time for telemetry
    start_time = datetime.now(timezone.utc)

    try:
        # Load agent manifest
        agent_manifest = load_agent_manifest(agent_path)
        agent_name = agent_manifest.get("name")
        reasoning_mode = agent_manifest.get("reasoning_mode", "oneshot")

        logger.info(f"Loaded agent: {agent_name} (mode: {reasoning_mode})")

        # Load skill registry
        skill_registry = load_skill_registry()

        # Validate that the agent's skills are available
        skills_available = agent_manifest.get("skills_available", [])
        skills_info = []
        missing_skills = []

        for skill_name in skills_available:
            skill_info = get_skill_info(skill_name, skill_registry)
            if skill_info:
                skills_info.append({
                    "name": skill_name,
                    "description": skill_info.get("description", ""),
                    "status": skill_info.get("status", "unknown")
                })
            else:
                missing_skills.append(skill_name)
                logger.warning(f"Skill not found in registry: {skill_name}")

        # Construct agent prompt
        logger.info("Constructing agent prompt...")
        prompt = construct_agent_prompt(agent_manifest, task_context)

        # Call Claude API (or mock)
        logger.info("Invoking Claude API...")
        claude_response = call_claude_api(prompt, agent_name)

        # Execute skills based on Claude's plan
        skills_plan = claude_response.get("skills_to_invoke", [])
        logger.info(f"Executing {len(skills_plan)} skills...")
        execution_results = execute_skills(skills_plan, reasoning_mode)

        # Build complete execution data
        execution_data = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": {
                "name": agent_name,
                "version": agent_manifest.get("version"),
                "description": agent_manifest.get("description"),
                "reasoning_mode": reasoning_mode,
                "status": agent_manifest.get("status", "unknown")
            },
            "task_context": task_context or "No task provided",
            "prompt": prompt,
            "skills_available": skills_info,
            "missing_skills": missing_skills,
            "claude_response": claude_response,
            "execution_results": execution_results,
            "summary": {
                "skills_planned": len(skills_plan),
                "skills_executed": len(execution_results),
                "success": all(r.get("output", {}).get("success", False) for r in execution_results)
            }
        }

        # Save log if requested
        log_path = None
        if save_log:
            log_path = save_execution_log(agent_name, execution_data)
            execution_data["log_path"] = log_path

        # Calculate execution duration
        end_time = datetime.now(timezone.utc)
        duration_ms = int((end_time - start_time).total_seconds() * 1000)

        # Capture telemetry for the agent execution
        capture_skill_execution(
            skill_name="agent.run",
            inputs={
                "agent": agent_name,
                "task_context": task_context or "No task provided",
            },
            status="success" if execution_data["summary"]["success"] else "failed",
            duration_ms=duration_ms,
            agent=agent_name,
            caller="cli",
            reasoning_mode=reasoning_mode,
            skills_planned=len(skills_plan),
            skills_executed=len(execution_results),
        )

        # Log an audit entry for the agent execution
        capture_audit_entry(
            skill_name="agent.run",
            status="success" if execution_data["summary"]["success"] else "failed",
            duration_ms=duration_ms,
            errors=None,
            metadata={
                "agent": agent_name,
                "reasoning_mode": reasoning_mode,
                "skills_executed": len(execution_results),
                "task_context": task_context or "No task provided",
            }
        )

        return build_response(
            ok=True,
            details=execution_data
        )

    except BettyError as e:
        logger.error(f"Agent execution failed: {e}")
        error_info = format_error_response(e, include_traceback=False)

        # Calculate execution duration for the failed case
        end_time = datetime.now(timezone.utc)
        duration_ms = int((end_time - start_time).total_seconds() * 1000)

        # Capture telemetry for the failed agent execution
        capture_skill_execution(
            skill_name="agent.run",
            inputs={"agent_path": agent_path},
            status="failed",
            duration_ms=duration_ms,
            caller="cli",
            error=str(e),
        )

        # Log an audit entry for the failed agent execution
        capture_audit_entry(
            skill_name="agent.run",
            status="failed",
            duration_ms=duration_ms,
            errors=[str(e)],
            metadata={
                "agent_path": agent_path,
                "error_type": "BettyError",
            }
        )

        return build_response(
            ok=False,
            errors=[str(e)],
            details={"error": error_info}
        )
    except Exception as e:
        logger.error(f"Unexpected error: {e}", exc_info=True)
        error_info = format_error_response(e, include_traceback=True)

        # Calculate execution duration for the failed case
        end_time = datetime.now(timezone.utc)
        duration_ms = int((end_time - start_time).total_seconds() * 1000)

        # Capture telemetry for the unexpected error
        capture_skill_execution(
            skill_name="agent.run",
            inputs={"agent_path": agent_path},
            status="failed",
            duration_ms=duration_ms,
            caller="cli",
            error=str(e),
        )

        # Log an audit entry for the unexpected error
        capture_audit_entry(
            skill_name="agent.run",
            status="failed",
            duration_ms=duration_ms,
            errors=[f"Unexpected error: {str(e)}"],
            metadata={
                "agent_path": agent_path,
                "error_type": type(e).__name__,
            }
        )

        return build_response(
            ok=False,
            errors=[f"Unexpected error: {str(e)}"],
            details={"error": error_info}
        )


@capture_telemetry(skill_name="agent.run", caller="cli")
def main():
    """Main CLI entry point."""
    if len(sys.argv) < 2:
        message = "Usage: agent_run.py <agent_path> [task_context] [--no-save-log]"
        response = build_response(
            False,
            errors=[message],
            details={
                "usage": message,
                "examples": [
                    "agent_run.py api.designer",
                    "agent_run.py api.designer 'Create API for user management'",
                    "agent_run.py agents/api.designer/agent.yaml 'Design REST API'"
                ]
            }
        )
        print(json.dumps(response, indent=2), file=sys.stderr)
        sys.exit(1)

    agent_path = sys.argv[1]

    # Parse optional arguments
    task_context = None
    save_log = True

    for arg in sys.argv[2:]:
        if arg == "--no-save-log":
            save_log = False
        elif task_context is None:
            task_context = arg

    try:
        result = run_agent(agent_path, task_context, save_log)

        # Check if execution was successful
        if result['ok'] and 'details' in result and 'agent' in result['details']:
            # Pretty print for CLI usage
            print("\n" + "=" * 80)
            print(f"AGENT EXECUTION: {result['details']['agent']['name']}")
            print("=" * 80)

            agent_info = result['details']['agent']
            print(f"\nAgent: {agent_info['name']} v{agent_info['version']}")
            print(f"Mode: {agent_info['reasoning_mode']}")
            print(f"Status: {agent_info['status']}")

            print(f"\nTask: {result['details']['task_context']}")

            print("\n" + "-" * 80)
            print("CLAUDE RESPONSE:")
            print("-" * 80)
            print(json.dumps(result['details']['claude_response'], indent=2))

            print("\n" + "-" * 80)
            print("EXECUTION RESULTS:")
            print("-" * 80)
            for exec_result in result['details']['execution_results']:
                print(f"\n  ✓ {exec_result['skill']}")
                print(f"    Purpose: {exec_result['purpose']}")
                print(f"    Status: {exec_result['status']}")

            if 'log_path' in result['details']:
                print(f"\n📝 Log saved to: {result['details']['log_path']}")

            print("\n" + "=" * 80)
            print("EXECUTION COMPLETE")
            print("=" * 80 + "\n")
        else:
            # Execution failed - print error details
            print("\n" + "=" * 80)
            print("AGENT EXECUTION FAILED")
            print("=" * 80)
            print("\nErrors:")
            for error in result.get('errors', ['Unknown error']):
                print(f"  ✗ {error}")
            print()

        # Also output the full JSON for programmatic use
        print(json.dumps(result, indent=2))
        sys.exit(0 if result['ok'] else 1)

    except KeyboardInterrupt:
        print("\n\nInterrupted by user", file=sys.stderr)
        sys.exit(130)


if __name__ == "__main__":
    main()
85
skills/agent.run/skill.yaml
Normal file
@@ -0,0 +1,85 @@
name: agent.run
version: 0.1.0
description: >
  Execute a registered Betty agent by loading its manifest, generating a Claude-friendly
  prompt, invoking skills based on the agent's workflow, and logging results. Supports
  both iterative and oneshot reasoning modes with optional Claude API integration.

inputs:
  - name: agent_path
    type: string
    required: true
    description: Path to agent manifest (agent.yaml) or agent name (e.g., api.designer)

  - name: task_context
    type: string
    required: false
    description: Task or query to provide to the agent for execution

  - name: save_log
    type: boolean
    required: false
    default: true
    description: Whether to save the execution log to agent_logs/<agent>_<timestamp>.json

outputs:
  - name: execution_result
    type: object
    description: Complete execution results including prompt, Claude response, and skill outputs
    schema:
      properties:
        ok: boolean
        status: string
        timestamp: string
        errors: array
        details:
          type: object
          properties:
            timestamp: string
            agent: object
            task_context: string
            prompt: string
            skills_available: array
            claude_response: object
            execution_results: array
            summary: object
            log_path: string

dependencies:
  - agent.define

entrypoints:
  - command: /agent/run
    handler: agent_run.py
    runtime: python
    description: >
      Execute a Betty agent with optional task context. Generates Claude-friendly prompts,
      invokes the Claude API (or simulates it), executes planned skills, and logs all results
      to the agent_logs/ directory.
    parameters:
      - name: agent_path
        type: string
        required: true
        description: Path to agent.yaml file or agent name (e.g., api.designer)
      - name: task_context
        type: string
        required: false
        description: Optional task or query for the agent to execute
      - name: save_log
        type: boolean
        required: false
        default: true
        description: Save execution log to agent_logs/<agent>_<timestamp>.json

permissions:
  - filesystem:read
  - filesystem:write
  - network:http

status: active

tags:
  - agents
  - execution
  - claude-api
  - orchestration
  - layer2
46
skills/api.compatibility/SKILL.md
Normal file
@@ -0,0 +1,46 @@
# api.compatibility

## Overview

Detect breaking changes between API specification versions to maintain backward compatibility.

## Usage

```bash
python skills/api.compatibility/check_compatibility.py <old_spec> <new_spec> [options]
```

## Examples

```bash
# Check compatibility
python skills/api.compatibility/check_compatibility.py \
  specs/user-service-v1.openapi.yaml \
  specs/user-service-v2.openapi.yaml

# Human-readable output
python skills/api.compatibility/check_compatibility.py \
  specs/user-service-v1.openapi.yaml \
  specs/user-service-v2.openapi.yaml \
  --format=human
```

## Breaking Changes Detected

- **path_removed**: Endpoint removed
- **operation_removed**: HTTP method removed
- **schema_removed**: Model schema removed
- **property_removed**: Schema property removed
- **property_made_required**: Optional property now required
- **property_type_changed**: Property type changed

## Non-Breaking Changes

- **path_added**: New endpoint
- **operation_added**: New HTTP method
- **schema_added**: New model schema
- **property_added**: New optional property
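
## Example Report

Both change classes are returned in a single JSON report with a `change_summary` block; the CLI wraps this report in a `status`/`data` envelope and records the two spec paths. An illustrative report (hypothetical spec contents) for a revision that removes an endpoint and adds an optional property:

```json
{
  "compatible": false,
  "breaking_changes": [
    {
      "change_type": "path_removed",
      "severity": "breaking",
      "path": "paths./users/{user_id}",
      "description": "Endpoint '/users/{user_id}' was removed",
      "old_value": "/users/{user_id}"
    }
  ],
  "non_breaking_changes": [
    {
      "change_type": "property_added",
      "severity": "non-breaking",
      "path": "components.schemas.User.properties.nickname",
      "description": "Optional property 'nickname' was added to schema 'User'",
      "new_value": "nickname"
    }
  ],
  "change_summary": {
    "total_breaking": 1,
    "total_non_breaking": 1,
    "total_changes": 2
  }
}
```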

## Version

**0.1.0** - Initial implementation
1
skills/api.compatibility/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
432
skills/api.compatibility/check_compatibility.py
Executable file
@@ -0,0 +1,432 @@
#!/usr/bin/env python3
"""
Detect breaking changes between API specification versions.

This skill analyzes two versions of an API spec and identifies:
- Breaking changes (removed endpoints, changed types, etc.)
- Non-breaking changes (added endpoints, added optional fields, etc.)
"""

import sys
import json
import argparse
from pathlib import Path
from typing import Dict, Any, List, Tuple

# Add betty module to path

from betty.logging_utils import setup_logger
from betty.errors import format_error_response, BettyError
from betty.validation import validate_path

logger = setup_logger(__name__)


class CompatibilityChange:
    """Represents a compatibility change between spec versions."""

    def __init__(
        self,
        change_type: str,
        severity: str,
        path: str,
        description: str,
        old_value: Any = None,
        new_value: Any = None
    ):
        self.change_type = change_type
        self.severity = severity  # "breaking" or "non-breaking"
        self.path = path
        self.description = description
        self.old_value = old_value
        self.new_value = new_value

    def to_dict(self) -> Dict[str, Any]:
        """Convert to a dictionary."""
        result = {
            "change_type": self.change_type,
            "severity": self.severity,
            "path": self.path,
            "description": self.description
        }
        if self.old_value is not None:
            result["old_value"] = self.old_value
        if self.new_value is not None:
            result["new_value"] = self.new_value
        return result


class CompatibilityChecker:
    """Check compatibility between two API specs."""

    def __init__(self, old_spec: Dict[str, Any], new_spec: Dict[str, Any]):
        self.old_spec = old_spec
        self.new_spec = new_spec
        self.breaking_changes: List[CompatibilityChange] = []
        self.non_breaking_changes: List[CompatibilityChange] = []

    def check(self) -> Dict[str, Any]:
        """
        Run all compatibility checks.

        Returns:
            Compatibility report
        """
        # Check paths (endpoints)
        self._check_paths()

        # Check schemas
        self._check_schemas()

        # Check parameters
        self._check_parameters()

        # Check responses
        self._check_responses()

        return {
            "compatible": len(self.breaking_changes) == 0,
            "breaking_changes": [c.to_dict() for c in self.breaking_changes],
            "non_breaking_changes": [c.to_dict() for c in self.non_breaking_changes],
            "change_summary": {
                "total_breaking": len(self.breaking_changes),
                "total_non_breaking": len(self.non_breaking_changes),
                "total_changes": len(self.breaking_changes) + len(self.non_breaking_changes)
            }
        }

    def _check_paths(self):
        """Check for changes in API paths/endpoints."""
        old_paths = set(self.old_spec.get("paths", {}).keys())
        new_paths = set(self.new_spec.get("paths", {}).keys())

        # Removed paths (BREAKING)
        for removed_path in old_paths - new_paths:
            self.breaking_changes.append(CompatibilityChange(
                change_type="path_removed",
                severity="breaking",
                path=f"paths.{removed_path}",
                description=f"Endpoint '{removed_path}' was removed",
                old_value=removed_path
            ))

        # Added paths (NON-BREAKING)
        for added_path in new_paths - old_paths:
            self.non_breaking_changes.append(CompatibilityChange(
                change_type="path_added",
                severity="non-breaking",
                path=f"paths.{added_path}",
                description=f"New endpoint '{added_path}' was added",
                new_value=added_path
            ))

        # Check operations on existing paths
        for path in old_paths & new_paths:
            self._check_operations(path)

    def _check_operations(self, path: str):
        """Check for changes in HTTP operations on a path."""
        old_operations = set(self.old_spec["paths"][path].keys()) - {"parameters"}
        new_operations = set(self.new_spec["paths"][path].keys()) - {"parameters"}

        # Removed operations (BREAKING)
        for removed_op in old_operations - new_operations:
            self.breaking_changes.append(CompatibilityChange(
                change_type="operation_removed",
                severity="breaking",
                path=f"paths.{path}.{removed_op}",
                description=f"Operation '{removed_op.upper()}' on '{path}' was removed",
                old_value=removed_op
            ))

        # Added operations (NON-BREAKING)
        for added_op in new_operations - old_operations:
            self.non_breaking_changes.append(CompatibilityChange(
                change_type="operation_added",
                severity="non-breaking",
                path=f"paths.{path}.{added_op}",
                description=f"New operation '{added_op.upper()}' on '{path}' was added",
                new_value=added_op
            ))

    def _check_schemas(self):
        """Check for changes in component schemas."""
        old_schemas = self.old_spec.get("components", {}).get("schemas", {})
        new_schemas = self.new_spec.get("components", {}).get("schemas", {})

        old_schema_names = set(old_schemas.keys())
        new_schema_names = set(new_schemas.keys())

        # Removed schemas (BREAKING if they were referenced)
        for removed_schema in old_schema_names - new_schema_names:
            self.breaking_changes.append(CompatibilityChange(
                change_type="schema_removed",
                severity="breaking",
                path=f"components.schemas.{removed_schema}",
                description=f"Schema '{removed_schema}' was removed",
                old_value=removed_schema
            ))

        # Added schemas (NON-BREAKING)
        for added_schema in new_schema_names - old_schema_names:
            self.non_breaking_changes.append(CompatibilityChange(
                change_type="schema_added",
                severity="non-breaking",
                path=f"components.schemas.{added_schema}",
                description=f"New schema '{added_schema}' was added",
                new_value=added_schema
            ))

        # Check properties on existing schemas
        for schema_name in old_schema_names & new_schema_names:
            self._check_schema_properties(schema_name, old_schemas[schema_name], new_schemas[schema_name])

    def _check_schema_properties(self, schema_name: str, old_schema: Dict[str, Any], new_schema: Dict[str, Any]):
        """Check for changes in schema properties."""
        old_props = old_schema.get("properties") or {}
        new_props = new_schema.get("properties") or {}

        old_required = set(old_schema.get("required", []))
        new_required = set(new_schema.get("required", []))

        old_prop_names = set(old_props.keys())
        new_prop_names = set(new_props.keys())

        # Removed properties (BREAKING)
        for removed_prop in old_prop_names - new_prop_names:
            self.breaking_changes.append(CompatibilityChange(
                change_type="property_removed",
                severity="breaking",
                path=f"components.schemas.{schema_name}.properties.{removed_prop}",
                description=f"Property '{removed_prop}' was removed from schema '{schema_name}'",
                old_value=removed_prop
            ))

        # Newly required properties (BREAKING)
        for added_required in new_required - old_required:
            if added_required in new_prop_names:
                self.breaking_changes.append(CompatibilityChange(
                    change_type="property_made_required",
                    severity="breaking",
                    path=f"components.schemas.{schema_name}.required",
                    description=f"Property '{added_required}' is now required in schema '{schema_name}'",
                    new_value=added_required
                ))

        # Added optional properties (NON-BREAKING)
        for added_prop in new_prop_names - old_prop_names:
            if added_prop not in new_required:
                self.non_breaking_changes.append(CompatibilityChange(
                    change_type="property_added",
                    severity="non-breaking",
                    path=f"components.schemas.{schema_name}.properties.{added_prop}",
                    description=f"Optional property '{added_prop}' was added to schema '{schema_name}'",
                    new_value=added_prop
                ))

        # Check for type changes on existing properties
        for prop_name in old_prop_names & new_prop_names:
            old_type = old_props[prop_name].get("type")
            new_type = new_props[prop_name].get("type")

            if old_type != new_type:
                self.breaking_changes.append(CompatibilityChange(
                    change_type="property_type_changed",
                    severity="breaking",
                    path=f"components.schemas.{schema_name}.properties.{prop_name}.type",
                    description=f"Property '{prop_name}' type changed from '{old_type}' to '{new_type}' in schema '{schema_name}'",
                    old_value=old_type,
                    new_value=new_type
                ))

    def _check_parameters(self):
        """Check for changes in path/query parameters."""
        # Parameter checking is not yet implemented
        pass

    def _check_responses(self):
        """Check for changes in response schemas."""
        # Response checking is not yet implemented
        pass


def load_spec(spec_path: str) -> Dict[str, Any]:
    """
    Load an API specification from file.

    Args:
        spec_path: Path to specification file

    Returns:
        Parsed specification

    Raises:
        BettyError: If the file cannot be loaded
    """
    spec_file = Path(spec_path)

    if not spec_file.exists():
        raise BettyError(f"Specification file not found: {spec_path}")

    try:
        import yaml
        with open(spec_file, 'r') as f:
            spec = yaml.safe_load(f)

        if not isinstance(spec, dict):
            raise BettyError("Specification must be a valid YAML/JSON object")

        logger.info(f"Loaded specification from {spec_path}")
        return spec

    except Exception as e:
        raise BettyError(f"Failed to load specification: {e}")


def check_compatibility(
    old_spec_path: str,
    new_spec_path: str,
    fail_on_breaking: bool = True
) -> Dict[str, Any]:
    """
    Check compatibility between two API specifications.

    Args:
        old_spec_path: Path to the old specification
        new_spec_path: Path to the new specification
        fail_on_breaking: Whether to fail if breaking changes are detected

    Returns:
        Compatibility report

    Raises:
        BettyError: If the compatibility check fails
    """
    # Load specifications
    old_spec = load_spec(old_spec_path)
    new_spec = load_spec(new_spec_path)

    # Run compatibility check
    checker = CompatibilityChecker(old_spec, new_spec)
    report = checker.check()

    # Add metadata
    report["old_spec_path"] = old_spec_path
    report["new_spec_path"] = new_spec_path

    return report


def format_compatibility_output(report: Dict[str, Any]) -> str:
    """Format a compatibility report for human-readable output."""
    lines = []

    lines.append("\n" + "=" * 60)
    lines.append("API Compatibility Report")
    lines.append("=" * 60)
    lines.append(f"Old: {report.get('old_spec_path', 'unknown')}")
    lines.append(f"New: {report.get('new_spec_path', 'unknown')}")
    lines.append("=" * 60 + "\n")

    # Breaking changes
    breaking = report.get("breaking_changes", [])
    if breaking:
        lines.append(f"❌ BREAKING CHANGES ({len(breaking)}):")
        for change in breaking:
            lines.append(f"  [{change.get('change_type', 'UNKNOWN')}] {change.get('description', '')}")
            if change.get('path'):
                lines.append(f"    Path: {change['path']}")
        lines.append("")

    # Non-breaking changes
    non_breaking = report.get("non_breaking_changes", [])
    if non_breaking:
        lines.append(f"✅ NON-BREAKING CHANGES ({len(non_breaking)}):")
        for change in non_breaking:
            lines.append(f"  [{change.get('change_type', 'UNKNOWN')}] {change.get('description', '')}")
        lines.append("")

    # Summary
    lines.append("=" * 60)
    if report.get("compatible"):
        lines.append("✅ BACKWARD COMPATIBLE")
    else:
        lines.append("❌ NOT BACKWARD COMPATIBLE")
    lines.append("=" * 60 + "\n")

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(
        description="Detect breaking changes between API specification versions"
    )
    parser.add_argument(
        "old_spec_path",
        type=str,
        help="Path to the old/previous API specification"
    )
    parser.add_argument(
        "new_spec_path",
        type=str,
        help="Path to the new/current API specification"
    )
    parser.add_argument(
        "--fail-on-breaking",
        action="store_true",
        default=True,
        help="Exit with error code if breaking changes detected (default: true)"
    )
    parser.add_argument(
        "--format",
        type=str,
        choices=["json", "human"],
        default="json",
        help="Output format (default: json)"
    )

    args = parser.parse_args()

    try:
        # Check if PyYAML is installed
        try:
            import yaml
        except ImportError:
            raise BettyError(
                "PyYAML is required for api.compatibility. Install with: pip install pyyaml"
            )

        # Validate inputs
        validate_path(args.old_spec_path)
        validate_path(args.new_spec_path)

        # Run compatibility check
        logger.info(f"Checking compatibility between {args.old_spec_path} and {args.new_spec_path}")
        report = check_compatibility(
            old_spec_path=args.old_spec_path,
            new_spec_path=args.new_spec_path,
            fail_on_breaking=args.fail_on_breaking
        )

        # Output based on format
        if args.format == "human":
            print(format_compatibility_output(report))
        else:
            output = {
                "status": "success",
                "data": report
            }
            print(json.dumps(output, indent=2))

        # Exit with error if breaking changes were found and fail_on_breaking is set
        if args.fail_on_breaking and not report["compatible"]:
            sys.exit(1)

    except Exception as e:
        logger.error(f"Compatibility check failed: {e}")
        print(json.dumps(format_error_response(e), indent=2))
        sys.exit(1)


if __name__ == "__main__":
    main()
51
skills/api.compatibility/skill.yaml
Normal file
@@ -0,0 +1,51 @@
name: api.compatibility
version: 0.1.0
description: Detect breaking changes between API specification versions

inputs:
  - name: old_spec_path
    type: string
    required: true
    description: Path to the old/previous API specification

  - name: new_spec_path
    type: string
    required: true
    description: Path to the new/current API specification

  - name: fail_on_breaking
    type: boolean
    required: false
    default: true
    description: Exit with error code if breaking changes detected

outputs:
  - name: compatible
    type: boolean
    description: Whether the new spec is backward compatible

  - name: breaking_changes
    type: array
    description: List of breaking changes detected

  - name: non_breaking_changes
    type: array
    description: List of non-breaking changes detected

  - name: change_summary
    type: object
    description: Summary of all changes

dependencies:
  - context.schema

entrypoints:
  - command: /skill/api/compatibility
    handler: check_compatibility.py
    runtime: python

permissions:
  - filesystem:read

status: active

tags: [api, compatibility, breaking-changes, versioning, openapi]
231
skills/api.define/SKILL.md
Normal file
@@ -0,0 +1,231 @@
# api.define

## Overview

**api.define** scaffolds OpenAPI and AsyncAPI specifications from enterprise-compliant templates, generating production-ready API contracts with best practices built-in.

## Purpose

Quickly create API specifications that follow enterprise guidelines:
- Generate Zalando-compliant OpenAPI 3.1 specs
- Generate AsyncAPI 3.0 specs for event-driven APIs
- Include proper error handling (RFC 7807 Problem JSON)
- Use correct naming conventions (snake_case)
- Include required metadata and security schemes

## Usage

### Basic Usage

```bash
python skills/api.define/api_define.py <service_name> [spec_type] [options]
```

### Parameters

| Parameter | Required | Description | Default |
|-----------|----------|-------------|---------|
| `service_name` | Yes | Service/API name | - |
| `spec_type` | No | openapi or asyncapi | `openapi` |
| `--template` | No | Template name | `zalando` |
| `--output-dir` | No | Output directory | `specs` |
| `--version` | No | API version | `1.0.0` |

## Examples

### Example 1: Create Zalando-Compliant OpenAPI Spec

```bash
python skills/api.define/api_define.py user-service openapi --template=zalando
```

**Output**: `specs/user-service.openapi.yaml`

Generated spec includes:
- ✅ Required Zalando metadata (`x-api-id`, `x-audience`)
- ✅ CRUD operations for users resource
- ✅ RFC 7807 Problem JSON for errors
- ✅ snake_case property names
- ✅ X-Flow-ID headers for tracing
- ✅ Proper HTTP status codes
- ✅ JWT authentication scheme

### Example 2: Create AsyncAPI Spec

```bash
python skills/api.define/api_define.py user-service asyncapi
```

**Output**: `specs/user-service.asyncapi.yaml`

Generated spec includes:
- ✅ Lifecycle events (created, updated, deleted)
- ✅ Kafka channel definitions
- ✅ Event payload schemas
- ✅ Publish/subscribe operations

### Example 3: Custom Output Directory

```bash
python skills/api.define/api_define.py order-api openapi \
  --output-dir=api-specs \
  --version=2.0.0
```

## Generated OpenAPI Structure

For a service named `user-service`, the generated OpenAPI spec includes:

**Paths**:
- `GET /users` - List users with pagination
- `POST /users` - Create new user
- `GET /users/{user_id}` - Get user by ID
- `PUT /users/{user_id}` - Update user
- `DELETE /users/{user_id}` - Delete user

**Schemas**:
- `User` - Main resource schema
- `UserCreate` - Creation payload schema
- `UserUpdate` - Update payload schema
- `Pagination` - Pagination metadata
- `Problem` - RFC 7807 error schema

**Responses**:
- `200` - Success (with X-Flow-ID header)
- `201` - Created (with Location and X-Flow-ID headers)
- `204` - No Content (for deletes)
- `400` - Bad Request (application/problem+json)
- `404` - Not Found (application/problem+json)
- `409` - Conflict (application/problem+json)
- `500` - Internal Error (application/problem+json)

**Security**:
- Bearer token authentication (JWT)

**Required Metadata**:
- `x-api-id` - Unique UUID
- `x-audience` - Target audience (company-internal)
- `contact` - Team contact information

## Resource Name Extraction

The skill automatically extracts resource names from service names:

| Service Name | Resource | Plural |
|--------------|----------|--------|
| `user-service` | user | users |
| `order-api` | order | orders |
| `payment-gateway` | payment | payments |

## Naming Conventions

The skill automatically applies proper naming:

| Context | Convention | Example |
|---------|------------|---------|
| Paths | kebab-case | `/user-profiles` |
| Properties | snake_case | `user_id` |
| Schemas | TitleCase | `UserProfile` |
| Operations | camelCase | `getUserById` |
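
These conventions are implemented by small helpers in `api_define.py` (shown later in this commit). A quick sketch of how they behave, assuming `skills/api.define` is on `sys.path` so the module imports directly:

```python
# Illustrative use of the helpers defined in api_define.py; the import path
# is an assumption (the skill directory must be on sys.path).
from api_define import extract_resource_name, to_kebab_case, to_snake_case, to_title_case

extract_resource_name("user-service")  # 'user'  (strips -service/-api/... suffixes)
to_kebab_case("UserProfiles")          # 'user-profiles' -> path segments
to_snake_case("userId")                # 'user_id'       -> property names
to_title_case("user-profile")          # 'UserProfile'   -> schema names
```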

## Integration with api.validate

The generated specs are designed to pass `api.validate` with zero errors:

```bash
# Generate spec
python skills/api.define/api_define.py user-service

# Validate (should pass)
python skills/api.validate/api_validate.py specs/user-service.openapi.yaml zalando
```

## Use in Workflows

```yaml
# workflows/api_first_development.yaml
steps:
  - skill: api.define
    args:
      - "user-service"
      - "openapi"
      - "--template=zalando"
    output: spec_path

  - skill: api.validate
    args:
      - "{spec_path}"
      - "zalando"
    required: true
```

## Customization

After generation, customize the spec:

1. **Add properties** to schemas:
   ```yaml
   User:
     properties:
       user_id: ...
       email: # Add this
         type: string
         format: email
   ```

2. **Add operations**:
   ```yaml
   /users/search: # Add new endpoint
     post:
       summary: Search users
   ```

3. **Modify metadata**:
   ```yaml
   info:
     contact:
       name: Your Team Name # Update this
       email: team@company.com
   ```

## Output

### Success Response

```json
{
  "status": "success",
  "data": {
    "spec_path": "specs/user-service.openapi.yaml",
    "spec_content": {...},
    "api_id": "d0184f38-b98d-11e7-9c56-68f728c1ba70",
    "template_used": "zalando",
    "service_name": "user-service"
  }
}
```

## Dependencies

- **PyYAML**: Required for YAML handling
  ```bash
  pip install pyyaml
  ```

## Templates

| Template | Spec Type | Description |
|----------|-----------|-------------|
| `zalando` | OpenAPI | Zalando-compliant with all required fields |
| `basic` | AsyncAPI | Basic event-driven API structure |

## See Also

- [api.validate](../api.validate/SKILL.md) - Validate generated specs
- [hook.define](../hook.define/SKILL.md) - Set up automatic validation
- [Betty Architecture](../../docs/betty-architecture.md) - Five-layer model
- [API-Driven Development](../../docs/api-driven-development.md) - Complete guide

## Version

**0.1.0** - Initial implementation with Zalando template support
1
skills/api.define/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
446
skills/api.define/api_define.py
Executable file
@@ -0,0 +1,446 @@
#!/usr/bin/env python3
"""
Create OpenAPI and AsyncAPI specifications from templates.

This skill scaffolds API specifications following enterprise guidelines
with proper structure and best practices built-in.
"""

import sys
import json
import argparse
import uuid
import re
from pathlib import Path
from typing import Dict, Any

# Add betty module to path

from betty.logging_utils import setup_logger
from betty.errors import format_error_response, BettyError
from betty.validation import validate_skill_name

logger = setup_logger(__name__)


def to_snake_case(text: str) -> str:
    """Convert text to snake_case."""
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', text)
    s2 = re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1)
    return s2.lower().replace('-', '_').replace(' ', '_')


def to_kebab_case(text: str) -> str:
    """Convert text to kebab-case."""
    return to_snake_case(text).replace('_', '-')


def to_title_case(text: str) -> str:
    """Convert text to TitleCase."""
    return ''.join(word.capitalize() for word in re.split(r'[-_\s]', text))


def pluralize(word: str) -> str:
    """Simple pluralization (works for most common cases)."""
    if word.endswith('y'):
        return word[:-1] + 'ies'
    elif word.endswith('s'):
        return word + 'es'
    else:
        return word + 's'


def load_template(template_name: str, spec_type: str) -> str:
    """
    Load template file content.

    Args:
        template_name: Template name (zalando, basic, minimal)
        spec_type: Specification type (openapi or asyncapi)

    Returns:
        Template content as string

    Raises:
        BettyError: If template not found
    """
    template_file = Path(__file__).parent / "templates" / f"{spec_type}-{template_name}.yaml"

    if not template_file.exists():
        raise BettyError(
            f"Template not found: {spec_type}-{template_name}.yaml. "
            f"Available templates in {template_file.parent}: "
            f"{', '.join([f.stem for f in template_file.parent.glob(f'{spec_type}-*.yaml')])}"
        )

    try:
        with open(template_file, 'r') as f:
            content = f.read()
        logger.info(f"Loaded template: {template_file}")
        return content
    except Exception as e:
        raise BettyError(f"Failed to load template: {e}")


def render_template(template: str, variables: Dict[str, str]) -> str:
    """
    Render template with variables.

    Args:
        template: Template string with {{variable}} placeholders
        variables: Variable values to substitute

    Returns:
        Rendered template string
    """
    result = template
    for key, value in variables.items():
        placeholder = f"{{{{{key}}}}}"
        result = result.replace(placeholder, str(value))

    # Check for unrendered variables
    unrendered = re.findall(r'\{\{(\w+)\}\}', result)
    if unrendered:
        logger.warning(f"Unrendered template variables: {', '.join(set(unrendered))}")

    return result


def extract_resource_name(service_name: str) -> str:
    """
    Extract primary resource name from service name.

    Examples:
        user-service -> user
        order-api -> order
        payment-gateway -> payment
    """
    # Remove common suffixes
    for suffix in ['-service', '-api', '-gateway', '-manager']:
        if service_name.endswith(suffix):
            return service_name[:-len(suffix)]

    return service_name


def generate_openapi_spec(
    service_name: str,
    template_name: str = "zalando",
    version: str = "1.0.0",
    output_dir: str = "specs"
) -> Dict[str, Any]:
    """
    Generate OpenAPI specification from template.

    Args:
        service_name: Service/API name
        template_name: Template to use
        version: API version
        output_dir: Output directory

    Returns:
        Result dictionary with spec path and content
    """
    # Generate API ID
    api_id = str(uuid.uuid4())

    # Extract resource name
    resource_name = extract_resource_name(service_name)

    # Generate template variables
    variables = {
        "service_name": to_kebab_case(service_name),
        "service_title": to_title_case(service_name),
        "version": version,
        "description": f"RESTful API for {service_name.replace('-', ' ')} management",
        "team_name": "Platform Team",
        "team_email": "platform@company.com",
        "api_id": api_id,
        "audience": "company-internal",
        "resource_singular": to_snake_case(resource_name),
        "resource_plural": pluralize(to_snake_case(resource_name)),
        "resource_title": to_title_case(resource_name),
        "resource_schema": to_title_case(resource_name)
    }

    logger.info(f"Generated template variables: {variables}")

    # Load and render template
    template = load_template(template_name, "openapi")
    spec_content = render_template(template, variables)

    # Parse to validate YAML
    try:
        import yaml
        spec_dict = yaml.safe_load(spec_content)
    except Exception as e:
        raise BettyError(f"Failed to parse generated spec: {e}")

    # Create output directory
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    # Write specification file
    spec_filename = f"{to_kebab_case(service_name)}.openapi.yaml"
    spec_path = output_path / spec_filename

    with open(spec_path, 'w') as f:
        f.write(spec_content)

    logger.info(f"Generated OpenAPI spec: {spec_path}")

    return {
        "spec_path": str(spec_path),
        "spec_content": spec_dict,
        "api_id": api_id,
        "template_used": template_name,
        "service_name": to_kebab_case(service_name)
    }


def generate_asyncapi_spec(
    service_name: str,
    template_name: str = "basic",
    version: str = "1.0.0",
    output_dir: str = "specs"
) -> Dict[str, Any]:
    """
    Generate AsyncAPI specification from template.

    Args:
        service_name: Service/API name
        template_name: Template to use
        version: API version
        output_dir: Output directory

    Returns:
        Result dictionary with spec path and content
    """
    # Basic AsyncAPI template (inline for now)
    resource_name = extract_resource_name(service_name)

    asyncapi_template = f"""asyncapi: 3.0.0

info:
  title: {to_title_case(service_name)} Events
  version: {version}
  description: Event-driven API for {service_name.replace('-', ' ')} lifecycle notifications

servers:
  production:
    host: kafka.company.com:9092
    protocol: kafka
    description: Production Kafka cluster

channels:
  {to_snake_case(resource_name)}.created:
    address: {to_snake_case(resource_name)}.created.v1
    messages:
      {to_title_case(resource_name)}Created:
        $ref: '#/components/messages/{to_title_case(resource_name)}Created'

  {to_snake_case(resource_name)}.updated:
    address: {to_snake_case(resource_name)}.updated.v1
    messages:
      {to_title_case(resource_name)}Updated:
        $ref: '#/components/messages/{to_title_case(resource_name)}Updated'

  {to_snake_case(resource_name)}.deleted:
    address: {to_snake_case(resource_name)}.deleted.v1
    messages:
      {to_title_case(resource_name)}Deleted:
        $ref: '#/components/messages/{to_title_case(resource_name)}Deleted'

operations:
  publish{to_title_case(resource_name)}Created:
    action: send
    channel:
      $ref: '#/channels/{to_snake_case(resource_name)}.created'

  subscribe{to_title_case(resource_name)}Created:
|
||||
action: receive
|
||||
channel:
|
||||
$ref: '#/channels/{to_snake_case(resource_name)}.created'
|
||||
|
||||
components:
|
||||
messages:
|
||||
{to_title_case(resource_name)}Created:
|
||||
name: {to_title_case(resource_name)}Created
|
||||
title: {to_title_case(resource_name)} Created Event
|
||||
contentType: application/json
|
||||
payload:
|
||||
$ref: '#/components/schemas/{to_title_case(resource_name)}CreatedPayload'
|
||||
|
||||
{to_title_case(resource_name)}Updated:
|
||||
name: {to_title_case(resource_name)}Updated
|
||||
title: {to_title_case(resource_name)} Updated Event
|
||||
contentType: application/json
|
||||
payload:
|
||||
$ref: '#/components/schemas/{to_title_case(resource_name)}UpdatedPayload'
|
||||
|
||||
{to_title_case(resource_name)}Deleted:
|
||||
name: {to_title_case(resource_name)}Deleted
|
||||
title: {to_title_case(resource_name)} Deleted Event
|
||||
contentType: application/json
|
||||
payload:
|
||||
$ref: '#/components/schemas/{to_title_case(resource_name)}DeletedPayload'
|
||||
|
||||
schemas:
|
||||
{to_title_case(resource_name)}CreatedPayload:
|
||||
type: object
|
||||
required: [event_id, {to_snake_case(resource_name)}_id, occurred_at]
|
||||
properties:
|
||||
event_id:
|
||||
type: string
|
||||
format: uuid
|
||||
{to_snake_case(resource_name)}_id:
|
||||
type: string
|
||||
format: uuid
|
||||
occurred_at:
|
||||
type: string
|
||||
format: date-time
|
||||
|
||||
{to_title_case(resource_name)}UpdatedPayload:
|
||||
type: object
|
||||
required: [event_id, {to_snake_case(resource_name)}_id, occurred_at, changes]
|
||||
properties:
|
||||
event_id:
|
||||
type: string
|
||||
format: uuid
|
||||
{to_snake_case(resource_name)}_id:
|
||||
type: string
|
||||
format: uuid
|
||||
changes:
|
||||
type: object
|
||||
occurred_at:
|
||||
type: string
|
||||
format: date-time
|
||||
|
||||
{to_title_case(resource_name)}DeletedPayload:
|
||||
type: object
|
||||
required: [event_id, {to_snake_case(resource_name)}_id, occurred_at]
|
||||
properties:
|
||||
event_id:
|
||||
type: string
|
||||
format: uuid
|
||||
{to_snake_case(resource_name)}_id:
|
||||
type: string
|
||||
format: uuid
|
||||
occurred_at:
|
||||
type: string
|
||||
format: date-time
|
||||
"""
|
||||
|
||||
# Parse to validate YAML
|
||||
try:
|
||||
import yaml
|
||||
spec_dict = yaml.safe_load(asyncapi_template)
|
||||
except Exception as e:
|
||||
raise BettyError(f"Failed to parse generated spec: {e}")
|
||||
|
||||
# Create output directory
|
||||
output_path = Path(output_dir)
|
||||
output_path.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Write specification file
|
||||
spec_filename = f"{to_kebab_case(service_name)}.asyncapi.yaml"
|
||||
spec_path = output_path / spec_filename
|
||||
|
||||
with open(spec_path, 'w') as f:
|
||||
f.write(asyncapi_template)
|
||||
|
||||
logger.info(f"Generated AsyncAPI spec: {spec_path}")
|
||||
|
||||
return {
|
||||
"spec_path": str(spec_path),
|
||||
"spec_content": spec_dict,
|
||||
"template_used": template_name,
|
||||
"service_name": to_kebab_case(service_name)
|
||||
}
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Create OpenAPI and AsyncAPI specifications from templates"
|
||||
)
|
||||
parser.add_argument(
|
||||
"service_name",
|
||||
type=str,
|
||||
help="Name of the service/API (e.g., user-service, order-api)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"spec_type",
|
||||
type=str,
|
||||
nargs="?",
|
||||
default="openapi",
|
||||
choices=["openapi", "asyncapi"],
|
||||
help="Type of specification (default: openapi)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--template",
|
||||
type=str,
|
||||
default="zalando",
|
||||
help="Template to use (default: zalando for OpenAPI, basic for AsyncAPI)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output-dir",
|
||||
type=str,
|
||||
default="specs",
|
||||
help="Output directory (default: specs)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--version",
|
||||
type=str,
|
||||
default="1.0.0",
|
||||
help="API version (default: 1.0.0)"
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
try:
|
||||
# Check if PyYAML is installed
|
||||
try:
|
||||
import yaml
|
||||
except ImportError:
|
||||
raise BettyError(
|
||||
"PyYAML is required for api.define. Install with: pip install pyyaml"
|
||||
)
|
||||
|
||||
# Generate specification
|
||||
logger.info(
|
||||
f"Generating {args.spec_type.upper()} spec for '{args.service_name}' "
|
||||
f"using template '{args.template}'"
|
||||
)
|
||||
|
||||
if args.spec_type == "openapi":
|
||||
result = generate_openapi_spec(
|
||||
service_name=args.service_name,
|
||||
template_name=args.template,
|
||||
version=args.version,
|
||||
output_dir=args.output_dir
|
||||
)
|
||||
elif args.spec_type == "asyncapi":
|
||||
result = generate_asyncapi_spec(
|
||||
service_name=args.service_name,
|
||||
template_name=args.template,
|
||||
version=args.version,
|
||||
output_dir=args.output_dir
|
||||
)
|
||||
else:
|
||||
raise BettyError(f"Unsupported spec type: {args.spec_type}")
|
||||
|
||||
# Return structured result
|
||||
output = {
|
||||
"status": "success",
|
||||
"data": result
|
||||
}
|
||||
print(json.dumps(output, indent=2))
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to generate specification: {e}")
|
||||
print(json.dumps(format_error_response(e), indent=2))
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
57
skills/api.define/skill.yaml
Normal file
57
skills/api.define/skill.yaml
Normal file
@@ -0,0 +1,57 @@
name: api.define
version: 0.1.0
description: Create OpenAPI and AsyncAPI specifications from templates

inputs:
  - name: service_name
    type: string
    required: true
    description: Name of the service/API (e.g., user-service, order-api)

  - name: spec_type
    type: string
    required: false
    default: openapi
    description: Type of specification (openapi or asyncapi)

  - name: template
    type: string
    required: false
    default: zalando
    description: Template to use (zalando, basic, minimal)

  - name: output_dir
    type: string
    required: false
    default: specs
    description: Output directory for generated specification

  - name: version
    type: string
    required: false
    default: 1.0.0
    description: API version

outputs:
  - name: spec_path
    type: string
    description: Path to generated specification file

  - name: spec_content
    type: object
    description: Generated specification content

dependencies:
  - context.schema

entrypoints:
  - command: /skill/api/define
    handler: api_define.py
    runtime: python
    permissions:
      - filesystem:read
      - filesystem:write

status: active

tags: [api, openapi, asyncapi, scaffolding, zalando]
1
skills/api.define/templates/__init__.py
Normal file
1
skills/api.define/templates/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
303
skills/api.define/templates/openapi-zalando.yaml
Normal file
303
skills/api.define/templates/openapi-zalando.yaml
Normal file
@@ -0,0 +1,303 @@
openapi: 3.1.0

info:
  title: {{service_title}}
  version: {{version}}
  description: {{description}}
  contact:
    name: {{team_name}}
    email: {{team_email}}
  x-api-id: {{api_id}}
  x-audience: {{audience}}

servers:
  - url: https://api.company.com/{{service_name}}/v1
    description: Production

paths:
  /{{resource_plural}}:
    get:
      summary: List {{resource_plural}}
      operationId: list{{resource_title}}
      tags: [{{resource_title}}]
      parameters:
        - name: limit
          in: query
          description: Maximum number of items to return
          schema:
            type: integer
            minimum: 1
            maximum: 100
            default: 20
        - name: offset
          in: query
          description: Number of items to skip
          schema:
            type: integer
            minimum: 0
            default: 0
      responses:
        '200':
          description: List of {{resource_plural}}
          headers:
            X-Flow-ID:
              description: Request flow ID for tracing
              schema:
                type: string
                format: uuid
          content:
            application/json:
              schema:
                type: object
                required: [{{resource_plural}}, pagination]
                properties:
                  {{resource_plural}}:
                    type: array
                    items:
                      $ref: '#/components/schemas/{{resource_schema}}'
                  pagination:
                    $ref: '#/components/schemas/Pagination'
        '400':
          $ref: '#/components/responses/BadRequest'
        '500':
          $ref: '#/components/responses/InternalError'

    post:
      summary: Create a new {{resource_singular}}
      operationId: create{{resource_title}}
      tags: [{{resource_title}}]
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/{{resource_schema}}Create'
      responses:
        '201':
          description: {{resource_title}} created successfully
          headers:
            Location:
              description: URL of the created resource
              schema:
                type: string
                format: uri
            X-Flow-ID:
              description: Request flow ID for tracing
              schema:
                type: string
                format: uuid
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/{{resource_schema}}'
        '400':
          $ref: '#/components/responses/BadRequest'
        '409':
          $ref: '#/components/responses/Conflict'
        '500':
          $ref: '#/components/responses/InternalError'

  /{{resource_plural}}/{{{resource_singular}}_id}:
    parameters:
      - name: {{resource_singular}}_id
        in: path
        required: true
        description: Unique identifier of the {{resource_singular}}
        schema:
          type: string
          format: uuid

    get:
      summary: Get {{resource_singular}} by ID
      operationId: get{{resource_title}}ById
      tags: [{{resource_title}}]
      responses:
        '200':
          description: {{resource_title}} details
          headers:
            X-Flow-ID:
              description: Request flow ID for tracing
              schema:
                type: string
                format: uuid
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/{{resource_schema}}'
        '404':
          $ref: '#/components/responses/NotFound'
        '500':
          $ref: '#/components/responses/InternalError'

    put:
      summary: Update {{resource_singular}}
      operationId: update{{resource_title}}
      tags: [{{resource_title}}]
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/{{resource_schema}}Update'
      responses:
        '200':
          description: {{resource_title}} updated successfully
          headers:
            X-Flow-ID:
              description: Request flow ID for tracing
              schema:
                type: string
                format: uuid
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/{{resource_schema}}'
        '400':
          $ref: '#/components/responses/BadRequest'
        '404':
          $ref: '#/components/responses/NotFound'
        '500':
          $ref: '#/components/responses/InternalError'

    delete:
      summary: Delete {{resource_singular}}
      operationId: delete{{resource_title}}
      tags: [{{resource_title}}]
      responses:
        '204':
          description: {{resource_title}} deleted successfully
          headers:
            X-Flow-ID:
              description: Request flow ID for tracing
              schema:
                type: string
                format: uuid
        '404':
          $ref: '#/components/responses/NotFound'
        '500':
          $ref: '#/components/responses/InternalError'

components:
  schemas:
    {{resource_schema}}:
      type: object
      required: [{{resource_singular}}_id, created_at]
      properties:
        {{resource_singular}}_id:
          type: string
          format: uuid
          description: Unique identifier
        created_at:
          type: string
          format: date-time
          description: Creation timestamp
        updated_at:
          type: string
          format: date-time
          description: Last update timestamp

    {{resource_schema}}Create:
      type: object
      required: []
      properties:
        # Add creation-specific fields here

    {{resource_schema}}Update:
      type: object
      properties:
        # Add update-specific fields here

    Pagination:
      type: object
      required: [limit, offset, total]
      properties:
        limit:
          type: integer
          description: Number of items per page
        offset:
          type: integer
          description: Number of items skipped
        total:
          type: integer
          description: Total number of items available

    Problem:
      type: object
      required: [type, title, status]
      properties:
        type:
          type: string
          format: uri
          description: URI reference identifying the problem type
        title:
          type: string
          description: Short, human-readable summary
        status:
          type: integer
          description: HTTP status code
        detail:
          type: string
          description: Human-readable explanation
        instance:
          type: string
          format: uri
          description: URI reference identifying the specific occurrence

  responses:
    BadRequest:
      description: Bad request - invalid parameters or malformed request
      content:
        application/problem+json:
          schema:
            $ref: '#/components/schemas/Problem'
          example:
            type: https://api.company.com/problems/bad-request
            title: Bad Request
            status: 400
            detail: "Invalid query parameter 'limit': must be between 1 and 100"

    NotFound:
      description: Resource not found
      content:
        application/problem+json:
          schema:
            $ref: '#/components/schemas/Problem'
          example:
            type: https://api.company.com/problems/not-found
            title: Not Found
            status: 404
            detail: {{resource_title}} with the specified ID was not found

    Conflict:
      description: Conflict - resource already exists or state conflict
      content:
        application/problem+json:
          schema:
            $ref: '#/components/schemas/Problem'
          example:
            type: https://api.company.com/problems/conflict
            title: Conflict
            status: 409
            detail: {{resource_title}} with this identifier already exists

    InternalError:
      description: Internal server error
      content:
        application/problem+json:
          schema:
            $ref: '#/components/schemas/Problem'
          example:
            type: https://api.company.com/problems/internal-error
            title: Internal Server Error
            status: 500
            detail: An unexpected error occurred while processing the request

  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
      description: JWT-based authentication

security:
  - bearerAuth: []
299
skills/api.generatemodels/SKILL.md
Normal file
299
skills/api.generatemodels/SKILL.md
Normal file
@@ -0,0 +1,299 @@
# api.generate-models

## Overview

**api.generate-models** generates type-safe models from OpenAPI and AsyncAPI specifications, enabling shared models between frontend and backend using code generation.

## Purpose

Transform API specifications into type-safe code:
- Generate TypeScript interfaces from OpenAPI schemas
- Generate Python dataclasses/Pydantic models
- Generate Java classes, Go structs, C# classes
- Single source of truth: the API specification
- Automatic synchronization when specs change

## Usage

### Basic Usage

```bash
python skills/api.generatemodels/modelina_generate.py <spec_path> <language> [options]
```

### Parameters

| Parameter | Required | Description | Default |
|-----------|----------|-------------|---------|
| `spec_path` | Yes | Path to API spec file | - |
| `language` | Yes | Target language | - |
| `--output-dir` | No | Output directory | `src/models` |
| `--package-name` | No | Package/module name | - |

### Supported Languages

| Language | Extension | Status |
|----------|-----------|--------|
| `typescript` | `.ts` | ✅ Supported |
| `python` | `.py` | ✅ Supported |
| `java` | `.java` | 🚧 Planned |
| `go` | `.go` | 🚧 Planned |
| `csharp` | `.cs` | 🚧 Planned |
| `rust` | `.rs` | 🚧 Planned |

## Examples

### Example 1: Generate TypeScript Models

```bash
python skills/api.generatemodels/modelina_generate.py \
  specs/user-service.openapi.yaml \
  typescript \
  --output-dir=src/models/user-service
```

**Generated files**:
```
src/models/user-service/
├── User.ts
├── UserCreate.ts
├── UserUpdate.ts
├── Pagination.ts
└── Problem.ts
```

**Example TypeScript output**:
```typescript
// src/models/user-service/User.ts
export interface User {
  /** Unique identifier */
  user_id: string;
  /** Creation timestamp */
  created_at: string;
  /** Last update timestamp */
  updated_at?: string;
}

// src/models/user-service/Pagination.ts
export interface Pagination {
  /** Number of items per page */
  limit: number;
  /** Number of items skipped */
  offset: number;
  /** Total number of items available */
  total: number;
}
```

### Example 2: Generate Python Models

```bash
python skills/api.generatemodels/modelina_generate.py \
  specs/user-service.openapi.yaml \
  python \
  --output-dir=src/models/user_service
```

**Generated files**:
```
src/models/user_service/
└── models.py
```

**Example Python output**:
```python
# src/models/user_service/models.py
from pydantic import BaseModel, Field
from typing import Optional
from datetime import datetime
from uuid import UUID


class User(BaseModel):
    """User model"""
    user_id: UUID = Field(..., description="Unique identifier")
    created_at: datetime = Field(..., description="Creation timestamp")
    updated_at: Optional[datetime] = Field(None, description="Last update timestamp")


class Pagination(BaseModel):
    """Pagination metadata"""
    limit: int = Field(..., description="Number of items per page")
    offset: int = Field(..., description="Number of items skipped")
    total: int = Field(..., description="Total number of items available")
```

### Example 3: Generate for Multiple Languages

```bash
# TypeScript for frontend
python skills/api.generatemodels/modelina_generate.py \
  specs/user-service.openapi.yaml \
  typescript \
  --output-dir=frontend/src/models

# Python for backend
python skills/api.generatemodels/modelina_generate.py \
  specs/user-service.openapi.yaml \
  python \
  --output-dir=backend/app/models
```

## Code Generators Used

The skill uses multiple code generation approaches:

### 1. datamodel-code-generator (Primary)

**Best for**: OpenAPI specs → Python/TypeScript
**Installation**: `pip install datamodel-code-generator`

Generates:
- Python: Pydantic v2 models with type hints
- TypeScript: Type-safe interfaces
- Validates schema during generation

### 2. Simple Built-in Generator (Fallback)

**Best for**: Basic models when external tools are not available
**Installation**: None required

Generates:
- Python: dataclasses
- TypeScript: interfaces
- Basic but reliable

### 3. Modelina (Future)

**Best for**: AsyncAPI specs, multiple languages
**Installation**: `npm install -g @asyncapi/modelina`
**Status**: Planned
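When run through the skill entrypoint, `generate_models()` tries these generators in the order above. A minimal sketch of that fallback chain, assuming the script's functions are importable (in the committed code they live in `modelina_generate.py` and are normally invoked via its CLI, so the flat import path here is an assumption for illustration):

```python
# Minimal sketch of the documented fallback order. Function names come from
# modelina_generate.py in this skill; the import path is an assumption.
from modelina_generate import (
    generate_models_datamodel_code_generator,
    generate_models_simple,
)
from betty.errors import BettyError


def generate_with_fallback(spec_path: str, language: str, output_dir: str) -> dict:
    try:
        # Preferred: datamodel-code-generator, when installed
        return generate_models_datamodel_code_generator(spec_path, language, output_dir)
    except BettyError:
        # Fallback: the simple built-in generator (no external tools required)
        return generate_models_simple(spec_path, language, output_dir)
```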
## Output

### Success Response

```json
{
  "status": "success",
  "data": {
    "models_path": "src/models/user-service",
    "files_generated": [
      "src/models/user-service/User.ts",
      "src/models/user-service/UserCreate.ts",
      "src/models/user-service/Pagination.ts",
      "src/models/user-service/Problem.ts"
    ],
    "model_count": 4,
    "generator_used": "datamodel-code-generator"
  }
}
```

## Integration with Workflows

```yaml
# workflows/api_first_development.yaml
steps:
  - skill: api.define
    args:
      - "user-service"
      - "openapi"
    output: spec_path

  - skill: api.validate
    args:
      - "{spec_path}"
      - "zalando"
    required: true

  - skill: api.generate-models
    args:
      - "{spec_path}"
      - "typescript"
      - "--output-dir=frontend/src/models"

  - skill: api.generate-models
    args:
      - "{spec_path}"
      - "python"
      - "--output-dir=backend/app/models"
```

## Integration with Hooks

Auto-regenerate models when specs change:

```bash
python skills/hook.define/hook_define.py \
  on_file_save \
  "python betty/skills/api.generatemodels/modelina_generate.py {file_path} typescript --output-dir=src/models" \
  --pattern="specs/*.openapi.yaml" \
  --blocking=false \
  --description="Auto-regenerate TypeScript models when OpenAPI specs change"
```

## Benefits

### For Developers
- ✅ **Type safety**: Catch errors at compile time, not runtime
- ✅ **IDE autocomplete**: Full IntelliSense/autocomplete support
- ✅ **No manual typing**: Models generated automatically
- ✅ **Always in sync**: Regenerate when the spec changes

### For Teams
- ✅ **Single source of truth**: The API spec defines the types
- ✅ **Frontend/backend alignment**: Same types everywhere
- ✅ **Reduced errors**: Type mismatches caught early
- ✅ **Faster development**: No manual model creation

### For Organizations
- ✅ **Consistency**: All services use the same model generation
- ✅ **Maintainability**: Update spec → regenerate → done
- ✅ **Documentation**: Types are self-documenting
- ✅ **Quality**: Generated code is tested and reliable

## Dependencies

### Required
- **PyYAML**: For YAML parsing (`pip install pyyaml`)

### Optional (Better Output)
- **datamodel-code-generator**: For high-quality Python/TypeScript (`pip install datamodel-code-generator`)
- **Node.js + Modelina**: For AsyncAPI and more languages (`npm install -g @asyncapi/modelina`)

## Examples with Real Specs

Using the user-service spec from Phase 1:

```bash
# Generate TypeScript
python skills/api.generatemodels/modelina_generate.py \
  specs/user-service.openapi.yaml \
  typescript

# Output:
{
  "status": "success",
  "data": {
    "models_path": "src/models",
    "files_generated": [
      "src/models/User.ts",
      "src/models/UserCreate.ts",
      "src/models/UserUpdate.ts",
      "src/models/Pagination.ts",
      "src/models/Problem.ts"
    ],
    "model_count": 5
  }
}
```

## See Also

- [api.define](../api.define/SKILL.md) - Create OpenAPI specs
- [api.validate](../api.validate/SKILL.md) - Validate specs
- [Betty Architecture](../../docs/betty-architecture.md) - Five-layer model
- [API-Driven Development](../../docs/api-driven-development.md) - Complete guide

## Version

**0.1.0** - Initial implementation with TypeScript and Python support
1
skills/api.generatemodels/__init__.py
Normal file
1
skills/api.generatemodels/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
549
skills/api.generatemodels/modelina_generate.py
Executable file
549
skills/api.generatemodels/modelina_generate.py
Executable file
@@ -0,0 +1,549 @@
#!/usr/bin/env python3
"""
Generate type-safe models from OpenAPI and AsyncAPI specifications using Modelina.

This skill uses AsyncAPI Modelina to generate models in various languages
from API specifications.
"""

import sys
import json
import argparse
import subprocess
import shutil
from pathlib import Path
from typing import Dict, Any, List

# Add betty module to path

from betty.logging_utils import setup_logger
from betty.errors import format_error_response, BettyError
from betty.validation import validate_path

logger = setup_logger(__name__)

# Supported languages
SUPPORTED_LANGUAGES = [
    "typescript",
    "python",
    "java",
    "go",
    "csharp",
    "rust",
    "kotlin",
    "dart"
]

# Language-specific configurations
LANGUAGE_CONFIG = {
    "typescript": {
        "extension": ".ts",
        "package_json_required": False,
        "modelina_generator": "typescript"
    },
    "python": {
        "extension": ".py",
        "package_json_required": False,
        "modelina_generator": "python"
    },
    "java": {
        "extension": ".java",
        "package_json_required": False,
        "modelina_generator": "java"
    },
    "go": {
        "extension": ".go",
        "package_json_required": False,
        "modelina_generator": "go"
    },
    "csharp": {
        "extension": ".cs",
        "package_json_required": False,
        "modelina_generator": "csharp"
    }
}


def check_node_installed() -> bool:
    """
    Check if Node.js is installed.

    Returns:
        True if Node.js is available, False otherwise
    """
    try:
        result = subprocess.run(
            ["node", "--version"],
            capture_output=True,
            text=True,
            timeout=5
        )
        if result.returncode == 0:
            version = result.stdout.strip()
            logger.info(f"Node.js found: {version}")
            return True
        return False
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False


def check_npx_installed() -> bool:
    """
    Check if npx is installed.

    Returns:
        True if npx is available, False otherwise
    """
    try:
        result = subprocess.run(
            ["npx", "--version"],
            capture_output=True,
            text=True,
            timeout=5
        )
        return result.returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False


def generate_modelina_script(
    spec_path: str,
    language: str,
    output_dir: str,
    package_name: str = None
) -> str:
    """
    Generate a Node.js script that uses Modelina to generate models.

    Args:
        spec_path: Path to spec file
        language: Target language
        output_dir: Output directory
        package_name: Package name (optional)

    Returns:
        JavaScript code as string
    """
    # Modelina generator based on language
    generator_map = {
        "typescript": "TypeScriptGenerator",
        "python": "PythonGenerator",
        "java": "JavaGenerator",
        "go": "GoGenerator",
        "csharp": "CSharpGenerator"
    }

    generator_class = generator_map.get(language, "TypeScriptGenerator")

    script = f"""
const {{ {generator_class} }} = require('@asyncapi/modelina');
const fs = require('fs');
const path = require('path');

async function generate() {{
  try {{
    // Read the spec file
    const spec = fs.readFileSync('{spec_path}', 'utf8');
    const specData = JSON.parse(spec);

    // Create generator
    const generator = new {generator_class}();

    // Generate models
    const models = await generator.generate(specData);

    // Ensure output directory exists
    const outputDir = '{output_dir}';
    if (!fs.existsSync(outputDir)) {{
      fs.mkdirSync(outputDir, {{ recursive: true }});
    }}

    // Write models to files
    const filesGenerated = [];
    for (const model of models) {{
      const filePath = path.join(outputDir, model.name + model.extension);
      fs.writeFileSync(filePath, model.result);
      filesGenerated.push(filePath);
    }}

    // Output result
    console.log(JSON.stringify({{
      success: true,
      files_generated: filesGenerated,
      model_count: models.length
    }}));

  }} catch (error) {{
    console.error(JSON.stringify({{
      success: false,
      error: error.message,
      stack: error.stack
    }}));
    process.exit(1);
  }}
}}

generate();
"""
    return script


def generate_models_datamodel_code_generator(
    spec_path: str,
    language: str,
    output_dir: str,
    package_name: str = None
) -> Dict[str, Any]:
    """
    Generate models using datamodel-code-generator (Python fallback).

    This is used when Modelina/Node.js is not available.
    Works for OpenAPI specs only, generating Python/TypeScript models.

    Args:
        spec_path: Path to specification file
        language: Target language
        output_dir: Output directory
        package_name: Package name

    Returns:
        Result dictionary
    """
    try:
        # Check if datamodel-code-generator is installed
        result = subprocess.run(
            ["datamodel-codegen", "--version"],
            capture_output=True,
            text=True,
            timeout=5
        )

        if result.returncode != 0:
            raise BettyError(
                "datamodel-code-generator not installed. "
                "Install with: pip install datamodel-code-generator"
            )

    except FileNotFoundError:
        raise BettyError(
            "datamodel-code-generator not found. "
            "Install with: pip install datamodel-code-generator"
        )

    # Create output directory
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    # Determine output file based on language
    if language == "python":
        output_file = output_path / "models.py"
        cmd = [
            "datamodel-codegen",
            "--input", spec_path,
            "--output", str(output_file),
            "--input-file-type", "openapi",
            "--output-model-type", "pydantic_v2.BaseModel",
            "--snake-case-field",
            "--use-standard-collections"
        ]
    elif language == "typescript":
        output_file = output_path / "models.ts"
        cmd = [
            "datamodel-codegen",
            "--input", spec_path,
            "--output", str(output_file),
            "--input-file-type", "openapi",
            "--output-model-type", "typescript"
        ]
    else:
        raise BettyError(
            f"datamodel-code-generator fallback only supports Python and TypeScript, not {language}"
        )

    # Run code generator
    logger.info(f"Running datamodel-code-generator: {' '.join(cmd)}")
    result = subprocess.run(
        cmd,
        capture_output=True,
        text=True,
        timeout=60
    )

    if result.returncode != 0:
        raise BettyError(f"Code generation failed: {result.stderr}")

    # Count generated files
    files_generated = [str(output_file)]

    return {
        "models_path": str(output_path),
        "files_generated": files_generated,
        "model_count": 1,
        "generator_used": "datamodel-code-generator"
    }


def generate_models_simple(
    spec_path: str,
    language: str,
    output_dir: str,
    package_name: str = None
) -> Dict[str, Any]:
    """
    Simple model generation without external tools.

    Generates basic model files from OpenAPI schemas as a last resort.

    Args:
        spec_path: Path to specification file
        language: Target language
        output_dir: Output directory
        package_name: Package name

    Returns:
        Result dictionary
    """
    import yaml

    # Load spec
    with open(spec_path, 'r') as f:
        spec = yaml.safe_load(f)

    # Get schemas
    schemas = spec.get("components", {}).get("schemas", {})

    if not schemas:
        raise BettyError("No schemas found in specification")

    # Create output directory
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)

    files_generated = []

    # Generate basic models for each schema
    for schema_name, schema_def in schemas.items():
        if language == "typescript":
            content = generate_typescript_interface(schema_name, schema_def)
            file_path = output_path / f"{schema_name}.ts"
        elif language == "python":
            content = generate_python_dataclass(schema_name, schema_def)
            file_path = output_path / f"{schema_name.lower()}.py"
        else:
            raise BettyError(f"Simple generation only supports TypeScript and Python, not {language}")

        with open(file_path, 'w') as f:
            f.write(content)

        files_generated.append(str(file_path))
        logger.info(f"Generated {file_path}")

    return {
        "models_path": str(output_path),
        "files_generated": files_generated,
        "model_count": len(schemas),
        "generator_used": "simple"
    }


def generate_typescript_interface(name: str, schema: Dict[str, Any]) -> str:
    """Generate TypeScript interface from schema."""
    properties = schema.get("properties") or {}
    required = schema.get("required", [])

    lines = [f"export interface {name} {{"]

    if not properties:
        lines.append("  // No properties defined")

    for prop_name, prop_def in properties.items():
        prop_type = map_openapi_type_to_typescript(prop_def.get("type", "any"))
        optional = "" if prop_name in required else "?"
        description = prop_def.get("description", "")

        if description:
            lines.append(f"  /** {description} */")
        lines.append(f"  {prop_name}{optional}: {prop_type};")

    lines.append("}")

    return "\n".join(lines)


def generate_python_dataclass(name: str, schema: Dict[str, Any]) -> str:
    """Generate Python dataclass from schema."""
    properties = schema.get("properties") or {}
    required = schema.get("required", [])

    lines = [
        "from dataclasses import dataclass",
        "from typing import Optional",
        "from datetime import datetime",
        "",
        "@dataclass",
        f"class {name}:"
    ]

    if not properties:
        lines.append("    pass")
    else:
        for prop_name, prop_def in properties.items():
            prop_type = map_openapi_type_to_python(prop_def)
            description = prop_def.get("description", "")

            if prop_name not in required:
                prop_type = f"Optional[{prop_type}]"

            if description:
                lines.append(f"    # {description}")

            default = " = None" if prop_name not in required else ""
            lines.append(f"    {prop_name}: {prop_type}{default}")

    return "\n".join(lines)


def map_openapi_type_to_typescript(openapi_type: str) -> str:
    """Map OpenAPI type to TypeScript type."""
    type_map = {
        "string": "string",
        "number": "number",
        "integer": "number",
        "boolean": "boolean",
        "array": "any[]",
        "object": "object"
    }
    return type_map.get(openapi_type, "any")


def map_openapi_type_to_python(prop_def: Dict[str, Any]) -> str:
    """Map OpenAPI type to Python type."""
    openapi_type = prop_def.get("type", "Any")
    format_type = prop_def.get("format", "")

    if openapi_type == "string":
        if format_type == "date-time":
            return "datetime"
        elif format_type == "uuid":
            return "str"  # or UUID from uuid module
        return "str"
    elif openapi_type == "number" or openapi_type == "integer":
        return "int" if openapi_type == "integer" else "float"
    elif openapi_type == "boolean":
        return "bool"
    elif openapi_type == "array":
        return "list"
    elif openapi_type == "object":
        return "dict"
    return "Any"


def generate_models(
    spec_path: str,
    language: str,
    output_dir: str = "src/models",
    package_name: str = None
) -> Dict[str, Any]:
    """
    Generate models from API specification.

    Args:
        spec_path: Path to specification file
        language: Target language
        output_dir: Output directory
        package_name: Package name

    Returns:
        Result dictionary with generated files info

    Raises:
        BettyError: If generation fails
    """
    # Validate language
    if language not in SUPPORTED_LANGUAGES:
        raise BettyError(
            f"Unsupported language '{language}'. "
            f"Supported: {', '.join(SUPPORTED_LANGUAGES)}"
        )

    # Validate spec file exists
    if not Path(spec_path).exists():
        raise BettyError(f"Specification file not found: {spec_path}")

    logger.info(f"Generating {language} models from {spec_path}")

    # Try datamodel-code-generator first (most reliable for OpenAPI)
    try:
        logger.info("Attempting generation with datamodel-code-generator")
        result = generate_models_datamodel_code_generator(
            spec_path, language, output_dir, package_name
        )
        return result
    except BettyError as e:
        logger.warning(f"datamodel-code-generator not available: {e}")

    # Fallback to simple generation
    logger.info("Using simple built-in generator")
    result = generate_models_simple(
        spec_path, language, output_dir, package_name
    )

    return result


def main():
    parser = argparse.ArgumentParser(
        description="Generate type-safe models from API specifications using Modelina"
    )
    parser.add_argument(
        "spec_path",
        type=str,
        help="Path to API specification file (OpenAPI or AsyncAPI)"
    )
    parser.add_argument(
        "language",
        type=str,
        choices=SUPPORTED_LANGUAGES,
        help="Target language for generated models"
    )
    parser.add_argument(
        "--output-dir",
        type=str,
        default="src/models",
        help="Output directory for generated models (default: src/models)"
    )
    parser.add_argument(
        "--package-name",
        type=str,
        help="Package/module name for generated code"
    )

    args = parser.parse_args()

    try:
        # Validate inputs
        validate_path(args.spec_path)

        # Generate models
        result = generate_models(
            spec_path=args.spec_path,
            language=args.language,
            output_dir=args.output_dir,
            package_name=args.package_name
        )

        # Return structured result
        output = {
            "status": "success",
            "data": result
        }
        print(json.dumps(output, indent=2))

    except Exception as e:
        logger.error(f"Model generation failed: {e}")
        print(json.dumps(format_error_response(e), indent=2))
        sys.exit(1)


if __name__ == "__main__":
    main()
53
skills/api.generatemodels/skill.yaml
Normal file
53
skills/api.generatemodels/skill.yaml
Normal file
@@ -0,0 +1,53 @@
name: api.generatemodels
version: 0.1.0
description: Generate type-safe models from OpenAPI and AsyncAPI specifications using Modelina

inputs:
  - name: spec_path
    type: string
    required: true
    description: Path to API specification file (OpenAPI or AsyncAPI)

  - name: language
    type: string
    required: true
    description: Target language (typescript, python, java, go, csharp)

  - name: output_dir
    type: string
    required: false
    default: src/models
    description: Output directory for generated models

  - name: package_name
    type: string
    required: false
    description: Package/module name for generated code

outputs:
  - name: models_path
    type: string
    description: Path to directory containing generated models

  - name: files_generated
    type: array
    description: List of generated model files

  - name: model_count
    type: number
    description: Number of models generated

dependencies:
  - context.schema

entrypoints:
  - command: /skill/api/generate-models
    handler: modelina_generate.py
    runtime: python
    permissions:
      - filesystem:read
      - filesystem:write

status: active

tags: [api, codegen, modelina, openapi, asyncapi, typescript, python, java]
83
skills/api.test/README.md
Normal file
83
skills/api.test/README.md
Normal file
@@ -0,0 +1,83 @@
# api.test

Test REST API endpoints by executing HTTP requests and validating responses against expected outcomes.

## Overview

**Purpose:** Test REST API endpoints by executing HTTP requests and validating responses against expected outcomes

**Command:** `/api/test`

## Usage

### Basic Usage

```bash
python3 skills/api.test/api_test.py
```

### With Arguments

```bash
python3 skills/api.test/api_test.py \
  --api-spec-path "value" \
  --base-url "value" \
  --test-scenarios-path-optional "value" \
  --auth-config-path-optional "value" \
  --output-format json
```

## Inputs

- **api_spec_path**
- **base_url**
- **test_scenarios_path (optional)**
- **auth_config_path (optional)**

## Outputs

- **test_results.json**
- **test_report.html**

## Artifact Metadata

### Produces

- `test-result`
- `test-report`

## Permissions

- `network:http`
- `filesystem:read`
- `filesystem:write`

## Implementation Notes

Support multiple HTTP methods: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS.

Test scenarios should validate:
- Response status codes
- Response headers
- Response body structure and content
- Response time/performance
- Authentication/authorization
- Error handling

Features:
- Load test scenarios from OpenAPI/Swagger specs
- Support various authentication methods (Bearer, Basic, API Key, OAuth2)
- Execute tests in sequence or parallel
- Generate detailed HTML reports with pass/fail visualization
- Support environment variables for configuration
- Retry failed tests with exponential backoff
- Collect performance metrics (response time, throughput)

Output should include:
- Total tests run
- Passed/failed counts
- Individual test results with request/response details
- Performance statistics
- Coverage metrics (% of endpoints tested)

A minimal sketch of such a scenario runner follows.
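The skill body below is still a placeholder, so purely as a rough illustration of the intended behavior, here is a minimal, hypothetical runner: it executes one HTTP scenario against a base URL and checks the status code and latency. The scenario shape and helper name are assumptions, not part of the committed skill; `requests` is assumed available (the skill declares the `network:http` permission).

```python
# Hypothetical mini-runner illustrating the notes above; not part of api_test.py.
import time
import requests  # assumed available in the runtime environment


def run_scenario(base_url: str, scenario: dict) -> dict:
    """Execute one HTTP scenario and validate status code and latency."""
    started = time.monotonic()
    response = requests.request(
        method=scenario.get("method", "GET"),
        url=base_url.rstrip("/") + scenario["path"],
        timeout=scenario.get("timeout", 10),
    )
    elapsed_ms = (time.monotonic() - started) * 1000
    passed = response.status_code == scenario.get("expect_status", 200)
    return {
        "name": scenario.get("name", scenario["path"]),
        "passed": passed,
        "status_code": response.status_code,
        "elapsed_ms": round(elapsed_ms, 1),
    }


# Example scenario:
# run_scenario("https://api.example.com", {"path": "/users", "expect_status": 200})
```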
## Integration

This skill can be used in agents by including it in `skills_available`:

```yaml
name: my.agent
skills_available:
  - api.test
```

## Testing

Run tests with:

```bash
pytest skills/api.test/test_api_test.py -v
```

## Created By

This skill was generated by **meta.skill**, the skill creator meta-agent.

---

*Part of the Betty Framework*
1
skills/api.test/__init__.py
Normal file
1
skills/api.test/__init__.py
Normal file
@@ -0,0 +1 @@
# Auto-generated package initializer for skills.
120
skills/api.test/api_test.py
Executable file
120
skills/api.test/api_test.py
Executable file
@@ -0,0 +1,120 @@
#!/usr/bin/env python3
"""
api.test - Test REST API endpoints by executing HTTP requests and validating
responses against expected outcomes

Generated by meta.skill
"""

import os
import sys
import json
import yaml
from pathlib import Path
from typing import Dict, List, Any, Optional


from betty.config import BASE_DIR
from betty.logging_utils import setup_logger

logger = setup_logger(__name__)


class ApiTest:
    """
    Test REST API endpoints by executing HTTP requests and validating
    responses against expected outcomes
    """

    def __init__(self, base_dir: str = BASE_DIR):
        """Initialize skill"""
        self.base_dir = Path(base_dir)

    def execute(
        self,
        api_spec_path: Optional[str] = None,
        base_url: Optional[str] = None,
        test_scenarios_path_optional: Optional[str] = None,
        auth_config_path_optional: Optional[str] = None,
    ) -> Dict[str, Any]:
        """
        Execute the skill

        Returns:
            Dict with execution results
        """
        try:
            logger.info("Executing api.test...")

            # TODO: Implement skill logic here

            # Implementation notes:
            # Support multiple HTTP methods: GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS.
            # Test scenarios should validate: response status codes, response headers,
            # response body structure and content, response time/performance,
            # authentication/authorization, and error handling.
            # Features: load test scenarios from OpenAPI/Swagger specs; support various
            # authentication methods (Bearer, Basic, API Key, OAuth2); execute tests in
            # sequence or parallel; generate detailed HTML reports with pass/fail
            # visualization; support environment variables for configuration; retry
            # failed tests with exponential backoff; collect performance metrics
            # (response time, throughput).
            # Output should include: total tests run, passed/failed counts, individual
            # test results with request/response details, performance statistics, and
            # coverage metrics (% of endpoints tested).

            # Placeholder implementation
            result = {
                "ok": True,
                "status": "success",
                "message": "Skill executed successfully"
            }

            logger.info("Skill completed successfully")
            return result

        except Exception as e:
            logger.error(f"Error executing skill: {e}")
            return {
                "ok": False,
                "status": "failed",
                "error": str(e)
            }


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="Test REST API endpoints by executing HTTP requests and validating responses against expected outcomes"
    )

    parser.add_argument(
        "--api-spec-path",
        help="api_spec_path"
    )
    parser.add_argument(
        "--base-url",
        help="base_url"
    )
    parser.add_argument(
        "--test-scenarios-path-optional",
        help="test_scenarios_path (optional)"
    )
    parser.add_argument(
        "--auth-config-path-optional",
        help="auth_config_path (optional)"
    )
    parser.add_argument(
        "--output-format",
        choices=["json", "yaml"],
        default="json",
        help="Output format"
    )

    args = parser.parse_args()

    # Create skill instance
    skill = ApiTest()

    # Execute skill
    result = skill.execute(
        api_spec_path=args.api_spec_path,
        base_url=args.base_url,
        test_scenarios_path_optional=args.test_scenarios_path_optional,
        auth_config_path_optional=args.auth_config_path_optional,
    )

    # Output result
    if args.output_format == "json":
        print(json.dumps(result, indent=2))
    else:
        print(yaml.dump(result, default_flow_style=False))

    # Exit with appropriate code
    sys.exit(0 if result.get("ok") else 1)


if __name__ == "__main__":
    main()
27
skills/api.test/skill.yaml
Normal file
27
skills/api.test/skill.yaml
Normal file
@@ -0,0 +1,27 @@
name: api.test
version: 0.1.0
description: Test REST API endpoints by executing HTTP requests and validating responses
  against expected outcomes
inputs:
  - api_spec_path
  - base_url
  - test_scenarios_path (optional)
  - auth_config_path (optional)
outputs:
  - test_results.json
  - test_report.html
status: active
permissions:
  - network:http
  - filesystem:read
  - filesystem:write
entrypoints:
  - command: /api/test
    handler: api_test.py
    runtime: python
    description: Test REST API endpoints by executing HTTP requests and validating responses
      against expected outcomes
artifact_metadata:
  produces:
    - type: test-result
    - type: test-report
Some files were not shown because too many files have changed in this diff