Initial commit

Zhongwei Li
2025-11-29 18:26:08 +08:00
commit 8f22ddf339
295 changed files with 59710 additions and 0 deletions

agents/README.md

@@ -0,0 +1,203 @@
# Betty Framework Agents
## ⚙️ **Integration Note: Claude Code Plugin System**
**Betty agents are Claude Code plugins.** You do not invoke agents via standalone CLI commands (`betty` or direct Python scripts). Instead:
- **Claude Code serves as the execution environment** for all agent invocation
- Each agent is registered through its `agent.yaml` manifest
- Agents become automatically discoverable and executable through Claude Code's natural language interface
- All routing, validation, and execution is handled by Claude Code via MCP (Model Context Protocol)
**No separate installation step is needed** beyond plugin registration in your Claude Code environment.
---
This directory contains agent manifests for the Betty Framework.
## What are Agents?
Agents are intelligent orchestrators that compose skills with reasoning, context awareness, and error recovery. Unlike workflows (which follow fixed sequential steps) or skills (which execute atomic operations), agents can:
- **Reason** about requirements and choose appropriate strategies
- **Iterate** based on feedback and validation results
- **Recover** from errors with intelligent retry logic
- **Adapt** their approach based on context
## Directory Structure
Each agent has its own directory containing:
```
agents/
├── <agent-name>/
│ ├── agent.yaml # Agent manifest (required)
│ ├── README.md # Documentation (auto-generated)
│ └── tests/ # Agent behavior tests (optional)
│ └── test_agent.py
```
## Creating an Agent
### Using meta.agent (Recommended)
**Via Claude Code:**
```
"Use meta.agent to create a my.agent that does [description],
with capabilities [list], using skills [skill.one, skill.two],
and iterative reasoning mode"
```
**Direct execution (development/testing):**
```bash
cat > /tmp/my_agent.md <<'EOF'
# Name: my.agent
# Purpose: What your agent does
# Capabilities: First capability, Second capability
# Skills: skill.one, skill.two
# Reasoning: iterative
EOF
python agents/meta.agent/meta_agent.py /tmp/my_agent.md
```
### Manual Creation
1. Create agent directory:
```bash
mkdir -p agents/my.agent
```
2. Create agent manifest (`agents/my.agent/agent.yaml`):
```yaml
name: my.agent
version: 0.1.0
description: "What your agent does"
capabilities:
- First capability
- Second capability
skills_available:
- skill.one
- skill.two
reasoning_mode: iterative # or oneshot
status: draft
```
3. Validate and register:
**Via Claude Code:**
```
"Use agent.define to validate agents/my.agent/agent.yaml"
```
**Direct execution (development/testing):**
```bash
python skills/agent.define/agent_define.py agents/my.agent/agent.yaml
```
## Agent Manifest Schema
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Unique identifier (e.g., `api.designer`) |
| `version` | string | Semantic version (e.g., `0.1.0`) |
| `description` | string | Human-readable purpose statement |
| `capabilities` | array[string] | List of what the agent can do |
| `skills_available` | array[string] | Skills the agent can orchestrate |
| `reasoning_mode` | enum | `iterative` or `oneshot` |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `status` | enum | `draft`, `active`, `deprecated`, `archived` |
| `context_requirements` | object | Structured context the agent needs |
| `workflow_pattern` | string | Narrative description of reasoning process |
| `example_task` | string | Concrete usage example |
| `error_handling` | object | Retry strategies and failure handling |
| `output` | object | Expected success/failure outputs |
| `tags` | array[string] | Categorization tags |
| `dependencies` | array[string] | Other agents or schemas |
## Reasoning Modes
### Iterative
Agent can retry with feedback, refine based on errors, and improve incrementally.
**Use for:**
- Validation loops (API design with validation feedback)
- Refinement tasks (code optimization)
- Error correction (fixing compilation errors)
**Example:**
```yaml
reasoning_mode: iterative
error_handling:
max_retries: 3
on_validation_failure: "Analyze errors, refine spec, retry"
```
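Conceptually, `reasoning_mode: iterative` plus `error_handling` describe a retry-with-feedback loop. A minimal sketch of that loop (the `execute` and `refine` callables are placeholders for skill execution and refinement, not Betty APIs):
```python
def run_iterative(task, execute, refine, max_retries=3):
    """Retry-with-feedback loop implied by reasoning_mode: iterative."""
    for attempt in range(1, max_retries + 1):
        ok, errors = execute(task)    # e.g. validate the current draft
        if ok:
            return task
        task = refine(task, errors)   # fold validation feedback back in
    raise RuntimeError(f"still failing after {max_retries} attempts")
```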
### Oneshot
Agent executes once without retry.
**Use for:**
- Analysis and reporting (compatibility checks)
- Deterministic transformations (code generation)
- Tasks where retry doesn't help (documentation)
**Example:**
```yaml
reasoning_mode: oneshot
output:
success:
- Analysis report
failure:
- Error details
```
## Example Agents
See the documentation for example agent manifests:
- [API Designer](../docs/agent-schema-reference.md#example-iterative-refinement-agent) - Iterative API design
- [Compliance Checker](../docs/agent-schema-reference.md#example-multi-domain-agent) - Multi-domain compliance
## Validation
All agent manifests are automatically validated for:
- Required fields presence
- Name format (`^[a-z][a-z0-9._-]*$`)
- Version format (semantic versioning)
- Reasoning mode enum (`iterative` or `oneshot`)
- Skill references (all skills must exist in skill registry)
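For illustration, here is a minimal sketch of these checks, assuming PyYAML and a simplified semver pattern; the canonical validator is `skills/agent.define/agent_define.py`:
```python
import re

import yaml  # assumes PyYAML is installed

REQUIRED = ["name", "version", "description", "capabilities",
            "skills_available", "reasoning_mode"]
NAME_RE = re.compile(r"^[a-z][a-z0-9._-]*$")
SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")  # simplified semver check

def check_manifest(path, known_skills):
    """Return a list of validation errors for an agent.yaml file."""
    with open(path) as f:
        manifest = yaml.safe_load(f)
    errors = [f"missing required field: {field}"
              for field in REQUIRED if field not in manifest]
    if "name" in manifest and not NAME_RE.match(str(manifest["name"])):
        errors.append(f"invalid name: {manifest['name']}")
    if "version" in manifest and not SEMVER_RE.match(str(manifest["version"])):
        errors.append(f"invalid version: {manifest['version']}")
    if manifest.get("reasoning_mode") not in ("iterative", "oneshot"):
        errors.append("reasoning_mode must be 'iterative' or 'oneshot'")
    errors += [f"unknown skill: {s}"
               for s in manifest.get("skills_available", [])
               if s not in known_skills]
    return errors
```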
## Registry
Validated agents are registered in `/registry/agents.json`:
```json
{
"registry_version": "1.0.0",
"generated_at": "2025-10-23T00:00:00Z",
"agents": [
{
"name": "api.designer",
"version": "0.1.0",
"description": "Design RESTful APIs...",
"reasoning_mode": "iterative",
"skills_available": ["api.define", "api.validate"],
"status": "draft"
}
]
}
```
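Because the registry is plain JSON, it can be queried directly. A small illustrative example (the filtering shown here is a convenience sketch, not a documented Betty API):
```python
import json

def list_agents(path="registry/agents.json", status=None):
    """Load registry entries, optionally filtered by status."""
    with open(path) as f:
        registry = json.load(f)
    return [agent for agent in registry.get("agents", [])
            if status is None or agent.get("status") == status]

for agent in list_agents(status="draft"):
    print(f"{agent['name']} {agent['version']} ({agent['reasoning_mode']})")
```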
## See Also
- [Agent Schema Reference](../docs/agent-schema-reference.md) - Complete field reference
- [Betty Architecture](../docs/betty-architecture.md) - Five-layer architecture
- [Agent Implementation Plan](../docs/agent-define-implementation-plan.md) - Implementation details

agents/ai.orchestrator/README.md

@@ -0,0 +1,45 @@
# Ai.Orchestrator Agent
Orchestrates AI/ML workflows including model training, evaluation, and deployment
## Purpose
This orchestrator agent coordinates complex AI workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.
## Capabilities
- Coordinate meta-agent creation and composition
- Manage skill and agent generation workflows
- Orchestrate AI-powered automation
- Handle agent compatibility and optimization
- Coordinate marketplace publishing
## Available Skills
- `agent.compose`
- `agent.define`
- `agent.run`
- `generate.docs`
- `generate.marketplace`
- `meta.compatibility`
## Usage
This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
## Status
**Generated**: Auto-generated from taxonomy gap analysis
## Next Steps
- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices

agents/ai.orchestrator/agent.yaml

@@ -0,0 +1,55 @@
name: ai.orchestrator
version: 0.1.0
description: Orchestrates AI/ML workflows including model training, evaluation, and
deployment
capabilities:
- Coordinate meta-agent creation and composition
- Manage skill and agent generation workflows
- Orchestrate AI-powered automation
- Handle agent compatibility and optimization
- Coordinate marketplace publishing
skills_available:
- agent.compose
- agent.define
- agent.run
- generate.docs
- generate.marketplace
- meta.compatibility
reasoning_mode: iterative
tags:
- ai
- orchestration
- meta
- automation
workflow_pattern: |
  1. Analyze incoming request and requirements
  2. Identify relevant AI skills and workflows
  3. Compose multi-step execution plan
  4. Execute skills in coordinated sequence
  5. Validate intermediate results
  6. Handle errors and retry as needed
  7. Return comprehensive results
example_task: "Input: \"Complete ai workflow from start to finish\"\n\nAgent will:\n\
1. Break down the task into stages\n2. Select appropriate skills for each stage\n\
3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n4. Monitor\
\ progress and handle failures\n5. Generate comprehensive reports"
error_handling:
timeout_seconds: 300
retry_strategy: exponential_backoff
max_retries: 3
output:
success:
- AI workflow results
- Execution logs and metrics
- Validation reports
- Generated artifacts
failure:
- Error details and stack traces
- Partial results (if available)
- Remediation suggestions
status: generated

agents/api.analyzer/README.md

@@ -0,0 +1,302 @@
# api.analyzer Agent
## Purpose
**api.analyzer** is a specialized agent that analyzes API specifications for backward compatibility and breaking changes between versions.
This agent provides detailed compatibility reports, identifies breaking vs non-breaking changes, and suggests migration paths for consumers when breaking changes are unavoidable.
## Behavior
- **Reasoning Mode**: `oneshot`. The agent executes once without retries, as compatibility analysis is deterministic.
- **Capabilities**:
- Detect breaking changes between API versions
- Generate detailed compatibility reports
- Identify removed or modified endpoints
- Suggest migration paths for breaking changes
- Validate API evolution best practices
## Skills Used
The agent has access to the following skills:
| Skill | Purpose |
|-------|---------|
| `api.compatibility` | Compares two API spec versions and detects breaking changes |
| `api.validate` | Validates individual specs for well-formedness |
## Workflow Pattern
The agent follows this straightforward pattern:
```
1. Load old and new API specifications
2. Run comprehensive compatibility analysis
3. Categorize changes as breaking or non-breaking
4. Generate detailed report with migration recommendations
5. Return results (no retry needed)
```
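At its core, step 3 is a set comparison over the operations defined in the two specs. A minimal sketch of that idea, assuming the specs are already parsed into dictionaries (the full comparison, including schema and parameter diffs, lives in the `api.compatibility` skill):
```python
def diff_endpoints(old_spec: dict, new_spec: dict) -> dict:
    """Classify endpoint additions and removals between two parsed OpenAPI specs."""
    METHODS = {"get", "post", "put", "patch", "delete"}

    def operations(spec):
        return {(method.upper(), path)
                for path, item in spec.get("paths", {}).items()
                for method in item if method in METHODS}

    old_ops, new_ops = operations(old_spec), operations(new_spec)
    return {
        "breaking": sorted(old_ops - new_ops),      # removed endpoints break clients
        "non_breaking": sorted(new_ops - old_ops),  # pure additions are safe
    }
```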
## Manifest Fields (Quick Reference)
```yaml
name: api.analyzer
version: 0.1.0
reasoning_mode: oneshot
skills_available:
- api.compatibility
- api.validate
status: draft
```
## Usage
This agent is invoked through a command or workflow:
### Via Slash Command
```bash
# Assuming /api-compatibility command is registered to use this agent
/api-compatibility specs/user-service-v1.yaml specs/user-service-v2.yaml
```
### Via Workflow
Include the agent in a workflow YAML:
```yaml
# workflows/check_api_compatibility.yaml
steps:
- agent: api.analyzer
input:
old_spec_path: "specs/user-service-v1.0.0.yaml"
new_spec_path: "specs/user-service-v2.0.0.yaml"
fail_on_breaking: true
```
## Context Requirements
The agent expects the following context:
| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `old_spec_path` | string | Path to the previous/old API specification | `"specs/api-v1.0.0.yaml"` |
| `new_spec_path` | string | Path to the new/updated API specification | `"specs/api-v2.0.0.yaml"` |
| `fail_on_breaking` | boolean | Whether to fail (exit non-zero) if breaking changes detected | `true` or `false` |
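The `fail_on_breaking` flag is what makes this agent usable as a CI gate: a report containing breaking changes should map to a non-zero exit code. A hedged sketch of that mapping (the report shape follows the example below; the exact integration is up to the invoking workflow):
```python
import sys

def exit_code_for(report: dict, fail_on_breaking: bool) -> int:
    """Translate a compatibility report into a process exit code."""
    if fail_on_breaking and report.get("breaking_changes"):
        return 1  # non-zero fails the CI pipeline
    return 0

report = {"compatible": False,
          "breaking_changes": [{"type": "endpoint_removed"}]}
sys.exit(exit_code_for(report, fail_on_breaking=True))
```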
## Example Task
**Input**:
```
"Compare user-service v1.0.0 with v2.0.0 for breaking changes"
```
**Agent execution**:
1. **Load both specifications**:
- Old: `specs/user-service-v1.0.0.openapi.yaml`
- New: `specs/user-service-v2.0.0.openapi.yaml`
2. **Analyze endpoint changes**:
   - ✅ **Added**: `GET /users/{id}/preferences` (non-breaking)
   - ❌ **Removed**: `DELETE /users/{id}/avatar` (breaking)
- ⚠️ **Modified**: `POST /users` now requires additional field `email_verified` (breaking)
3. **Check for breaking schema changes**:
- ❌ Removed property: `User.phoneNumber` (breaking)
- ✅ Added optional property: `User.preferences` (non-breaking)
- ❌ Changed property type: `User.age` from `integer` to `string` (breaking)
4. **Identify parameter or response format changes**:
- ❌ Query parameter `filter` changed from optional to required in `GET /users` (breaking)
- ✅ Response includes new optional field `User.last_login` (non-breaking)
5. **Generate compatibility report**:
```json
{
"compatible": false,
"breaking_changes": [
{
"type": "endpoint_removed",
"endpoint": "DELETE /users/{id}/avatar",
"severity": "high",
"migration": "Use PUT /users/{id} with avatar=null instead"
},
{
"type": "required_field_added",
"location": "POST /users request body",
"field": "email_verified",
"severity": "high",
"migration": "Clients must now provide email_verified field"
},
{
"type": "property_removed",
"schema": "User",
"property": "phoneNumber",
"severity": "medium",
"migration": "Use new phone_contacts array instead"
},
{
"type": "type_changed",
"schema": "User",
"property": "age",
"old_type": "integer",
"new_type": "string",
"severity": "high",
"migration": "Convert age to string format"
}
],
"non_breaking_changes": [
{
"type": "endpoint_added",
"endpoint": "GET /users/{id}/preferences"
},
{
"type": "optional_field_added",
"schema": "User",
"property": "preferences"
},
{
"type": "response_field_added",
"endpoint": "GET /users/{id}",
"field": "last_login"
}
],
"change_summary": {
"breaking": 4,
"additions": 2,
"modifications": 2,
"removals": 2
}
}
```
6. **Provide migration recommendations**:
```markdown
## Migration Guide: v1.0.0 → v2.0.0
### Breaking Changes
1. **Removed endpoint: DELETE /users/{id}/avatar**
- **Impact**: High - Clients using this endpoint will fail
- **Migration**: Use PUT /users/{id} with avatar=null instead
- **Effort**: Low
2. **New required field: email_verified in POST /users**
- **Impact**: High - All user creation requests must include this field
- **Migration**: Update client code to provide email_verified boolean
- **Effort**: Medium
3. **Property removed: User.phoneNumber**
- **Impact**: Medium - Clients reading this field will get undefined
- **Migration**: Use User.phone_contacts array instead
- **Effort**: Medium
4. **Type changed: User.age (integer → string)**
- **Impact**: High - Type mismatch will cause deserialization errors
- **Migration**: Update models to use string type and convert existing data
- **Effort**: High
### Recommended Approach
1. Implement migration layer for 2 versions
2. Communicate breaking changes to consumers 30 days in advance
3. Provide backward-compatible endpoints during transition period
4. Monitor usage of deprecated endpoints
```
## Error Handling
| Scenario | Timeout | Behavior |
|----------|---------|----------|
| Spec load failure | N/A | Return error with file path details |
| Comparison failure | N/A | Return partial analysis with error context |
| Timeout | 120 seconds | Fails after 2 minutes |
### On Success
```json
{
"status": "success",
"outputs": {
"compatibility_report": {
"compatible": false,
"breaking_changes": [...],
"non_breaking_changes": [...],
"change_summary": {...}
},
"migration_recommendations": "...",
"api_diff_visualization": "..."
}
}
```
### On Failure
```json
{
"status": "failed",
"error_details": {
"error": "Failed to load old spec",
"file_path": "specs/user-service-v1.0.0.yaml",
"details": "File not found"
},
"partial_analysis": null,
"suggested_fixes": [
"Verify file path exists",
"Check file permissions"
]
}
```
## Use Cases
### 1. Pre-Release Validation
Run before releasing a new API version to ensure backward compatibility:
```yaml
# workflows/validate_release.yaml
steps:
- agent: api.analyzer
input:
old_spec_path: "specs/production/api-v1.yaml"
new_spec_path: "specs/staging/api-v2.yaml"
fail_on_breaking: true
```
### 2. Continuous Integration
Integrate into CI/CD to prevent accidental breaking changes:
```yaml
# .github/workflows/api-check.yml
- name: Check API Compatibility
run: |
# Agent runs via workflow.compose
python skills/workflow.compose/workflow_compose.py \
workflows/check_api_compatibility.yaml
```
### 3. Documentation Generation
Generate migration guides automatically:
```bash
# Use agent output to create migration documentation
/api-compatibility old.yaml new.yaml > migration-guide.md
```
## Status
**Draft**: This agent is under development and not yet marked active in the registry.
## Related Documentation
- [Agents Overview](../../docs/betty-architecture.md#layer-2-agents-reasoning-layer) - Understanding agents in Betty's architecture
- [Agent Schema Reference](../../docs/agent-schema-reference.md) - Agent manifest fields and structure
- [api.compatibility SKILL.md](../../skills/api.compatibility/SKILL.md) - Underlying compatibility check skill
- [API-Driven Development](../../docs/api-driven-development.md) - Full API workflow including compatibility checks
## Version History
- **0.1.0** (Oct 2025) - Initial draft implementation with oneshot analysis pattern

agents/api.analyzer/agent.yaml

@@ -0,0 +1,65 @@
name: api.analyzer
version: 0.1.0
description: "Analyze API specifications for backward compatibility and breaking changes"
capabilities:
- Detect breaking changes between API versions
- Generate detailed compatibility reports
- Identify removed or modified endpoints
- Suggest migration paths for breaking changes
- Validate API evolution best practices
skills_available:
- api.compatibility
- api.validate
reasoning_mode: oneshot
context_requirements:
old_spec_path: string
new_spec_path: string
fail_on_breaking: boolean
workflow_pattern: |
1. Load old and new API specifications
2. Run comprehensive compatibility analysis
3. Categorize changes as breaking or non-breaking
4. Generate detailed report with migration recommendations
5. Return results (no retry needed)
example_task: |
Input: "Compare user-service v1.0.0 with v2.0.0 for breaking changes"
Agent will:
1. Load both specifications
2. Analyze endpoint changes (additions, removals, modifications)
3. Check for breaking schema changes
4. Identify parameter or response format changes
5. Generate compatibility report
6. Provide migration recommendations
error_handling:
timeout_seconds: 120
on_spec_load_failure: "Return error with file path details"
on_comparison_failure: "Return partial analysis with error context"
output:
success:
- Compatibility report (JSON)
- Breaking changes list
- Non-breaking changes list
- Migration recommendations
- API diff visualization
failure:
- Error details
- Partial analysis (if available)
- Suggested fixes
status: draft
tags:
- api
- analysis
- compatibility
- versioning
- oneshot

agents/api.architect/README.md

@@ -0,0 +1,50 @@
# Api.Architect Agent
## Purpose
An agent that designs comprehensive REST APIs and validates them against best practices. Takes API requirements as input and produces validated OpenAPI specifications with generated data models ready for implementation.
## Skills
This agent uses the following skills:
- `workflow.validate`
- `api.validate`
- `api.define`
## Artifact Flow
### Consumes
- `API requirements`
- `Domain constraints and business rules`
### Produces
- `openapi-spec`
- `api-models`
- `validation-report`
## Example Use Cases
- Design a RESTful API for an e-commerce platform with products, orders, and customers
- Create an API for a task management system with projects, tasks, and user assignments
- Design a multi-tenant SaaS API with proper authentication and authorization
## Usage
```bash
# Activate the agent
/agent api.architect
# Or invoke directly
betty agent run api.architect --input <path>
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*

agents/api.architect/agent.yaml

@@ -0,0 +1,36 @@
name: api.architect
version: 0.1.0
description: An agent that designs comprehensive REST APIs and validates them against
best practices. Takes API requirements as input and produces validated OpenAPI specifications
with generated data models ready for implementation.
status: draft
reasoning_mode: iterative
capabilities:
- Translate API requirements into detailed OpenAPI specifications
- Validate API designs against organizational standards and linting rules
- Generate reference data models to accelerate implementation
skills_available:
- workflow.validate
- api.validate
- api.define
permissions: []
artifact_metadata:
consumes:
- type: API requirements
description: Input artifact of type API requirements
- type: Domain constraints and business rules
description: Input artifact of type Domain constraints and business rules
produces:
- type: openapi-spec
schema: schemas/openapi-spec.json
file_pattern: '*.openapi.yaml'
content_type: application/yaml
description: OpenAPI 3.0+ specification
- type: api-models
file_pattern: '*.{py,ts,go}'
description: Generated API data models
- type: validation-report
schema: schemas/validation-report.json
file_pattern: '*.validation.json'
content_type: application/json
description: Structured validation results

agents/api.designer/README.md

@@ -0,0 +1,217 @@
# api.designer Agent
## Purpose
**api.designer** is an intelligent agent that orchestrates the API design process from natural language requirements to validated, production-ready OpenAPI specifications with generated models.
This agent uses iterative refinement to create APIs that comply with enterprise guidelines (Zalando by default), automatically fixing validation errors and ensuring best practices.
## Behavior
- **Reasoning Mode**: `iterative`. The agent retries on validation failures, refining the spec until it passes all checks.
- **Capabilities**:
- Design RESTful APIs from natural language requirements
- Apply Zalando guidelines automatically (or other guideline sets)
- Generate OpenAPI 3.1 specs with best practices
- Iteratively refine based on validation feedback
- Handle AsyncAPI for event-driven architectures
## Skills Used
The agent has access to the following skills and uses them in sequence:
| Skill | Purpose |
|-------|---------|
| `api.define` | Scaffolds initial OpenAPI spec from service name and requirements |
| `api.validate` | Validates spec against enterprise guidelines (Zalando, Google, Microsoft) |
| `api.generate-models` | Generates type-safe models in target languages (TypeScript, Python, etc.) |
| `api.compatibility` | Checks for breaking changes when updating existing APIs |
## Workflow Pattern
The agent follows this iterative pattern:
```
1. Analyze requirements and domain context
2. Draft OpenAPI spec following guidelines
3. Run validation (api.validate)
4. If validation fails:
- Analyze errors
- Refine spec
- Re-validate
- Repeat until passing (max 3 retries)
5. Generate models for target languages
6. Verify generated models compile
```
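In terms of the skill scripts shown later in this README, the validate-refine loop might be driven like this (a sketch: it assumes `api_validate.py` exits non-zero on validation failure, and the actual refinement happens in the agent's reasoning step):
```python
import subprocess
import sys

SPEC = "specs/user-service.yaml"
MAX_RETRIES = 3

for attempt in range(1, MAX_RETRIES + 1):
    result = subprocess.run(
        [sys.executable, "skills/api.validate/api_validate.py", SPEC, "zalando"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print(f"spec valid after {attempt} attempt(s)")
        break
    print(f"attempt {attempt} failed:\n{result.stdout or result.stderr}")
    # The agent would analyze these errors and refine the spec here.
else:
    raise SystemExit(f"spec still invalid after {MAX_RETRIES} attempts")
```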
## Manifest Fields (Quick Reference)
```yaml
name: api.designer
version: 0.1.0
reasoning_mode: iterative
skills_available:
- api.define
- api.validate
- api.generate-models
- api.compatibility
status: draft
```
## Usage
This agent is invoked through a command or workflow:
### Via Slash Command
```bash
# Assuming /api-design command is registered to use this agent
/api-design user-service
```
The command passes the service name to the agent, which then:
1. Uses `api.define` to create initial spec
2. Validates with `api.validate`
3. Fixes any validation errors iteratively
4. Generates models with `api.generate-models`
### Via Workflow
Include the agent in a workflow YAML:
```yaml
# workflows/design_api.yaml
steps:
- agent: api.designer
input:
service_name: "user-service"
guidelines: "zalando"
languages: ["typescript", "python"]
```
## Context Requirements
The agent expects the following context:
| Field | Type | Description | Example |
|-------|------|-------------|---------|
| `guidelines` | string | Which API guidelines to follow | `"zalando"`, `"google"`, `"microsoft"` |
| `domain` | string | Business domain for API design | `"user-management"`, `"e-commerce"` |
| `existing_apis` | list | Related APIs to maintain consistency with | `["auth-service", "notification-service"]` |
| `strict_mode` | boolean | Whether to treat warnings as errors | `true` or `false` |
## Example Task
**Input**:
```
"Create API for user management with CRUD operations,
authentication via JWT, and email verification workflow"
```
**Agent execution**:
1. **Draft OpenAPI spec** with proper resource paths:
- `POST /users` - Create user
- `GET /users/{id}` - Get user
- `PUT /users/{id}` - Update user
- `DELETE /users/{id}` - Delete user
- `POST /users/{id}/verify-email` - Email verification
2. **Apply Zalando guidelines**:
- Use snake_case for property names
- Include problem JSON error responses
- Add required headers (X-Request-ID, etc.)
- Define proper HTTP status codes
3. **Validate spec** using `api.validate`:
```bash
python skills/api.validate/api_validate.py specs/user-service.yaml zalando
```
4. **Fix validation issues** (if any):
- Missing required headers → Add to spec
- Incorrect naming → Convert to snake_case
- Missing error schemas → Add problem JSON schemas
5. **Generate models** using Modelina:
```bash
python skills/api.generate-models/modelina_generate.py \
specs/user-service.yaml typescript src/models/typescript
python skills/api.generate-models/modelina_generate.py \
specs/user-service.yaml python src/models/python
```
6. **Verify models compile**:
- TypeScript: `tsc --noEmit`
- Python: `mypy --strict`
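A minimal sketch of step 6, assuming `tsc` and `mypy` are on the PATH and that the generated TypeScript directory contains a `tsconfig.json` (both assumptions; the agent's actual verification may differ):
```python
import subprocess

def models_compile(ts_project="src/models/typescript",
                   py_dir="src/models/python") -> bool:
    """Run the compile checks from step 6; paths are illustrative."""
    checks = [
        ["tsc", "--noEmit", "-p", ts_project],  # needs a tsconfig.json in ts_project
        ["mypy", "--strict", py_dir],
    ]
    return all(subprocess.run(cmd).returncode == 0 for cmd in checks)
```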
## Error Handling
| Scenario | Max Retries | Behavior |
|----------|-------------|----------|
| Validation failure | 3 | Analyze errors, refine spec, retry |
| Model generation failure | 3 | Try alternative Modelina configurations |
| Compilation failure | 3 | Adjust spec to fix type issues |
| Timeout | N/A | Fails after 300 seconds (5 minutes) |
### On Success
```json
{
"status": "success",
"outputs": {
"spec_path": "specs/user-service.openapi.yaml",
"validation_report": {
"valid": true,
"errors": [],
"warnings": []
},
"generated_models": {
"typescript": ["src/models/typescript/User.ts", "src/models/typescript/UserResponse.ts"],
"python": ["src/models/python/user.py", "src/models/python/user_response.py"]
},
"dependency_graph": "..."
}
}
```
### On Failure
```json
{
"status": "failed",
"error_analysis": {
"step": "validation",
"attempts": 3,
"last_error": "Missing required header: X-Request-ID in all endpoints"
},
"partial_spec": "specs/user-service.openapi.yaml",
"suggested_fixes": [
"Add X-Request-ID header to all operations",
"See Zalando guidelines: https://..."
]
}
```
## Status
**Draft**: This agent is under development and not yet marked active in the registry. Current goals for the next version:
- [ ] Improve prompt engineering for better initial API designs
- [ ] Add more robust error handling for iterative loops
- [ ] Support for more guideline sets (Google, Microsoft)
- [ ] Better context injection from existing APIs
- [ ] Automatic testing of generated models
## Related Documentation
- [Agents Overview](../../docs/betty-architecture.md#layer-2-agents-reasoning-layer) - Understanding agents in Betty's architecture
- [Agent Schema Reference](../../docs/agent-schema-reference.md) - Agent manifest fields and structure
- [API-Driven Development](../../docs/api-driven-development.md) - Full API design workflow using Betty
- [api.define SKILL.md](../../skills/api.define/SKILL.md) - Skill for creating API specs
- [api.validate SKILL.md](../../skills/api.validate/SKILL.md) - Skill for validating specs
- [api.generate-models SKILL.md](../../skills/api.generate-models/SKILL.md) - Skill for generating models
## Version History
- **0.1.0** (Oct 2025) - Initial draft implementation with iterative refinement pattern

agents/api.designer/agent.yaml

@@ -0,0 +1,78 @@
name: api.designer
version: 0.1.0
description: "Design RESTful APIs following enterprise guidelines with iterative refinement"
capabilities:
- Design RESTful APIs from natural language requirements
- Apply Zalando guidelines automatically
- Generate OpenAPI 3.1 specs with best practices
- Iteratively refine based on validation feedback
- Handle AsyncAPI for event-driven architectures
skills_available:
- api.define
- api.validate
- api.generate-models
- api.compatibility
reasoning_mode: iterative
context_requirements:
guidelines: string
domain: string
existing_apis: list
strict_mode: boolean
workflow_pattern: |
1. Analyze requirements and domain context
2. Draft OpenAPI spec following guidelines
3. Run validation (api.validate)
4. If validation fails:
- Analyze errors
- Refine spec
- Re-validate
- Repeat until passing
5. Generate models for target languages
6. Verify generated models compile
example_task: |
Input: "Create API for user management with CRUD operations,
authentication via JWT, and email verification workflow"
Agent will:
1. Draft OpenAPI spec with proper resource paths (/users, /users/{id})
2. Apply Zalando guidelines (snake_case, problem JSON, etc.)
3. Validate spec against Zally rules
4. Fix issues (e.g., add required headers, fix naming)
5. Generate TypeScript and Python models via Modelina
6. Verify models compile in sample projects
error_handling:
max_retries: 3
on_validation_failure: "Analyze errors, refine spec, retry"
on_generation_failure: "Try alternative Modelina configurations"
on_compilation_failure: "Adjust spec to fix type issues"
timeout_seconds: 300
output:
success:
- OpenAPI spec (validated)
- Generated models (compiled)
- Validation report
- Dependency graph
failure:
- Error analysis
- Partial spec
- Suggested fixes
status: draft
tags:
- api
- design
- openapi
- zalando
- iterative
dependencies:
- context.schema

agents/api.orchestrator/README.md

@@ -0,0 +1,44 @@
# Api.Orchestrator Agent
Orchestrates complete API lifecycle from design through testing and deployment
## Purpose
This orchestrator agent coordinates complex API workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.
## Capabilities
- Coordinate API design, validation, and compatibility checking
- Manage API generation and model creation workflows
- Orchestrate testing and quality assurance
- Handle API versioning and documentation
- Coordinate deployment and publishing
## Available Skills
- `api.define`
- `api.validate`
- `api.compatibility`
- `api.generate-models`
- `api.test`
## Usage
This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
## Status
**Generated**: Auto-generated from taxonomy gap analysis
## Next Steps
- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices

agents/api.orchestrator/agent.yaml

@@ -0,0 +1,53 @@
name: api.orchestrator
version: 0.1.0
description: Orchestrates complete API lifecycle from design through testing and deployment
capabilities:
- Coordinate API design, validation, and compatibility checking
- Manage API generation and model creation workflows
- Orchestrate testing and quality assurance
- Handle API versioning and documentation
- Coordinate deployment and publishing
skills_available:
- api.define
- api.validate
- api.compatibility
- api.generate-models
- api.test
reasoning_mode: iterative
tags:
- api
- orchestration
- workflow
- lifecycle
workflow_pattern: |
  1. Analyze incoming request and requirements
  2. Identify relevant API skills and workflows
  3. Compose multi-step execution plan
  4. Execute skills in coordinated sequence
  5. Validate intermediate results
  6. Handle errors and retry as needed
  7. Return comprehensive results
example_task: "Input: \"Complete api workflow from start to finish\"\n\nAgent will:\n\
1. Break down the task into stages\n2. Select appropriate skills for each stage\n\
3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n4. Monitor\
\ progress and handle failures\n5. Generate comprehensive reports"
error_handling:
timeout_seconds: 300
retry_strategy: exponential_backoff
max_retries: 3
output:
success:
- API workflow results
- Execution logs and metrics
- Validation reports
- Generated artifacts
failure:
- Error details and stack traces
- Partial results (if available)
- Remediation suggestions
status: generated

agents/code.reviewer/README.md

@@ -0,0 +1,48 @@
# Code.Reviewer Agent
## Purpose
Analyzes code changes and provides comprehensive feedback on code quality, security vulnerabilities, performance issues, and adherence to best practices.
## Skills
This agent uses the following skills:
- `code.format`
- `test.workflow.integration`
- `policy.enforce`
## Artifact Flow
### Consumes
- `code-diff`
- `coding-standards`
### Produces
- `review-report`
- `suggestion-list`
- `static-analysis`
- `security-scan`
- `style-check`
- `List of issues found with line numbers`
- `Severity and category for each issue`
- `Suggested fixes with code examples`
- `Overall code quality score`
- `Compliance status with coding standards`
## Usage
```bash
# Activate the agent
/agent code.reviewer
# Or invoke directly
betty agent run code.reviewer --input <path>
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*

agents/code.reviewer/agent.yaml

@@ -0,0 +1,42 @@
name: code.reviewer
version: 0.1.0
description: Analyzes code changes and provides comprehensive feedback on code quality,
security vulnerabilities, performance issues, and adherence to best practices.
status: draft
reasoning_mode: iterative
capabilities:
- Review diffs for quality, security, and maintainability concerns
- Generate prioritized issue lists with remediation guidance
- Summarize overall code health and compliance with standards
skills_available:
- code.format
- test.workflow.integration
- policy.enforce
permissions: []
artifact_metadata:
consumes:
- type: code-diff
description: Input artifact of type code-diff
- type: coding-standards
description: Input artifact of type coding-standards
produces:
- type: review-report
description: Output artifact of type review-report
- type: suggestion-list
description: Output artifact of type suggestion-list
- type: static-analysis
description: Output artifact of type static-analysis
- type: security-scan
description: Output artifact of type security-scan
- type: style-check
description: Output artifact of type style-check
- type: List of issues found with line numbers
description: Output artifact of type List of issues found with line numbers
- type: Severity and category for each issue
description: Output artifact of type Severity and category for each issue
- type: Suggested fixes with code examples
description: Output artifact of type Suggested fixes with code examples
- type: Overall code quality score
description: Output artifact of type Overall code quality score
- type: Compliance status with coding standards
description: Output artifact of type Compliance status with coding standards

agents/data.architect/README.md

@@ -0,0 +1,70 @@
# Data.Architect Agent
## Purpose
Create comprehensive data architecture and governance artifacts including data models, schema definitions, data flow diagrams, data dictionaries, data governance policies, and data quality frameworks. Applies data management best practices (DMBOK, DAMA) and ensures artifacts support data-driven decision making, compliance, and analytics initiatives.
## Skills
This agent uses the following skills:
- `artifact.create`
- `artifact.validate`
- `artifact.review`
## Artifact Flow
### Consumes
- `Business requirements or use cases`
- `Data sources and systems`
- `Data domains or subject areas`
- `Compliance requirements`
- `Data quality expectations`
- `Analytics or reporting needs`
### Produces
- `data-model: Logical and physical data models with entities, relationships, and attributes`
- `schema-definition: Database schemas with tables, columns, constraints, and indexes`
- `data-flow-diagram: Data flow between systems with transformations and quality checks`
- `data-dictionary: Comprehensive data dictionary with business definitions`
- `data-governance-policy: Data governance framework with roles, policies, and procedures`
- `data-quality-framework: Data quality measurement and monitoring framework`
- `master-data-management-plan: MDM strategy for critical data domains`
- `data-lineage-diagram: End-to-end data lineage with source-to-target mappings`
- `data-catalog: Enterprise data catalog with metadata and discovery`
## Example Use Cases
- Entities: Customer, Account, Contact, Interaction, Order, SupportTicket, Product
- Relationships and cardinality
- Attributes with data types and constraints
- Integration patterns for source systems
- Master data management approach
- Data quality rules
- Data governance organization and roles (CDO, data stewards, owners)
- Data classification and handling policies
- Data quality standards and SLAs
- Metadata management standards
- GDPR compliance procedures (consent, right to erasure)
- SOX data retention and audit requirements
- Data access control policies
- data-flow-diagram.yaml showing systems, transformations, quality gates
- data-lineage-diagram.yaml with source-to-target mappings
- data-quality-framework.yaml with validation rules and monitoring
## Usage
```bash
# Activate the agent
/agent data.architect
# Or invoke directly
betty agent run data.architect --input <path>
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*

agents/data.architect/agent.yaml

@@ -0,0 +1,66 @@
name: data.architect
version: 0.1.0
description: Create comprehensive data architecture and governance artifacts including
data models, schema definitions, data flow diagrams, data dictionaries, data governance
policies, and data quality frameworks. Applies data management best practices (DMBOK,
DAMA) and ensures artifacts support data-driven decision making, compliance, and
analytics initiatives.
status: draft
reasoning_mode: iterative
capabilities:
- Design logical and physical data architectures to support analytics strategies
- Define governance policies and quality controls for critical data assets
- Produce documentation that aligns stakeholders on data flows and ownership
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: Business requirements or use cases
description: Input artifact of type Business requirements or use cases
- type: Data sources and systems
description: Input artifact of type Data sources and systems
- type: Data domains or subject areas
description: Input artifact of type Data domains or subject areas
- type: Compliance requirements
description: Input artifact of type Compliance requirements
- type: Data quality expectations
description: Input artifact of type Data quality expectations
- type: Analytics or reporting needs
description: Input artifact of type Analytics or reporting needs
produces:
- type: data-model
  description: Logical and physical data models with entities, relationships, and attributes
- type: schema-definition
  description: Database schemas with tables, columns, constraints, and indexes
- type: data-flow-diagram
  description: Data flow between systems with transformations and quality checks
- type: data-dictionary
  description: Comprehensive data dictionary with business definitions
- type: data-governance-policy
  description: Data governance framework with roles, policies, and procedures
- type: data-quality-framework
  description: Data quality measurement and monitoring framework
- type: master-data-management-plan
  description: MDM strategy for critical data domains
- type: data-lineage-diagram
  description: End-to-end data lineage with source-to-target mappings
- type: data-catalog
  description: Enterprise data catalog with metadata and discovery

agents/data.orchestrator/README.md

@@ -0,0 +1,42 @@
# Data.Orchestrator Agent
Orchestrates data workflows including transformation, validation, and quality assurance
## Purpose
This orchestrator agent coordinates complex data workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.
## Capabilities
- Coordinate data transformation pipelines
- Manage data validation and quality checks
- Orchestrate data migration workflows
- Handle data governance and compliance
- Coordinate analytics and reporting
## Available Skills
- `data.transform`
- `workflow.validate`
- `workflow.compose`
## Usage
This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
## Status
**Generated**: Auto-generated from taxonomy gap analysis
## Next Steps
- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices

agents/data.orchestrator/agent.yaml

@@ -0,0 +1,52 @@
name: data.orchestrator
version: 0.1.0
description: Orchestrates data workflows including transformation, validation, and
quality assurance
capabilities:
- Coordinate data transformation pipelines
- Manage data validation and quality checks
- Orchestrate data migration workflows
- Handle data governance and compliance
- Coordinate analytics and reporting
skills_available:
- data.transform
- workflow.validate
- workflow.compose
reasoning_mode: iterative
tags:
- data
- orchestration
- workflow
- etl
workflow_pattern: |
  1. Analyze incoming request and requirements
  2. Identify relevant data skills and workflows
  3. Compose multi-step execution plan
  4. Execute skills in coordinated sequence
  5. Validate intermediate results
  6. Handle errors and retry as needed
  7. Return comprehensive results
example_task: "Input: \"Complete data workflow from start to finish\"\n\nAgent will:\n\
1. Break down the task into stages\n2. Select appropriate skills for each stage\n\
3. Execute create \u2192 validate \u2192 review \u2192 publish lifecycle\n4. Monitor\
\ progress and handle failures\n5. Generate comprehensive reports"
error_handling:
timeout_seconds: 300
retry_strategy: exponential_backoff
max_retries: 3
output:
success:
- Data workflow results
- Execution logs and metrics
- Validation reports
- Generated artifacts
failure:
- Error details and stack traces
- Partial results (if available)
- Remediation suggestions
status: generated

agents/data.validator/README.md

@@ -0,0 +1,55 @@
# Data.Validator Agent
## Purpose
Validates data files against schemas, business rules, and data quality standards. Ensures data integrity, completeness, and compliance.
## Skills
This agent uses the following skills:
- `workflow.validate`
- `api.validate`
## Artifact Flow
### Consumes
- `data-file`
- `schema-definition`
- `validation-rules`
### Produces
- `validation-report`
- `data-quality-metrics`
- `data.validatejson`
- `schema.validate`
- `data.profile`
- `Structural: Schema and format validation`
- `Semantic: Business rule validation`
- `Statistical: Data quality profiling`
- `Validation status`
- `List of violations with severity`
- `Data quality score`
- `Statistics`
- `Recommendations for fixing issues`
- `Compliance status with standards`
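As an illustration of the structural layer, here is a minimal sketch using the `jsonschema` package (an assumption for illustration; the agent's actual checks run through the `workflow.validate` and `api.validate` skills):
```python
import json

from jsonschema import Draft7Validator  # assumes the jsonschema package

def structural_check(data_path: str, schema_path: str):
    """Return schema violations as (location, message) pairs."""
    with open(data_path) as f:
        data = json.load(f)
    with open(schema_path) as f:
        schema = json.load(f)
    return [("/".join(str(p) for p in error.absolute_path), error.message)
            for error in Draft7Validator(schema).iter_errors(data)]
```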
## Usage
```bash
# Activate the agent
/agent data.validator
# Or invoke directly
betty agent run data.validator --input <path>
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*

agents/data.validator/agent.yaml

@@ -0,0 +1,54 @@
name: data.validator
version: 0.1.0
description: Validates data files against schemas, business rules, and data quality
standards. Ensures data integrity, completeness, and compliance.
status: draft
reasoning_mode: iterative
capabilities:
- Validate datasets against structural and semantic rules
- Generate detailed issue reports with remediation recommendations
- Track quality metrics and highlight compliance gaps
skills_available:
- workflow.validate
- api.validate
permissions: []
artifact_metadata:
consumes:
- type: data-file
description: Input artifact of type data-file
- type: schema-definition
description: Input artifact of type schema-definition
- type: validation-rules
description: Input artifact of type validation-rules
produces:
- type: validation-report
schema: schemas/validation-report.json
file_pattern: '*.validation.json'
content_type: application/json
description: Structured validation results
- type: data-quality-metrics
description: Output artifact of type data-quality-metrics
- type: data.validatejson
description: Output artifact of type data.validatejson
- type: schema.validate
description: Output artifact of type schema.validate
- type: data.profile
description: Output artifact of type data.profile
- type: 'Structural: Schema and format validation'
description: 'Output artifact of type Structural: Schema and format validation'
- type: 'Semantic: Business rule validation'
description: 'Output artifact of type Semantic: Business rule validation'
- type: 'Statistical: Data quality profiling'
description: 'Output artifact of type Statistical: Data quality profiling'
- type: Validation status
description: Output artifact of type Validation status
- type: List of violations with severity
description: Output artifact of type List of violations with severity
- type: Data quality score
description: Output artifact of type Data quality score
- type: Statistics
description: Output artifact of type Statistics
- type: Recommendations for fixing issues
description: Output artifact of type Recommendations for fixing issues
- type: Compliance status with standards
description: Output artifact of type Compliance status with standards

agents/deployment.engineer/README.md

@@ -0,0 +1,79 @@
# Deployment.Engineer Agent
## Purpose
Create comprehensive deployment and release artifacts including deployment plans, CI/CD pipelines, release checklists, rollback procedures, runbooks, and infrastructure-as-code configurations. Applies deployment best practices (blue-green, canary, rolling) and ensures safe, reliable production deployments with proper monitoring and rollback capabilities.
## Skills
This agent uses the following skills:
- `artifact.create`
- `artifact.validate`
- `artifact.review`
## Artifact Flow
### Consumes
- `Application or service description`
- `Infrastructure and environment details`
- `Deployment requirements`
- `Release scope and components`
- `Monitoring and alerting requirements`
- `Compliance or change control requirements`
### Produces
- `deployment-plan: Comprehensive deployment strategy with steps, validation, and rollback`
- `cicd-pipeline-definition: CI/CD pipeline configuration with stages, gates, and automation`
- `release-checklist: Pre-deployment checklist with validation and approval steps`
- `rollback-plan: Rollback procedures with triggers and recovery steps`
- `runbooks: Operational runbooks for deployment, troubleshooting, and maintenance`
- `infrastructure-as-code: Infrastructure provisioning templates`
- `deployment-pipeline: Deployment automation scripts and orchestration`
- `smoke-test-suite: Post-deployment smoke tests for validation`
- `production-readiness-checklist: Production readiness assessment and sign-off`
## Example Use Cases
- Deployment strategy (blue-green with traffic shifting)
- Pre-deployment checklist (backups, capacity validation)
- Deployment sequence and dependencies
- Health checks and validation gates
- Traffic migration steps (0% → 10% → 50% → 100%)
- Rollback triggers and procedures
- Post-deployment validation
- Monitoring and alerting configuration
- Communication plan
- Build stage (npm install, compile, bundle)
- Test stage (unit tests, integration tests, coverage gate 80%)
- Security stage (SAST, dependency scanning, OWASP check)
- Deploy to staging (automated)
- Smoke tests and integration tests in staging
- Manual approval gate for production
- Deploy to production (blue-green)
- Post-deployment validation
- Slack/email notifications
- Deployment runbook (step-by-step deployment procedures)
- Scaling runbook (horizontal and vertical scaling procedures)
- Troubleshooting runbook (common issues and resolution)
- Incident response runbook (incident classification and escalation)
- Disaster recovery runbook (backup and restore procedures)
- Database maintenance runbook (schema changes, backups)
- Each runbook includes: prerequisites, steps, validation, rollback
## Usage
```bash
# Activate the agent
/agent deployment.engineer
# Or invoke directly
betty agent run deployment.engineer --input <path>
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*

agents/deployment.engineer/agent.yaml

@@ -0,0 +1,65 @@
name: deployment.engineer
version: 0.1.0
description: Create comprehensive deployment and release artifacts including deployment
plans, CI/CD pipelines, release checklists, rollback procedures, runbooks, and infrastructure-as-code
configurations. Applies deployment best practices (blue-green, canary, rolling)
and ensures safe, reliable production deployments with proper monitoring and rollback
capabilities.
status: draft
reasoning_mode: iterative
capabilities:
- Design deployment strategies with rollback and validation procedures
- Automate delivery pipelines and operational runbooks
- Coordinate release governance, approvals, and compliance requirements
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: Application or service description
description: Input artifact of type Application or service description
- type: Infrastructure and environment details
description: Input artifact of type Infrastructure and environment details
- type: Deployment requirements
description: Input artifact of type Deployment requirements
- type: Release scope and components
description: Input artifact of type Release scope and components
- type: Monitoring and alerting requirements
description: Input artifact of type Monitoring and alerting requirements
- type: Compliance or change control requirements
description: Input artifact of type Compliance or change control requirements
produces:
- type: deployment-plan
  description: Comprehensive deployment strategy with steps, validation, and rollback
- type: cicd-pipeline-definition
  description: CI/CD pipeline configuration with stages, gates, and automation
- type: release-checklist
  description: Pre-deployment checklist with validation and approval steps
- type: rollback-plan
  description: Rollback procedures with triggers and recovery steps
- type: runbooks
  description: Operational runbooks for deployment, troubleshooting, and maintenance
- type: infrastructure-as-code
  description: Infrastructure provisioning templates
- type: deployment-pipeline
  description: Deployment automation scripts and orchestration
- type: smoke-test-suite
  description: Post-deployment smoke tests for validation
- type: production-readiness-checklist
  description: Production readiness assessment and sign-off

agents/file.processor/README.md

@@ -0,0 +1,52 @@
# File.Processor Agent
## Purpose
Processes files through various transformations including format conversion, compression, encryption, and batch operations.
## Skills
This agent uses the following skills:
- `file.compare`
- `workflow.orchestrate`
- `build.optimize`
## Artifact Flow
### Consumes
- `file-list`
- `transformation-config`
### Produces
- `processed-files`
- `processing-report`
- `file.convert`
- `file.compress`
- `file.encrypt`
- `batch.processor`
- `Sequential: Process files one by one`
- `Parallel: Process multiple files concurrently`
- `Pipeline: Chain multiple transformations`
- `Files processed successfully`
- `Files that failed with error details`
- `Processing time and performance metrics`
- `Storage space saved`
- `Transformation details for each file`
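The three execution modes listed above differ only in how transformations are scheduled. A minimal sketch of the sequential and parallel cases using the standard library (the `process` step is a placeholder for a real conversion, compression, or encryption transformation):
```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def process(path: Path) -> str:
    # Placeholder transformation: report the file size.
    return f"{path.name}: {path.stat().st_size} bytes"

files = sorted(Path("input").glob("*.dat"))  # illustrative input location

sequential = [process(p) for p in files]         # one by one

with ThreadPoolExecutor(max_workers=4) as pool:  # concurrently
    parallel = list(pool.map(process, files))
```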
## Usage
```bash
# Activate the agent
/agent file.processor
# Or invoke directly
betty agent run file.processor --input <path>
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*

agents/file.processor/agent.yaml

@@ -0,0 +1,50 @@
name: file.processor
version: 0.1.0
description: Processes files through various transformations including format conversion,
compression, encryption, and batch operations.
status: draft
reasoning_mode: oneshot
capabilities:
- Execute configurable pipelines of file transformations
- Optimize files through compression and format conversion workflows
- Apply encryption and verification steps with detailed reporting
skills_available:
- file.compare
- workflow.orchestrate
- build.optimize
permissions: []
artifact_metadata:
consumes:
- type: file-list
description: Input artifact of type file-list
- type: transformation-config
description: Input artifact of type transformation-config
produces:
- type: processed-files
description: Output artifact of type processed-files
- type: processing-report
description: Output artifact of type processing-report
- type: file.convert
description: Output artifact of type file.convert
- type: file.compress
description: Output artifact of type file.compress
- type: file.encrypt
description: Output artifact of type file.encrypt
- type: batch.processor
description: Output artifact of type batch.processor
- type: 'Sequential: Process files one by one'
description: 'Output artifact of type Sequential: Process files one by one'
- type: 'Parallel: Process multiple files concurrently'
description: 'Output artifact of type Parallel: Process multiple files concurrently'
- type: 'Pipeline: Chain multiple transformations'
description: 'Output artifact of type Pipeline: Chain multiple transformations'
- type: Files processed successfully
description: Output artifact of type Files processed successfully
- type: Files that failed with error details
description: Output artifact of type Files that failed with error details
- type: Processing time and performance metrics
description: Output artifact of type Processing time and performance metrics
- type: Storage space saved
description: Output artifact of type Storage space saved
- type: Transformation details for each file
description: Output artifact of type Transformation details for each file

agents/governance.manager/README.md

@@ -0,0 +1,73 @@
# Governance.Manager Agent
## Purpose
Create comprehensive program and project governance artifacts including project charters, RAID logs (Risks, Assumptions, Issues, Decisions), decision logs, governance frameworks, compliance matrices, and steering committee artifacts. Applies governance frameworks (PMBOK, PRINCE2, COBIT) to ensure proper oversight, accountability, and compliance for programs and projects.
## Skills
This agent uses the following skills:
- `artifact.create`
- `artifact.validate`
- `artifact.review`
## Artifact Flow
### Consumes
- `Program or project description`
- `Stakeholders and governance structure`
- `Objectives and success criteria`
- `Compliance or regulatory requirements`
- `Risks, issues, and assumptions`
- `Decisions to be documented`
### Produces
- `project-charter: Project charter with authority, scope, objectives, and success criteria`
- `raid-log: Comprehensive RAID log`
- `decision-log: Decision register with context, options, rationale, and outcomes`
- `governance-framework: Governance structure with roles, committees, and decision rights`
- `compliance-matrix: Compliance mapping to regulatory and policy requirements`
- `stakeholder-analysis: Stakeholder analysis with power/interest grid and engagement strategy`
- `steering-committee-report: Executive steering committee reporting pack`
- `change-control-process: Change management and approval workflow`
- `benefits-realization-plan: Benefits tracking and realization framework`
## Example Use Cases
- Draft a project charter for a cloud migration program covering:
  - Project purpose and business justification
  - Scope and deliverables (migrate 50 applications to AWS)
  - Objectives and success criteria (90% cost reduction, zero downtime)
  - Authority and decision rights
  - Governance structure (steering committee, PMO oversight)
  - Budget and resource allocation
  - Assumptions and constraints
  - Approval signatures
- Maintain a RAID log covering:
  - Risks: 15-20 identified risks with impact, probability, mitigation
  - Assumptions: Business continuity, vendor SLAs, budget availability
  - Issues: Current issues with severity, owner, and resolution plan
  - Decisions: Key decisions with rationale and stakeholder approval
  - Cross-references to related artifacts
- Define a governance framework covering:
  - Governance structure (executive steering, program board, workstream leads)
  - Decision-making authority and escalation paths
  - Meeting cadence and reporting requirements
  - RACI matrix for key decisions and deliverables
  - Compliance and risk management processes
  - Change control and approval workflow
## Usage
```bash
# Activate the agent in Claude Code
/agent governance.manager
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*

64
agents/governance.manager/agent.yaml Normal file

@@ -0,0 +1,64 @@
name: governance.manager
version: 0.1.0
description: Create comprehensive program and project governance artifacts including
project charters, RAID logs (Risks, Assumptions, Issues, Decisions), decision logs,
governance frameworks, compliance matrices, and steering committee artifacts. Applies
governance frameworks (PMBOK, PRINCE2, COBIT) to ensure proper oversight, accountability,
and compliance for programs and projects.
status: draft
reasoning_mode: iterative
capabilities:
- Establish governance structures and stakeholder engagement plans
- Maintain comprehensive RAID and decision logs for executive visibility
- Ensure compliance with regulatory and organizational policy requirements
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: Program or project description
description: Input artifact of type Program or project description
- type: Stakeholders and governance structure
description: Input artifact of type Stakeholders and governance structure
- type: Objectives and success criteria
description: Input artifact of type Objectives and success criteria
- type: Compliance or regulatory requirements
description: Input artifact of type Compliance or regulatory requirements
- type: Risks, issues, and assumptions
description: Input artifact of type Risks, issues, and assumptions
- type: Decisions to be documented
description: Input artifact of type Decisions to be documented
  produces:
  - type: project-charter
    description: Project charter with authority, scope, objectives, and success
      criteria
  - type: raid-log
    description: Comprehensive RAID log
  - type: decision-log
    description: Decision register with context, options, rationale, and outcomes
  - type: governance-framework
    description: Governance structure with roles, committees, and decision rights
  - type: compliance-matrix
    description: Compliance mapping to regulatory and policy requirements
  - type: stakeholder-analysis
    description: Stakeholder analysis with power/interest grid and engagement
      strategy
  - type: steering-committee-report
    description: Executive steering committee reporting pack
  - type: change-control-process
    description: Change management and approval workflow
  - type: benefits-realization-plan
    description: Benefits tracking and realization framework

325
agents/meta.agent/README.md Normal file

@@ -0,0 +1,325 @@
# meta.agent - Agent Creator
The meta-agent that creates other agents through skill composition.
## Overview
**meta.agent** transforms natural language descriptions into complete, functional agents with proper skill composition, artifact metadata, and documentation.
**What it produces:**
- Complete `agent.yaml` with recommended skills
- Auto-generated `README.md` documentation
- Proper artifact metadata (produces/consumes)
- Inferred permissions from skills
## Quick Start
### 1. Create an Agent Description
Create a Markdown file describing your agent:
```markdown
# Name: api.architect
# Purpose:
An agent that designs comprehensive REST APIs and validates them
against best practices.
# Inputs:
- API requirements
# Outputs:
- openapi-spec
- validation-report
- api-models
# Examples:
- Design a RESTful API for an e-commerce platform
- Create an API for a task management system
```
### 2. Run meta.agent
```bash
python3 agents/meta.agent/meta_agent.py examples/api_architect_description.md
```
### 3. Output
```
✨ Agent 'api.architect' created successfully!
📄 Agent definition: agents/api.architect/agent.yaml
📖 Documentation: agents/api.architect/README.md
🔧 Skills: api.define, api.validate, workflow.validate
```
## Usage
### Basic Creation
```bash
# Create agent from Markdown description
python3 agents/meta.agent/meta_agent.py path/to/agent_description.md
# Create agent from JSON description
python3 agents/meta.agent/meta_agent.py path/to/agent_description.json
# Specify output directory
python3 agents/meta.agent/meta_agent.py description.md -o agents/my-agent
# Skip validation
python3 agents/meta.agent/meta_agent.py description.md --no-validate
```
### Description Format
**Markdown Format:**
```markdown
# Name: agent-name
# Purpose:
Detailed description of what the agent does...
# Inputs:
- artifact-type-1
- artifact-type-2
# Outputs:
- artifact-type-3
- artifact-type-4
# Constraints:
(Optional) Any constraints or requirements...
# Examples:
- Example use case 1
- Example use case 2
```
**JSON Format:**
```json
{
"name": "agent-name",
"purpose": "Detailed description...",
"inputs": ["artifact-type-1", "artifact-type-2"],
"outputs": ["artifact-type-3", "artifact-type-4"],
"examples": ["Example 1", "Example 2"]
}
```
## What meta.agent Creates
### 1. agent.yaml
Complete agent definition with:
- **Recommended skills** - Uses `agent.compose` to find compatible skills
- **Artifact metadata** - Proper produces/consumes declarations
- **Permissions** - Inferred from selected skills
- **Description** - Professional formatting
Example output:
```yaml
name: api.architect
description: Designs and validates REST APIs against best practices
skills_available:
- api.define
- api.validate
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: api-requirements
produces:
- type: openapi-spec
schema: schemas/openapi-spec.json
- type: validation-report
schema: schemas/validation-report.json
```
### 2. README.md
Auto-generated documentation with:
- Agent purpose and capabilities
- Skills used with rationale
- Artifact flow (inputs/outputs)
- Example use cases
- Usage instructions
- "Created by meta.agent" attribution
## How It Works
1. **Parse Description** - Reads Markdown or JSON
2. **Find Skills** - Uses `agent.compose` to recommend compatible skills
3. **Generate Metadata** - Uses `artifact.define` for artifact contracts
4. **Infer Permissions** - Analyzes required skills
5. **Create Files** - Generates agent.yaml and README.md
6. **Validate** - Ensures proper structure and compatibility
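These steps map directly onto methods of the `AgentCreator` class in `meta_agent.py`; a minimal sketch of driving them from Python (assuming you run from the repo root with `agents/meta.agent` on `sys.path`):

```python
from meta_agent import AgentCreator  # agents/meta.agent/meta_agent.py

creator = AgentCreator(registry_path="registry/skills.json")

# Steps 1-2: parse the description and find compatible skills
desc = creator.parse_description("examples/api_architect_description.md")
skills = creator.find_compatible_skills(desc["purpose"], desc["inputs"] + desc["outputs"])

# Steps 3-4: generate artifact metadata and infer permissions
metadata = creator.generate_artifact_metadata(desc["inputs"], desc["outputs"])
permissions = creator.infer_permissions(skills.get("recommended_skills", []))

# Steps 5-6: or run the whole pipeline end to end
result = creator.create_agent("examples/api_architect_description.md")
print(result["agent_yaml"], result["skills"])
```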
## Integration with Other Meta-Agents
### With meta.compatibility
After creating an agent, use `meta.compatibility` to analyze it:
```bash
# Create agent
python3 agents/meta.agent/meta_agent.py description.md
# Analyze compatibility
python3 agents/meta.compatibility/meta_compatibility.py analyze api.architect
```
### With meta.suggest
Get suggestions after creating an agent:
```bash
python3 agents/meta.suggest/meta_suggest.py \
--context meta.agent \
--artifacts agents/api.architect/agent.yaml
```
## Common Workflows
### Workflow 1: Create and Analyze
```bash
# Step 1: Create agent
python3 agents/meta.agent/meta_agent.py examples/my_agent.md
# Step 2: Analyze compatibility
python3 agents/meta.compatibility/meta_compatibility.py find-compatible my-agent
# Step 3: Test the agent
# (Manual testing or agent.run)
```
### Workflow 2: Create Multiple Agents
```bash
# Create several agents
for desc in examples/*_agent_description.md; do
python3 agents/meta.agent/meta_agent.py "$desc"
done
# Analyze the ecosystem
python3 agents/meta.compatibility/meta_compatibility.py list-all
```
## Artifact Types
### Consumes
- **agent-description** - Natural language agent requirements
- Format: Markdown or JSON
- Pattern: `**/agent_description.md`
### Produces
- **agent-definition** - Complete agent.yaml
- Format: YAML
- Pattern: `agents/*/agent.yaml`
- Schema: `schemas/agent-definition.json`
- **agent-documentation** - Auto-generated README
- Format: Markdown
- Pattern: `agents/*/README.md`
## Tips & Best Practices
### Writing Good Descriptions
**Good:**
- Clear, specific purpose
- Well-defined inputs and outputs
- Concrete examples
- Specific artifact types
**Avoid:**
- Vague purpose ("does stuff")
- Generic inputs ("data")
- No examples
- Unclear artifact types
### Choosing Artifact Types
Use existing artifact types when possible:
- `openapi-spec` for API specifications
- `validation-report` for validation results
- `workflow-definition` for workflows
If you need a new type, create it with `meta.artifact` first.
### Skill Selection
meta.agent uses keyword matching to find skills:
- "api" → finds api.define, api.validate
- "validate" → finds validation skills
- "agent" → finds agent.compose, meta.agent
Be descriptive in your purpose statement to get better skill recommendations.
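The matcher itself lives in `skills/agent.compose` (not shown here); conceptually it is a keyword scan over the registry, along these lines (an illustrative sketch, not the actual implementation):

```python
def sketch_find_skills(registry: dict, purpose: str) -> dict:
    """Illustrative keyword matcher -- the real logic is in agent.compose."""
    keywords = {word.lower().strip(".,") for word in purpose.split()}
    recommended = [
        skill["name"]
        for skill in registry.get("skills", [])
        # e.g. "api" in the purpose matches api.define and api.validate
        if keywords & set(skill.get("name", "").split("."))
    ]
    return {"recommended_skills": recommended}
```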
## Troubleshooting
### Agent name conflicts
```
Error: Agent 'api.architect' already exists
```
**Solution:** Choose a different name or remove the existing agent directory.
### No skills recommended
```
Warning: No skills found for agent purpose
```
**Solutions:**
- Make purpose more specific
- Mention artifact types explicitly
- Check if relevant skills exist in registry
### Missing artifact types
```
Warning: Artifact type 'my-artifact' not in known registry
```
**Solution:** Create the artifact type with `meta.artifact` first:
```bash
python3 agents/meta.artifact/meta_artifact.py create artifact_description.md
```
## Examples
See `examples/` directory for sample agent descriptions:
- `api_architect_description.md` - API design and validation agent
- (Add more as you create them)
## Architecture
meta.agent is part of the meta-agent ecosystem:
```
meta.agent
├─ Uses: agent.compose (find skills)
├─ Uses: artifact.define (generate metadata)
├─ Produces: agent.yaml + README.md
└─ Works with: meta.compatibility, meta.suggest
```
## Related Documentation
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Complete meta-agent architecture
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [agent-description schema](../../schemas/agent-description.json) - JSON schema
## Created By
Part of the Betty Framework meta-agent ecosystem.

93
agents/meta.agent/agent.yaml Normal file

@@ -0,0 +1,93 @@
name: meta.agent
version: 0.1.0
description: |
  Meta-agent that creates other agents by composing skills, transforming
  natural language descriptions into complete, functional agents.
meta.agent analyzes agent requirements, recommends compatible skills using artifact
metadata, generates complete agent definitions, and produces documentation.
artifact_metadata:
consumes:
- type: agent-description
file_pattern: "**/agent_description.md"
content_type: "text/markdown"
description: "Natural language description of agent purpose and requirements"
produces:
- type: agent-definition
file_pattern: "agents/*/agent.yaml"
content_type: "application/yaml"
schema: "schemas/agent-definition.json"
description: "Complete agent configuration with skills and metadata"
- type: agent-documentation
file_pattern: "agents/*/README.md"
content_type: "text/markdown"
description: "Human-readable agent documentation"
status: draft
reasoning_mode: iterative
capabilities:
- Analyze agent requirements and identify compatible skills and capabilities
- Generate complete agent manifests, documentation, and supporting assets
- Validate registry consistency before registering new agents
skills_available:
- agent.compose # Find compatible skills based on requirements
- artifact.define # Generate artifact metadata for the new agent
- registry.update # Validate and register generated agents
permissions:
- filesystem:read
- filesystem:write
system_prompt: |
You are meta.agent, the meta-agent that creates other agents by composing skills.
Your purpose is to transform natural language descriptions into complete, functional agents
with proper skill composition, artifact metadata, and documentation.
## Your Workflow
1. **Parse Requirements** - Understand what the agent needs to do
- Extract purpose, inputs, outputs, and constraints
- Identify required artifacts and permissions
2. **Compose Skills** - Use agent.compose to find compatible skills
- Analyze artifact flows (what's produced and consumed)
- Ensure no gaps in the artifact chain
- Consider permission requirements
3. **Generate Metadata** - Use artifact.define for proper artifact contracts
- Define what artifacts the agent consumes
- Define what artifacts the agent produces
- Include schemas and file patterns
4. **Create Agent Definition** - Write agent.yaml
- Name, description, skills_available
- Artifact metadata (consumes/produces)
- Permissions
- System prompt (optional but recommended)
5. **Document** - Generate comprehensive README.md
- Agent purpose and use cases
- Required inputs and expected outputs
- Example usage
- Artifact flow diagram
6. **Validate** (optional) - Use registry.update
- Check agent definition is valid
- Verify skill compatibility
- Ensure artifact contracts are sound
## Principles
- **Artifact-First Design**: Ensure clean artifact flows with no gaps
- **Minimal Skill Sets**: Only include skills the agent actually needs
- **Clear Documentation**: Make the agent's purpose immediately obvious
- **Convention Adherence**: Follow Betty Framework standards
- **Composability**: Design agents that work well with other agents
When creating an agent, think like an architect: What does it consume? What does it
produce? What skills enable that transformation? How do artifacts flow through the system?

558
agents/meta.agent/meta_agent.py Executable file

@@ -0,0 +1,558 @@
#!/usr/bin/env python3
"""
meta.agent - Meta-agent that creates other agents
Transforms natural language descriptions into complete, functional agents
by composing skills and generating proper artifact metadata.
"""
import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional
# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)
# Import skill modules directly
agent_compose_path = Path(parent_dir) / "skills" / "agent.compose"
artifact_define_path = Path(parent_dir) / "skills" / "artifact.define"
sys.path.insert(0, str(agent_compose_path))
sys.path.insert(0, str(artifact_define_path))
import agent_compose
import artifact_define
# Import traceability system
from betty.traceability import get_tracer, RequirementInfo
class AgentCreator:
"""Creates agents from natural language descriptions"""
def __init__(self, registry_path: str = "registry/skills.json"):
"""Initialize with registry path"""
self.registry_path = Path(registry_path)
self.registry = self._load_registry()
def _load_registry(self) -> Dict[str, Any]:
"""Load skills registry"""
if not self.registry_path.exists():
raise FileNotFoundError(f"Registry not found: {self.registry_path}")
with open(self.registry_path) as f:
return json.load(f)
def parse_description(self, description_path: str) -> Dict[str, Any]:
"""
Parse agent description from Markdown or JSON file
Args:
description_path: Path to agent_description.md or agent_description.json
Returns:
Parsed description with name, purpose, inputs, outputs, constraints
"""
path = Path(description_path)
if not path.exists():
raise FileNotFoundError(f"Description not found: {description_path}")
# Handle JSON format
if path.suffix == ".json":
with open(path) as f:
return json.load(f)
# Handle Markdown format
with open(path) as f:
content = f.read()
# Parse Markdown sections
description = {
"name": "",
"purpose": "",
"inputs": [],
"outputs": [],
"constraints": {},
"examples": []
}
current_section = None
for line in content.split('\n'):
line = line.strip()
# Section headers
if line.startswith('# Name:'):
description["name"] = line.replace('# Name:', '').strip()
elif line.startswith('# Purpose:'):
current_section = "purpose"
elif line.startswith('# Inputs:'):
current_section = "inputs"
elif line.startswith('# Outputs:'):
current_section = "outputs"
elif line.startswith('# Constraints:'):
current_section = "constraints"
elif line.startswith('# Examples:'):
current_section = "examples"
elif line and not line.startswith('#'):
# Content for current section
if current_section == "purpose":
description["purpose"] += line + " "
elif current_section == "inputs" and line.startswith('-'):
# Extract artifact type (before parentheses or description)
artifact = line[1:].strip()
# Remove anything in parentheses and any extra description
if '(' in artifact:
artifact = artifact.split('(')[0].strip()
description["inputs"].append(artifact)
elif current_section == "outputs" and line.startswith('-'):
# Extract artifact type (before parentheses or description)
artifact = line[1:].strip()
# Remove anything in parentheses and any extra description
if '(' in artifact:
artifact = artifact.split('(')[0].strip()
description["outputs"].append(artifact)
elif current_section == "examples" and line.startswith('-'):
description["examples"].append(line[1:].strip())
description["purpose"] = description["purpose"].strip()
return description
def find_compatible_skills(
self,
purpose: str,
required_artifacts: Optional[List[str]] = None
) -> Dict[str, Any]:
"""
Find compatible skills for agent purpose
Args:
purpose: Natural language description of agent purpose
required_artifacts: List of artifact types the agent needs
Returns:
Dictionary with recommended skills and rationale
"""
return agent_compose.find_skills_for_purpose(
self.registry,
purpose,
required_artifacts
)
def generate_artifact_metadata(
self,
inputs: List[str],
outputs: List[str]
) -> Dict[str, Any]:
"""
Generate artifact metadata from inputs/outputs
Args:
inputs: List of input artifact types
outputs: List of output artifact types
Returns:
Artifact metadata structure
"""
metadata = {}
if inputs:
metadata["consumes"] = []
for input_type in inputs:
artifact_def = artifact_define.get_artifact_definition(input_type)
if artifact_def:
metadata["consumes"].append(artifact_def)
else:
# Create basic definition
metadata["consumes"].append({
"type": input_type,
"description": f"Input artifact of type {input_type}"
})
if outputs:
metadata["produces"] = []
for output_type in outputs:
artifact_def = artifact_define.get_artifact_definition(output_type)
if artifact_def:
metadata["produces"].append(artifact_def)
else:
# Create basic definition
metadata["produces"].append({
"type": output_type,
"description": f"Output artifact of type {output_type}"
})
return metadata
def infer_permissions(self, skills: List[str]) -> List[str]:
"""
Infer required permissions from skills
Args:
skills: List of skill names
Returns:
List of required permissions
"""
permissions = set()
skills_list = self.registry.get("skills", [])
for skill_name in skills:
# Find skill in registry
skill = next(
(s for s in skills_list if s.get("name") == skill_name),
None
)
if skill and "permissions" in skill:
for perm in skill["permissions"]:
permissions.add(perm)
return sorted(list(permissions))
def generate_agent_yaml(
self,
name: str,
description: str,
skills: List[str],
artifact_metadata: Dict[str, Any],
permissions: List[str],
system_prompt: Optional[str] = None
) -> str:
"""
Generate agent.yaml content
Args:
name: Agent name
description: Agent description
skills: List of skill names
artifact_metadata: Artifact metadata structure
permissions: List of permissions
system_prompt: Optional system prompt
Returns:
YAML content as string
"""
agent_def = {
"name": name,
"description": description,
"skills_available": skills,
"permissions": permissions
}
if artifact_metadata:
agent_def["artifact_metadata"] = artifact_metadata
if system_prompt:
agent_def["system_prompt"] = system_prompt
return yaml.dump(
agent_def,
default_flow_style=False,
sort_keys=False,
allow_unicode=True
)
def generate_readme(
self,
name: str,
purpose: str,
skills: List[str],
inputs: List[str],
outputs: List[str],
examples: List[str]
) -> str:
"""
Generate README.md content
Args:
name: Agent name
purpose: Agent purpose
skills: List of skill names
inputs: Input artifacts
outputs: Output artifacts
examples: Example use cases
Returns:
Markdown content
"""
readme = f"""# {name.title()} Agent
## Purpose
{purpose}
## Skills
This agent uses the following skills:
"""
for skill in skills:
readme += f"- `{skill}`\n"
if inputs or outputs:
readme += "\n## Artifact Flow\n\n"
if inputs:
readme += "### Consumes\n\n"
for inp in inputs:
readme += f"- `{inp}`\n"
readme += "\n"
if outputs:
readme += "### Produces\n\n"
for out in outputs:
readme += f"- `{out}`\n"
readme += "\n"
if examples:
readme += "## Example Use Cases\n\n"
for example in examples:
readme += f"- {example}\n"
readme += "\n"
readme += """## Usage
```bash
# Activate the agent
/agent {name}
# Or invoke directly
betty agent run {name} --input <path>
```
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*
""".format(name=name)
return readme
def create_agent(
self,
description_path: str,
output_dir: Optional[str] = None,
validate: bool = True,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, str]:
"""
Create a complete agent from description
Args:
description_path: Path to agent description file
output_dir: Output directory (default: agents/{name}/)
            validate: Whether to validate the generated agent with registry.update
requirement: Requirement information for traceability (optional)
Returns:
Dictionary with paths to created files
"""
# Parse description
desc = self.parse_description(description_path)
name = desc["name"]
if not name:
raise ValueError("Agent name is required")
# Determine output directory
if not output_dir:
output_dir = f"agents/{name}"
output_path = Path(output_dir)
output_path.mkdir(parents=True, exist_ok=True)
# Find compatible skills
skill_recommendations = self.find_compatible_skills(
desc["purpose"],
desc.get("inputs", []) + desc.get("outputs", [])
)
skills = skill_recommendations.get("recommended_skills", [])
# Generate artifact metadata
artifact_metadata = self.generate_artifact_metadata(
desc.get("inputs", []),
desc.get("outputs", [])
)
# Infer permissions
permissions = self.infer_permissions(skills)
# Generate agent.yaml
agent_yaml_content = self.generate_agent_yaml(
name=name,
description=desc["purpose"],
skills=skills,
artifact_metadata=artifact_metadata,
permissions=permissions
)
agent_yaml_path = output_path / "agent.yaml"
with open(agent_yaml_path, 'w') as f:
f.write(agent_yaml_content)
# Generate README.md
readme_content = self.generate_readme(
name=name,
purpose=desc["purpose"],
skills=skills,
inputs=desc.get("inputs", []),
outputs=desc.get("outputs", []),
examples=desc.get("examples", [])
)
readme_path = output_path / "README.md"
with open(readme_path, 'w') as f:
f.write(readme_content)
# Log traceability if requirement provided
trace_id = None
if requirement:
try:
tracer = get_tracer()
trace_id = tracer.log_creation(
component_id=name,
component_name=name.replace(".", " ").title(),
component_type="agent",
component_version="0.1.0",
component_file_path=str(agent_yaml_path),
input_source_path=description_path,
created_by_tool="meta.agent",
created_by_version="0.1.0",
requirement=requirement,
tags=["agent", "auto-generated"],
project="Betty Framework"
)
# Log validation check
tracer.log_verification(
component_id=name,
check_type="validation",
tool="meta.agent",
result="passed",
details={
"checks_performed": [
{"name": "agent_structure", "status": "passed"},
{"name": "artifact_metadata", "status": "passed"},
{"name": "skills_compatibility", "status": "passed", "message": f"{len(skills)} compatible skills found"}
]
}
)
except Exception as e:
print(f"⚠️ Warning: Could not log traceability: {e}")
result = {
"agent_yaml": str(agent_yaml_path),
"readme": str(readme_path),
"name": name,
"skills": skills,
"rationale": skill_recommendations.get("rationale", "")
}
if trace_id:
result["trace_id"] = trace_id
return result
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.agent - Create agents from natural language descriptions"
)
parser.add_argument(
"description",
help="Path to agent description file (.md or .json)"
)
parser.add_argument(
"-o", "--output",
help="Output directory (default: agents/{name}/)"
)
parser.add_argument(
"--no-validate",
action="store_true",
help="Skip validation step"
)
# Traceability arguments
parser.add_argument(
"--requirement-id",
help="Requirement identifier for traceability (e.g., REQ-2025-001)"
)
parser.add_argument(
"--requirement-description",
help="What this agent is meant to accomplish"
)
parser.add_argument(
"--requirement-source",
help="Source document or system (e.g., requirements/Q1-2025.md)"
)
parser.add_argument(
"--issue-id",
help="Issue tracking ID (e.g., JIRA-123)"
)
parser.add_argument(
"--requested-by",
help="Who requested this requirement"
)
parser.add_argument(
"--rationale",
help="Why this component is needed"
)
args = parser.parse_args()
# Create requirement info if provided
requirement = None
if args.requirement_id and args.requirement_description:
requirement = RequirementInfo(
id=args.requirement_id,
description=args.requirement_description,
source=args.requirement_source,
issue_id=args.issue_id,
requested_by=args.requested_by,
rationale=args.rationale
)
# Create agent
creator = AgentCreator()
print(f"🔮 meta.agent creating agent from {args.description}...")
try:
result = creator.create_agent(
args.description,
output_dir=args.output,
validate=not args.no_validate,
requirement=requirement
)
print(f"\n✨ Agent '{result['name']}' created successfully!\n")
print(f"📄 Agent definition: {result['agent_yaml']}")
print(f"📖 Documentation: {result['readme']}\n")
print(f"🔧 Skills: {', '.join(result['skills'])}\n")
if result.get("rationale"):
print(f"💡 Rationale:\n{result['rationale']}\n")
if result.get("trace_id"):
print(f"📝 Traceability: {result['trace_id']}")
print(f" View trace: python3 betty/trace_cli.py show {result['name']}\n")
except Exception as e:
print(f"\n❌ Error creating agent: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()

372
agents/meta.artifact/README.md Normal file

@@ -0,0 +1,372 @@
# meta.artifact - The Artifact Standards Authority
THE single source of truth for all artifact type definitions in Betty Framework.
## Overview
**meta.artifact** manages the complete lifecycle of artifact types - from definition to documentation to registration. All artifact types MUST be created through meta.artifact. No ad-hoc definitions are permitted.
**What it does:**
- Defines new artifact types from descriptions
- Generates JSON schemas with validation rules
- Updates ARTIFACT_STANDARDS.md automatically
- Registers types in KNOWN_ARTIFACT_TYPES
- Validates uniqueness and prevents conflicts
## Quick Start
### 1. Create Artifact Description
```markdown
# Name: optimization-report
# Purpose:
Performance and security optimization recommendations for APIs
# Format: JSON
# File Pattern: *.optimization.json
# Schema Properties:
- optimizations (array): List of optimization recommendations
- severity (string): Severity level
- analyzed_artifact (string): Reference to analyzed artifact
# Required Fields:
- optimizations
- severity
- analyzed_artifact
# Producers:
- api.optimize
# Consumers:
- api.implement
- report.generate
```
### 2. Create Artifact Type
```bash
python3 agents/meta.artifact/meta_artifact.py create examples/optimization_report_artifact.md
```
### 3. Output
```
✨ Artifact type 'optimization-report' created successfully!
📄 Created files:
- schemas/optimization-report.json
📝 Updated files:
- docs/ARTIFACT_STANDARDS.md
- skills/artifact.define/artifact_define.py
✅ Artifact type 'optimization-report' is now registered
```
## Usage
### Create New Artifact Type
```bash
# From Markdown description
python3 agents/meta.artifact/meta_artifact.py create artifact_description.md
# From JSON description
python3 agents/meta.artifact/meta_artifact.py create artifact_description.json
# Force overwrite if exists
python3 agents/meta.artifact/meta_artifact.py create artifact_description.md --force
```
### Check if Artifact Exists
```bash
python3 agents/meta.artifact/meta_artifact.py check optimization-report
```
Output:
```
✅ Artifact type 'optimization-report' exists
Location: docs/ARTIFACT_STANDARDS.md
```
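The same check is available programmatically through `ArtifactAuthority.check_existence` (a minimal sketch, assuming you run from the repo root with `agents/meta.artifact` on `sys.path`):

```python
from meta_artifact import ArtifactAuthority  # agents/meta.artifact/meta_artifact.py

exists, location = ArtifactAuthority().check_existence("optimization-report")
if exists:
    print(f"Already defined at {location}")
```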
## What meta.artifact Creates
### 1. JSON Schema (schemas/*.json)
Complete JSON Schema Draft 07 schema with:
- Properties from description
- Required fields
- Type validation
- Descriptions
Example:
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Optimization Report",
"description": "Performance and security recommendations...",
"type": "object",
"required": ["optimizations", "severity", "analyzed_artifact"],
"properties": {
"optimizations": {
"type": "array",
"description": "List of optimization recommendations"
},
...
}
}
```
### 2. Documentation Section (docs/ARTIFACT_STANDARDS.md)
Adds complete section with:
- Artifact number and title
- Description
- Convention (file pattern, format, content type)
- Schema reference
- Producers and consumers
- Related types
### 3. Registry Entry (skills/artifact.define/artifact_define.py)
Adds to KNOWN_ARTIFACT_TYPES:
```python
"optimization-report": {
"schema": "schemas/optimization-report.json",
"file_pattern": "*.optimization.json",
"content_type": "application/json",
"description": "Performance and security optimization recommendations..."
}
```
## Description Format
### Markdown Format
```markdown
# Name: artifact-type-name
# Purpose:
Detailed description of what this artifact represents...
# Format: JSON | YAML | Markdown | Python | etc.
# File Pattern: *.artifact-type.ext
# Content Type: application/json (optional, inferred from format)
# Schema Properties:
- property_name (type): Description
- another_property (array): Description
# Required Fields:
- property_name
- another_property
# Producers:
- skill.that.produces
- agent.that.produces
# Consumers:
- skill.that.consumes
- agent.that.consumes
# Related Types:
- related-artifact-1
- related-artifact-2
# Validation Rules:
- Custom rule 1
- Custom rule 2
```
### JSON Format
```json
{
"name": "artifact-type-name",
"purpose": "Description...",
"format": "JSON",
"file_pattern": "*.artifact-type.json",
"schema_properties": {
"field1": {"type": "string", "description": "..."},
"field2": {"type": "array", "description": "..."}
},
"required_fields": ["field1"],
"producers": ["producer.skill"],
"consumers": ["consumer.skill"]
}
```
## Governance Rules
meta.artifact enforces these rules:
1. **Uniqueness** - Each artifact type must have a unique name
2. **Clarity** - Names must be descriptive (e.g., "openapi-spec" not "spec")
3. **Consistency** - Must use kebab-case (lowercase with hyphens)
4. **Documentation** - Every type must be fully documented
5. **Schemas** - Every type should have a JSON schema (if applicable)
6. **No Conflicts** - Checks for naming conflicts before creating
## Workflow
```
Developer creates artifact_description.md
meta.artifact validates name and format
Checks if type already exists
Generates JSON Schema
Updates ARTIFACT_STANDARDS.md
Adds to KNOWN_ARTIFACT_TYPES
Validates all files
Type is now registered and usable
```
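Programmatically, the whole flow is a single call on `ArtifactAuthority`; a minimal sketch (`create_artifact_type` validates the kebab-case name, checks for conflicts, then writes the schema, documentation, and registry entries):

```python
from meta_artifact import ArtifactAuthority

authority = ArtifactAuthority(base_dir=".")
result = authority.create_artifact_type("examples/optimization_report_artifact.md")
print("Created:", result["created_files"])
print("Updated:", result["updated_files"])
```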
## Integration
### With meta.agent
When meta.agent needs a new artifact type:
```bash
# 1. Define the artifact type
python3 agents/meta.artifact/meta_artifact.py create my_artifact.md
# 2. Create agent that uses it
python3 agents/meta.agent/meta_agent.py agent_description.md
```
### With meta.suggest
meta.suggest will recommend creating artifact types for gaps:
```bash
python3 agents/meta.suggest/meta_suggest.py --analyze-project
```
Output includes:
```
💡 Suggestions:
1. Create agent/skill to produce 'missing-artifact'
```
## Existing Artifact Types
Check `docs/ARTIFACT_STANDARDS.md` for all registered types:
- `openapi-spec` - OpenAPI specifications
- `validation-report` - Validation results
- `workflow-definition` - Betty workflows
- `hook-config` - Claude Code hooks
- `api-models` - Generated data models
- `agent-description` - Agent requirements
- `agent-definition` - Agent configurations
- `agent-documentation` - Agent READMEs
- `optimization-report` - Optimization recommendations
- `compatibility-graph` - Agent relationships
- `pipeline-suggestion` - Multi-agent workflows
- `suggestion-report` - Next-step recommendations
## Tips & Best Practices
### Naming Artifact Types
**Good:**
- `validation-report` (clear, descriptive)
- `openapi-spec` (standard term)
- `optimization-report` (action + result)
**Avoid:**
- `report` (too generic)
- `validationReport` (should be kebab-case)
- `val-rep` (abbreviations)
### Writing Descriptions
Be comprehensive:
- Explain what the artifact represents
- Include all important properties
- Document producers and consumers
- Add related types for discoverability
### Schema Properties
Be specific about types:
- Use JSON Schema types: string, number, integer, boolean, array, object
- Add descriptions for every property
- Mark required fields
- Consider validation rules
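Each `- name (type): description` bullet is parsed with a single regular expression (see `parse_description` in `meta_artifact.py`) and becomes one property in the generated schema:

```python
import re

line = "- optimizations (array): List of optimization recommendations"
prop_name, prop_type, prop_desc = re.match(
    r'-\s+(\w+)\s+\((\w+)\):\s*(.+)', line
).groups()
# -> "optimizations": {"type": "array", "description": "List of optimization recommendations"}
print(prop_name, prop_type, prop_desc)
```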
## Troubleshooting
### Name already exists
```
Error: Artifact type 'my-artifact' already exists at: docs/ARTIFACT_STANDARDS.md
```
**Solutions:**
1. Use `--force` to overwrite (careful!)
2. Choose a different name
3. Use the existing type if appropriate
### Invalid name format
```
Error: Artifact name must be kebab-case (lowercase with hyphens): MyArtifact
```
**Solution:** Use lowercase with hyphens: `my-artifact`
### Missing schema properties
If your artifact is JSON/YAML but has no schema properties, meta.artifact will still create a basic schema. Add properties for better validation.
## Architecture
meta.artifact is THE authority in the meta-agent ecosystem:
```
meta.artifact (Authority)
├─ Manages: All artifact type definitions
├─ Updates: ARTIFACT_STANDARDS.md
├─ Registers: KNOWN_ARTIFACT_TYPES
├─ Used by: meta.agent, meta.skill, all agents
└─ Governance: Single source of truth
```
## Examples
See `examples/` for artifact descriptions:
- `optimization_report_artifact.md`
- `compatibility_graph_artifact.md`
- `pipeline_suggestion_artifact.md`
- `suggestion_report_artifact.md`
## Related Documentation
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Complete artifact documentation
- [artifact-type-description schema](../../schemas/artifact-type-description.json)
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
## Philosophy
**Single Source of Truth** - All artifact definitions flow through meta.artifact. This ensures:
- Consistency across the framework
- Proper documentation
- Schema validation
- No conflicts
- Discoverability
When in doubt, ask meta.artifact.

144
agents/meta.artifact/agent.yaml Normal file

@@ -0,0 +1,144 @@
name: meta.artifact
version: 0.1.0
description: |
The artifact standards authority - THE single source of truth for all
artifact type definitions in Betty Framework.
This meta-agent manages the complete lifecycle of artifact types:
- Defines new artifact types with JSON schemas
- Updates ARTIFACT_STANDARDS.md documentation
- Registers types in the artifact registry
- Validates artifact compatibility across the system
- Ensures consistency and prevents conflicts
All artifact types MUST be registered through meta.artifact before use.
No ad-hoc artifact definitions are permitted.
artifact_metadata:
consumes:
- type: artifact-type-description
file_pattern: "**/artifact_type_description.md"
content_type: "text/markdown"
description: "Natural language description of a new artifact type"
schema: "schemas/artifact-type-description.json"
produces:
- type: artifact-schema
file_pattern: "schemas/*.json"
content_type: "application/json"
schema: "http://json-schema.org/draft-07/schema#"
description: "JSON Schema for validating artifact instances"
- type: artifact-documentation
file_pattern: "docs/ARTIFACT_STANDARDS.md"
content_type: "text/markdown"
description: "Updated artifact standards documentation"
- type: artifact-registry-entry
file_pattern: "skills/artifact.define/artifact_define.py"
content_type: "text/x-python"
description: "Updated KNOWN_ARTIFACT_TYPES registry"
status: draft
reasoning_mode: iterative
capabilities:
- Curate and register canonical artifact type definitions and schemas
- Synchronize documentation with changes to artifact standards
- Validate artifact compatibility across registries and manifests
skills_available:
- artifact.define # Use existing artifact definitions
- registry.update # Register or amend artifact metadata
- registry.query # Inspect existing registry entries
permissions:
- filesystem:read
- filesystem:write
system_prompt: |
You are meta.artifact, the artifact standards authority for Betty Framework.
You are THE single source of truth for artifact type definitions. All artifact
types flow through you - no exceptions.
## Your Responsibilities
1. **Define New Artifact Types**
- Parse artifact type descriptions
- Validate uniqueness (check if type already exists)
- Create JSON schemas with proper validation rules
- Generate comprehensive documentation
- Register in KNOWN_ARTIFACT_TYPES
2. **Maintain Standards Documentation**
- Update docs/ARTIFACT_STANDARDS.md with new types
- Include file patterns, schemas, producers, consumers
- Provide clear examples
- Keep Quick Reference table up to date
3. **Validate Compatibility**
- Check if artifact types can work together
- Verify producer/consumer contracts
- Ensure no naming conflicts
- Validate schema consistency
4. **Registry Management**
- Update skills/artifact.define/artifact_define.py
- Add to KNOWN_ARTIFACT_TYPES dictionary
- Include all metadata (schema, file_pattern, content_type, description)
## Workflow for New Artifact Type
1. **Check Existence**
- Search ARTIFACT_STANDARDS.md for similar types
- Check KNOWN_ARTIFACT_TYPES registry
- Suggest existing type if appropriate
2. **Generate JSON Schema**
- Create schemas/{type-name}.json
- Include proper validation rules
- Use JSON Schema Draft 07
- Add description, examples, required fields
3. **Update Documentation**
- Add new section to ARTIFACT_STANDARDS.md
- Follow existing format (Description, Convention, Schema, Producers, Consumers)
- Update Quick Reference table
4. **Update Registry**
- Add entry to KNOWN_ARTIFACT_TYPES in artifact_define.py
- Include: schema, file_pattern, content_type, description
5. **Validate**
- Ensure all files are properly formatted
- Check for syntax errors
- Validate schema is valid JSON Schema
## Governance Rules
- **Uniqueness**: Each artifact type must have a unique name
- **Clarity**: Names should be descriptive (e.g., "openapi-spec" not "spec")
- **Consistency**: Follow kebab-case naming (lowercase with hyphens)
- **Documentation**: Every type must be fully documented
- **Schemas**: Every type should have a JSON schema (if applicable)
- **No Conflicts**: Check for naming conflicts before creating
## Example Workflow
User provides artifact_type_description.md:
```
# Name: optimization-report
# Purpose: API optimization recommendations
# Format: JSON
# Producers: api.optimize
# Consumers: api.implement
```
You:
1. Check if "optimization-report" exists → it doesn't
2. Generate schemas/optimization-report.json
3. Update ARTIFACT_STANDARDS.md with new section
4. Add to KNOWN_ARTIFACT_TYPES
5. Return summary of changes
Remember: You are the guardian of artifact standards. Be thorough, be consistent,
be the single source of truth.

526
agents/meta.artifact/meta_artifact.py Executable file

@@ -0,0 +1,526 @@
#!/usr/bin/env python3
"""
meta.artifact - The Artifact Standards Authority
THE single source of truth for all artifact type definitions in Betty Framework.
Manages schemas, documentation, and registry for all artifact types.
"""
import json
import yaml
import sys
import os
import re
from pathlib import Path
from typing import Dict, List, Any, Optional, Tuple
from datetime import datetime
# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)
class ArtifactAuthority:
"""The artifact standards authority - manages all artifact type definitions"""
def __init__(self, base_dir: str = "."):
"""Initialize with base directory"""
self.base_dir = Path(base_dir)
self.standards_doc = self.base_dir / "docs" / "ARTIFACT_STANDARDS.md"
self.schemas_dir = self.base_dir / "schemas"
self.artifact_define = self.base_dir / "skills" / "artifact.define" / "artifact_define.py"
def parse_description(self, description_path: str) -> Dict[str, Any]:
"""
Parse artifact type description from Markdown or JSON file
Args:
description_path: Path to artifact_type_description.md or .json
Returns:
Parsed description with all artifact metadata
"""
path = Path(description_path)
if not path.exists():
raise FileNotFoundError(f"Description not found: {description_path}")
# Handle JSON format
if path.suffix == ".json":
with open(path) as f:
return json.load(f)
# Handle Markdown format
with open(path) as f:
content = f.read()
# Parse Markdown sections
description = {
"name": "",
"purpose": "",
"format": "",
"file_pattern": "",
"content_type": "",
"schema_properties": {},
"required_fields": [],
"producers": [],
"consumers": [],
"examples": [],
"validation_rules": [],
"related_types": []
}
current_section = None
for line in content.split('\n'):
line = line.strip()
# Section headers
if line.startswith('# Name:'):
description["name"] = line.replace('# Name:', '').strip()
elif line.startswith('# Purpose:'):
current_section = "purpose"
elif line.startswith('# Format:'):
description["format"] = line.replace('# Format:', '').strip()
elif line.startswith('# File Pattern:'):
description["file_pattern"] = line.replace('# File Pattern:', '').strip()
elif line.startswith('# Content Type:'):
description["content_type"] = line.replace('# Content Type:', '').strip()
elif line.startswith('# Schema Properties:'):
current_section = "schema_properties"
elif line.startswith('# Required Fields:'):
current_section = "required_fields"
elif line.startswith('# Producers:'):
current_section = "producers"
elif line.startswith('# Consumers:'):
current_section = "consumers"
elif line.startswith('# Examples:'):
current_section = "examples"
elif line.startswith('# Validation Rules:'):
current_section = "validation_rules"
elif line.startswith('# Related Types:'):
current_section = "related_types"
elif line and not line.startswith('#'):
# Content for current section
if current_section == "purpose":
description["purpose"] += line + " "
elif current_section in ["producers", "consumers", "required_fields",
"validation_rules", "related_types"] and line.startswith('-'):
description[current_section].append(line[1:].strip())
elif current_section == "schema_properties" and line.startswith('-'):
# Parse property definitions like: "- optimizations (array): List of optimizations"
match = re.match(r'-\s+(\w+)\s+\((\w+)\):\s*(.+)', line)
if match:
prop_name, prop_type, prop_desc = match.groups()
description["schema_properties"][prop_name] = {
"type": prop_type,
"description": prop_desc
}
description["purpose"] = description["purpose"].strip()
# Infer content_type from format if not specified
if not description["content_type"] and description["format"]:
format_to_mime = {
"JSON": "application/json",
"YAML": "application/yaml",
"Markdown": "text/markdown",
"Python": "text/x-python",
"TypeScript": "text/x-typescript",
"Go": "text/x-go",
"Text": "text/plain"
}
description["content_type"] = format_to_mime.get(description["format"], "")
return description
def check_existence(self, artifact_name: str) -> Tuple[bool, Optional[str]]:
"""
Check if artifact type already exists
Args:
artifact_name: Name of artifact type to check
Returns:
Tuple of (exists: bool, location: Optional[str])
"""
# Check in ARTIFACT_STANDARDS.md
if self.standards_doc.exists():
with open(self.standards_doc) as f:
content = f.read()
if f"`{artifact_name}`" in content or f"({artifact_name})" in content:
return True, str(self.standards_doc)
# Check in schemas directory
schema_file = self.schemas_dir / f"{artifact_name}.json"
if schema_file.exists():
return True, str(schema_file)
# Check in KNOWN_ARTIFACT_TYPES
if self.artifact_define.exists():
with open(self.artifact_define) as f:
content = f.read()
if f'"{artifact_name}"' in content:
return True, str(self.artifact_define)
return False, None
def generate_json_schema(
self,
artifact_desc: Dict[str, Any]
) -> Dict[str, Any]:
"""
Generate JSON Schema from artifact description
Args:
artifact_desc: Parsed artifact description
Returns:
JSON Schema dictionary
"""
schema = {
"$schema": "http://json-schema.org/draft-07/schema#",
"title": artifact_desc["name"].replace("-", " ").title(),
"description": artifact_desc["purpose"],
"type": "object"
}
# Add required fields
if artifact_desc.get("required_fields"):
schema["required"] = artifact_desc["required_fields"]
# Add properties from schema_properties
if artifact_desc.get("schema_properties"):
schema["properties"] = {}
for prop_name, prop_info in artifact_desc["schema_properties"].items():
prop_schema = {}
# Map simple types to JSON Schema types
type_mapping = {
"string": "string",
"number": "number",
"integer": "integer",
"boolean": "boolean",
"array": "array",
"object": "object"
}
prop_type = prop_info.get("type", "string").lower()
prop_schema["type"] = type_mapping.get(prop_type, "string")
if "description" in prop_info:
prop_schema["description"] = prop_info["description"]
schema["properties"][prop_name] = prop_schema
# Add examples if provided
if artifact_desc.get("examples"):
schema["examples"] = artifact_desc["examples"]
return schema
def update_standards_doc(self, artifact_desc: Dict[str, Any]) -> None:
"""
Update ARTIFACT_STANDARDS.md with new artifact type
Args:
artifact_desc: Parsed artifact description
"""
if not self.standards_doc.exists():
raise FileNotFoundError(f"Standards document not found: {self.standards_doc}")
with open(self.standards_doc) as f:
content = f.read()
# Find the "## Artifact Types" section
artifact_types_match = re.search(r'## Artifact Types\n', content)
if not artifact_types_match:
raise ValueError("Could not find '## Artifact Types' section in standards doc")
# Find where to insert (before "## Artifact Metadata Schema" or at end)
insert_before_match = re.search(r'\n## Artifact Metadata Schema\n', content)
# Generate new section
artifact_name = artifact_desc["name"]
section_number = self._get_next_artifact_number(content)
new_section = f"""
### {section_number}. {artifact_name.replace('-', ' ').title()} (`{artifact_name}`)
**Description:** {artifact_desc["purpose"]}
**Convention:**
- File pattern: `{artifact_desc.get("file_pattern", f"*.{artifact_name}.{artifact_desc['format'].lower()}")}`
- Format: {artifact_desc["format"]}
"""
if artifact_desc.get("content_type"):
new_section += f"- Content type: {artifact_desc['content_type']}\n"
if artifact_desc.get("schema_properties"):
new_section += f"\n**Schema:** `schemas/{artifact_name}.json`\n"
if artifact_desc.get("producers"):
new_section += "\n**Produced by:**\n"
for producer in artifact_desc["producers"]:
new_section += f"- `{producer}`\n"
if artifact_desc.get("consumers"):
new_section += "\n**Consumed by:**\n"
for consumer in artifact_desc["consumers"]:
new_section += f"- `{consumer}`\n"
if artifact_desc.get("related_types"):
new_section += "\n**Related types:**\n"
for related in artifact_desc["related_types"]:
new_section += f"- `{related}`\n"
new_section += "\n---\n"
# Insert the new section
if insert_before_match:
insert_pos = insert_before_match.start()
else:
insert_pos = len(content)
updated_content = content[:insert_pos] + new_section + content[insert_pos:]
# Update Quick Reference table
updated_content = self._update_quick_reference(updated_content, artifact_desc)
# Write back
with open(self.standards_doc, 'w') as f:
f.write(updated_content)
def _get_next_artifact_number(self, standards_content: str) -> int:
"""Get the next artifact type number for documentation"""
# Find all artifact type sections like "### 1. ", "### 2. ", etc.
matches = re.findall(r'### (\d+)\. .+? \(`[\w-]+`\)', standards_content)
if matches:
return max(int(m) for m in matches) + 1
return 1
def _update_quick_reference(
self,
content: str,
artifact_desc: Dict[str, Any]
) -> str:
"""Update the Quick Reference table with new artifact type"""
# Find the Quick Reference table
table_match = re.search(
r'\| Artifact Type \| File Pattern \| Schema \| Producers \| Consumers \|.*?\n\|.*?\n((?:\|.*?\n)*)',
content,
re.DOTALL
)
if not table_match:
return content
artifact_name = artifact_desc["name"]
file_pattern = artifact_desc.get("file_pattern", f"*.{artifact_name}.{artifact_desc['format'].lower()}")
schema = f"schemas/{artifact_name}.json" if artifact_desc.get("schema_properties") else "-"
producers = ", ".join(artifact_desc.get("producers", [])) or "-"
consumers = ", ".join(artifact_desc.get("consumers", [])) or "-"
new_row = f"| {artifact_name} | {file_pattern} | {schema} | {producers} | {consumers} |\n"
# Insert before the end of the table
table_end = table_match.end()
return content[:table_end] + new_row + content[table_end:]
def update_registry(self, artifact_desc: Dict[str, Any]) -> None:
"""
Update KNOWN_ARTIFACT_TYPES in artifact_define.py
Args:
artifact_desc: Parsed artifact description
"""
if not self.artifact_define.exists():
raise FileNotFoundError(f"Artifact registry not found: {self.artifact_define}")
with open(self.artifact_define) as f:
content = f.read()
# Find KNOWN_ARTIFACT_TYPES dictionary
match = re.search(r'KNOWN_ARTIFACT_TYPES = \{', content)
if not match:
raise ValueError("Could not find KNOWN_ARTIFACT_TYPES in artifact_define.py")
artifact_name = artifact_desc["name"]
# Generate new entry
entry = f' "{artifact_name}": {{\n'
if artifact_desc.get("schema_properties"):
entry += f' "schema": "schemas/{artifact_name}.json",\n'
file_pattern = artifact_desc.get("file_pattern")
if file_pattern:
entry += f' "file_pattern": "{file_pattern}",\n'
if artifact_desc.get("content_type"):
entry += f' "content_type": "{artifact_desc["content_type"]}",\n'
entry += f' "description": "{artifact_desc["purpose"]}"\n'
entry += ' },\n'
        # Find the end of the KNOWN_ARTIFACT_TYPES dictionary: the first
        # top-level closing brace after the dictionary opens
        closing_brace_match = re.search(r'\n\}', content[match.end():])
        if not closing_brace_match:
            raise ValueError("Could not find end of KNOWN_ARTIFACT_TYPES in artifact_define.py")
        # Insert the new entry on a fresh line just before that closing brace
        insert_pos = match.end() + closing_brace_match.start() + 1
        updated_content = content[:insert_pos] + entry + content[insert_pos:]
# Write back
with open(self.artifact_define, 'w') as f:
f.write(updated_content)
def create_artifact_type(
self,
description_path: str,
force: bool = False
) -> Dict[str, Any]:
"""
Create a new artifact type from description
Args:
description_path: Path to artifact description file
force: Force creation even if type exists
Returns:
Summary of created files and changes
"""
# Parse description
artifact_desc = self.parse_description(description_path)
artifact_name = artifact_desc["name"]
# Validate name format (kebab-case)
if not re.match(r'^[a-z0-9-]+$', artifact_name):
raise ValueError(
f"Artifact name must be kebab-case (lowercase with hyphens): {artifact_name}"
)
# Check existence
exists, location = self.check_existence(artifact_name)
if exists and not force:
raise ValueError(
f"Artifact type '{artifact_name}' already exists at: {location}\n"
f"Use --force to overwrite."
)
result = {
"artifact_name": artifact_name,
"created_files": [],
"updated_files": [],
"errors": []
}
# Generate and save JSON schema (if applicable)
if artifact_desc.get("schema_properties") or artifact_desc["format"] in ["JSON", "YAML"]:
schema = self.generate_json_schema(artifact_desc)
schema_file = self.schemas_dir / f"{artifact_name}.json"
self.schemas_dir.mkdir(parents=True, exist_ok=True)
with open(schema_file, 'w') as f:
json.dump(schema, f, indent=2)
result["created_files"].append(str(schema_file))
# Update ARTIFACT_STANDARDS.md
try:
self.update_standards_doc(artifact_desc)
result["updated_files"].append(str(self.standards_doc))
except Exception as e:
result["errors"].append(f"Failed to update standards doc: {e}")
# Update artifact registry
try:
self.update_registry(artifact_desc)
result["updated_files"].append(str(self.artifact_define))
except Exception as e:
result["errors"].append(f"Failed to update registry: {e}")
return result
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.artifact - The Artifact Standards Authority"
)
subparsers = parser.add_subparsers(dest='command', help='Commands')
# Create command
create_parser = subparsers.add_parser('create', help='Create new artifact type')
create_parser.add_argument(
"description",
help="Path to artifact type description file (.md or .json)"
)
create_parser.add_argument(
"--force",
action="store_true",
help="Force creation even if type exists"
)
# Check command
check_parser = subparsers.add_parser('check', help='Check if artifact type exists')
check_parser.add_argument("name", help="Artifact type name")
args = parser.parse_args()
if not args.command:
parser.print_help()
sys.exit(1)
authority = ArtifactAuthority()
if args.command == 'create':
print(f"🏛️ meta.artifact - Creating artifact type from {args.description}")
try:
result = authority.create_artifact_type(args.description, force=args.force)
print(f"\n✨ Artifact type '{result['artifact_name']}' created successfully!\n")
if result["created_files"]:
print("📄 Created files:")
for file in result["created_files"]:
print(f" - {file}")
if result["updated_files"]:
print("\n📝 Updated files:")
for file in result["updated_files"]:
print(f" - {file}")
if result["errors"]:
print("\n⚠️ Warnings:")
for error in result["errors"]:
print(f" - {error}")
print(f"\n✅ Artifact type '{result['artifact_name']}' is now registered")
print(" All agents and skills can now use this artifact type.")
except Exception as e:
print(f"\n❌ Error creating artifact type: {e}", file=sys.stderr)
sys.exit(1)
elif args.command == 'check':
exists, location = authority.check_existence(args.name)
if exists:
print(f"✅ Artifact type '{args.name}' exists")
print(f" Location: {location}")
else:
print(f"❌ Artifact type '{args.name}' does not exist")
print(f" Use 'meta.artifact create' to define it")
sys.exit(1)
if __name__ == "__main__":
main()

457
agents/meta.command/README.md Normal file

@@ -0,0 +1,457 @@
# meta.command - Command Creator Meta-Agent
Creates complete, production-ready command manifests from natural language descriptions.
## Purpose
The `meta.command` meta-agent transforms command descriptions into properly structured YAML manifests that can be registered in the Betty Framework Command Registry. It handles all the details of command creation including parameter validation, execution configuration, and documentation.
## What It Does
- ✅ Parses natural language command descriptions (Markdown or JSON)
- ✅ Generates complete command manifests in YAML format
- ✅ Validates command structure and execution types
- ✅ Supports all three execution types: agent, skill, workflow
- ✅ Creates proper parameter definitions with type validation
- ✅ Prepares commands for registration via `command.define` skill
- ✅ Supports traceability tracking
## Usage
```bash
python3 agents/meta.command/meta_command.py <description_file>
```
### With Traceability
```bash
python3 agents/meta.command/meta_command.py examples/api_validate_command.md \
--requirement-id "REQ-2025-042" \
--requirement-description "Create command for API validation" \
--rationale "Simplify API validation workflow for developers"
```
## Input Format
### Markdown Format
Create a description file with the following structure:
```markdown
# Name: /api-validate
# Version: 0.1.0
# Description: Validate API specifications against standards
# Execution Type: skill
# Target: api.validate
# Parameters:
- spec_file: string (required) - Path to API specification file
- format: enum (optional, default=openapi, values=[openapi,asyncapi,grpc]) - API specification format
- strict: boolean (optional, default=true) - Enable strict validation mode
# Execution Context:
- format: json
- timeout: 300
# Status: active
# Tags: api, validation, quality
```
### JSON Format
Alternatively, use JSON:
```json
{
"name": "/api-validate",
"version": "0.1.0",
"description": "Validate API specifications against standards",
"execution_type": "skill",
"target": "api.validate",
"parameters": [
{
"name": "spec_file",
"type": "string",
"required": true,
"description": "Path to API specification file"
},
{
"name": "format",
"type": "enum",
"values": ["openapi", "asyncapi", "grpc"],
"default": "openapi",
"description": "API specification format"
}
],
"execution_context": {
"format": "json",
"timeout": 300
},
"status": "active",
"tags": ["api", "validation", "quality"]
}
```
## Command Execution Types
### 1. Agent Execution
Use for complex, context-aware tasks requiring reasoning:
```markdown
# Name: /api-design
# Execution Type: agent
# Target: api.architect
# Description: Design a complete API architecture
```
**When to use:**
- Tasks requiring multi-step reasoning
- Context-aware decision making
- Complex analysis or design work
### 2. Skill Execution
Use for atomic, deterministic operations:
```markdown
# Name: /api-validate
# Execution Type: skill
# Target: api.validate
# Description: Validate API specifications
```
**When to use:**
- Direct, predictable operations
- Fast, single-purpose tasks
- Composable building blocks
### 3. Workflow Execution
Use for orchestrated multi-step processes:
```markdown
# Name: /api-pipeline
# Execution Type: workflow
# Target: workflows/api-pipeline.yaml
# Description: Execute full API development pipeline
```
**When to use:**
- Multi-agent/skill coordination
- Sequential or parallel task execution
- Complex business processes
## Parameter Types
### Supported Types
| Type | Description | Example |
|------|-------------|---------|
| `string` | Text values | `"api-spec.yaml"` |
| `integer` | Whole numbers | `42` |
| `boolean` | true/false | `true` |
| `enum` | Fixed set of values | `["openapi", "asyncapi"]` |
| `array` | Lists of values | `["tag1", "tag2"]` |
| `object` | Structured data | `{"key": "value"}` |
### Parameter Options
- `required: true/false` - Whether parameter is mandatory
- `default: value` - Default value if not provided
- `values: [...]` - Allowed values (for enum type)
- `description: "..."` - What the parameter does
## Examples
### Example 1: Simple Validation Command
**Input:** `examples/api-validate-cmd.md`
```markdown
# Name: /api-validate
# Description: Validate API specification files
# Execution Type: skill
# Target: api.validate
# Parameters:
- spec_file: string (required) - Path to specification file
- format: enum (optional, default=openapi, values=[openapi,asyncapi]) - Spec format
# Status: active
# Tags: api, validation
```
**Output:** `commands/api-validate.yaml`
```yaml
name: /api-validate
version: 0.1.0
description: Validate API specification files
parameters:
- name: spec_file
type: string
required: true
description: Path to specification file
- name: format
type: enum
values:
- openapi
- asyncapi
default: openapi
description: Spec format
execution:
type: skill
target: api.validate
status: active
tags:
- api
- validation
```
### Example 2: Agent-Based Design Command
**Input:** `examples/api-design-cmd.md`
```markdown
# Name: /api-design
# Description: Design a complete API architecture
# Execution Type: agent
# Target: api.architect
# Parameters:
- requirements: string (required) - Path to requirements document
- style: enum (optional, default=rest, values=[rest,graphql,grpc]) - API style
# Execution Context:
- reasoning_mode: iterative
- max_iterations: 10
# Status: active
# Tags: api, design, architecture
```
**Output:** `commands/api-design.yaml`
```yaml
name: /api-design
version: 0.1.0
description: Design a complete API architecture
parameters:
- name: requirements
type: string
required: true
description: Path to requirements document
- name: style
type: enum
values:
- rest
- graphql
- grpc
default: rest
description: API style
execution:
type: agent
target: api.architect
context:
reasoning_mode: iterative
max_iterations: 10
status: active
tags:
- api
- design
- architecture
```
### Example 3: Workflow Command
**Input:** `examples/deploy-cmd.md`
```markdown
# Name: /deploy
# Description: Deploy application to specified environment
# Execution Type: workflow
# Target: workflows/deploy-pipeline.yaml
# Parameters:
- environment: enum (required, values=[dev,staging,production]) - Target environment
- version: string (required) - Version to deploy
- skip_tests: boolean (optional, default=false) - Skip test execution
# Status: draft
# Tags: deployment, devops
```
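**Expected output** (a sketch, following the same generation rules as Examples 1 and 2): `commands/deploy.yaml`
```yaml
name: /deploy
version: 0.1.0
description: Deploy application to specified environment
parameters:
- name: environment
  type: enum
  required: true
  values:
  - dev
  - staging
  - production
  description: Target environment
- name: version
  type: string
  required: true
  description: Version to deploy
- name: skip_tests
  type: boolean
  default: false
  description: Skip test execution
execution:
  type: workflow
  target: workflows/deploy-pipeline.yaml
status: draft
tags:
- deployment
- devops
```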
## Output
The meta-agent creates:
1. **Command Manifest** - Complete YAML file in `commands/` directory
2. **Console Output** - Summary of created command
3. **Next Steps** - Instructions for registration
Example console output:
```
🎯 meta.command - Creating command from examples/api-validate-cmd.md
✨ Command '/api-validate' created successfully!
📄 Created file:
- commands/api-validate.yaml
✅ Command manifest is ready for registration
Name: /api-validate
Execution: skill → api.validate
Status: active
📝 Next steps:
1. Review the manifest: cat commands/api-validate.yaml
2. Register command: python3 skills/command.define/command_define.py commands/api-validate.yaml
3. Verify in registry: cat registry/commands.json
```
## Integration with command.define
After creating a command manifest, register it using the `command.define` skill:
```bash
# Register the command
python3 skills/command.define/command_define.py commands/api-validate.yaml
# Verify registration
cat registry/commands.json
```
The `command.define` skill will:
- Validate the manifest structure
- Check that the execution target exists
- Add the command to the Command Registry
- Make the command available for use
## Artifact Flow
```
┌──────────────────────────┐
│   Command Description    │
│   (Markdown or JSON)     │
└──────────┬───────────────┘
           │ consumes
           ▼
    ┌──────────────┐
    │ meta.command │
    └──────┬───────┘
           │ produces
           ▼
┌──────────────────────────┐
│  Command Manifest (YAML) │
│     commands/*.yaml      │
└──────────┬───────────────┘
           │
           ▼
    ┌──────────────┐
    │command.define│
    │   (skill)    │
    └──────┬───────┘
           │
           ▼
┌──────────────────────────┐
│    Commands Registry     │
│  registry/commands.json  │
└──────────────────────────┘
```
## Command Naming Conventions
- ✅ Must start with `/` (e.g., `/api-validate`)
- ✅ Use kebab-case for multi-word commands (e.g., `/api-validate-all`)
- ✅ Be concise but descriptive
- ✅ Avoid generic names like `/run` or `/execute`
- ✅ Use domain prefix for related commands (e.g., `/api-*`, `/db-*`)
## Validation
The meta-agent validates:
- ✅ Required fields present (name, description, execution_type, target)
- ✅ Valid execution type (agent, skill, workflow)
- ✅ Command name starts with `/`
- ✅ Parameter types are valid
- ✅ Enum parameters have values defined
- ✅ Version follows semantic versioning
- ✅ Status is valid (draft, active, deprecated, archived)
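These checks can also be exercised programmatically. A minimal sketch (the description path is hypothetical; run from the repository root, since the dotted directory name `meta.command` is not importable as a package):
```python
import importlib.util
from pathlib import Path

# Load CommandCreator from its file path (assumes CWD is the repo root)
path = Path("agents/meta.command/meta_command.py")
spec = importlib.util.spec_from_file_location("meta_command", path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

creator = module.CommandCreator()
try:
    cmd = creator.parse_description("examples/api-validate-cmd.md")  # hypothetical file
    print(f"OK: {cmd['name']} -> {cmd['execution_type']}:{cmd['target']}")
except (FileNotFoundError, ValueError) as err:
    # Missing fields and invalid execution types surface as ValueError
    print(f"Validation failed: {err}")
```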
## Error Handling
Common errors and solutions:
**Missing required fields:**
```
❌ Error: Missing required fields: execution_type, target
```
→ Add all required fields to your description
**Invalid execution type:**
```
❌ Error: Invalid execution type: service. Must be one of: agent, skill, workflow
```
→ Use only valid execution types
**Invalid parameter type:**
```
❌ Error: Invalid parameter type: float
```
→ Use supported parameter types
## Best Practices
1. **Clear Descriptions** - Write concise, actionable command descriptions
2. **Proper Parameters** - Define all parameters with types and validation
3. **Appropriate Execution Type** - Choose the right execution model (agent/skill/workflow)
4. **Meaningful Tags** - Add relevant tags for discoverability
5. **Version Management** - Start with 0.1.0, increment appropriately
6. **Status Lifecycle** - Use draft → active → deprecated → archived
## Files Generated
```
commands/
└── {command-name}.yaml # Command manifest
```
## Integration with Meta-Agents
The `meta.command` agent works alongside:
- **meta.skill** - Create skills that commands can execute
- **meta.agent** - Create agents that commands can delegate to
- **meta.artifact** - Define artifact types for command I/O
- **meta.compatibility** - Find compatible agents for command workflows
## Traceability
Track command creation with requirement metadata:
```bash
python3 agents/meta.command/meta_command.py examples/api-validate-cmd.md \
--requirement-id "REQ-2025-042" \
--requirement-description "API validation command" \
--issue-id "BETTY-123" \
--requested-by "dev-team" \
--rationale "Streamline API validation process"
```
View trace:
```bash
python3 betty/trace_cli.py show command.api_validate
```
## See Also
- **command.define skill** - Register command manifests
- **meta.skill** - Create skills for command execution
- **meta.agent** - Create agents for command delegation
- **Command Registry** - `registry/commands.json`
- **Command Infrastructure** - `docs/COMMAND_HOOK_INFRASTRUCTURE.md`

View File

@@ -0,0 +1,211 @@
name: meta.command
version: 0.1.0
description: |
Creates complete command manifests from natural language descriptions.
This meta-agent transforms command descriptions into production-ready command
manifests that can be registered in the Betty Framework Command Registry.
Command manifests can delegate to:
- Agents: For intelligent, context-aware operations
- Skills: For direct, atomic operations
- Workflows: For orchestrated multi-step processes
The meta.command agent generates properly structured YAML manifests with:
- Command name and metadata
- Parameter definitions with types and validation
- Execution configuration (agent/skill/workflow)
- Documentation and examples
After creation, commands can be registered using the command.define skill.
artifact_metadata:
consumes:
- type: command-description
file_pattern: "**/command_description.md"
content_type: "text/markdown"
description: "Natural language description of command requirements"
schema: "schemas/command-description.json"
produces:
- type: command-manifest
file_pattern: "commands/*.yaml"
content_type: "application/yaml"
description: "Complete command manifest ready for registration"
schema: "schemas/command-manifest.json"
- type: command-documentation
file_pattern: "commands/*/README.md"
content_type: "text/markdown"
description: "Command documentation with usage examples"
status: draft
reasoning_mode: iterative
capabilities:
- Transform natural language specifications into validated command manifests
- Recommend appropriate execution targets across agents, skills, and workflows
- Produce documentation and registration-ready assets for new commands
skills_available:
- command.define # Register command in registry
- artifact.define # Generate artifact metadata
permissions:
- filesystem:read
- filesystem:write
system_prompt: |
You are meta.command, the command creator for Betty Framework.
Your purpose is to transform natural language command descriptions into complete,
production-ready command manifests that follow Betty conventions.
## Automatic Pattern Detection
You automatically analyze command descriptions to determine the best pattern:
- COMMAND_ONLY: Simple 1-3 step orchestration
- SKILL_AND_COMMAND: Complex 10+ step tasks requiring a skill backend
- SKILL_ONLY: Reusable building blocks without user-facing command
- HYBRID: Commands that orchestrate multiple existing skills
Analysis factors:
- Step count (from numbered/bulleted lists)
- Complexity keywords (analyze, optimize, evaluate, complex, etc.)
- Autonomy requirements (intelligent, adaptive, sophisticated, etc.)
- Reusability indicators (composable, shared, library, etc.)
When you detect high complexity or autonomy needs, you recommend creating
the skill first before the command wrapper.
## Your Workflow
1. **Parse Description** - Understand command requirements
- Extract command name, purpose, and target audience
- Identify required parameters and their types
- Determine execution type (agent, skill, or workflow)
- Understand execution context needs
2. **Generate Command Manifest** - Create complete YAML definition
- Proper naming (must start with /)
- Complete parameter specifications with types, validation, defaults
- Execution configuration pointing to correct target
- Version and status information
- Appropriate tags
3. **Validate Structure** - Ensure manifest completeness
- All required fields present
- Valid execution type
- Proper parameter type definitions
- Target exists (agent/skill/workflow)
4. **Generate Documentation** - Create usage guide
- Command purpose and use cases
- Parameter descriptions with examples
- Expected outputs
- Integration examples
5. **Ready for Registration** - Prepare for command.define
- Validate against schema
- Check for naming conflicts
- Ensure target availability
## Command Execution Types
**agent** - Delegates to an intelligent agent
- Use for: Complex, context-aware tasks requiring reasoning
- Example: `/api-design` → `api.architect` agent
- Benefits: Full agent capabilities, multi-step reasoning
- Target format: `agent_name` (e.g., "api.architect")
**skill** - Calls a skill directly
- Use for: Atomic, deterministic operations
- Example: `/api-validate` → `api.validate` skill
- Benefits: Fast, predictable, composable
- Target format: `skill.name` (e.g., "api.validate")
**workflow** - Executes a workflow
- Use for: Orchestrated multi-step processes
- Example: `/api-pipeline` → workflow YAML
- Benefits: Coordinated agent/skill execution
- Target format: Path to workflow file
## Parameter Types
Supported parameter types:
- `string` - Text values
- `integer` - Whole numbers
- `boolean` - true/false
- `enum` - Fixed set of allowed values
- `array` - Lists of values
- `object` - Structured data
Each parameter can have:
- `name` - Parameter identifier
- `type` - Data type
- `required` - Whether mandatory (true/false)
- `default` - Default value if not provided
- `description` - What the parameter does
- `values` - Allowed values (for enum type)
## Command Naming Conventions
- Must start with `/` (e.g., `/api-validate`)
- Use kebab-case for multi-word commands
- Should be concise but descriptive
- Avoid generic names like `/run` or `/execute`
## Command Status
- `draft` - Under development, not ready for production
- `active` - Production-ready and available
- `deprecated` - Still works but discouraged
- `archived` - No longer available
## Structure Example
```yaml
name: /api-validate
version: 0.1.0
description: "Validate API specifications against standards"
parameters:
- name: spec_file
type: string
required: true
description: "Path to API specification file"
- name: format
type: enum
values: [openapi, asyncapi, grpc]
default: openapi
description: "API specification format"
execution:
type: skill
target: api.validate
context:
format: json
status: active
tags: [api, validation, quality]
```
## Quality Standards
- ✅ Follows Betty command conventions
- ✅ Proper parameter definitions with validation
- ✅ Correct execution type and target
- ✅ Clear, actionable descriptions
- ✅ Appropriate status and tags
- ✅ Ready for command.define registration
## Integration with command.define
After generating the command manifest, users should:
1. Review the generated YAML file
2. Test the command locally
3. Register using: `python3 skills/command.define/command_define.py <manifest.yaml>`
4. Verify registration in `registry/commands.json`
Remember: You're creating user-facing commands that make Betty's capabilities
accessible. Make commands intuitive, well-documented, and easy to use.

View File

@@ -0,0 +1,761 @@
#!/usr/bin/env python3
"""
meta.command - Command Creator Meta-Agent
Generates command manifests from natural language descriptions.
Usage:
python3 agents/meta.command/meta_command.py <command_description_file>
Examples:
python3 agents/meta.command/meta_command.py examples/api_validate_command.md
python3 agents/meta.command/meta_command.py examples/deploy_command.json
"""
import os
import sys
import json
import yaml
import re
from pathlib import Path
from typing import Dict, List, Any, Optional
# Add parent directory to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))
from betty.config import (
BASE_DIR,
COMMANDS_REGISTRY_FILE,
)
from betty.enums import CommandExecutionType, CommandStatus
from betty.logging_utils import setup_logger
from betty.traceability import get_tracer, RequirementInfo
logger = setup_logger(__name__)
# Import artifact validation from artifact.define skill
try:
import importlib.util
artifact_define_path = Path(__file__).parent.parent.parent / "skills" / "artifact.define" / "artifact_define.py"
spec = importlib.util.spec_from_file_location("artifact_define", artifact_define_path)
artifact_define_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(artifact_define_module)
validate_artifact_type = artifact_define_module.validate_artifact_type
KNOWN_ARTIFACT_TYPES = artifact_define_module.KNOWN_ARTIFACT_TYPES
ARTIFACT_VALIDATION_AVAILABLE = True
except Exception as e:
    logger.warning(f"Artifact validation unavailable, skipping checks: {e}")
    ARTIFACT_VALIDATION_AVAILABLE = False
class CommandCreator:
"""Creates command manifests from descriptions"""
VALID_EXECUTION_TYPES = ["agent", "skill", "workflow"]
VALID_STATUSES = ["draft", "active", "deprecated", "archived"]
VALID_PARAMETER_TYPES = ["string", "integer", "boolean", "enum", "array", "object"]
# Keywords for complexity analysis
AUTONOMY_KEYWORDS = [
"analyze", "optimize", "decide", "evaluate", "assess",
"complex", "multi-step", "autonomous", "intelligent",
"adaptive", "sophisticated", "advanced", "comprehensive"
]
REUSABILITY_KEYWORDS = [
"reusable", "composable", "building block", "library",
"utility", "helper", "shared", "common", "core"
]
def __init__(self, base_dir: str = BASE_DIR):
"""Initialize command creator"""
self.base_dir = Path(base_dir)
self.commands_dir = self.base_dir / "commands"
def parse_description(self, description_path: str) -> Dict[str, Any]:
"""
Parse command description from Markdown or JSON file
Args:
description_path: Path to description file
Returns:
Dict with command configuration
"""
path = Path(description_path)
if not path.exists():
raise FileNotFoundError(f"Description file not found: {description_path}")
# Read file
content = path.read_text()
# Try JSON first
if path.suffix == ".json":
return json.loads(content)
# Parse Markdown format
cmd_desc = {}
# Extract fields using regex patterns
patterns = {
"name": r"#\s*Name:\s*(.+)",
"version": r"#\s*Version:\s*(.+)",
"description": r"#\s*Description:\s*(.+)",
"execution_type": r"#\s*Execution\s*Type:\s*(.+)",
"target": r"#\s*Target:\s*(.+)",
"status": r"#\s*Status:\s*(.+)",
}
for field, pattern in patterns.items():
match = re.search(pattern, content, re.IGNORECASE)
if match:
value = match.group(1).strip()
cmd_desc[field] = value
# Parse parameters section
params_section = re.search(
r"#\s*Parameters:\s*\n(.*?)(?=\n#|\Z)",
content,
re.DOTALL | re.IGNORECASE
)
if params_section:
cmd_desc["parameters"] = self._parse_parameters(params_section.group(1))
# Parse tags
tags_match = re.search(r"#\s*Tags:\s*(.+)", content, re.IGNORECASE)
if tags_match:
tags_str = tags_match.group(1).strip()
# Parse comma-separated or bracket-enclosed tags
if tags_str.startswith("[") and tags_str.endswith("]"):
tags_str = tags_str[1:-1]
cmd_desc["tags"] = [t.strip() for t in tags_str.split(",")]
# Parse execution context
context_section = re.search(
r"#\s*Execution\s*Context:\s*\n(.*?)(?=\n#|\Z)",
content,
re.DOTALL | re.IGNORECASE
)
if context_section:
cmd_desc["execution_context"] = self._parse_context(context_section.group(1))
# Parse artifact metadata sections
produces_section = re.search(
r"#\s*Produces\s*Artifacts:\s*\n(.*?)(?=\n#|\Z)",
content,
re.DOTALL | re.IGNORECASE
)
if produces_section:
cmd_desc["artifact_produces"] = self._parse_artifact_list(produces_section.group(1))
consumes_section = re.search(
r"#\s*Consumes\s*Artifacts:\s*\n(.*?)(?=\n#|\Z)",
content,
re.DOTALL | re.IGNORECASE
)
if consumes_section:
cmd_desc["artifact_consumes"] = self._parse_artifact_list(consumes_section.group(1))
# Validate required fields
required = ["name", "description", "execution_type", "target"]
missing = [f for f in required if f not in cmd_desc]
if missing:
raise ValueError(f"Missing required fields: {', '.join(missing)}")
        # Validate and normalize execution type (case-insensitive)
        if cmd_desc["execution_type"].lower() not in self.VALID_EXECUTION_TYPES:
            raise ValueError(
                f"Invalid execution type: {cmd_desc['execution_type']}. "
                f"Must be one of: {', '.join(self.VALID_EXECUTION_TYPES)}"
            )
        cmd_desc["execution_type"] = cmd_desc["execution_type"].lower()
# Ensure command name starts with /
if not cmd_desc["name"].startswith("/"):
cmd_desc["name"] = "/" + cmd_desc["name"]
# Set defaults
if "version" not in cmd_desc:
cmd_desc["version"] = "0.1.0"
if "status" not in cmd_desc:
cmd_desc["status"] = "draft"
if "parameters" not in cmd_desc:
cmd_desc["parameters"] = []
return cmd_desc
def _parse_parameters(self, params_text: str) -> List[Dict[str, Any]]:
"""Parse parameters from markdown text"""
parameters = []
# Match parameter blocks
# Format: - name: type (required/optional) - description
param_pattern = r"-\s+(\w+):\s+(\w+)(?:\s+\(([^)]+)\))?\s+-\s+(.+?)(?=\n-|\n#|\Z)"
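        # Example of a line this pattern matches (illustrative):
        #   - spec_file: string (required) - Path to API specification file
        # -> groups: ("spec_file", "string", "required", "Path to API specification file")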
matches = re.finditer(param_pattern, params_text, re.DOTALL)
for match in matches:
name, param_type, modifiers, description = match.groups()
param = {
"name": name.strip(),
"type": param_type.strip(),
"description": description.strip()
}
# Parse modifiers (required, optional, default=value)
if modifiers:
modifiers = modifiers.lower()
param["required"] = "required" in modifiers
# Extract default value
default_match = re.search(r"default[=:]\s*([^,\s]+)", modifiers)
if default_match:
default_val = default_match.group(1)
# Convert types
if param_type == "integer":
default_val = int(default_val)
elif param_type == "boolean":
default_val = default_val.lower() in ("true", "yes", "1")
param["default"] = default_val
# Extract enum values
values_match = re.search(r"values[=:]\s*\[([^\]]+)\]", modifiers)
if values_match:
param["values"] = [v.strip() for v in values_match.group(1).split(",")]
parameters.append(param)
return parameters
def _parse_context(self, context_text: str) -> Dict[str, Any]:
"""Parse execution context from markdown text"""
context = {}
# Simple key: value parsing
for line in context_text.split("\n"):
line = line.strip()
if not line or line.startswith("#"):
continue
match = re.match(r"-\s*(\w+):\s*(.+)", line)
if match:
key, value = match.groups()
# Try to parse as JSON for complex values
try:
context[key] = json.loads(value)
except (json.JSONDecodeError, ValueError):
context[key] = value.strip()
return context
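        # Example (illustrative): "- format: json" and "- timeout: 300" parse to
        # {"format": "json", "timeout": 300}; json.loads() turns "300" into an
        # int, while bare words like "json" fail to parse and stay strings.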
def _parse_artifact_list(self, artifact_text: str) -> List[str]:
"""Parse artifact list from markdown text"""
artifacts = []
for line in artifact_text.split("\n"):
line = line.strip()
if not line or line.startswith("#"):
continue
# Match lines starting with - or *
match = re.match(r"[-*]\s*`?([a-z0-9-]+)`?", line)
if match:
artifacts.append(match.group(1))
return artifacts
def analyze_complexity(self, cmd_desc: Dict[str, Any], full_content: str = "") -> Dict[str, Any]:
"""
Analyze command complexity and recommend pattern
Args:
cmd_desc: Parsed command description
full_content: Full description file content for analysis
Returns:
Dict with complexity analysis and pattern recommendation
"""
analysis = {
"step_count": 0,
"complexity": "low",
"autonomy_level": "none",
"reusability": "low",
"recommended_pattern": "COMMAND_ONLY",
"should_create_skill": False,
"reasoning": []
}
# Count steps from description
# Look for numbered lists, bullet points, or explicit step mentions
step_patterns = [
r"^\s*\d+\.\s+", # Numbered lists
r"^\s*[-*]\s+", # Bullet points
r"\bstep\s+\d+\b", # Explicit "step N"
]
lines = full_content.split("\n")
step_count = 0
for line in lines:
for pattern in step_patterns:
if re.search(pattern, line, re.IGNORECASE):
step_count += 1
break
analysis["step_count"] = step_count
# Analyze content for keywords
content_lower = full_content.lower()
desc_lower = cmd_desc.get("description", "").lower()
combined = content_lower + " " + desc_lower
# Check autonomy keywords
autonomy_matches = [kw for kw in self.AUTONOMY_KEYWORDS if kw in combined]
if len(autonomy_matches) >= 3:
analysis["autonomy_level"] = "high"
elif len(autonomy_matches) >= 1:
analysis["autonomy_level"] = "medium"
else:
analysis["autonomy_level"] = "low"
# Check reusability keywords
reusability_matches = [kw for kw in self.REUSABILITY_KEYWORDS if kw in combined]
if len(reusability_matches) >= 2:
analysis["reusability"] = "high"
elif len(reusability_matches) >= 1:
analysis["reusability"] = "medium"
# Determine complexity
if step_count >= 10:
analysis["complexity"] = "high"
elif step_count >= 4:
analysis["complexity"] = "medium"
else:
analysis["complexity"] = "low"
# Estimate lines of logic (rough heuristic)
instruction_lines = sum(1 for line in lines if line.strip() and not line.strip().startswith("#"))
if instruction_lines > 50:
analysis["complexity"] = "high"
# Decide pattern based on decision tree
if step_count >= 10 or analysis["complexity"] == "high":
analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
analysis["should_create_skill"] = True
analysis["reasoning"].append(f"High complexity: {step_count} steps detected")
elif analysis["autonomy_level"] == "high":
analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
analysis["should_create_skill"] = True
analysis["reasoning"].append(f"High autonomy: matched keywords {autonomy_matches[:3]}")
elif analysis["reusability"] == "high":
if step_count <= 3:
analysis["recommended_pattern"] = "SKILL_ONLY"
analysis["should_create_skill"] = True
analysis["reasoning"].append("High reusability but low complexity: create skill only")
else:
analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
analysis["should_create_skill"] = True
analysis["reasoning"].append(f"High reusability with {step_count} steps: create both")
elif step_count >= 4 and step_count <= 9:
# Medium complexity - could go either way
if analysis["autonomy_level"] == "medium":
analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
analysis["should_create_skill"] = True
analysis["reasoning"].append(f"Medium complexity ({step_count} steps) with some autonomy needs")
else:
analysis["recommended_pattern"] = "COMMAND_ONLY"
analysis["reasoning"].append(f"Medium complexity ({step_count} steps) but simple logic: inline is fine")
else:
# Low complexity - command only
analysis["recommended_pattern"] = "COMMAND_ONLY"
analysis["reasoning"].append(f"Low complexity ({step_count} steps): inline orchestration is sufficient")
# Check if execution type already specifies skill
if cmd_desc.get("execution_type") == "skill":
analysis["recommended_pattern"] = "SKILL_AND_COMMAND"
analysis["should_create_skill"] = True
analysis["reasoning"].append("Execution type explicitly set to 'skill'")
return analysis
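        # Worked example (illustrative): a description with 12 numbered steps
        # scores complexity "high", so the recommended pattern becomes
        # SKILL_AND_COMMAND with should_create_skill=True.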
def generate_command_manifest(self, cmd_desc: Dict[str, Any]) -> str:
"""
Generate command manifest YAML
Args:
cmd_desc: Parsed command description
Returns:
YAML string
"""
manifest = {
"name": cmd_desc["name"],
"version": cmd_desc["version"],
"description": cmd_desc["description"]
}
# Add parameters if present
if cmd_desc.get("parameters"):
manifest["parameters"] = cmd_desc["parameters"]
# Add execution configuration
execution = {
"type": cmd_desc["execution_type"],
"target": cmd_desc["target"]
}
if cmd_desc.get("execution_context"):
execution["context"] = cmd_desc["execution_context"]
manifest["execution"] = execution
# Add status
manifest["status"] = cmd_desc.get("status", "draft")
# Add tags if present
if cmd_desc.get("tags"):
manifest["tags"] = cmd_desc["tags"]
# Add artifact metadata if present
if cmd_desc.get("artifact_produces") or cmd_desc.get("artifact_consumes"):
artifact_metadata = {}
if cmd_desc.get("artifact_produces"):
artifact_metadata["produces"] = [
{"type": art_type} for art_type in cmd_desc["artifact_produces"]
]
if cmd_desc.get("artifact_consumes"):
artifact_metadata["consumes"] = [
{"type": art_type, "required": True}
for art_type in cmd_desc["artifact_consumes"]
]
manifest["artifact_metadata"] = artifact_metadata
return yaml.dump(manifest, default_flow_style=False, sort_keys=False)
def validate_artifacts(self, cmd_desc: Dict[str, Any]) -> List[str]:
"""
Validate that artifact types exist in the known registry.
Args:
cmd_desc: Parsed command description
Returns:
List of warning messages
"""
warnings = []
if not ARTIFACT_VALIDATION_AVAILABLE:
warnings.append(
"Artifact validation skipped: artifact.define skill not available"
)
return warnings
# Validate produced artifacts
for artifact_type in cmd_desc.get("artifact_produces", []):
is_valid, warning = validate_artifact_type(artifact_type)
if not is_valid and warning:
warnings.append(f"Produces: {warning}")
# Validate consumed artifacts
for artifact_type in cmd_desc.get("artifact_consumes", []):
is_valid, warning = validate_artifact_type(artifact_type)
if not is_valid and warning:
warnings.append(f"Consumes: {warning}")
return warnings
def validate_target(self, cmd_desc: Dict[str, Any]) -> List[str]:
"""
Validate that the target skill or agent exists.
Args:
cmd_desc: Parsed command description
Returns:
List of warning messages
"""
warnings = []
execution_type = cmd_desc.get("execution_type", "").lower()
target = cmd_desc.get("target", "")
if execution_type == "skill":
# Check if skill exists in registry or skills directory
skill_registry = self.base_dir / "registry" / "skills.json"
            skill_dir = self.base_dir / "skills" / target  # skill dirs keep the dot, e.g. skills/api.validate
skill_exists = False
if skill_registry.exists():
try:
with open(skill_registry) as f:
registry = json.load(f)
if target in registry.get("skills", {}):
skill_exists = True
except Exception:
pass
if not skill_exists and not skill_dir.exists():
warnings.append(
f"Target skill '{target}' not found in registry or skills directory. "
f"You may need to create it using meta.skill first."
)
elif execution_type == "agent":
# Check if agent exists in agents directory
agent_dir = self.base_dir / "agents" / target
if not agent_dir.exists():
warnings.append(
f"Target agent '{target}' not found in agents directory. "
f"You may need to create it using meta.agent first."
)
return warnings
def create_command(
self,
description_path: str,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, Any]:
"""
Create command manifest from description file
Args:
description_path: Path to description file
requirement: Optional requirement information for traceability
Returns:
Dict with creation results
"""
try:
print(f"🎯 meta.command - Creating command from {description_path}\n")
# Read full content for analysis
with open(description_path, 'r') as f:
full_content = f.read()
# Parse description
cmd_desc = self.parse_description(description_path)
# Validate artifacts
artifact_warnings = self.validate_artifacts(cmd_desc)
if artifact_warnings:
print("\n⚠️ Artifact Validation Warnings:")
for warning in artifact_warnings:
print(f" {warning}")
print()
# Validate target skill/agent
target_warnings = self.validate_target(cmd_desc)
if target_warnings:
print("\n⚠️ Target Validation Warnings:")
for warning in target_warnings:
print(f" {warning}")
print()
# Analyze complexity and recommend pattern
analysis = self.analyze_complexity(cmd_desc, full_content)
# Display analysis
print(f"📊 Complexity Analysis:")
print(f" Steps detected: {analysis['step_count']}")
print(f" Complexity: {analysis['complexity']}")
print(f" Autonomy level: {analysis['autonomy_level']}")
print(f" Reusability: {analysis['reusability']}")
print(f"\n💡 Recommended Pattern: {analysis['recommended_pattern']}")
for reason in analysis['reasoning']:
print(f"{reason}")
print()
# Generate manifest YAML
manifest_yaml = self.generate_command_manifest(cmd_desc)
# Ensure commands directory exists
self.commands_dir.mkdir(parents=True, exist_ok=True)
# Determine output filename
# Remove leading / and replace spaces/special chars with hyphens
filename = cmd_desc["name"].lstrip("/").replace(" ", "-").lower()
filename = re.sub(r"[^a-z0-9-]", "", filename)
manifest_file = self.commands_dir / f"{filename}.yaml"
# Write manifest file
manifest_file.write_text(manifest_yaml)
print(f"✨ Command '{cmd_desc['name']}' created successfully!\n")
print(f"📄 Created file:")
print(f" - {manifest_file}\n")
print(f"✅ Command manifest is ready for registration")
print(f" Name: {cmd_desc['name']}")
print(f" Execution: {cmd_desc['execution_type']}{cmd_desc['target']}")
print(f" Status: {cmd_desc.get('status', 'draft')}\n")
# Display skill creation recommendation if needed
if analysis['should_create_skill']:
print(f"⚠️ RECOMMENDATION: Create the skill first!")
print(f" Pattern: {analysis['recommended_pattern']}")
print(f"\n This command delegates to a skill ({cmd_desc['target']}),")
print(f" but that skill may not exist yet.\n")
print(f" Suggested workflow:")
print(f" 1. Create skill: python3 agents/meta.skill/meta_skill.py <skill-description.md>")
print(f" - Skill should implement: {cmd_desc['target']}")
print(f" - Include all complex logic from the command description")
print(f" 2. Test skill: python3 skills/{cmd_desc['target'].replace('.', '/')}/{cmd_desc['target'].replace('.', '_')}.py")
print(f" 3. Review this command manifest: cat {manifest_file}")
print(f" 4. Register command: python3 skills/command.define/command_define.py {manifest_file}")
print(f" 5. Verify in registry: cat registry/commands.json")
print(f"\n See docs/SKILL_COMMAND_DECISION_TREE.md for pattern details\n")
else:
print(f"📝 Next steps:")
print(f" 1. Review the manifest: cat {manifest_file}")
print(f" 2. Register command: python3 skills/command.define/command_define.py {manifest_file}")
print(f" 3. Verify in registry: cat registry/commands.json")
result = {
"ok": True,
"status": "success",
"command_name": cmd_desc["name"],
"manifest_file": str(manifest_file),
"complexity_analysis": analysis,
"artifact_warnings": artifact_warnings,
"target_warnings": target_warnings
}
# Log traceability if requirement provided
trace_id = None
if requirement:
try:
tracer = get_tracer()
# Create component ID from command name
component_id = f"command.{filename.replace('-', '_')}"
trace_id = tracer.log_creation(
component_id=component_id,
component_name=cmd_desc["name"],
component_type="command",
component_version=cmd_desc["version"],
component_file_path=str(manifest_file),
input_source_path=description_path,
created_by_tool="meta.command",
created_by_version="0.1.0",
requirement=requirement,
tags=["command", "auto-generated"] + cmd_desc.get("tags", []),
project="Betty Framework"
)
# Log validation check
validation_details = {
"checks_performed": [
{"name": "command_structure", "status": "passed"},
{"name": "execution_type_validation", "status": "passed",
"message": f"Valid execution type: {cmd_desc['execution_type']}"},
{"name": "name_validation", "status": "passed",
"message": f"Command name follows convention: {cmd_desc['name']}"}
]
}
# Check parameters
if cmd_desc.get("parameters"):
validation_details["checks_performed"].append({
"name": "parameters_validation",
"status": "passed",
"message": f"Validated {len(cmd_desc['parameters'])} parameters"
})
tracer.log_verification(
component_id=component_id,
check_type="validation",
tool="meta.command",
result="passed",
details=validation_details
)
result["trace_id"] = trace_id
result["component_id"] = component_id
except Exception as e:
print(f"⚠️ Warning: Could not log traceability: {e}")
return result
except Exception as e:
print(f"❌ Error creating command: {e}")
logger.error(f"Error creating command: {e}", exc_info=True)
return {
"ok": False,
"status": "failed",
"error": str(e)
}
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.command - Create command manifests from descriptions"
)
parser.add_argument(
"description",
help="Path to command description file (.md or .json)"
)
# Traceability arguments
parser.add_argument(
"--requirement-id",
help="Requirement identifier (e.g., REQ-2025-001)"
)
parser.add_argument(
"--requirement-description",
help="What this command accomplishes"
)
parser.add_argument(
"--requirement-source",
help="Source document"
)
parser.add_argument(
"--issue-id",
help="Issue tracking ID (e.g., JIRA-123)"
)
parser.add_argument(
"--requested-by",
help="Who requested this"
)
parser.add_argument(
"--rationale",
help="Why this is needed"
)
args = parser.parse_args()
# Create requirement info if provided
requirement = None
if args.requirement_id and args.requirement_description:
requirement = RequirementInfo(
id=args.requirement_id,
description=args.requirement_description,
source=args.requirement_source,
issue_id=args.issue_id,
requested_by=args.requested_by,
rationale=args.rationale
)
creator = CommandCreator()
result = creator.create_command(args.description, requirement=requirement)
# Display traceability info if available
if result.get("trace_id"):
print(f"\n📝 Traceability: {result['trace_id']}")
print(f" View trace: python3 betty/trace_cli.py show {result['component_id']}")
sys.exit(0 if result.get("ok") else 1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,469 @@
# meta.compatibility - Agent Compatibility Analyzer
Analyzes agent compatibility and discovers multi-agent workflows based on artifact flows.
## Overview
**meta.compatibility** helps Claude discover which agents can work together by analyzing what artifacts they produce and consume. It enables intelligent multi-agent orchestration by suggesting compatible combinations and detecting pipeline gaps.
**What it does:**
- Scans all agents and extracts artifact metadata
- Builds compatibility maps (who produces/consumes what)
- Finds compatible agents based on artifact flows
- Suggests multi-agent pipelines for goals
- Generates complete compatibility graphs
- Detects gaps (consumed but not produced artifacts)
## Quick Start
### Find Compatible Agents
```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent
```
Output:
```
Agent: meta.agent
Produces: agent-definition, agent-documentation
Consumes: agent-description
✅ Can feed outputs to (1 agents):
• meta.compatibility (via agent-definition)
⚠️ Gaps (1):
• agent-description: No agents produce 'agent-description' (required by meta.agent)
```
### Suggest Pipeline
```bash
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create and analyze an agent"
```
Output:
```
📋 Pipeline 1: meta.agent Pipeline
Pipeline starting with meta.agent
Steps:
1. meta.agent - Meta-agent that creates other agents...
2. meta.compatibility - Analyzes agent and skill compatibility...
```
### Analyze Agent
```bash
python3 agents/meta.compatibility/meta_compatibility.py analyze meta.agent
```
### List All Compatibility
```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all
```
Output:
```
Total Agents: 7
Total Artifact Types: 16
Total Relationships: 3
⚠️ Global Gaps (5):
• agent-description: Consumed by 1 agents but no producers
...
```
## Commands
### find-compatible
Find agents compatible with a specific agent.
```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible AGENT_NAME [--format json|yaml|text]
```
**Shows:**
- What the agent produces
- What the agent consumes
- Agents that can consume its outputs
- Agents that can provide its inputs
- Gaps (missing producers)
### suggest-pipeline
Suggest multi-agent pipeline for a goal.
```bash
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "GOAL" [--artifacts TYPE1 TYPE2...] [--format json|yaml|text]
```
**Examples:**
```bash
# Suggest pipeline for goal
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Design and validate APIs"
# With required artifacts
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Process data" --artifacts openapi-spec validation-report
```
**Shows:**
- Suggested pipelines (ranked)
- Steps in each pipeline
- Artifact flows between agents
- Whether pipeline is complete (no gaps)
### analyze
Complete compatibility analysis for one agent.
```bash
python3 agents/meta.compatibility/meta_compatibility.py analyze AGENT_NAME [--format json|yaml|text]
```
**Shows:**
- Full compatibility report
- Compatible agents (upstream/downstream)
- Suggested workflows
- Gaps and warnings
### list-all
Generate complete compatibility graph for all agents.
```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all [--format json|yaml|text]
```
**Shows:**
- All agents in the system
- All relationships
- All artifact types
- Global gaps
- Statistics
## Output Formats
### Text (default)
Human-readable output with emojis and formatting.
### JSON
Machine-readable JSON for programmatic use.
```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent --format json > meta_agent_compatibility.json
```
### YAML
YAML format for configuration or documentation.
```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all --format yaml > compatibility_graph.yaml
```
## How It Works
### 1. Agent Scanning
Scans `agents/` directory for all `agent.yaml` files:
```python
for agent_dir in agents_dir.iterdir():
    agent_yaml = agent_dir / "agent.yaml"
    if agent_dir.is_dir() and agent_yaml.exists():
        with open(agent_yaml) as f:
            agent_def = yaml.safe_load(f)  # load and parse agent definition
```
### 2. Artifact Extraction
Extracts artifact_metadata from each agent:
```yaml
artifact_metadata:
produces:
- type: openapi-spec
consumes:
- type: api-requirements
```
### 3. Compatibility Mapping
Builds map of artifact types to producers/consumers:
```
openapi-spec:
producers: [api.define, api.architect]
consumers: [api.validate, api.code-generator]
```
### 4. Relationship Discovery
For each agent:
- Find agents that can consume its outputs
- Find agents that can provide its inputs
- Detect gaps (missing producers)
### 5. Pipeline Suggestion
Uses keyword matching and artifact analysis:
- Match goal keywords to agent names/descriptions
- Build pipeline from artifact flows
- Rank by completeness and length
- Return top suggestions
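Both relationship discovery and pipeline building reduce to lookups in the compatibility map. A condensed sketch of the downstream lookup (agent and artifact names are illustrative; the real logic lives in `find_compatible` in `meta_compatibility.py`):
```python
compat = {
    "openapi-spec": {"producers": ["api.define"], "consumers": ["api.validate"]},
}

def downstream(agent, produces):
    """Agents that can consume any artifact `agent` produces."""
    return [
        (consumer, artifact)
        for artifact in produces
        for consumer in compat.get(artifact, {}).get("consumers", [])
        if consumer != agent
    ]

print(downstream("api.define", ["openapi-spec"]))
# -> [('api.validate', 'openapi-spec')]
```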
## Integration
### With meta.agent
After creating an agent, analyze its compatibility:
```bash
# Create agent
python3 agents/meta.agent/meta_agent.py description.md
# Analyze compatibility
python3 agents/meta.compatibility/meta_compatibility.py analyze new-agent
# Find who can work with it
python3 agents/meta.compatibility/meta_compatibility.py find-compatible new-agent
```
### With meta.suggest
meta.suggest uses meta.compatibility to make recommendations:
```bash
python3 agents/meta.suggest/meta_suggest.py --context meta.agent
```
Internally calls meta.compatibility to find next steps.
## Common Workflows
### Workflow 1: Understand Agent Ecosystem
```bash
# See all compatibility
python3 agents/meta.compatibility/meta_compatibility.py list-all
# Analyze each agent
for agent in meta.agent meta.artifact meta.compatibility meta.suggest; do
echo "=== $agent ==="
python3 agents/meta.compatibility/meta_compatibility.py analyze $agent
done
```
### Workflow 2: Build Multi-Agent Pipeline
```bash
# Suggest pipeline
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create and test an agent"
# Get JSON for workflow automation
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "My goal" --format json > pipeline.json
```
### Workflow 3: Find Gaps
```bash
# Find global gaps
python3 agents/meta.compatibility/meta_compatibility.py list-all | grep "Gaps:"
# Analyze specific agent gaps
python3 agents/meta.compatibility/meta_compatibility.py find-compatible api.architect
```
## Artifact Types
### Consumes
- **agent-definition** - Agent configurations
- Pattern: `agents/*/agent.yaml`
- **registry-data** - Skills and agents registry
- Pattern: `registry/*.json`
### Produces
- **compatibility-graph** - Agent relationship maps
- Pattern: `*.compatibility.json`
- Schema: `schemas/compatibility-graph.json`
- **pipeline-suggestion** - Multi-agent workflows
- Pattern: `*.pipeline.json`
- Schema: `schemas/pipeline-suggestion.json`
## Understanding Output
### Can Feed To
Agents that can consume this agent's outputs.
```
✅ Can feed outputs to (2 agents):
• api.validator (via openapi-spec)
• api.code-generator (via openapi-spec)
```
Means:
- api.architect produces openapi-spec
- Both api.validator and api.code-generator consume openapi-spec
- You can run: api.architect → api.validator
- Or: api.architect → api.code-generator
### Can Receive From
Agents that can provide this agent's inputs.
```
⬅️ Can receive inputs from (1 agents):
• api.requirements-analyzer (via api-requirements)
```
Means:
- api.architect needs api-requirements
- api.requirements-analyzer produces api-requirements
- You can run: api.requirements-analyzer → api.architect
### Gaps
Missing artifacts in the ecosystem.
```
⚠️ Gaps (1):
• agent-description: No agents produce 'agent-description'
```
Means:
- meta.agent needs agent-description input
- No agent produces it (it's user-provided)
- This is expected for user inputs
### Complete vs Incomplete Pipelines
**Complete Pipeline:**
```
Complete: ✅ Yes
```
All consumed artifacts are produced by pipeline steps.
**Incomplete Pipeline:**
```
Complete: ❌ No
Gaps: agent-description, registry-data
```
Some consumed artifacts aren't produced. Requires user input or additional agents.
## Tips & Best Practices
### Finding Compatible Agents
Use specific artifact types:
```bash
# Instead of generic goal
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Process stuff"
# Use specific artifacts
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Validate API" --artifacts openapi-spec
```
### Understanding Gaps
Not all gaps are problems:
- **User inputs** (agent-description, api-requirements) - Expected
- **Missing producers** for internal artifacts - Need new agents/skills
### Building Pipelines
Start with compatibility analysis:
1. Understand what each agent needs/produces
2. Find compatible combinations
3. Build pipeline step-by-step
4. Validate no gaps exist (or gaps are user inputs)
## Troubleshooting
### Agent not found
```
Error: Agent 'my-agent' not found
```
**Solutions:**
- Check agent exists in `agents/` directory
- Ensure `agent.yaml` exists
- Verify the `name` field in agent.yaml matches the name you queried
### No compatible agents found
```
Can feed outputs to (0 agents)
Can receive inputs from (0 agents)
```
**Causes:**
- Agent is isolated (no shared artifact types)
- Agent uses custom artifact types
- No other agents exist yet
**Solutions:**
- Create agents with compatible artifact types
- Use standard artifact types
- Check artifact_metadata is properly defined
### Empty pipeline suggestions
```
Error: Could not determine relevant agents for goal
```
**Solutions:**
- Be more specific in goal description
- Mention artifact types explicitly
- Use `--artifacts` flag
## Architecture
```
meta.compatibility
├─ Scans: agents/ directory
├─ Analyzes: artifact_metadata
├─ Builds: compatibility maps
├─ Produces: compatibility graphs
└─ Used by: meta.suggest, Claude
```
## Examples
See test runs:
```bash
# Example 1: Find compatible agents
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent
# Example 2: Suggest pipeline
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create agent and check compatibility"
# Example 3: Full analysis
python3 agents/meta.compatibility/meta_compatibility.py analyze api.architect
# Example 4: Export to JSON
python3 agents/meta.compatibility/meta_compatibility.py list-all --format json > graph.json
```
## Related Documentation
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [compatibility-graph schema](../../schemas/compatibility-graph.json)
- [pipeline-suggestion schema](../../schemas/pipeline-suggestion.json)
## How Claude Uses This
Claude can:
1. **Discover capabilities** - "What agents can work with openapi-spec?"
2. **Build workflows** - "How do I design and validate an API?"
3. **Make decisions** - "What should I run next?"
4. **Detect gaps** - "What's missing from the ecosystem?"
meta.compatibility enables autonomous multi-agent orchestration!

View File

@@ -0,0 +1,130 @@
name: meta.compatibility
version: 0.1.0
description: |
Analyzes agent and skill compatibility to discover multi-agent workflows.
This meta-agent helps Claude discover which agents can work together by
analyzing artifact flows - what agents produce and what others consume.
Enables intelligent orchestration by suggesting compatible agent combinations
and detecting potential pipeline gaps.
artifact_metadata:
consumes:
- type: agent-definition
file_pattern: "agents/*/agent.yaml"
description: "Agent definitions to analyze for compatibility"
- type: registry-data
file_pattern: "registry/*.json"
description: "Skills and agents registry"
produces:
- type: compatibility-graph
file_pattern: "*.compatibility.json"
content_type: "application/json"
schema: "schemas/compatibility-graph.json"
description: "Agent relationship graph showing artifact flows"
- type: pipeline-suggestion
file_pattern: "*.pipeline.json"
content_type: "application/json"
schema: "schemas/pipeline-suggestion.json"
description: "Suggested multi-agent workflows"
status: draft
reasoning_mode: iterative
capabilities:
- Build compatibility graphs that connect agent inputs and outputs
- Recommend orchestrated workflows that minimize gaps and conflicts
- Surface registry insights to guide creation of missing capabilities
skills_available:
- agent.compose # Analyze artifact flows
- artifact.define # Understand artifact types
permissions:
- filesystem:read
system_prompt: |
You are meta.compatibility, the agent compatibility analyzer.
Your purpose is to help Claude discover which agents work together by
analyzing what artifacts they produce and consume.
## Your Responsibilities
1. **Analyze Compatibility**
- Scan all agent definitions
- Extract artifact metadata (produces/consumes)
- Find matching artifact types
- Identify compatible agent pairs
2. **Suggest Pipelines**
- Recommend multi-agent workflows
- Ensure artifact flow is complete (no gaps)
- Prioritize common use cases
- Provide clear rationale
3. **Detect Gaps**
- Find consumed artifacts that aren't produced
- Identify missing agents in pipelines
- Suggest what needs to be created
4. **Generate Compatibility Graphs**
- Visual representation of agent relationships
- Show artifact flows between agents
- Highlight compatible combinations
## Commands You Support
**Find Compatible Agents:**
```bash
/meta/compatibility find-compatible api.architect
```
Returns agents that can consume api.architect's outputs.
**Suggest Pipeline:**
```bash
/meta/compatibility suggest-pipeline "Design and implement an API"
```
Returns multi-agent workflow for the task.
**Analyze Agent:**
```bash
/meta/compatibility analyze api.architect
```
Returns full compatibility analysis for one agent.
**List All Compatibility:**
```bash
/meta/compatibility list-all
```
Returns complete compatibility graph for all agents.
## Analysis Criteria
Two agents are compatible if:
- Agent A produces artifact type X
- Agent B consumes artifact type X
- The artifact schemas are compatible
## Pipeline Suggestion Criteria
A good pipeline:
- Has no gaps (all consumed artifacts are produced)
- Follows logical workflow order
- Matches the user's stated goal
- Uses minimal agents (efficiency)
- Includes validation steps when appropriate
## Output Format
Always provide:
- **Compatible agents**: List with rationale
- **Artifact flows**: What flows between agents
- **Suggested pipelines**: Step-by-step workflows
- **Gaps**: Any missing artifacts or agents
- **Confidence**: How confident you are in the suggestions
Remember: You enable intelligent orchestration by making compatibility
discoverable. Help Claude make smart choices about which agents to use together.

View File

@@ -0,0 +1,698 @@
#!/usr/bin/env python3
"""
meta.compatibility - Agent Compatibility Analyzer
Analyzes agent and skill compatibility to discover multi-agent workflows.
Helps Claude orchestrate by showing which agents can work together.
"""
import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional, Set, Tuple
from collections import defaultdict
# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)
from betty.provenance import compute_hash, get_provenance_logger
from betty.config import REGISTRY_FILE, REGISTRY_DIR
class CompatibilityAnalyzer:
"""Analyzes agent compatibility based on artifact flows"""
def __init__(self, base_dir: str = "."):
"""Initialize with base directory"""
self.base_dir = Path(base_dir)
self.agents_dir = self.base_dir / "agents"
self.agents = {} # name -> agent definition
self.compatibility_map = {} # artifact_type -> {producers: [], consumers: []}
def scan_agents(self) -> Dict[str, Any]:
"""
Scan agents directory and load all agent definitions
Returns:
Dictionary of agent_name -> agent_definition
"""
self.agents = {}
if not self.agents_dir.exists():
return self.agents
for agent_dir in self.agents_dir.iterdir():
if agent_dir.is_dir():
agent_yaml = agent_dir / "agent.yaml"
if agent_yaml.exists():
with open(agent_yaml) as f:
agent_def = yaml.safe_load(f)
if agent_def and "name" in agent_def:
self.agents[agent_def["name"]] = agent_def
return self.agents
def extract_artifacts(self, agent_def: Dict[str, Any]) -> Tuple[Set[str], Set[str]]:
"""
Extract artifact types from agent definition
Args:
agent_def: Agent definition dictionary
Returns:
Tuple of (produces_set, consumes_set)
"""
produces = set()
consumes = set()
artifact_metadata = agent_def.get("artifact_metadata", {})
# Extract produced artifacts
for artifact in artifact_metadata.get("produces", []):
if isinstance(artifact, dict) and "type" in artifact:
produces.add(artifact["type"])
elif isinstance(artifact, str):
produces.add(artifact)
# Extract consumed artifacts
for artifact in artifact_metadata.get("consumes", []):
if isinstance(artifact, dict) and "type" in artifact:
consumes.add(artifact["type"])
elif isinstance(artifact, str):
consumes.add(artifact)
return produces, consumes
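        # Example (illustrative): artifact_metadata declaring
        #   produces: [{type: compatibility-graph}], consumes: [agent-definition]
        # yields ({"compatibility-graph"}, {"agent-definition"}).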
def build_compatibility_map(self) -> Dict[str, Dict[str, List[str]]]:
"""
Build map of artifact types to producers/consumers
Returns:
Dictionary mapping artifact_type -> {producers: [], consumers: []}
"""
self.compatibility_map = defaultdict(lambda: {"producers": [], "consumers": []})
for agent_name, agent_def in self.agents.items():
produces, consumes = self.extract_artifacts(agent_def)
for artifact_type in produces:
self.compatibility_map[artifact_type]["producers"].append(agent_name)
for artifact_type in consumes:
self.compatibility_map[artifact_type]["consumers"].append(agent_name)
return dict(self.compatibility_map)
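        # Resulting shape (illustrative):
        #   {"openapi-spec": {"producers": ["api.define"],
        #                     "consumers": ["api.validate"]}}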
def find_compatible(self, agent_name: str) -> Dict[str, Any]:
"""
Find agents compatible with the specified agent
Args:
agent_name: Name of agent to analyze
Returns:
Dictionary with compatible agents and rationale
"""
if agent_name not in self.agents:
return {
"error": f"Agent '{agent_name}' not found",
"available_agents": list(self.agents.keys())
}
agent_def = self.agents[agent_name]
produces, consumes = self.extract_artifacts(agent_def)
result = {
"agent": agent_name,
"produces": list(produces),
"consumes": list(consumes),
"can_feed_to": [], # Agents that can consume this agent's outputs
"can_receive_from": [], # Agents that can provide this agent's inputs
"gaps": [] # Missing artifacts
}
# Find agents that can consume this agent's outputs
for artifact_type in produces:
consumers = self.compatibility_map.get(artifact_type, {}).get("consumers", [])
for consumer in consumers:
if consumer != agent_name:
result["can_feed_to"].append({
"agent": consumer,
"artifact": artifact_type,
"rationale": f"{agent_name} produces '{artifact_type}' which {consumer} consumes"
})
# Find agents that can provide this agent's inputs
for artifact_type in consumes:
producers = self.compatibility_map.get(artifact_type, {}).get("producers", [])
if not producers:
result["gaps"].append({
"artifact": artifact_type,
"issue": f"No agents produce '{artifact_type}' (required by {agent_name})",
"severity": "high"
})
else:
for producer in producers:
if producer != agent_name:
result["can_receive_from"].append({
"agent": producer,
"artifact": artifact_type,
"rationale": f"{producer} produces '{artifact_type}' which {agent_name} needs"
})
return result
def suggest_pipeline(self, goal: str, required_artifacts: Optional[List[str]] = None) -> Dict[str, Any]:
"""
Suggest multi-agent pipeline for a goal
Args:
goal: Natural language description of what to accomplish
required_artifacts: Optional list of artifact types needed
Returns:
Suggested pipeline with steps and rationale
"""
# Simple keyword matching for now (can be enhanced with ML later)
goal_lower = goal.lower()
keywords_to_agents = {
"api": ["api.architect", "meta.agent"],
"design api": ["api.architect"],
"validate": ["api.architect"],
"create agent": ["meta.agent"],
"agent": ["meta.agent"],
"artifact": ["meta.artifact"],
"optimize": [], # No optimizer yet, but we have the artifact type
}
# Find relevant agents
relevant_agents = set()
for keyword, agents in keywords_to_agents.items():
if keyword in goal_lower:
relevant_agents.update([a for a in agents if a in self.agents])
if not relevant_agents and required_artifacts:
# Find agents that produce the required artifacts
for artifact_type in required_artifacts:
producers = self.compatibility_map.get(artifact_type, {}).get("producers", [])
relevant_agents.update(producers)
if not relevant_agents:
return {
"error": "Could not determine relevant agents for goal",
"suggestion": "Try being more specific or mention required artifact types",
"goal": goal
}
# Build pipeline by analyzing artifact flows
pipelines = []
for start_agent in relevant_agents:
pipeline = self._build_pipeline_from_agent(start_agent, goal)
if pipeline:
pipelines.append(pipeline)
# Rank pipelines: most steps first (broader coverage), then higher confidence
pipelines.sort(key=lambda p: (
-len(p.get("steps", [])),  # Prefer pipelines covering more steps
-p.get("confidence_score", 0)  # Then higher confidence
))
if not pipelines:
return {
"error": "Could not build complete pipeline",
"relevant_agents": list(relevant_agents),
"goal": goal
}
return {
"goal": goal,
"pipelines": pipelines[:3], # Top 3 suggestions
"confidence": "medium" if len(pipelines) > 1 else "low"
}
def _build_pipeline_from_agent(self, start_agent: str, goal: str) -> Optional[Dict[str, Any]]:
"""
Build a pipeline starting from a specific agent
Args:
start_agent: Agent to start pipeline from
goal: Goal description
Returns:
Pipeline dictionary or None
"""
if start_agent not in self.agents:
return None
agent_def = self.agents[start_agent]
produces, consumes = self.extract_artifacts(agent_def)
pipeline = {
"name": f"{start_agent.title()} Pipeline",
"description": f"Pipeline starting with {start_agent}",
"steps": [
{
"step": 1,
"agent": start_agent,
"description": agent_def.get("description", "").split("\n")[0],
"produces": list(produces),
"consumes": list(consumes)
}
],
"artifact_flow": [],
"confidence_score": 0.5
}
# Try to add compatible next steps
compatibility = self.find_compatible(start_agent)
for compatible in compatibility.get("can_feed_to", [])[:2]: # Max 2 next steps
next_agent = compatible["agent"]
if next_agent in self.agents:
next_def = self.agents[next_agent]
next_produces, next_consumes = self.extract_artifacts(next_def)
pipeline["steps"].append({
"step": len(pipeline["steps"]) + 1,
"agent": next_agent,
"description": next_def.get("description", "").split("\n")[0],
"produces": list(next_produces),
"consumes": list(next_consumes)
})
pipeline["artifact_flow"].append({
"from": start_agent,
"to": next_agent,
"artifact": compatible["artifact"]
})
pipeline["confidence_score"] += 0.2
# Calculate if pipeline has gaps
all_produces = set()
all_consumes = set()
for step in pipeline["steps"]:
all_produces.update(step.get("produces", []))
all_consumes.update(step.get("consumes", []))
gaps = all_consumes - all_produces
if not gaps:
pipeline["confidence_score"] += 0.3
pipeline["complete"] = True
else:
pipeline["complete"] = False
pipeline["gaps"] = list(gaps)
return pipeline
def generate_compatibility_graph(self) -> Dict[str, Any]:
"""
Generate complete compatibility graph for all agents
Returns:
Compatibility graph structure
"""
graph = {
"agents": [],
"relationships": [],
"artifact_types": [],
"gaps": [],
"metadata": {
"total_agents": len(self.agents),
"total_artifact_types": len(self.compatibility_map)
}
}
# Add agents
for agent_name, agent_def in self.agents.items():
produces, consumes = self.extract_artifacts(agent_def)
graph["agents"].append({
"name": agent_name,
"description": agent_def.get("description", "").split("\n")[0],
"produces": list(produces),
"consumes": list(consumes)
})
# Add relationships
for agent_name in self.agents:
compatibility = self.find_compatible(agent_name)
for compatible in compatibility.get("can_feed_to", []):
graph["relationships"].append({
"from": agent_name,
"to": compatible["agent"],
"artifact": compatible["artifact"],
"type": "produces_for"
})
# Add artifact types
for artifact_type, info in self.compatibility_map.items():
graph["artifact_types"].append({
"type": artifact_type,
"producers": info["producers"],
"consumers": info["consumers"],
"producer_count": len(info["producers"]),
"consumer_count": len(info["consumers"])
})
# Find global gaps
for artifact_type, info in self.compatibility_map.items():
if not info["producers"] and info["consumers"]:
graph["gaps"].append({
"artifact": artifact_type,
"issue": f"Consumed by {len(info['consumers'])} agents but no producers",
"consumers": info["consumers"],
"severity": "high"
})
return graph
def analyze_agent(self, agent_name: str) -> Dict[str, Any]:
"""
Complete compatibility analysis for one agent
Args:
agent_name: Name of agent to analyze
Returns:
Comprehensive analysis
"""
compatibility = self.find_compatible(agent_name)
if "error" in compatibility:
return compatibility
# Add suggested workflows
workflows = []
# Workflow 1: As a starting point
if compatibility["can_feed_to"]:
workflow = {
"name": f"Start with {agent_name}",
"description": f"Use {agent_name} as the first step",
"agents": [agent_name] + [c["agent"] for c in compatibility["can_feed_to"][:2]]
}
workflows.append(workflow)
# Workflow 2: As a middle step
if compatibility["can_receive_from"] and compatibility["can_feed_to"]:
workflow = {
"name": f"{agent_name} in pipeline",
"description": f"Use {agent_name} as a processing step",
"agents": [
compatibility["can_receive_from"][0]["agent"],
agent_name,
compatibility["can_feed_to"][0]["agent"]
]
}
workflows.append(workflow)
compatibility["suggested_workflows"] = workflows
return compatibility
def verify_registry_integrity(self) -> Dict[str, Any]:
"""
Verify integrity of registry files using provenance hashes.
Returns:
Dictionary with verification results
"""
provenance = get_provenance_logger()
results = {
"verified": [],
"failed": [],
"missing": [],
"summary": {
"total_checked": 0,
"verified_count": 0,
"failed_count": 0,
"missing_count": 0
}
}
# List of registry files to verify
registry_files = [
("skills.json", REGISTRY_FILE),
("agents.json", str(Path(REGISTRY_DIR) / "agents.json")),
("workflow_history.json", str(Path(REGISTRY_DIR) / "workflow_history.json")),
]
for artifact_id, file_path in registry_files:
results["summary"]["total_checked"] += 1
# Check if file exists
if not os.path.exists(file_path):
results["missing"].append({
"artifact": artifact_id,
"path": file_path,
"reason": "File does not exist"
})
results["summary"]["missing_count"] += 1
continue
try:
# Load the registry file
with open(file_path, 'r') as f:
content = json.load(f)
# Get stored hash from file (if present)
stored_hash = content.get("content_hash")
# Remove content_hash field to compute original hash
content_without_hash = {k: v for k, v in content.items() if k != "content_hash"}
# Compute current hash
current_hash = compute_hash(content_without_hash)
# Get latest hash from provenance log
latest_provenance_hash = provenance.get_latest_hash(artifact_id)
# Verify
if stored_hash and stored_hash == current_hash:
# Hash matches what's in the file
verification_status = "verified"
# Also check against provenance log
if latest_provenance_hash:
provenance_match = (stored_hash == latest_provenance_hash)
else:
provenance_match = None
results["verified"].append({
"artifact": artifact_id,
"path": file_path,
"hash": current_hash[:16] + "...",
"stored_hash_valid": True,
"provenance_logged": latest_provenance_hash is not None,
"provenance_match": provenance_match
})
results["summary"]["verified_count"] += 1
elif stored_hash and stored_hash != current_hash:
# Hash mismatch - file may have been modified
results["failed"].append({
"artifact": artifact_id,
"path": file_path,
"reason": "Content hash mismatch",
"stored_hash": stored_hash[:16] + "...",
"computed_hash": current_hash[:16] + "...",
"severity": "high"
})
results["summary"]["failed_count"] += 1
else:
# No hash stored in file
results["missing"].append({
"artifact": artifact_id,
"path": file_path,
"reason": "No content_hash field in file",
"computed_hash": current_hash[:16] + "...",
"provenance_available": latest_provenance_hash is not None
})
results["summary"]["missing_count"] += 1
except Exception as e:
results["failed"].append({
"artifact": artifact_id,
"path": file_path,
"reason": f"Verification error: {str(e)}",
"severity": "high"
})
results["summary"]["failed_count"] += 1
return results
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.compatibility - Agent Compatibility Analyzer"
)
subparsers = parser.add_subparsers(dest='command', help='Commands')
# Find compatible command
find_parser = subparsers.add_parser('find-compatible', help='Find compatible agents')
find_parser.add_argument("agent", help="Agent name to analyze")
# Suggest pipeline command
suggest_parser = subparsers.add_parser('suggest-pipeline', help='Suggest multi-agent pipeline')
suggest_parser.add_argument("goal", help="Goal description")
suggest_parser.add_argument("--artifacts", nargs="+", help="Required artifact types")
# Analyze command
analyze_parser = subparsers.add_parser('analyze', help='Analyze agent compatibility')
analyze_parser.add_argument("agent", help="Agent name to analyze")
# List all command
list_parser = subparsers.add_parser('list-all', help='List all compatibility')
# Verify integrity command
verify_parser = subparsers.add_parser('verify-integrity', help='Verify registry integrity using provenance hashes')
# Output format
parser.add_argument(
"--format",
choices=["json", "yaml", "text"],
default="text",
help="Output format"
)
args = parser.parse_args()
if not args.command:
parser.print_help()
sys.exit(1)
analyzer = CompatibilityAnalyzer()
analyzer.scan_agents()
analyzer.build_compatibility_map()
result = None
if args.command == 'find-compatible':
print(f"🔍 Finding agents compatible with '{args.agent}'...\n")
result = analyzer.find_compatible(args.agent)
if args.format == "text" and "error" not in result:
print(f"Agent: {result['agent']}")
print(f"Produces: {', '.join(result['produces']) if result['produces'] else 'none'}")
print(f"Consumes: {', '.join(result['consumes']) if result['consumes'] else 'none'}")
if result['can_feed_to']:
print(f"\n✅ Can feed outputs to ({len(result['can_feed_to'])} agents):")
for comp in result['can_feed_to']:
print(f"{comp['agent']} (via {comp['artifact']})")
if result['can_receive_from']:
print(f"\n⬅️ Can receive inputs from ({len(result['can_receive_from'])} agents):")
for comp in result['can_receive_from']:
print(f"{comp['agent']} (via {comp['artifact']})")
if result['gaps']:
print(f"\n⚠️ Gaps ({len(result['gaps'])}):")
for gap in result['gaps']:
print(f"{gap['artifact']}: {gap['issue']}")
elif args.command == 'suggest-pipeline':
print(f"💡 Suggesting pipeline for: {args.goal}\n")
result = analyzer.suggest_pipeline(args.goal, args.artifacts)
if args.format == "text" and "pipelines" in result:
for i, pipeline in enumerate(result["pipelines"], 1):
print(f"\n📋 Pipeline {i}: {pipeline['name']}")
print(f" {pipeline['description']}")
print(f" Complete: {'✅ Yes' if pipeline.get('complete', False) else '❌ No'}")
print(f" Steps:")
for step in pipeline['steps']:
print(f" {step['step']}. {step['agent']} - {step['description'][:60]}...")
if pipeline.get('gaps'):
print(f" Gaps: {', '.join(pipeline['gaps'])}")
elif args.command == 'analyze':
print(f"📊 Analyzing '{args.agent}'...\n")
result = analyzer.analyze_agent(args.agent)
if args.format == "text" and "error" not in result:
print(f"Agent: {result['agent']}")
print(f"Produces: {', '.join(result['produces']) if result['produces'] else 'none'}")
print(f"Consumes: {', '.join(result['consumes']) if result['consumes'] else 'none'}")
if result.get('suggested_workflows'):
print(f"\n🔄 Suggested Workflows:")
for workflow in result['suggested_workflows']:
print(f"\n {workflow['name']}")
print(f" {workflow['description']}")
print(f" Pipeline: {''.join(workflow['agents'])}")
elif args.command == 'list-all':
print("🗺️ Generating complete compatibility graph...\n")
result = analyzer.generate_compatibility_graph()
if args.format == "text":
print(f"Total Agents: {result['metadata']['total_agents']}")
print(f"Total Artifact Types: {result['metadata']['total_artifact_types']}")
print(f"Total Relationships: {len(result['relationships'])}")
if result['gaps']:
print(f"\n⚠️ Global Gaps ({len(result['gaps'])}):")
for gap in result['gaps']:
print(f"{gap['artifact']}: {gap['issue']}")
elif args.command == 'verify-integrity':
print("🔐 Verifying registry integrity using provenance hashes...\n")
result = analyzer.verify_registry_integrity()
if args.format == "text":
summary = result['summary']
print(f"Total Checked: {summary['total_checked']}")
print(f"✅ Verified: {summary['verified_count']}")
print(f"❌ Failed: {summary['failed_count']}")
print(f"⚠️ Missing Hash: {summary['missing_count']}")
if result['verified']:
print(f"\n✅ Verified Artifacts ({len(result['verified'])}):")
for item in result['verified']:
print(f"{item['artifact']}: {item['hash']}")
if item.get('provenance_logged'):
match_status = "✅ match" if item.get('provenance_match') else "❌ mismatch"
print(f" Provenance: {match_status}")
if result['failed']:
print(f"\n❌ Failed Verifications ({len(result['failed'])}):")
for item in result['failed']:
print(f"{item['artifact']}: {item['reason']}")
if 'stored_hash' in item:
print(f" Expected: {item['stored_hash']}")
print(f" Computed: {item['computed_hash']}")
if result['missing']:
print(f"\n⚠️ Missing Hashes ({len(result['missing'])}):")
for item in result['missing']:
print(f"{item['artifact']}: {item['reason']}")
# Output result
if result:
if args.format == "json":
print(json.dumps(result, indent=2))
elif args.format == "yaml":
print(yaml.dump(result, default_flow_style=False))
elif "error" in result:
print(f"\n❌ Error: {result['error']}")
if "suggestion" in result:
print(f"💡 {result['suggestion']}")
if __name__ == "__main__":
main()

247
agents/meta.config.router/README.md Normal file
View File

@@ -0,0 +1,247 @@
# Agent: meta.config.router
## Purpose
Configure Claude Code Router for Betty to support multi-model LLM routing across environments. This agent creates or previews a `config.json` file at `~/.claude-code-router/config.json` with model providers, routing profiles, and audit metadata.
## Version
0.1.0
## Status
active
## Reasoning Mode
oneshot
## Capabilities
- Generate multi-model LLM router configurations
- Validate router configuration inputs for correctness
- Apply configurations to filesystem with audit trails
- Support multiple output modes (preview, file, both)
- Work across local, cloud, and CI environments
- Ensure deterministic and portable configurations
## Skills Available
- `config.validate.router` - Validates router configuration inputs
- `config.generate.router` - Generates router configuration JSON
- `audit.log` - Records audit events for configuration changes
## Inputs
### llm_backends (required)
- **Type**: List of objects
- **Description**: Backend provider configurations
- **Schema**:
```json
[
{
"name": "string (e.g., openrouter, ollama, claude)",
"api_base_url": "string (API endpoint URL)",
"api_key": "string (optional for local providers)",
"models": ["string (model identifiers)"]
}
]
```
### routing_rules (required)
- **Type**: Dictionary
- **Description**: Mapping of Claude routing contexts to provider/model pairs
- **Contexts**: default, think, background, longContext
- **Schema**:
```json
{
"default": { "provider": "string", "model": "string" },
"think": { "provider": "string", "model": "string" },
"background": { "provider": "string", "model": "string" },
"longContext": { "provider": "string", "model": "string" }
}
```
### output_mode (optional)
- **Type**: enum
- **Values**: "preview" | "file" | "both"
- **Default**: "preview"
- **Description**: Output mode for configuration
### apply_config (optional)
- **Type**: boolean
- **Default**: false
- **Description**: Write config to disk if true
### metadata (optional)
- **Type**: object
- **Description**: Optional audit metadata (initiator, environment, etc.)
## Outputs
### routing_config
- **Type**: object
- **Description**: Rendered router config as JSON
### write_status
- **Type**: string
- **Values**: "success" | "skipped" | "error"
- **Description**: Status of file write operation
### audit_id
- **Type**: string
- **Description**: Unique trace ID for configuration event
## Behavior
1. Validates inputs via `config.validate.router`
2. Constructs valid router config using `config.generate.router`
3. If `apply_config=true` and `output_mode≠preview`, writes config to: `~/.claude-code-router/config.json`
4. Outputs JSON config regardless of write action
5. Logs audit record via `audit.log` (see the sketch after this list) with:
- timestamp
- initiator
- hash of input
- environment fingerprint
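For reference, the audit entry written in step 5 looks roughly like the following sketch (values illustrative; field names taken from `_log_audit` in `meta_config_router.py`):
```json
{
  "audit_id": "generated-uuid",
  "timestamp": "2025-11-01T12:34:56Z",
  "agent": "meta.config.router",
  "version": "0.1.0",
  "action": "router_config_generated",
  "write_status": "success",
  "input_hash": "a1b2c3d4e5f60718",
  "environment": "local",
  "initiator": "user@example.com",
  "metadata": {
    "initiator": "user@example.com",
    "environment": "production"
  }
}
```
Exact values vary per run; `input_hash` is a truncated SHA-256 of the input payload.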
## Usage Example
```bash
# Preview configuration (no file write)
/meta/config.router --routing_config_path=router-config.yaml
# Apply configuration to disk
/meta/config.router --routing_config_path=router-config.yaml --apply_config=true
# Both preview and write
/meta/config.router --routing_config_path=router-config.yaml --apply_config=true --output_mode=both
```
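In preview mode the agent prints the validated configuration without touching disk; the console output looks roughly like this (abridged sketch, assembled from the agent's own status messages):
```
🔧 meta.config.router v0.1.0
📋 Config input: router-config.yaml
📝 Output mode: preview
💾 Apply config: False

🔍 Validating router configuration...
✅ Validation passed

🏗️ Generating router configuration...
✅ Configuration generated

📝 Logging audit record...
✅ Audit ID: <generated UUID>

📄 Router Configuration Preview:
...
```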
## Example Input (YAML)
```yaml
llm_backends:
- name: openrouter
api_base_url: https://openrouter.ai/api/v1
api_key: ${OPENROUTER_API_KEY}
models:
- anthropic/claude-3.5-sonnet
- openai/gpt-4
- name: ollama
api_base_url: http://localhost:11434/v1
models:
- llama3.1:70b
- codellama:34b
routing_rules:
default:
provider: openrouter
model: anthropic/claude-3.5-sonnet
think:
provider: openrouter
model: anthropic/claude-3.5-sonnet
background:
provider: ollama
model: llama3.1:70b
longContext:
provider: openrouter
model: anthropic/claude-3.5-sonnet
metadata:
initiator: user@example.com
environment: production
purpose: Multi-model routing for development
```
## Example Output
```json
{
"version": "1.0.0",
"generated_at": "2025-11-01T12:34:56Z",
"backends": [
{
"name": "openrouter",
"api_base_url": "https://openrouter.ai/api/v1",
"api_key": "${OPENROUTER_API_KEY}",
"models": [
"anthropic/claude-3.5-sonnet",
"openai/gpt-4"
]
},
{
"name": "ollama",
"api_base_url": "http://localhost:11434/v1",
"models": [
"llama3.1:70b",
"codellama:34b"
]
}
],
"routing": {
"default": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet"
},
"think": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet"
},
"background": {
"provider": "ollama",
"model": "llama3.1:70b"
},
"longContext": {
"provider": "openrouter",
"model": "anthropic/claude-3.5-sonnet"
}
},
"metadata": {
"generated_by": "meta.config.router",
"schema_version": "1.0.0",
"initiator": "user@example.com",
"environment": "production",
"purpose": "Multi-model routing for development"
}
}
```
## Permissions
- `filesystem:read` - Read router config input files
- `filesystem:write` - Write config to ~/.claude-code-router/config.json
## Artifacts
### Consumes
- `router-config-input` - User-provided router configuration inputs
### Produces
- `llm-router-config` - Complete Claude Code Router configuration file
- `audit-log-entry` - Audit trail entry for configuration events
## Tags
llm, router, configuration, meta, infra, openrouter, claude, ollama, multi-model
## Environments
- local
- cloud
- ci
## Requires Human Approval
false
## Notes
- The config is deterministic and portable across environments
- API keys can use environment variable substitution (e.g., ${OPENROUTER_API_KEY}); a shell example follows these notes
- Local providers (localhost/127.0.0.1) don't require API keys
- All configuration changes are audited for traceability
- The agent supports preview mode to verify configuration before applying
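For example, a key can be supplied via the environment before running a preview (shell commands illustrative; the placeholder stays literal in the generated config and is presumably resolved by the router at runtime):
```bash
export OPENROUTER_API_KEY="<your-key>"
/meta/config.router --routing_config_path=router-config.yaml
```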

92
agents/meta.config.router/agent.yaml Normal file
View File

@@ -0,0 +1,92 @@
name: meta.config.router
version: 0.1.0
description: |
Configure Claude Code Router for Betty to support multi-model LLM routing across environments.
This agent creates or previews a config.json file at ~/.claude-code-router/config.json with
model providers, routing profiles, and audit metadata. Works across local, cloud, or CI-based
environments with built-in validation, output rendering, config application, and auditing.
status: active
reasoning_mode: oneshot
capabilities:
- Generate multi-model LLM router configurations
- Validate router configuration inputs for correctness
- Apply configurations to filesystem with audit trails
- Support multiple output modes (preview, file, both)
- Work across local, cloud, and CI environments
- Ensure deterministic and portable configurations
skills_available:
- config.validate.router
- config.generate.router
- audit.log
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: router-config-input
description: User-provided router configuration inputs (backends, routing rules, metadata)
file_pattern: "*-router-input.{json,yaml}"
content_type: application/json
required: true
produces:
- type: llm-router-config
description: Complete Claude Code Router configuration file
file_pattern: "config.json"
content_type: application/json
schema: schemas/router-config.json
- type: audit-log-entry
description: Audit trail entry for configuration events
file_pattern: "audit_log.json"
content_type: application/json
system_prompt: |
You are the meta.config.router agent for the Betty Framework.
Your responsibilities:
1. Validate router configuration inputs using config.validate.router
2. Generate valid router config JSON using config.generate.router
3. Write config to ~/.claude-code-router/config.json when apply_config=true
4. Provide preview, file write, or both modes based on output_mode
5. Log audit records with timestamp, initiator, and environment fingerprint
Inputs you expect:
- llm_backends: List of provider configs (name, api_base_url, api_key, models)
- routing_rules: Mapping of routing contexts (default, think, background, longContext)
- output_mode: "preview" | "file" | "both" (default: preview)
- apply_config: boolean (write to disk if true)
- metadata: Optional audit metadata (initiator, environment, etc.)
Outputs you generate:
- routing_config: Complete router configuration JSON
- write_status: "success" | "skipped" | "error"
- audit_id: Unique trace ID for the configuration event
Workflow:
1. Call config.validate.router with llm_backends and routing_rules
2. If validation fails, return errors and exit
3. Call config.generate.router to create the config JSON
4. If apply_config=true and output_mode≠preview, write to ~/.claude-code-router/config.json
5. Call audit.log to record the configuration event
6. Return config, write status, and audit ID
Environment awareness:
- Detect local vs cloud vs CI environment
- Adjust file paths accordingly
- Include environment fingerprint in audit metadata
tags:
- llm
- router
- configuration
- meta
- infra
- openrouter
- claude
- ollama
- multi-model

323
agents/meta.config.router/meta_config_router.py Normal file
View File

@@ -0,0 +1,323 @@
#!/usr/bin/env python3
"""
Agent: meta.config.router
Configure Claude Code Router for multi-model LLM support
"""
import json
import sys
import os
import hashlib
import uuid
from datetime import datetime
from pathlib import Path
from typing import Dict, Any, List, Optional, Tuple
import subprocess
import yaml
class MetaConfigRouter:
"""Configure Claude Code Router for Betty Framework"""
def __init__(self):
self.betty_root = Path(__file__).parent.parent.parent
self.skills_root = self.betty_root / "skills"
self.audit_log_path = self.betty_root / "registry" / "audit_log.json"
def run(
self,
routing_config_path: str,
apply_config: bool = False,
output_mode: str = "preview"
) -> Dict[str, Any]:
"""
Main execution method
Args:
routing_config_path: Path to router config input file (YAML or JSON)
apply_config: Whether to write config to disk
output_mode: "preview" | "file" | "both"
Returns:
Result with routing_config, write_status, and audit_id
"""
print(f"🔧 meta.config.router v0.1.0")
print(f"📋 Config input: {routing_config_path}")
print(f"📝 Output mode: {output_mode}")
print(f"💾 Apply config: {apply_config}")
print()
# Load input config
config_input = self._load_config_input(routing_config_path)
# Extract inputs
llm_backends = config_input.get("llm_backends", [])
routing_rules = config_input.get("routing_rules", {})
config_options = config_input.get("config_options", {})
metadata = config_input.get("metadata", {}) # For audit logging only
# Step 1: Validate inputs
print("🔍 Validating router configuration...")
validation_result = self._validate_config(llm_backends, routing_rules)
if not validation_result["valid"]:
print("❌ Validation failed:")
for error in validation_result["errors"]:
print(f" - {error}")
return {
"success": False,
"errors": validation_result["errors"],
"warnings": validation_result["warnings"]
}
if validation_result["warnings"]:
print("⚠️ Warnings:")
for warning in validation_result["warnings"]:
print(f" - {warning}")
print("✅ Validation passed")
print()
# Step 2: Generate router config
print("🏗️ Generating router configuration...")
router_config = self._generate_config(
llm_backends,
routing_rules,
config_options
)
print("✅ Configuration generated")
print()
# Step 3: Write config if requested
write_status = "skipped"
config_path = None
if apply_config and output_mode != "preview":
print("💾 Writing configuration to disk...")
config_path, write_status = self._write_config(router_config)
if write_status == "success":
print(f"✅ Configuration written to: {config_path}")
else:
print(f"❌ Failed to write configuration")
print()
# Step 4: Log audit record
print("📝 Logging audit record...")
audit_id = self._log_audit(
config_input=config_input,
write_status=write_status,
metadata=metadata
)
print(f"✅ Audit ID: {audit_id}")
print()
# Step 5: Output results
result = {
"success": True,
"routing_config": router_config,
"write_status": write_status,
"audit_id": audit_id
}
if config_path:
result["config_path"] = str(config_path)
# Display preview if requested
if output_mode in ["preview", "both"]:
print("📄 Router Configuration Preview:")
print("" * 80)
print(json.dumps(router_config, indent=2))
print("" * 80)
print()
return result
def _load_config_input(self, config_path: str) -> Dict[str, Any]:
"""Load router config input from YAML or JSON file"""
path = Path(config_path)
if not path.exists():
raise FileNotFoundError(f"Config file not found: {config_path}")
with open(path, 'r') as f:
if path.suffix in ['.yaml', '.yml']:
return yaml.safe_load(f)
else:
return json.load(f)
def _validate_config(
self,
llm_backends: List[Dict[str, Any]],
routing_rules: Dict[str, Any]
) -> Dict[str, Any]:
"""Validate router configuration using config.validate.router skill"""
validator_script = self.skills_root / "config.validate.router" / "validate_router.py"
config_json = json.dumps({
"llm_backends": llm_backends,
"routing_rules": routing_rules
})
try:
result = subprocess.run(
[sys.executable, str(validator_script), config_json],
capture_output=True,
text=True,
check=False
)
return json.loads(result.stdout)
except Exception as e:
return {
"valid": False,
"errors": [f"Validation error: {e}"],
"warnings": []
}
def _generate_config(
self,
llm_backends: List[Dict[str, Any]],
routing_rules: Dict[str, Any],
config_options: Dict[str, Any]
) -> Dict[str, Any]:
"""Generate router configuration using config.generate.router skill"""
generator_script = self.skills_root / "config.generate.router" / "generate_router.py"
input_json = json.dumps({
"llm_backends": llm_backends,
"routing_rules": routing_rules,
"config_options": config_options
})
try:
result = subprocess.run(
[sys.executable, str(generator_script), input_json],
capture_output=True,
text=True,
check=True
)
return json.loads(result.stdout)
except Exception as e:
raise RuntimeError(f"Config generation failed: {e}")
def _write_config(self, router_config: Dict[str, Any]) -> Tuple[Optional[Path], str]:
"""Write router config to ~/.claude-code-router/config.json"""
try:
config_dir = Path.home() / ".claude-code-router"
config_dir.mkdir(parents=True, exist_ok=True)
config_path = config_dir / "config.json"
with open(config_path, 'w') as f:
json.dump(router_config, f, indent=2)
return config_path, "success"
except Exception as e:
print(f"Error writing config: {e}")
return None, "error"
def _log_audit(
self,
config_input: Dict[str, Any],
write_status: str,
metadata: Dict[str, Any]
) -> str:
"""Log audit record for configuration event"""
audit_id = str(uuid.uuid4())
# Calculate hash of input
input_hash = hashlib.sha256(
json.dumps(config_input, sort_keys=True).encode()
).hexdigest()[:16]
audit_entry = {
"audit_id": audit_id,
"timestamp": datetime.utcnow().isoformat() + "Z",
"agent": "meta.config.router",
"version": "0.1.0",
"action": "router_config_generated",
"write_status": write_status,
"input_hash": input_hash,
"environment": self._detect_environment(),
"initiator": metadata.get("initiator", "unknown"),
"metadata": metadata
}
# Append to audit log
try:
if self.audit_log_path.exists():
with open(self.audit_log_path, 'r') as f:
audit_log = json.load(f)
else:
audit_log = []
audit_log.append(audit_entry)
with open(self.audit_log_path, 'w') as f:
json.dump(audit_log, f, indent=2)
except Exception as e:
print(f"Warning: Failed to write audit log: {e}")
return audit_id
def _detect_environment(self) -> str:
"""Detect execution environment (local, cloud, ci)"""
if os.getenv("CI"):
return "ci"
elif os.getenv("CLOUD_ENV"):
return "cloud"
else:
return "local"
def main():
"""CLI entrypoint"""
if len(sys.argv) < 2:
print("Usage: meta_config_router.py <routing_config_path> [--apply_config] [--output_mode=<mode>]")
print()
print("Arguments:")
print(" routing_config_path Path to router config input file (YAML or JSON)")
print(" --apply_config Write config to ~/.claude-code-router/config.json")
print(" --output_mode=MODE Output mode: preview, file, or both (default: preview)")
sys.exit(1)
# Parse arguments
routing_config_path = sys.argv[1]
apply_config = "--apply_config" in sys.argv or "--apply-config" in sys.argv
output_mode = "preview"
for arg in sys.argv[2:]:
if arg.startswith("--output_mode=") or arg.startswith("--output-mode="):
output_mode = arg.split("=")[1]
# Run agent
agent = MetaConfigRouter()
try:
result = agent.run(
routing_config_path=routing_config_path,
apply_config=apply_config,
output_mode=output_mode
)
if result["success"]:
print("✅ meta.config.router completed successfully")
print(f"📋 Audit ID: {result['audit_id']}")
print(f"💾 Write status: {result['write_status']}")
sys.exit(0)
else:
print("❌ meta.config.router failed")
for error in result.get("errors", []):
print(f" - {error}")
sys.exit(1)
except Exception as e:
print(f"❌ Error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

374
agents/meta.create/README.md Normal file
View File

@@ -0,0 +1,374 @@
# meta.create - Component Creation Orchestrator
The intelligent orchestrator for creating Betty skills, commands, and agents from natural language descriptions.
## Purpose
`meta.create` is the primary entry point for creating Betty components. It automatically:
- **Detects** what type of component you're describing (skill, command, agent, or combination)
- **Checks** inventory to avoid duplicates
- **Analyzes** complexity to determine the optimal creation pattern
- **Creates** components in dependency order
- **Validates** compatibility and identifies gaps
- **Recommends** next steps for completion
## Why Use meta.create?
Instead of manually running multiple meta-agents (`meta.skill`, `meta.command`, `meta.agent`, `meta.compatibility`), `meta.create` orchestrates everything for you in the right order.
### Before meta.create:
```bash
# Manual workflow - you had to know the order and check everything
python3 agents/meta.command/meta_command.py description.md
# Check if it recommends creating a skill...
python3 agents/meta.skill/meta_skill.py skill_description.md
python3 agents/meta.agent/meta_agent.py agent_description.md
python3 agents/meta.compatibility/meta_compatibility.py analyze my.agent
# Check for gaps, create missing skills...
```
### With meta.create:
```bash
# One command does it all
python3 agents/meta.create/meta_create.py description.md
```
## How It Works
### Step 1: Analysis
Parses your description to determine:
- Is this a skill? command? agent?
- What artifacts are involved?
- What's the complexity level?
### Step 2: Duplicate Check
Queries registries to find existing components:
- Prevents recreating existing skills
- Shows what you can reuse
- Skips unnecessary work
### Step 3: Creation Planning
Uses `meta.command` complexity analysis to determine pattern:
- **COMMAND_ONLY**: Simple inline logic (1-3 steps)
- **SKILL_ONLY**: Reusable utility without command
- **SKILL_AND_COMMAND**: Complex logic in skill + command wrapper
- **AGENT**: Multi-skill orchestration
### Step 4: Component Creation
Creates components in dependency order:
1. **Skills first** (using `meta.skill`)
2. **Commands second** (using `meta.command`)
3. **Agents last** (using `meta.agent` with skill composition)
### Step 5: Compatibility Validation
For agents, runs `meta.compatibility` to:
- Find compatible agent pipelines
- Identify artifact gaps
- Suggest workflows
### Step 6: Recommendations
Provides actionable next steps:
- Missing skills to create
- Compatibility issues to fix
- Integration opportunities
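The six steps above can also be driven programmatically. A minimal sketch using the `ComponentCreator` class from `meta_create.py` (assumes it is run from the Betty repo root with the framework's dependencies importable; the description path is illustrative):
```python
import sys
from pathlib import Path

# Make the meta.create module importable (illustrative path setup)
sys.path.insert(0, str(Path("agents/meta.create")))
import meta_create

creator = meta_create.ComponentCreator()
report = creator.orchestrate_creation(
    description_path="examples/api_validate.md",  # illustrative description file
    auto_fill_gaps=False,
    check_duplicates=True,
)
print(report["created_components"])
print(report["gaps"], report["recommendations"])
```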
## Usage
### Basic Usage
```bash
python3 agents/meta.create/meta_create.py <description.md>
```
### Create Skill and Command
```bash
python3 agents/meta.create/meta_create.py examples/api_validate.md
```
If `api_validate.md` describes a complex command, meta.create will:
1. Analyze complexity → detects SKILL_AND_COMMAND pattern
2. Create the skill first
3. Create the command that uses the skill
4. Report what was created
### Create Agent with Dependencies
```bash
python3 agents/meta.create/meta_create.py examples/api_agent.md
```
meta.create will:
1. Detect it's an agent description
2. Check for required skills (reuse existing)
3. Create missing skills if needed
4. Create the agent with proper skill composition
5. Validate compatibility with other agents
6. Report gaps and recommendations
### Auto-Fill Gaps
```bash
python3 agents/meta.create/meta_create.py description.md --auto-fill-gaps
```
Automatically creates missing skills to fill compatibility gaps. (In the current version this step is a stub: the orchestrator lists the gaps it would fill and marks them as TODO in `meta_create.py`.)
### Skip Duplicate Check
```bash
python3 agents/meta.create/meta_create.py description.md --skip-duplicate-check
```
Force creation even if components exist (useful for updates).
### Output Formats
```bash
# Human-readable text (default)
python3 agents/meta.create/meta_create.py description.md
# JSON output for automation
python3 agents/meta.create/meta_create.py description.md --output-format json
# YAML output
python3 agents/meta.create/meta_create.py description.md --output-format yaml
```
### With Traceability
```bash
python3 agents/meta.create/meta_create.py description.md \
--requirement-id REQ-2025-042 \
--requirement-description "Create API validation agent" \
--issue-id JIRA-1234 \
--requested-by "Product Team"
```
## Description File Format
Your description file can be Markdown or JSON; meta.create detects the type automatically. A JSON sketch appears after the Markdown samples below.
### Example: Skill Description
```markdown
# Name: data.validate
# Type: skill
# Purpose:
Validate data against JSON schemas with detailed error reporting
# Inputs:
- data (JSON object to validate)
- schema (JSON schema for validation)
# Outputs:
- validation_result (validation report with errors)
# Produces Artifacts:
- validation.report
# Consumes Artifacts:
- data.json
- schema.json
```
### Example: Command Description
```markdown
# Name: /validate-api
# Type: command
# Description:
Validate API responses against OpenAPI schemas
# Execution Type: skill
# Target: api.validate
# Parameters:
- endpoint: string (required) - API endpoint to validate
- schema: string (required) - Path to OpenAPI schema
```
### Example: Agent Description
```markdown
# Name: api.validator
# Type: agent
# Purpose:
Comprehensive API testing and validation agent
# Inputs:
- api.spec
# Outputs:
- validation.report
- test.results
# Examples:
- Validate all API endpoints against OpenAPI spec
- Generate test cases from schema
```
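Since descriptions may also be JSON, the skill description above might be expressed as follows (a sketch only: the field names are inferred from the Markdown headers, not a documented schema):
```json
{
  "name": "data.validate",
  "type": "skill",
  "purpose": "Validate data against JSON schemas with detailed error reporting",
  "inputs": ["data", "schema"],
  "outputs": ["validation_result"],
  "produces_artifacts": ["validation.report"],
  "consumes_artifacts": ["data.json", "schema.json"]
}
```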
## What Gets Created
### For Skills
- `skills/{name}/skill.yaml` - Skill configuration
- `skills/{name}/{name}.py` - Python implementation stub
- `skills/{name}/test_{name}.py` - pytest test template
- `skills/{name}/README.md` - Documentation
### For Commands
- `commands/{name}.yaml` - Command manifest
- Recommendations for skill creation if needed
### For Agents
- `agents/{name}/agent.yaml` - Agent configuration
- `agents/{name}/README.md` - Documentation with usage examples
- Compatibility analysis report
## Output Report
meta.create provides a comprehensive report:
```
🎯 meta.create - Orchestrating component creation from description.md
📋 Step 1: Analyzing description...
Detected types: Skill=True, Command=True, Agent=False
🔍 Step 2: Checking for existing components...
✅ No duplicates found
🛠️ Step 3: Creating components...
📊 Analyzing command complexity...
Recommended pattern: SKILL_AND_COMMAND
Should create skill: True
🔧 Creating skill...
✅ Skill 'api.validate' created
📜 Creating command...
✅ Command '/validate-api' created
================================================================================
✨ CREATION SUMMARY
================================================================================
✅ Created 2 component(s):
• SKILL: api.validate
• COMMAND: /validate-api
================================================================================
```
## Integration with Other Meta-Agents
meta.create uses:
- **meta.command** - Complexity analysis and command generation
- **meta.skill** - Skill creation with full package
- **meta.agent** - Agent creation with skill composition
- **meta.compatibility** - Compatibility validation and gap detection
- **registry.query** - Duplicate checking
- **agent.compose** - Skill recommendation for agents
## Decision Tree
```
Description Input
        ↓
    Parse Type
        ↓
   ┌────┴──────────────┐
   ↓                   ↓
Command?             Agent?
   ↓                   ↓
Analyze              Find Skills
Complexity             ↓
   ↓                 Create Missing
SKILL_ONLY           Skills
COMMAND_ONLY           ↓
SKILL_AND_COMMAND    Create Agent
   ↓                   ↓
Create Skill         Validate Compat
   ↓                   ↓
Create Command       Report Gaps
   ↓                   ↓
  Done               Recommend
```
## Examples
### Example 1: Simple Command
```bash
# description.md specifies a simple 2-step command
python3 agents/meta.create/meta_create.py description.md
# Result: Creates COMMAND_ONLY (inline logic is sufficient)
```
### Example 2: Complex Command
```bash
# description.md specifies 10+ step validation logic
python3 agents/meta.create/meta_create.py description.md
# Result: Creates SKILL_AND_COMMAND (skill has logic, command delegates)
```
### Example 3: Multi-Agent System
```bash
# description.md describes an orchestration agent
python3 agents/meta.create/meta_create.py description.md
# Result:
# - Creates agent with existing skills
# - Validates compatibility
# - Reports: "Can receive from api.architect, can feed to report.generator"
# - Suggests pipeline workflows
```
## Benefits
- **Intelligent** - Automatically determines optimal creation pattern
- **Safe** - Checks for duplicates, prevents overwrites
- **Complete** - Creates all necessary components in order
- **Validated** - Runs compatibility checks automatically
- **Traceable** - Supports requirement tracking
- **Informative** - Provides detailed reports and recommendations
## Next Steps
After using meta.create:
1. **Review** created files
2. **Implement** TODO sections in generated code
3. **Test** with pytest
4. **Register** components (manual or use `skill.register`, etc.)
5. **Use** in your Betty workflows
## Troubleshooting
**Q: meta.create says component already exists**
A: Use `--skip-duplicate-check` to override, or rename your component
**Q: Compatibility gaps reported**
A: Use `--auto-fill-gaps` or manually create the missing skills
**Q: Wrong pattern detected**
A: Add explicit `# Type: skill` or `# Type: command` to your description
## Related Documentation
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Overview of all meta-agents
- [SKILL_COMMAND_DECISION_TREE.md](../../docs/SKILL_COMMAND_DECISION_TREE.md) - Pattern decision logic
- [ARTIFACTS.md](../../docs/ARTIFACTS.md) - Artifact metadata system
---
*Created by the Betty Framework Meta-Agent System*

80
agents/meta.create/agent.yaml Normal file
View File

@@ -0,0 +1,80 @@
name: meta.create
version: 0.1.0
description: |
Orchestrator meta-agent that intelligently creates skills, commands, and agents.
Capabilities:
- Detects component type from description
- Checks inventory for duplicates
- Analyzes complexity and determines creation pattern
- Creates skills, commands, and agents in proper order
- Validates compatibility using meta.compatibility
- Identifies gaps and provides recommendations
- Supports auto-filling missing dependencies
This is the primary entry point for creating Betty components from natural
language descriptions.
status: draft
reasoning_mode: iterative
capabilities:
- Diagnose component needs and recommend skills, commands, or agents to create
- Generate scaffolding for new framework components with proper metadata
- Coordinate validation steps to ensure compatibility before registration
skills_available:
- registry.query
- agent.compose
permissions:
- filesystem:read
- filesystem:write
- registry:read
- registry:write
artifact_metadata:
consumes:
- type: component.description
description: Natural language description of component to create
format: markdown or JSON
required: true
produces:
- type: skill.definition
description: Complete skill package with YAML, implementation, tests
optional: true
- type: command.manifest
description: Command manifest in YAML format
optional: true
- type: agent.definition
description: Agent configuration with skill composition
optional: true
- type: compatibility.report
description: Compatibility analysis showing agent relationships and gaps
optional: true
tags:
- meta
- orchestration
- creation
- automation
system_prompt: |
You are meta.create, the intelligent orchestrator for creating Betty components.
Your responsibilities:
1. Analyze component descriptions to determine type (skill/command/agent)
2. Check registries to avoid creating duplicates
3. Determine optimal creation pattern using complexity analysis
4. Create components in dependency order (skills → commands → agents)
5. Validate agent compatibility and identify gaps
6. Provide actionable recommendations for completion
Always prioritize:
- Reusing existing components over creating new ones
- Creating building blocks (skills) before orchestrators (agents)
- Validating compatibility to ensure smooth agent pipelines
- Providing clear feedback about what was created and why

555
agents/meta.create/meta_create.py Normal file
View File

@@ -0,0 +1,555 @@
#!/usr/bin/env python3
"""
meta.create - Orchestrator Meta-Agent
Intelligently orchestrates the creation of skills, commands, and agents.
Checks inventory, determines what needs to be created, validates compatibility,
and fills gaps automatically.
This is the main entry point for creating Betty components from descriptions.
"""
import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional, Set, Tuple
from datetime import datetime
# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)
# Import other meta agents by adding their paths
meta_command_path = Path(parent_dir) / "agents" / "meta.command"
meta_skill_path = Path(parent_dir) / "agents" / "meta.skill"
meta_agent_path = Path(parent_dir) / "agents" / "meta.agent"
meta_compatibility_path = Path(parent_dir) / "agents" / "meta.compatibility"
registry_query_path = Path(parent_dir) / "skills" / "registry.query"
sys.path.insert(0, str(meta_command_path))
sys.path.insert(0, str(meta_skill_path))
sys.path.insert(0, str(meta_agent_path))
sys.path.insert(0, str(meta_compatibility_path))
sys.path.insert(0, str(registry_query_path))
import meta_command
import meta_skill
import meta_agent
import meta_compatibility
import registry_query
from betty.config import BASE_DIR
from betty.logging_utils import setup_logger
from betty.traceability import get_tracer, RequirementInfo
logger = setup_logger(__name__)
class ComponentCreator:
"""Orchestrates the creation of skills, commands, and agents"""
def __init__(self, base_dir: str = BASE_DIR):
"""Initialize orchestrator"""
self.base_dir = Path(base_dir)
self.created_components = []
self.compatibility_analyzer = None
def check_duplicate(self, component_type: str, name: str) -> Optional[Dict[str, Any]]:
"""
Check if a component already exists in registry
Args:
component_type: 'skills', 'commands', or 'agents'
name: Component name to check
Returns:
Existing component info if found, None otherwise
"""
try:
result = registry_query.query_registry(
registry=component_type,
name=name,
fuzzy=False
)
if result.get("ok") and result.get("details", {}).get("matching_entries", 0) > 0:
matches = result["details"]["results"]
# Check for exact match
for match in matches:
if match["name"] == name:
return match
return None
except Exception as e:
logger.warning(f"Error checking duplicate for {name}: {e}")
return None
def parse_description_type(self, description_path: str) -> Dict[str, Any]:
"""
Determine what type of component is being described
Args:
description_path: Path to description file
Returns:
Dict with component_type and parsed metadata
"""
path = Path(description_path)
content = path.read_text()
# Try to determine type from content
result = {
"is_skill": False,
"is_command": False,
"is_agent": False,
"path": str(path)
}
content_lower = content.lower()
# Check for skill indicators
if any(x in content_lower for x in ["# produces artifacts:", "# consumes artifacts:",
"skill.yaml", "artifact_metadata"]):
result["is_skill"] = True
# Check for command indicators
if any(x in content_lower for x in ["# execution type:", "# parameters:",
"command manifest"]):
result["is_command"] = True
# Check for agent indicators
if any(x in content_lower for x in ["# skills:", "skills_available",
"agent purpose", "multi-step", "orchestrat"]):
result["is_agent"] = True
# If ambiguous, look at explicit markers
if "# type: skill" in content_lower:
result["is_skill"] = True
result["is_command"] = False
result["is_agent"] = False
elif "# type: command" in content_lower:
result["is_command"] = True
result["is_skill"] = False
result["is_agent"] = False
elif "# type: agent" in content_lower:
result["is_agent"] = True
result["is_skill"] = False
result["is_command"] = False
return result
def create_skill(
self,
description_path: str,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, Any]:
"""
Create a skill using meta.skill
Args:
description_path: Path to skill description
requirement: Optional requirement info
Returns:
Creation result
"""
logger.info(f"Creating skill from {description_path}")
creator = meta_skill.SkillCreator(base_dir=str(self.base_dir))
result = creator.create_skill(description_path, requirement=requirement)
self.created_components.append({
"type": "skill",
"name": result.get("skill_name"),
"files": result.get("created_files", []),
"trace_id": result.get("trace_id")
})
return result
def create_command(
self,
description_path: str,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, Any]:
"""
Create a command using meta.command
Args:
description_path: Path to command description
requirement: Optional requirement info
Returns:
Creation result with complexity analysis
"""
logger.info(f"Creating command from {description_path}")
creator = meta_command.CommandCreator(base_dir=str(self.base_dir))
result = creator.create_command(description_path, requirement=requirement)
self.created_components.append({
"type": "command",
"name": result.get("command_name"),
"manifest": result.get("manifest_file"),
"analysis": result.get("complexity_analysis"),
"trace_id": result.get("trace_id")
})
return result
def create_agent(
self,
description_path: str,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, Any]:
"""
Create an agent using meta.agent
Args:
description_path: Path to agent description
requirement: Optional requirement info
Returns:
Creation result
"""
logger.info(f"Creating agent from {description_path}")
creator = meta_agent.AgentCreator(
registry_path=str(self.base_dir / "registry" / "skills.json")
)
result = creator.create_agent(description_path, requirement=requirement)
self.created_components.append({
"type": "agent",
"name": result.get("name"),
"files": [result.get("agent_yaml"), result.get("readme")],
"skills": result.get("skills", []),
"trace_id": result.get("trace_id")
})
return result
def validate_compatibility(self, agent_name: str) -> Dict[str, Any]:
"""
Validate agent compatibility using meta.compatibility
Args:
agent_name: Name of agent to validate
Returns:
Compatibility analysis
"""
logger.info(f"Validating compatibility for {agent_name}")
if not self.compatibility_analyzer:
self.compatibility_analyzer = meta_compatibility.CompatibilityAnalyzer(
base_dir=str(self.base_dir)
)
self.compatibility_analyzer.scan_agents()
self.compatibility_analyzer.build_compatibility_map()
return self.compatibility_analyzer.analyze_agent(agent_name)
def orchestrate_creation(
self,
description_path: str,
auto_fill_gaps: bool = False,
check_duplicates: bool = True,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, Any]:
"""
Main orchestration method that intelligently creates components
Args:
description_path: Path to description file
auto_fill_gaps: Whether to automatically create missing dependencies
check_duplicates: Whether to check for existing components
requirement: Optional requirement info for traceability
Returns:
Comprehensive creation report
"""
print(f"🎯 meta.create - Orchestrating component creation from {description_path}\n")
report = {
"ok": True,
"description_path": description_path,
"component_type": None,
"created_components": [],
"skipped_components": [],
"compatibility_analysis": None,
"gaps": [],
"recommendations": [],
"errors": []
}
try:
# Step 1: Determine what's being described
print("📋 Step 1: Analyzing description...")
desc_type = self.parse_description_type(description_path)
print(f" Detected types: Skill={desc_type['is_skill']}, "
f"Command={desc_type['is_command']}, Agent={desc_type['is_agent']}\n")
# Step 2: Check for duplicates if requested
if check_duplicates:
print("🔍 Step 2: Checking for existing components...")
# Parse name from description
content = Path(description_path).read_text()
name_match = None
for line in content.split('\n'):
if line.strip().startswith('# Name:'):
name_match = line.replace('# Name:', '').strip()
break
if name_match:
# Check all registries
for comp_type in ['skills', 'commands', 'agents']:
existing = self.check_duplicate(comp_type, name_match)
if existing:
print(f" ⚠️ Found existing {comp_type[:-1]}: {name_match}")
report["skipped_components"].append({
"type": comp_type[:-1],
"name": name_match,
"reason": "Already exists",
"existing": existing
})
if not report["skipped_components"]:
print(" ✅ No duplicates found\n")
else:
print()
# Step 3: Create components based on type
print("🛠️ Step 3: Creating components...\n")
# If it's a command, analyze complexity first
if desc_type["is_command"]:
print(" 📊 Analyzing command complexity...")
creator = meta_command.CommandCreator(base_dir=str(self.base_dir))
# Read content for analysis
with open(description_path) as f:
full_content = f.read()
cmd_desc = creator.parse_description(description_path)
analysis = creator.analyze_complexity(cmd_desc, full_content)
print(f" Recommended pattern: {analysis['recommended_pattern']}")
print(f" Should create skill: {analysis['should_create_skill']}\n")
# Update desc_type based on analysis
if analysis['should_create_skill']:
desc_type['is_skill'] = True
# Create skill first if needed
if desc_type["is_skill"]:
print(" 🔧 Creating skill...")
skill_result = self.create_skill(description_path, requirement)
if skill_result.get("errors"):
report["errors"].extend(skill_result["errors"])
print(f" ⚠️ Skill creation had warnings\n")
else:
print(f" ✅ Skill '{skill_result['skill_name']}' created\n")
report["created_components"].append({
"type": "skill",
"name": skill_result["skill_name"],
"files": skill_result.get("created_files", [])
})
# Create command if needed
if desc_type["is_command"]:
print(" 📜 Creating command...")
command_result = self.create_command(description_path, requirement)
if command_result.get("ok"):
print(f" ✅ Command '{command_result['command_name']}' created\n")
report["created_components"].append({
"type": "command",
"name": command_result["command_name"],
"manifest": command_result.get("manifest_file"),
"pattern": command_result.get("complexity_analysis", {}).get("recommended_pattern")
})
else:
report["errors"].append(f"Command creation failed: {command_result.get('error')}")
print(f" ❌ Command creation failed\n")
# Create agent if needed
if desc_type["is_agent"]:
print(" 🤖 Creating agent...")
agent_result = self.create_agent(description_path, requirement)
print(f" ✅ Agent '{agent_result['name']}' created")
print(f" Skills: {', '.join(agent_result.get('skills', []))}\n")
report["created_components"].append({
"type": "agent",
"name": agent_result["name"],
"files": [agent_result.get("agent_yaml"), agent_result.get("readme")],
"skills": agent_result.get("skills", [])
})
# Step 4: Validate compatibility for agents
print("🔬 Step 4: Validating compatibility...\n")
compatibility = self.validate_compatibility(agent_result["name"])
if "error" not in compatibility:
report["compatibility_analysis"] = compatibility
# Check for gaps
gaps = compatibility.get("gaps", [])
if gaps:
print(f" ⚠️ Found {len(gaps)} gap(s):")
for gap in gaps:
print(f"{gap['artifact']}: {gap['issue']}")
report["gaps"].append(gap)
print()
# Add recommendations
for gap in gaps:
report["recommendations"].append(
f"Create skill to produce '{gap['artifact']}' artifact"
)
else:
print(" ✅ No compatibility gaps found\n")
# Show compatible agents
if compatibility.get("can_feed_to"):
print(f" ➡️ Can feed to {len(compatibility['can_feed_to'])} agent(s)")
if compatibility.get("can_receive_from"):
print(f" ⬅️ Can receive from {len(compatibility['can_receive_from'])} agent(s)")
print()
# Step 5: Auto-fill gaps if requested
if auto_fill_gaps and report["gaps"]:
print("🔧 Step 5: Auto-filling gaps...\n")
for gap in report["gaps"]:
print(f" TODO: Auto-create skill for '{gap['artifact']}'")
# TODO: Implement auto-gap-filling
print()
# Final summary
print("=" * 80)
print("✨ CREATION SUMMARY")
print("=" * 80)
if report["created_components"]:
print(f"\n✅ Created {len(report['created_components'])} component(s):")
for comp in report["created_components"]:
print(f"{comp['type'].upper()}: {comp['name']}")
if report["skipped_components"]:
print(f"\n⏭️ Skipped {len(report['skipped_components'])} component(s) (already exist):")
for comp in report["skipped_components"]:
print(f"{comp['type'].upper()}: {comp['name']}")
if report["gaps"]:
print(f"\n⚠️ Found {len(report['gaps'])} compatibility gap(s)")
if report["recommendations"]:
print("\n💡 Recommendations:")
for rec in report["recommendations"]:
print(f"{rec}")
print("\n" + "=" * 80 + "\n")
return report
except Exception as e:
logger.error(f"Error during orchestration: {e}", exc_info=True)
report["ok"] = False
report["errors"].append(str(e))
print(f"\n❌ Error: {e}\n")
return report
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.create - Intelligent component creation orchestrator"
)
parser.add_argument(
"description",
help="Path to component description file (.md or .json)"
)
parser.add_argument(
"--auto-fill-gaps",
action="store_true",
help="Automatically create missing dependencies"
)
parser.add_argument(
"--skip-duplicate-check",
action="store_true",
help="Skip checking for existing components"
)
parser.add_argument(
"--output-format",
choices=["json", "yaml", "text"],
default="text",
help="Output format for final report"
)
# Traceability arguments
parser.add_argument(
"--requirement-id",
help="Requirement identifier (e.g., REQ-2025-001)"
)
parser.add_argument(
"--requirement-description",
help="What this component accomplishes"
)
parser.add_argument(
"--requirement-source",
help="Source document"
)
parser.add_argument(
"--issue-id",
help="Issue tracking ID (e.g., JIRA-123)"
)
parser.add_argument(
"--requested-by",
help="Who requested this"
)
parser.add_argument(
"--rationale",
help="Why this is needed"
)
args = parser.parse_args()
# Create requirement info if provided
requirement = None
if args.requirement_id and args.requirement_description:
requirement = RequirementInfo(
id=args.requirement_id,
description=args.requirement_description,
source=args.requirement_source,
issue_id=args.issue_id,
requested_by=args.requested_by,
rationale=args.rationale
)
orchestrator = ComponentCreator()
result = orchestrator.orchestrate_creation(
description_path=args.description,
auto_fill_gaps=args.auto_fill_gaps,
check_duplicates=not args.skip_duplicate_check,
requirement=requirement
)
# Output final report in requested format
if args.output_format == "json":
print(json.dumps(result, indent=2))
elif args.output_format == "yaml":
print(yaml.dump(result, default_flow_style=False))
sys.exit(0 if result.get("ok") else 1)
if __name__ == "__main__":
main()

442
agents/meta.hook/README.md Normal file
View File

@@ -0,0 +1,442 @@
# meta.hook - Hook Creator Meta-Agent
Generates Claude Code hooks from natural language descriptions.
## Overview
**meta.hook** is a meta-agent that creates Claude Code hooks from simple description files. It generates hook configurations that execute commands in response to events like tool calls, errors, or user interactions.
**What it does:**
- Parses hook descriptions (Markdown or JSON)
- Generates `.claude/hooks.yaml` configurations
- Validates event types and hook structure
- Manages hook lifecycle (create, update, enable/disable)
- Supports tool-specific filtering
## Quick Start
### Create a Hook
```bash
python3 agents/meta.hook/meta_hook.py examples/my_hook.md
```
Output:
```
🪝 meta.hook - Creating hook from examples/my_hook.md
✨ Hook 'pre-commit-lint' created successfully!
📄 Created/updated file:
- .claude/hooks.yaml
✅ Hook 'pre-commit-lint' is ready to use
Event: before-tool-call
Command: npm run lint
```
### Hook Description Format
Create a Markdown file:
```markdown
# Name: pre-commit-lint
# Event: before-tool-call
# Tool Filter: git
# Description: Run linter before git commits
# Command: npm run lint
# Timeout: 30000
# Enabled: true
```
Or use JSON format:
```json
{
"name": "pre-commit-lint",
"event": "before-tool-call",
"tool_filter": "git",
"description": "Run linter before git commits",
"command": "npm run lint",
"timeout": 30000,
"enabled": true
}
```
## Event Types
Supported Claude Code events:
- **before-tool-call** - Before any tool is executed
- **after-tool-call** - After any tool completes
- **on-error** - When a tool call fails
- **user-prompt-submit** - When user submits a prompt
- **assistant-response** - After assistant responds
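Before running meta.hook, you can sanity-check an event name against this list; the sketch below mirrors the validation in `meta_hook.py`:

```python
VALID_EVENTS = [
    "before-tool-call",
    "after-tool-call",
    "on-error",
    "user-prompt-submit",
    "assistant-response",
]

def check_event(event: str) -> None:
    """Raise if the event is not a supported Claude Code hook event."""
    if event not in VALID_EVENTS:
        raise ValueError(
            f"Invalid event type: {event}. "
            f"Must be one of: {', '.join(VALID_EVENTS)}"
        )
```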
## Generated Structure
meta.hook generates or updates `.claude/hooks.yaml`:
```yaml
hooks:
- name: pre-commit-lint
event: before-tool-call
command: npm run lint
description: Run linter before git commits
enabled: true
tool_filter: git
timeout: 30000
```
## Usage Examples
### Example 1: Pre-commit Linting
**Description file** (`lint_hook.md`):
```markdown
# Name: pre-commit-lint
# Event: before-tool-call
# Tool Filter: git
# Description: Run linter before git commits to ensure code quality
# Command: npm run lint
# Timeout: 30000
```
**Create hook:**
```bash
python3 agents/meta.hook/meta_hook.py lint_hook.md
```
### Example 2: Post-deployment Notification
**Description file** (`deploy_notify.json`):
```json
{
"name": "deploy-notify",
"event": "after-tool-call",
"tool_filter": "deploy",
"description": "Send notification after deployment",
"command": "./scripts/notify-team.sh",
"timeout": 10000
}
```
**Create hook:**
```bash
python3 agents/meta.hook/meta_hook.py deploy_notify.json
```
### Example 3: Error Logging
**Description file** (`error_logger.md`):
```markdown
# Name: error-logger
# Event: on-error
# Description: Log errors to monitoring system
# Command: ./scripts/log-error.sh "{error}" "{tool}"
# Timeout: 5000
# Enabled: true
```
**Create hook:**
```bash
python3 agents/meta.hook/meta_hook.py error_logger.md
```
## Hook Parameters
### Required
- **name** - Unique hook identifier
- **event** - Trigger event type
- **command** - Shell command to execute
### Optional
- **description** - What the hook does
- **tool_filter** - Only trigger for specific tools (e.g., "git", "npm", "docker")
- **enabled** - Whether hook is active (default: true)
- **timeout** - Command timeout in milliseconds (default: none)
## Tool Filters
Restrict hooks to specific tools:
```markdown
# Tool Filter: git
```
This hook only triggers for git-related tool calls.
Common tool filters:
- `git` - Git operations
- `npm` - NPM commands
- `docker` - Docker commands
- `python` - Python execution
- `bash` - Shell commands
## Managing Hooks
### Update Existing Hook
Run meta.hook with the same hook name to update:
```bash
python3 agents/meta.hook/meta_hook.py updated_hook.md
```
Output:
```
⚠️ Warning: Hook 'pre-commit-lint' already exists, updating...
✨ Hook 'pre-commit-lint' created successfully!
```
### Disable Hook
Set `Enabled: false` in description:
```markdown
# Name: my-hook
# Event: before-tool-call
# Command: echo "test"
# Enabled: false
```
### Multiple Hooks
Create multiple hook descriptions and run meta.hook for each:
```bash
for hook in hooks/*.md; do
python3 agents/meta.hook/meta_hook.py "$hook"
done
```
## Integration
### With Claude Code
Hooks are automatically loaded by Claude Code from `.claude/hooks.yaml`.
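To see what is currently registered, a minimal sketch (assuming PyYAML is installed and the file exists):

```python
import yaml

# List hooks from the generated configuration file
with open(".claude/hooks.yaml") as f:
    config = yaml.safe_load(f) or {}

for hook in config.get("hooks", []):
    state = "enabled" if hook.get("enabled", True) else "disabled"
    print(f"{hook['name']}: {hook['event']} -> {hook['command']} ({state})")
```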
### With meta.agent
Create agents that use hooks:
```yaml
name: ci.agent
description: Continuous integration agent
# Hooks will trigger during agent execution
```
## Artifact Types
### Consumes
- **hook-description** - Natural language hook requirements
- Pattern: `**/hook_description.md`
- Format: Markdown or JSON
### Produces
- **hook-config** - Claude Code hook configuration
- Pattern: `.claude/hooks.yaml`
- Schema: `schemas/hook-config.json`
## Common Workflows
### Workflow 1: Create and Test Hook
```bash
# 1. Create hook description
cat > my_hook.md <<EOF
# Name: test-runner
# Event: after-tool-call
# Tool Filter: git
# Description: Run tests after git push
# Command: npm test
EOF
# 2. Generate hook
python3 agents/meta.hook/meta_hook.py my_hook.md
# 3. Test hook (trigger the event)
git add .
git commit -m "test"
```
### Workflow 2: Create Pre-commit Workflow
```bash
# Create linting hook
cat > lint_hook.md <<EOF
# Name: lint
# Event: before-tool-call
# Tool Filter: git
# Command: npm run lint
EOF
python3 agents/meta.hook/meta_hook.py lint_hook.md
# Create test hook
cat > test_hook.md <<EOF
# Name: test
# Event: before-tool-call
# Tool Filter: git
# Command: npm test
EOF
python3 agents/meta.hook/meta_hook.py test_hook.md
```
### Workflow 3: Error Monitoring
```bash
# Create error notification hook
cat > error_notify.md <<EOF
# Name: error-notify
# Event: on-error
# Description: Send error notifications
# Command: ./scripts/notify.sh
# Timeout: 5000
EOF
python3 agents/meta.hook/meta_hook.py error_notify.md
```
## Tips & Best Practices
### Command Design
**Use explicit script paths (project-relative or absolute) rather than bare command names:**
```markdown
# Good
# Command: ./scripts/lint.sh
# Bad
# Command: lint.sh
```
**Set appropriate timeouts:**
```markdown
# Fast operations: 5-10 seconds
# Timeout: 10000
# Longer operations: 30-60 seconds
# Timeout: 60000
```
**Handle errors gracefully:**
```bash
#!/bin/bash
# In your hook script
set -e # Exit on error
trap 'echo "Hook failed"' ERR
```
### Tool Filters
Be specific with tool filters to avoid unnecessary executions:
```markdown
# Specific
# Tool Filter: git
# Too broad
# (no tool filter - runs for ALL tools)
```
### Testing Hooks
Test hooks before enabling:
```markdown
# Enabled: false
```
Then manually test the command, and enable once verified.
## Troubleshooting
### Hook not triggering
**Check event type:**
```bash
# Verify event is correct in .claude/hooks.yaml
cat .claude/hooks.yaml
```
**Check tool filter:**
```markdown
# If using tool filter, ensure it matches the tool being called
# Tool Filter: git
```
### Command fails
**Check command path:**
```bash
# Test command manually
npm run lint
# If fails, fix path or installation
```
**Check timeout:**
```markdown
# Increase timeout for slow commands
# Timeout: 60000
```
### Hook already exists warning
This is normal when updating hooks. The old version is replaced with the new one.
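Internally the update is a replace-by-name merge over the `hooks` list, roughly:

```python
def upsert_hook(config: dict, new_hook: dict) -> None:
    """Drop any existing hook with the same name, then append the new definition."""
    hooks = config.setdefault("hooks", [])
    hooks[:] = [h for h in hooks if h.get("name") != new_hook["name"]]
    hooks.append(new_hook)
```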
## Architecture
```
meta.hook
├─ Input: hook-description (Markdown/JSON)
├─ Parser: extract name, event, command, filters
├─ Generator: create/update hooks.yaml
├─ Validator: check event types and structure
└─ Output: .claude/hooks.yaml configuration
```
## Related Documentation
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [hook-description schema](../../schemas/hook-description.json)
- [hook-config schema](../../schemas/hook-config.json)
## How Claude Uses This
Claude can:
1. **Create hooks on demand** - "Create a pre-commit linting hook"
2. **Automate workflows** - "Add error logging for all failures"
3. **Build CI/CD pipelines** - "Create hooks for test, lint, and deploy"
4. **Monitor executions** - "Add notification hooks for important events"
meta.hook enables powerful event-driven automation in Claude Code!

64
agents/meta.hook/agent.yaml Normal file
View File

@@ -0,0 +1,64 @@
name: meta.hook
version: 0.1.0
description: Hook creator meta-agent that generates Claude Code hooks from descriptions
status: draft
reasoning_mode: iterative
capabilities:
- Translate natural language specifications into validated hook manifests
- Recommend appropriate hook events, commands, and execution patterns
- Simulate and document hook behavior for developer adoption
type: meta-agent
skills_available:
- hook.define
- hook.register
- hook.simulate
artifact_metadata:
consumes:
- type: hook-description
required: true
produces:
- type: hook-config
system_prompt: |
You are meta.hook, a specialized meta-agent that creates Claude Code hooks from natural language descriptions.
Your role:
1. Parse hook descriptions (Markdown or JSON format)
2. Generate hook configurations (.claude/hooks.yaml)
3. Validate hook names and event types
4. Document hook usage
Hook description format:
- Name: Descriptive hook identifier
- Event: Trigger event (before-tool-call, after-tool-call, on-error, etc.)
- Description: What the hook does
- Command: Shell command to execute
- Tool Filter (optional): Only trigger for specific tools
- Enabled (optional): Whether hook is active (default: true)
Generated hooks.yaml format:
```yaml
hooks:
- name: hook-name
event: trigger-event
description: What it does
command: shell command
enabled: true
tool_filter: tool-name # optional
timeout: 30000 # optional, in milliseconds
```
Event types:
- before-tool-call: Before any tool is called
- after-tool-call: After any tool completes
- on-error: When a tool call fails
- user-prompt-submit: When user submits a prompt
- assistant-response: After assistant responds
Always:
- Validate event types
- Provide clear descriptions
- Set reasonable timeouts
- Document tool filters
- Include usage examples in generated documentation

349
agents/meta.hook/meta_hook.py Executable file
View File

@@ -0,0 +1,349 @@
#!/usr/bin/env python3
"""
meta.hook - Hook Creator Meta-Agent
Generates Claude Code hooks from natural language descriptions.
Usage:
python3 agents/meta.hook/meta_hook.py <hook_description_file>
Examples:
python3 agents/meta.hook/meta_hook.py examples/lint_hook.md
python3 agents/meta.hook/meta_hook.py examples/notify_hook.json
"""
import os
import sys
import json
import yaml
import re
from pathlib import Path
from typing import Dict, List, Any, Optional
# Add parent directory to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))
from betty.config import BASE_DIR
from betty.logging_utils import setup_logger
from betty.traceability import get_tracer, RequirementInfo
logger = setup_logger(__name__)
class HookCreator:
"""Creates Claude Code hooks from descriptions"""
VALID_EVENTS = [
"before-tool-call",
"after-tool-call",
"on-error",
"user-prompt-submit",
"assistant-response"
]
def __init__(self, base_dir: str = BASE_DIR):
"""Initialize hook creator"""
self.base_dir = Path(base_dir)
self.hooks_dir = self.base_dir / ".claude"
def parse_description(self, description_path: str) -> Dict[str, Any]:
"""
Parse hook description from Markdown or JSON file
Args:
description_path: Path to description file
Returns:
Dict with hook configuration
"""
path = Path(description_path)
if not path.exists():
raise FileNotFoundError(f"Description file not found: {description_path}")
# Read file
content = path.read_text()
# Try JSON first
if path.suffix == ".json":
return json.loads(content)
# Parse Markdown format
hook_desc = {}
# Extract fields
patterns = {
"name": r"#\s*Name:\s*(.+)",
"event": r"#\s*Event:\s*(.+)",
"description": r"#\s*Description:\s*(.+)",
"command": r"#\s*Command:\s*(.+)",
"tool_filter": r"#\s*Tool\s*Filter:\s*(.+)",
"enabled": r"#\s*Enabled:\s*(.+)",
"timeout": r"#\s*Timeout:\s*(\d+)"
}
for field, pattern in patterns.items():
match = re.search(pattern, content, re.IGNORECASE)
if match:
value = match.group(1).strip()
# Convert types
if field == "enabled":
value = value.lower() in ("true", "yes", "1")
elif field == "timeout":
value = int(value)
hook_desc[field] = value
# Validate required fields
required = ["name", "event", "command"]
missing = [f for f in required if f not in hook_desc]
if missing:
raise ValueError(f"Missing required fields: {', '.join(missing)}")
# Validate event type
if hook_desc["event"] not in self.VALID_EVENTS:
raise ValueError(
f"Invalid event type: {hook_desc['event']}. "
f"Must be one of: {', '.join(self.VALID_EVENTS)}"
)
return hook_desc
def generate_hooks_yaml(self, hook_desc: Dict[str, Any]) -> str:
"""
Generate hooks.yaml configuration
Args:
hook_desc: Parsed hook description
Returns:
YAML string
"""
hook_config = {
"name": hook_desc["name"],
"event": hook_desc["event"],
"command": hook_desc["command"]
}
# Add optional fields
if "description" in hook_desc:
hook_config["description"] = hook_desc["description"]
if "enabled" in hook_desc:
hook_config["enabled"] = hook_desc["enabled"]
else:
hook_config["enabled"] = True
if "tool_filter" in hook_desc:
hook_config["tool_filter"] = hook_desc["tool_filter"]
if "timeout" in hook_desc:
hook_config["timeout"] = hook_desc["timeout"]
# Wrap in hooks array
hooks_yaml = {"hooks": [hook_config]}
return yaml.dump(hooks_yaml, default_flow_style=False, sort_keys=False)
def create_hook(
self,
description_path: str,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, Any]:
"""
Create hook from description file
Args:
description_path: Path to description file
requirement: Optional requirement information for traceability
Returns:
Dict with creation results
"""
try:
print(f"🪝 meta.hook - Creating hook from {description_path}\n")
# Parse description
hook_desc = self.parse_description(description_path)
# Generate hooks.yaml
hooks_yaml = self.generate_hooks_yaml(hook_desc)
# Ensure .claude directory exists
self.hooks_dir.mkdir(parents=True, exist_ok=True)
# Write hooks.yaml (or append if exists)
hooks_file = self.hooks_dir / "hooks.yaml"
if hooks_file.exists():
# Load existing hooks
existing = yaml.safe_load(hooks_file.read_text())
if not existing or not isinstance(existing, dict):
existing = {"hooks": []}
if "hooks" not in existing:
existing["hooks"] = []
if not isinstance(existing["hooks"], list):
existing["hooks"] = []
# Add new hook
new_hook = yaml.safe_load(hooks_yaml)["hooks"][0]
# Check for duplicate
hook_names = [h.get("name") for h in existing["hooks"] if isinstance(h, dict)]
if new_hook["name"] in hook_names:
print(f"⚠️ Warning: Hook '{new_hook['name']}' already exists, updating...")
# Remove old version
existing["hooks"] = [h for h in existing["hooks"] if h["name"] != new_hook["name"]]
existing["hooks"].append(new_hook)
hooks_yaml = yaml.dump(existing, default_flow_style=False, sort_keys=False)
# Write file
hooks_file.write_text(hooks_yaml)
print(f"✨ Hook '{hook_desc['name']}' created successfully!\n")
print(f"📄 Created/updated file:")
print(f" - {hooks_file}\n")
print(f"✅ Hook '{hook_desc['name']}' is ready to use")
print(f" Event: {hook_desc['event']}")
print(f" Command: {hook_desc['command']}")
result = {
"ok": True,
"status": "success",
"hook_name": hook_desc["name"],
"hooks_file": str(hooks_file)
}
# Log traceability if requirement provided
trace_id = None
if requirement:
try:
tracer = get_tracer()
# Create component ID from hook name
component_id = f"hook.{hook_desc['name'].replace('-', '_')}"
trace_id = tracer.log_creation(
component_id=component_id,
component_name=hook_desc["name"],
component_type="hook",
component_version="0.1.0",
component_file_path=str(hooks_file),
input_source_path=description_path,
created_by_tool="meta.hook",
created_by_version="0.1.0",
requirement=requirement,
tags=["hook", "auto-generated", hook_desc["event"]],
project="Betty Framework"
)
# Log validation check
validation_details = {
"checks_performed": [
{"name": "hook_structure", "status": "passed"},
{"name": "event_validation", "status": "passed",
"message": f"Valid event type: {hook_desc['event']}"}
]
}
# Check for tool filter
if hook_desc.get("tool_filter"):
validation_details["checks_performed"].append({
"name": "tool_filter_validation",
"status": "passed",
"message": f"Tool filter: {hook_desc['tool_filter']}"
})
tracer.log_verification(
component_id=component_id,
check_type="validation",
tool="meta.hook",
result="passed",
details=validation_details
)
result["trace_id"] = trace_id
result["component_id"] = component_id
except Exception as e:
print(f"⚠️ Warning: Could not log traceability: {e}")
return result
except Exception as e:
print(f"❌ Error creating hook: {e}")
logger.error(f"Error creating hook: {e}", exc_info=True)
return {
"ok": False,
"status": "failed",
"error": str(e)
}
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.hook - Create hooks from descriptions"
)
parser.add_argument(
"description",
help="Path to hook description file (.md or .json)"
)
# Traceability arguments
parser.add_argument(
"--requirement-id",
help="Requirement identifier (e.g., REQ-2025-001)"
)
parser.add_argument(
"--requirement-description",
help="What this hook accomplishes"
)
parser.add_argument(
"--requirement-source",
help="Source document"
)
parser.add_argument(
"--issue-id",
help="Issue tracking ID (e.g., JIRA-123)"
)
parser.add_argument(
"--requested-by",
help="Who requested this"
)
parser.add_argument(
"--rationale",
help="Why this is needed"
)
args = parser.parse_args()
# Create requirement info if provided
requirement = None
if args.requirement_id and args.requirement_description:
requirement = RequirementInfo(
id=args.requirement_id,
description=args.requirement_description,
source=args.requirement_source,
issue_id=args.issue_id,
requested_by=args.requested_by,
rationale=args.rationale
)
creator = HookCreator()
result = creator.create_hook(args.description, requirement=requirement)
# Display traceability info if available
if result.get("trace_id"):
print(f"\n📝 Traceability: {result['trace_id']}")
print(f" View trace: python3 betty/trace_cli.py show {result['component_id']}")
sys.exit(0 if result.get("ok") else 1)
if __name__ == "__main__":
main()

647
agents/meta.skill/README.md Normal file
View File

@@ -0,0 +1,647 @@
# meta.skill - Skill Creator Meta-Agent
Generates complete Betty skills from natural language descriptions.
## Overview
**meta.skill** is a meta-agent that creates fully functional skills from simple description files. It generates skill definitions, Python implementations, tests, and documentation, following Betty Framework conventions.
**What it does:**
- Parses skill descriptions (Markdown or JSON)
- Generates `skill.yaml` configurations
- Creates Python implementation stubs
- Generates test templates
- Creates comprehensive SKILL.md documentation
- Validates skill names and structure
- Registers artifact metadata
## Quick Start
### Create a Skill
```bash
# Create skill from description
python3 agents/meta.skill/meta_skill.py examples/my_skill_description.md
```
Output:
```
🛠️ meta.skill - Creating skill from examples/my_skill_description.md
✨ Skill 'data.transform' created successfully!
📄 Created files:
- skills/data.transform/skill.yaml
- skills/data.transform/data_transform.py
- skills/data.transform/test_data_transform.py
- skills/data.transform/SKILL.md
✅ Skill 'data.transform' is ready to use
Add to agent skills_available to use it.
```
### Skill Description Format
Create a Markdown file with this structure:
```markdown
# Name: domain.action
# Purpose:
Brief description of what the skill does
# Inputs:
- input_parameter_1
- input_parameter_2 (optional)
# Outputs:
- output_file_1.json
- output_file_2.yaml
# Permissions:
- filesystem:read
- filesystem:write
# Produces Artifacts:
- artifact-type-1
- artifact-type-2
# Consumes Artifacts:
- artifact-type-3
# Implementation Notes:
Detailed guidance for implementing the skill logic
```
Or use JSON format:
```json
{
"name": "domain.action",
"purpose": "Brief description",
"inputs": ["param1", "param2"],
"outputs": ["output.json"],
"permissions": ["filesystem:read"],
"artifact_produces": ["artifact-type-1"],
"artifact_consumes": ["artifact-type-2"],
"implementation_notes": "Implementation guidance"
}
```
## Generated Structure
For a skill named `data.transform`, meta.skill generates:
```
skills/data.transform/
├── skill.yaml # Skill configuration
├── data_transform.py # Python implementation
├── test_data_transform.py # Test suite
└── SKILL.md # Documentation
```
### skill.yaml
Complete skill configuration following Betty conventions:
```yaml
name: data.transform
version: 0.1.0
description: Transform data between formats
inputs:
- input_file
- target_format
outputs:
- transformed_data.json
status: active
permissions:
- filesystem:read
- filesystem:write
entrypoints:
- command: /data/transform
handler: data_transform.py
runtime: python
description: Transform data between formats
artifact_metadata:
produces:
- type: transformed-data
consumes:
- type: raw-data
```
### Implementation Stub
Python implementation with:
- Proper imports and logging
- Class structure
- execute() method with typed parameters
- CLI entry point with argparse
- Error handling
- Output formatting (JSON/YAML)
```python
#!/usr/bin/env python3
"""
data.transform - Transform data between formats
Generated by meta.skill
"""
import os
import sys
import json
import yaml
from pathlib import Path
from typing import Dict, List, Any, Optional
from betty.config import BASE_DIR
from betty.logging_utils import setup_logger
logger = setup_logger(__name__)
class DataTransform:
"""Transform data between formats"""
def __init__(self, base_dir: str = BASE_DIR):
"""Initialize skill"""
self.base_dir = Path(base_dir)
def execute(self, input_file: Optional[str] = None,
target_format: Optional[str] = None) -> Dict[str, Any]:
"""Execute the skill"""
try:
logger.info("Executing data.transform...")
# TODO: Implement skill logic here
# Implementation notes: [your notes here]
result = {
"ok": True,
"status": "success",
"message": "Skill executed successfully"
}
logger.info("Skill completed successfully")
return result
except Exception as e:
logger.error(f"Error executing skill: {e}")
return {
"ok": False,
"status": "failed",
"error": str(e)
}
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="Transform data between formats"
)
parser.add_argument("--input-file", help="input_file")
parser.add_argument("--output-format", help="output_format")
parser.add_argument(
"--output-format",
choices=["json", "yaml"],
default="json",
help="Output format"
)
args = parser.parse_args()
skill = DataTransform()
result = skill.execute(
input_file=args.input_file,
target_format=args.target_format,
)
if args.output_format == "json":
print(json.dumps(result, indent=2))
else:
print(yaml.dump(result, default_flow_style=False))
sys.exit(0 if result.get("ok") else 1)
if __name__ == "__main__":
main()
```
### Test Template
pytest-based test suite:
```python
#!/usr/bin/env python3
"""Tests for data.transform"""
import pytest
import sys
import os
from pathlib import Path
# Import the skill module from its own directory (skill directories contain a dot)
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import data_transform
class TestDataTransform:
"""Tests for DataTransform"""
def setup_method(self):
"""Setup test fixtures"""
self.skill = data_transform.DataTransform()
def test_initialization(self):
"""Test skill initializes correctly"""
assert self.skill is not None
assert self.skill.base_dir is not None
def test_execute_basic(self):
"""Test basic execution"""
result = self.skill.execute()
assert result is not None
assert "ok" in result
assert "status" in result
def test_execute_success(self):
"""Test successful execution"""
result = self.skill.execute()
assert result["ok"] is True
assert result["status"] == "success"
# TODO: Add more specific tests
def test_cli_help(capsys):
"""Test CLI help message"""
sys.argv = ["data_transform.py", "--help"]
with pytest.raises(SystemExit) as exc_info:
data_transform.main()
assert exc_info.value.code == 0
```
## Skill Naming Convention
Skills must follow the `domain.action` format:
- **domain**: Category (e.g., `data`, `api`, `file`, `text`)
- **action**: Operation (e.g., `validate`, `transform`, `parse`)
- Use only lowercase letters and numbers (no hyphens, underscores, or special characters)
Valid examples:
- ✅ `data.validate`
- ✅ `api.test`
- ✅ `file.compress`
- ✅ `text.summarize`
Invalid examples:
- ❌ `data.validate-json` (hyphen not allowed)
- ❌ `data_validate` (underscore not allowed)
- ❌ `DataValidate` (uppercase not allowed)
- ❌ `validate` (missing domain)
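meta.skill enforces this format with a regex before generating any files; a minimal equivalent check:

```python
import re

SKILL_NAME_PATTERN = re.compile(r"^[a-z0-9]+\.[a-z0-9]+$")

def is_valid_skill_name(name: str) -> bool:
    """True for names like data.validate; rejects hyphens, underscores, uppercase."""
    return SKILL_NAME_PATTERN.match(name) is not None

assert is_valid_skill_name("data.validate")
assert not is_valid_skill_name("data.validate-json")
```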
## Usage Examples
### Example 1: JSON Validator
**Description file** (`json_validator.md`):
```markdown
# Name: data.validatejson
# Purpose:
Validates JSON files against JSON Schema definitions
# Inputs:
- json_file_path
- schema_file_path (optional)
# Outputs:
- validation_result.json
# Permissions:
- filesystem:read
# Produces Artifacts:
- validation-report
# Implementation Notes:
Use Python's jsonschema library for validation
```
**Create skill:**
```bash
python3 agents/meta.skill/meta_skill.py json_validator.md
```
### Example 2: API Tester
**Description file** (`api_tester.json`):
```json
{
"name": "api.test",
"purpose": "Test API endpoints and generate reports",
"inputs": ["openapi_spec_path", "base_url"],
"outputs": ["test_results.json"],
"permissions": ["network:http"],
"artifact_produces": ["test-report"],
"artifact_consumes": ["openapi-spec"],
"implementation_notes": "Use requests library to test each endpoint"
}
```
**Create skill:**
```bash
python3 agents/meta.skill/meta_skill.py api_tester.json
```
### Example 3: File Compressor
**Description file** (`file_compressor.md`):
```markdown
# Name: file.compress
# Purpose:
Compress files using various algorithms
# Inputs:
- input_path
- compression_type (gzip, zip, tar.gz)
# Outputs:
- compressed_file
# Permissions:
- filesystem:read
- filesystem:write
# Implementation Notes:
Support gzip, zip, and tar.gz formats using Python standard library
```
**Create skill:**
```bash
python3 agents/meta.skill/meta_skill.py file_compressor.md
```
## Integration
### With meta.agent
Create an agent that uses the skill:
```yaml
name: data.validator
description: Data validation agent
skills_available:
- data.validatejson # Skill created by meta.skill
```
### With plugin.sync
Sync skills to plugin format:
```bash
python3 skills/plugin.sync/plugin_sync.py
```
This converts `skill.yaml` to commands in `.claude-plugin/plugin.yaml`.
## Artifact Types
### Consumes
- **skill-description** - Natural language skill requirements
- Pattern: `**/skill_description.md`
- Format: Markdown or JSON
### Produces
- **skill-definition** - Complete skill configuration
- Pattern: `skills/*/skill.yaml`
- Schema: `schemas/skill-definition.json`
- **skill-implementation** - Python implementation code
- Pattern: `skills/*/[skill_module].py`
- **skill-tests** - Test suite
- Pattern: `skills/*/test_[skill_module].py`
- **skill-documentation** - SKILL.md documentation
- Pattern: `skills/*/SKILL.md`
## Common Workflows
### Workflow 1: Create and Test Skill
```bash
# 1. Create skill description
cat > my_skill.md <<EOF
# Name: data.parse
# Purpose: Parse structured data from text
# Inputs:
- input_text
# Outputs:
- parsed_data.json
# Permissions:
- filesystem:write
EOF
# 2. Generate skill
python3 agents/meta.skill/meta_skill.py my_skill.md
# 3. Implement logic (edit the generated file)
vim skills/data.parse/data_parse.py
# 4. Run tests
pytest skills/data.parse/test_data_parse.py -v
# 5. Test CLI
python3 skills/data.parse/data_parse.py --help
```
### Workflow 2: Create Skill for Agent
```bash
# 1. Create skill
python3 agents/meta.skill/meta_skill.py api_analyzer_skill.md
# 2. Add to agent
echo " - api.analyze" >> agents/api.agent/agent.yaml
# 3. Sync to plugin
python3 skills/plugin.sync/plugin_sync.py
```
### Workflow 3: Batch Create Skills
```bash
# Create multiple skills
for desc in skills_to_create/*.md; do
echo "Creating skill from $desc..."
python3 agents/meta.skill/meta_skill.py "$desc"
done
```
## Tips & Best Practices
### Skill Descriptions
**Be specific about purpose:**
```markdown
# Good
# Purpose: Validate JSON against JSON Schema Draft 07
# Bad
# Purpose: Validate stuff
```
**Include implementation notes:**
```markdown
# Implementation Notes:
Use the jsonschema library. Support Draft 07 schemas.
Provide detailed error messages with line numbers.
```
**Specify optional parameters:**
```markdown
# Inputs:
- required_param
- optional_param (optional)
- another_optional (optional, defaults to 'value')
```
### Parameter Naming
Parameters are automatically sanitized:
- Special characters removed (except `-`, `_`, spaces)
- Converted to lowercase
- Spaces and hyphens become underscores
Example conversions:
- `"Schema File Path (optional)"``schema_file_path_optional`
- `"API-Key"``api_key`
- `"Input Data"``input_data`
### Implementation Strategy
1. **Generate skeleton first** - Let meta.skill create structure
2. **Implement gradually** - Add logic to `execute()` method
3. **Test incrementally** - Run tests after each change
4. **Update documentation** - Keep README current
### Artifact Metadata
Always specify artifact types for interoperability:
```markdown
# Produces Artifacts:
- openapi-spec
- validation-report
# Consumes Artifacts:
- api-requirements
```
This enables:
- Agent discovery via meta.compatibility
- Pipeline suggestions via meta.suggest
- Workflow orchestration
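For example, a consumer could discover which skills produce a given artifact type; the exact layout of `registry/skills.json` is an assumption in this sketch:

```python
import json

def producers_of(artifact_type: str, registry_path: str = "registry/skills.json") -> list:
    """Return names of registered skills whose artifact_metadata produces the given type."""
    with open(registry_path) as f:
        registry = json.load(f)
    # Assumed layout: either a list of manifests or {"skills": [...]}
    skills = registry if isinstance(registry, list) else registry.get("skills", [])
    return [
        s.get("name")
        for s in skills
        if any(p.get("type") == artifact_type
               for p in s.get("artifact_metadata", {}).get("produces", []))
    ]
```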
## Troubleshooting
### Invalid skill name
```
Error: Skill name must be in domain.action format: my-skill
```
**Solution:** Use format `domain.action` with only alphanumeric characters:
```markdown
# Wrong: my-skill, my_skill, MySkill
# Right: data.transform, api.validate
```
### Skill already exists
```
Error: Skill directory already exists: skills/data.validate
```
**Solution:** Remove existing skill or use different name:
```bash
rm -rf skills/data.validate
```
### Import errors in generated code
```
ModuleNotFoundError: No module named 'betty.config'
```
**Solution:** Ensure Betty framework is in Python path:
```bash
export PYTHONPATH="${PYTHONPATH}:/home/user/betty"
```
### Test failures
```
ModuleNotFoundError: No module named 'data_validate'
```
**Solution:** Run tests from Betty root directory:
```bash
cd /home/user/betty
pytest skills/data.validate/test_data_validate.py -v
```
## Architecture
```
meta.skill
├─ Input: skill-description (Markdown/JSON)
├─ Parser: extract name, purpose, inputs, outputs
├─ Generator: create skill.yaml, Python, tests, README
├─ Validator: check naming conventions
└─ Output: Complete skill directory structure
```
## Next Steps
After creating a skill with meta.skill:
1. **Implement logic** - Add functionality to `execute()` method
2. **Write tests** - Expand test coverage beyond basic tests
3. **Add to agent** - Include in agent's `skills_available`
4. **Sync to plugin** - Run plugin.sync to update plugin.yaml
5. **Test integration** - Verify skill works in agent context
6. **Document usage** - Update README with examples
## Related Documentation
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [skill-description schema](../../schemas/skill-description.json)
- [skill-definition schema](../../schemas/skill-definition.json)
## How Claude Uses This
Claude can:
1. **Create skills on demand** - "Create a skill that validates YAML files"
2. **Extend agent capabilities** - "Add a JSON validator skill to this agent"
3. **Build skill libraries** - "Create skills for all common data operations"
4. **Prototype quickly** - Test ideas by generating skill scaffolds
meta.skill enables rapid skill development and agent expansion!

306
agents/meta.skill/agent.yaml Normal file
View File

@@ -0,0 +1,306 @@
name: meta.skill
version: 0.4.0
description: |
Creates complete, functional skills from natural language descriptions.
This meta-agent transforms skill descriptions into production-ready skills with:
- Complete skill.yaml definition with validated artifact types
- Artifact flow analysis showing producers/consumers
- Production-quality Python implementation with type hints
- Comprehensive test templates
- Complete documentation with examples
- Dependency validation
- Registry registration with artifact_metadata
- Discoverability verification
Ensures skills follow Betty Framework conventions and are ready for use in agents.
Version 0.4.0 adds artifact flow analysis, improved code templates with
type hints parsed from skill.yaml, and dependency validation.
artifact_metadata:
consumes:
- type: skill-description
file_pattern: "**/skill_description.md"
content_type: "text/markdown"
description: "Natural language description of skill requirements"
schema: "schemas/skill-description.json"
produces:
- type: skill-definition
file_pattern: "skills/*/skill.yaml"
content_type: "application/yaml"
schema: "schemas/skill-definition.json"
description: "Complete skill configuration"
- type: skill-implementation
file_pattern: "skills/*/*.py"
content_type: "text/x-python"
description: "Python implementation with proper structure"
- type: skill-tests
file_pattern: "skills/*/test_*.py"
content_type: "text/x-python"
description: "Test template with example tests"
- type: skill-documentation
file_pattern: "skills/*/SKILL.md"
content_type: "text/markdown"
description: "Skill documentation and usage guide"
status: draft
reasoning_mode: iterative
capabilities:
- Convert skill concepts into production-ready packages with tests and docs
- Ensure generated skills follow registry, artifact, and permission conventions
- Coordinate registration and documentation updates for new skills
skills_available:
- skill.create
- skill.define
- artifact.define # Generate artifact metadata
- artifact.validate.types # Validate artifact types against registry
permissions:
- filesystem:read
- filesystem:write
system_prompt: |
You are meta.skill, the skill creator for Betty Framework.
Your purpose is to transform natural language skill descriptions into complete,
production-ready skills that follow Betty conventions.
## Your Workflow
1. **Parse Description** - Understand skill requirements
- Extract name, purpose, inputs, outputs
- Identify artifact types in produces/consumes sections
- Identify required permissions
- Understand implementation requirements
2. **Validate Artifact Types** - CRITICAL: Verify before generating skill.yaml
- Extract ALL artifact types from skill description (produces + consumes sections)
- Call artifact.validate.types skill:
```bash
python3 skills/artifact.validate.types/artifact_validate_types.py \
--artifact_types '["threat-model", "data-flow-diagrams", "architecture-overview"]' \
--check_schemas true \
--suggest_alternatives true \
--max_suggestions 3
```
- Parse validation results:
```json
{
"all_valid": true/false,
"validation_results": {
"threat-model": {
"valid": true,
"file_pattern": "*.threat-model.yaml",
"content_type": "application/yaml",
"schema": "schemas/artifacts/threat-model-schema.json"
}
},
"invalid_types": ["data-flow-diagram"],
"suggestions": {
"data-flow-diagram": [
{"type": "data-flow-diagrams", "reason": "Plural form", "confidence": "high"}
]
}
}
```
- If all_valid == false:
→ Display invalid_types and suggestions to user
→ Example: "❌ Artifact type 'data-flow-diagram' not found. Did you mean 'data-flow-diagrams' (plural, high confidence)?"
→ ASK USER to confirm correct types or provide alternatives
→ HALT skill creation until artifact types are validated
- If all_valid == true:
→ Store validated metadata (file_pattern, content_type, schema) for each type
→ Use this exact metadata in Step 3 when generating skill.yaml
3. **Analyze Artifact Flow** - Understand skill's place in ecosystem
- For each artifact type the skill produces:
→ Search registry for skills that consume this type
→ Report: "✅ {artifact_type} will be consumed by: {consuming_skills}"
→ If no consumers: "⚠️ {artifact_type} has no consumers yet - consider creating skills that use it"
- For each artifact type the skill consumes:
→ Search registry for skills that produce this type
→ Report: "✅ {artifact_type} produced by: {producing_skills}"
→ If no producers: "❌ {artifact_type} has no producers - user must provide manually or create producer skill first"
- Warn about gaps in artifact flow
- Suggest related skills to create for complete workflow
4. **Generate skill.yaml** - Create complete definition with VALIDATED artifact metadata
- name: Proper naming (domain.action format)
- version: Semantic versioning (e.g., "0.1.0")
- description: Clear description of what the skill does
- inputs: List of input parameters (use empty list [] if none)
- outputs: List of output parameters (use empty list [] if none)
- status: One of "draft", "active", or "deprecated"
- Artifact metadata (produces/consumes)
- Permissions
- Entrypoints with parameters
5. **Generate Implementation** - Create production-quality Python stub
- **Parse skill.yaml inputs** to generate proper argparse CLI:
```python
# For each input in skill.yaml:
parser.add_argument(
'--{input.name}',
type={map_type(input.type)}, # string→str, number→int, boolean→bool, array→list
required={input.required},
default={input.default if not required},
help="{input.description}"
)
```
- **Generate function signature** with type hints from inputs/outputs:
```python
def validate_artifact_types(
artifact_types: List[str],
check_schemas: bool = True,
suggest_alternatives: bool = True
) -> Dict[str, Any]:
\"\"\"
{skill.description}
Args:
artifact_types: {input.description from skill.yaml}
check_schemas: {input.description from skill.yaml}
...
Returns:
{output descriptions from skill.yaml}
\"\"\"
```
- **Include implementation pattern** based on skill type:
- Validation skills: load data → validate → return results
- Generator skills: gather inputs → process → save output
- Transform skills: load input → transform → save output
- **Add comprehensive error handling**:
```python
except FileNotFoundError as e:
logger.error(str(e))
print(json.dumps({"ok": False, "error": str(e)}, indent=2))
sys.exit(1)
```
- **JSON output structure** matching skill.yaml outputs:
```python
result = {
"{output1.name}": value1, # From skill.yaml outputs
"{output2.name}": value2,
"ok": True,
"status": "success"
}
print(json.dumps(result, indent=2))
```
- Add proper logging setup
- Include module docstring with usage example
6. **Generate Tests** - Create test template
- Unit test structure
- Example test cases
- Fixtures
- Assertions
7. **Generate Documentation** - Create SKILL.md
- Purpose and usage
- Input/output examples
- Integration with agents
- Artifact flow (from Step 3 analysis)
- Must include markdown header starting with #
8. **Validate Dependencies** - Check Python packages
- For each dependency in skill.yaml:
→ Verify package exists on PyPI (if possible)
→ Check for known naming issues (e.g., "yaml" vs "pyyaml")
→ Warn about version conflicts with existing skills
- Suggest installation command: `pip install {dependencies}`
- If dependencies missing, warn but don't block
9. **Register Skill** - Update registry
- Call registry.update with skill manifest path
- Verify skill appears in registry with artifact_metadata
- Confirm skill is discoverable via artifact types
10. **Verify Discoverability** - Final validation
- Check skill exists in registry/skills.json
- Verify artifact_metadata is complete
- Test that agent.compose can discover skill by artifact type
- Confirm artifact flow is complete (from Step 3)
## Conventions
**Naming:**
- Skills: `domain.action` (e.g., `api.validate`, `workflow.compose`)
- Use lowercase with dots
- Action should be imperative verb
**Structure:**
```
skills/domain.action/
├── skill.yaml (definition)
├── domain_action.py (implementation)
├── test_domain_action.py (tests)
└── SKILL.md (docs)
```
**Artifact Metadata:**
- Always define what the skill produces/consumes
- Use registered artifact types from meta.artifact
- Include schemas when applicable
**Implementation:**
- Follow Python best practices
- Include proper error handling
- Add logging
- CLI with argparse
- JSON output for results
## Quality Standards
- ✅ Follows Betty conventions (domain.action naming, proper structure)
- ✅ All required fields in skill.yaml: name, version, description, inputs, outputs, status
- ✅ Artifact types VALIDATED against registry before generation
- ✅ Artifact flow ANALYZED (producers/consumers identified)
- ✅ Production-quality code with type hints and comprehensive docstrings
- ✅ Proper CLI generated from skill.yaml inputs (no TODO placeholders)
- ✅ JSON output structure matches skill.yaml outputs
- ✅ Dependencies VALIDATED and installation command provided
- ✅ Comprehensive test template with fixtures
- ✅ SKILL.md with markdown header, examples, and artifact flow
- ✅ Registered in registry with complete artifact_metadata
- ✅ Passes Pydantic validation
- ✅ Discoverable via agent.compose by artifact type
## Error Handling & Recovery
**Artifact Type Not Found:**
- Search registry/artifact_types.json for similar names
- Check for singular/plural variants (data-model vs logical-data-model)
- Suggest alternatives: "Did you mean: 'data-flow-diagrams', 'dataflow-diagram'?"
- ASK USER to confirm or provide correct type
- DO NOT proceed with invalid artifact types
**File Pattern Mismatch:**
- Use exact file_pattern from registry
- Warn user if description specifies different pattern
- Document correct pattern in skill.yaml comments
**Schema File Missing:**
- Warn: "Schema file schemas/artifacts/X-schema.json not found"
- Ask if schema should be: (a) created, (b) omitted, (c) ignored
- Continue with warning but don't block skill creation
**Registry Update Fails:**
- Report specific error from registry.update
- Check if it's version conflict or validation issue
- Provide manual registration command as fallback
- Log issue for framework team
**Duplicate Skill Name:**
- Check existing version in registry
- Offer to: (a) version bump, (b) rename skill, (c) cancel
- Require explicit user confirmation before overwriting
Remember: You're creating building blocks for agents. Make skills
composable, well-documented, and easy to use. ALWAYS validate artifact
types before generating skill.yaml!

791
agents/meta.skill/meta_skill.py Executable file
View File

@@ -0,0 +1,791 @@
#!/usr/bin/env python3
"""
meta.skill - Skill Creator
Creates complete, functional skills from natural language descriptions.
Generates skill.yaml, implementation stub, tests, and documentation.
"""
import json
import yaml
import sys
import os
import re
from pathlib import Path
from typing import Dict, List, Any, Optional
from datetime import datetime
# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)
from betty.traceability import get_tracer, RequirementInfo
# Import artifact validation from artifact.define skill
try:
import importlib.util
artifact_define_path = Path(__file__).parent.parent.parent / "skills" / "artifact.define" / "artifact_define.py"
spec = importlib.util.spec_from_file_location("artifact_define", artifact_define_path)
artifact_define_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(artifact_define_module)
validate_artifact_type = artifact_define_module.validate_artifact_type
KNOWN_ARTIFACT_TYPES = artifact_define_module.KNOWN_ARTIFACT_TYPES
ARTIFACT_VALIDATION_AVAILABLE = True
except Exception:
ARTIFACT_VALIDATION_AVAILABLE = False
class SkillCreator:
"""Creates skills from natural language descriptions"""
def __init__(self, base_dir: str = "."):
"""Initialize with base directory"""
self.base_dir = Path(base_dir)
self.skills_dir = self.base_dir / "skills"
self.registry_path = self.base_dir / "registry" / "skills.json"
def parse_description(self, description_path: str) -> Dict[str, Any]:
"""
Parse skill description from Markdown or JSON file
Args:
description_path: Path to skill_description.md or .json
Returns:
Parsed description with skill metadata
"""
path = Path(description_path)
if not path.exists():
raise FileNotFoundError(f"Description not found: {description_path}")
# Handle JSON format
if path.suffix == ".json":
with open(path) as f:
return json.load(f)
# Handle Markdown format
with open(path) as f:
content = f.read()
# Parse Markdown sections
description = {
"name": "",
"purpose": "",
"inputs": [],
"outputs": [],
"permissions": [],
"implementation_notes": "",
"examples": [],
"artifact_produces": [],
"artifact_consumes": []
}
current_section = None
for line in content.split('\n'):
line_stripped = line.strip()
# Section headers
if line_stripped.startswith('# Name:'):
description["name"] = line_stripped.replace('# Name:', '').strip()
elif line_stripped.startswith('# Purpose:'):
current_section = "purpose"
elif line_stripped.startswith('# Inputs:'):
current_section = "inputs"
elif line_stripped.startswith('# Outputs:'):
current_section = "outputs"
elif line_stripped.startswith('# Permissions:'):
current_section = "permissions"
elif line_stripped.startswith('# Implementation Notes:'):
current_section = "implementation_notes"
elif line_stripped.startswith('# Examples:'):
current_section = "examples"
elif line_stripped.startswith('# Produces Artifacts:'):
current_section = "artifact_produces"
elif line_stripped.startswith('# Consumes Artifacts:'):
current_section = "artifact_consumes"
elif line_stripped and not line_stripped.startswith('#'):
# Content for current section
if current_section == "purpose":
description["purpose"] += line_stripped + " "
elif current_section == "implementation_notes":
description["implementation_notes"] += line_stripped + " "
elif current_section in ["inputs", "outputs", "permissions",
"examples", "artifact_produces",
"artifact_consumes"] and line_stripped.startswith('-'):
description[current_section].append(line_stripped[1:].strip())
description["purpose"] = description["purpose"].strip()
description["implementation_notes"] = description["implementation_notes"].strip()
return description
def generate_skill_yaml(self, skill_desc: Dict[str, Any]) -> str:
"""
Generate skill.yaml content
Args:
skill_desc: Parsed skill description
Returns:
YAML content as string
"""
skill_name = skill_desc["name"]
# Convert skill.name to skill_name format for handler
handler_name = skill_name.replace('.', '_') + ".py"
skill_def = {
"name": skill_name,
"version": "0.1.0",
"description": skill_desc["purpose"],
"inputs": skill_desc.get("inputs", []),
"outputs": skill_desc.get("outputs", []),
"status": "active",
"permissions": skill_desc.get("permissions", ["filesystem:read"]),
"entrypoints": [
{
"command": f"/{skill_name.replace('.', '/')}",
"handler": handler_name,
"runtime": "python",
"description": skill_desc["purpose"][:100]
}
]
}
# Add artifact metadata if specified
if skill_desc.get("artifact_produces") or skill_desc.get("artifact_consumes"):
artifact_metadata = {}
if skill_desc.get("artifact_produces"):
artifact_metadata["produces"] = [
{"type": art_type} for art_type in skill_desc["artifact_produces"]
]
if skill_desc.get("artifact_consumes"):
artifact_metadata["consumes"] = [
{"type": art_type, "required": True}
for art_type in skill_desc["artifact_consumes"]
]
skill_def["artifact_metadata"] = artifact_metadata
return yaml.dump(skill_def, default_flow_style=False, sort_keys=False)
def generate_implementation(self, skill_desc: Dict[str, Any]) -> str:
"""
Generate Python implementation stub
Args:
skill_desc: Parsed skill description
Returns:
Python code as string
"""
skill_name = skill_desc["name"]
module_name = skill_name.replace('.', '_')
class_name = ''.join(word.capitalize() for word in skill_name.split('.'))
implementation = f'''#!/usr/bin/env python3
"""
{skill_name} - {skill_desc["purpose"]}
Generated by meta.skill with Betty Framework certification
"""
import os
import sys
import json
import yaml
from pathlib import Path
from typing import Dict, List, Any, Optional
# Add parent directory to path for imports
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))
from betty.config import BASE_DIR
from betty.logging_utils import setup_logger
from betty.certification import certified_skill
logger = setup_logger(__name__)
class {class_name}:
"""
{skill_desc["purpose"]}
"""
def __init__(self, base_dir: str = BASE_DIR):
"""Initialize skill"""
self.base_dir = Path(base_dir)
@certified_skill("{skill_name}")
def execute(self'''
# Add input parameters
if skill_desc.get("inputs"):
for inp in skill_desc["inputs"]:
# Sanitize parameter names - remove special characters, keep only alphanumeric and underscores
param_name = ''.join(c if c.isalnum() or c in ' -_' else '' for c in inp.lower())
param_name = param_name.replace(' ', '_').replace('-', '_')
implementation += f', {param_name}: Optional[str] = None'
implementation += f''') -> Dict[str, Any]:
"""
Execute the skill
Returns:
Dict with execution results
"""
try:
logger.info("Executing {skill_name}...")
# TODO: Implement skill logic here
'''
if skill_desc.get("implementation_notes"):
implementation += f'''
# Implementation notes:
# {skill_desc["implementation_notes"]}
'''
# Escape the purpose string for Python string literal
escaped_purpose = skill_desc['purpose'].replace('"', '\\"')
implementation += f'''
# Placeholder implementation
result = {{
"ok": True,
"status": "success",
"message": "Skill executed successfully"
}}
logger.info("Skill completed successfully")
return result
except Exception as e:
logger.error(f"Error executing skill: {{e}}")
return {{
"ok": False,
"status": "failed",
"error": str(e)
}}
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="{escaped_purpose}"
)
'''
# Add CLI arguments for inputs
if skill_desc.get("inputs"):
for inp in skill_desc["inputs"]:
# Sanitize parameter names - remove special characters
param_name = ''.join(c if c.isalnum() or c in ' -_' else '' for c in inp.lower())
param_name = param_name.replace(' ', '_').replace('-', '_')
implementation += f'''
parser.add_argument(
"--{param_name.replace('_', '-')}",
help="{inp}"
)'''
implementation += f'''
parser.add_argument(
"--output-format",
choices=["json", "yaml"],
default="json",
help="Output format"
)
args = parser.parse_args()
# Create skill instance
skill = {class_name}()
# Execute skill
result = skill.execute('''
if skill_desc.get("inputs"):
for inp in skill_desc["inputs"]:
# Sanitize parameter names - remove special characters
param_name = ''.join(c if c.isalnum() or c in ' -_' else '' for c in inp.lower())
param_name = param_name.replace(' ', '_').replace('-', '_')
implementation += f'''
{param_name}=args.{param_name},'''
implementation += '''
)
# Output result
if args.output_format == "json":
print(json.dumps(result, indent=2))
else:
print(yaml.dump(result, default_flow_style=False))
# Exit with appropriate code
sys.exit(0 if result.get("ok") else 1)
if __name__ == "__main__":
main()
'''
return implementation
def generate_tests(self, skill_desc: Dict[str, Any]) -> str:
"""
Generate test template
Args:
skill_desc: Parsed skill description
Returns:
Python test code as string
"""
skill_name = skill_desc["name"]
module_name = skill_name.replace('.', '_')
class_name = ''.join(word.capitalize() for word in skill_name.split('.'))
tests = f'''#!/usr/bin/env python3
"""
Tests for {skill_name}
Generated by meta.skill
"""
import pytest
import sys
import os
from pathlib import Path
# Import the skill module from its own directory (skill directories contain a dot)
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import {module_name}
class Test{class_name}:
"""Tests for {class_name}"""
def setup_method(self):
"""Setup test fixtures"""
self.skill = {module_name}.{class_name}()
def test_initialization(self):
"""Test skill initializes correctly"""
assert self.skill is not None
assert self.skill.base_dir is not None
def test_execute_basic(self):
"""Test basic execution"""
result = self.skill.execute()
assert result is not None
assert "ok" in result
assert "status" in result
def test_execute_success(self):
"""Test successful execution"""
result = self.skill.execute()
assert result["ok"] is True
assert result["status"] == "success"
# TODO: Add more specific tests based on skill functionality
def test_cli_help(capsys):
"""Test CLI help message"""
sys.argv = ["{module_name}.py", "--help"]
with pytest.raises(SystemExit) as exc_info:
{module_name}.main()
assert exc_info.value.code == 0
captured = capsys.readouterr()
assert "{skill_desc['purpose'][:50]}" in captured.out
if __name__ == "__main__":
pytest.main([__file__, "-v"])
'''
return tests
def generate_skill_md(self, skill_desc: Dict[str, Any]) -> str:
"""
Generate SKILL.md
Args:
skill_desc: Parsed skill description
Returns:
Markdown content as string
"""
skill_name = skill_desc["name"]
readme = f'''# {skill_name}
{skill_desc["purpose"]}
## Overview
**Purpose:** {skill_desc["purpose"]}
**Command:** `/{skill_name.replace('.', '/')}`
## Usage
### Basic Usage
```bash
python3 skills/{skill_name.replace('.', '/')}/{skill_name.replace('.', '_')}.py
```
### With Arguments
```bash
python3 skills/{skill_name.replace('.', '/')}/{skill_name.replace('.', '_')}.py \\
'''
if skill_desc.get("inputs"):
for inp in skill_desc["inputs"]:
param_name = inp.lower().replace(' ', '-').replace('_', '-')  # CLI flags use hyphens
readme += f' --{param_name} "value" \\\n'
readme += ' --output-format json\n```\n\n'
if skill_desc.get("inputs"):
readme += "## Inputs\n\n"
for inp in skill_desc["inputs"]:
readme += f"- **{inp}**\n"
readme += "\n"
if skill_desc.get("outputs"):
readme += "## Outputs\n\n"
for out in skill_desc["outputs"]:
readme += f"- **{out}**\n"
readme += "\n"
if skill_desc.get("artifact_consumes") or skill_desc.get("artifact_produces"):
readme += "## Artifact Metadata\n\n"
if skill_desc.get("artifact_consumes"):
readme += "### Consumes\n\n"
for art in skill_desc["artifact_consumes"]:
readme += f"- `{art}`\n"
readme += "\n"
if skill_desc.get("artifact_produces"):
readme += "### Produces\n\n"
for art in skill_desc["artifact_produces"]:
readme += f"- `{art}`\n"
readme += "\n"
if skill_desc.get("examples"):
readme += "## Examples\n\n"
for example in skill_desc["examples"]:
readme += f"- {example}\n"
readme += "\n"
if skill_desc.get("permissions"):
readme += "## Permissions\n\n"
for perm in skill_desc["permissions"]:
readme += f"- `{perm}`\n"
readme += "\n"
if skill_desc.get("implementation_notes"):
readme += "## Implementation Notes\n\n"
readme += f"{skill_desc['implementation_notes']}\n\n"
readme += f'''## Integration
This skill can be used in agents by including it in `skills_available`:
```yaml
name: my.agent
skills_available:
- {skill_name}
```
## Testing
Run tests with:
```bash
pytest skills/{skill_name.replace('.', '/')}/test_{skill_name.replace('.', '_')}.py -v
```
## Created By
This skill was generated by **meta.skill**, the skill creator meta-agent.
---
*Part of the Betty Framework*
'''
return readme
def validate_artifacts(self, skill_desc: Dict[str, Any]) -> List[str]:
"""
Validate that artifact types exist in the known registry.
Args:
skill_desc: Parsed skill description
Returns:
List of warning messages
"""
warnings = []
if not ARTIFACT_VALIDATION_AVAILABLE:
warnings.append(
"Artifact validation skipped: artifact.define skill not available"
)
return warnings
# Validate produced artifacts
for artifact_type in skill_desc.get("artifact_produces", []):
is_valid, warning = validate_artifact_type(artifact_type)
if not is_valid and warning:
warnings.append(f"Produces: {warning}")
# Validate consumed artifacts
for artifact_type in skill_desc.get("artifact_consumes", []):
is_valid, warning = validate_artifact_type(artifact_type)
if not is_valid and warning:
warnings.append(f"Consumes: {warning}")
return warnings
def create_skill(
self,
description_path: str,
output_dir: Optional[str] = None,
requirement: Optional[RequirementInfo] = None
) -> Dict[str, Any]:
"""
Create a complete skill from description
Args:
description_path: Path to skill description file
output_dir: Output directory (default: skills/{name}/)
requirement: Optional requirement information for traceability
Returns:
Summary of created files
"""
# Parse description
skill_desc = self.parse_description(description_path)
skill_name = skill_desc["name"]
if not skill_name:
raise ValueError("Skill name is required")
# Validate name format (domain.action)
if not re.match(r'^[a-z0-9]+\.[a-z0-9]+$', skill_name):
raise ValueError(
f"Skill name must be in domain.action format: {skill_name}"
)
# Validate artifact types
artifact_warnings = self.validate_artifacts(skill_desc)
if artifact_warnings:
print("\n⚠️ Artifact Validation Warnings:")
for warning in artifact_warnings:
print(f" {warning}")
print()
# Determine output directory
if not output_dir:
output_dir = f"skills/{skill_name}"
output_path = Path(output_dir)
output_path.mkdir(parents=True, exist_ok=True)
result = {
"skill_name": skill_name,
"created_files": [],
"errors": [],
"artifact_warnings": artifact_warnings
}
# Generate and save skill.yaml
skill_yaml_content = self.generate_skill_yaml(skill_desc)
skill_yaml_path = output_path / "skill.yaml"
with open(skill_yaml_path, 'w') as f:
f.write(skill_yaml_content)
result["created_files"].append(str(skill_yaml_path))
# Generate and save implementation
impl_content = self.generate_implementation(skill_desc)
impl_path = output_path / f"{skill_name.replace('.', '_')}.py"
with open(impl_path, 'w') as f:
f.write(impl_content)
os.chmod(impl_path, 0o755) # Make executable
result["created_files"].append(str(impl_path))
# Generate and save tests
tests_content = self.generate_tests(skill_desc)
tests_path = output_path / f"test_{skill_name.replace('.', '_')}.py"
with open(tests_path, 'w') as f:
f.write(tests_content)
result["created_files"].append(str(tests_path))
# Generate and save SKILL.md
skill_md_content = self.generate_skill_md(skill_desc)
skill_md_path = output_path / "SKILL.md"
with open(skill_md_path, 'w') as f:
f.write(skill_md_content)
result["created_files"].append(str(skill_md_path))
# Log traceability if requirement provided
trace_id = None
if requirement:
try:
tracer = get_tracer()
trace_id = tracer.log_creation(
component_id=skill_name,
component_name=skill_name.replace(".", " ").title(),
component_type="skill",
component_version="0.1.0",
component_file_path=str(skill_yaml_path),
input_source_path=description_path,
created_by_tool="meta.skill",
created_by_version="0.1.0",
requirement=requirement,
tags=["skill", "auto-generated"],
project="Betty Framework"
)
# Log validation check
validation_details = {
"checks_performed": [
{"name": "skill_structure", "status": "passed"},
{"name": "artifact_metadata", "status": "passed"}
]
}
# Check for artifact metadata
if skill_desc.get("artifact_produces") or skill_desc.get("artifact_consumes"):
validation_details["checks_performed"].append({
"name": "artifact_metadata_completeness",
"status": "passed",
"message": f"Produces: {len(skill_desc.get('artifact_produces', []))}, Consumes: {len(skill_desc.get('artifact_consumes', []))}"
})
tracer.log_verification(
component_id=skill_name,
check_type="validation",
tool="meta.skill",
result="passed",
details=validation_details
)
result["trace_id"] = trace_id
except Exception as e:
print(f"⚠️ Warning: Could not log traceability: {e}")
return result
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.skill - Create skills from descriptions"
)
parser.add_argument(
"description",
help="Path to skill description file (.md or .json)"
)
parser.add_argument(
"-o", "--output",
help="Output directory (default: skills/{name}/)"
)
# Traceability arguments
parser.add_argument(
"--requirement-id",
help="Requirement identifier (e.g., REQ-2025-001)"
)
parser.add_argument(
"--requirement-description",
help="What this skill accomplishes"
)
parser.add_argument(
"--requirement-source",
help="Source document"
)
parser.add_argument(
"--issue-id",
help="Issue tracking ID (e.g., JIRA-123)"
)
parser.add_argument(
"--requested-by",
help="Who requested this"
)
parser.add_argument(
"--rationale",
help="Why this is needed"
)
args = parser.parse_args()
# Create requirement info if provided
requirement = None
if args.requirement_id and args.requirement_description:
requirement = RequirementInfo(
id=args.requirement_id,
description=args.requirement_description,
source=args.requirement_source,
issue_id=args.issue_id,
requested_by=args.requested_by,
rationale=args.rationale
)
creator = SkillCreator()
print(f"🛠️ meta.skill - Creating skill from {args.description}")
try:
result = creator.create_skill(
args.description,
output_dir=args.output,
requirement=requirement
)
print(f"\n✨ Skill '{result['skill_name']}' created successfully!\n")
if result["created_files"]:
print("📄 Created files:")
for file in result["created_files"]:
print(f" - {file}")
if result["errors"]:
print("\n⚠️ Warnings:")
for error in result["errors"]:
print(f" - {error}")
if result.get("trace_id"):
print(f"\n📝 Traceability: {result['trace_id']}")
print(f" View trace: python3 betty/trace_cli.py show {result['skill_name']}")
print(f"\n✅ Skill '{result['skill_name']}' is ready to use")
print(" Add to agent skills_available to use it.")
except Exception as e:
print(f"\n❌ Error creating skill: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()


@@ -0,0 +1,510 @@
# meta.suggest - Context-Aware Next-Step Recommender
Helps Claude decide what to do next after an agent completes by analyzing context and suggesting compatible next steps.
## Overview
**meta.suggest** provides intelligent "what's next" recommendations by analyzing what just happened, what artifacts were produced, and what agents are compatible. It works with meta.compatibility to enable smart multi-agent orchestration.
**What it does:**
- Analyzes context (what agent ran, what artifacts produced)
- Uses meta.compatibility to find compatible next steps
- Provides ranked suggestions with clear rationale
- Considers project state and user goals
- Detects warnings (gaps, isolated agents)
- Suggests project-wide improvements
## Quick Start
### Suggest Next Steps
```bash
python3 agents/meta.suggest/meta_suggest.py \
--context meta.agent \
--artifacts agents/api.architect/agent.yaml
```
Output:
```
Context: meta.agent
Produced: agent-definition
🌟 Primary Suggestion:
Process with meta.compatibility
Rationale: meta.agent produces 'agent-definition' which meta.compatibility consumes
Priority: high
🔄 Alternatives:
1. Test the created artifact
Verify the artifact works as expected
2. Analyze compatibility
Understand what agents can work with meta.agent's outputs
```
### Analyze Project
```bash
python3 agents/meta.suggest/meta_suggest.py --analyze-project
```
Output:
```
📊 Project Analysis:
Total Agents: 7
Total Artifacts: 16
Relationships: 3
Gaps: 5
💡 Suggestions (6):
1. Create agent/skill to produce 'agent-description'
Consumed by 1 agents but no producers
Priority: medium
...
```
## Commands
### Suggest After Agent Runs
```bash
python3 agents/meta.suggest/meta_suggest.py \
--context AGENT_NAME \
[--artifacts FILE1 FILE2...] \
[--goal "USER_GOAL"] \
[--format json|text]
```
**Parameters:**
- `--context` - Agent that just ran
- `--artifacts` - Artifact files that were produced (optional)
- `--goal` - User's goal for better suggestions (optional)
- `--format` - Output format (text or json)
**Examples:**
```bash
# After meta.agent creates agent
python3 agents/meta.suggest/meta_suggest.py \
--context meta.agent \
--artifacts agents/my-agent/agent.yaml
# After meta.artifact creates artifact type
python3 agents/meta.suggest/meta_suggest.py \
--context meta.artifact \
--artifacts schemas/my-artifact.json
# With user goal
python3 agents/meta.suggest/meta_suggest.py \
--context meta.agent \
--goal "Create and validate API design agent"
```
### Analyze Project
```bash
python3 agents/meta.suggest/meta_suggest.py --analyze-project [--format json|text]
```
Analyzes the entire agent ecosystem and suggests improvements:
- Agents to create
- Gaps to fill
- Documentation needs
- Ecosystem health
## How It Works
### 1. Context Analysis
Determines what just happened:
- Which agent ran
- What artifacts were produced
- What artifact types are involved
### 2. Compatibility Check
Uses meta.compatibility to find:
- Agents that can consume the produced artifacts
- Agents that are compatible downstream
- Potential pipeline steps
### 3. Suggestion Generation
Creates suggestions based on:
- Compatible agents (high priority)
- Validation/testing options (medium priority)
- Gap-filling needs (low priority if applicable)
### 4. Ranking
Ranks suggestions by:
- Priority level (high > medium > low)
- Automation (automated > manual)
- Relevance to user goal
### 5. Warning Generation
Detects potential issues:
- Gaps in required artifacts
- Isolated agents (no compatible partners)
- Failed validations
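Putting the five steps together, here is a minimal programmatic sketch. It assumes you run from the repository root and add `agents/meta.suggest` to `sys.path` so `meta_suggest` imports directly; the agent and artifact names are placeholders:
```python
# Minimal sketch of the suggestion pipeline (steps 1-5 above)
import sys
sys.path.insert(0, "agents/meta.suggest")  # assumption: running from the repo root
import meta_suggest

engine = meta_suggest.SuggestionEngine(".")             # scans agents/, builds the compatibility map
report = engine.suggest_next_steps(
    context_agent="meta.agent",                         # 1. what just ran
    artifacts_produced=["agents/my-agent/agent.yaml"],  # used to infer artifact types
    goal="Create and validate an agent",                # 4. ranking hint
)
if report.get("primary_suggestion"):                    # 3. generated, 4. ranked
    print(report["primary_suggestion"]["action"])
for warning in report.get("warnings", []):              # 5. warnings
    print(warning["message"])
```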
## Suggestion Types
### 1. Process with Compatible Agent
```
🌟 Primary Suggestion:
Process with api.validator
Rationale: api.architect produces 'openapi-spec' which api.validator consumes
```
Automatically suggests running compatible agents.
### 2. Validate/Test Artifact
```
Test the created artifact
Rationale: Verify the artifact works as expected
```
Suggests testing when creation-type agents run.
### 3. Analyze Compatibility
```
Analyze compatibility
Rationale: Understand what agents can work with meta.agent's outputs
Command: python3 agents/meta.compatibility/meta_compatibility.py analyze meta.agent
```
Suggests understanding the ecosystem.
### 4. Fill Gaps
```
Create producer for 'agent-description'
Rationale: No agents produce 'agent-description' (required by meta.agent)
```
Suggests creating missing components.
## Output Structure
### Text Format
```
Context: AGENT_NAME
Produced: artifact-type-1, artifact-type-2
🌟 Primary Suggestion:
ACTION
Rationale: WHY
[Command: HOW (if automated)]
Priority: LEVEL
🔄 Alternatives:
1. ACTION
Rationale: WHY
⚠️ Warnings:
• WARNING_MESSAGE
```
### JSON Format
```json
{
"context": {
"agent": "meta.agent",
"artifacts_produced": ["agents/my-agent/agent.yaml"],
"artifact_types": ["agent-definition"],
"timestamp": "2025-10-24T..."
},
"suggestions": [
{
"action": "Process with meta.compatibility",
"agent": "meta.compatibility",
"rationale": "...",
"priority": "high",
"command": "..."
}
],
"primary_suggestion": {...},
"alternatives": [...],
"warnings": [...]
}
```
## Integration
### With meta.compatibility
meta.suggest uses meta.compatibility for discovery:
```python
# Internal call (self.compatibility_analyzer is a meta_compatibility.CompatibilityAnalyzer)
compatibility = self.compatibility_analyzer.find_compatible(agent_name)
# Use compatible agents for suggestions
for compatible in compatibility.get("can_feed_to", []):
    suggest(f"Process with {compatible['agent']}")
```
### With Claude
Claude can call meta.suggest after any agent:
```
User: Create an API design agent
Claude: *runs meta.agent*
Claude: *calls meta.suggest --context meta.agent*
Claude: I've created the agent. Would you like me to:
1. Analyze its compatibility
2. Test it
3. Add documentation
```
### In Workflows
Use in shell scripts:
```bash
#!/bin/bash
# Create and analyze agent
# Step 1: Create agent
python3 agents/meta.agent/meta_agent.py description.md
# Step 2: Get suggestions
SUGGESTIONS=$(python3 agents/meta.suggest/meta_suggest.py \
--context meta.agent \
--format json)
# Step 3: Extract primary suggestion (manual suggestions have no command, so guard against null)
PRIMARY=$(echo "$SUGGESTIONS" | jq -r '.primary_suggestion.command // empty')
# Step 4: Run it, if there is an automated command
[ -n "$PRIMARY" ] && eval "$PRIMARY"
```
## Common Workflows
### Workflow 1: Agent Creation Pipeline
```bash
# Create agent
python3 agents/meta.agent/meta_agent.py my_agent.md
# Get suggestions
python3 agents/meta.suggest/meta_suggest.py \
--context meta.agent \
--artifacts agents/my-agent/agent.yaml
# Follow primary suggestion
python3 agents/meta.compatibility/meta_compatibility.py analyze my-agent
```
### Workflow 2: Continuous Improvement
```bash
# Analyze project
python3 agents/meta.suggest/meta_suggest.py --analyze-project > improvements.txt
# Review suggestions
cat improvements.txt
# Implement top suggestions
# (create missing agents, fill gaps, etc.)
```
### Workflow 3: Goal-Oriented Orchestration
```bash
# Define goal
GOAL="Design, validate, and implement an API"
# Get suggestions for goal
python3 agents/meta.suggest/meta_suggest.py \
--goal "$GOAL" \
--format json > pipeline.json
# Execute suggested pipeline
# (extract steps from pipeline.json and run)
```
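The extraction step can be scripted; a small Python sketch, assuming the JSON layout shown earlier, where only automated suggestions carry a `command`:
```python
# Sketch: execute automated suggestions from pipeline.json in ranked order
import json
import subprocess

with open("pipeline.json") as f:
    report = json.load(f)

for suggestion in report.get("suggestions", []):
    cmd = suggestion.get("command")
    if cmd:  # manual suggestions have no command
        print(f"Running: {cmd}")
        subprocess.run(cmd, shell=True, check=True)
```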
## Artifact Types
### Consumes
- **compatibility-graph** - Agent compatibility information
- From: meta.compatibility
- **agent-definition** - Agent that just ran
- Pattern: `agents/*/agent.yaml`
### Produces
- **suggestion-report** - Next-step recommendations
- Pattern: `*.suggestions.json`
- Schema: `schemas/suggestion-report.json`
## Understanding Suggestions
### Priority Levels
**High** - Should probably do this
- Compatible agent waiting
- Validation needed
- Next logical step
**Medium** - Good to do
- Analyze compatibility
- Understand ecosystem
- Non-critical validation
**Low** - Nice to have
- Fill gaps
- Documentation
- Future improvements
### Automated vs Manual
**Automated** - Has command to run
```
Command: python3 agents/meta.compatibility/...
```
**Manual** - Requires user action
```
(No command - manual action required)
```
### Rationale
Always includes "why" for each suggestion:
```
Rationale: meta.agent produces 'agent-definition' which meta.compatibility consumes
```
Helps Claude and users understand the reasoning.
## Tips & Best Practices
### Providing Context
More context = better suggestions:
**Good:**
```bash
--context meta.agent \
--artifacts agents/my-agent/agent.yaml \
--goal "Create and validate agent"
```
**Minimal:**
```bash
--context meta.agent
```
### Interpreting Warnings
**Gaps warning:**
```
⚠️ meta.agent requires artifacts that aren't produced by any agent
```
This is often expected for user inputs and is not necessarily a problem.
**Isolated warning:**
```
⚠️ my-agent has no compatible agents
```
This usually means the agent uses non-standard artifact types, or that no other agents exist yet.
### Using Suggestions
1. **Review primary suggestion first** - Usually the best option
2. **Consider alternatives** - May be better for your specific case
3. **Check warnings** - Understand potential issues
4. **Verify commands** - Review before running automated suggestions
## Troubleshooting
### No suggestions returned
```
Error: Could not determine relevant agents for goal
```
**Causes:**
- Agent has no compatible downstream agents
- Artifact types are all user-provided inputs
- No other agents in ecosystem
**Solutions:**
- Create more agents
- Use standard artifact types
- Check agent artifact_metadata
### Incorrect suggestions
If suggestions don't make sense:
- Verify agent artifact_metadata is correct
- Check meta.compatibility output directly
- Ensure artifact types are registered
### Empty project analysis
```
Total Agents: 0
```
**Cause:** No agents found in `agents/` directory
**Solution:** Create agents using meta.agent or manually
## Architecture
```
meta.suggest
├─ Uses: meta.compatibility (discovery)
├─ Analyzes: context and artifacts
├─ Produces: ranked suggestions
└─ Helps: Claude make decisions
```
## Examples
```bash
# Example 1: After creating agent
python3 agents/meta.agent/meta_agent.py examples/api_architect_description.md
python3 agents/meta.suggest/meta_suggest.py --context meta.agent
# Example 2: After creating artifact type
python3 agents/meta.artifact/meta_artifact.py create artifact.md
python3 agents/meta.suggest/meta_suggest.py --context meta.artifact
# Example 3: Project health check
python3 agents/meta.suggest/meta_suggest.py --analyze-project
# Example 4: Export to JSON
python3 agents/meta.suggest/meta_suggest.py \
--context meta.agent \
--format json > suggestions.json
```
## Related Documentation
- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
- [meta.compatibility README](../meta.compatibility/README.md) - Compatibility analyzer
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [suggestion-report schema](../../schemas/suggestion-report.json)
## How Claude Uses This
After any agent completes:
1. Claude calls meta.suggest with context
2. Reviews suggestions and rationale
3. Presents options to user or auto-executes
4. Makes intelligent orchestration decisions
meta.suggest is Claude's assistant for "what's next" decisions!


@@ -0,0 +1,127 @@
name: meta.suggest
version: 0.1.0
description: |
Context-aware next-step recommender that helps Claude decide what to do next
after an agent completes.
Analyzes current context, produced artifacts, and project state to suggest
compatible agents and workflows. Works with meta.compatibility to provide
intelligent orchestration recommendations.
artifact_metadata:
consumes:
- type: compatibility-graph
description: "Agent compatibility information from meta.compatibility"
- type: agent-definition
description: "Agent that just ran"
produces:
- type: suggestion-report
file_pattern: "*.suggestions.json"
content_type: "application/json"
schema: "schemas/suggestion-report.json"
description: "Context-aware recommendations for next steps"
status: draft
reasoning_mode: iterative
capabilities:
- Analyze produced artifacts to understand project context
- Recommend next agents or workflows with supporting rationale
- Highlight gaps and dependencies to maintain delivery momentum
skills_available:
- meta.compatibility # Analyze compatibility
- artifact.define # Understand artifacts
permissions:
- filesystem:read
system_prompt: |
You are meta.suggest, the context-aware next-step recommender.
After an agent completes its work, you help Claude decide what to do next by
analyzing what artifacts were produced and suggesting compatible next steps.
## Your Responsibilities
1. **Analyze Context**
- What agent just ran?
- What artifacts were produced?
- What's the current project state?
- What might the user want to do next?
2. **Suggest Next Steps**
- Use meta.compatibility to find compatible agents
- Rank suggestions by relevance and usefulness
- Provide clear rationale for each suggestion
- Consider common workflows
3. **Be Smart About Context**
- If validation failed, don't suggest proceeding
- If artifacts were created, suggest consumers
- If gaps were detected, suggest filling them
- Consider the user's likely goals
## Commands You Support
**Suggest next steps after agent ran:**
```bash
/meta/suggest --context meta.agent --artifacts agents/my-agent/agent.yaml
```
**Analyze project and suggest:**
```bash
/meta/suggest --analyze-project
```
**Suggest for specific goal:**
```bash
/meta/suggest --goal "Design and implement an API"
```
## Suggestion Criteria
Good suggestions:
- Are relevant to what just happened
- Use artifacts that were produced
- Follow logical workflow order
- Provide clear value to the user
- Include validation/quality checks when appropriate
Bad suggestions:
- Suggest proceeding after failures
- Ignore produced artifacts
- Suggest irrelevant agents
- Don't explain why
## Output Format
Always provide:
- **Primary suggestion**: Best next step with strong rationale
- **Alternative suggestions**: 2-3 other options
- **Rationale**: Why each suggestion makes sense
- **Artifacts needed**: What inputs each option requires
- **Expected outcome**: What each option will produce
## Example Interactions
**Context:** meta.agent just created agent.yaml for new agent
**Suggestions:**
1. Validate the agent (meta.compatibility analyze)
- Rationale: Ensure agent has proper artifact compatibility
- Needs: agent.yaml (already produced)
- Produces: compatibility analysis
2. Test the agent (agent.run)
- Rationale: See if agent works as expected
- Needs: agent.yaml + test inputs
- Produces: execution results
3. Document the agent (manual)
- Rationale: Add examples and usage guide
- Needs: Understanding of agent purpose
- Produces: Enhanced README.md
Remember: You're Claude's assistant for orchestration. Help Claude make
smart decisions about what to do next based on context and compatibility.


@@ -0,0 +1,371 @@
#!/usr/bin/env python3
"""
meta.suggest - Context-Aware Next-Step Recommender
Helps Claude decide what to do next after an agent completes by analyzing
context and suggesting compatible next steps.
"""
import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional, Set
from datetime import datetime
# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)
# Import meta.compatibility
meta_comp_path = parent_dir + "/agents/meta.compatibility"
sys.path.insert(0, meta_comp_path)
import meta_compatibility
class SuggestionEngine:
"""Context-aware suggestion engine"""
def __init__(self, base_dir: str = "."):
"""Initialize with base directory"""
self.base_dir = Path(base_dir)
self.compatibility_analyzer = meta_compatibility.CompatibilityAnalyzer(base_dir)
self.compatibility_analyzer.scan_agents()
self.compatibility_analyzer.build_compatibility_map()
def suggest_next_steps(
self,
context_agent: str,
artifacts_produced: Optional[List[str]] = None,
goal: Optional[str] = None
) -> Dict[str, Any]:
"""
Suggest next steps based on context
Args:
context_agent: Agent that just ran
artifacts_produced: List of artifact file paths produced
goal: Optional user goal
Returns:
Suggestion report with recommendations
"""
# Get compatibility info for the agent
compatibility = self.compatibility_analyzer.find_compatible(context_agent)
if "error" in compatibility:
return {
"error": compatibility["error"],
"context": {
"agent": context_agent,
"artifacts": artifacts_produced or []
}
}
# Determine artifact types produced
artifact_types = set()
if artifacts_produced:
artifact_types = self._infer_artifact_types(artifacts_produced)
else:
artifact_types = set(compatibility.get("produces", []))
suggestions = []
# Suggestion 1: Validate/analyze what was created
if context_agent not in ["meta.compatibility", "meta.suggest"]:
suggestions.append({
"action": "Analyze compatibility",
"agent": "meta.compatibility",
"command": f"python3 agents/meta.compatibility/meta_compatibility.py analyze {context_agent}",
"rationale": f"Understand what agents can work with {context_agent}'s outputs",
"artifacts_needed": [],
"produces": ["compatibility-graph"],
"priority": "medium",
"estimated_duration": "< 1 minute"
})
# Suggestion 2: Use compatible agents
can_feed_to = compatibility.get("can_feed_to", [])
for compatible in can_feed_to[:3]: # Top 3
next_agent = compatible["agent"]
artifact = compatible["artifact"]
suggestions.append({
"action": f"Process with {next_agent}",
"agent": next_agent,
"rationale": compatible["rationale"],
"artifacts_needed": [artifact],
"produces": self._get_agent_produces(next_agent),
"priority": "high",
"estimated_duration": "varies"
})
# Suggestion 3: If agent created something, suggest testing/validation
if artifact_types and context_agent in ["meta.agent", "meta.artifact"]:
suggestions.append({
"action": "Test the created artifact",
"rationale": "Verify the artifact works as expected",
"artifacts_needed": list(artifact_types),
"priority": "high",
"manual": True
})
# Suggestion 4: If gaps exist, suggest filling them
gaps = compatibility.get("gaps", [])
if gaps:
for gap in gaps[:2]: # Top 2 gaps
suggestions.append({
"action": f"Create producer for '{gap['artifact']}'",
"rationale": gap["issue"],
"severity": gap.get("severity", "medium"),
"priority": "low",
"manual": True
})
# Rank suggestions
suggestions = self._rank_suggestions(suggestions, goal)
# Build report
report = {
"context": {
"agent": context_agent,
"artifacts_produced": artifacts_produced or [],
"artifact_types": list(artifact_types),
"timestamp": datetime.now().isoformat()
},
"suggestions": suggestions,
"primary_suggestion": suggestions[0] if suggestions else None,
"alternatives": suggestions[1:4] if len(suggestions) > 1 else [],
"warnings": self._generate_warnings(context_agent, compatibility, gaps)
}
return report
def _infer_artifact_types(self, artifact_paths: List[str]) -> Set[str]:
"""Infer artifact types from file paths"""
types = set()
for path in artifact_paths:
path_lower = path.lower()
# Pattern matching
if ".openapi." in path_lower:
types.add("openapi-spec")
elif "agent.yaml" in path_lower:
types.add("agent-definition")
elif "readme.md" in path_lower:
if "agent" in path_lower:
types.add("agent-documentation")
elif ".validation." in path_lower:
types.add("validation-report")
elif ".optimization." in path_lower:
types.add("optimization-report")
elif ".compatibility." in path_lower:
types.add("compatibility-graph")
elif ".pipeline." in path_lower:
types.add("pipeline-suggestion")
elif ".workflow." in path_lower:
types.add("workflow-definition")
return types
def _get_agent_produces(self, agent_name: str) -> List[str]:
"""Get what an agent produces"""
if agent_name in self.compatibility_analyzer.agents:
agent_def = self.compatibility_analyzer.agents[agent_name]
produces, _ = self.compatibility_analyzer.extract_artifacts(agent_def)
return list(produces)
return []
def _rank_suggestions(self, suggestions: List[Dict], goal: Optional[str] = None) -> List[Dict]:
"""Rank suggestions by relevance"""
priority_order = {"high": 3, "medium": 2, "low": 1}
# Sort by priority, then by manual (auto first)
return sorted(
suggestions,
key=lambda s: (
-priority_order.get(s.get("priority", "medium"), 2),
s.get("manual", False) # Auto suggestions first
)
)
def _generate_warnings(
self,
agent: str,
compatibility: Dict,
gaps: List[Dict]
) -> List[Dict]:
"""Generate warnings based on context"""
warnings = []
# Warn about gaps
if gaps:
warnings.append({
"type": "gaps",
"message": f"{agent} requires artifacts that aren't produced by any agent",
"details": [g["artifact"] for g in gaps],
"severity": "medium"
})
# Warn if no compatible agents
if not compatibility.get("can_feed_to") and not compatibility.get("can_receive_from"):
warnings.append({
"type": "isolated",
"message": f"{agent} has no compatible agents",
"details": "This agent can't be used in multi-agent pipelines",
"severity": "low"
})
return warnings
def analyze_project(self) -> Dict[str, Any]:
"""Analyze entire project and suggest improvements"""
# Generate compatibility graph
graph = self.compatibility_analyzer.generate_compatibility_graph()
suggestions = []
# Suggest filling gaps
for gap in graph.get("gaps", []):
suggestions.append({
"action": f"Create agent/skill to produce '{gap['artifact']}'",
"rationale": gap["issue"],
"priority": "medium",
"impact": f"Enables {len(gap.get('consumers', []))} agents"
})
# Suggest creating more agents if few exist
if graph["metadata"]["total_agents"] < 5:
suggestions.append({
"action": "Create more agents using meta.agent",
"rationale": "Expand agent ecosystem for more capabilities",
"priority": "low"
})
# Suggest documentation if gaps exist
if graph.get("gaps"):
suggestions.append({
"action": "Document artifact standards for gaps",
"rationale": "Clarify requirements for missing artifacts",
"priority": "low"
})
return {
"project_analysis": {
"total_agents": graph["metadata"]["total_agents"],
"total_artifacts": graph["metadata"]["total_artifact_types"],
"total_relationships": len(graph["relationships"]),
"total_gaps": len(graph["gaps"])
},
"suggestions": suggestions,
"gaps": graph["gaps"],
"timestamp": datetime.now().isoformat()
}
def main():
"""CLI entry point"""
import argparse
parser = argparse.ArgumentParser(
description="meta.suggest - Context-Aware Next-Step Recommender"
)
parser.add_argument(
"--context",
help="Agent that just ran"
)
parser.add_argument(
"--artifacts",
nargs="+",
help="Artifacts that were produced"
)
parser.add_argument(
"--goal",
help="User's goal (for better suggestions)"
)
parser.add_argument(
"--analyze-project",
action="store_true",
help="Analyze entire project and suggest improvements"
)
parser.add_argument(
"--format",
choices=["json", "text"],
default="text",
help="Output format"
)
args = parser.parse_args()
engine = SuggestionEngine()
if args.analyze_project:
print("🔍 Analyzing project...\n")
result = engine.analyze_project()
if args.format == "text":
print(f"📊 Project Analysis:")
print(f" Total Agents: {result['project_analysis']['total_agents']}")
print(f" Total Artifacts: {result['project_analysis']['total_artifacts']}")
print(f" Relationships: {result['project_analysis']['total_relationships']}")
print(f" Gaps: {result['project_analysis']['total_gaps']}")
if result.get("suggestions"):
print(f"\n💡 Suggestions ({len(result['suggestions'])}):")
for i, suggestion in enumerate(result["suggestions"], 1):
print(f"\n {i}. {suggestion['action']}")
print(f" {suggestion['rationale']}")
print(f" Priority: {suggestion['priority']}")
else:
print(json.dumps(result, indent=2))
elif args.context:
print(f"💡 Suggesting next steps after '{args.context}'...\n")
result = engine.suggest_next_steps(
args.context,
args.artifacts,
args.goal
)
if args.format == "text":
if "error" in result:
print(f"❌ Error: {result['error']}")
return
print(f"Context: {result['context']['agent']}")
if result['context']['artifact_types']:
print(f"Produced: {', '.join(result['context']['artifact_types'])}")
if result.get("primary_suggestion"):
print(f"\n🌟 Primary Suggestion:")
ps = result["primary_suggestion"]
print(f" {ps['action']}")
print(f" Rationale: {ps['rationale']}")
if not ps.get("manual"):
print(f" Command: {ps.get('command', 'N/A')}")
print(f" Priority: {ps['priority']}")
if result.get("alternatives"):
print(f"\n🔄 Alternatives:")
for i, alt in enumerate(result["alternatives"], 1):
print(f"\n {i}. {alt['action']}")
print(f" {alt['rationale']}")
if result.get("warnings"):
print(f"\n⚠️ Warnings:")
for warning in result["warnings"]:
print(f"{warning['message']}")
else:
print(json.dumps(result, indent=2))
else:
parser.print_help()
if __name__ == "__main__":
main()


@@ -0,0 +1,46 @@
# Operations.Orchestrator Agent
Orchestrates operational workflows including builds, deployments, and monitoring
## Purpose
This orchestrator agent coordinates complex operations workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.
## Capabilities
- Coordinate build and deployment pipelines
- Manage infrastructure as code workflows
- Orchestrate monitoring and alerting
- Handle incident response and remediation
- Coordinate release management
## Available Skills
- `build.optimize`
- `workflow.orchestrate`
- `workflow.compose`
- `workflow.validate`
- `git.createpr`
- `git.cleanupbranches`
- `telemetry.capture`
## Usage
This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
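For example, via Claude Code (illustrative phrasing):
```
"Use operations.orchestrator to run the build and deployment pipeline
for my service, then capture telemetry for the release"
```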
## Status
**Generated**: Auto-generated from taxonomy gap analysis
## Next Steps
- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices


@@ -0,0 +1,56 @@
name: operations.orchestrator
version: 0.1.0
description: Orchestrates operational workflows including builds, deployments, and
monitoring
capabilities:
- Coordinate build and deployment pipelines
- Manage infrastructure as code workflows
- Orchestrate monitoring and alerting
- Handle incident response and remediation
- Coordinate release management
skills_available:
- build.optimize
- workflow.orchestrate
- workflow.compose
- workflow.validate
- git.createpr
- git.cleanupbranches
- telemetry.capture
reasoning_mode: iterative
tags:
- operations
- orchestration
- devops
- deployment
workflow_pattern: |
  1. Analyze incoming request and requirements
  2. Identify relevant operations skills and workflows
  3. Compose multi-step execution plan
  4. Execute skills in coordinated sequence
  5. Validate intermediate results
  6. Handle errors and retry as needed
  7. Return comprehensive results
example_task: |
  Input: "Complete operations workflow from start to finish"

  Agent will:
  1. Break down the task into stages
  2. Select appropriate skills for each stage
  3. Execute create → validate → review → publish lifecycle
  4. Monitor progress and handle failures
  5. Generate comprehensive reports
error_handling:
timeout_seconds: 300
retry_strategy: exponential_backoff
max_retries: 3
output:
success:
- Operations workflow results
- Execution logs and metrics
- Validation reports
- Generated artifacts
failure:
- Error details and stack traces
- Partial results (if available)
- Remediation suggestions
status: draft


@@ -0,0 +1,72 @@
# Security.Architect Agent
## Purpose
Create comprehensive security architecture and assessment artifacts including threat models, security architecture diagrams, penetration testing reports, vulnerability management plans, and incident response plans. Applies security frameworks (STRIDE, NIST, ISO 27001, OWASP) and creates artifacts ready for security review and compliance audit.
## Skills
This agent uses the following skills:
- `artifact.create`
- `artifact.validate`
- `artifact.review`
## Artifact Flow
### Consumes
- `System or application description`
- `Architecture components and data flows`
- `Security requirements or compliance needs`
- `Assets and data classification`
- `Existing security controls`
- `Threat intelligence or vulnerability data`
### Produces
- `threat-model`: STRIDE-based threat model with attack vectors, risk scoring, and security controls
- `security-architecture-diagram`: Security architecture with trust boundaries, security zones, and control points
- `penetration-testing-report`: Penetration test findings with CVSS scores and remediation recommendations
- `vulnerability-management-plan`: Vulnerability management program with policies and procedures
- `incident-response-plan`: Incident response playbook with roles, procedures, and escalation
- `security-assessment`: Security posture assessment against frameworks
- `zero-trust-design`: Zero trust architecture design with identity, device, and data controls
- `compliance-matrix`: Compliance mapping to regulatory requirements
## Example Use Cases
- System description with components (API gateway, tokenization service, payment processor)
- Trust boundaries (external, DMZ, internal)
- Asset inventory (credit card data, transaction records)
- STRIDE threat catalog with 15-20 threats
- Security controls mapped to each threat
- Residual risk assessment
- PCI-DSS compliance mapping
- Network segmentation and security zones
- Identity and access management (IAM) controls
- Data encryption (at rest and in transit)
- Tenant isolation mechanisms
- Logging and monitoring infrastructure
- Compliance controls for SOC 2
- Incident classification and severity levels
- Response team roles and responsibilities
- Incident response procedures by type
- Communication and escalation protocols
- Forensics and evidence collection
- Post-incident review process
## Usage
```bash
# Activate the agent
/agent security.architect
```
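Or invoke it through Claude Code's natural-language interface (illustrative phrasing):
```
"Use security.architect to create a STRIDE threat model for the payment service"
```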
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*


@@ -0,0 +1,65 @@
name: security.architect
version: 0.1.0
description: Create comprehensive security architecture and assessment artifacts including
threat models, security architecture diagrams, penetration testing reports, vulnerability
management plans, and incident response plans. Applies security frameworks (STRIDE,
NIST, ISO 27001, OWASP) and creates artifacts ready for security review and compliance
audit.
status: draft
reasoning_mode: iterative
capabilities:
- Perform structured threat modeling and control gap assessments
- Produce security architecture and testing documentation for reviews
- Recommend remediation and governance improvements for security programs
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: System or application description
description: Input artifact of type System or application description
- type: Architecture components and data flows
description: Input artifact of type Architecture components and data flows
- type: Security requirements or compliance needs
description: Input artifact of type Security requirements or compliance needs
- type: Assets and data classification
description: Input artifact of type Assets and data classification
- type: Existing security controls
description: Input artifact of type Existing security controls
- type: Threat intelligence or vulnerability data
description: Input artifact of type Threat intelligence or vulnerability data
produces:
- type: threat-model
  description: STRIDE-based threat model with attack vectors, risk scoring, and security controls
- type: security-architecture-diagram
  description: Security architecture with trust boundaries, security zones, and control points
- type: penetration-testing-report
  description: Penetration test findings with CVSS scores and remediation recommendations
- type: vulnerability-management-plan
  description: Vulnerability management program with policies and procedures
- type: incident-response-plan
  description: Incident response playbook with roles, procedures, and escalation
- type: security-assessment
  description: Security posture assessment against frameworks
- type: zero-trust-design
  description: Zero trust architecture design with identity, device, and data controls
- type: compliance-matrix
  description: Compliance mapping to regulatory requirements


@@ -0,0 +1,43 @@
# Security.Orchestrator Agent
Orchestrates security workflows including audits, compliance checks, and vulnerability management
## Purpose
This orchestrator agent coordinates complex security workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.
## Capabilities
- Coordinate security audits and assessments
- Manage compliance validation workflows
- Orchestrate vulnerability scanning and remediation
- Handle security documentation generation
- Coordinate access control and policy enforcement
## Available Skills
- `policy.enforce`
- `artifact.validate`
- `artifact.review`
- `audit.log`
## Usage
This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
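For example, via Claude Code (illustrative phrasing):
```
"Use security.orchestrator to run a compliance audit and log the results"
```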
## Status
**Generated**: Auto-generated from taxonomy gap analysis
## Next Steps
- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices


@@ -0,0 +1,53 @@
name: security.orchestrator
version: 0.1.0
description: Orchestrates security workflows including audits, compliance checks,
and vulnerability management
capabilities:
- Coordinate security audits and assessments
- Manage compliance validation workflows
- Orchestrate vulnerability scanning and remediation
- Handle security documentation generation
- Coordinate access control and policy enforcement
skills_available:
- policy.enforce
- artifact.validate
- artifact.review
- audit.log
reasoning_mode: iterative
tags:
- security
- orchestration
- compliance
- audit
workflow_pattern: |
  1. Analyze incoming request and requirements
  2. Identify relevant security skills and workflows
  3. Compose multi-step execution plan
  4. Execute skills in coordinated sequence
  5. Validate intermediate results
  6. Handle errors and retry as needed
  7. Return comprehensive results
example_task: |
  Input: "Complete security workflow from start to finish"

  Agent will:
  1. Break down the task into stages
  2. Select appropriate skills for each stage
  3. Execute create → validate → review → publish lifecycle
  4. Monitor progress and handle failures
  5. Generate comprehensive reports
error_handling:
timeout_seconds: 300
retry_strategy: exponential_backoff
max_retries: 3
output:
success:
- Security workflow results
- Execution logs and metrics
- Validation reports
- Generated artifacts
failure:
- Error details and stack traces
- Partial results (if available)
- Remediation suggestions
status: draft


@@ -0,0 +1,71 @@
# Strategy.Architect Agent
## Purpose
Create comprehensive business strategy and planning artifacts including business cases, portfolio roadmaps, market analyses, competitive assessments, and strategic planning documents. Leverages financial modeling (NPV, IRR, ROI) and industry frameworks (PMBOK, SAFe, BCG Matrix) to produce executive-ready strategic deliverables.
## Skills
This agent uses the following skills:
- `artifact.create`
- `artifact.validate`
- `artifact.review`
## Artifact Flow
### Consumes
- `Initiative or project description`
- `Problem statement or opportunity`
- `Target business outcomes`
- `Budget range or financial constraints`
- `Market research data`
- `Competitive intelligence`
- `Stakeholder requirements`
### Produces
- `business-case`: Comprehensive business justification with financial analysis, ROI model, risk assessment, and recommendation
- `portfolio-roadmap`: Strategic multi-initiative roadmap with timeline, dependencies, and resource allocation
- `market-analysis`: Market opportunity assessment with sizing, trends, and target segments
- `competitive-analysis`: Competitive landscape analysis with positioning and differentiation
- `feasibility-study`: Technical and business feasibility assessment
- `strategic-plan`: Multi-year strategic planning document with objectives and key results
- `value-proposition-canvas`: Customer value proposition and fit analysis
- `roi-model`: Financial return on investment model with multi-year projections
## Example Use Cases
- Executive summary for C-suite
- Problem statement with impact analysis
- Proposed solution with scope
- Financial analysis (costs $500K, benefits $300K annually, 18mo payback)
- Risk assessment with mitigation
- Implementation timeline
- Recommendation and next steps
- Strategic alignment to business objectives
- Three major initiatives with phases
- Timeline with milestones and dependencies
- Resource allocation across initiatives
- Budget distribution ($5M across 18 months)
- Risk and dependency management
- Success metrics and KPIs
- market-analysis.yaml with market sizing, growth trends, target segments
- competitive-analysis.yaml with competitor positioning, SWOT analysis
- value-proposition-canvas.yaml with customer jobs, pains, gains
## Usage
```bash
# Activate the agent
/agent strategy.architect
```
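Or invoke it through Claude Code's natural-language interface (illustrative phrasing):
```
"Use strategy.architect to build a business case with an ROI model for [initiative]"
```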
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*


@@ -0,0 +1,65 @@
name: strategy.architect
version: 0.1.0
description: Create comprehensive business strategy and planning artifacts including
business cases, portfolio roadmaps, market analyses, competitive assessments, and
strategic planning documents. Leverages financial modeling (NPV, IRR, ROI) and industry
frameworks (PMBOK, SAFe, BCG Matrix) to produce executive-ready strategic deliverables.
status: draft
reasoning_mode: iterative
capabilities:
- Build financial models and strategic roadmaps aligned to business objectives
- Analyze market and competitive data to inform executive decisions
- Produce governance-ready artifacts with risks, dependencies, and recommendations
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: Initiative or project description
description: Input artifact of type Initiative or project description
- type: Problem statement or opportunity
description: Input artifact of type Problem statement or opportunity
- type: Target business outcomes
description: Input artifact of type Target business outcomes
- type: Budget range or financial constraints
description: Input artifact of type Budget range or financial constraints
- type: Market research data
description: Input artifact of type Market research data
- type: Competitive intelligence
description: Input artifact of type Competitive intelligence
- type: Stakeholder requirements
description: Input artifact of type Stakeholder requirements
produces:
- type: business-case
  description: Comprehensive business justification with financial analysis, ROI model, risk assessment, and recommendation
- type: portfolio-roadmap
  description: Strategic multi-initiative roadmap with timeline, dependencies, and resource allocation
- type: market-analysis
  description: Market opportunity assessment with sizing, trends, and target segments
- type: competitive-analysis
  description: Competitive landscape analysis with positioning and differentiation
- type: feasibility-study
  description: Technical and business feasibility assessment
- type: strategic-plan
  description: Multi-year strategic planning document with objectives and key results
- type: value-proposition-canvas
  description: Customer value proposition and fit analysis
- type: roi-model
  description: Financial return on investment model with multi-year projections


@@ -0,0 +1,74 @@
# Test.Engineer Agent
## Purpose
Create comprehensive testing artifacts including test plans, test cases, test results, test automation strategies, and quality assurance reports. Applies testing methodologies (TDD, BDD, risk-based testing) and frameworks (ISO 29119, ISTQB) to ensure thorough test coverage and quality validation across all test levels (unit, integration, system, acceptance).
## Skills
This agent uses the following skills:
- `artifact.create`
- `artifact.validate`
- `artifact.review`
## Artifact Flow
### Consumes
- `Requirements or user stories`
- `System architecture or design`
- `Test scope and objectives`
- `Quality criteria and acceptance thresholds`
- `Testing constraints`
- `Defects or test results`
### Produces
- `test-plan`: Comprehensive test strategy with scope, approach, resources, and schedule
- `test-cases`: Detailed test cases with steps, data, and expected results
- `test-results`: Test execution results with pass/fail status and defect tracking
- `test-automation-strategy`: Test automation framework and tool selection
- `acceptance-criteria`: User story acceptance criteria in Given-When-Then format (see the example after this list)
- `performance-test-plan`: Performance and load testing strategy
- `integration-test-plan`: Integration testing approach with interface validation
- `regression-test-suite`: Regression test suite for continuous integration
- `quality-assurance-report`: QA summary with metrics, defects, and quality assessment
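For example, an `acceptance-criteria` entry in Given-When-Then form might read (illustrative):
```
Given a registered user with a valid password
When the user submits the login form
Then they are redirected to their dashboard
```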
## Example Use Cases
- Test scope (functional, security, accessibility, performance)
- Test levels (unit, integration, system, UAT)
- Test approach by feature area
- Platform coverage (iOS 14+, Android 10+)
- Test environment and data requirements
- Accessibility testing (WCAG 2.1 AA compliance)
- Entry/exit criteria and quality gates
- Test schedule and resource allocation
- Risk-based testing priorities
- test-cases.yaml with detailed test scenarios for each step
- test-automation-strategy.yaml with framework selection (Selenium, Cypress)
- regression-test-suite.yaml for CI/CD integration
- Test cases include: happy path, error handling, edge cases
- Automation coverage: 80% of critical user journeys
- Performance requirements (throughput, latency, concurrency)
- Load testing scenarios (baseline, peak, stress, soak)
- Performance metrics and SLAs
- Test data and environment sizing
- Monitoring and observability requirements
- Performance acceptance criteria
## Usage
```bash
# Activate the agent
/agent test.engineer
```
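Or invoke it through Claude Code's natural-language interface (illustrative phrasing):
```
"Use test.engineer to create a test plan and regression suite for the checkout API"
```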
## Created By
This agent was created by **meta.agent**, the meta-agent for creating agents.
---
*Part of the Betty Framework*


@@ -0,0 +1,65 @@
name: test.engineer
version: 0.1.0
description: Create comprehensive testing artifacts including test plans, test cases,
test results, test automation strategies, and quality assurance reports. Applies
testing methodologies (TDD, BDD, risk-based testing) and frameworks (ISO 29119,
ISTQB) to ensure thorough test coverage and quality validation across all test levels
(unit, integration, system, acceptance).
status: draft
reasoning_mode: iterative
capabilities:
- Develop comprehensive test strategies across multiple levels and techniques
- Produce reusable automation assets and coverage reporting
- Analyze defect data to recommend quality improvements
skills_available:
- artifact.create
- artifact.validate
- artifact.review
permissions:
- filesystem:read
- filesystem:write
artifact_metadata:
consumes:
- type: Requirements or user stories
description: Input artifact of type Requirements or user stories
- type: System architecture or design
description: Input artifact of type System architecture or design
- type: Test scope and objectives
description: Input artifact of type Test scope and objectives
- type: Quality criteria and acceptance thresholds
description: Input artifact of type Quality criteria and acceptance thresholds
- type: Testing constraints
description: Input artifact of type Testing constraints
- type: Defects or test results
description: Input artifact of type Defects or test results
produces:
- type: test-plan
  description: Comprehensive test strategy with scope, approach, resources, and schedule
- type: test-cases
  description: Detailed test cases with steps, data, and expected results
- type: test-results
  description: Test execution results with pass/fail status and defect tracking
- type: test-automation-strategy
  description: Test automation framework and tool selection
- type: acceptance-criteria
  description: User story acceptance criteria in Given-When-Then format
- type: performance-test-plan
  description: Performance and load testing strategy
- type: integration-test-plan
  description: Integration testing approach with interface validation
- type: regression-test-suite
  description: Regression test suite for continuous integration
- type: quality-assurance-report
  description: QA summary with metrics, defects, and quality assessment


@@ -0,0 +1,42 @@
# Testing.Orchestrator Agent
Orchestrates testing workflows across unit, integration, and end-to-end tests
## Purpose
This orchestrator agent coordinates complex testing workflows by composing and sequencing multiple skills. It handles the complete lifecycle from planning through execution and validation.
## Capabilities
- Coordinate test planning and design
- Manage test execution and reporting
- Orchestrate quality assurance workflows
- Handle test data generation and management
- Coordinate continuous testing pipelines
## Available Skills
- `test.example`
- `workflow.validate`
- `api.test`
## Usage
This agent uses iterative reasoning to:
1. Analyze requirements
2. Plan execution steps
3. Coordinate skill execution
4. Validate results
5. Handle errors and retries
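For example, via Claude Code (illustrative phrasing):
```
"Use testing.orchestrator to plan and execute end-to-end tests for my API"
```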
## Status
**Generated**: Auto-generated from taxonomy gap analysis
## Next Steps
- [ ] Review and refine capabilities
- [ ] Test with real workflows
- [ ] Add domain-specific examples
- [ ] Integrate with existing agents
- [ ] Document best practices


@@ -0,0 +1,52 @@
name: testing.orchestrator
version: 0.1.0
description: Orchestrates testing workflows across unit, integration, and end-to-end
tests
capabilities:
- Coordinate test planning and design
- Manage test execution and reporting
- Orchestrate quality assurance workflows
- Handle test data generation and management
- Coordinate continuous testing pipelines
skills_available:
- test.example
- workflow.validate
- api.test
reasoning_mode: iterative
tags:
- testing
- orchestration
- qa
- quality
workflow_pattern: |
  1. Analyze incoming request and requirements
  2. Identify relevant testing skills and workflows
  3. Compose multi-step execution plan
  4. Execute skills in coordinated sequence
  5. Validate intermediate results
  6. Handle errors and retry as needed
  7. Return comprehensive results
example_task: |
  Input: "Complete testing workflow from start to finish"

  Agent will:
  1. Break down the task into stages
  2. Select appropriate skills for each stage
  3. Execute create → validate → review → publish lifecycle
  4. Monitor progress and handle failures
  5. Generate comprehensive reports
error_handling:
timeout_seconds: 300
retry_strategy: exponential_backoff
max_retries: 3
output:
success:
- Testing workflow results
- Execution logs and metrics
- Validation reports
- Generated artifacts
failure:
- Error details and stack traces
- Partial results (if available)
- Remediation suggestions
status: draft