# meta.skill - Skill Creator Meta-Agent

Generates complete Betty skills from natural language descriptions.

## Overview

**meta.skill** is a meta-agent that creates fully functional skills from simple description files. It generates skill definitions, Python implementations, tests, and documentation, following Betty Framework conventions.

**What it does:**

- Parses skill descriptions (Markdown or JSON)
- Generates `skill.yaml` configurations
- Creates Python implementation stubs
- Generates test templates
- Creates comprehensive README documentation
- Validates skill names and structure
- Registers artifact metadata

## Quick Start

### Create a Skill

```bash
# Create skill from description
python3 agents/meta.skill/meta_skill.py examples/my_skill_description.md
```

Output:

```
🛠️ meta.skill - Creating skill from examples/my_skill_description.md

✨ Skill 'data.transform' created successfully!

📄 Created files:
- skills/data.transform/skill.yaml
- skills/data.transform/data_transform.py
- skills/data.transform/test_data_transform.py
- skills/data.transform/README.md

✅ Skill 'data.transform' is ready to use
Add to agent skills_available to use it.
```

### Skill Description Format

Create a Markdown file with this structure:

```markdown
# Name: domain.action

# Purpose:
Brief description of what the skill does

# Inputs:
- input_parameter_1
- input_parameter_2 (optional)

# Outputs:
- output_file_1.json
- output_file_2.yaml

# Permissions:
- filesystem:read
- filesystem:write

# Produces Artifacts:
- artifact-type-1
- artifact-type-2

# Consumes Artifacts:
- artifact-type-3

# Implementation Notes:
Detailed guidance for implementing the skill logic
```

Or use JSON format:

```json
{
  "name": "domain.action",
  "purpose": "Brief description",
  "inputs": ["param1", "param2"],
  "outputs": ["output.json"],
  "permissions": ["filesystem:read"],
  "artifact_produces": ["artifact-type-1"],
  "artifact_consumes": ["artifact-type-2"],
  "implementation_notes": "Implementation guidance"
}
```
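
A JSON description like the one above can be loaded and sanity-checked with a few lines of standard-library Python. This is an illustrative sketch, not meta.skill's actual parser; the required-field list is inferred from the format shown above:

```python
import json

# Fields that the JSON description format above always includes.
# Inferred from the documented format, not from a published schema.
REQUIRED_FIELDS = ["name", "purpose", "inputs", "outputs", "permissions"]

def load_description(path: str) -> dict:
    """Load a JSON skill description and verify the required fields exist."""
    with open(path) as f:
        desc = json.load(f)
    missing = [field for field in REQUIRED_FIELDS if field not in desc]
    if missing:
        raise ValueError(f"Description missing fields: {missing}")
    return desc
```

The optional artifact fields (`artifact_produces`, `artifact_consumes`, `implementation_notes`) can simply be read with `desc.get(...)` when present.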

## Generated Structure

For a skill named `data.transform`, meta.skill generates:

```
skills/data.transform/
├── skill.yaml              # Skill configuration
├── data_transform.py       # Python implementation
├── test_data_transform.py  # Test suite
└── README.md               # Documentation
```

### skill.yaml

Complete skill configuration following Betty conventions:

```yaml
name: data.transform
version: 0.1.0
description: Transform data between formats
inputs:
  - input_file
  - output_format
outputs:
  - transformed_data.json
status: active
permissions:
  - filesystem:read
  - filesystem:write
entrypoints:
  - command: /data/transform
    handler: data_transform.py
    runtime: python
    description: Transform data between formats
artifact_metadata:
  produces:
    - type: transformed-data
  consumes:
    - type: raw-data
```
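
After generation, a parsed `skill.yaml` (e.g. loaded with `yaml.safe_load`) can be checked for the top-level keys shown above. A hedged sketch; the key list comes from the example, not from the official skill-definition schema:

```python
# Top-level keys present in the skill.yaml example above.
# Inferred from the example, not from schemas/skill-definition.json.
EXPECTED_KEYS = {"name", "version", "description", "inputs",
                 "outputs", "status", "permissions", "entrypoints"}

def missing_skill_keys(config: dict) -> list:
    """Return the expected top-level keys absent from a parsed skill.yaml."""
    return sorted(EXPECTED_KEYS - set(config))
```

Usage: `missing_skill_keys(yaml.safe_load(open("skills/data.transform/skill.yaml")))` returns an empty list for a complete configuration.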

### Implementation Stub

Python implementation with:

- Proper imports and logging
- Class structure
- execute() method with typed parameters
- CLI entry point with argparse
- Error handling
- Output formatting (JSON/YAML)

```python
#!/usr/bin/env python3
"""
data.transform - Transform data between formats

Generated by meta.skill
"""

import os
import sys
import json
import yaml
from pathlib import Path
from typing import Dict, List, Any, Optional

from betty.config import BASE_DIR
from betty.logging_utils import setup_logger

logger = setup_logger(__name__)


class DataTransform:
    """Transform data between formats"""

    def __init__(self, base_dir: str = BASE_DIR):
        """Initialize skill"""
        self.base_dir = Path(base_dir)

    def execute(self, input_file: Optional[str] = None,
                output_format: Optional[str] = None) -> Dict[str, Any]:
        """Execute the skill"""
        try:
            logger.info("Executing data.transform...")

            # TODO: Implement skill logic here
            # Implementation notes: [your notes here]

            result = {
                "ok": True,
                "status": "success",
                "message": "Skill executed successfully"
            }

            logger.info("Skill completed successfully")
            return result

        except Exception as e:
            logger.error(f"Error executing skill: {e}")
            return {
                "ok": False,
                "status": "failed",
                "error": str(e)
            }


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="Transform data between formats"
    )

    parser.add_argument("--input-file", help="input_file")
    parser.add_argument("--output-format", help="output_format")
    # Named --format so it does not collide with the skill's
    # own --output-format input above
    parser.add_argument(
        "--format",
        choices=["json", "yaml"],
        default="json",
        help="Result output format"
    )

    args = parser.parse_args()

    skill = DataTransform()
    result = skill.execute(
        input_file=args.input_file,
        output_format=args.output_format,
    )

    if args.format == "json":
        print(json.dumps(result, indent=2))
    else:
        print(yaml.dump(result, default_flow_style=False))

    sys.exit(0 if result.get("ok") else 1)


if __name__ == "__main__":
    main()
```

### Test Template

pytest-based test suite:

```python
#!/usr/bin/env python3
"""Tests for data.transform"""

import pytest
import sys
import os
from pathlib import Path

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))

from skills.data_transform import data_transform


class TestDataTransform:
    """Tests for DataTransform"""

    def setup_method(self):
        """Setup test fixtures"""
        self.skill = data_transform.DataTransform()

    def test_initialization(self):
        """Test skill initializes correctly"""
        assert self.skill is not None
        assert self.skill.base_dir is not None

    def test_execute_basic(self):
        """Test basic execution"""
        result = self.skill.execute()
        assert result is not None
        assert "ok" in result
        assert "status" in result

    def test_execute_success(self):
        """Test successful execution"""
        result = self.skill.execute()
        assert result["ok"] is True
        assert result["status"] == "success"

    # TODO: Add more specific tests


def test_cli_help(capsys):
    """Test CLI help message"""
    sys.argv = ["data_transform.py", "--help"]

    with pytest.raises(SystemExit) as exc_info:
        data_transform.main()

    assert exc_info.value.code == 0
```

## Skill Naming Convention

Skills must follow the `domain.action` format:

- **domain**: Category (e.g., `data`, `api`, `file`, `text`)
- **action**: Operation (e.g., `validate`, `transform`, `parse`)
- Use only lowercase letters and numbers (no hyphens, underscores, or special characters)

Valid examples:

- ✅ `data.validate`
- ✅ `api.test`
- ✅ `file.compress`
- ✅ `text.summarize`

Invalid examples:

- ❌ `data.validate-json` (hyphen not allowed)
- ❌ `data_validate` (underscore not allowed)
- ❌ `DataValidate` (uppercase not allowed)
- ❌ `validate` (missing domain)
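
The convention above can be checked mechanically with a single regular expression. A sketch (the leading-letter requirement is an assumption; the documented rule only says lowercase letters and numbers):

```python
import re

# domain.action: lowercase letters and digits only, exactly one dot.
# Requiring a leading letter in each part is an assumption.
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9]*\.[a-z][a-z0-9]*$")

def is_valid_skill_name(name: str) -> bool:
    """Return True if the name follows the domain.action convention."""
    return bool(NAME_PATTERN.match(name))
```

This accepts the valid examples and rejects each of the invalid ones (hyphens, underscores, uppercase, and a missing domain).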

## Usage Examples

### Example 1: JSON Validator

**Description file** (`json_validator.md`):

```markdown
# Name: data.validatejson

# Purpose:
Validates JSON files against JSON Schema definitions

# Inputs:
- json_file_path
- schema_file_path (optional)

# Outputs:
- validation_result.json

# Permissions:
- filesystem:read

# Produces Artifacts:
- validation-report

# Implementation Notes:
Use Python's jsonschema library for validation
```

**Create skill:**

```bash
python3 agents/meta.skill/meta_skill.py json_validator.md
```

### Example 2: API Tester

**Description file** (`api_tester.json`):

```json
{
  "name": "api.test",
  "purpose": "Test API endpoints and generate reports",
  "inputs": ["openapi_spec_path", "base_url"],
  "outputs": ["test_results.json"],
  "permissions": ["network:http"],
  "artifact_produces": ["test-report"],
  "artifact_consumes": ["openapi-spec"],
  "implementation_notes": "Use requests library to test each endpoint"
}
```

**Create skill:**

```bash
python3 agents/meta.skill/meta_skill.py api_tester.json
```

### Example 3: File Compressor

**Description file** (`file_compressor.md`):

```markdown
# Name: file.compress

# Purpose:
Compress files using various algorithms

# Inputs:
- input_path
- compression_type (gzip, zip, tar.gz)

# Outputs:
- compressed_file

# Permissions:
- filesystem:read
- filesystem:write

# Implementation Notes:
Support gzip, zip, and tar.gz formats using Python standard library
```

**Create skill:**

```bash
python3 agents/meta.skill/meta_skill.py file_compressor.md
```

## Integration

### With meta.agent

Create an agent that uses the skill:

```yaml
name: data.validator
description: Data validation agent
skills_available:
  - data.validatejson  # Skill created by meta.skill
```

### With plugin.sync

Sync skills to plugin format:

```bash
python3 skills/plugin.sync/plugin_sync.py
```

This converts `skill.yaml` to commands in `.claude-plugin/plugin.yaml`.

## Artifact Types

### Consumes

- **skill-description** - Natural language skill requirements
  - Pattern: `**/skill_description.md`
  - Format: Markdown or JSON

### Produces

- **skill-definition** - Complete skill configuration
  - Pattern: `skills/*/skill.yaml`
  - Schema: `schemas/skill-definition.json`
- **skill-implementation** - Python implementation code
  - Pattern: `skills/*/[skill_module].py`
- **skill-tests** - Test suite
  - Pattern: `skills/*/test_[skill_module].py`
- **skill-documentation** - README documentation
  - Pattern: `skills/*/README.md`
## Common Workflows
|
||||
|
||||
### Workflow 1: Create and Test Skill
|
||||
|
||||
```bash
|
||||
# 1. Create skill description
|
||||
cat > my_skill.md <<EOF
|
||||
# Name: data.parse
|
||||
# Purpose: Parse structured data from text
|
||||
# Inputs:
|
||||
- input_text
|
||||
# Outputs:
|
||||
- parsed_data.json
|
||||
# Permissions:
|
||||
- filesystem:write
|
||||
EOF
|
||||
|
||||
# 2. Generate skill
|
||||
python3 agents/meta.skill/meta_skill.py my_skill.md
|
||||
|
||||
# 3. Implement logic (edit the generated file)
|
||||
vim skills/data.parse/data_parse.py
|
||||
|
||||
# 4. Run tests
|
||||
pytest skills/data.parse/test_data_parse.py -v
|
||||
|
||||
# 5. Test CLI
|
||||
python3 skills/data.parse/data_parse.py --help
|
||||
```

### Workflow 2: Create Skill for Agent

```bash
# 1. Create skill
python3 agents/meta.skill/meta_skill.py api_analyzer_skill.md

# 2. Add to agent
echo "  - api.analyze" >> agents/api.agent/agent.yaml

# 3. Sync to plugin
python3 skills/plugin.sync/plugin_sync.py
```

### Workflow 3: Batch Create Skills

```bash
# Create multiple skills
for desc in skills_to_create/*.md; do
  echo "Creating skill from $desc..."
  python3 agents/meta.skill/meta_skill.py "$desc"
done
```

## Tips & Best Practices

### Skill Descriptions

**Be specific about purpose:**

```markdown
# Good
# Purpose: Validate JSON against JSON Schema Draft 07

# Bad
# Purpose: Validate stuff
```

**Include implementation notes:**

```markdown
# Implementation Notes:
Use the jsonschema library. Support Draft 07 schemas.
Provide detailed error messages with line numbers.
```

**Specify optional parameters:**

```markdown
# Inputs:
- required_param
- optional_param (optional)
- another_optional (optional, defaults to 'value')
```

### Parameter Naming

Parameters are automatically sanitized:

- Special characters removed (except `-`, `_`, spaces)
- Converted to lowercase
- Spaces and hyphens become underscores

Example conversions:

- `"Schema File Path (optional)"` → `schema_file_path_optional`
- `"API-Key"` → `api_key`
- `"Input Data"` → `input_data`
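
The three rules above can be sketched in a few lines. This is an illustration of the documented behavior, not meta.skill's exact implementation:

```python
import re

def sanitize_parameter(name: str) -> str:
    """Illustrative sketch of the documented sanitization rules:
    strip special characters (keeping '-', '_', and spaces),
    lowercase, then turn runs of spaces and hyphens into underscores."""
    cleaned = re.sub(r"[^A-Za-z0-9 _-]", "", name)
    cleaned = cleaned.lower()
    return re.sub(r"[ -]+", "_", cleaned).strip("_")
```

Applied to the examples above, `sanitize_parameter("Schema File Path (optional)")` yields `schema_file_path_optional`.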

### Implementation Strategy

1. **Generate skeleton first** - Let meta.skill create structure
2. **Implement gradually** - Add logic to `execute()` method
3. **Test incrementally** - Run tests after each change
4. **Update documentation** - Keep README current

### Artifact Metadata

Always specify artifact types for interoperability:

```markdown
# Produces Artifacts:
- openapi-spec
- validation-report

# Consumes Artifacts:
- api-requirements
```

This enables:

- Agent discovery via meta.compatibility
- Pipeline suggestions via meta.suggest
- Workflow orchestration

## Troubleshooting

### Invalid skill name

```
Error: Skill name must be in domain.action format: my-skill
```

**Solution:** Use the `domain.action` format with only alphanumeric characters:

```markdown
# Wrong: my-skill, my_skill, MySkill
# Right: data.transform, api.validate
```

### Skill already exists

```
Error: Skill directory already exists: skills/data.validate
```

**Solution:** Remove the existing skill or choose a different name:

```bash
rm -rf skills/data.validate
```

### Import errors in generated code

```
ModuleNotFoundError: No module named 'betty.config'
```

**Solution:** Ensure the Betty framework is on the Python path:

```bash
export PYTHONPATH="${PYTHONPATH}:/home/user/betty"
```

### Test failures

```
ModuleNotFoundError: No module named 'skills.data_validate'
```

**Solution:** Run tests from the Betty root directory:

```bash
cd /home/user/betty
pytest skills/data.validate/test_data_validate.py -v
```

## Architecture

```
meta.skill
├─ Input: skill-description (Markdown/JSON)
├─ Parser: extract name, purpose, inputs, outputs
├─ Generator: create skill.yaml, Python, tests, README
├─ Validator: check naming conventions
└─ Output: Complete skill directory structure
```

## Next Steps

After creating a skill with meta.skill:

1. **Implement logic** - Add functionality to `execute()` method
2. **Write tests** - Expand test coverage beyond basic tests
3. **Add to agent** - Include in agent's `skills_available`
4. **Sync to plugin** - Run plugin.sync to update plugin.yaml
5. **Test integration** - Verify skill works in agent context
6. **Document usage** - Update README with examples

## Related Documentation

- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [skill-description schema](../../schemas/skill-description.json)
- [skill-definition schema](../../schemas/skill-definition.json)

## How Claude Uses This

Claude can:

1. **Create skills on demand** - "Create a skill that validates YAML files"
2. **Extend agent capabilities** - "Add a JSON validator skill to this agent"
3. **Build skill libraries** - "Create skills for all common data operations"
4. **Prototype quickly** - Test ideas by generating skill scaffolds

meta.skill enables rapid skill development and agent expansion!