Initial commit

agents/meta.compatibility/README.md (new file, 469 lines)

# meta.compatibility - Agent Compatibility Analyzer

Analyzes agent compatibility and discovers multi-agent workflows based on artifact flows.

## Overview

**meta.compatibility** helps Claude discover which agents can work together by analyzing what artifacts they produce and consume. It enables intelligent multi-agent orchestration by suggesting compatible combinations and detecting pipeline gaps.

**What it does:**
- Scans all agents and extracts artifact metadata
- Builds compatibility maps (who produces/consumes what)
- Finds compatible agents based on artifact flows
- Suggests multi-agent pipelines for goals
- Generates complete compatibility graphs
- Detects gaps (consumed but not produced artifacts)

## Quick Start

### Find Compatible Agents

```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent
```

Output:
```
Agent: meta.agent
Produces: agent-definition, agent-documentation
Consumes: agent-description

✅ Can feed outputs to (1 agents):
  • meta.compatibility (via agent-definition)

⚠️ Gaps (1):
  • agent-description: No agents produce 'agent-description' (required by meta.agent)
```

### Suggest Pipeline

```bash
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create and analyze an agent"
```

Output:
```
📋 Pipeline 1: meta.agent Pipeline
   Pipeline starting with meta.agent
   Steps:
     1. meta.agent - Meta-agent that creates other agents...
     2. meta.compatibility - Analyzes agent and skill compatibility...
```

### Analyze Agent

```bash
python3 agents/meta.compatibility/meta_compatibility.py analyze meta.agent
```

### List All Compatibility

```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all
```

Output:
```
Total Agents: 7
Total Artifact Types: 16
Total Relationships: 3

⚠️ Global Gaps (5):
  • agent-description: Consumed by 1 agents but no producers
  ...
```

## Commands

### find-compatible

Find agents compatible with a specific agent.

```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible AGENT_NAME [--format json|yaml|text]
```

**Shows:**
- What the agent produces
- What the agent consumes
- Agents that can consume its outputs
- Agents that can provide its inputs
- Gaps (missing producers)

### suggest-pipeline

Suggest a multi-agent pipeline for a goal.

```bash
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "GOAL" [--artifacts TYPE1 TYPE2...] [--format json|yaml|text]
```

**Examples:**
```bash
# Suggest pipeline for goal
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Design and validate APIs"

# With required artifacts
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Process data" --artifacts openapi-spec validation-report
```

**Shows:**
- Suggested pipelines (ranked)
- Steps in each pipeline
- Artifact flows between agents
- Whether the pipeline is complete (no gaps)

### analyze

Complete compatibility analysis for one agent.

```bash
python3 agents/meta.compatibility/meta_compatibility.py analyze AGENT_NAME [--format json|yaml|text]
```

**Shows:**
- Full compatibility report
- Compatible agents (upstream/downstream)
- Suggested workflows
- Gaps and warnings

### list-all

Generate the complete compatibility graph for all agents.

```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all [--format json|yaml|text]
```

**Shows:**
- All agents in the system
- All relationships
- All artifact types
- Global gaps
- Statistics

## Output Formats

### Text (default)

Human-readable output with emojis and formatting.

### JSON

Machine-readable JSON for programmatic use.

```bash
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent --format json > meta_agent_compatibility.json
```
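
The exported JSON mirrors the dictionary returned by `find_compatible()` (keys such as `agent`, `produces`, `can_feed_to`, `gaps`). A minimal sketch of consuming it from another tool, assuming the export command above has already produced the file:

```python
import json

# Load the report exported by the command above
with open("meta_agent_compatibility.json") as f:
    report = json.load(f)

print(f"{report['agent']} produces: {', '.join(report['produces'])}")
for target in report.get("can_feed_to", []):
    print(f"  can feed {target['agent']} via {target['artifact']}")
for gap in report.get("gaps", []):
    print(f"  gap: {gap['artifact']} ({gap['severity']})")
```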

### YAML

YAML format for configuration or documentation.

```bash
python3 agents/meta.compatibility/meta_compatibility.py list-all --format yaml > compatibility_graph.yaml
```

## How It Works

### 1. Agent Scanning

Scans `agents/` directory for all `agent.yaml` files:

```python
for agent_dir in agents_dir.iterdir():
    agent_yaml = agent_dir / "agent.yaml"
    # Load and parse agent definition
```

### 2. Artifact Extraction

Extracts `artifact_metadata` from each agent:

```yaml
artifact_metadata:
  produces:
    - type: openapi-spec
  consumes:
    - type: api-requirements
```

### 3. Compatibility Mapping

Builds map of artifact types to producers/consumers:

```
openapi-spec:
  producers: [api.define, api.architect]
  consumers: [api.validate, api.code-generator]
```
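
Conceptually, the mapping step is a single pass over the scanned agent definitions. A condensed sketch of the logic in `scan_agents()` and `build_compatibility_map()` (the actual implementation also guards against malformed files and accepts bare string entries):

```python
from collections import defaultdict
from pathlib import Path

import yaml


def build_map(agents_dir: str):
    """Map each artifact type to the agents that produce and consume it."""
    compat = defaultdict(lambda: {"producers": [], "consumers": []})
    for agent_yaml in Path(agents_dir).glob("*/agent.yaml"):
        agent = yaml.safe_load(agent_yaml.read_text())
        meta = agent.get("artifact_metadata", {})
        for artifact in meta.get("produces", []):
            compat[artifact["type"]]["producers"].append(agent["name"])
        for artifact in meta.get("consumes", []):
            compat[artifact["type"]]["consumers"].append(agent["name"])
    return compat
```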

### 4. Relationship Discovery

For each agent:
- Find agents that can consume its outputs
- Find agents that can provide its inputs
- Detect gaps (missing producers)

### 5. Pipeline Suggestion

Uses keyword matching and artifact analysis (sketched below):
- Match goal keywords to agent names/descriptions
- Build pipeline from artifact flows
- Rank by completeness and length
- Return top suggestions
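
A condensed sketch of how `suggest_pipeline()` selects candidate agents (the keyword table is abbreviated here):

```python
def pick_relevant_agents(goal: str, keywords_to_agents: dict, known_agents: dict) -> set:
    """Match goal keywords against the keyword table, keeping only known agents."""
    goal_lower = goal.lower()
    relevant = set()
    for keyword, agents in keywords_to_agents.items():
        if keyword in goal_lower:
            relevant.update(a for a in agents if a in known_agents)
    return relevant
```

Candidate pipelines built from those agents are then sorted with `pipelines.sort(key=lambda p: (-len(p.get("steps", [])), -p.get("confidence_score", 0)))`, i.e. more steps first and then higher confidence, and only the top three suggestions are returned.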

## Integration

### With meta.agent
After creating an agent, analyze its compatibility:

```bash
# Create agent
python3 agents/meta.agent/meta_agent.py description.md

# Analyze compatibility
python3 agents/meta.compatibility/meta_compatibility.py analyze new-agent

# Find who can work with it
python3 agents/meta.compatibility/meta_compatibility.py find-compatible new-agent
```

### With meta.suggest

meta.suggest uses meta.compatibility to make recommendations:

```bash
python3 agents/meta.suggest/meta_suggest.py --context meta.agent
```

Internally, meta.suggest calls meta.compatibility to find next steps.

## Common Workflows

### Workflow 1: Understand Agent Ecosystem

```bash
# See all compatibility
python3 agents/meta.compatibility/meta_compatibility.py list-all

# Analyze each agent
for agent in meta.agent meta.artifact meta.compatibility meta.suggest; do
  echo "=== $agent ==="
  python3 agents/meta.compatibility/meta_compatibility.py analyze $agent
done
```

### Workflow 2: Build Multi-Agent Pipeline

```bash
# Suggest pipeline
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create and test an agent"

# Get JSON for workflow automation
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "My goal" --format json > pipeline.json
```
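
The exported `pipeline.json` follows the structure returned by `suggest_pipeline()` (a `pipelines` list whose entries carry `steps`, a `complete` flag, and an optional `gaps` list), so automation can iterate over it directly. A minimal sketch, assuming the export command above has been run:

```python
import json

with open("pipeline.json") as f:
    suggestion = json.load(f)

for pipeline in suggestion.get("pipelines", []):
    status = "complete" if pipeline.get("complete") else "has gaps"
    print(f"{pipeline['name']} ({status})")
    for step in pipeline["steps"]:
        print(f"  {step['step']}. {step['agent']}")
    if pipeline.get("gaps"):
        print(f"  missing: {', '.join(pipeline['gaps'])}")
```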

### Workflow 3: Find Gaps

```bash
# Find global gaps
python3 agents/meta.compatibility/meta_compatibility.py list-all | grep "Gaps"

# Analyze specific agent gaps
python3 agents/meta.compatibility/meta_compatibility.py find-compatible api.architect
```

## Artifact Types

### Consumes

- **agent-definition** - Agent configurations
  - Pattern: `agents/*/agent.yaml`

- **registry-data** - Skills and agents registry
  - Pattern: `registry/*.json`

### Produces

- **compatibility-graph** - Agent relationship maps
  - Pattern: `*.compatibility.json`
  - Schema: `schemas/compatibility-graph.json`

- **pipeline-suggestion** - Multi-agent workflows
  - Pattern: `*.pipeline.json`
  - Schema: `schemas/pipeline-suggestion.json`

## Understanding Output

### Can Feed To

Agents that can consume this agent's outputs.

```
✅ Can feed outputs to (2 agents):
  • api.validator (via openapi-spec)
  • api.code-generator (via openapi-spec)
```

Means:
- api.architect produces openapi-spec
- Both api.validator and api.code-generator consume openapi-spec
- You can run: api.architect → api.validator
- Or: api.architect → api.code-generator

### Can Receive From

Agents that can provide this agent's inputs.

```
⬅️ Can receive inputs from (1 agents):
  • api.requirements-analyzer (via api-requirements)
```

Means:
- api.architect needs api-requirements
- api.requirements-analyzer produces api-requirements
- You can run: api.requirements-analyzer → api.architect

### Gaps

Missing artifacts in the ecosystem.

```
⚠️ Gaps (1):
  • agent-description: No agents produce 'agent-description'
```

Means:
- meta.agent needs agent-description input
- No agent produces it (it's user-provided)
- This is expected for user inputs

### Complete vs Incomplete Pipelines

**Complete Pipeline:**
```
Complete: ✅ Yes
```
All consumed artifacts are produced by pipeline steps.

**Incomplete Pipeline:**
```
Complete: ❌ No
Gaps: agent-description, registry-data
```
Some consumed artifacts aren't produced; the pipeline requires user input or additional agents.
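
Under the hood, completeness is a set difference over the steps, as computed in `_build_pipeline_from_agent()`. A runnable sketch using the meta.agent → meta.compatibility artifacts from the examples above:

```python
def pipeline_gaps(steps):
    """Artifact types a pipeline consumes but never produces."""
    produced, consumed = set(), set()
    for step in steps:
        produced.update(step.get("produces", []))
        consumed.update(step.get("consumes", []))
    return consumed - produced


steps = [
    {"produces": ["agent-definition"], "consumes": ["agent-description"]},
    {"produces": ["compatibility-graph"], "consumes": ["agent-definition", "registry-data"]},
]
# A pipeline is complete only when this set is empty.
print(pipeline_gaps(steps))  # -> {'agent-description', 'registry-data'} (set order may vary)
```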

## Tips & Best Practices

### Finding Compatible Agents

Use specific artifact types:
```bash
# Instead of generic goal
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Process stuff"

# Use specific artifacts
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Validate API" --artifacts openapi-spec
```

### Understanding Gaps

Not all gaps are problems:
- **User inputs** (agent-description, api-requirements) - Expected
- **Missing producers** for internal artifacts - Need new agents/skills

### Building Pipelines

Start with compatibility analysis (a scripted version follows this list):
1. Understand what each agent needs/produces
2. Find compatible combinations
3. Build pipeline step-by-step
4. Validate no gaps exist (or gaps are user inputs)
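
The same four steps can be scripted against the `CompatibilityAnalyzer` class instead of the CLI. A rough sketch, assuming it is run from the repository root and that adding the agent's directory to `sys.path` makes the module importable:

```python
import sys

sys.path.insert(0, "agents/meta.compatibility")  # assumed repo layout
from meta_compatibility import CompatibilityAnalyzer

analyzer = CompatibilityAnalyzer()                      # base_dir defaults to "."
analyzer.scan_agents()                                  # 1. what each agent needs/produces
analyzer.build_compatibility_map()
report = analyzer.find_compatible("meta.agent")         # 2. compatible combinations
suggestion = analyzer.suggest_pipeline("Create and analyze an agent")  # 3. build a pipeline
user_inputs = {"agent-description"}                     # 4. ignore expected user-input gaps
unexpected = [g for g in report["gaps"] if g["artifact"] not in user_inputs]
```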

## Troubleshooting

### Agent not found

```
Error: Agent 'my-agent' not found
```

**Solutions:**
- Check that the agent exists in the `agents/` directory
- Ensure `agent.yaml` exists
- Verify that the agent name in `agent.yaml` matches

### No compatible agents found

```
Can feed outputs to (0 agents)
Can receive inputs from (0 agents)
```

**Causes:**
- Agent is isolated (no shared artifact types)
- Agent uses custom artifact types
- No other agents exist yet

**Solutions:**
- Create agents with compatible artifact types
- Use standard artifact types
- Check that `artifact_metadata` is properly defined

### Empty pipeline suggestions

```
Error: Could not determine relevant agents for goal
```

**Solutions:**
- Be more specific in the goal description
- Mention artifact types explicitly
- Use the `--artifacts` flag

## Architecture

```
meta.compatibility
├─ Scans: agents/ directory
├─ Analyzes: artifact_metadata
├─ Builds: compatibility maps
├─ Produces: compatibility graphs
└─ Used by: meta.suggest, Claude
```

## Examples

See test runs:
```bash
# Example 1: Find compatible agents
python3 agents/meta.compatibility/meta_compatibility.py find-compatible meta.agent

# Example 2: Suggest pipeline
python3 agents/meta.compatibility/meta_compatibility.py suggest-pipeline "Create agent and check compatibility"

# Example 3: Full analysis
python3 agents/meta.compatibility/meta_compatibility.py analyze api.architect

# Example 4: Export to JSON
python3 agents/meta.compatibility/meta_compatibility.py list-all --format json > graph.json
```

## Related Documentation

- [META_AGENTS.md](../../docs/META_AGENTS.md) - Meta-agent ecosystem
- [ARTIFACT_STANDARDS.md](../../docs/ARTIFACT_STANDARDS.md) - Artifact system
- [compatibility-graph schema](../../schemas/compatibility-graph.json)
- [pipeline-suggestion schema](../../schemas/pipeline-suggestion.json)

## How Claude Uses This

Claude can:
1. **Discover capabilities** - "What agents can work with openapi-spec?"
2. **Build workflows** - "How do I design and validate an API?"
3. **Make decisions** - "What should I run next?"
4. **Detect gaps** - "What's missing from the ecosystem?"

meta.compatibility enables autonomous multi-agent orchestration!

agents/meta.compatibility/agent.yaml (new file, 130 lines)

name: meta.compatibility
version: 0.1.0
description: |
  Analyzes agent and skill compatibility to discover multi-agent workflows.

  This meta-agent helps Claude discover which agents can work together by
  analyzing artifact flows - what agents produce and what others consume.

  Enables intelligent orchestration by suggesting compatible agent combinations
  and detecting potential pipeline gaps.

artifact_metadata:
  consumes:
    - type: agent-definition
      file_pattern: "agents/*/agent.yaml"
      description: "Agent definitions to analyze for compatibility"

    - type: registry-data
      file_pattern: "registry/*.json"
      description: "Skills and agents registry"

  produces:
    - type: compatibility-graph
      file_pattern: "*.compatibility.json"
      content_type: "application/json"
      schema: "schemas/compatibility-graph.json"
      description: "Agent relationship graph showing artifact flows"

    - type: pipeline-suggestion
      file_pattern: "*.pipeline.json"
      content_type: "application/json"
      schema: "schemas/pipeline-suggestion.json"
      description: "Suggested multi-agent workflows"

status: draft
reasoning_mode: iterative
capabilities:
  - Build compatibility graphs that connect agent inputs and outputs
  - Recommend orchestrated workflows that minimize gaps and conflicts
  - Surface registry insights to guide creation of missing capabilities
skills_available:
  - agent.compose    # Analyze artifact flows
  - artifact.define  # Understand artifact types

permissions:
  - filesystem:read

system_prompt: |
  You are meta.compatibility, the agent compatibility analyzer.

  Your purpose is to help Claude discover which agents work together by
  analyzing what artifacts they produce and consume.

  ## Your Responsibilities

  1. **Analyze Compatibility**
     - Scan all agent definitions
     - Extract artifact metadata (produces/consumes)
     - Find matching artifact types
     - Identify compatible agent pairs

  2. **Suggest Pipelines**
     - Recommend multi-agent workflows
     - Ensure artifact flow is complete (no gaps)
     - Prioritize common use cases
     - Provide clear rationale

  3. **Detect Gaps**
     - Find consumed artifacts that aren't produced
     - Identify missing agents in pipelines
     - Suggest what needs to be created

  4. **Generate Compatibility Graphs**
     - Visual representation of agent relationships
     - Show artifact flows between agents
     - Highlight compatible combinations

  ## Commands You Support

  **Find Compatible Agents:**
  ```bash
  /meta/compatibility find-compatible api.architect
  ```
  Returns agents that can consume api.architect's outputs.

  **Suggest Pipeline:**
  ```bash
  /meta/compatibility suggest-pipeline "Design and implement an API"
  ```
  Returns multi-agent workflow for the task.

  **Analyze Agent:**
  ```bash
  /meta/compatibility analyze api.architect
  ```
  Returns full compatibility analysis for one agent.

  **List All Compatibility:**
  ```bash
  /meta/compatibility list-all
  ```
  Returns complete compatibility graph for all agents.

  ## Analysis Criteria

  Two agents are compatible if:
  - Agent A produces artifact type X
  - Agent B consumes artifact type X
  - The artifact schemas are compatible

  ## Pipeline Suggestion Criteria

  A good pipeline:
  - Has no gaps (all consumed artifacts are produced)
  - Follows logical workflow order
  - Matches the user's stated goal
  - Uses minimal agents (efficiency)
  - Includes validation steps when appropriate

  ## Output Format

  Always provide:
  - **Compatible agents**: List with rationale
  - **Artifact flows**: What flows between agents
  - **Suggested pipelines**: Step-by-step workflows
  - **Gaps**: Any missing artifacts or agents
  - **Confidence**: How confident you are in the suggestions

  Remember: You enable intelligent orchestration by making compatibility
  discoverable. Help Claude make smart choices about which agents to use together.

agents/meta.compatibility/meta_compatibility.py (new executable file, 698 lines)

#!/usr/bin/env python3
"""
meta.compatibility - Agent Compatibility Analyzer

Analyzes agent and skill compatibility to discover multi-agent workflows.
Helps Claude orchestrate by showing which agents can work together.
"""

import json
import yaml
import sys
import os
from pathlib import Path
from typing import Dict, List, Any, Optional, Set, Tuple
from collections import defaultdict

# Add parent directory to path for imports
parent_dir = str(Path(__file__).parent.parent.parent)
sys.path.insert(0, parent_dir)

from betty.provenance import compute_hash, get_provenance_logger
from betty.config import REGISTRY_FILE, REGISTRY_DIR


class CompatibilityAnalyzer:
    """Analyzes agent compatibility based on artifact flows"""

    def __init__(self, base_dir: str = "."):
        """Initialize with base directory"""
        self.base_dir = Path(base_dir)
        self.agents_dir = self.base_dir / "agents"
        self.agents = {}  # name -> agent definition
        self.compatibility_map = {}  # artifact_type -> {producers: [], consumers: []}

    def scan_agents(self) -> Dict[str, Any]:
        """
        Scan agents directory and load all agent definitions

        Returns:
            Dictionary of agent_name -> agent_definition
        """
        self.agents = {}

        if not self.agents_dir.exists():
            return self.agents

        for agent_dir in self.agents_dir.iterdir():
            if agent_dir.is_dir():
                agent_yaml = agent_dir / "agent.yaml"
                if agent_yaml.exists():
                    with open(agent_yaml) as f:
                        agent_def = yaml.safe_load(f)
                    if agent_def and "name" in agent_def:
                        self.agents[agent_def["name"]] = agent_def

        return self.agents

    def extract_artifacts(self, agent_def: Dict[str, Any]) -> Tuple[Set[str], Set[str]]:
        """
        Extract artifact types from agent definition

        Args:
            agent_def: Agent definition dictionary

        Returns:
            Tuple of (produces_set, consumes_set)
        """
        produces = set()
        consumes = set()

        artifact_metadata = agent_def.get("artifact_metadata", {})

        # Extract produced artifacts
        for artifact in artifact_metadata.get("produces", []):
            if isinstance(artifact, dict) and "type" in artifact:
                produces.add(artifact["type"])
            elif isinstance(artifact, str):
                produces.add(artifact)

        # Extract consumed artifacts
        for artifact in artifact_metadata.get("consumes", []):
            if isinstance(artifact, dict) and "type" in artifact:
                consumes.add(artifact["type"])
            elif isinstance(artifact, str):
                consumes.add(artifact)

        return produces, consumes

    def build_compatibility_map(self) -> Dict[str, Dict[str, List[str]]]:
        """
        Build map of artifact types to producers/consumers

        Returns:
            Dictionary mapping artifact_type -> {producers: [], consumers: []}
        """
        self.compatibility_map = defaultdict(lambda: {"producers": [], "consumers": []})

        for agent_name, agent_def in self.agents.items():
            produces, consumes = self.extract_artifacts(agent_def)

            for artifact_type in produces:
                self.compatibility_map[artifact_type]["producers"].append(agent_name)

            for artifact_type in consumes:
                self.compatibility_map[artifact_type]["consumers"].append(agent_name)

        return dict(self.compatibility_map)

    def find_compatible(self, agent_name: str) -> Dict[str, Any]:
        """
        Find agents compatible with the specified agent

        Args:
            agent_name: Name of agent to analyze

        Returns:
            Dictionary with compatible agents and rationale
        """
        if agent_name not in self.agents:
            return {
                "error": f"Agent '{agent_name}' not found",
                "available_agents": list(self.agents.keys())
            }

        agent_def = self.agents[agent_name]
        produces, consumes = self.extract_artifacts(agent_def)

        result = {
            "agent": agent_name,
            "produces": list(produces),
            "consumes": list(consumes),
            "can_feed_to": [],  # Agents that can consume this agent's outputs
            "can_receive_from": [],  # Agents that can provide this agent's inputs
            "gaps": []  # Missing artifacts
        }

        # Find agents that can consume this agent's outputs
        for artifact_type in produces:
            consumers = self.compatibility_map.get(artifact_type, {}).get("consumers", [])
            for consumer in consumers:
                if consumer != agent_name:
                    result["can_feed_to"].append({
                        "agent": consumer,
                        "artifact": artifact_type,
                        "rationale": f"{agent_name} produces '{artifact_type}' which {consumer} consumes"
                    })

        # Find agents that can provide this agent's inputs
        for artifact_type in consumes:
            producers = self.compatibility_map.get(artifact_type, {}).get("producers", [])
            if not producers:
                result["gaps"].append({
                    "artifact": artifact_type,
                    "issue": f"No agents produce '{artifact_type}' (required by {agent_name})",
                    "severity": "high"
                })
            else:
                for producer in producers:
                    if producer != agent_name:
                        result["can_receive_from"].append({
                            "agent": producer,
                            "artifact": artifact_type,
                            "rationale": f"{producer} produces '{artifact_type}' which {agent_name} needs"
                        })

        return result

    def suggest_pipeline(self, goal: str, required_artifacts: Optional[List[str]] = None) -> Dict[str, Any]:
        """
        Suggest multi-agent pipeline for a goal

        Args:
            goal: Natural language description of what to accomplish
            required_artifacts: Optional list of artifact types needed

        Returns:
            Suggested pipeline with steps and rationale
        """
        # Simple keyword matching for now (can be enhanced with ML later)
        goal_lower = goal.lower()

        keywords_to_agents = {
            "api": ["api.architect", "meta.agent"],
            "design api": ["api.architect"],
            "validate": ["api.architect"],
            "create agent": ["meta.agent"],
            "agent": ["meta.agent"],
            "artifact": ["meta.artifact"],
            "optimize": [],  # No optimizer yet, but we have the artifact type
        }

        # Find relevant agents
        relevant_agents = set()
        for keyword, agents in keywords_to_agents.items():
            if keyword in goal_lower:
                relevant_agents.update([a for a in agents if a in self.agents])

        if not relevant_agents and required_artifacts:
            # Find agents that produce the required artifacts
            for artifact_type in required_artifacts:
                producers = self.compatibility_map.get(artifact_type, {}).get("producers", [])
                relevant_agents.update(producers)

        if not relevant_agents:
            return {
                "error": "Could not determine relevant agents for goal",
                "suggestion": "Try being more specific or mention required artifact types",
                "goal": goal
            }

        # Build pipeline by analyzing artifact flows
        pipelines = []

        for start_agent in relevant_agents:
            pipeline = self._build_pipeline_from_agent(start_agent, goal)
            if pipeline:
                pipelines.append(pipeline)

        # Rank pipelines by completeness and length
        pipelines.sort(key=lambda p: (
            -len(p.get("steps", [])),  # More steps first (broader artifact coverage)
            -p.get("confidence_score", 0)  # Then higher confidence
        ))

        if not pipelines:
            return {
                "error": "Could not build complete pipeline",
                "relevant_agents": list(relevant_agents),
                "goal": goal
            }

        return {
            "goal": goal,
            "pipelines": pipelines[:3],  # Top 3 suggestions
            "confidence": "medium" if len(pipelines) > 1 else "low"
        }

    def _build_pipeline_from_agent(self, start_agent: str, goal: str) -> Optional[Dict[str, Any]]:
        """
        Build a pipeline starting from a specific agent

        Args:
            start_agent: Agent to start pipeline from
            goal: Goal description

        Returns:
            Pipeline dictionary or None
        """
        if start_agent not in self.agents:
            return None

        agent_def = self.agents[start_agent]
        produces, consumes = self.extract_artifacts(agent_def)

        pipeline = {
            "name": f"{start_agent.title()} Pipeline",
            "description": f"Pipeline starting with {start_agent}",
            "steps": [
                {
                    "step": 1,
                    "agent": start_agent,
                    "description": agent_def.get("description", "").split("\n")[0],
                    "produces": list(produces),
                    "consumes": list(consumes)
                }
            ],
            "artifact_flow": [],
            "confidence_score": 0.5
        }

        # Try to add compatible next steps
        compatibility = self.find_compatible(start_agent)

        for compatible in compatibility.get("can_feed_to", [])[:2]:  # Max 2 next steps
            next_agent = compatible["agent"]
            if next_agent in self.agents:
                next_def = self.agents[next_agent]
                next_produces, next_consumes = self.extract_artifacts(next_def)

                pipeline["steps"].append({
                    "step": len(pipeline["steps"]) + 1,
                    "agent": next_agent,
                    "description": next_def.get("description", "").split("\n")[0],
                    "produces": list(next_produces),
                    "consumes": list(next_consumes)
                })

                pipeline["artifact_flow"].append({
                    "from": start_agent,
                    "to": next_agent,
                    "artifact": compatible["artifact"]
                })

                pipeline["confidence_score"] += 0.2

        # Calculate if pipeline has gaps
        all_produces = set()
        all_consumes = set()
        for step in pipeline["steps"]:
            all_produces.update(step.get("produces", []))
            all_consumes.update(step.get("consumes", []))

        gaps = all_consumes - all_produces
        if not gaps:
            pipeline["confidence_score"] += 0.3
            pipeline["complete"] = True
        else:
            pipeline["complete"] = False
            pipeline["gaps"] = list(gaps)

        return pipeline

    def generate_compatibility_graph(self) -> Dict[str, Any]:
        """
        Generate complete compatibility graph for all agents

        Returns:
            Compatibility graph structure
        """
        graph = {
            "agents": [],
            "relationships": [],
            "artifact_types": [],
            "gaps": [],
            "metadata": {
                "total_agents": len(self.agents),
                "total_artifact_types": len(self.compatibility_map)
            }
        }

        # Add agents
        for agent_name, agent_def in self.agents.items():
            produces, consumes = self.extract_artifacts(agent_def)

            graph["agents"].append({
                "name": agent_name,
                "description": agent_def.get("description", "").split("\n")[0],
                "produces": list(produces),
                "consumes": list(consumes)
            })

        # Add relationships
        for agent_name in self.agents:
            compatibility = self.find_compatible(agent_name)

            for compatible in compatibility.get("can_feed_to", []):
                graph["relationships"].append({
                    "from": agent_name,
                    "to": compatible["agent"],
                    "artifact": compatible["artifact"],
                    "type": "produces_for"
                })

        # Add artifact types
        for artifact_type, info in self.compatibility_map.items():
            graph["artifact_types"].append({
                "type": artifact_type,
                "producers": info["producers"],
                "consumers": info["consumers"],
                "producer_count": len(info["producers"]),
                "consumer_count": len(info["consumers"])
            })

        # Find global gaps
        for artifact_type, info in self.compatibility_map.items():
            if not info["producers"] and info["consumers"]:
                graph["gaps"].append({
                    "artifact": artifact_type,
                    "issue": f"Consumed by {len(info['consumers'])} agents but no producers",
                    "consumers": info["consumers"],
                    "severity": "high"
                })

        return graph

    def analyze_agent(self, agent_name: str) -> Dict[str, Any]:
        """
        Complete compatibility analysis for one agent

        Args:
            agent_name: Name of agent to analyze

        Returns:
            Comprehensive analysis
        """
        compatibility = self.find_compatible(agent_name)

        if "error" in compatibility:
            return compatibility

        # Add suggested workflows
        workflows = []

        # Workflow 1: As a starting point
        if compatibility["can_feed_to"]:
            workflow = {
                "name": f"Start with {agent_name}",
                "description": f"Use {agent_name} as the first step",
                "agents": [agent_name] + [c["agent"] for c in compatibility["can_feed_to"][:2]]
            }
            workflows.append(workflow)

        # Workflow 2: As a middle step
        if compatibility["can_receive_from"] and compatibility["can_feed_to"]:
            workflow = {
                "name": f"{agent_name} in pipeline",
                "description": f"Use {agent_name} as a processing step",
                "agents": [
                    compatibility["can_receive_from"][0]["agent"],
                    agent_name,
                    compatibility["can_feed_to"][0]["agent"]
                ]
            }
            workflows.append(workflow)

        compatibility["suggested_workflows"] = workflows

        return compatibility

    def verify_registry_integrity(self) -> Dict[str, Any]:
        """
        Verify integrity of registry files using provenance hashes.

        Returns:
            Dictionary with verification results
        """
        provenance = get_provenance_logger()

        results = {
            "verified": [],
            "failed": [],
            "missing": [],
            "summary": {
                "total_checked": 0,
                "verified_count": 0,
                "failed_count": 0,
                "missing_count": 0
            }
        }

        # List of registry files to verify
        registry_files = [
            ("skills.json", REGISTRY_FILE),
            ("agents.json", str(Path(REGISTRY_DIR) / "agents.json")),
            ("workflow_history.json", str(Path(REGISTRY_DIR) / "workflow_history.json")),
        ]

        for artifact_id, file_path in registry_files:
            results["summary"]["total_checked"] += 1

            # Check if file exists
            if not os.path.exists(file_path):
                results["missing"].append({
                    "artifact": artifact_id,
                    "path": file_path,
                    "reason": "File does not exist"
                })
                results["summary"]["missing_count"] += 1
                continue

            try:
                # Load the registry file
                with open(file_path, 'r') as f:
                    content = json.load(f)

                # Get stored hash from file (if present)
                stored_hash = content.get("content_hash")

                # Remove content_hash field to compute original hash
                content_without_hash = {k: v for k, v in content.items() if k != "content_hash"}

                # Compute current hash
                current_hash = compute_hash(content_without_hash)

                # Get latest hash from provenance log
                latest_provenance_hash = provenance.get_latest_hash(artifact_id)

                # Verify
                if stored_hash and stored_hash == current_hash:
                    # Hash matches what's in the file
                    verification_status = "verified"

                    # Also check against provenance log
                    if latest_provenance_hash:
                        provenance_match = (stored_hash == latest_provenance_hash)
                    else:
                        provenance_match = None

                    results["verified"].append({
                        "artifact": artifact_id,
                        "path": file_path,
                        "hash": current_hash[:16] + "...",
                        "stored_hash_valid": True,
                        "provenance_logged": latest_provenance_hash is not None,
                        "provenance_match": provenance_match
                    })
                    results["summary"]["verified_count"] += 1

                elif stored_hash and stored_hash != current_hash:
                    # Hash mismatch - file may have been modified
                    results["failed"].append({
                        "artifact": artifact_id,
                        "path": file_path,
                        "reason": "Content hash mismatch",
                        "stored_hash": stored_hash[:16] + "...",
                        "computed_hash": current_hash[:16] + "...",
                        "severity": "high"
                    })
                    results["summary"]["failed_count"] += 1

                else:
                    # No hash stored in file
                    results["missing"].append({
                        "artifact": artifact_id,
                        "path": file_path,
                        "reason": "No content_hash field in file",
                        "computed_hash": current_hash[:16] + "...",
                        "provenance_available": latest_provenance_hash is not None
                    })
                    results["summary"]["missing_count"] += 1

            except Exception as e:
                results["failed"].append({
                    "artifact": artifact_id,
                    "path": file_path,
                    "reason": f"Verification error: {str(e)}",
                    "severity": "high"
                })
                results["summary"]["failed_count"] += 1

        return results


def main():
    """CLI entry point"""
    import argparse

    parser = argparse.ArgumentParser(
        description="meta.compatibility - Agent Compatibility Analyzer"
    )

    # Shared output-format option. Defined as a parent parser and attached to
    # every subcommand so `--format` can be given after the subcommand, as the
    # README documents (a top-level-only option would be rejected there).
    format_parent = argparse.ArgumentParser(add_help=False)
    format_parent.add_argument(
        "--format",
        choices=["json", "yaml", "text"],
        default="text",
        help="Output format"
    )

    subparsers = parser.add_subparsers(dest='command', help='Commands')

    # Find compatible command
    find_parser = subparsers.add_parser('find-compatible', help='Find compatible agents',
                                        parents=[format_parent])
    find_parser.add_argument("agent", help="Agent name to analyze")

    # Suggest pipeline command
    suggest_parser = subparsers.add_parser('suggest-pipeline', help='Suggest multi-agent pipeline',
                                           parents=[format_parent])
    suggest_parser.add_argument("goal", help="Goal description")
    suggest_parser.add_argument("--artifacts", nargs="+", help="Required artifact types")

    # Analyze command
    analyze_parser = subparsers.add_parser('analyze', help='Analyze agent compatibility',
                                           parents=[format_parent])
    analyze_parser.add_argument("agent", help="Agent name to analyze")

    # List all command
    list_parser = subparsers.add_parser('list-all', help='List all compatibility',
                                        parents=[format_parent])

    # Verify integrity command
    verify_parser = subparsers.add_parser('verify-integrity',
                                          help='Verify registry integrity using provenance hashes',
                                          parents=[format_parent])

    args = parser.parse_args()

    if not args.command:
        parser.print_help()
        sys.exit(1)

    analyzer = CompatibilityAnalyzer()
    analyzer.scan_agents()
    analyzer.build_compatibility_map()

    result = None

    if args.command == 'find-compatible':
        print(f"🔍 Finding agents compatible with '{args.agent}'...\n")
        result = analyzer.find_compatible(args.agent)

        if args.format == "text" and "error" not in result:
            print(f"Agent: {result['agent']}")
            print(f"Produces: {', '.join(result['produces']) if result['produces'] else 'none'}")
            print(f"Consumes: {', '.join(result['consumes']) if result['consumes'] else 'none'}")

            if result['can_feed_to']:
                print(f"\n✅ Can feed outputs to ({len(result['can_feed_to'])} agents):")
                for comp in result['can_feed_to']:
                    print(f"  • {comp['agent']} (via {comp['artifact']})")

            if result['can_receive_from']:
                print(f"\n⬅️ Can receive inputs from ({len(result['can_receive_from'])} agents):")
                for comp in result['can_receive_from']:
                    print(f"  • {comp['agent']} (via {comp['artifact']})")

            if result['gaps']:
                print(f"\n⚠️ Gaps ({len(result['gaps'])}):")
                for gap in result['gaps']:
                    print(f"  • {gap['artifact']}: {gap['issue']}")

    elif args.command == 'suggest-pipeline':
        print(f"💡 Suggesting pipeline for: {args.goal}\n")
        result = analyzer.suggest_pipeline(args.goal, args.artifacts)

        if args.format == "text" and "pipelines" in result:
            for i, pipeline in enumerate(result["pipelines"], 1):
                print(f"\n📋 Pipeline {i}: {pipeline['name']}")
                print(f"   {pipeline['description']}")
                print(f"   Complete: {'✅ Yes' if pipeline.get('complete', False) else '❌ No'}")
                print("   Steps:")
                for step in pipeline['steps']:
                    print(f"     {step['step']}. {step['agent']} - {step['description'][:60]}...")

                if pipeline.get('gaps'):
                    print(f"   Gaps: {', '.join(pipeline['gaps'])}")

    elif args.command == 'analyze':
        print(f"📊 Analyzing '{args.agent}'...\n")
        result = analyzer.analyze_agent(args.agent)

        if args.format == "text" and "error" not in result:
            print(f"Agent: {result['agent']}")
            print(f"Produces: {', '.join(result['produces']) if result['produces'] else 'none'}")
            print(f"Consumes: {', '.join(result['consumes']) if result['consumes'] else 'none'}")

            if result.get('suggested_workflows'):
                print("\n🔄 Suggested Workflows:")
                for workflow in result['suggested_workflows']:
                    print(f"\n  {workflow['name']}")
                    print(f"  {workflow['description']}")
                    print(f"  Pipeline: {' → '.join(workflow['agents'])}")

    elif args.command == 'list-all':
        print("🗺️ Generating complete compatibility graph...\n")
        result = analyzer.generate_compatibility_graph()

        if args.format == "text":
            print(f"Total Agents: {result['metadata']['total_agents']}")
            print(f"Total Artifact Types: {result['metadata']['total_artifact_types']}")
            print(f"Total Relationships: {len(result['relationships'])}")

            if result['gaps']:
                print(f"\n⚠️ Global Gaps ({len(result['gaps'])}):")
                for gap in result['gaps']:
                    print(f"  • {gap['artifact']}: {gap['issue']}")

    elif args.command == 'verify-integrity':
        print("🔐 Verifying registry integrity using provenance hashes...\n")
        result = analyzer.verify_registry_integrity()

        if args.format == "text":
            summary = result['summary']
            print(f"Total Checked: {summary['total_checked']}")
            print(f"✅ Verified: {summary['verified_count']}")
            print(f"❌ Failed: {summary['failed_count']}")
            print(f"⚠️ Missing Hash: {summary['missing_count']}")

            if result['verified']:
                print(f"\n✅ Verified Artifacts ({len(result['verified'])}):")
                for item in result['verified']:
                    print(f"  • {item['artifact']}: {item['hash']}")
                    if item.get('provenance_logged'):
                        match_status = "✓" if item.get('provenance_match') else "✗"
                        print(f"    Provenance: {match_status}")

            if result['failed']:
                print(f"\n❌ Failed Verifications ({len(result['failed'])}):")
                for item in result['failed']:
                    print(f"  • {item['artifact']}: {item['reason']}")
                    if 'stored_hash' in item:
                        print(f"    Expected: {item['stored_hash']}")
                        print(f"    Computed: {item['computed_hash']}")

            if result['missing']:
                print(f"\n⚠️ Missing Hashes ({len(result['missing'])}):")
                for item in result['missing']:
                    print(f"  • {item['artifact']}: {item['reason']}")

    # Output result
    if result:
        if args.format == "json":
            print(json.dumps(result, indent=2))
        elif args.format == "yaml":
            print(yaml.dump(result, default_flow_style=False))
        elif "error" in result:
            print(f"\n❌ Error: {result['error']}")
            if "suggestion" in result:
                print(f"💡 {result['suggestion']}")


if __name__ == "__main__":
    main()