Initial commit

Zhongwei Li
2025-11-29 18:47:38 +08:00
commit 70edb707a9
6 changed files with 1008 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,12 @@
{
  "name": "distributed-gummy-orchestrator",
  "description": "Orchestrate gummy-agents across distributed network using 'dw' command for load-balanced, multi-host AI development. Automates distributed Claude Code agents for parallel development workflows.",
  "version": "0.0.0-2025.11.28",
  "author": {
    "name": "William VanSickle III",
    "email": "noreply@humanfrontierlabs.com"
  },
  "skills": [
    "./"
  ]
}

README.md Normal file

@@ -0,0 +1,3 @@
# distributed-gummy-orchestrator
Orchestrate gummy-agents across distributed network using 'dw' command for load-balanced, multi-host AI development. Automates distributed Claude Code agents for parallel development workflows.

SKILL.md Normal file

@@ -0,0 +1,480 @@
---
name: distributed-gummy
description: Orchestrate gummy-agents across distributed network using 'dw' command for load-balanced, multi-host AI development
---
# Distributed Gummy Orchestrator
Coordinate gummy-agent tasks across your Tailscale network using the `dw` command for intelligent load balancing and multi-host AI-powered development.
## When to Use
This skill activates when you want to:
**Distribute gummy tasks across multiple hosts**
- "Run this gummy task on the least loaded host"
- "Execute specialist on optimal node"
- "Distribute testing across all machines"
**Load-balanced AI development**
- "Which host should handle this database work?"
- "Run API specialist on best available node"
- "Balance gummy tasks across cluster"
**Multi-host coordination**
- "Sync codebase andw run gummy on node-2"
- "Execute parallel specialists across network"
- "Run tests on all platforms simultaneously"
**Network-wide specialist monitoring**
- "Show all running specialists across hosts"
- "What gummy tasks are active on my network?"
- "Status of distributed specialists"
## How It Works
### Architecture
```
[Main Claude] ──> [Orchestrator] ──> dw command ──> [Network Nodes]
                        │                                  │
                        ├──> Load Analysis                 ├──> gummy-agent
                        ├──> Host Selection                ├──> Specialists
                        ├──> Sync Management               └──> Tasks
                        └──> Task Distribution
```
### Core Workflows
**1. Load-Balanced Execution**
```bash
# User request: "Run database optimization on best host"
# Agent:
1. Execute: dw load
2. Parse metrics (CPU, memory, load average)
3. Calculate composite scores
4. Select optimal host
5. Sync codebase: dw sync <host>
6. Execute: dw run <host> "cd <project> && gummy task 'optimize database queries'"
```
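The composite score in step 3 is not something the `dw load` output documents here, so the exact formula is an assumption. A minimal sketch using the `load_weights` from the optional config further below (cpu 0.4, memory 0.3, load_average 0.3), with the load average normalized against an assumed 4-core host:
```python
# Hypothetical composite score (lower is better). Weights mirror the
# load_weights block in the optional config; the formula and the 4-core
# normalization are assumptions, not the dw implementation.
DEFAULT_WEIGHTS = {'cpu': 0.4, 'memory': 0.3, 'load_average': 0.3}

def composite_score(cpu_pct: int, mem_pct: int, load_avg: float,
                    weights: dict = DEFAULT_WEIGHTS, cores: int = 4) -> float:
    return (weights['cpu'] * cpu_pct / 100
            + weights['memory'] * mem_pct / 100
            + weights['load_average'] * min(load_avg / cores, 1.0))

# Example: a node at CPU 15%, MEM 45%, load 0.23 scores ~0.21
print(round(composite_score(15, 45, 0.23), 2))
```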
**2. Parallel Distribution**
```bash
# User request: "Test on all platforms"
# Agent:
1. Get hosts: dw status
2. Filter by availability
3. Sync all: for host in hosts; do dw sync $host; done
4. Launch parallel: dw run host1 "test" & dw run host2 "test" & ...
5. Aggregate results
```
**3. Network-Wide Monitoring**
```bash
# User request: "Show all specialists"
# Agent:
1. Get active hosts: dw status
2. For each host: dw run <host> "gummy-watch status"
3. Parse specialist states
4. Aggregate and display
```
## Available Scripts
### orchestrate_gummy.py
Main orchestration logic - coordinates distributed gummy execution.
**Functions**:
- `select_optimal_host()` - Choose best node based on load
- `sync_and_execute_gummy()` - Sync code + run gummy task
- `parallel_gummy_tasks()` - Execute multiple tasks simultaneously
- `monitor_all_specialists()` - Aggregate specialist status across network
**Usage**:
```python
from scripts.orchestrate_gummy import select_optimal_host, sync_and_execute_gummy
# Find best host
optimal = select_optimal_host(task_type="database")
# Returns: {'host': 'node-1', 'score': 0.23, 'cpu': 15, 'mem': 45, 'load': 0.23}
# Execute on optimal host
result = sync_and_execute_gummy(
    host=optimal['host'],
    task="optimize user queries",
    project_dir="/path/to/project"
)
```
### d_wrapper.py
Wrapper for `dw` command operations.
**Functions**:
- `get_load_metrics()` - Execute `dw load` and parse results
- `get_host_status()` - Execute `dw status` and parse availability
- `sync_directory()` - Execute `dw sync` to target host
- `run_remote_command()` - Execute `dw run` on specific host
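Note that d_wrapper.py is referenced here but does not appear in this commit's file list. A minimal sketch of what `run_remote_command()` could look like, assuming the same shell-out pattern as `run_d_command()` in orchestrate_gummy.py:
```python
# Hypothetical body for d_wrapper.run_remote_command (the wrapper module is
# not shipped in this commit); mirrors run_d_command in orchestrate_gummy.py.
import subprocess
from typing import Tuple

def run_remote_command(host: str, command: str, timeout: int = 30) -> Tuple[str, int]:
    """Run `dw run <host> '<command>'` and return (stdout, returncode)."""
    result = subprocess.run(
        f"dw run {host} '{command}'",
        shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout, result.returncode
```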
## Workflows
### Workflow 1: Load-Balanced Task Distribution
**User Query**: "Run database optimization on optimal host"
**Agent Actions**:
1. Call `get_load_metrics()` to fetch cluster load
2. Call `select_optimal_host(task_type="database")` to choose best node
3. Call `sync_directory(host, project_path)` to sync codebase
4. Run `dw run <host> "cd project && gummy task 'optimize database queries'"`
5. Monitor execution via `gummy-watch`
6. Return results
### Workflow 2: Parallel Multi-Host Execution
**User Query**: "Run tests across all available nodes"
**Agent Actions**:
1. Call `get_host_status()` to get available hosts
2. Filter hosts by availability and capability
3. For each host:
   - Sync codebase: `sync_directory(host, project)`
   - Launch test: `dw run <host> "cd project && gummy task 'run test suite'" &` (see the Python sketch after this list)
4. Collect all background job PIDs
5. Wait for completion
6. Aggregate results from all hosts
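From Python, the same fan-out can go through `parallel_gummy_tasks()` in orchestrate_gummy.py instead of shell background jobs. A sketch with hypothetical host names and project path:
```python
from scripts.orchestrate_gummy import parallel_gummy_tasks

# Hypothetical hosts and project path; each entry is synced, then launched.
results = parallel_gummy_tasks([
    {'host': 'node-1', 'task': 'run unit tests', 'project': '/home/me/project'},
    {'host': 'node-2', 'task': 'run integration tests', 'project': '/home/me/project'},
])
for r in results:
    print(f"{r['host']}: synced={r['synced']} launched={r['launched']}")
```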
### Workflow 3: Network-Wide Specialist Monitoring
**User Query**: "Show all running specialists across my network"
**Agent Actions**:
1. Get active hosts from `dw status`
2. For each host:
   - Check for gummy-agent: `dw run <host> "command -v gummy"`
   - If present, get specialists: `dw run <host> "ls -la ~/.gummy/specialists"`
3. Parse specialist metadata from each host
4. Create aggregated dashboard showing:
   - Host name
   - Active specialists
   - Session states (active/dormant)
   - Resource usage
5. Display unified network view
### Workflow 4: Intelligent Work Distribution
**User Query**: "I have database work and API work - distribute optimally"
**Agent Actions**:
1. Analyze tasks:
   - Task 1: database optimization (CPU-intensive)
   - Task 2: API development (I/O-intensive)
2. Get load metrics for all hosts
3. Select hosts using different criteria (sketched below):
   - Database work → host with lowest CPU load
   - API work → host with lowest I/O wait
4. Sync to both hosts
5. Launch tasks in parallel:
```bash
dw run <cpu-host> "gummy task 'optimize database queries'" &
dw run <io-host> "gummy task 'build REST API endpoints'" &
```
6. Monitor both executions
7. Report when complete
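A sketch of step 3's per-task-type selection. Note that the metrics parsed from `dw load` in orchestrate_gummy.py expose only cpu, mem, load, and score; the `io` key below is an assumption for illustration:
```python
# Pick the host with the lowest value for a given metric key.
# The 'io' metric is hypothetical; dw load as parsed here has no I/O column.
def pick_host_by(metrics: dict, key: str) -> str:
    return min(metrics, key=lambda host: metrics[host][key])

metrics = {
    'node-1': {'cpu': 15, 'io': 45},
    'node-2': {'cpu': 65, 'io': 12},
}
cpu_host = pick_host_by(metrics, 'cpu')  # node-1 -> database work
io_host = pick_host_by(metrics, 'io')    # node-2 -> API work
```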
## Error Handling
### Connection Issues
```python
try:
    result = run_remote_command(host, command)
except SSHConnectionError:
    # Retry with a different host
    fallback = select_optimal_host(exclude=[failed_host])
    result = run_remote_command(fallback['host'], command)
```
### Sync Failures
```python
if not sync_successful:
    # Fall back to local execution
    return execute_local_gummy(task)
```
### Load Metric Unavailable
```python
if not load_data:
    # Use round-robin distribution
    return round_robin_host_selection()
```
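`round_robin_host_selection()` is referenced but not defined in the shipped script; one minimal way to implement it (the signature is an assumption):
```python
import itertools

# Hypothetical fallback: rotate through the online hosts when load data is
# unavailable. The cycle persists across calls within one process.
_host_cycle = None

def round_robin_host_selection(hosts: list) -> str:
    global _host_cycle
    if _host_cycle is None:
        _host_cycle = itertools.cycle(hosts)
    return next(_host_cycle)
```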
## Performance & Caching
**Load Metrics Caching**:
- Cache TTL: 30 seconds (load changes fast)
- Cache location: `/tmp/d-load-cache.json`
- Invalidate on manual request
**Host Availability**:
- Cache TTL: 60 seconds
- Cache location: `/tmp/d-status-cache.json`
**Specialist State**:
- Cache TTL: 5 seconds (near real-time)
- Only cached during active monitoring
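The shipped script does not implement this caching yet; a minimal sketch of a file-backed TTL cache matching the documented locations and TTLs (the on-disk JSON shape is an assumption):
```python
import json
import os
import time

# File-backed TTL cache; {'ts': epoch_seconds, 'data': ...} is an assumed shape.
def cached(path: str, ttl: float, fetch):
    if os.path.exists(path):
        with open(path) as f:
            blob = json.load(f)
        if time.time() - blob['ts'] < ttl:
            return blob['data']
    data = fetch()
    with open(path, 'w') as f:
        json.dump({'ts': time.time(), 'data': data}, f)
    return data

# Usage: load metrics with the documented 30 s TTL
# load = cached('/tmp/d-load-cache.json', 30, get_load_metrics)
```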
## Keywords for Auto-Detection
This skill activates when user mentions:
**Distributed Operations**:
- distributed, cluster, network, multi-host, across hosts, all nodes, all machines
- remote, ssh, tailscale
**Gummy Integration**:
- gummy, specialist, agent, haiku, task
- execute gummy, run gummy, gummy task
**Load Balancing**:
- least loaded, best host, optimal node, load balance, best machine
- which host should, where should I run
**Coordination**:
- sync and run, parallel, coordinate, orchestrate, distribute
- run on, execute on, send to
**Monitoring**:
- all specialists, network status, cluster status, show specialists
- what's running, active tasks
**Examples That Activate**:
- "Run database optimization on least loaded host"
- "Execute API specialist on optimal node"
- "Distribute tests across all machines"
- "Show all specialists across network"
- "Sync codebase andw run gummy on node-2"
- "Which host should handle this database work?"
- "Run parallel gummy tasks across cluster"
**Does NOT Activate** (local operations):
- "Run gummy task locally"
- "Execute specialist here"
- "/gummy-task add logging"
## Usage Examples
### Example 1: Simple Load-Balanced Execution
```
👤: "Run database optimization on the best available host"
🤖: I'll find the optimal host and execute the task there.
[Executes: dw load]
[Analysis: node-1 has lowest composite score (0.18)]
[Executes: dw sync node-1]
[Executes: dw run node-1 "cd ~/project && gummy task 'optimize database queries'"]
✅ Task launched on node-1 (CPU: 12%, Mem: 35%, Load: 0.18)
Monitor with: ssh node-1 'gummy-watch'
```
### Example 2: Parallel Multi-Platform Testing
```
👤: "Run tests across all available nodes"
🤖: I'll distribute test execution across your network.
[Executes: dw status]
[Found: 4 available hosts]
[Syncing codebase to all hosts...]
✓ node-1 synced
✓ node-2 synced
✓ node-3 synced
✓ node-4 synced
[Launching parallel test execution...]
node-1: Running unit tests
node-2: Running integration tests
node-3: Running e2e tests
node-4: Running performance tests
[All tests complete in 8 minutes (vs 32 minutes sequential)]
Results:
✅ unit: 1,247 passed
✅ integration: 89 passed
✅ e2e: 34 passed
⚠️ performance: 3 slow queries detected
```
### Example 3: Intelligent Task Distribution
```
👤: "I need to optimize database queries and build new API endpoints"
🤖: I'll distribute these optimally based on task characteristics.
[Analyzing tasks...]
- Database optimization: CPU-intensive
- API development: I/O-bound
[Fetching cluster load...]
node-1: CPU 15%, I/O 45% → Best for database work
node-2: CPU 65%, I/O 12% → Best for API work
[Distributing tasks...]
✓ Database optimization → node-1
✓ API development → node-2
[Syncing and launching...]
node-1: database-expert specialist active
node-2: api-developer specialist active
Both tasks running in parallel. Expected completion: 15 minutes.
Monitor:
ssh node-1 'gummy-watch'
ssh node-2 'gummy-watch'
```
### Example 4: Network-Wide Specialist Status
```
👤: "Show all specialists running across my network"
🤖: Scanning network for active specialists...
[Checking all hosts...]
╔═══════════════════════════════════════════════════════╗
║ DISTRIBUTED SPECIALIST DASHBOARD ║
╚═══════════════════════════════════════════════════════╝
┌─ node-1 ──────────────────────────────────────────────┐
│ ⚡ database-expert Sessions: 8 Cost: $0.02 │
│ 💤 testing-specialist Sessions: 3 Cost: $0.01 │
└───────────────────────────────────────────────────────┘
┌─ node-2 ──────────────────────────────────────────────┐
│ ⚡ api-developer Sessions: 5 Cost: $0.01 │
│ ⚡ frontend-specialist Sessions: 12 Cost: $0.03 │
└───────────────────────────────────────────────────────┘
┌─ node-3 ──────────────────────────────────────────────┐
│ 💤 security-specialist Sessions: 2 Cost: $0.01 │
└───────────────────────────────────────────────────────┘
Total: 5 active specialists across 3 hosts
Total cost today: $0.08
```
### Example 5: Sync-Before-Execute Workflow
```
👤: "Sync my codebase to node-3 andw run frontend specialist there"
🤖: I'll sync and execute on node-3.
[Current directory: ~/projects/myapp]
[Syncing to node-3...]
Sending: src/ components/ lib/ package.json
Excluding: node_modules .git dist
✓ Synced 1,247 files (45 MB) in 3 seconds
[Launching frontend specialist on node-3...]
✓ specialist created (session: abc123)
⚡ Working on task...
Task active on node-3. Monitor: ssh node-3 'gummy-watch'
```
## Configuration
### Required Tools
- ✅ `dw` command (distributed CLI)
- ✅ `gummy-agent` installed on remote hosts
- ✅ Tailscale network configured
- ✅ SSH access via ~/.ssh/config
- ✅ rsync on all hosts
### Optional Configuration
Create `~/.config/distributed-gummy/config.yaml`:
```yaml
# Load balancing weights
load_weights:
  cpu: 0.4
  memory: 0.3
  load_average: 0.3

# Sync exclusions
sync_exclude:
  - node_modules
  - .git
  - dist
  - build
  - .DS_Store
  - "*.log"

# Host preferences for task types
host_preferences:
  database:
    - node-1  # High CPU
  frontend:
    - node-2  # High memory
  testing:
    - node-3  # Dedicated test node
```
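The shipped script does not read this file yet; a sketch of how it could be loaded with fallbacks to the defaults above (assumes PyYAML, which the plugin does not declare):
```python
import os
import yaml  # assumption: PyYAML available on the orchestrating host

DEFAULTS = {
    'load_weights': {'cpu': 0.4, 'memory': 0.3, 'load_average': 0.3},
    'sync_exclude': ['node_modules', '.git', 'dist', 'build', '.DS_Store', '*.log'],
    'host_preferences': {},
}

def load_config(path: str = '~/.config/distributed-gummy/config.yaml') -> dict:
    path = os.path.expanduser(path)
    if not os.path.exists(path):
        return DEFAULTS
    with open(path) as f:
        user = yaml.safe_load(f) or {}
    return {**DEFAULTS, **user}  # shallow merge; user keys win
```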
## Troubleshooting
### "Host not responding"
```bash
# Check Tailscale connectivity
tailscale status
# Check SSH access
ssh <host> echo "OK"
# Verify dw command works
dw status
```
### "Sync failed"
```bash
# Manual sync test
dw sync <host>
# Check rsync
which rsync
# Check disk space on remote
dw run <host> "df -h"
```
### "Gummy not found on remote host"
```bash
# Check gummy installation
dw run <host> "which gummy"
# Install if needed
dw run <host> "brew install WillyV3/tap/gummy-agent"
```
## Limitations
- Requires gummy-agent installed on all target hosts
- Requires Tailscale network connectivity
- Load metrics only available from the `dw load` command
- Sync uses the current directory as its base (cd into the project first)
## Version
1.0.0 - Initial release

VERSION Normal file

@@ -0,0 +1 @@
1.0.0

plugin.lock.json Normal file

@@ -0,0 +1,53 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:Human-Frontier-Labs-Inc/human-frontier-labs-marketplace:plugins/distributed-gummy-orchestrator",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "910a567017ecfb513c0bc6b7263f0918ad3e1641",
    "treeHash": "9065296fcca9f5d374098e23ad7b552abe281d51267759c8ceeecbe73bbb896a",
    "generatedAt": "2025-11-28T10:11:41.578716Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "distributed-gummy-orchestrator",
    "description": "Orchestrate gummy-agents across distributed network using 'dw' command for load-balanced, multi-host AI development. Automates distributed Claude Code agents for parallel development workflows.",
    "version": null
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "8a7375731d6110f8fb3e2ab6bb1b9caad615e1743e2b74131511b5a0c0c7ff01"
      },
      {
        "path": "VERSION",
        "sha256": "59854984853104df5c353e2f681a15fc7924742f9a2e468c29af248dce45ce03"
      },
      {
        "path": "SKILL.md",
        "sha256": "0f8b2836bdca45d8555971242a9be6cc5e0085c43405140a8f1e0660c2eebe14"
      },
      {
        "path": "scripts/orchestrate_gummy.py",
        "sha256": "03c57162896dba8e47537a4bad8078ca6ac679522c2be35fefd5dd84c16689c9"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "f30b3b158de46ed0870983b0ac1b07c0bffef739e0c65ec7e8460d0da854353f"
      }
    ],
    "dirSha256": "9065296fcca9f5d374098e23ad7b552abe281d51267759c8ceeecbe73bbb896a"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}

scripts/orchestrate_gummy.py Normal file

@@ -0,0 +1,459 @@
#!/usr/bin/env python3
"""
Orchestrate gummy-agent tasks across distributed network using 'dw' command.
Handles load balancing, task distribution, and network-wide coordination.
"""
import subprocess
import json
import os
import re
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Optional, Tuple
from datetime import datetime


def run_d_command(cmd: str) -> Tuple[str, int]:
    """
    Execute 'dw' command and return output.

    Args:
        cmd: Full command string (e.g., "dw status")

    Returns:
        Tuple of (stdout, returncode)
    """
    try:
        result = subprocess.run(
            cmd,
            shell=True,
            capture_output=True,
            text=True,
            timeout=30
        )
        return result.stdout, result.returncode
    except subprocess.TimeoutExpired:
        return "", -1
    except Exception as e:
        return f"Error: {e}", -1


def get_load_metrics() -> Dict:
    """
    Get load metrics from 'dw load' command.

    Returns:
        Dict mapping host -> metrics:
        {
            'host1': {'cpu': 15, 'mem': 45, 'load': 0.23, 'score': 0.28},
            'host2': {...},
        }
    """
    output, code = run_d_command("dw load")
    if code != 0:
        return {}

    metrics = {}
    # Parse dw load output. Expected format:
    #   host1  CPU: 15%  MEM: 45%  LOAD: 0.23  SCORE: 0.28
    for line in output.strip().split('\n'):
        if not line.strip():
            continue
        # Extract host and metrics
        match = re.match(
            r'(\S+)\s+CPU:\s*(\d+)%\s+MEM:\s*(\d+)%\s+LOAD:\s*([\d.]+)\s+SCORE:\s*([\d.]+)',
            line
        )
        if match:
            host, cpu, mem, load, score = match.groups()
            metrics[host] = {
                'cpu': int(cpu),
                'mem': int(mem),
                'load': float(load),
                'score': float(score)
            }
    return metrics


def get_host_status() -> Dict:
    """
    Get host availability from 'dw status' command.

    Returns:
        Dict mapping host -> status:
        {
            'host1': {'status': 'online', 'ip': '100.1.2.3'},
            'host2': {'status': 'offline'},
        }
    """
    output, code = run_d_command("dw status")
    if code != 0:
        return {}

    status = {}
    # Parse dw status output. Expected format:
    #   host1  online   100.1.2.3
    #   host2  offline  -
    for line in output.strip().split('\n'):
        if not line.strip():
            continue
        parts = line.split()
        if len(parts) >= 2:
            host = parts[0]
            state = parts[1]
            ip = parts[2] if len(parts) > 2 else None
            status[host] = {
                'status': state,
                'ip': ip
            }
    return status


def select_optimal_host(
    task_type: Optional[str] = None,
    exclude: Optional[List[str]] = None
) -> Optional[Dict]:
    """
    Select the best host for a task based on load metrics.

    Args:
        task_type: Type of task ('database', 'api', 'frontend', etc.)
        exclude: List of hosts to exclude from selection

    Returns:
        Dict with selected host info:
        {
            'host': 'node-1',
            'score': 0.23,
            'cpu': 15,
            'mem': 45,
            'load': 0.23
        }

    Example:
        >>> optimal = select_optimal_host(task_type="database")
        >>> print(f"Selected: {optimal['host']}")
        Selected: node-1
    """
    # NOTE: task_type is accepted for future host_preferences routing but is
    # not yet used in the selection below.
    exclude = exclude or []

    # Get current load metrics and host availability
    load_data = get_load_metrics()
    status_data = get_host_status()

    # Filter to online hosts not in exclude list
    available_hosts = [
        host for host, info in status_data.items()
        if info['status'] == 'online' and host not in exclude
    ]
    if not available_hosts:
        return None

    # Filter load data to available hosts
    available_load = {
        host: metrics
        for host, metrics in load_data.items()
        if host in available_hosts
    }
    if not available_load:
        # No load data, pick first available
        host = available_hosts[0]
        return {
            'host': host,
            'score': 0.5,  # Unknown
            'cpu': None,
            'mem': None,
            'load': None
        }

    # Select host with lowest composite score (least loaded)
    best_host = min(available_load, key=lambda h: available_load[h]['score'])
    best_metrics = available_load[best_host]
    return {
        'host': best_host,
        'score': best_metrics['score'],
        'cpu': best_metrics['cpu'],
        'mem': best_metrics['mem'],
        'load': best_metrics['load']
    }


def sync_codebase(host: str, local_path: str, remote_path: Optional[str] = None) -> bool:
    """
    Sync codebase to remote host using 'dw sync'.

    Args:
        host: Target host name
        local_path: Local directory to sync
        remote_path: Remote path (defaults to same as local)

    Returns:
        True if sync successful
    """
    if remote_path is None:
        remote_path = local_path

    # 'dw sync' uses the current directory as its base. Run the cd inside the
    # shell command (rather than os.chdir) so this stays safe when called from
    # the worker threads in parallel_gummy_tasks.
    output, code = run_d_command(f"cd {local_path} && dw sync {host}")
    return code == 0


def execute_remote_gummy(
    host: str,
    task: str,
    project_path: str,
    sync_first: bool = True
) -> Dict:
    """
    Execute gummy task on remote host.

    Args:
        host: Target host name
        task: Gummy task description
        project_path: Path to project directory
        sync_first: Whether to sync codebase before executing

    Returns:
        Dict with execution info:
        {
            'host': 'node-1',
            'task': 'optimize queries',
            'command': 'cd ... && gummy task ...',
            'synced': True,
            'launched': True
        }
    """
    result = {
        'host': host,
        'task': task,
        'synced': False,
        'launched': False
    }

    # Sync if requested
    if sync_first:
        result['synced'] = sync_codebase(host, project_path)
        if not result['synced']:
            return result

    # Build gummy command and execute it on the remote host
    gummy_cmd = f'cd {project_path} && gummy task "{task}"'
    full_cmd = f"dw run {host} '{gummy_cmd}'"
    result['command'] = full_cmd

    output, code = run_d_command(full_cmd)
    result['launched'] = (code == 0)
    result['output'] = output
    return result


def sync_and_execute_gummy(
    host: str,
    task: str,
    project_dir: str
) -> Dict:
    """
    Convenience function: sync codebase and execute gummy task.

    Args:
        host: Target host name
        task: Gummy task description
        project_dir: Project directory path

    Returns:
        Execution result dict
    """
    return execute_remote_gummy(host, task, project_dir, sync_first=True)


def parallel_gummy_tasks(tasks: List[Dict]) -> List[Dict]:
    """
    Execute multiple gummy tasks in parallel across hosts.

    Args:
        tasks: List of task dicts:
        [
            {'host': 'node-1', 'task': 'task1', 'project': '/path'},
            {'host': 'node-2', 'task': 'task2', 'project': '/path'},
        ]

    Returns:
        List of result dicts, in the same order as the input tasks
    """
    def run_one(task_info: Dict) -> Dict:
        return execute_remote_gummy(
            host=task_info['host'],
            task=task_info['task'],
            project_path=task_info['project'],
            sync_first=task_info.get('sync', True)
        )

    if not tasks:
        return []

    # Each task blocks on a 'dw run' subprocess, so threads give real
    # parallelism here despite the GIL.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(run_one, tasks))


def monitor_all_specialists() -> Dict:
    """
    Get status of all specialists across all hosts.

    Returns:
        Dict mapping host -> specialists:
        {
            'node-1': [
                {'name': 'database-expert', 'status': 'active', 'sessions': 8},
                {'name': 'api-developer', 'status': 'dormant', 'sessions': 3},
            ],
            'node-2': [...],
        }
    """
    status_data = get_host_status()
    online_hosts = [h for h, info in status_data.items() if info['status'] == 'online']

    all_specialists = {}
    for host in online_hosts:
        # Check if gummy is installed
        check_cmd = f'dw run {host} "command -v gummy"'
        output, code = run_d_command(check_cmd)
        if code != 0:
            continue

        # List specialists
        list_cmd = f'dw run {host} "ls -1 ~/.gummy/specialists 2>/dev/null || echo"'
        output, code = run_d_command(list_cmd)
        if code != 0 or not output.strip():
            continue

        specialists = []
        for spec_name in output.strip().split('\n'):
            if not spec_name:
                continue
            # Get specialist metadata
            meta_cmd = f'dw run {host} "cat ~/.gummy/specialists/{spec_name}/meta.yaml 2>/dev/null"'
            meta_output, meta_code = run_d_command(meta_cmd)
            if meta_code == 0:
                # Parse YAML (simple key: value format)
                spec_info = {'name': spec_name}
                for line in meta_output.split('\n'):
                    if ':' in line:
                        key, value = line.split(':', 1)
                        spec_info[key.strip()] = value.strip()
                specialists.append(spec_info)

        if specialists:
            all_specialists[host] = specialists
    return all_specialists


def comprehensive_distributed_report() -> Dict:
    """
    Generate comprehensive report of entire distributed network.

    Returns:
        Dict with complete network state:
        {
            'timestamp': '2025-10-19T22:00:00',
            'hosts': {...},
            'load': {...},
            'specialists': {...},
            'summary': 'Summary text'
        }
    """
    report = {
        'timestamp': datetime.now().isoformat(),
        'hosts': get_host_status(),
        'load': get_load_metrics(),
        'specialists': monitor_all_specialists()
    }

    # Generate summary
    online_count = sum(1 for h in report['hosts'].values() if h['status'] == 'online')
    total_count = len(report['hosts'])
    specialist_count = sum(len(specs) for specs in report['specialists'].values())
    report['summary'] = (
        f"{online_count}/{total_count} hosts online, "
        f"{specialist_count} active specialists"
    )
    return report


def main():
    """Test orchestration functions."""
    print("=" * 70)
    print("DISTRIBUTED GUMMY ORCHESTRATOR - TEST")
    print("=" * 70)

    # Test 1: Load metrics
    print("\n✓ Testing load metrics...")
    load = get_load_metrics()
    for host, metrics in load.items():
        print(f"  {host}: CPU {metrics['cpu']}%, Score {metrics['score']}")

    # Test 2: Host status
    print("\n✓ Testing host status...")
    status = get_host_status()
    for host, info in status.items():
        print(f"  {host}: {info['status']}")

    # Test 3: Optimal host selection
    print("\n✓ Testing optimal host selection...")
    optimal = select_optimal_host()
    if optimal:
        print(f"  Best host: {optimal['host']} (score: {optimal['score']})")
    else:
        print("  No hosts available")

    # Test 4: Specialist monitoring
    print("\n✓ Testing specialist monitoring...")
    specialists = monitor_all_specialists()
    for host, specs in specialists.items():
        print(f"  {host}: {len(specs)} specialists")
        for spec in specs:
            print(f"    - {spec['name']}")

    # Test 5: Comprehensive report
    print("\n✓ Testing comprehensive report...")
    report = comprehensive_distributed_report()
    print(f"  {report['summary']}")

    print("\n" + "=" * 70)
    print("✅ ALL TESTS COMPLETE")
    print("=" * 70)


if __name__ == "__main__":
    main()