Initial commit

Zhongwei Li
2025-11-30 08:43:45 +08:00
commit 9b1040bcac
5 changed files with 1656 additions and 0 deletions


@@ -0,0 +1,13 @@
{
"name": "wolf-automation",
"description": "Wolf Agent automation scripts for core operations and agent coordination",
"version": "0.0.0-2025.11.28",
"author": {
"name": "Jeremy Miranda",
"email": "jeremy@nicewolfstudio.com"
},
"skills": [
"./skills/wolf-scripts-core",
"./skills/wolf-scripts-agents"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# wolf-automation
Wolf Agent automation scripts for core operations and agent coordination

plugin.lock.json Normal file

@@ -0,0 +1,48 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:Nice-Wolf-Studio/wolf-skills-marketplace:wolf-automation",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "4b1807eb6f870b4e4ef3ab7ec6418542fcf4c46d",
"treeHash": "820d6f25670c61643649008a9be8daef603c46be421d82a55600a14a56c33bc8",
"generatedAt": "2025-11-28T10:12:13.308444Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "wolf-automation",
"description": "Wolf Agent automation scripts for core operations and agent coordination"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "dc3d34b14176de398114627f71a13690959de021160620739c422e4c3c09fe10"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "58edeccc9b2d44eaae2a1674d35e9a9649c969478c2bce3a0916f843796f56c1"
},
{
"path": "skills/wolf-scripts-core/SKILL.md",
"sha256": "d4a71d546eeb77f2b3180b559b9d513c1c398b59cd868802e14da1f914b0a987"
},
{
"path": "skills/wolf-scripts-agents/SKILL.md",
"sha256": "7a553183040a3eb3be2bb7bf94943504726a1a1be38e24384430c237cbf1565f"
}
],
"dirSha256": "820d6f25670c61643649008a9be8daef603c46be421d82a55600a14a56c33bc8"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}


@@ -0,0 +1,942 @@
---
name: wolf-scripts-agents
description: Agent coordination, orchestration, and multi-agent workflow management scripts
version: 1.1.0
category: agent-coordination
triggers:
- agent orchestration
- workflow coordination
- multi-agent
- agent execution
- agent validation
dependencies:
- wolf-roles
- wolf-archetypes
- wolf-governance
size: large
---
# Wolf Scripts - Agent Coordination
Agent coordination patterns that power Wolf's multi-agent orchestration system. These scripts manage agent lifecycles, workflow handoffs, and cross-agent collaboration.
## Overview
This skill captures **agent coordination and orchestration patterns**:
1. **Agent Executor** - Unified interface for invoking agent binaries
2. **Workflow Orchestrator** - Multi-phase, multi-agent workflow management
3. **Agent Change Validator** - Enforces agent file scope boundaries
4. **Mailbox System** - Async inter-agent communication
5. **Work Claimer** - Agent work assignment and claim management
## 🤖 Agent Executor Pattern
### Purpose
Provides a unified interface for invoking real agent binaries (codex, claude-code, custom) with structured output parsing and timeout management.
### Features
- **Multi-backend Support**: codex, claude-code, custom binaries
- **Non-interactive Execution**: `--cwd` and `--prompt-file` for automation
- **Structured Signal Parsing**: `AGENT_RESPONSE_ID`, `AGENT_CREATED_ISSUE_URL`, etc.
- **Timeout Management**: Configurable execution timeouts
- **Error Handling**: Robust error capture and reporting
### Configuration
```javascript
const executor = new AgentExecutor({
agentBinary: 'codex', // or 'claude-code', '/path/to/custom'
timeout: 300000, // 5 minutes default
captureOutput: true, // Capture stdout/stderr
parseSignals: true, // Parse structured signals
env: { ... }, // Custom environment variables
executionDir: '/path/to/repo', // Working directory
verbose: false, // Logging verbosity
logPrefix: '[agent-executor]' // Log message prefix
});
```
### Execution Parameters
```javascript
const result = await executor.executeAgent({
cwd: '/path/to/repo', // Working directory for agent
promptFile: '/path/to/prompt.md', // Prompt file path
command: 'run', // Agent command (run, create-issue, etc.)
args: { ... }, // Additional command arguments
id: 'unique-request-id' // Request ID for tracking
});
```
### Structured Signals
Agent output is parsed for special signals:
- `AGENT_RESPONSE_ID`: Unique response identifier
- `AGENT_CREATED_ISSUE_URL`: URL of created issue
- `AGENT_CREATED_PR_URL`: URL of created PR
- `AGENT_STATUS`: Execution status (success, failure, partial)
- `AGENT_NEXT_ACTION`: Suggested next step
- `AGENT_HANDOFF_TO`: Next agent in workflow
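The exact parsing logic lives in `agent-executor.mjs`; a minimal sketch, assuming each signal is emitted on its own line as `KEY: value` (an assumption, not the confirmed output format), might look like this:
```javascript
// Sketch: map raw signal keys to the fields returned in result.signals.
const SIGNAL_KEYS = {
  AGENT_RESPONSE_ID: 'responseId',
  AGENT_CREATED_ISSUE_URL: 'createdIssueUrl',
  AGENT_CREATED_PR_URL: 'createdPrUrl',
  AGENT_STATUS: 'status',
  AGENT_NEXT_ACTION: 'nextAction',
  AGENT_HANDOFF_TO: 'handoffTo'
};
function parseSignals(stdout) {
  const signals = {};
  for (const line of stdout.split('\n')) {
    const match = line.match(/^([A-Z_]+):\s*(.+)$/);
    if (match && SIGNAL_KEYS[match[1]]) {
      signals[SIGNAL_KEYS[match[1]]] = match[2].trim();
    }
  }
  return signals;
}
```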
### Return Value
```javascript
{
success: true,
stdout: '...',
stderr: '...',
exitCode: 0,
signals: {
responseId: '...',
createdIssueUrl: '...',
status: 'success',
nextAction: '...',
handoffTo: '...'
},
executionTime: 45.2, // seconds
timeout: false
}
```
### Usage Pattern
```javascript
import { AgentExecutor } from './agent-executor.mjs';
// Initialize executor
const executor = new AgentExecutor({
agentBinary: 'claude-code',
timeout: 600000 // 10 minutes
});
// Execute agent with prompt
const result = await executor.executeAgent({
cwd: '/workspace/project',
promptFile: '/tmp/prompt.md',
command: 'run',
id: 'workflow-123-phase-1'
});
// Check for success
if (result.success) {
console.log(`Agent completed successfully`);
console.log(`Response ID: ${result.signals.responseId}`);
// Handle handoff if specified
if (result.signals.handoffTo) {
console.log(`Handing off to: ${result.signals.handoffTo}`);
}
} else {
console.error(`Agent failed: ${result.stderr}`);
}
```
### When to Use
- Automating agent invocation from workflows
- Building multi-agent pipelines
- Testing agent behavior programmatically
- Capturing structured agent outputs
**Script Location**: `/agents/shared/scripts/agent-executor.mjs`
---
## 🎯 Workflow Orchestrator Pattern
### Purpose
Coordinates multi-phase, multi-agent workflows with automatic handoffs, lens integration, and state management.
### Available Workflows
```javascript
const WORKFLOWS = {
'issue-to-release': {
name: 'Issue to Release Pipeline',
description: 'Complete journey from issue creation to deployment',
phases: [
{ agent: 'intake-agent', action: 'triage' },
{ agent: 'pm-agent', action: 'curate' },
{ agent: 'coder-agent', action: 'implement' },
{ agent: 'reviewer-agent', action: 'review' },
{ agent: 'qa-agent', action: 'test' },
{ agent: 'release-agent', action: 'deploy' }
]
},
'pr-review-cycle': {
name: 'PR Review Cycle',
description: 'Automated PR review and feedback loop',
phases: [
{ agent: 'reviewer-agent', action: 'initial-review' },
{ agent: 'coder-agent', action: 'address-feedback' },
{ agent: 'reviewer-agent', action: 'final-review' }
]
},
'ci-failure-recovery': {
name: 'CI Failure Recovery',
description: 'Automatic diagnosis and fix of CI failures',
phases: [
{ agent: 'devops-agent', action: 'diagnose' },
{ agent: 'coder-agent', action: 'fix' },
{ agent: 'qa-agent', action: 'verify' }
]
}
};
```
### WorkflowOrchestrator Class
```javascript
class WorkflowOrchestrator {
constructor() {
this.currentWorkflow = null;
this.currentPhase = 0;
this.workflowState = {};
}
// List all available workflows
async listWorkflows() {
// Display workflow catalog
}
// Start a workflow
async startWorkflow(workflowName, options = {}) {
// Initialize workflow state
// Integrate lens selection if issue provided
// Execute workflow phases
}
// Execute workflow phases sequentially
async executeWorkflow() {
// For each phase:
// 1. Prepare agent context
// 2. Execute agent action
// 3. Capture results
// 4. Determine handoff
// 5. Update state
}
// Execute a single phase
async executePhase(phase, context) {
// Run agent action with context
// Parse output signals
// Return phase result
}
// Save workflow state for resume
saveWorkflowState() {
// Persist state to disk/database
}
// Resume incomplete workflow
async resumeWorkflow(workflowId) {
// Load state
// Continue from last phase
}
}
```
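`saveWorkflowState` and `resumeWorkflow` are left as comments in the skeleton above. A minimal sketch of file-based persistence, assuming a hypothetical `agents/shared/state/` directory (the real orchestrator may persist state differently):
```javascript
import fs from 'node:fs';
import path from 'node:path';
const STATE_DIR = 'agents/shared/state'; // hypothetical location
function saveWorkflowState(workflowId, state) {
  fs.mkdirSync(STATE_DIR, { recursive: true });
  const file = path.join(STATE_DIR, `${workflowId}.json`);
  const tmp = `${file}.tmp`;
  fs.writeFileSync(tmp, JSON.stringify(state, null, 2));
  fs.renameSync(tmp, file); // write-then-rename so a crash never leaves a half-written state file
}
function loadWorkflowState(workflowId) {
  const file = path.join(STATE_DIR, `${workflowId}.json`);
  return fs.existsSync(file) ? JSON.parse(fs.readFileSync(file, 'utf8')) : null;
}
```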
### Lens Integration
Workflows automatically integrate with the lens system:
```javascript
// Lens selection happens at workflow start
const lensIntegration = await integrateLensWithOrchestration(
workflowName,
issueNumber,
repository
);
// Lenses modify workflow phases
this.currentWorkflowDefinition = lensIntegration.applyToWorkflow(baseWorkflow);
// Example modifications:
// - Performance lens adds benchmark phase
// - Security lens adds security review phase
// - Accessibility lens adds a11y validation phase
```
### Workflow State
```javascript
{
workflowName: 'issue-to-release',
startTime: '2025-10-20T12:00:00Z',
currentPhase: 3,
phases: [
{
phase: 0,
agent: 'intake-agent',
action: 'triage',
startTime: '2025-10-20T12:00:00Z',
endTime: '2025-10-20T12:02:30Z',
status: 'completed',
output: { ... },
signals: { ... }
},
{ ... } // More phases
],
lenses: ['performance', 'security'],
metadata: { issueNumber: 123, prNumber: 456 }
}
```
### Command Line Usage
```bash
# List all workflows
node orchestrate-workflow.mjs --list-workflows
# Start issue-to-release workflow
node orchestrate-workflow.mjs --workflow=issue-to-release --issue=123
# Start PR review cycle
node orchestrate-workflow.mjs --workflow=pr-review-cycle --pr=456
# Dry run mode
DRY_RUN=true node orchestrate-workflow.mjs --workflow=ci-failure-recovery --issue=78
```
### When to Use
- Multi-agent collaboration required
- Sequential phase execution needed
- Automatic handoffs between agents
- State persistence for long-running workflows
- Lens-aware workflow modification
**Script Location**: `/agents/shared/scripts/orchestrate-workflow.mjs`
---
## 🛡️ Agent Change Validator Pattern
### Purpose
Enforces agent file scope boundaries to prevent cross-cutting changes that could interfere with other agents.
### Role File Patterns
Each agent role has explicitly defined allowed file patterns:
```javascript
const ROLE_FILE_PATTERNS = {
'pm-agent': {
allowed: [
'agents/roles/pm-agent/**/*',
'agents/shared/templates/pm-*',
'.github/ISSUE_TEMPLATE/**/*',
'docs/**/*.md',
'README.md',
'CONTRIBUTING.md'
],
description: 'PM templates, documentation, issue templates'
},
'coder-agent': {
allowed: [
'src/**/*',
'lib/**/*',
'components/**/*',
'test/**/*',
'*.test.*',
'*.spec.*',
'package.json',
'tsconfig.json',
'*.config.js'
],
description: 'Source code, tests, build configs, dependencies'
},
'qa-agent': {
allowed: [
'test/**/*',
'e2e/**/*',
'*.test.*',
'*.spec.*',
'.github/workflows/*test*',
'playwright.config.*',
'jest.config.*'
],
description: 'Test files, CI test workflows, testing configs'
},
'reviewer-agent': {
allowed: [
'agents/roles/reviewer-agent/**/*',
'.eslintrc*',
'.prettierrc*',
'.github/workflows/*review*',
'.github/workflows/*lint*',
'.gitignore'
],
description: 'Code standards, linting configs, review templates'
}
// ... more agent patterns
};
```
### Validation Algorithm
```javascript
import { minimatch } from 'minimatch'; // named export in current minimatch versions
function validateAgentChanges(agentRole, changedFiles) {
const rolePatterns = ROLE_FILE_PATTERNS[agentRole];
if (!rolePatterns) {
throw new Error(`Unknown agent role: ${agentRole}`);
}
const violations = [];
for (const file of changedFiles) {
const isAllowed = rolePatterns.allowed.some(pattern =>
minimatch(file, pattern)
);
if (!isAllowed) {
violations.push({
file,
agent: agentRole,
reason: `File outside agent scope. Allowed patterns: ${rolePatterns.description}`
});
}
}
return {
valid: violations.length === 0,
violations,
totalFiles: changedFiles.length,
allowedFiles: changedFiles.length - violations.length
};
}
```
### Command Line Usage
```bash
# Validate from file list
node validate-agent-changes.mjs --agent=pm-agent --files=changed-files.txt
# Validate from git diff
node validate-agent-changes.mjs --agent=coder-agent --git-diff
# Validate specific PR
node validate-agent-changes.mjs --agent=qa-agent --pr=456
# Check without failing (report only)
node validate-agent-changes.mjs --agent=reviewer-agent --git-diff --no-fail
```
### Integration with CI
```yaml
# .github/workflows/agent-scope-validation.yml
- name: Validate Agent File Scope
run: |
AGENT=$(gh pr view ${{ github.event.pull_request.number }} --json labels --jq '.labels[].name' | grep 'agent:' | sed 's/agent://')
node agents/shared/scripts/validate-agent-changes.mjs --agent=$AGENT --git-diff
```
### When to Use
- PR validation gates
- Pre-commit hooks for agent work
- Preventing scope creep
- Enforcing separation of concerns
- CI/CD validation
**Script Location**: `/agents/shared/scripts/validate-agent-changes.mjs`
---
## 📬 Mailbox System Pattern
### Purpose
Asynchronous inter-agent communication using file-based mailboxes.
### Mailbox Structure
```
agents/shared/mailbox/
├── intake/ # intake-agent mailbox
│ ├── inbox/
│ ├── outbox/
│ └── archive/
├── pm/ # pm-agent mailbox
│ ├── inbox/
│ ├── outbox/
│ └── archive/
└── coder/ # coder-agent mailbox
├── inbox/
├── outbox/
└── archive/
```
### Message Format
```json
{
"id": "msg-2025-10-20-001",
"from": "pm-agent",
"to": "coder-agent",
"subject": "Implementation request for issue #123",
"timestamp": "2025-10-20T12:00:00Z",
"priority": "normal",
"payload": {
"issueNumber": 123,
"archetype": "product-implementer",
"acceptanceCriteria": [ ... ],
"technicalContext": { ... }
},
"metadata": {
"workflowId": "workflow-456",
"phaseNumber": 2
}
}
```
### Mailbox API
```javascript
class MailboxClient {
constructor(agentName) {
this.agentName = agentName;
this.inboxPath = `agents/shared/mailbox/${agentName}/inbox`;
this.outboxPath = `agents/shared/mailbox/${agentName}/outbox`;
}
// Send message to another agent
async sendMessage(toAgent, subject, payload) {
const message = {
id: this.generateMessageId(),
from: this.agentName,
to: toAgent,
subject,
timestamp: new Date().toISOString(),
payload
};
// Write to outbox and recipient's inbox
await this.writeMessage(this.outboxPath, message);
await this.writeMessage(`agents/shared/mailbox/${toAgent}/inbox`, message);
return message.id;
}
// Check inbox for new messages
async checkInbox() {
const messages = await this.readMessages(this.inboxPath);
return messages.filter(msg => !msg.read);
}
// Mark message as read
async markAsRead(messageId) {
// Update message metadata
}
// Archive message
async archiveMessage(messageId) {
// Move to archive folder
}
}
```
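The class relies on `generateMessageId`, `writeMessage`, and `readMessages`, which are not shown. A minimal file-based sketch of those helpers, assuming one JSON file per message named after its id (the real implementation may differ):
```javascript
import fs from 'node:fs/promises';
import path from 'node:path';
function generateMessageId() {
  // e.g. msg-2025-10-20-1729425600000
  return `msg-${new Date().toISOString().slice(0, 10)}-${Date.now()}`;
}
async function writeMessage(dir, message) {
  await fs.mkdir(dir, { recursive: true });
  await fs.writeFile(path.join(dir, `${message.id}.json`), JSON.stringify(message, null, 2));
}
async function readMessages(dir) {
  await fs.mkdir(dir, { recursive: true });
  const files = (await fs.readdir(dir)).filter(f => f.endsWith('.json'));
  return Promise.all(files.map(async f => JSON.parse(await fs.readFile(path.join(dir, f), 'utf8'))));
}
```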
### Usage Pattern
```javascript
// PM agent sends work to coder agent
const pmMailbox = new MailboxClient('pm');
const messageId = await pmMailbox.sendMessage('coder', 'Implement feature #123', {
issueNumber: 123,
archetype: 'product-implementer',
acceptanceCriteria: [ ... ]
});
// Coder agent checks inbox
const coderMailbox = new MailboxClient('coder');
const newMessages = await coderMailbox.checkInbox();
for (const msg of newMessages) {
console.log(`New work from ${msg.from}: ${msg.subject}`);
// Process message
await processWork(msg.payload);
// Mark as read
await coderMailbox.markAsRead(msg.id);
// Send completion notification
await coderMailbox.sendMessage(msg.from, `Completed: ${msg.subject}`, {
prUrl: '...',
completionTime: '...'
});
}
```
### When to Use
- Async agent communication
- Work queue management
- Handoff tracking
- Audit trails
- Decoupled agent coordination
---
## Integration Patterns
### Complete Workflow Example
```javascript
// 1. Start orchestrated workflow
const orchestrator = new WorkflowOrchestrator();
await orchestrator.startWorkflow('issue-to-release', {
issue: 123,
repository: 'org/repo'
});
// 2. Workflow internally uses agent executor
const executor = new AgentExecutor({ agentBinary: 'claude-code' });
for (const phase of workflow.phases) {
// 3. Execute agent with validation
const changedFiles = await getChangedFiles();
const validation = validateAgentChanges(phase.agent, changedFiles);
if (!validation.valid) {
throw new Error(`Agent scope violation: ${JSON.stringify(validation.violations)}`);
}
// 4. Run agent
const result = await executor.executeAgent({
cwd: '/workspace',
promptFile: `/tmp/phase-${phase.number}-prompt.md`,
command: phase.action
});
// 5. Use mailbox for async communication if needed
if (phase.requiresHandoff) {
const mailbox = new MailboxClient(phase.agent);
await mailbox.sendMessage(phase.nextAgent, 'Work ready for review', {
prUrl: result.signals.createdPrUrl
});
}
// 6. Save state
orchestrator.saveWorkflowState();
}
```
---
## Related Skills
- **wolf-roles**: Agent role definitions and responsibilities
- **wolf-archetypes**: Behavioral profiles
- **wolf-workflows-ci**: GitHub Actions integration
- **wolf-governance**: Quality gates and policies
---
## File Locations
All agent coordination scripts are in `/agents/shared/scripts/`:
- `agent-executor.mjs` - Unified agent execution interface
- `orchestrate-workflow.mjs` - Multi-agent workflow orchestration
- `validate-agent-changes.mjs` - Agent scope enforcement
- `intake-agent-with-mailbox.mjs` - Mailbox-integrated intake agent
- `work-claimer.mjs` - Work assignment and claim management
---
## Best Practices
### Agent Execution
- ✅ Always set reasonable timeouts
- ✅ Parse structured signals for automation
- ✅ Capture both stdout and stderr
- ✅ Use unique request IDs for tracking
- ❌ Don't run agents without timeout limits
- ❌ Don't ignore exit codes
### Workflow Orchestration
- ✅ Save state after each phase for resume capability
- ✅ Integrate lens modifications early
- ✅ Use dry-run mode for testing
- ✅ Log all phase transitions
- ❌ Don't skip error handling
- ❌ Don't hardcode workflow definitions
### Agent Scope Validation
- ✅ Validate file changes in CI
- ✅ Provide clear violation messages
- ✅ Keep role patterns up to date
- ✅ Document allowed patterns clearly
- ❌ Don't allow broad wildcards
- ❌ Don't skip validation for "small" changes
### Mailbox Communication
- ✅ Use structured message formats
- ✅ Archive read messages
- ✅ Set message priorities
- ✅ Include workflow context in metadata
- ❌ Don't leave inbox unprocessed
- ❌ Don't lose message delivery confirmation
---
## Red Flags - STOP
If you catch yourself thinking:
-**"I can coordinate agents manually without scripts"** - STOP. Manual coordination doesn't scale and loses state. Use workflow orchestrator for multi-agent work.
-**"Scripts are overkill for simple workflows"** - NO. Even "simple" multi-agent workflows have handoffs, state, and failures. Scripts provide resilience.
-**"Automation can wait until later"** - FORBIDDEN. "Later" means never. Set up orchestration from the start or face coordination chaos.
-**"Agent scope validation is too restrictive"** - Wrong. Scope boundaries prevent interference and maintain separation of concerns. Violating scope = breaking governance.
-**"Mailbox system is unnecessary overhead"** - False. Async communication enables decoupled agents and audit trails. Direct coordination breaks under load.
-**"One agent can do multiple roles to save time"** - FORBIDDEN. Role mixing violates governance. Use orchestrator to coordinate multiple agents properly.
**STOP. Use the appropriate coordination script BEFORE proceeding.**
## After Using This Skill
**REQUIRED NEXT STEPS:**
```
Integration with Wolf role-based system
```
1. **REQUIRED SKILL**: Use **wolf-roles** to understand agent boundaries
- **Why**: Coordination scripts enforce role boundaries. Must understand role definitions to use orchestration properly.
- **When**: Before using `orchestrate-workflow.mjs` or `validate-agent-changes.mjs`
- **Tool**: Use Skill tool to load wolf-roles
- **Example**: Before orchestrating pm-agent → coder-agent → reviewer-agent workflow, load each role's responsibilities
2. **RECOMMENDED SKILL**: Use **wolf-governance** for workflow quality gates
- **Why**: Workflows must enforce governance at each phase. Understanding gates ensures compliance.
- **When**: When designing custom workflows or modifying existing workflow definitions
- **Tool**: Use Skill tool to load wolf-governance
3. **DURING WORK**: Coordination scripts enable multi-agent collaboration
- Scripts orchestrate agent interactions throughout complex workflows
- Use scripts at handoff points and phase transitions
- Continuous state management for long-running workflows
### Verification Checklist
Before claiming multi-agent coordination complete:
- [ ] Used workflow orchestrator for multi-agent coordination (not manual coordination)
- [ ] Validated agent file scope boundaries before allowing changes (`validate-agent-changes.mjs`)
- [ ] Set up mailbox communication for async handoffs (if workflow requires it)
- [ ] Documented workflow state and phases (saved state after each phase)
- [ ] All agents stayed within their defined file scopes (no role boundary violations)
- [ ] Workflow resumable from any phase (state persistence working)
**Can't check all boxes? Coordination incomplete. Return to this skill.**
### Good/Bad Examples: Multi-Agent Coordination
#### Example 1: Proper Workflow Orchestration
<Good>
**Task**: Implement feature #123 through complete pipeline (intake → PM curation → implementation → review → QA → release)
**Script Execution**:
```bash
$ node orchestrate-workflow.mjs --workflow=issue-to-release --issue=123
Starting workflow: issue-to-release
Issue: #123 - Add user authentication
Lenses detected: security, observability
Phase 1: intake-agent (triage)
✅ Issue triaged successfully
✅ Labels applied: feature, security
✅ Archetype recommended: product-implementer
✅ State saved
Phase 2: pm-agent (curate)
✅ Acceptance criteria defined
✅ Incremental shards created (#123-1, #123-2)
✅ Technical context documented
✅ State saved
Phase 3: coder-agent (implement)
✅ Implementation complete
✅ PR created: #456
✅ Security lens requirements met (threat model, security tests)
✅ File scope validated: all changes in src/**/* (allowed for coder-agent)
✅ State saved
Phase 4: reviewer-agent (review)
✅ Code review complete
✅ Governance checks passed (DoD complete)
✅ Security lens validated
✅ Approved
✅ State saved
Phase 5: qa-agent (test)
✅ E2E tests passing
✅ Security scans clean
✅ Observability lens validated (metrics, monitoring)
✅ State saved
Phase 6: release-agent (deploy)
✅ Deployed to staging
✅ Smoke tests passing
✅ Production deployment complete
✅ Workflow complete
Total time: 2.5 hours
All phases completed successfully
```
**Why this is correct**:
- Used workflow orchestrator instead of manual coordination
- Each agent stayed within scope (validated at each phase)
- Lenses (security, observability) automatically integrated
- State saved after each phase (workflow resumable)
- Complete audit trail of all agent actions
- Handoffs automatic and documented
**Benefits**:
- No dropped work between agents
- No scope violations
- Governance enforced at each phase
- Resumable if any phase fails
- Complete traceability
</Good>
<Bad>
**Task**: Same feature implementation #123
**Manual Approach**: "I'll just coordinate manually"
**Actions**:
```
Developer: "Let me implement this feature myself across all phases"
1. Manual triage: Skipped (assumed feature type)
2. Manual PM work: Wrote quick acceptance criteria in memory
3. Implementation:
❌ Modified files in agents/roles/pm-agent/templates/ (scope violation - coder touching PM files)
❌ Modified .github/workflows/review.yml (scope violation - coder touching reviewer config)
❌ Modified test/ and src/ together (mixed coder + qa concerns)
4. Self-review: "Looks good to me" (governance violation - no separation)
5. Manual deploy: Pushed directly to main
Result:
❌ No archetype selection (wrong behavioral profile)
❌ No lens integration (security requirements missed)
❌ Scope violations broke PM templates (PM agent now fails)
❌ Scope violations broke review workflows (all PRs now fail review)
❌ No governance enforcement (DoD skipped)
❌ No state tracking (no resume capability)
❌ No audit trail (can't determine what broke)
❌ Self-approval violates governance
❌ Direct to main bypasses all gates
Outcome: 3 other agents broke, 2 days debugging, emergency rollback, team demoralized
```
**What Should Have Been Done**:
Used `orchestrate-workflow.mjs` which would have:
- ✅ Enforced agent scope boundaries
- ✅ Prevented PM template modifications by coder
- ✅ Prevented review workflow modifications by coder
- ✅ Required separate qa-agent for testing
- ✅ Required separate reviewer-agent for approval
- ✅ Integrated security lens automatically
- ✅ Saved state for resume
- ✅ Created audit trail
- ✅ Enforced governance at each phase
</Bad>
#### Example 2: Agent Scope Validation
<Good>
**PR #456** from coder-agent
**Files changed**:
```
src/auth/login.ts
src/auth/logout.ts
src/auth/session.ts
test/auth/login.test.ts
test/auth/session.test.ts
package.json
```
**Validation**:
```bash
$ node validate-agent-changes.mjs --agent=coder-agent --pr=456
Validating changes for coder-agent...
Allowed patterns: src/**/* lib/**/* components/**/* test/**/* *.test.* *.spec.* package.json tsconfig.json *.config.js
Checking files:
✅ src/auth/login.ts (matches: src/**/*)
✅ src/auth/logout.ts (matches: src/**/*)
✅ src/auth/session.ts (matches: src/**/*)
✅ test/auth/login.test.ts (matches: test/**/* AND *.test.*)
✅ test/auth/session.test.ts (matches: test/**/* AND *.test.*)
✅ package.json (matches: package.json)
Validation passed ✅
6/6 files within coder-agent scope
```
**Result**: PR allowed to proceed. No scope violations.
</Good>
<Bad>
**PR #457** from coder-agent
**Files changed**:
```
src/auth/login.ts
agents/roles/pm-agent/templates/feature-template.md
.github/workflows/review.yml
docs/GOVERNANCE.md
README.md
```
**Validation**:
```bash
$ node validate-agent-changes.mjs --agent=coder-agent --pr=457
Validating changes for coder-agent...
Allowed patterns: src/**/* lib/**/* components/**/* test/**/* *.test.* *.spec.* package.json tsconfig.json *.config.js
Checking files:
✅ src/auth/login.ts (matches: src/**/*)
❌ agents/roles/pm-agent/templates/feature-template.md
Violation: File outside coder-agent scope
This file belongs to: pm-agent
Reason: PM templates are PM agent's responsibility
❌ .github/workflows/review.yml
Violation: File outside coder-agent scope
This file belongs to: reviewer-agent
Reason: Review workflows are reviewer agent's responsibility
❌ docs/GOVERNANCE.md
Violation: File outside coder-agent scope
This file belongs to: pm-agent
Reason: Governance docs are PM agent's responsibility
❌ README.md
Violation: File outside coder-agent scope
This file belongs to: pm-agent (or documentation-agent)
Reason: Top-level docs are PM/docs agent's responsibility
Validation failed ❌
1/5 files within scope
4/5 files violate scope boundaries
BLOCKING: This PR cannot be merged until scope violations are resolved.
Recommended actions:
1. Remove changes to PM templates, review workflows, and docs
2. OR: Create separate PRs from appropriate agents:
- pm-agent PR for template and governance doc changes
- reviewer-agent PR for review workflow changes
- documentation-agent PR for README changes
```
**Result**: PR blocked. Developer must split changes across appropriate agent roles.
**Why this is correct**:
- Scope validation caught cross-cutting changes
- Prevented coder from modifying PM agent's files
- Prevented coder from modifying reviewer agent's files
- Enforced separation of concerns
- Provided clear remediation guidance
**If validation had been skipped**:
- PM templates would have been modified by non-PM agent
- Review workflow would have been broken by non-reviewer agent
- Governance docs would have been modified without proper review
- Future agent work would have failed due to broken templates/workflows
</Bad>
---
**Last Updated**: 2025-11-14
**Phase**: Superpowers Skill-Chaining Enhancement v2.0.0
**Maintainer**: Wolf Orchestration Team


@@ -0,0 +1,650 @@
---
name: wolf-scripts-core
description: Core automation scripts for archetype selection, evidence validation, quality scoring, and safe bash execution
version: 1.1.0
category: automation
triggers:
- archetype selection
- evidence validation
- quality scoring
- curator rubric
- bash validation
dependencies:
- wolf-archetypes
- wolf-governance
size: medium
---
# Wolf Scripts Core
Core automation patterns that power Wolf's behavioral adaptation system. These scripts represent battle-tested logic from 50+ phases of development.
## Overview
This skill captures the **essential automation patterns** that run on nearly every Wolf operation:
1. **Archetype Selection** - Automatically determine behavioral profile based on issue/PR characteristics
2. **Evidence Validation** - Validate archetype-specific evidence requirements with conflict resolution
3. **Curator Rubric** - Score issue quality using reproducible 1-10 scoring system
4. **Bash Validation** - Safe bash execution with pattern checking and GitHub CLI validation
## 🎯 Archetype Selection Pattern
### Purpose
Analyzes GitHub issues to determine the appropriate coder agent archetype based on labels, keywords, and patterns.
### Available Archetypes
```javascript
{
'product-implementer': {
keywords: ['feature', 'implement', 'add', 'create', 'build', 'develop', 'functionality'],
patterns: ['user story', 'as a user', 'acceptance criteria', 'business logic']
},
'reliability-fixer': {
keywords: ['bug', 'fix', 'error', 'crash', 'fail', 'broken', 'issue'],
patterns: ['steps to reproduce', 'error message', 'stack trace', 'regression']
},
'security-hardener': {
keywords: ['security', 'vulnerability', 'exploit', 'auth', 'permission', 'access'],
patterns: ['cve', 'security scan', 'penetration', 'authentication', 'authorization']
},
'perf-optimizer': {
keywords: ['performance', 'slow', 'optimize', 'speed', 'memory', 'cpu'],
patterns: ['benchmark', 'profiling', 'latency', 'throughput', 'bottleneck']
},
'research-prototyper': {
keywords: ['research', 'prototype', 'experiment', 'proof of concept', 'explore'],
patterns: ['investigation', 'feasibility', 'spike', 'technical debt', 'architecture']
}
}
```
### Scoring Algorithm
1. **Keyword Matching**: `score += matches.length * 2` (weight keywords highly)
2. **Pattern Matching**: `score += matches.length * 3` (weight patterns even higher)
3. **Label Matching**: Labels are included in the full-text search
4. **Threshold Check**: Requires a minimum confidence score to select an archetype
5. **Fallback**: If no archetype reaches the threshold, defaults to `product-implementer` (see the sketch below)
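A minimal sketch of this scoring, assuming the archetype definitions above are available as an `ARCHETYPES` object (the real `select-archetype.mjs` may normalize or weight differently):
```javascript
// Keywords count double, patterns triple, per the algorithm above; labels are folded into the text.
function scoreArchetypes(title, body, labels, archetypes = ARCHETYPES) {
  const text = `${title} ${body} ${labels.join(' ')}`.toLowerCase();
  const scores = {};
  for (const [name, { keywords, patterns }] of Object.entries(archetypes)) {
    const keywordHits = keywords.filter(k => text.includes(k)).length;
    const patternHits = patterns.filter(p => text.includes(p)).length;
    scores[name] = keywordHits * 2 + patternHits * 3;
  }
  return scores;
}
```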
### Usage Pattern
```javascript
// Score all archetypes
const scores = scoreArchetypes(title, body, labels);
// Find highest score
const winner = Object.entries(scores)
.sort((a, b) => b[1] - a[1])[0];
// Confidence check (threshold of 70% per the verification checklist below)
const CONFIDENCE_THRESHOLD = 70;
const totalScore = Object.values(scores).reduce((sum, s) => sum + s, 0);
const confidence = (scores[winner[0]] / totalScore) * 100;
if (confidence < CONFIDENCE_THRESHOLD) {
// Use fallback or request human review
}
```
### When to Use
- Starting new work items
- Issue triage and routing
- Determining agent behavioral profile
- Validating archetype assignments
**Script Location**: `/agents/shared/scripts/select-archetype.mjs`
---
## 📋 Evidence Validation Pattern
### Purpose
Validates evidence requirements from multiple sources (archetypes, lenses) with priority-based conflict resolution.
### Priority Levels
```javascript
const PRIORITY_LEVELS = {
DEFAULT: 0, // Basic requirements
LENS: 1, // Overlay lens requirements
ARCHETYPE: 2, // Core archetype requirements
OVERRIDE: 3 // Explicit overrides
};
```
### Conflict Resolution Strategies
1. **Priority-Based**: Higher priority wins
2. **Merge Strategy**: Same priority → union of requirements
3. **Conflict Tracking**: All conflicts logged for audit
4. **Resolution Recording**: Documents why each resolution was made
### Schema Versions (Backward Compatibility)
- **V1 (1.0.0)**: Original format
- **V2 (2.0.0)**: Added priority field
- **V3 (3.0.0)**: Added conflict resolution
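One way to handle backward compatibility is to normalize older payloads to the newest shape before validation; a speculative sketch (field names here are assumptions, not the actual schema):
```javascript
// Assumed fields: schemaVersion, priority, conflicts. PRIORITY_LEVELS is defined above.
function normalizeRequirements(payload) {
  const version = payload.schemaVersion ?? '1.0.0';
  const normalized = { ...payload, schemaVersion: '3.0.0' };
  if (version.startsWith('1.')) {
    normalized.priority = PRIORITY_LEVELS.DEFAULT; // V1 predates the priority field
  }
  if (!version.startsWith('3.')) {
    normalized.conflicts = []; // conflict tracking arrived in V3
  }
  return normalized;
}
```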
### Usage Pattern
```javascript
class EvidenceValidator {
constructor() {
this.requirements = new Map();
this.conflicts = [];
this.resolutions = [];
}
// Add requirements from source
addRequirements(source, requirements, priority) {
// Detect conflicts
// Resolve based on priority or merge
// Track resolution decision
}
// Merge two requirement values
mergeRequirements(existing, incoming) {
// Arrays: union
// Objects: deep merge
// Numbers: max
// Booleans: logical OR
// Strings: concatenate with separator
}
// Validate against requirements
validate(evidence) {
// Check all requirements met
// Return validation report
}
}
```
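The merge rules sketched as comments above could be realized roughly as follows (a sketch; the real `evidence-validator.mjs` may handle edge cases differently):
```javascript
// Union arrays, deep-merge objects, take the max of numbers, OR booleans, concatenate strings.
function mergeRequirements(existing, incoming) {
  if (Array.isArray(existing) && Array.isArray(incoming)) {
    return [...new Set([...existing, ...incoming])];
  }
  if (existing && incoming && typeof existing === 'object' && typeof incoming === 'object') {
    const merged = { ...existing };
    for (const [key, value] of Object.entries(incoming)) {
      merged[key] = key in merged ? mergeRequirements(merged[key], value) : value;
    }
    return merged;
  }
  if (typeof existing === 'number' && typeof incoming === 'number') return Math.max(existing, incoming);
  if (typeof existing === 'boolean' && typeof incoming === 'boolean') return existing || incoming;
  if (typeof existing === 'string' && typeof incoming === 'string') return `${existing}; ${incoming}`;
  return incoming; // mixed types: prefer the incoming value
}
```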
### When to Use
- Before creating PRs (ensure evidence collected)
- During PR review (validate evidence submitted)
- When combining archetype + lens requirements
- Resolving conflicts between requirement sources
**Script Location**: `/agents/shared/scripts/evidence-validator.mjs`
---
## 🏆 Curator Rubric Pattern
### Purpose
Reproducible 1-10 scoring system for issue quality using weighted rubric across 5 categories.
### Rubric Categories (100 points total)
#### 1. Problem Definition (25 points)
- **Problem Stated** (5pts): Clear problem statement exists
- **User Impact** (5pts): User/system impact described
- **Root Cause** (5pts): Root cause identified or investigated
- **Constraints** (5pts): Constraints and limitations noted
- **Success Metrics** (5pts): Success criteria measurable
#### 2. Acceptance Criteria (25 points)
- **Testable** (8pts): AC is specific and testable
- **Complete** (7pts): Covers happy and edge cases
- **Prioritized** (5pts): Must/should/nice clearly separated
- **Given/When/Then** (5pts): Uses Given/When/Then format
#### 3. Technical Completeness (20 points)
- **Dependencies** (5pts): Dependencies identified
- **Risks** (5pts): Technical risks assessed
- **Architecture** (5pts): Architecture approach defined
- **Performance** (5pts): Performance considerations noted
#### 4. Documentation Quality (15 points)
- **Context** (5pts): Sufficient context provided
- **References** (5pts): Links to related issues/docs
- **Examples** (5pts): Examples or mockups included
#### 5. Process Compliance (15 points)
- **Labels** (5pts): Appropriate labels applied
- **Estimates** (5pts): Effort estimated (S/M/L/XL)
- **Priority** (5pts): Priority clearly indicated
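Expressed in code, the rubric amounts to a weighted checklist; a sketch assuming each check is gathered as a boolean flag (hypothetical field names; `curator-rubric.mjs` defines its own checks):
```javascript
// Category -> { checkName: points } entries; the points sum to 100 as listed above.
const RUBRIC = {
  problemDefinition: { problemStated: 5, userImpact: 5, rootCause: 5, constraints: 5, successMetrics: 5 },
  acceptanceCriteria: { testable: 8, complete: 7, prioritized: 5, givenWhenThen: 5 },
  technicalCompleteness: { dependencies: 5, risks: 5, architecture: 5, performance: 5 },
  documentationQuality: { context: 5, references: 5, examples: 5 },
  processCompliance: { labels: 5, estimates: 5, priority: 5 }
};
// `checks` maps check names to booleans, e.g. { problemStated: true, testable: false, ... }.
function scoreIssueChecks(checks) {
  let rawScore = 0;
  for (const category of Object.values(RUBRIC)) {
    for (const [check, points] of Object.entries(category)) {
      if (checks[check]) rawScore += points;
    }
  }
  return rawScore; // 0-100; convert with convertTo10Scale() below
}
```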
### Score Conversion (100 → 10 scale)
```javascript
function convertTo10Scale(rawScore) {
return Math.round(rawScore / 10);
}
// Score ranges:
// 90-100 → 10 (Exceptional)
// 80-89 → 8-9 (Excellent)
// 70-79 → 7 (Good)
// 60-69 → 6 (Acceptable)
// 50-59 → 5 (Needs improvement)
// <50 → 1-4 (Poor)
```
### Usage Pattern
```javascript
const rubric = new CuratorRubric();
// Score an issue
const score = rubric.scoreIssue(issueNumber);
// Get detailed breakdown
const breakdown = rubric.getScoreBreakdown(issueNumber);
// Post score as comment
rubric.postScoreComment(issueNumber, score, breakdown);
// Track score history
rubric.saveScoreHistory();
```
### When to Use
- During intake curation
- Before moving issues to pm-ready
- Quality gate enforcement
- Identifying patterns of good/poor curation
**Script Location**: `/agents/shared/scripts/curator-rubric.mjs`
---
## 🛡️ Bash Validation Pattern
### Purpose
Safe bash execution with syntax checking, pattern validation, and GitHub CLI validation.
### Validation Layers
1. **Shellcheck**: Syntax and best practices validation
2. **Pattern Checking**: Custom rules for common anti-patterns
3. **GitHub CLI**: Validate `gh` command usage
4. **Dry Run**: Test commands before execution
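The shellcheck layer might be invoked from Node roughly like this (a sketch assuming `shellcheck` is installed and on `PATH`; it supports `--format=json`):
```javascript
import { execFileSync } from 'node:child_process';
// Returns shellcheck findings as parsed JSON. shellcheck exits non-zero when it
// finds issues, so the findings are read from the error's captured stdout too.
function runShellcheck(filepath) {
  try {
    const stdout = execFileSync('shellcheck', ['--format=json', filepath], { encoding: 'utf8' });
    return JSON.parse(stdout || '[]');
  } catch (err) {
    if (err.stdout) return JSON.parse(err.stdout);
    throw err; // shellcheck missing or another execution failure
  }
}
```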
### Configuration
```javascript
const config = {
shellcheck: {
enabled: true,
format: 'json',
severity: ['error', 'warning', 'info']
},
patterns: {
enabled: true,
customRules: 'bash-patterns.json'
},
github: {
validateCLI: true,
dryRun: true
},
output: {
format: 'detailed', // or 'json', 'summary'
exitOnError: true
}
};
```
### Command Line Options
```bash
# Validate single file
bash-validator.mjs --file script.sh
# Validate directory
bash-validator.mjs --directory ./scripts
# Validate staged files (pre-commit)
bash-validator.mjs --staged
# Validate workflows
bash-validator.mjs --workflows
# Comprehensive scan
bash-validator.mjs --comprehensive
# Update pattern rules
bash-validator.mjs --update-patterns
```
### Severity Levels
- **error**: Critical issues that will cause failures
- **warning**: Potential issues, best practice violations
- **info**: Suggestions for improvement
### Usage Pattern
```javascript
class BashValidator {
constructor(options) {
this.options = { ...config, ...options };
this.results = {
files: [],
summary: { total: 0, passed: 0, failed: 0 }
};
}
// Validate file
validateFile(filepath) {
// Run shellcheck
// Check custom patterns
// Validate GitHub CLI usage
// Return validation report
}
// Aggregate results
getSummary() {
// Return summary statistics
}
}
```
### Common Anti-Patterns Detected
- Unquoted variables
- Missing error handling
- Unsafe command substitution
- Race conditions in pipelines
- Missing shellcheck directives
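Custom rules of this kind can be expressed as simple predicate entries; a hypothetical shape (the real `bash-patterns.json` may use a different schema):
```javascript
const CUSTOM_RULES = [
  {
    id: 'backtick-substitution',
    severity: 'warning',
    test: text => /`[^`]+`/.test(text),
    message: 'Prefer $(...) over backtick command substitution'
  },
  {
    id: 'missing-error-handling',
    severity: 'info',
    test: text => !/set -e/.test(text),
    message: 'Script does not enable `set -e` (or `set -euo pipefail`)'
  }
];
function checkCustomPatterns(scriptText) {
  return CUSTOM_RULES
    .filter(rule => rule.test(scriptText))
    .map(({ id, severity, message }) => ({ id, severity, message }));
}
```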
### When to Use
- Pre-commit hooks for bash scripts
- CI/CD validation of shell scripts
- Workflow validation
- Before executing user-provided bash
**Script Location**: `/agents/shared/scripts/bash-validator.mjs`
---
## Integration Patterns
### Combining Scripts in Workflows
**Example 1: Issue Intake Pipeline**
```javascript
// 1. Score issue quality
const qualityScore = curatorRubric.scoreIssue(issueNumber);
// 2. If quality sufficient, select archetype
if (qualityScore >= 6) {
const archetype = selectArchetype(issueNumber);
// 3. Load archetype evidence requirements
const requirements = loadArchetypeRequirements(archetype);
// 4. Validate requirements
const validator = new EvidenceValidator();
validator.addRequirements(archetype, requirements, PRIORITY_LEVELS.ARCHETYPE);
}
```
**Example 2: PR Validation Pipeline**
```javascript
// 1. Get archetype from PR labels
const archetype = extractArchetypeFromLabels(prLabels);
// 2. Load evidence requirements (archetype + lenses)
const validator = new EvidenceValidator();
validator.addRequirements(archetype, archetypeRequirements, PRIORITY_LEVELS.ARCHETYPE);
// Apply lenses if present
if (hasPerformanceLens) {
validator.addRequirements('performance-lens', perfRequirements, PRIORITY_LEVELS.LENS);
}
// 3. Validate evidence submitted
const validationReport = validator.validate(prEvidence);
// 4. Block merge if validation fails
if (!validationReport.passed) {
postComment(prNumber, validationReport.message);
setStatus('failure');
}
```
**Example 3: Safe Script Execution**
```javascript
// 1. Validate bash script
const bashValidator = new BashValidator();
const validationResult = bashValidator.validateFile(scriptPath);
// 2. Only execute if validation passed
if (validationResult.passed) {
execSync(scriptPath);
} else {
console.error('Validation failed:', validationResult.errors);
process.exit(1);
}
```
---
## Related Skills
- **wolf-archetypes**: Archetype definitions and registry
- **wolf-lenses**: Lens overlay requirements
- **wolf-governance**: Governance policies and quality gates
- **wolf-scripts-agents**: Agent coordination scripts (orchestration, execution)
---
## File Locations
All core scripts are in `/agents/shared/scripts/`:
- `select-archetype.mjs` - Archetype selection logic
- `evidence-validator.mjs` - Evidence validation with conflict resolution
- `curator-rubric.mjs` - Issue quality scoring
- `bash-validator.mjs` - Safe bash execution validation
---
## Best Practices
### Archetype Selection
- ✅ Always check confidence score before auto-assignment
- ✅ Log archetype selection rationale for audit
- ✅ Fall back to human review if low confidence
- ❌ Don't skip label analysis
- ❌ Don't ignore pattern matching
### Evidence Validation
- ✅ Always track conflict resolutions
- ✅ Document why conflicts were resolved specific ways
- ✅ Validate backward compatibility when updating schemas
- ❌ Don't silently drop conflicting requirements
- ❌ Don't ignore priority levels
### Curator Rubric
- ✅ Provide detailed score breakdown with comments
- ✅ Track score history for trend analysis
- ✅ Use consistent scoring criteria
- ❌ Don't auto-approve low-scoring issues
- ❌ Don't skip any rubric categories
### Bash Validation
- ✅ Always validate before execution
- ✅ Use dry-run mode for testing
- ✅ Check comprehensive mode for critical scripts
- ❌ Don't bypass validation for "simple" scripts
- ❌ Don't ignore warnings in production code
---
## Red Flags - STOP
If you catch yourself thinking:
-**"Skipping automated checks to save time"** - STOP. Automation exists because manual checks fail. Scripts catch what humans miss. Use the automation.
-**"Manual validation is good enough"** - NO. Manual validation is inconsistent and error-prone. Scripts provide reproducible validation every time.
-**"Scripts are just helpers, not requirements"** - Wrong. These scripts encode battle-tested logic from 50+ phases. They ARE requirements.
-**"I can select archetypes manually faster"** - False. Manual selection misses patterns and lacks confidence scoring. Use `select-archetype.mjs`.
-**"Evidence validation can wait until PR review"** - FORBIDDEN. Waiting until PR review wastes reviewer time. Validate BEFORE creating PR.
-**"Curator rubric scoring is optional"** - NO. Quality gates depend on rubric scores. All issues must be scored before pm-ready.
**STOP. Use the appropriate automation script BEFORE proceeding.**
## After Using This Skill
**REQUIRED NEXT STEPS:**
```
Integration with Wolf skill chain
```
1. **RECOMMENDED SKILL**: Use **wolf-archetypes** to understand archetype definitions
- **Why**: Scripts automate archetype selection. Understanding archetypes ensures correct interpretation of results.
- **When**: After using `select-archetype.mjs` to understand selected archetype's requirements
- **Tool**: Use Skill tool to load wolf-archetypes
2. **RECOMMENDED SKILL**: Use **wolf-governance** to understand quality gates
- **Why**: Scripts enforce governance. Understanding gates ensures compliance.
- **When**: After using `curator-rubric.mjs` or `evidence-validator.mjs`
- **Tool**: Use Skill tool to load wolf-governance
3. **DURING WORK**: Scripts provide continuous automation
- Scripts are called throughout workflow (intake, validation, execution)
- No single "next skill" - scripts integrate into existing chains
- Use scripts at appropriate workflow stages
### Verification Checklist
Before claiming script-based automation complete:
- [ ] Used appropriate automation script for task (archetype selection, evidence validation, rubric scoring, or bash validation)
- [ ] Validated confidence scores before proceeding (for archetype selection, require >70% confidence)
- [ ] Documented script execution and results in journal or PR description
- [ ] Evidence requirements tracked and validated (for evidence-validator usage)
- [ ] No validation warnings ignored (all errors and warnings addressed)
**Can't check all boxes? Automation incomplete. Return to this skill.**
### Good/Bad Examples: Script Usage
#### Example 1: Archetype Selection
<Good>
**Issue #456: Add rate limiting to API endpoints**
**Labels**: `feature`, `performance`
**Script Execution**:
```bash
$ node select-archetype.mjs --issue 456
Results:
product-implementer: 45% (keywords: add, feature)
perf-optimizer: 72% (keywords: performance, rate limiting; patterns: throughput)
Selected: perf-optimizer (confidence: 72%)
✅ Confidence above threshold (70%)
```
**Agent Action**:
✅ Accepted perf-optimizer archetype
✅ Loaded perf-optimizer evidence requirements (benchmarks, profiling, performance tests)
✅ Documented selection rationale in journal
✅ Proceeded with performance-focused implementation
**Why this is correct**:
- Used automation script instead of manual guess
- Validated confidence score before accepting
- Selected archetype matches work characteristics (performance focus)
- Evidence requirements automatically loaded
</Good>
<Bad>
**Issue #457: Fix login button**
**Labels**: `bug`
**Manual Selection**: "It's obviously a bug fix, so reliability-fixer"
**Problems**:
❌ Skipped automation script
❌ No confidence scoring
❌ Missed that issue title/description mention "button doesn't display" (could be CSS issue = maintainability-refactorer)
❌ No evidence requirements loaded
❌ No documentation of selection rationale
**What Should Have Been Done**:
```bash
$ node select-archetype.mjs --issue 457
Results:
reliability-fixer: 38% (keywords: fix, bug)
maintainability-refactorer: 54% (patterns: display issue, CSS)
Selected: maintainability-refactorer (confidence: 54%)
⚠️ Low confidence - recommend human review
```
**Outcome**: Agent would have identified this as UI/styling issue, not logic bug.
</Bad>
#### Example 2: Evidence Validation Workflow
<Good>
**PR #789: Optimize database query performance**
**Archetype**: perf-optimizer
**Lenses**: observability
**Script Execution**:
```bash
$ node evidence-validator.mjs --archetype perf-optimizer --lenses observability
Loading requirements...
✅ Archetype requirements (priority: 2): benchmarks, profiling, performance tests
✅ Observability lens (priority: 1): metrics, monitoring, alerting
Validating evidence...
✅ Benchmarks provided: before/after query times
✅ Profiling data: flame graph showing bottleneck
✅ Performance tests: 47 tests passing, 15% latency improvement
✅ Metrics: added query_duration_ms metric
✅ Monitoring: added query performance dashboard
✅ Alerting: added slow query alert (>500ms)
All requirements met ✅
```
**Agent Action**:
✅ Evidence validator ran before PR creation
✅ All requirements from archetype + lens validated
✅ PR included complete evidence package
✅ Reviewer approved without requesting additional evidence
**Why this is correct**:
- Automated validation caught all requirements
- Combined archetype + lens requirements properly
- Evidence complete before PR review
- No wasted reviewer time
</Good>
<Bad>
**PR #790: Add caching layer**
**Archetype**: perf-optimizer
**Lenses**: security
**Manual Check**: "I added benchmarks, should be good"
**Problems**:
❌ Skipped evidence-validator script
❌ Didn't realize security lens adds requirements (threat model, security scan)
❌ Missing security evidence for caching layer
❌ Missing several perf-optimizer requirements (profiling, comprehensive tests)
**PR Review**:
❌ Reviewer requested: threat model for cache poisoning
❌ Reviewer requested: security scan for cache key vulnerabilities
❌ Reviewer requested: comprehensive performance tests
❌ Reviewer requested: cache eviction profiling
**Outcome**: 3 review cycles, 2 weeks delay, demoralized team
**What Should Have Been Done**:
```bash
$ node evidence-validator.mjs --archetype perf-optimizer --lenses security
❌ Validation failed:
Missing: Threat model (security lens requirement)
Missing: Security scan (security lens requirement)
Missing: Profiling data (archetype requirement)
Missing: Cache eviction tests (archetype requirement)
Provided: Basic benchmarks only
Evidence incomplete. Address missing requirements before PR creation.
```
**If script had been used**: All requirements identified upfront, evidence collected before PR, single review cycle.
</Bad>
---
**Last Updated**: 2025-11-14
**Phase**: Superpowers Skill-Chaining Enhancement v2.0.0
**Maintainer**: Wolf Automation Team