Initial commit

Zhongwei Li
2025-11-30 09:02:01 +08:00
commit a38b4d5c57
8 changed files with 1280 additions and 0 deletions

12
.claude-plugin/plugin.json Normal file
View File

@@ -0,0 +1,12 @@
{
"name": "headless-cli-agents",
"description": "Comprehensive guide for running AI coding agents in non-interactive mode for automation, CI/CD pipelines, and scripting",
"version": "1.0.0",
"author": {
"name": "Timur Khakhalev",
"email": "timur.khakhalev@gmail.com"
},
"skills": [
"./"
]
}

3
README.md Normal file
View File

@@ -0,0 +1,3 @@
# headless-cli-agents
Comprehensive guide for running AI coding agents in non-interactive mode for automation, CI/CD pipelines, and scripting

277
SKILL.md Normal file
View File

@@ -0,0 +1,277 @@
---
name: headless-cli-agents
description: This skill provides comprehensive guidance for running AI coding agents in non-interactive (headless) mode for automation, CI/CD pipelines, and scripting. Use when integrating Claude Code, Codex, Gemini, OpenCode, Qwen, or Droid CLI into automated workflows where human interaction is not desired.
---
# Headless CLI Agents
## Overview
This skill enables the use of AI coding agents in non-interactive mode for automation scenarios. It provides command references, safety considerations, and practical examples for integrating AI agents into CI/CD pipelines, shell scripts, and other automated workflows.
## Quick Reference
| Agent | Basic Command | Automation Flag | Best For |
|-------|--------------|----------------|----------|
| Claude Code | `claude -p "prompt"` | `--dangerously-skip-permissions` | General coding tasks |
| OpenAI Codex | `codex exec "prompt"` | `--full-auto` | Complex refactoring |
| Google Gemini | `gemini -p "prompt"` | `--yolo` (if available) | Analysis tasks |
| OpenCode | `opencode -p "prompt"` | Auto-approved by default | Multi-provider support |
| Qwen Code | `qwen -p "prompt"` | `--yolo` | Local model support |
| Factory Droid | `droid exec "prompt"` | `--auto <level>` | Controlled automation |
## When to Use Headless Mode
Use this skill when:
- **CI/CD Pipelines**: Automated code reviews, test generation, documentation
- **Shell Scripts**: Repetitive coding tasks, bulk operations
- **Cron Jobs**: Scheduled maintenance, analysis tasks
- **Git Hooks**: Pre-commit validation, post-commit analysis
- **DevOps Automation**: Infrastructure as code, deployment preparation
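The cron-job case, for instance, can be a single crontab entry that runs an agent nightly and stores its report. A minimal sketch, assuming the project path, schedule, and output locations below (all placeholders):
```bash
# Nightly code-quality report at 02:00 (the % is escaped because cron treats it specially)
0 2 * * * cd /srv/myproject && claude -p "Generate a code quality report" > reports/$(date +\%F).md 2>> logs/ai-cron.log
```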
## Core Concepts
### 1. Agent Selection
Choose the appropriate agent based on your requirements:
**Claude Code CLI** - Best for general-purpose coding with excellent code understanding
```bash
# Basic usage
claude -p "Review this code for security issues"
# With additional context
claude -p "Generate tests for authentication module" --add-dir ./tests
```
**OpenAI Codex CLI** - Best for complex refactoring and code transformation
```bash
# Automated refactoring
codex exec --full-auto "Refactor this module to use async/await"
# Outside git repos
codex exec --skip-git-repo-check --full-auto "Create API documentation"
```
**Google Gemini CLI** - Best for analysis and documentation tasks
```bash
# Analysis with structured output
gemini -p "Analyze codebase architecture" --output-format json
# Documentation generation
cat src/*.py | gemini -p "Generate comprehensive API documentation"
```
### 2. Safety and Autonomy Levels
Different agents provide varying levels of automation control:
**Read-Only Mode (Safest)**
```bash
# Analysis without changes
droid exec "Analyze security vulnerabilities"
gemini -p "Review code quality metrics"
```
**Low-Risk Changes**
```bash
# Documentation and comments
droid exec "Add docstrings to all functions" --auto low
codex exec --full-auto "Update README with installation instructions"
```
**Development Operations**
```bash
# Package installation, test running
droid exec "Install dependencies and run tests" --auto medium
codex exec --full-auto "Fix failing unit tests"
```
**High-Risk Changes (Use Carefully)**
```bash
# Production deployments, major refactoring
droid exec "Implement OAuth2 migration" --auto high
# Only in isolated environments
codex exec --yolo "Complete system refactoring"
```
### 3. Input/Output Patterns
**Piping Content**
```bash
# Analyze git diff
git diff | claude -p "Review these changes for bugs"
# Process error logs
cat error.log | qwen -p "Categorize and summarize these errors"
# Multiple files
find . -name "*.py" | xargs claude -p "Check for anti-patterns"
```
**File-based Prompts**
```bash
# Read from prompt file
droid exec -f migration_prompt.md
# JSON output for parsing
gemini -p "List all API endpoints" --output-format json
```
**Structured Output**
```bash
# Machine-readable output
opencode -p "Count lines of code" -f json
claude -p "Generate test coverage report" > coverage_report.md
```
## Common Workflows
### Code Review Automation
```bash
# Quick security scan
find . -name "*.py" | xargs claude -p "Check for security vulnerabilities"
# Performance analysis
git diff | codex exec --full-auto "Analyze performance impact of changes"
# Documentation consistency
droid exec "Verify all functions have docstrings" --auto low
```
### Test Generation
```bash
# Unit tests for specific module
claude -p "Generate comprehensive unit tests for auth.py using pytest"
# Integration tests
codex exec --full-auto "Create API integration tests with realistic data"
# Test coverage analysis
qwen -p "Analyze test coverage and suggest missing test cases"
```
### Documentation Automation
```bash
# API documentation
find src/ -name "*.py" | gemini -p "Generate OpenAPI specification"
# README generation
claude -p "Create comprehensive README with setup, usage, and examples"
# Changelog from commits
git log --oneline | qwen -p "Generate changelog from commit history"
```
## Integration Patterns
### CI/CD Integration
**GitHub Actions**
```yaml
- name: AI Code Review
  run: |
    git diff origin/main...HEAD | claude -p "Review for security and performance issues"
```
**GitLab CI**
```yaml
script:
- gemini -p "Generate test suite for new features" --output-format json > test_plan.json
```
### Pre-commit Hooks
```bash
#!/bin/sh
# .git/hooks/pre-commit
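# Note: the exit status below only tells you whether the CLI call itself succeeded,
# not whether the review found problems; parse the agent's output if you need a verdict.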
git diff --cached | claude -p "Check staged changes for obvious bugs"
if [ $? -ne 0 ]; then exit 1; fi
```
### Monitoring and Alerts
```bash
# Daily code quality report
claude -p "Generate daily code quality report" | mail -s "Code Quality" team@example.com
```
## Best Practices
### Security
- Never use `--yolo` or equivalent flags in production environments
- Validate AI-generated code before deployment
- Use read-only mode for security-sensitive analysis
- Implement human review for high-risk changes
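One way to enforce the first rule in scripts is to gate bypass flags behind an explicit sandbox marker. A minimal sketch, where the `AI_SANDBOX` variable is an assumption of this example rather than a convention of any CLI:
```bash
# Only permit the bypass flag when the environment is explicitly marked as a sandbox
if [ "${AI_SANDBOX:-0}" = "1" ]; then
    codex exec --dangerously-bypass-approvals-and-sandbox "$TASK"
else
    codex exec --full-auto "$TASK"  # stays inside the workspace-write sandbox
fi
```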
### Performance
- Limit the scope of analysis (specific files vs entire codebase)
- Use structured output formats for programmatic processing
- Cache results when appropriate
- Monitor API usage and costs
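The caching suggestion can be as simple as keying a report on a hash of the input so unchanged diffs are not re-analyzed. A minimal sketch with an illustrative cache location:
```bash
# Re-run the review only when the diff content actually changes
key=$(git diff | sha256sum | cut -d' ' -f1)
cache="/tmp/ai-review-cache/$key.md"
if [ ! -f "$cache" ]; then
    mkdir -p /tmp/ai-review-cache
    git diff | claude -p "Review these changes for bugs" > "$cache"
fi
cat "$cache"
```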
### Reliability
- Include fallback mechanisms for AI agent failures
- Validate generated code with linters and tests
- Use specific, well-defined prompts for consistent results
- Implement retry logic for network issues
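Retry logic can be a bounded loop around the agent call. A minimal sketch; the prompt, output file, and backoff are illustrative:
```bash
# Retry up to 3 times with a growing delay before giving up
for attempt in 1 2 3; do
    if git diff | claude -p "Review these changes" > review.md; then
        break
    fi
    echo "Attempt $attempt failed, retrying..." >&2
    sleep $((attempt * 10))
done
```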
### Error Handling
```bash
# Robust script pattern
if ! claude -p "Generate tests" > tests.py; then
echo "AI generation failed, using fallback"
cp fallback_tests.py tests.py
fi
```
## Troubleshooting
### Common Issues
**Agent not found**: Ensure CLI tools are installed and in PATH
```bash
which claude codex gemini opencode qwen droid
```
**Authentication errors**: Verify API keys and tokens
```bash
# Auth flows differ per CLI and version; check each tool's --help.
# Most agents either cache credentials after a one-time interactive login
# or read an API key from the environment (e.g. ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY).
codex login status  # example: the Codex CLI reports its current login state
```
**Permission denied**: Check file permissions and working directory
```bash
ls -la
pwd
```
**Context limit exceeded**: Reduce analysis scope or use specific files
```bash
# Instead of entire codebase
claude -p "Analyze main.py only"
# Or use specific patterns
find src/ -maxdepth 2 -name "*.py" | claude -p "Review these files"
```
### Debug Mode
Most agents support some form of verbose or debug output, though the exact flag or mechanism varies:
```bash
claude --verbose -p "Debug prompt"
RUST_LOG=debug codex exec "Debug task"  # Codex logging is typically raised via RUST_LOG
```
## Resources
### references/
**agent-specific-commands.md** - Detailed command documentation for all six CLI agents including flags, options, and specific usage patterns. Load this when you need comprehensive syntax reference for a particular agent.
**use-case-examples.md** - Practical examples for CI/CD pipelines, shell scripts, and automation workflows. Load this when implementing specific automation scenarios or need concrete implementation patterns.
### scripts/
**validate-agent-setup.py** - Optional helper script to verify agent installations, API authentication, and basic functionality. Execute this to check if the required CLI agents are properly configured before using them in automation.
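Typical invocations (flags taken from the script's own argument parser):
```bash
# Check every supported agent and print a human-readable table
python scripts/validate-agent-setup.py

# Check a single agent and emit machine-readable JSON
python scripts/validate-agent-setup.py claude --format json
```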
---
**References contain detailed command documentation and practical examples that complement this guide.**

61
plugin.lock.json Normal file
View File

@@ -0,0 +1,61 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:timurkhakhalev/cc-plugins:headless-cli-agents",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "87c67967a1cc2c9b76bf166118b8a371a0e25b0f",
"treeHash": "d4a3d8dea3696e472573ae6682853d892a39935b4b59cf07e2563eebb8c498da",
"generatedAt": "2025-11-28T10:28:41.738773Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "headless-cli-agents",
"description": "Comprehensive guide for running AI coding agents in non-interactive mode for automation, CI/CD pipelines, and scripting",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "f1c8f2e76cbe34a301a2682222bf0749e43236567dafe1b01a50fb23f36b8533"
},
{
"path": "SKILL.md",
"sha256": "7ce88f695ee1d3db69400b6848cf3eb125ee06327d825730f8c38400c5606411"
},
{
"path": "references/api_reference.md",
"sha256": "96126526510569488ab86e78bdd24d543a7b046489d3f6c34041a9d732ae0606"
},
{
"path": "references/agent-specific-commands.md",
"sha256": "63d7eb2a993feb9f00f24b92e3d500310020ce6dcc3bb7fed1c7c542003318ae"
},
{
"path": "references/use-case-examples.md",
"sha256": "874f19b33226f5ea4f741c89b55a87b172911a9c6d227b6b1bee122ff23b3b78"
},
{
"path": "scripts/validate-agent-setup.py",
"sha256": "21280b2006a793f3cfba0f2f51a6ac9db4d6f38dd414d85c9040f1902c322914"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "fe3e6d3ca4013e30cd72937ee15023d466486f8a9e0d656688941fb561cb1ed0"
}
],
"dirSha256": "d4a3d8dea3696e472573ae6682853d892a39935b4b59cf07e2563eebb8c498da"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

260
references/agent-specific-commands.md Normal file
View File

@@ -0,0 +1,260 @@
# CLI Agent Commands Reference
## Claude Code CLI (Anthropic)
### Basic Headless Usage
```bash
claude -p "Your prompt here"
```
### Key Flags
- `-p` or `--print`: Run a one-shot prompt non-interactively and exit
- `--add-dir <path>`: Add additional directory to workspace context
- `--model <model>`: Specify model (optional)
### Examples
```bash
# Basic one-shot query
claude -p "Explain this function"
# Pipe input to Claude
cat error.log | claude -p "Summarize these errors"
# Multiple directory context
claude -p "Review the API design" --add-dir ../api-specs
```
### Authentication
Requires configured API key or OAuth token. Run `claude --help` for setup options.
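In non-interactive environments the key is usually supplied through the environment; a minimal sketch (the key value is a placeholder):
```bash
export ANTHROPIC_API_KEY="sk-ant-..."  # placeholder value
claude -p "Reply with OK if you can read this"
```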
---
## OpenAI Codex CLI
### Basic Headless Usage
```bash
codex exec "Your prompt here"
codex e "Your prompt here" # Short alias
```
### Key Flags
- `--full-auto`: Unattended operation with workspace-write sandbox
- `--dangerously-bypass-approvals-and-sandbox` or `--yolo`: Complete hands-off mode (use carefully)
- `--skip-git-repo-check`: Allow execution outside Git repositories
- `--cd <path>`: Set working directory
- `--model <model>` or `-m`: Specify model (e.g., `-m gpt-5-codex`)
### Examples
```bash
# Automated refactoring
codex exec --full-auto "Update all README links to HTTPS"
# Outside Git repo
codex exec --skip-git-repo-check --full-auto "Create hello world HTML"
# Different working directory
codex exec --cd /path/to/project "Fix failing tests"
```
### Input Methods
```bash
# Pipe prompt from file
codex exec - < prompt.txt
# Standard input
echo "Review this code" | codex exec -
```
---
## Google Gemini CLI
### Basic Headless Usage
```bash
gemini --prompt "Your prompt here"
gemini -p "Your prompt here" # Short form
```
### Key Flags
- `--prompt` or `-p`: Execute prompt and exit
- `--output-format <format>`: Output format (json, stream-json)
- `--model <model>`: Specify model variant
### Examples
```bash
# Basic query
gemini -p "Summarize API design in this repo"
# Pipe input with prompt
echo "List TODO comments" | gemini -p "-"
# JSON output
gemini -p "Analyze code structure" --output-format json
# Process file with instruction
cat DESIGN.md | gemini -p "Improve this design document"
```
### Authentication
Requires Google account authentication or API key setup.
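For headless runs, an API key in the environment avoids the interactive OAuth flow; a minimal sketch (the key value is a placeholder):
```bash
export GEMINI_API_KEY="your-api-key"  # placeholder value
gemini -p "Reply with OK if you can read this"
```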
---
## OpenCode CLI
### Basic Headless Usage
```bash
opencode -p "Your prompt here"
opencode --prompt "Your prompt here"
```
### Key Flags
- `-p` or `--prompt`: Execute single prompt and exit
- `-f <format>` or `--format`: Output format (json)
- `-q` or `--quiet`: Suppress loading spinner
- `--cwd <path>`: Set working directory
### Examples
```bash
# Basic query
opencode -p "Explain Go context usage"
# JSON output
opencode -p "How many files in project?" -f json
# Quiet mode for scripting
opencode -p "Review code" -q
# Different working directory
opencode -p "Analyze this project" --cwd /path/to/project
```
### Environment Setup
Requires API keys for providers (OpenAI, Anthropic, etc.) in environment variables.
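A minimal sketch of the environment a headless run expects; the key values are placeholders, and you only need the providers you actually use:
```bash
export ANTHROPIC_API_KEY="sk-ant-..."  # placeholder
export OPENAI_API_KEY="sk-..."         # placeholder
opencode -p "Summarize this project" -q
```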
---
## Alibaba Qwen Code CLI
### Basic Headless Usage
```bash
qwen -p "Your prompt here"
```
### Key Flags
- `-p` or `--prompt`: Execute one-shot prompt
- `--output-format <format>`: Output format (json)
- `--model <model>`: Specify Qwen model variant
- `--yolo`: Bypass confirmations (similar to other agents)
### Examples
```bash
# Code review
qwen -p "Review this code for potential bugs"
# Generate tests
qwen -p "Generate unit tests for utils.py"
# Pipe diff for review
git diff | qwen -p "Review this diff for errors"
# JSON output
qwen -p "List project files" --output-format json
```
### Authentication
- First-time setup: Run `qwen` interactively to login with Qwen.ai OAuth
- Cached credentials: Used automatically for subsequent `-p` calls
- Local models: Set OPENAI_API_KEY and related env vars for local LLM servers
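For the local-model case, a hedged sketch of pointing the CLI at an OpenAI-compatible server; the URL, key, and model name are assumptions about your own setup:
```bash
export OPENAI_API_KEY="local-placeholder-key"
export OPENAI_BASE_URL="http://localhost:11434/v1"  # e.g. a local inference server
export OPENAI_MODEL="qwen2.5-coder"                 # whatever model that server exposes
qwen -p "Review utils.py for potential bugs"
```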
---
## Factory Droid CLI
### Basic Headless Usage
```bash
droid exec "Your prompt here"
```
### Key Flags
- `--auto <level>`: Set autonomy level (low, medium, high)
- `--skip-permissions-unsafe`: Bypass all permission checks (use carefully)
- `--cwd <path>`: Set working directory
- `-f <file>`: Read prompt from file
- `-o <format>`: Output format (json)
### Autonomy Levels
- **Default (no flag)**: Read-only mode, safe for analysis
- `--auto low`: Allow low-risk file edits (documentation, simple refactors)
- `--auto medium`: Allow development operations (install packages, run tests)
- `--auto high`: Permit production-level changes (full access)
### Examples
```bash
# Read-only analysis
droid exec "List all TODO comments across the project"
# Low-risk edits
droid exec "Fix typos in README.md" --auto low
# Development operations
droid exec "Fix failing unit tests" --auto medium
# High-risk changes
droid exec "Implement OAuth2 migration" --auto high
# Read prompt from file
droid exec -f prompt.md
# JSON output
droid exec "Analyze codebase" -o json
# Different working directory
droid exec "Review this code" --cwd /path/to/project
```
### Safety Notes
- Default mode is read-only for safety
- Use `--skip-permissions-unsafe` only in sandboxed environments
- Consider autonomy levels carefully based on use case
---
## Common Patterns
### Piping Input
Most agents support piping input:
```bash
# Pipe file content
cat file.txt | agent -p "Process this"
# Pipe command output
git log --oneline | agent -p "Summarize commits"
```
### Reading from Files
```bash
# Direct file reading (if supported)
agent -f prompt.txt
# Using cat and pipe
cat prompt.txt | agent -p "-"
```
### JSON Output
For scripting and automation:
```bash
# JSON output format
agent -p "Query" --output-format json
agent -p "Query" -f json
agent -p "Query" -o json
```
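A hedged sketch of consuming that JSON from a script with `jq`; the field name is an assumption, since each agent uses its own schema, so inspect the output once before wiring it into automation:
```bash
# Capture the JSON, then pull out the text field (adjust the jq path to the agent's schema)
result=$(gemini -p "List all API endpoints" --output-format json)
echo "$result" | jq -r '.response // .'
```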
### Automation Flags
For completely unattended operation:
```bash
# Various automation flags per agent
codex exec --full-auto "Task"
droid exec "Task" --auto medium
gemini --yolo -p "Task"  # if available
```

34
references/api_reference.md Normal file
View File

@@ -0,0 +1,34 @@
# Reference Documentation for Headless CLI Agents
This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.
Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples
## When Reference Docs Are Useful
Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases
## Structure Suggestions
### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits
### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices

363
references/use-case-examples.md Normal file
View File

@@ -0,0 +1,363 @@
# CLI Agents Use Case Examples
## CI/CD Pipeline Examples
### GitHub Actions - Code Review
```yaml
name: AI Code Review
on:
  pull_request:
    branches: [main]
jobs:
  ai-review:
    runs-on: ubuntu-latest
    env:
      # Claude Code reads the API key from the environment in non-interactive runs
      ANTHROPIC_API_KEY: ${{ secrets.CLAUDE_API_KEY }}
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # full history so origin/main is available for the diff
      - name: Setup Claude Code CLI
        run: curl -fsSL https://claude.ai/install.sh | sh
      - name: Review PR Changes
        run: |
          git diff origin/main...HEAD | claude -p "Review these changes for potential bugs, security issues, and best practices. Focus on: 1) Error handling 2) Performance 3) Security 4) Code quality"
      - name: Check for TODO Comments
        run: |
          find . -name "*.py" -o -name "*.js" -o -name "*.ts" | xargs grep -l "TODO\|FIXME" | claude -p "Review these files containing TODO/FIXME comments and suggest implementation approaches"
```
### GitLab CI - Documentation Generation
```yaml
stages:
  - analyze
  - build

code-analysis:
  stage: analyze
  image: node:18
  # Assumes ANTHROPIC_API_KEY is provided as a CI/CD variable
  before_script:
    - npm install -g @anthropic-ai/claude-code
  script:
    - |
      claude -p "Generate comprehensive API documentation for this codebase. Focus on: 1) Endpoints 2) Request/response formats 3) Authentication 4) Error codes" > API_DOCUMENTATION.md
      claude -p "Create a README with setup instructions, usage examples, and contribution guidelines" > README_ENHANCED.md
  artifacts:
    paths:
      - API_DOCUMENTATION.md
      - README_ENHANCED.md
```
### Jenkins Pipeline - Test Generation
```groovy
pipeline {
    agent any
    stages {
        stage('AI Test Generation') {
            steps {
                script {
                    sh '''
                        # Generate unit tests for untested functions
                        find src -name "*.py" -exec grep -L "def test_" {} \\; | codex exec --full-auto --skip-git-repo-check "Generate comprehensive unit tests for these files using pytest framework. Include edge cases and error handling."
                        # Generate integration tests
                        codex exec --full-auto "Create integration tests for the main API endpoints. Test authentication, data flow, and error scenarios."
                    '''
                }
            }
        }
        stage('Run Tests') {
            steps {
                sh 'python -m pytest'
            }
        }
    }
}
```
## Shell Scripting Examples
### Code Quality Check Script
```bash
#!/bin/bash
# quality-check.sh - Automated code quality analysis
set -e
echo "🔍 Running AI-powered code quality checks..."
# Check for security vulnerabilities
echo "🔒 Checking for security issues..."
find . -name "*.py" -o -name "*.js" -o -name "*.ts" | \
xargs grep -l "password\|secret\|token\|key" | \
claude -p "Analyze these files for potential security vulnerabilities. Look for: 1) Hardcoded credentials 2) Insecure data handling 3) Missing input validation 4) Authentication bypasses"
# Check for performance issues
echo "⚡ Analyzing performance patterns..."
find . -name "*.py" | head -10 | \
claude -p "Review these files for performance bottlenecks. Focus on: 1) Database queries 2) Loops and recursion 3) Memory usage 4) Async operations"
# Generate quality report
echo "📋 Generating quality report..."
claude -p "Create a comprehensive code quality report summarizing: 1) Security findings 2) Performance issues 3) Code style violations 4) Recommendations for improvement" > QUALITY_REPORT.md
echo "✅ Quality check completed. See QUALITY_REPORT.md"
```
### Automated Refactoring Script
```bash
#!/bin/bash
# refactor.sh - Automated code refactoring
PROJECT_DIR=${1:-.}
REFACTOR_TYPE=${2:-"general"}
echo "🔧 Starting automated refactoring in $PROJECT_DIR..."
cd "$PROJECT_DIR"
case "$REFACTOR_TYPE" in
"security")
codex exec --full-auto "Review all files for security issues and implement fixes: 1) Input sanitization 2) Output encoding 3) Authentication improvements 4) Secure headers"
;;
"performance")
codex exec --full-auto "Optimize code for performance: 1) Database query optimization 2) Caching strategies 3) Async/await patterns 4) Resource cleanup"
;;
"documentation")
codex exec --full-auto "Add comprehensive documentation: 1) Function docstrings 2) Type hints 3) Usage examples 4) README updates"
;;
*)
codex exec --full-auto "General code refactoring: 1) Improve naming conventions 2) Reduce complexity 3) Add error handling 4) Code organization"
;;
esac
echo "✅ Refactoring completed"
```
### Dependency Update Script
```bash
#!/bin/bash
# update-dependencies.sh - Smart dependency management
echo "📦 Analyzing and updating dependencies..."
# Check for security updates
gemini -p "Analyze package.json/requirements.txt for security vulnerabilities and outdated dependencies. Suggest specific version updates with migration notes." > DEPENDENCY_ANALYSIS.md
# Update packages (if safe)
if [ "$1" = "--auto" ]; then
echo "🚀 Auto-updating dependencies..."
droid exec "Update all dependencies to latest safe versions. Create migration plan for breaking changes." --auto medium
fi
# Generate changelog
claude -p "Create a changelog entry documenting dependency updates, security improvements, and potential breaking changes." > CHANGELOG.md
echo "✅ Dependency analysis completed"
```
## Automation Workflow Examples
### Pre-commit Hook
```bash
#!/bin/bash
# .git/hooks/pre-commit
echo "🤖 Running AI pre-commit checks..."
# Check the commit message (note: at pre-commit time the new message does not
# exist yet, so this inspects the previous commit; to validate the current
# message, move this block into .git/hooks/commit-msg and read the file passed as $1)
commit_msg=$(git log -1 --pretty=%B)
echo "$commit_msg" | claude -p "Validate this commit message. Check for: 1) Clear description 2) Proper format 3) Issue references 4) Breaking change indicators"
if [ $? -ne 0 ]; then
    echo "❌ Commit message validation failed"
    exit 1
fi
# Quick code review of staged changes
git diff --cached | claude -p "Quick review of staged changes. Check for: 1) Obvious bugs 2) Syntax errors 3) Missing tests 4) Security issues"
if [ $? -ne 0 ]; then
echo "❌ Code review found issues"
exit 1
fi
echo "✅ Pre-commit checks passed"
```
### Release Preparation Script
```bash
#!/bin/bash
# prepare-release.sh - Automated release preparation
VERSION=${1:-"patch"}
RELEASE_BRANCH=${2:-"release"}
echo "🚀 Preparing release for version bump: $VERSION"
# Create release branch
git checkout -b "$RELEASE_BRANCH"
# Update version numbers
codex exec --full-auto "Update all version numbers in the project for a $VERSION release: 1) package.json 2) __init__.py files 3) Docker files 4) Documentation"
# Generate release notes
git log --oneline $(git describe --tags --abbrev=0)..HEAD | \
qwen -p "Create comprehensive release notes from these commits. Categorize by: 1) Features 2) Bug fixes 3) Breaking changes 4) Security improvements" > RELEASE_NOTES.md
# Update documentation
droid exec "Update all documentation for new release: 1) API docs 2) README 3) Installation guides 4) Migration guides" --auto low
echo "✅ Release preparation completed"
echo "📋 Review RELEASE_NOTES.md and commit changes"
```
### Database Migration Helper
```bash
#!/bin/bash
# migrate-db.sh - AI-assisted database migrations
MIGRATION_NAME=${1:-"auto_migration"}
echo "🗄️ Generating migration: $MIGRATION_NAME"
# Analyze schema changes
find models/ -name "*.py" | \
claude -p "Analyze these model files for schema changes since last migration. Identify: 1) New tables 2) Column changes 3) Index changes 4) Relationship updates"
# Generate migration file
codex exec --full-auto "Create a database migration file named ${MIGRATION_NAME}.py. Include: 1) Forward migration 2) Rollback migration 3) Data transformations 4) Safety checks"
# Generate test data
gemini -p "Generate test data and validation queries for the new migration. Include: 1) Sample records 2) Constraint tests 3) Performance test queries"
echo "✅ Migration generated. Review and run: python manage.py migrate"
```
## Monitoring and Maintenance Examples
### Log Analysis Script
```bash
#!/bin/bash
# analyze-logs.sh - AI-powered log analysis
LOG_FILE=${1:-"app.log"}
TIMEFRAME=${2:-"24h"}
echo "📊 Analyzing logs from $LOG_FILE (last $TIMEFRAME)..."
# Extract errors and warnings
grep -E "(ERROR|WARN|CRITICAL)" "$LOG_FILE" | \
qwen -p "Analyze these log entries for: 1) Error patterns 2) Root causes 3) Frequency analysis 4) Recommended fixes" > ERROR_ANALYSIS.md
# Performance analysis
grep -E "(slow|timeout|memory|performance)" "$LOG_FILE" | \
claude -p "Identify performance issues from these logs. Focus on: 1) Slow queries 2) Memory leaks 3) Timeout patterns 4) Resource bottlenecks" > PERFORMANCE_ISSUES.md
# Generate summary
gemini -p "Create an executive summary of log analysis including: 1) Critical issues 2) Performance impact 3) Security concerns 4) Action items" > LOG_SUMMARY.md
echo "✅ Log analysis completed. Check *.md files"
```
### Health Check Automation
```bash
#!/bin/bash
# health-check.sh - Automated system health analysis
echo "🏥 Running AI-powered health checks..."
# Code health
find . -name "*.py" | head -20 | \
droid exec "Analyze code health indicators: 1) Code complexity 2) Test coverage gaps 3) Dead code 4) Anti-patterns" --auto low > CODE_HEALTH.md
# Dependency health
claude -p "Analyze project dependencies for: 1) Security vulnerabilities 2) License compliance 3) Version conflicts 4) Maintenance status" > DEPENDENCY_HEALTH.md
# Architecture health
gemini -p "Review project architecture for: 1) Design patterns 2) Coupling issues 3) Scalability concerns 4) Technical debt" > ARCHITECTURE_HEALTH.md
# Generate actionable report
claude -p "Create a prioritized action plan based on health checks. Include: 1) Critical fixes 2) Improvements 3) Technical debt roadmap 4) Resource allocation" > HEALTH_ACTION_PLAN.md
echo "✅ Health check completed. Review generated reports"
```
## Integration Examples
### Slack Integration
```bash
#!/bin/bash
# slack-ai-notify.sh - Send AI analysis to Slack
WEBHOOK_URL=${SLACK_WEBHOOK_URL}
PROJECT_DIR=${1:-"."}
cd "$PROJECT_DIR"
# Analyze recent changes
git diff HEAD~1 | \
claude -p "Analyze recent changes and create a concise summary for team notification. Include: 1) Key changes 2) Impact 3) Any action needed" > CHANGE_SUMMARY.txt
# Send to Slack
# Build the payload with jq so quotes and newlines in the summary are escaped correctly
curl -X POST -H 'Content-type: application/json' \
    --data "$(jq -n --arg text "$(cat CHANGE_SUMMARY.txt)" '{text: $text}')" \
    "$WEBHOOK_URL"
echo "📢 AI summary sent to Slack"
```
### Email Reports
```bash
#!/bin/bash
# email-ai-report.sh - Generate and email AI reports
EMAIL=${1:-"team@example.com"}
REPORT_TYPE=${2:-"weekly"}
echo "📧 Generating $REPORT_TYPE report..."
case "$REPORT_TYPE" in
"weekly")
git log --since="1 week ago" --oneline | \
qwen -p "Create a weekly development report. Include: 1) Features completed 2) Bugs fixed 3) Code quality metrics 4) Team achievements" > weekly_report.md
;;
"security")
find . -name "*.py" -o -name "*.js" | \
claude -p "Generate a security audit report. Include: 1) Vulnerabilities found 2) Risk assessment 3) Remediation steps 4) Best practices" > security_report.md
;;
esac
# Send email (using mail command or your preferred method)
mail -s "AI-generated $REPORT_TYPE report" "$EMAIL" < "${REPORT_TYPE}_report.md"
echo "✅ Report emailed to $EMAIL"
```

270
scripts/validate-agent-setup.py Executable file
View File

@@ -0,0 +1,270 @@
#!/usr/bin/env python3
"""
CLI Agent Setup Validation Script
This script checks if various AI coding CLI agents are properly installed,
configured, and authenticated for headless operation.
Usage:
python validate-agent-setup.py [agent_name]
If agent_name is provided, only that agent will be checked.
Otherwise, all available agents will be validated.
"""
import subprocess
import shlex
import sys
import json
import os
from typing import Dict, List, Tuple
# Agent configuration
AGENTS = {
"claude": {
"commands": ["claude", "claude -p 'test'"],
"install_url": "https://claude.ai/install",
"auth_check": "claude auth status",
"description": "Anthropic Claude Code CLI"
},
"codex": {
"commands": ["codex", "codex exec 'test'"],
"install_url": "https://platform.openai.com/docs/cli",
"auth_check": "codex auth verify",
"description": "OpenAI Codex CLI"
},
"gemini": {
"commands": ["gemini", "gemini -p 'test'"],
"install_url": "https://geminicli.com/docs/installation",
"auth_check": "gemini auth status",
"description": "Google Gemini CLI"
},
"opencode": {
"commands": ["opencode", "opencode -p 'test'"],
"install_url": "https://github.com/opencode-ai/opencode",
"auth_check": "echo 'Check environment variables for API keys'",
"description": "OpenCode CLI (multi-provider)"
},
"qwen": {
"commands": ["qwen", "qwen -p 'test'"],
"install_url": "https://github.com/QwenLM/qwen-code",
"auth_check": "qwen auth status",
"description": "Alibaba Qwen Code CLI"
},
"droid": {
"commands": ["droid", "droid exec 'test'"],
"install_url": "https://docs.factory.ai/cli/installation",
"auth_check": "droid auth status",
"description": "Factory Droid CLI"
}
}
def run_command(cmd: str, timeout: int = 30) -> Tuple[int, str, str]:
    """Run a command and return exit code, stdout, stderr."""
    try:
        result = subprocess.run(
            shlex.split(cmd) if isinstance(cmd, str) else cmd,
            capture_output=True,
            text=True,
            timeout=timeout
        )
        return result.returncode, result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        return -1, "", "Command timed out"
    except Exception as e:
        return -1, "", str(e)

def check_agent_installation(agent_name: str, config: Dict) -> Dict:
    """Check if an agent is properly installed."""
    results = {
        "agent": agent_name,
        "description": config["description"],
        "installed": False,
        "version": None,
        "auth_status": "unknown",
        "basic_test": False,
        "errors": [],
        "recommendations": []
    }
    # Check if the command exists on PATH
    cmd = config["commands"][0]
    exit_code, stdout, stderr = run_command(f"which {cmd}")
    if exit_code != 0:
        results["errors"].append(f"Command '{cmd}' not found in PATH")
        results["recommendations"].append(f"Install from: {config['install_url']}")
        return results
    results["installed"] = True
    # Ask the tool for its version; fall back to the resolved path from `which`
    version_code, version_out, _ = run_command(f"{cmd} --version", timeout=10)
    results["version"] = version_out.strip() if version_code == 0 else stdout.strip() or "Unknown version"
    # Check authentication
    if "auth_check" in config:
        exit_code, auth_stdout, auth_stderr = run_command(config["auth_check"])
        if exit_code == 0:
            results["auth_status"] = "authenticated"
        else:
            results["auth_status"] = "not_authenticated"
            results["errors"].append(f"Authentication check failed: {auth_stderr}")
            results["recommendations"].append("Run authentication command for this agent")
    # Basic functionality test (try to get help output)
    exit_code, help_stdout, help_stderr = run_command(f"{cmd} --help", timeout=10)
    if exit_code == 0:
        results["basic_test"] = True
    return results

def check_environment_requirements() -> Dict:
    """Check general environment requirements."""
    results = {
        "python_version": sys.version,
        "working_directory": os.getcwd(),
        "environment_variables": {},
        "recommendations": []
    }
    # Check for common environment variables
    env_vars = [
        "ANTHROPIC_API_KEY",
        "OPENAI_API_KEY",
        "GOOGLE_API_KEY",
        "OPENAI_BASE_URL",
        "CLAUDE_API_KEY"
    ]
    for var in env_vars:
        value = os.environ.get(var)
        results["environment_variables"][var] = "SET" if value else "NOT_SET"
    # Check git availability
    exit_code, stdout, stderr = run_command("git --version")
    if exit_code != 0:
        results["recommendations"].append("Install git for better agent integration")
    return results

def print_results(results: List[Dict], env_info: Dict, format_type: str = "table"):
    """Print validation results in the specified format."""
    if format_type == "json":
        output = {
            "environment": env_info,
            "agents": results
        }
        print(json.dumps(output, indent=2))
        return
    # Table format
    print("\n" + "=" * 80)
    print("🤖 CLI AGENT SETUP VALIDATION")
    print("=" * 80)
    print(f"\n📁 Working Directory: {env_info['working_directory']}")
    print(f"🐍 Python Version: {env_info['python_version'].split()[0]}")
    print("\n🔑 Environment Variables:")
    for var, status in env_info["environment_variables"].items():
        status_icon = "✅" if status == "SET" else "❌"
        print(f"  {status_icon} {var}: {status}")
    print("\n" + "=" * 80)
    print("AGENT STATUS")
    print("=" * 80)
    for result in results:
        agent = result["agent"]
        description = result["description"]
        # Status icons
        install_icon = "✅" if result["installed"] else "❌"
        auth_icon = "✅" if result["auth_status"] == "authenticated" else "❌" if result["auth_status"] == "not_authenticated" else "⚠️"
        test_icon = "✅" if result["basic_test"] else "❌"
        print(f"\n{install_icon} {agent.upper()}: {description}")
        print(f"  📦 Installed: {'Yes' if result['installed'] else 'No'}")
        print(f"  🔐 Auth Status: {auth_icon} {result['auth_status']}")
        print(f"  🧪 Basic Test: {test_icon} {'Pass' if result['basic_test'] else 'Fail'}")
        if result["version"]:
            print(f"  📋 Version: {result['version']}")
        if result["errors"]:
            print("  ❌ Errors:")
            for error in result["errors"]:
                print(f"    - {error}")
        if result["recommendations"]:
            print("  💡 Recommendations:")
            for rec in result["recommendations"]:
                print(f"    - {rec}")
    print("\n" + "=" * 80)
    print("SUMMARY")
    print("=" * 80)
    installed_count = sum(1 for r in results if r["installed"])
    auth_count = sum(1 for r in results if r["auth_status"] == "authenticated")
    print(f"📦 Agents Installed: {installed_count}/{len(results)}")
    print(f"🔐 Agents Authenticated: {auth_count}/{len(results)}")
    if env_info["recommendations"]:
        print("\n💡 General Recommendations:")
        for rec in env_info["recommendations"]:
            print(f"  - {rec}")

def main():
    """Main validation function."""
    import argparse
    parser = argparse.ArgumentParser(description="Validate CLI agent setup")
    parser.add_argument("agent", nargs="?", choices=list(AGENTS.keys()),
                        help="Specific agent to check (default: all)")
    parser.add_argument("--format", choices=["table", "json"], default="table",
                        help="Output format (default: table)")
    parser.add_argument("--quiet", action="store_true",
                        help="Only show summary")
    args = parser.parse_args()
    # Suppress progress chatter when emitting JSON so the output stays parseable
    show_progress = not args.quiet and args.format != "json"
    # Check environment
    env_info = check_environment_requirements()
    # Check agents
    if args.agent:
        agents_to_check = {args.agent: AGENTS[args.agent]}
    else:
        agents_to_check = AGENTS
    results = []
    for agent_name, config in agents_to_check.items():
        if show_progress:
            print(f"Checking {agent_name}...", end=" ")
        result = check_agent_installation(agent_name, config)
        results.append(result)
        if show_progress:
            print("✅ Found" if result["installed"] else "❌ Not found")
    # Print results
    if not args.quiet:
        print_results(results, env_info, args.format)
    else:
        # Quiet summary
        installed = sum(1 for r in results if r["installed"])
        authenticated = sum(1 for r in results if r["auth_status"] == "authenticated")
        print(f"Installed: {installed}/{len(results)} | Authenticated: {authenticated}/{len(results)}")
    # Exit code based on results
    if not any(r["installed"] for r in results):
        sys.exit(2)  # No agents installed
    elif not all(r["installed"] for r in results):
        sys.exit(1)  # Some agents missing
    else:
        sys.exit(0)  # All good


if __name__ == "__main__":
    main()