Initial commit

Zhongwei Li
2025-11-30 09:01:56 +08:00
commit 324f1d386c
21 changed files with 8327 additions and 0 deletions


@@ -0,0 +1,363 @@
---
allowed-tools:
- Read
- Grep
- Glob
- Bash
- Task
argument-hint: "[file-or-directory] [--strict]"
description: "Performs an in-depth code review of the codebase"
---
# Toduba Code Review - Code Analysis and Review 🔍
## Objective
Perform a complete, detailed code review, providing constructive feedback on quality, best practices, security, and maintainability.
## Arguments
- `file-or-directory`: File or directory to review (default: current directory)
- `--strict`: Strict mode with additional checks
Arguments received: $ARGUMENTS
## Code Review Process
### Phase 1: Scope Identification
```bash
# Determine what to review
if [ -z "$ARGUMENTS" ]; then
  # Review the latest changes
  FILES=$(git diff --name-only HEAD~1)
else
  # Review the specified file/directory
  FILES=$ARGUMENTS
fi
# Count the files to review
FILE_COUNT=$(echo "$FILES" | wc -l)
echo "📋 Files to review: $FILE_COUNT"
```
### Phase 2: Multi-Dimensional Analysis
#### 2.1 Code Quality
```typescript
// The metrics below are assumed to be computed by an earlier parsing/linting pass
const reviewCodeQuality = (variableName: string, functionLines: number, complexity: number) => {
  const issues: { severity: string; type: string; message: string }[] = [];
  // Naming conventions
  if (!/^[a-z][a-zA-Z0-9]*$/.test(variableName)) {
    issues.push({
      severity: 'minor',
      type: 'naming',
      message: 'Variable should use camelCase'
    });
  }
  // Function length
  if (functionLines > 50) {
    issues.push({
      severity: 'major',
      type: 'complexity',
      message: 'Function too long, consider splitting'
    });
  }
  // Cyclomatic complexity
  if (complexity > 10) {
    issues.push({
      severity: 'major',
      type: 'complexity',
      message: 'High complexity, simplify logic'
    });
  }
  return issues;
};
```
#### 2.2 Security Review
```typescript
const securityReview = (code: string) => {
  const vulnerabilities = [];
  // SQL Injection: string concatenation inside a query call (simple heuristic)
  if (/query\([^)]*["'`][^"'`]*["'`]\s*\+/.test(code)) {
    vulnerabilities.push({
      severity: 'critical',
      type: 'sql-injection',
      message: 'Use parameterized queries'
    });
  }
  // XSS
  if (code.includes('innerHTML') && !code.includes('sanitize')) {
    vulnerabilities.push({
      severity: 'high',
      type: 'xss',
      message: 'Sanitize HTML before innerHTML'
    });
  }
  // Hardcoded secrets
  if (/api[_-]?key\s*=\s*["'][^"']+["']/i.test(code)) {
    vulnerabilities.push({
      severity: 'critical',
      type: 'secrets',
      message: 'Use environment variables for secrets'
    });
  }
  return vulnerabilities;
};
```
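At the shell level, the same hardcoded-secret heuristic can be applied directly to the files selected in Phase 1; the following is a rough, illustrative sketch, not an exhaustive scanner:
```bash
# Quick grep-based scan of the files under review for hardcoded API keys (heuristic)
echo "$FILES" | while read -r f; do
  [ -f "$f" ] || continue
  if grep -Eiq "api[_-]?key[[:space:]]*=[[:space:]]*['\"][^'\"]+['\"]" "$f"; then
    echo "⚠️ Possible hardcoded secret in $f"
  fi
done
```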
#### 2.3 Performance Review
```typescript
const performanceReview = (code: string) => {
  const issues = [];
  // N+1 queries
  if (code.includes('forEach') && code.includes('await')) {
    issues.push({
      severity: 'major',
      type: 'performance',
      message: 'Potential N+1 query, use batch operations'
    });
  }
  // Memory leaks
  if (code.includes('addEventListener') && !code.includes('removeEventListener')) {
    issues.push({
      severity: 'major',
      type: 'memory',
      message: 'Remove event listeners to prevent memory leaks'
    });
  }
  return issues;
};
```
### Phase 3: Best Practices Check
```typescript
const checkBestPractices = () => {
  const checks = {
    errorHandling: checkErrorHandling(),
    testing: checkTestCoverage(),
    documentation: checkDocumentation(),
    accessibility: checkAccessibility(),
    i18n: checkInternationalization()
  };
  return generateReport(checks);
};
```
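The individual check functions above are not defined in this command; as a rough shell-level approximation (assuming a TypeScript codebase under `src/` with Jest-style `*.test.ts` files), two of them could look like this:
```bash
# Illustrative approximations of two of the checks (heuristics only)
check_error_handling() {
  local awaits=$(grep -r "await " src/ | wc -l)
  local try_blocks=$(grep -r "try {" src/ | wc -l)
  echo "await expressions: $awaits, try blocks: $try_blocks"
}
check_test_coverage() {
  local src_files=$(find src -name "*.ts" ! -name "*.test.ts" | wc -l)
  local test_files=$(find src -name "*.test.ts" | wc -l)
  echo "source files: $src_files, test files: $test_files"
}
```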
### Phase 4: Report Generation
## Code Review Report Template
```markdown
# 📊 Toduba Code Review Report
**Date**: [TIMESTAMP]
**Reviewer**: Toduba System
**Files Reviewed**: [COUNT]
**Overall Score**: 7.5/10
## 🎯 Summary
### Statistics
- Lines of Code: 450
- Complexity: Medium
- Test Coverage: 78%
- Documentation: Good
### Rating by Category
| Category | Score | Status |
|----------|-------|--------|
| Code Quality | 8/10 | ✅ Good |
| Security | 7/10 | ⚠️ Needs Attention |
| Performance | 8/10 | ✅ Good |
| Maintainability | 7/10 | ⚠️ Moderate |
| Testing | 6/10 | ⚠️ Improve |
## 🔴 Critical Issues (Must Fix)
### 1. SQL Injection Vulnerability
**File**: `src/api/users.js:45`
```javascript
// ❌ Current
const query = "SELECT * FROM users WHERE id = " + userId;
// ✅ Suggested
const query = "SELECT * FROM users WHERE id = ?";
db.query(query, [userId]);
```
**Impact**: High security risk
**Effort**: Low
### 2. Hardcoded API Key
**File**: `src/config.js:12`
```javascript
// ❌ Current
const API_KEY = "sk-1234567890abcdef";
// ✅ Suggested
const API_KEY = process.env.API_KEY;
```
## 🟡 Major Issues (Should Fix)
### 1. Function Complexity
**File**: `src/services/payment.js:120`
- Cyclomatic complexity: 15 (threshold: 10)
- Suggestion: Split into smaller functions
- Example refactoring provided below
### 2. Missing Error Handling
**File**: `src/controllers/user.js:34`
```javascript
// ❌ Current
const user = await getUserById(id);
return res.json(user);
// ✅ Suggested
try {
const user = await getUserById(id);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
return res.json(user);
} catch (error) {
logger.error('Failed to get user:', error);
return res.status(500).json({ error: 'Internal server error' });
}
```
## 🔵 Minor Issues (Nice to Have)
### 1. Naming Convention
- `getUserData` → `fetchUserData` (more descriptive)
- `tmp` → `temporaryFile` (avoid abbreviations)
### 2. Code Duplication
- Similar logic in 3 places
- Consider extracting to utility function
## ✅ Good Practices Observed
1. **Consistent formatting** throughout the codebase
2. **TypeScript usage** for type safety
3. **Async/await** properly used
4. **Environment variables** for configuration
5. **Modular structure** with clear separation
## 📈 Improvements Since Last Review
- Test coverage increased from 65% to 78%
- Removed 3 deprecated dependencies
- Fixed 2 security vulnerabilities
## 💡 Recommendations
### Immediate Actions
1. Fix SQL injection vulnerability
2. Remove hardcoded secrets
3. Add error handling to async operations
### Short-term Improvements
1. Increase test coverage to 85%
2. Reduce function complexity
3. Add JSDoc comments
### Long-term Suggestions
1. Implement automated security scanning
2. Set up performance monitoring
3. Create coding standards document
## 📝 Detailed Feedback by File
### `src/api/users.js`
- **Lines**: 245
- **Issues**: 3 critical, 2 major, 5 minor
- **Suggestions**:
- Add input validation middleware
- Implement rate limiting
- Use transaction for multi-step operations
### `src/components/UserProfile.tsx`
- **Lines**: 180
- **Issues**: 1 major, 3 minor
- **Suggestions**:
- Memoize expensive calculations
- Add loading states
- Improve accessibility
## 🎓 Learning Opportunities
Based on this review, consider studying:
1. OWASP Top 10 Security Risks
2. Clean Code principles
3. Performance optimization techniques
4. Advanced TypeScript patterns
```
## Integration with the Orchestrator
When invoked by the orchestrator:
```typescript
// It can invoke specialized agents for deeper reviews
if (needsSecurityReview) {
await Task.invoke('toduba-qa-engineer', {
action: 'security-scan',
files: criticalFiles
});
}
if (needsPerformanceReview) {
await Task.invoke('toduba-backend-engineer', {
action: 'performance-analysis',
files: backendFiles
});
}
```
## Final Output
```
✅ Code Review Completed
📊 Results:
- Score: 7.5/10
- Critical Issues: 2
- Major Issues: 5
- Minor Issues: 12
🔴 Required Actions:
1. Fix SQL injection (users.js:45)
2. Remove hardcoded API key (config.js:12)
📋 Full report saved to:
./code-review-report-2024-10-31.md
💡 Next steps:
1. Fix the critical issues
2. Plan fixes for the major issues
3. Update the documentation
Time taken: 45 seconds
```
## Code Review Best Practices
1. **Constructive feedback**, always
2. **Prioritize issues** by severity
3. **Provide solutions**, not just problems
4. **Recognize good code**, don't only criticize
5. **Educational approach** to support team growth
6. **Automated checks** where possible
7. **Consistent standards** across reviews
8. **Follow up** on resolved issues

commands/toduba-commit.md

@@ -0,0 +1,236 @@
---
allowed-tools:
- Bash
- Read
- Grep
argument-hint: "[message]"
description: "Creates commits with structured messages following best practices"
---
# Toduba Commit - Structured Commit Management 📝
## Objective
Create Git commits with well-structured messages, following the project's conventions and best practices.
## Arguments
- `message` (optional): Custom commit message
Arguments received: $ARGUMENTS
## Commit Process
### Phase 1: Change Analysis
```bash
# Check the repository status
git status --porcelain
# Show a diff summary of the changes
git diff --stat
# Count the modified files
MODIFIED_FILES=$(git status --porcelain | wc -l)
```
### Phase 2: Change Categorization
Determine the commit type (a shell-level detection sketch follows the list):
- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation only
- `style`: Formatting, no logic changes
- `refactor`: Code refactoring
- `test`: Adding or updating tests
- `chore`: Maintenance, dependencies
- `perf`: Performance improvements
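A minimal shell-level sketch of this categorization, using only heuristics on changed paths (the patterns and precedence are illustrative assumptions, not project rules); a fuller TypeScript version appears in the Smart Commit Message Analysis section below:
```bash
# Heuristic sketch: infer the commit type from the changed paths (illustrative only)
CHANGED=$(git status --porcelain | awk '{print $2}')
TYPE="chore"
echo "$CHANGED" | grep -qE '\.(md|txt)$'         && TYPE="docs"
echo "$CHANGED" | grep -qE '\.test\.|__tests__/' && TYPE="test"
if echo "$CHANGED" | grep -q '^src/'; then
  # New files under src/ suggest a feature; otherwise assume a fix
  if git status --porcelain | grep -qE '^(A|\?\?) +src/'; then
    TYPE="feat"
  else
    TYPE="fix"
  fi
fi
echo "Suggested type: $TYPE"
```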
### Phase 3: Message Generation
#### Conventional Commits format:
```
<type>(<scope>): <description>
[optional body]
[optional footer]
```
#### Examples:
```
feat(auth): add JWT token refresh capability
Implemented automatic token refresh when the access token expires.
Added refresh token storage and validation logic.
Closes #123
```
### Phase 4: Pre-Commit Checks
```bash
# Run linting
npm run lint
# Run tests
npm test
# Check for console.logs
if grep -r "console.log" src/; then
echo "⚠️ Warning: console.log trovati nel codice"
fi
# Check for TODO comments
if grep -r "TODO" src/; then
echo "📝 Reminder: TODO comments trovati"
fi
```
### Phase 5: Commit Creation
```bash
# Stage the appropriate changes
git add -A
# Create the commit with a structured message
git commit -m "$(cat <<EOF
$COMMIT_TYPE($COMMIT_SCOPE): $COMMIT_MESSAGE
$COMMIT_BODY
🤖 Generated with Toduba System
Co-Authored-By: Toduba <noreply@toduba.it>
EOF
)"
```
## Smart Commit Message Analysis
```typescript
const generateCommitMessage = (changes) => {
// Analyze the modified files
const analysis = {
hasNewFiles: changes.some((c) => c.status === "A"),
hasDeletedFiles: changes.some((c) => c.status === "D"),
hasModifiedFiles: changes.some((c) => c.status === "M"),
mainlyFrontend:
changes.filter((c) => c.path.includes("components")).length > 0,
mainlyBackend: changes.filter((c) => c.path.includes("api")).length > 0,
mainlyTests: changes.filter((c) => c.path.includes(".test.")).length > 0,
mainlyDocs:
changes.filter((c) => c.path.match(/\.(md|txt|doc)/)).length > 0,
};
// Determine the type
let type = "chore";
if (analysis.hasNewFiles && !analysis.mainlyTests) type = "feat";
if (analysis.mainlyTests) type = "test";
if (analysis.mainlyDocs) type = "docs";
// Determine the scope
let scope = "general";
if (analysis.mainlyFrontend) scope = "ui";
if (analysis.mainlyBackend) scope = "api";
if (analysis.mainlyTests) scope = "test";
// Generate the description
const description = summarizeChanges(changes);
return {
type,
scope,
description,
};
};
```
## Message Templates
### Feature
```
feat(module): add new feature description
- Implemented X functionality
- Added Y configuration
- Created Z component
Related to #ISSUE
```
### Bug Fix
```
fix(module): resolve issue with X
Fixed the bug where X was causing Y.
The issue was due to Z condition not being handled.
Fixes #ISSUE
```
### Refactoring
```
refactor(module): improve X structure
- Extracted common logic to utilities
- Reduced code duplication
- Improved readability
No functional changes.
```
## Output
```
🔍 Analyzing changes...
📊 Change summary:
- Files changed: 5
- Added: 2
- Modified: 3
- Deleted: 0
📝 Detected commit type: feat
📁 Scope: backend
📌 Suggested description: add user authentication endpoints
✅ Pre-commit checks:
- Linting: PASSED
- Tests: PASSED
- Build: PASSED
💬 Commit message:
────────────────────────────────
feat(backend): add user authentication endpoints
Implemented login, logout, and token refresh endpoints.
Added JWT validation middleware and session management.
🤖 Generated with Toduba System
Co-Authored-By: Toduba <noreply@toduba.it>
────────────────────────────────
📤 Commit created successfully!
Hash: abc123def456
Branch: feature/auth
Files: 5 changed, 203 insertions(+), 10 deletions(-)
💡 Next step: git push origin feature/auth
```
## Best Practices
1. Atomic commits (one feature per commit)
2. Clear, descriptive messages
3. Use the imperative present tense
4. Keep the subject line under 50 characters
5. Detailed body for complex commits
6. Reference issues where applicable
7. Do not commit generated/build files
8. Always review the changes before committing
A minimal commit-msg hook enforcing the subject-line rules is sketched below.
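As an illustration of the subject-line rules above, a minimal `commit-msg` hook could reject subjects that exceed 50 characters or do not match the Conventional Commits pattern from Phase 3; the exact regex below is an assumption, adjust it to the project's conventions.
```bash
#!/usr/bin/env bash
# Minimal commit-msg hook sketch (assumed limits: 50-char subject, Conventional Commits types)
MSG_FILE="$1"
SUBJECT=$(head -n 1 "$MSG_FILE")

if [ ${#SUBJECT} -gt 50 ]; then
  echo "❌ Subject line exceeds 50 characters (${#SUBJECT})" >&2
  exit 1
fi

if ! echo "$SUBJECT" | grep -qE '^(feat|fix|docs|style|refactor|test|chore|perf)(\([a-z0-9-]+\))?: .+'; then
  echo "❌ Subject does not follow <type>(<scope>): <description>" >&2
  exit 1
fi
```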

commands/toduba-help.md

@@ -0,0 +1,501 @@
---
allowed-tools:
- Read
- Glob
- Grep
argument-hint: "[command|agent] [--examples] [--verbose]"
description: "Integrated help system with examples and contextual documentation"
---
# Toduba Help - Integrated Help System 📖
## Objective
Provide contextual help, practical examples, and documentation for all components of the Toduba system.
## Arguments
- `[command|agent]`: Specific command or agent name
- `--examples`: Show practical examples
- `--verbose`: Detailed documentation
- `--list`: List all available components
- `--search <term>`: Search the documentation
Arguments received: $ARGUMENTS
## Quick Start Guide
```
╔════════════════════════════════════════════════════════════╗
║ 🚀 TODUBA QUICK START ║
╠════════════════════════════════════════════════════════════╣
║ ║
║ 1. Initialize project documentation: ║
║ /toduba-init ║
║ ║
║ 2. Develop a feature: ║
║ "Create a user authentication API" ║
║ → Orchestrator handles everything ║
║ ║
║ 3. Run tests: ║
║ /toduba-test --watch ║
║ ║
║ 4. Commit changes: ║
║ /toduba-commit ║
║ ║
║ 5. Need help? ║
║ /toduba-help [component] ║
║ ║
╚════════════════════════════════════════════════════════════╝
```
## Help System Implementation
### Dynamic Help Generation
```javascript
const generateHelp = (component) => {
if (!component) {
return showMainMenu();
}
// Check if it's a command
if (component.startsWith("/") || component.startsWith("toduba-")) {
return showCommandHelp(component);
}
// Check if it's an agent
if (component.includes("engineer") || component.includes("orchestrator")) {
return showAgentHelp(component);
}
// Search in all documentation
return searchDocumentation(component);
};
```
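The helpers referenced above (`showCommandHelp`, `showAgentHelp`, `searchDocumentation`) are not defined in this file; as a minimal sketch, assuming each topic maps to a markdown file under `commands/` or `agents/`, topic resolution could look like this:
```bash
# Resolve a help topic to its documentation file (illustrative sketch)
resolve_help_file() {
  local topic="${1#/}"            # strip a leading slash from "/toduba-..."
  if [ -f "commands/${topic}.md" ]; then
    echo "commands/${topic}.md"
  elif [ -f "agents/${topic}.md" ]; then
    echo "agents/${topic}.md"
  else
    return 1                      # fall back to full-text search
  fi
}
```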
## Main Help Menu
```
🎯 TODUBA SYSTEM v2.0 - Help Center
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📚 COMMANDS (8)
────────────────
/toduba-init Initialize project documentation
/toduba-test Run test suite with coverage
/toduba-rollback Rollback to previous state
/toduba-commit Create structured commits
/toduba-code-review Perform code review
/toduba-ultra-think Deep analysis mode
/toduba-update-docs Update documentation
/toduba-help This help system
🤖 AGENTS (8)
──────────────
toduba-orchestrator Brain of the system
toduba-backend-engineer Backend development
toduba-frontend-engineer Frontend/UI development
toduba-mobile-engineer Flutter specialist
toduba-qa-engineer Test execution
toduba-test-engineer Test writing
toduba-codebase-analyzer Code analysis
toduba-documentation-generator Docs generation
⚡ QUICK TIPS
─────────────
• Start with: /toduba-init
• Orchestrator uses smart mode detection
• Test/QA engineers have different roles
• Docs auto-update for large tasks
• Use /toduba-help <component> for details
Type: /toduba-help <component> --examples for practical examples
```
## Component-Specific Help
### Command Help Template
```markdown
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📘 COMMAND: /toduba-[name]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 DESCRIPTION
[Brief description of what the command does]
⚙️ SYNTAX
/toduba-[name] [required] [--optional] [--flags]
🎯 ARGUMENTS
• required Description of required argument
• --optional Description of optional flag
• --flag Description of boolean flag
📊 EXAMPLES
Basic usage:
/toduba-[name]
With options:
/toduba-[name] --verbose --coverage
Advanced:
/toduba-[name] pattern --only tests --parallel
💡 TIPS
• [Useful tip 1]
• [Useful tip 2]
• [Common pitfall to avoid]
🔗 RELATED
• /toduba-[related1] - Related command
• toduba-[agent] - Related agent
📚 FULL DOCS
See: commands/toduba-[name].md
```
### Agent Help Template
```markdown
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🤖 AGENT: toduba-[name]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 ROLE
[Agent's primary responsibility]
🛠️ CAPABILITIES
• [Capability 1]
• [Capability 2]
• [Capability 3]
📦 TOOLS ACCESS
• Read, Write, Edit
• Bash
• [Other tools]
🔄 WORKFLOW
1. [Step 1 in typical workflow]
2. [Step 2]
3. [Step 3]
📊 WHEN TO USE
✅ Use for:
• [Scenario 1]
• [Scenario 2]
❌ Don't use for:
• [Anti-pattern 1]
• [Anti-pattern 2]
💡 BEST PRACTICES
• [Best practice 1]
• [Best practice 2]
🔗 WORKS WITH
• toduba-[agent1] - Collaboration pattern
• toduba-[agent2] - Handoff pattern
📚 FULL DOCS
See: agents/toduba-[name].md
```
## Examples System
### Show Examples for Commands
```bash
show_command_examples() {
case "$1" in
"toduba-init")
cat <<EOF
📊 EXAMPLES: /toduba-init
1⃣ Basic initialization:
/toduba-init
2⃣ With verbose output:
/toduba-init --verbose
3⃣ Force regeneration:
/toduba-init --force
4⃣ After cloning a repo:
git clone <repo>
cd <repo>
/toduba-init
💡 TIP: Always run this first on new projects!
EOF
;;
"toduba-test")
cat <<EOF
📊 EXAMPLES: /toduba-test
1⃣ Run all tests:
/toduba-test
2⃣ Watch mode for development:
/toduba-test --watch
3⃣ With coverage report:
/toduba-test --coverage
4⃣ Run specific tests:
/toduba-test --only "user.*auth"
5⃣ CI/CD pipeline:
/toduba-test --coverage --fail-fast
💡 TIP: Use --watch during development!
EOF
;;
"toduba-rollback")
cat <<EOF
📊 EXAMPLES: /toduba-rollback
1⃣ Rollback last operation:
/toduba-rollback
2⃣ Rollback 3 steps:
/toduba-rollback --steps 3
3⃣ Preview without changes:
/toduba-rollback --dry-run
4⃣ Rollback to specific commit:
/toduba-rollback --to abc123def
5⃣ List available snapshots:
/toduba-rollback --list
⚠️ CAUTION: Always check --dry-run first!
EOF
;;
esac
}
```
### Show Examples for Agents
```bash
show_agent_examples() {
case "$1" in
"toduba-orchestrator")
cat <<EOF
📊 EXAMPLES: Using toduba-orchestrator
The orchestrator is invoked automatically when you make requests.
1⃣ Simple request (quick mode):
"Fix the typo in README"
→ Orchestrator detects simple task, skips ultra-think
2⃣ Standard request:
"Add user authentication to the API"
→ Orchestrator does standard analysis, asks for confirmation
3⃣ Complex request (deep mode):
"Refactor the entire backend architecture"
→ Full ultra-think analysis with multiple options
💡 The orchestrator automatically detects complexity!
EOF
;;
"toduba-backend-engineer")
cat <<EOF
📊 EXAMPLES: toduba-backend-engineer tasks
Automatically invoked by orchestrator for:
1⃣ API Development:
"Create CRUD endpoints for products"
2⃣ Database Work:
"Add indexes to improve query performance"
3⃣ Integration:
"Integrate Stripe payment processing"
4⃣ Performance:
"Optimize the user search endpoint"
💡 Works in parallel with frontend-engineer!
EOF
;;
esac
}
```
## Search Functionality
```javascript
const fs = require("fs");
const path = require("path");
const glob = require("glob");

const searchDocumentation = (term) => {
console.log(`🔍 Searching for: "${term}"`);
console.log("━━━━━━━━━━━━━━━━━━━━━━━━━━");
const results = [];
// Search in commands
const commandFiles = glob.sync("commands/toduba-*.md");
commandFiles.forEach((file) => {
const content = fs.readFileSync(file, "utf8");
if (content.toLowerCase().includes(term.toLowerCase())) {
const lines = content.split("\n");
const matches = lines.filter((line) =>
line.toLowerCase().includes(term.toLowerCase())
);
results.push({
type: "command",
file: path.basename(file, ".md"),
matches: matches.slice(0, 3),
});
}
});
// Search in agents
const agentFiles = glob.sync("agents/toduba-*.md");
agentFiles.forEach((file) => {
const content = fs.readFileSync(file, "utf8");
if (content.toLowerCase().includes(term.toLowerCase())) {
results.push({
type: "agent",
file: path.basename(file, ".md"),
context: extractContext(content, term),
});
}
});
// Display results
if (results.length === 0) {
console.log("No results found. Try different terms.");
} else {
console.log(`Found ${results.length} matches:\n`);
results.forEach(displaySearchResult);
}
};
```
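A shell-level equivalent of the same search, using only grep (illustrative; output formatting is an assumption):
```bash
# Search the command and agent docs for a term and show the first few matches per file
search_docs() {
  local term="$1"
  grep -ril --include="toduba-*.md" "$term" commands/ agents/ 2>/dev/null \
    | while read -r f; do
        echo "📄 $f"
        grep -in "$term" "$f" | head -3
      done
}
```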
## Interactive Help Mode
```javascript
// When no arguments provided
if (!ARGUMENTS) {
// Show interactive menu
console.log("🎯 TODUBA HELP - Interactive Mode");
console.log("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━");
console.log("");
console.log("What would you like help with?");
console.log("");
console.log("1. Commands overview");
console.log("2. Agents overview");
console.log("3. Quick start guide");
console.log("4. Common workflows");
console.log("5. Troubleshooting");
console.log("6. Search documentation");
console.log("");
console.log("Enter number or type component name:");
}
```
## Common Workflows Section
```markdown
## 🔄 COMMON WORKFLOWS
### 🚀 Starting a New Feature
1. "I want to add user authentication"
2. Orchestrator analyzes (standard mode)
3. Confirms approach with you
4. Delegates to backend/frontend engineers
5. Test engineer writes tests
6. QA engineer runs tests
7. Auto-updates documentation
### 🐛 Fixing a Bug
1. "Fix the login button not working"
2. Orchestrator analyzes (quick/standard)
3. Delegates to appropriate engineer
4. Tests are updated/added
5. QA validates fix
### 📊 Code Analysis
1. /toduba-code-review
2. Analyzer examines code
3. Provides recommendations
4. Can trigger refactoring
### 🔄 Deployment Preparation
1. /toduba-test --coverage
2. /toduba-code-review
3. /toduba-commit
4. Ready for deployment!
```
## Troubleshooting Section
```markdown
## 🔧 TROUBLESHOOTING
### ❌ Common Issues
#### "Orchestrator not responding"
• Check if Claude Desktop is running
• Restart Claude Desktop
• Check .claude-plugin/marketplace.json
#### "Test command not finding tests"
• Ensure test files follow naming convention
• Check test runner is installed
• Run: npm install (or equivalent)
#### "Rollback failed"
• Check .toduba/snapshots/ exists
• Ensure sufficient disk space
• Try: /toduba-rollback --list
#### "Documentation not updating"
• Run: /toduba-update-docs --force
• Check /docs directory permissions
• Verify git status
### 💡 Pro Tips
• Use --verbose for debugging
• Check logs in .toduba/logs/
• Join Discord for community help
```
## Output Format
```
╔═══════════════════════════════════════════╗
║ TODUBA HELP SYSTEM ║
╠═══════════════════════════════════════════╣
║ ║
║ Topic: [Component Name] ║
║ Type: [Command/Agent/Workflow] ║
║ ║
║ [Help content here] ║
║ ║
║ Need more? Try: ║
║ • /toduba-help [topic] --examples ║
║ • /toduba-help --search [term] ║
║ ║
╚═══════════════════════════════════════════╝
```

commands/toduba-init.md

@@ -0,0 +1,750 @@
---
allowed-tools:
- Read
- Write
- Bash
- Glob
- Grep
argument-hint: "[--force] [--verbose]"
description: "Analyzes the project and generates complete documentation in /docs"
---
# Toduba Init V2.0 - Smart Documentation Generator 📚
## Objective
Analyze the project, automatically detect its structure (monorepo vs single service), identify the services it contains, and generate complete, organized documentation following the new hierarchical structure.
## Arguments
- `--force`: Fully regenerate the documentation even if it already exists
- `--verbose`: Detailed output during generation
Arguments received: $ARGUMENTS
## Generated Documentation Structure
```
docs/
├── .toduba-meta/               # Metadata and cache (JSON)
│   ├── project-type.json       # Detected project type
│   ├── services.json           # List of services and their metadata
│   └── last-update.json        # Last update info
├── global/                     # Global project documentation
│   ├── README.md               # Full project overview
│   ├── ARCHITECTURE.md         # Overall system architecture
│   ├── SETUP.md                # Global setup (if monorepo)
│   ├── CONTRIBUTING.md         # Contribution guidelines
│   └── adr/                    # Architecture Decision Records
│       ├── 0001-template.md    # Template for new ADRs
│       └── README.md           # ADR index
├── services/                   # ALWAYS present (1+ services)
│   └── [service-name]/         # E.g. app, backend, frontend, api
│       ├── README.md           # Service overview (Tier 1)
│       ├── SETUP.md            # Service-specific setup (Tier 1)
│       ├── ARCHITECTURE.md     # Service architecture (Tier 1)
│       ├── TECH-STACK.md       # Technology stack (Tier 1)
│       ├── STYLE-GUIDE.md      # Code conventions (Tier 1)
│       ├── ENDPOINTS.md        # API endpoints (Tier 2, backend/api only)
│       ├── DATABASE.md         # Database schema (Tier 2, only if a DB is used)
│       ├── TESTING.md          # Testing strategy (Tier 2)
│       └── TROUBLESHOOTING.md  # FAQ and common problems (Tier 2)
└── operations/                 # DevOps and operations
    ├── DEPLOYMENT.md           # Deployment procedures
    ├── CI-CD.md                # CI/CD pipeline
    ├── MONITORING.md           # Logging and monitoring
    ├── SECURITY.md             # Security guidelines
    └── ENVIRONMENT-VARS.md     # Environment configuration
```
## 🔄 Generation Process
### STEP 1: Check Current State
```bash
# Check whether docs already exists
if [ -d "docs" ] && [ "$FORCE" != "true" ]; then
  echo "⚠️ Existing documentation found."
  echo "   Use --force to regenerate, or /toduba-update-docs for incremental updates"
  # Check metadata
  if [ -f "docs/.toduba-meta/last-update.json" ]; then
    echo "   Last update: $(cat docs/.toduba-meta/last-update.json | grep timestamp)"
  fi
  exit 0
fi
# With --force, back up the existing documentation
if [ -d "docs" ] && [ "$FORCE" == "true" ]; then
  timestamp=$(date +%Y%m%d_%H%M%S)
  mv docs "docs.backup.$timestamp"
  echo "📦 Backup created: docs.backup.$timestamp"
fi
```
### STEP 2: Project Analysis (Auto-Detection)
#### 2.1 Project Type Detection
```bash
PROJECT_TYPE="single_service"
SERVICES=()
# Look for monorepo indicators
if [ -f "pnpm-workspace.yaml" ] || [ -f "lerna.json" ] || [ -f "nx.json" ]; then
  PROJECT_TYPE="monorepo"
elif grep -q "\"workspaces\"" package.json 2>/dev/null; then
  PROJECT_TYPE="monorepo"
fi
# Count directories containing a package.json (or other config files)
PACKAGE_JSON_COUNT=$(find . -name "package.json" -not -path "*/node_modules/*" | wc -l)
if [ $PACKAGE_JSON_COUNT -gt 1 ]; then
  PROJECT_TYPE="monorepo"
fi
```
#### 2.2 Service Detection
**Strategy**: Look for directories containing configuration files (package.json, pubspec.yaml, go.mod, etc.)
```bash
# Find all potential services
find_services() {
local services=()
# Node.js/TypeScript projects
for pkg in $(find . -name "package.json" -not -path "*/node_modules/*" -not -path "*/dist/*"); do
service_path=$(dirname "$pkg")
service_name=$(basename "$service_path")
# Skip the root if this is a monorepo
if [ "$service_path" == "." ] && [ "$PROJECT_TYPE" == "monorepo" ]; then
continue
fi
# Detect the service type by analyzing its dependencies
service_type=$(detect_service_type "$pkg")
services+=("$service_name:$service_path:$service_type")
done
# Flutter/Dart projects
for pubspec in $(find . -name "pubspec.yaml" -not -path "*/.*"); do
service_path=$(dirname "$pubspec")
service_name=$(basename "$service_path")
service_type="mobile"
services+=("$service_name:$service_path:$service_type")
done
# Go projects
for gomod in $(find . -name "go.mod" -not -path "*/.*"); do
service_path=$(dirname "$gomod")
service_name=$(basename "$service_path")
service_type="backend"
services+=("$service_name:$service_path:$service_type")
done
# Python projects
for req in $(find . -name "requirements.txt" -not -path "*/.*" -not -path "*/venv/*"); do
service_path=$(dirname "$req")
service_name=$(basename "$service_path")
service_type=$(detect_python_type "$req")
services+=("$service_name:$service_path:$service_type")
done
# If no service was found, treat the root as a single service
if [ ${#services[@]} -eq 0 ]; then
project_name=$(basename "$PWD")
service_type=$(detect_root_type)
services+=("$project_name:.:$service_type")
fi
echo "${services[@]}"
}
detect_service_type() {
local package_json="$1"
# Read the dependencies
if grep -q "express\|fastify\|@nestjs/core\|koa" "$package_json"; then
echo "backend"
elif grep -q "react\|vue\|angular\|@angular/core\|svelte" "$package_json"; then
echo "frontend"
elif grep -q "react-native" "$package_json"; then
echo "mobile"
elif grep -q "@types/node" "$package_json" && grep -q "\"bin\"" "$package_json"; then
echo "cli"
else
# Fallback: analyze the directory structure
service_dir=$(dirname "$package_json")
if [ -d "$service_dir/src/controllers" ] || [ -d "$service_dir/src/routes" ]; then
echo "backend"
elif [ -d "$service_dir/src/components" ] || [ -d "$service_dir/src/pages" ]; then
echo "frontend"
else
echo "api" # Default generico
fi
fi
}
detect_python_type() {
local req_file="$1"
if grep -q "fastapi\|flask\|django" "$req_file"; then
echo "backend"
else
echo "api"
fi
}
detect_root_type() {
# Detect the project type from the root
if [ -f "package.json" ]; then
detect_service_type "package.json"
elif [ -f "pubspec.yaml" ]; then
echo "mobile"
elif [ -f "go.mod" ]; then
echo "backend"
else
echo "generic"
fi
}
# Run detection
SERVICES_ARRAY=($(find_services))
```
#### 2.3 Detailed Per-Service Analysis
For each detected service, analyze:
```bash
analyze_service() {
local service_name="$1"
local service_path="$2"
local service_type="$3"
echo "🔍 Analizzando $service_name ($service_type)..."
# Rileva linguaggio principale
primary_lang=$(detect_primary_language "$service_path")
# Rileva framework
primary_framework=$(detect_framework "$service_path" "$service_type")
# Rileva database (se backend)
has_database="false"
db_type="none"
if [ "$service_type" == "backend" ] || [ "$service_type" == "api" ]; then
db_info=$(detect_database "$service_path")
if [ "$db_info" != "none" ]; then
has_database="true"
db_type="$db_info"
fi
fi
# Detect the testing framework
test_framework=$(detect_test_framework "$service_path")
# Count files and lines of code
file_count=$(find "$service_path" -type f -not -path "*/node_modules/*" -not -path "*/dist/*" | wc -l)
loc_count=$(find "$service_path" -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.dart" -o -name "*.go" \) -not -path "*/node_modules/*" -exec wc -l {} + 2>/dev/null | tail -1 | awk '{print $1}')
# Create the service metadata JSON
cat > "docs/.toduba-meta/service_${service_name}.json" <<EOF
{
"name": "$service_name",
"path": "$service_path",
"type": "$service_type",
"primary_language": "$primary_lang",
"primary_framework": "$primary_framework",
"has_database": $has_database,
"database_type": "$db_type",
"test_framework": "$test_framework",
"file_count": $file_count,
"loc_count": $loc_count,
"analyzed_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
echo "✅ Analisi completata: $service_name"
}
```
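The helper functions referenced above (`detect_primary_language`, `detect_framework`, `detect_database`, `detect_test_framework`) are not shown in this file; grep-based sketches of two of them, under the assumption that dependency files are the source of truth, could look like this:
```bash
# Illustrative sketches of two detection helpers (not the full implementation)
detect_test_framework() {
  local service_path="$1"
  if grep -q "jest" "$service_path/package.json" 2>/dev/null; then echo "jest"
  elif grep -q "vitest" "$service_path/package.json" 2>/dev/null; then echo "vitest"
  elif grep -q "pytest" "$service_path/requirements.txt" 2>/dev/null; then echo "pytest"
  else echo "unknown"
  fi
}
detect_database() {
  local service_path="$1"
  if grep -qE "pg|postgres" "$service_path/package.json" 2>/dev/null; then echo "postgresql"
  elif grep -qE "mysql2?" "$service_path/package.json" 2>/dev/null; then echo "mysql"
  elif grep -qE "mongoose|mongodb" "$service_path/package.json" 2>/dev/null; then echo "mongodb"
  else echo "none"
  fi
}
```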
### STEP 3: Create the Directory Structure
```bash
echo "📁 Creating documentation structure..."
# Create the base structure
mkdir -p docs/.toduba-meta
mkdir -p docs/global/adr
mkdir -p docs/services
mkdir -p docs/operations
# Create a folder for each service
for service in "${SERVICES_ARRAY[@]}"; do
  IFS=':' read -r name path type <<< "$service"
  mkdir -p "docs/services/$name"
done
echo "✅ Structure created"
```
### STEP 4: Documentation Generation
#### 4.1 Global Documentation
```bash
generate_global_docs() {
echo "📝 Generando documentazione globale..."
# Global README.md
generate_from_template \
"templates/docs/tier1/README.template.md" \
"docs/global/README.md" \
"global" \
""
# Global ARCHITECTURE.md
generate_from_template \
"templates/docs/tier1/ARCHITECTURE.template.md" \
"docs/global/ARCHITECTURE.md" \
"global" \
""
# Global SETUP.md (monorepo only)
if [ "$PROJECT_TYPE" == "monorepo" ]; then
generate_from_template \
"templates/docs/tier1/SETUP.template.md" \
"docs/global/SETUP.md" \
"global" \
""
fi
# CONTRIBUTING.md
generate_from_template \
"templates/docs/tier1/CONTRIBUTING.template.md" \
"docs/global/CONTRIBUTING.md" \
"global" \
""
# ADR template and README
cp "templates/docs/tier1/ADR-TEMPLATE.template.md" "docs/global/adr/0001-template.md"
cat > "docs/global/adr/README.md" <<'EOF'
# Architecture Decision Records (ADR)
This directory contains the project's Architecture Decision Records (ADRs).
## What are ADRs?
ADRs document significant architectural decisions made during the project's development, including the context, the alternatives considered, and the consequences.
## How to create a new ADR
1. Copy the template: `cp 0001-template.md XXXX-your-decision.md`
2. Fill in all sections
3. Commit and open a PR for review
## ADR Index
<!-- TODO: Add ADRs as they are created -->
EOF
echo "✅ Global documentation generated"
}
```
#### 4.2 Per-Service Documentation (Tier 1 + Tier 2)
```bash
generate_service_docs() {
local service_name="$1"
local service_path="$2"
local service_type="$3"
echo "📝 Generando documentazione per: $service_name..."
# Leggi metadati servizio
local service_meta="docs/.toduba-meta/service_${service_name}.json"
# TIER 1: Sempre generato
generate_from_template \
"templates/docs/tier1/README.template.md" \
"docs/services/$service_name/README.md" \
"service" \
"$service_meta"
generate_from_template \
"templates/docs/tier1/SETUP.template.md" \
"docs/services/$service_name/SETUP.md" \
"service" \
"$service_meta"
generate_from_template \
"templates/docs/tier1/ARCHITECTURE.template.md" \
"docs/services/$service_name/ARCHITECTURE.md" \
"service" \
"$service_meta"
generate_from_template \
"templates/docs/tier1/TECH-STACK.template.md" \
"docs/services/$service_name/TECH-STACK.md" \
"service" \
"$service_meta"
generate_from_template \
"templates/docs/tier1/STYLE-GUIDE.template.md" \
"docs/services/$service_name/STYLE-GUIDE.md" \
"service" \
"$service_meta"
# TIER 2: Conditional
# ENDPOINTS.md - only for backend or api services
if [ "$service_type" == "backend" ] || [ "$service_type" == "api" ]; then
generate_from_template \
"templates/docs/tier2/ENDPOINTS.template.md" \
"docs/services/$service_name/ENDPOINTS.md" \
"service" \
"$service_meta"
fi
# DATABASE.md - only if the service has a database
local has_db=$(cat "$service_meta" | grep "has_database" | grep "true")
if [ -n "$has_db" ]; then
generate_from_template \
"templates/docs/tier2/DATABASE.template.md" \
"docs/services/$service_name/DATABASE.md" \
"service" \
"$service_meta"
fi
# TESTING.md - always generated for Tier 2
generate_from_template \
"templates/docs/tier2/TESTING.template.md" \
"docs/services/$service_name/TESTING.md" \
"service" \
"$service_meta"
# TROUBLESHOOTING.md - always generated for Tier 2
generate_from_template \
"templates/docs/tier2/TROUBLESHOOTING.template.md" \
"docs/services/$service_name/TROUBLESHOOTING.md" \
"service" \
"$service_meta"
echo "✅ Documentazione $service_name generata"
}
```
#### 4.3 Operations Documentation
```bash
generate_operations_docs() {
echo "📝 Generating operations documentation..."
# Create placeholder templates for the operations docs
cat > "docs/operations/DEPLOYMENT.md" <<'EOF'
# Deployment Guide
> 🚀 Guida al deployment del progetto
> Ultimo aggiornamento: {{TIMESTAMP}}
## Overview
<!-- TODO: Descrivere strategia di deployment -->
## Environments
### Development
<!-- TODO: Setup environment development -->
### Staging
<!-- TODO: Setup environment staging -->
### Production
<!-- TODO: Setup environment production -->
## Deployment Process
<!-- TODO: Documentare processo deployment -->
## Rollback
<!-- TODO: Procedure di rollback -->
---
*Generato da Toduba System*
EOF
cat > "docs/operations/CI-CD.md" <<'EOF'
# CI/CD Pipeline
> ⚙️ Documentazione pipeline CI/CD
> Ultimo aggiornamento: {{TIMESTAMP}}
## Pipeline Overview
<!-- TODO: Descrivere pipeline CI/CD -->
## Stages
<!-- TODO: Documentare stages -->
## Configuration
<!-- TODO: File di configurazione -->
---
*Generato da Toduba System*
EOF
cat > "docs/operations/MONITORING.md" <<'EOF'
# Monitoring & Logging
> 📊 Guida monitoring e logging
> Ultimo aggiornamento: {{TIMESTAMP}}
## Logging Strategy
<!-- TODO: Strategia logging -->
## Monitoring Tools
<!-- TODO: Tool di monitoring -->
## Alerts
<!-- TODO: Configurazione alert -->
---
*Generato da Toduba System*
EOF
cat > "docs/operations/SECURITY.md" <<'EOF'
# Security Guidelines
> 🛡️ Linee guida sicurezza
> Ultimo aggiornamento: {{TIMESTAMP}}
## Security Best Practices
<!-- TODO: Best practices sicurezza -->
## Authentication & Authorization
<!-- TODO: Auth strategy -->
## Secrets Management
<!-- TODO: Gestione secrets -->
---
*Generato da Toduba System*
EOF
cat > "docs/operations/ENVIRONMENT-VARS.md" <<'EOF'
# Environment Variables
> ⚙️ Configurazione variabili d'ambiente
> Ultimo aggiornamento: {{TIMESTAMP}}
## Required Variables
<!-- TODO: Variabili richieste -->
## Optional Variables
<!-- TODO: Variabili opzionali -->
## Per Environment
### Development
<!-- TODO: Env development -->
### Production
<!-- TODO: Env production -->
---
*Generato da Toduba System*
EOF
echo "✅ Documentazione operations generata"
}
```
### STEP 5: Template Rendering with Placeholders
```bash
generate_from_template() {
local template_file="$1"
local output_file="$2"
local scope="$3" # "global" o "service"
local metadata_file="$4" # Path to service metadata JSON (se service)
# Leggi template
local content=$(cat "$template_file")
# Replace common placeholders
content="${content//\{\{TIMESTAMP\}\}/$(date -u +%Y-%m-%dT%H:%M:%SZ)}"
content="${content//\{\{TODUBA_VERSION\}\}/2.0.0}"
if [ "$scope" == "global" ]; then
# Global placeholders
local project_name=$(basename "$PWD")
content="${content//\{\{PROJECT_NAME\}\}/$project_name}"
content="${content//\{\{PROJECT_DESCRIPTION\}\}/<!-- TODO: Aggiungere descrizione progetto -->}"
elif [ "$scope" == "service" ] && [ -f "$metadata_file" ]; then
# Service placeholders (from the metadata JSON)
local service_name=$(cat "$metadata_file" | grep -o '"name": *"[^"]*"' | cut -d'"' -f4)
local service_type=$(cat "$metadata_file" | grep -o '"type": *"[^"]*"' | cut -d'"' -f4)
local primary_lang=$(cat "$metadata_file" | grep -o '"primary_language": *"[^"]*"' | cut -d'"' -f4)
local primary_framework=$(cat "$metadata_file" | grep -o '"primary_framework": *"[^"]*"' | cut -d'"' -f4)
local file_count=$(cat "$metadata_file" | grep -o '"file_count": *[0-9]*' | awk '{print $2}')
local loc_count=$(cat "$metadata_file" | grep -o '"loc_count": *[0-9]*' | awk '{print $2}')
content="${content//\{\{SERVICE_NAME\}\}/$service_name}"
content="${content//\{\{PROJECT_NAME\}\}/$service_name}"
content="${content//\{\{SERVICE_TYPE\}\}/$service_type}"
content="${content//\{\{PRIMARY_LANGUAGE\}\}/$primary_lang}"
content="${content//\{\{PRIMARY_FRAMEWORK\}\}/$primary_framework}"
content="${content//\{\{TOTAL_FILES\}\}/$file_count}"
content="${content//\{\{LINES_OF_CODE\}\}/$loc_count}"
# Remaining generic placeholders (TODO)
content="${content//\{\{[^}]*\}\}/<!-- TODO: Complete manually -->}"
fi
# Write the output
echo "$content" > "$output_file"
}
```
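As a toy usage example (the one-line template below is hypothetical and only exists for illustration), rendering a template for the global scope would work like this:
```bash
# Hypothetical template, for illustration only
mkdir -p templates/docs/tier1 docs/global
printf '# {{PROJECT_NAME}}\nGenerated at {{TIMESTAMP}} by Toduba {{TODUBA_VERSION}}\n' \
  > templates/docs/tier1/README.template.md
generate_from_template \
  "templates/docs/tier1/README.template.md" \
  "docs/global/README.md" \
  "global" \
  ""
# docs/global/README.md now contains the project directory name, an ISO timestamp, and "2.0.0"
```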
### STEP 6: Metadata Creation
```bash
create_metadata() {
echo "💾 Creating metadata..."
# project-type.json
cat > "docs/.toduba-meta/project-type.json" <<EOF
{
"type": "$PROJECT_TYPE",
"detected_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"root_path": "$(pwd)",
"services_count": ${#SERVICES_ARRAY[@]}
}
EOF
# services.json
echo "{" > "docs/.toduba-meta/services.json"
echo ' "services": [' >> "docs/.toduba-meta/services.json"
local first=true
for service in "${SERVICES_ARRAY[@]}"; do
IFS=':' read -r name path type <<< "$service"
if [ "$first" = true ]; then
first=false
else
echo "," >> "docs/.toduba-meta/services.json"
fi
echo " {" >> "docs/.toduba-meta/services.json"
echo " \"name\": \"$name\"," >> "docs/.toduba-meta/services.json"
echo " \"path\": \"$path\"," >> "docs/.toduba-meta/services.json"
echo " \"type\": \"$type\"" >> "docs/.toduba-meta/services.json"
echo -n " }" >> "docs/.toduba-meta/services.json"
done
echo "" >> "docs/.toduba-meta/services.json"
echo " ]" >> "docs/.toduba-meta/services.json"
echo "}" >> "docs/.toduba-meta/services.json"
# last-update.json
cat > "docs/.toduba-meta/last-update.json" <<EOF
{
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"git_commit": "$(git rev-parse HEAD 2>/dev/null || echo 'unknown')",
"git_branch": "$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo 'unknown')",
"toduba_version": "2.0.0",
"full_generation": true
}
EOF
echo "✅ Metadata creati"
}
```
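Building `services.json` with raw `echo` calls works but is easy to break; where `jq` is available (the rollback command already assumes it), an equivalent sketch could assemble the same file more robustly:
```bash
# Alternative sketch using jq (assumes jq is installed)
services_json='[]'
for service in "${SERVICES_ARRAY[@]}"; do
  IFS=':' read -r name path type <<< "$service"
  services_json=$(echo "$services_json" | jq --arg n "$name" --arg p "$path" --arg t "$type" \
    '. + [{name: $n, path: $p, type: $t}]')
done
echo "$services_json" | jq '{services: .}' > "docs/.toduba-meta/services.json"
```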
## 📊 Final Output
```bash
echo ""
echo "✅ =========================================="
echo "✅ Toduba Init V2.0 - Completed!"
echo "✅ =========================================="
echo ""
echo "📊 Summary:"
echo " • Project type: $PROJECT_TYPE"
echo " • Detected services: ${#SERVICES_ARRAY[@]}"
for service in "${SERVICES_ARRAY[@]}"; do
  IFS=':' read -r name path type <<< "$service"
  echo "   - $name ($type)"
done
echo ""
echo "📁 Documentation generated in: ./docs/"
echo ""
echo "📂 Structure created:"
echo " ├── global/       (Global documentation)"
echo " ├── services/     (Per-service documentation)"
echo " ├── operations/   (DevOps and operations)"
echo " └── .toduba-meta/ (Metadata and cache)"
echo ""
echo "📝 Next steps:"
echo " 1. ✏️ Fill in the TODO placeholders in the generated files"
echo " 2. 📖 Review the documentation"
echo " 3. 🔄 Use /toduba-update-docs for future updates"
echo " 4. 💾 Commit the docs/ folder to the repository"
echo ""
echo "💡 Tips:"
echo " • The templates are semi-dynamic, with smart placeholders"
echo " • Sections marked TODO must be completed manually"
echo " • The structure works for both monorepos and single services"
echo " • Use /toduba-update-docs for incremental updates (much faster)"
echo ""
```
## 🎯 Implementation Notes
1. **Robust auto-detection**: Automatically detects the project type and its services
2. **Semi-dynamic templates**: Placeholders populated from the analysis, plus TODOs for manual completion
3. **Always-consistent structure**: `docs/services/` is always present, even with a single service
4. **Tier 1 + Tier 2**: Tier 1 is always generated; Tier 2 is conditional (ENDPOINTS only for backends, DATABASE only when a DB is present)
5. **Metadata tracking**: `.toduba-meta/` tracks everything for future incremental updates
6. **Smart fallback**: If detection fails, sensible defaults are used
## 🚨 Error Handling
- **Directory not writable**: Alert the user
- **Missing templates**: Fall back to a generic template
- **Detection failed**: Treat the project root as a single generic service
- **Git not initialized**: Proceed without git info (fine)
## ⚡ Performance
- Target: < 10 seconds for a full init
- Parallel processing where possible (see the sketch below)
- Cached metadata for future operations
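A minimal sketch of the parallelization mentioned above, running the per-service analysis as background jobs and waiting for all of them (assuming `analyze_service` writes only to its own metadata file):
```bash
# Analyze services in parallel (illustrative)
for service in "${SERVICES_ARRAY[@]}"; do
  IFS=':' read -r name path type <<< "$service"
  analyze_service "$name" "$path" "$type" &
done
wait  # block until every background analysis has finished
```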
---
*Toduba Init V2.0 - Smart Documentation Generator*


@@ -0,0 +1,473 @@
---
allowed-tools:
- Read
- Write
- Edit
- Bash
- Task
argument-hint: "[--step-by-step] [--auto-pause] [--verbose]"
description: "Interactive mode with step-by-step execution and user control"
---
# Toduba Interactive Mode - Interactive Execution 🎮
## Objective
Provide controlled step-by-step execution with the ability to pause, resume, undo, and fully control the flow.
## Arguments
- `--step-by-step`: Ask for confirmation at every step
- `--auto-pause`: Automatically pause on warnings/errors
- `--verbose`: Detailed output for every operation
- `--checkpoint`: Create a checkpoint at every major step
Arguments received: $ARGUMENTS
## Interactive Session Manager
```typescript
class InteractiveSession {
private steps: Step[] = [];
private currentStep: number = 0;
private paused: boolean = false;
private history: StepResult[] = [];
private checkpoints: Checkpoint[] = [];
async start(task: Task) {
console.log('🎮 TODUBA INTERACTIVE MODE');
console.log('━━━━━━━━━━━━━━━━━━━━━━━━━');
console.log('');
console.log('Controls:');
console.log(' [Enter] - Continue');
console.log(' [p] - Pause');
console.log(' [s] - Skip step');
console.log(' [u] - Undo last');
console.log(' [r] - Resume all');
console.log(' [q] - Quit');
console.log('');
this.initializeSteps(task);
await this.executeInteractive();
}
}
```
## Step-by-Step Execution Flow
### Visual Progress Display
```
┌─────────────────────────────────────────────────────────┐
│ 🎮 INTERACTIVE EXECUTION │
├─────────────────────────────────────────────────────────┤
│ │
│ Task: Create user authentication API │
│ Mode: Step-by-step │
│ Progress: [████████░░░░░░░░] 40% (4/10 steps) │
│ │
│ ┌─ Current Step ──────────────────────────────────┐ │
│ │ Step 4: Creating user model │ │
│ │ Agent: toduba-backend-engineer │ │
│ │ Action: Write file models/User.js │ │
│ │ Status: ⏸️ Awaiting confirmation │ │
│ └─────────────────────────────────────────────────┘ │
│ │
│ Previous: ✅ Database connection setup │
│ Next: Create authentication middleware │
│ │
│ [Enter] Continue | [p] Pause | [u] Undo | [q] Quit │
└─────────────────────────────────────────────────────────┘
```
### Step Structure
```typescript
interface Step {
id: string;
name: string;
description: string;
agent: string;
action: Action;
dependencies: string[];
canUndo: boolean;
critical: boolean;
estimatedTime: number;
}
interface Action {
type: 'create' | 'modify' | 'delete' | 'execute';
target: string;
details: any;
}
```
## Interactive Controls Implementation
### Pause/Resume System
```javascript
const handleUserInput = async (input) => {
switch(input.toLowerCase()) {
case 'p':
case 'pause':
await pauseExecution();
break;
case 'r':
case 'resume':
await resumeExecution();
break;
case 's':
case 'skip':
await skipCurrentStep();
break;
case 'u':
case 'undo':
await undoLastStep();
break;
case 'q':
case 'quit':
await quitInteractive();
break;
case '':
case 'enter':
await continueExecution();
break;
case 'h':
case 'help':
showInteractiveHelp();
break;
default:
console.log('Unknown command. Press [h] for help.');
}
};
```
### Undo Mechanism
```javascript
async undoLastStep() {
if (this.history.length === 0) {
console.log('❌ No steps to undo');
return;
}
const lastStep = this.history.pop();
console.log(`↩️ Undoing: ${lastStep.step.name}`);
// Show what will be undone
console.log('');
console.log('This will revert:');
lastStep.changes.forEach(change => {
console.log(`${change.type}: ${change.file}`);
});
const confirm = await promptUser('Confirm undo? (y/n): ');
if (confirm === 'y') {
// Revert changes
await this.revertStep(lastStep);
this.currentStep--;
console.log('✅ Step undone successfully');
} else {
this.history.push(lastStep);
console.log('❌ Undo cancelled');
}
}
```
## Checkpoint System
```javascript
class CheckpointManager {
async createCheckpoint(name: string, metadata: any) {
const checkpoint = {
id: `checkpoint-${Date.now()}`,
name,
timestamp: new Date(),
step: this.currentStep,
files: await this.captureFileState(),
metadata
};
// Save current state
await this.saveCheckpoint(checkpoint);
console.log(`💾 Checkpoint created: ${checkpoint.name}`);
return checkpoint.id;
}
async restoreCheckpoint(checkpointId: string) {
console.log(`🔄 Restoring checkpoint: ${checkpointId}`);
const checkpoint = await this.loadCheckpoint(checkpointId);
// Show changes
console.log('');
console.log('This will restore to:');
console.log(` Step: ${checkpoint.step}`);
console.log(` Time: ${checkpoint.timestamp}`);
console.log(` Files: ${checkpoint.files.length}`);
const confirm = await promptUser('Proceed? (y/n): ');
if (confirm === 'y') {
await this.applyCheckpoint(checkpoint);
console.log('✅ Checkpoint restored');
}
}
}
```
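`captureFileState`, `saveCheckpoint`, and `applyCheckpoint` are left abstract above; a shell-level sketch of the same idea, assuming checkpoints are stored as tarballs under a hypothetical `.toduba/checkpoints/` directory:
```bash
# Hypothetical shell equivalents of createCheckpoint / restoreCheckpoint
create_checkpoint() {
  local id="checkpoint-$(date +%s)"
  mkdir -p .toduba/checkpoints
  tar czf ".toduba/checkpoints/$id.tar.gz" \
    --exclude=".git" --exclude="node_modules" --exclude=".toduba" .
  echo "💾 Checkpoint created: $id"
}
restore_checkpoint() {
  local id="$1"
  tar xzf ".toduba/checkpoints/$id.tar.gz" -C .
  echo "✅ Checkpoint restored: $id"
}
```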
## Step Details Preview
```javascript
async previewStep(step: Step) {
console.clear();
console.log('┌─────────────────────────────────────────┐');
console.log('│ 📋 STEP PREVIEW │');
console.log('├─────────────────────────────────────────┤');
console.log(`│ Step ${step.id}: ${step.name}`);
console.log('├─────────────────────────────────────────┤');
console.log('│');
console.log(`│ Description:`);
console.log(`${step.description}`);
console.log('│');
console.log(`│ Will be executed by:`);
console.log(`│ 🤖 ${step.agent}`);
console.log('│');
console.log(`│ Actions to perform:`);
step.actions.forEach(action => {
console.log(`│ • ${action.type}: ${action.target}`);
});
console.log('│');
console.log(`│ Estimated time: ${step.estimatedTime}s`);
console.log('│');
if (step.critical) {
console.log('│ ⚠️ CRITICAL STEP - Cannot be skipped');
}
console.log('│');
console.log('└─────────────────────────────────────────┘');
console.log('');
return await promptUser('[Enter] Execute | [s] Skip | [m] Modify: ');
}
```
## Breakpoint System
```javascript
class BreakpointManager {
private breakpoints: Breakpoint[] = [];
addBreakpoint(condition: string | Function) {
this.breakpoints.push({
id: `bp-${Date.now()}`,
condition,
hits: 0
});
}
async checkBreakpoints(context: ExecutionContext) {
for (const bp of this.breakpoints) {
if (await this.evaluateBreakpoint(bp, context)) {
console.log(`🔴 Breakpoint hit: ${bp.id}`);
await this.handleBreakpoint(bp, context);
}
}
}
async handleBreakpoint(bp: Breakpoint, context: ExecutionContext) {
console.log('');
console.log('━━━ BREAKPOINT ━━━');
console.log(`Location: ${context.step.name}`);
console.log(`Condition: ${bp.condition}`);
console.log(`Hits: ${++bp.hits}`);
console.log('');
// Show context
console.log('Context:');
console.log(JSON.stringify(context.variables, null, 2));
// Interactive debugger
let debugging = true;
while (debugging) {
const cmd = await promptUser('debug> ');
switch(cmd) {
case 'c':
case 'continue':
debugging = false;
break;
case 'i':
case 'inspect':
await this.inspectContext(context);
break;
case 'n':
case 'next':
await this.stepOver();
debugging = false;
break;
case 'q':
case 'quit':
process.exit(0);
}
}
}
}
```
## Modification Mode
```javascript
async modifyStep(step: Step) {
console.log('✏️ MODIFY STEP');
console.log('━━━━━━━━━━━━━━');
console.log('');
console.log('Current configuration:');
console.log(JSON.stringify(step, null, 2));
console.log('');
console.log('What would you like to modify?');
console.log('1. Change target files');
console.log('2. Modify parameters');
console.log('3. Change agent assignment');
console.log('4. Skip this step');
console.log('5. Cancel modification');
const choice = await promptUser('Choice (1-5): ');
switch(choice) {
case '1':
step.action.target = await promptUser('New target: ');
break;
case '2':
await this.modifyParameters(step);
break;
case '3':
step.agent = await this.selectAgent();
break;
case '4':
step.skip = true;
break;
}
console.log('✅ Step modified');
return step;
}
```
## Watch Mode Integration
```javascript
class WatchModeIntegration {
async enableWatchMode() {
// Requires the 'chokidar' file-watching library
const chokidar = require('chokidar');
console.log('👁️ Watch mode enabled');
console.log('Files will be monitored for changes');
const watcher = chokidar.watch('.', {
ignored: /node_modules|\.git/,
persistent: true
});
watcher.on('change', async (path) => {
if (this.isPaused) return;
console.log(`\n📝 File changed: ${path}`);
console.log('Options:');
console.log('[r] Re-run current step');
console.log('[c] Continue anyway');
console.log('[p] Pause to investigate');
const action = await promptUser('Action: ');
await this.handleFileChange(action, path);
});
}
}
```
## Summary Report
```markdown
## 📊 Interactive Session Summary
**Session ID**: interactive-20241031-145632
**Duration**: 15 minutes 23 seconds
**Mode**: Step-by-step with checkpoints
### Execution Statistics
| Metric | Value |
|--------|-------|
| Total Steps | 10 |
| Completed | 8 |
| Skipped | 1 |
| Undone | 1 |
| Breakpoints Hit | 3 |
| Checkpoints | 4 |
### Step Timeline
```
1. ✅ Initialize project structure (0:23)
2. ✅ Setup database connection (1:45)
3. ✅ Create user model (2:12)
4. ↩️ UNDONE: Create auth middleware
5. ✅ Create auth middleware v2 (4:33)
6. ⏭️ SKIPPED: Add logging
7. ✅ Create API endpoints (6:21)
8. ✅ Add validation (8:45)
9. ✅ Write tests (10:12)
10. ✅ Update documentation (11:54)
```
### User Interactions
- Pauses: 2
- Modifications: 3
- Undo operations: 1
- Breakpoint inspections: 3
### Files Modified
- Created: 12 files
- Modified: 8 files
- Deleted: 0 files
### Checkpoints Available
1. `checkpoint-1698765392000` - After database setup
2. `checkpoint-1698765512000` - After auth implementation
3. `checkpoint-1698765634000` - After API creation
4. `checkpoint-1698765756000` - Final state
### Recommendations
- Consider automating step 6 (skipped frequently)
- Breakpoint at auth middleware hit multiple times
- Average pause duration: 45 seconds
```
## Quick Commands
During interactive execution:
```
┌────────────────────────────────┐
│ ⌨️ QUICK COMMANDS │
├────────────────────────────────┤
│ Enter - Continue │
│ p - Pause │
│ r - Resume │
│ s - Skip step │
│ u - Undo last │
│ m - Modify step │
│ b - Set breakpoint │
│ c - Create checkpoint │
│ l - List checkpoints │
│ i - Inspect context │
│ h - Help │
│ q - Quit │
└────────────────────────────────┘
```

commands/toduba-rollback.md

@@ -0,0 +1,392 @@
---
allowed-tools:
- Bash
- Read
- Write
- Glob
- Grep
argument-hint: "[--last] [--steps <n>] [--to <commit>] [--dry-run] [--force]"
description: "Rollback system with automatic snapshots to undo changes"
---
# Toduba Rollback - Intelligent Rollback System ↩️
## Objective
Provide a safe, intelligent rollback system to undo changes, with automatic snapshots taken before every significant operation.
## Arguments
- `--last`: Roll back the last operation (default)
- `--steps <n>`: Roll back N operations
- `--to <commit>`: Roll back to a specific commit
- `--dry-run`: Show what would be rolled back without doing it
- `--force`: Skip safety confirmations
- `--list`: Show available snapshots
Arguments received: $ARGUMENTS
## Snapshot System
### Auto-Snapshot Before Changes
```bash
# Created automatically by the orchestrator before changes are made
create_snapshot() {
local snapshot_id="toduba-$(date +%Y%m%d-%H%M%S)"
local snapshot_dir=".toduba/snapshots/$snapshot_id"
mkdir -p "$snapshot_dir"
# Save current state
echo "📸 Creating snapshot: $snapshot_id"
# 1. Git state
git diff > "$snapshot_dir/uncommitted.diff"
git status --porcelain > "$snapshot_dir/status.txt"
git rev-parse HEAD > "$snapshot_dir/last_commit.txt"
# 2. File list
find . -type f -not -path "./.git/*" -not -path "./node_modules/*" \
> "$snapshot_dir/files.txt"
# 3. Metadata
cat > "$snapshot_dir/metadata.json" <<EOF
{
"id": "$snapshot_id",
"timestamp": "$(date -Iseconds)",
"description": "$1",
"files_count": $(wc -l < "$snapshot_dir/files.txt"),
"uncommitted_changes": $(git status --porcelain | wc -l),
"user": "$(git config user.name)",
"operation": "$2"
}
EOF
# 4. Create restore point
tar czf "$snapshot_dir/backup.tar.gz" \
--exclude=".git" \
--exclude="node_modules" \
--exclude=".toduba/snapshots" \
.
echo "✅ Snapshot created: $snapshot_id"
return 0
}
```
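As a usage sketch, the orchestrator (or a command) would call the function right before a risky operation; the description and operation label below are arbitrary examples:
```bash
# Example invocation before a potentially destructive operation
create_snapshot "Before refactoring the auth module" "refactor"
```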
## Rollback Process
### Phase 1: Snapshot Identification
```bash
identify_rollback_target() {
local target=""
if [[ "$ARGUMENTS" == *"--last"* ]] || [ -z "$ARGUMENTS" ]; then
# Get last snapshot
target=$(ls -t .toduba/snapshots | head -1)
echo "🎯 Target: Last operation ($target)"
elif [[ "$ARGUMENTS" == *"--steps"* ]]; then
# Get N snapshots back
local steps=$(echo "$ARGUMENTS" | grep -oP '(?<=--steps )\d+')
target=$(ls -t .toduba/snapshots | sed -n "${steps}p")
echo "🎯 Target: $steps steps back ($target)"
elif [[ "$ARGUMENTS" == *"--to"* ]]; then
# Rollback to specific commit
local commit=$(echo "$ARGUMENTS" | grep -oP '(?<=--to )\w+')
echo "🎯 Target: Git commit $commit"
git_rollback=true
fi
if [ -z "$target" ] && [ "$git_rollback" != true ]; then
echo "❌ No valid rollback target found"
exit 1
fi
ROLLBACK_TARGET="$target"
}
```
### Phase 2: Pre-Rollback Analysis
```bash
analyze_rollback_impact() {
echo ""
echo "📊 Rollback Impact Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━"
local snapshot_dir=".toduba/snapshots/$ROLLBACK_TARGET"
if [ -f "$snapshot_dir/metadata.json" ]; then
# Parse metadata
local timestamp=$(jq -r '.timestamp' "$snapshot_dir/metadata.json")
local description=$(jq -r '.description' "$snapshot_dir/metadata.json")
local files_count=$(jq -r '.files_count' "$snapshot_dir/metadata.json")
echo "📅 Snapshot: $ROLLBACK_TARGET"
echo "🕒 Created: $timestamp"
echo "📝 Description: $description"
echo "📁 Files: $files_count"
fi
# Current vs Target comparison
echo ""
echo "Changes to be reverted:"
echo "━━━━━━━━━━━━━━━━━━━━━━"
# Show file differences
git diff --stat HEAD "$(cat $snapshot_dir/last_commit.txt 2>/dev/null)"
# Count changes
local added=$(git diff --numstat HEAD "$(cat $snapshot_dir/last_commit.txt)" | wc -l)
local modified=$(git status --porcelain | grep "^ M" | wc -l)
local deleted=$(git status --porcelain | grep "^ D" | wc -l)
echo ""
echo "Summary:"
echo " Added: $added files"
echo " ✏️ Modified: $modified files"
echo " ❌ Deleted: $deleted files"
if [[ "$ARGUMENTS" == *"--dry-run"* ]]; then
echo ""
echo "🔍 DRY RUN MODE - No changes will be made"
exit 0
fi
}
```
### Phase 3: Safety Checks
```bash
perform_safety_checks() {
echo ""
echo "🔒 Safety Checks"
echo "━━━━━━━━━━━━━━━━━━━"
# Check 1: Uncommitted changes
if [ -n "$(git status --porcelain)" ]; then
echo "⚠️ Warning: You have uncommitted changes"
if [[ "$ARGUMENTS" != *"--force"* ]]; then
read -p "Create backup before rollback? (Y/n): " backup_choice
if [ "$backup_choice" != "n" ]; then
create_snapshot "Pre-rollback backup" "manual"
fi
fi
fi
# Check 2: Running processes
if pgrep -f "npm run dev" > /dev/null; then
echo "⚠️ Warning: Development server is running"
echo " It will be stopped during rollback"
fi
# Check 3: Database state
if [ -f ".toduba/db-version.txt" ]; then
echo "⚠️ Warning: Database migrations may need reverting"
fi
# Final confirmation
if [[ "$ARGUMENTS" != *"--force"* ]]; then
echo ""
echo "⚠️ This action cannot be undone (except by another rollback)"
read -p "Proceed with rollback? (y/N): " confirm
if [ "$confirm" != "y" ]; then
echo "❌ Rollback cancelled"
exit 0
fi
fi
}
```
### Phase 4: Execute Rollback
```bash
execute_rollback() {
echo ""
echo "🔄 Executing Rollback"
echo "━━━━━━━━━━━━━━━━━━━━━"
local snapshot_dir=".toduba/snapshots/$ROLLBACK_TARGET"
# Stop any running processes
echo "📦 Stopping running processes..."
pkill -f "npm run dev" 2>/dev/null || true
pkill -f "npm start" 2>/dev/null || true
# Git rollback if specified
if [ "$git_rollback" = true ]; then
echo "📝 Rolling back to commit: $commit"
git reset --hard "$commit"
else
# File system rollback
echo "📁 Restoring files from snapshot..."
        # Create safety backup (the working tree cannot be moved into its own
        # subdirectory, so archive it instead)
        tar czf ".toduba/pre_rollback_$(date +%s).tar.gz" --exclude=".toduba" . 2>/dev/null || true
# Extract snapshot
tar xzf "$snapshot_dir/backup.tar.gz" -C .
# Restore git state
if [ -f "$snapshot_dir/uncommitted.diff" ]; then
git apply "$snapshot_dir/uncommitted.diff" 2>/dev/null || true
fi
fi
# Post-rollback tasks
post_rollback_tasks
}
post_rollback_tasks() {
echo ""
echo "🔧 Post-Rollback Tasks"
echo "━━━━━━━━━━━━━━━━━━━━━"
# Reinstall dependencies if package.json changed
if git diff HEAD~1 HEAD --name-only | grep -q "package.json"; then
echo "📦 Reinstalling dependencies..."
npm install
fi
# Run migrations if needed
if [ -f ".toduba/run-migrations.sh" ]; then
echo "🗄️ Running database migrations..."
./.toduba/run-migrations.sh
fi
# Clear caches
echo "🧹 Clearing caches..."
rm -rf .cache/ dist/ build/ 2>/dev/null || true
# Rebuild if needed
if [ -f "package.json" ] && grep -q '"build"' package.json; then
echo "🔨 Rebuilding project..."
npm run build
fi
}
```
### Phase 5: Rollback Report
````markdown
## 📋 Rollback Report
**Timestamp**: [DATE TIME]
**Rollback Type**: [snapshot/git]
**Target**: [SNAPSHOT_ID or COMMIT]
### ✅ Actions Completed
- [x] Stopped running processes
- [x] Created safety backup
- [x] Restored files from snapshot
- [x] Applied uncommitted changes
- [x] Reinstalled dependencies
- [x] Cleared caches
- [x] Rebuilt project
### 📊 Statistics
- Files restored: 156
- Dependencies updated: 3
- Cache cleared: 12MB
- Time taken: 45 seconds
### ⚠️ Manual Actions Required
1. Restart development server: `npm run dev`
2. Check database state if applicable
3. Verify application functionality
4. Review restored code changes
### 🔄 Rollback History
```
toduba-20241031-143022 ← CURRENT
toduba-20241031-140515
toduba-20241031-134208
toduba-20241031-125633
```
### 💡 Next Steps
- Test application thoroughly
- If issues persist, rollback further: `/toduba-rollback --steps 2`
- To undo this rollback: `/toduba-rollback --last`
````
## Snapshot Management
### List Available Snapshots
```bash
list_snapshots() {
echo "📸 Available Snapshots"
echo "━━━━━━━━━━━━━━━━━━━━━━━"
for snapshot in .toduba/snapshots/*/metadata.json; do
if [ -f "$snapshot" ]; then
local id=$(jq -r '.id' "$snapshot")
local time=$(jq -r '.timestamp' "$snapshot")
local desc=$(jq -r '.description' "$snapshot")
local size=$(du -sh "$(dirname "$snapshot")" | cut -f1)
printf "%-25s %s %6s %s\n" "$id" "$time" "$size" "$desc"
fi
done
echo ""
echo "Total: $(ls -1 .toduba/snapshots | wc -l) snapshots"
echo "Disk usage: $(du -sh .toduba/snapshots | cut -f1)"
}
```
### Auto-Cleanup Old Snapshots
```bash
cleanup_old_snapshots() {
# Keep only last 20 snapshots or 7 days
local max_age=7 # days
local max_count=20
echo "🧹 Cleaning old snapshots..."
  # Delete by age (only the snapshot directories themselves, never the snapshots root)
  find .toduba/snapshots -mindepth 1 -maxdepth 1 -type d -mtime +$max_age -exec rm -rf {} +
# Delete by count
ls -t .toduba/snapshots | tail -n +$((max_count + 1)) | \
xargs -I {} rm -rf ".toduba/snapshots/{}"
echo "✅ Cleanup complete"
}
```
## Integration with Orchestrator
The orchestrator automatically creates snapshots before:
- Major refactoring
- Database migrations
- Dependency updates
- Bulk file operations
- Deployment preparations
```javascript
// In orchestrator work package
if (taskComplexity === 'high' || modifiedFiles > 10) {
await createSnapshot(`Pre-${taskName}`, taskName);
}
```
## Error Recovery
```bash
handle_rollback_error() {
echo "❌ Rollback failed!"
echo ""
echo "Emergency recovery options:"
echo "1. Check .toduba/pre_rollback_* for backup"
echo "2. Use git history: git reflog"
echo "3. Restore from .toduba/snapshots manually"
echo "4. Contact support with error details"
# Save error log
echo "$1" > .toduba/rollback_error.log
exit 1
}
```
commands/toduba-template.md Normal file
View File
@@ -0,0 +1,654 @@
---
allowed-tools:
- Read
- Write
- Edit
- Glob
- Task
argument-hint: "[template-name] [--list] [--variables key=value]"
description: "Sistema di template per workflow comuni e scaffolding rapido"
---
# Toduba Template - Template Workflows System 📝
## Objective
Provide predefined templates for common workflows, enabling fast and consistent scaffolding of components, APIs, and complete applications.
## Arguments
- `[template-name]`: Name of the template to use
- `--list`: List all available templates
- `--variables`: Variables for the template (key=value)
- `--preview`: Show a preview without creating files
- `--customize`: Interactive customization mode
Arguments received: $ARGUMENTS
## Template System Architecture
```typescript
interface Template {
name: string;
description: string;
category: "api" | "component" | "app" | "test" | "config";
variables: Variable[];
files: FileTemplate[];
hooks?: {
preGenerate?: string;
postGenerate?: string;
};
}
interface Variable {
name: string;
description: string;
type: "string" | "boolean" | "select";
default?: any;
required: boolean;
options?: string[];
}
interface FileTemplate {
path: string;
template: string;
condition?: string;
}
```
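To make the interfaces concrete, here is a minimal illustrative `Template` object; the values are invented for the example, and real templates are defined in the YAML gallery below.
```typescript
// Illustrative only — a minimal object that satisfies the interfaces above.
const reactComponentTemplate: Template = {
  name: "react-component",
  description: "React functional component with hooks",
  category: "component",
  variables: [
    { name: "componentName", description: "Component name", type: "string", required: true },
    { name: "style", description: "Styling system", type: "select", required: false, default: "css", options: ["css", "scss", "tailwind"] },
  ],
  files: [
    { path: "components/{{componentName}}/{{componentName}}.tsx", template: "/* trimmed */" },
    { path: "components/{{componentName}}/{{componentName}}.test.tsx", template: "/* trimmed */", condition: "includeTests" },
  ],
};
```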
## Available Templates Gallery
```
📚 TODUBA TEMPLATE GALLERY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔷 API TEMPLATES
─────────────────
crud-api Complete CRUD API with validation
rest-endpoint Single REST endpoint
graphql-resolver GraphQL resolver with types
websocket-server WebSocket server setup
microservice Microservice with Docker
🔷 COMPONENT TEMPLATES
──────────────────────
react-component React component with tests
vue-component Vue 3 composition component
angular-component Angular component with service
flutter-widget Flutter stateful widget
web-component Native web component
🔷 APPLICATION TEMPLATES
────────────────────────
fullstack-app Complete full-stack application
mobile-app Flutter mobile app
electron-app Electron desktop app
cli-tool CLI application
chrome-extension Chrome extension
🔷 TESTING TEMPLATES
────────────────────
unit-test Unit test suite
integration-test Integration test setup
e2e-test E2E test with Playwright
performance-test Performance test suite
🔷 CONFIGURATION TEMPLATES
───────────────────────────
docker-setup Docker + docker-compose
ci-cd-pipeline GitHub Actions/GitLab CI
kubernetes K8s deployment configs
nginx-config Nginx configuration
env-setup Environment setup
```
## Template Usage Flow
### Step 1: Select Template
```bash
/toduba-template crud-api --variables resource=product
```
### Step 2: Variable Input
```
🎯 Template: crud-api
━━━━━━━━━━━━━━━━━━━━━━
Required variables:
✓ resource: product
? database: [postgres/mysql/mongodb]: postgres
? authentication: [jwt/session/oauth]: jwt
? validation: [joi/yup/zod]: joi
Optional variables:
? includeTests: [Y/n]: Y
? includeDocker: [y/N]: n
? includeSwagger: [Y/n]: Y
```
### Step 3: Preview Generation
```
📋 Files to be created:
━━━━━━━━━━━━━━━━━━━━━━━
✓ api/products/controller.js
✓ api/products/model.js
✓ api/products/routes.js
✓ api/products/validation.js
✓ api/products/service.js
✓ tests/products.test.js
✓ docs/products-api.yaml
Total: 7 files
Continue? [Y/n]:
```
## Template Definitions
### CRUD API Template
```yaml
name: crud-api
description: Complete CRUD API with all operations
category: api
variables:
- name: resource
type: string
required: true
description: Resource name (singular)
- name: database
type: select
options: [postgres, mysql, mongodb]
default: postgres
required: true
- name: authentication
type: select
options: [jwt, session, oauth, none]
default: jwt
- name: validation
type: select
options: [joi, yup, zod, ajv]
default: joi
- name: includeTests
type: boolean
default: true
files:
- path: api/{{resource}}s/controller.js
template: |
const { {{Resource}}Service } = require('./service');
const { validate{{Resource}} } = require('./validation');
class {{Resource}}Controller {
async create(req, res, next) {
try {
const validated = await validate{{Resource}}(req.body);
const result = await {{Resource}}Service.create(validated);
res.status(201).json({
success: true,
data: result
});
} catch (error) {
next(error);
}
}
async getAll(req, res, next) {
try {
const { page = 1, limit = 10, ...filters } = req.query;
const result = await {{Resource}}Service.findAll({
page: Number(page),
limit: Number(limit),
filters
});
res.json({
success: true,
data: result.data,
pagination: result.pagination
});
} catch (error) {
next(error);
}
}
async getById(req, res, next) {
try {
const result = await {{Resource}}Service.findById(req.params.id);
if (!result) {
return res.status(404).json({
success: false,
message: '{{Resource}} not found'
});
}
res.json({
success: true,
data: result
});
} catch (error) {
next(error);
}
}
async update(req, res, next) {
try {
const validated = await validate{{Resource}}(req.body, true);
const result = await {{Resource}}Service.update(req.params.id, validated);
res.json({
success: true,
data: result
});
} catch (error) {
next(error);
}
}
async delete(req, res, next) {
try {
await {{Resource}}Service.delete(req.params.id);
res.status(204).send();
} catch (error) {
next(error);
}
}
}
module.exports = new {{Resource}}Controller();
- path: api/{{resource}}s/routes.js
template: |
const router = require('express').Router();
const controller = require('./controller');
{{#if authentication}}
const { authenticate } = require('../../middleware/auth');
{{/if}}
router.post('/',
{{#if authentication}}authenticate,{{/if}}
controller.create
);
router.get('/',
{{#if authentication}}authenticate,{{/if}}
controller.getAll
);
router.get('/:id',
{{#if authentication}}authenticate,{{/if}}
controller.getById
);
router.put('/:id',
{{#if authentication}}authenticate,{{/if}}
controller.update
);
router.delete('/:id',
{{#if authentication}}authenticate,{{/if}}
controller.delete
);
module.exports = router;
```
### React Component Template
```yaml
name: react-component
description: React functional component with hooks
category: component
variables:
- name: componentName
type: string
required: true
- name: hasState
type: boolean
default: true
- name: hasProps
type: boolean
default: true
- name: style
type: select
options: [css, scss, styled-components, tailwind]
default: css
files:
- path: components/{{componentName}}/{{componentName}}.tsx
template: |
import React{{#if hasState}}, { useState, useEffect }{{/if}} from 'react';
{{#if style === 'styled-components'}}
import styled from 'styled-components';
{{else}}
import './{{componentName}}.{{style}}';
{{/if}}
{{#if hasProps}}
interface {{componentName}}Props {
title?: string;
children?: React.ReactNode;
onClick?: () => void;
}
{{/if}}
export const {{componentName}}: React.FC{{#if hasProps}}<{{componentName}}Props>{{/if}}> = ({
{{#if hasProps}}
title = 'Default Title',
children,
onClick
{{/if}}
}) => {
{{#if hasState}}
const [isLoading, setIsLoading] = useState(false);
const [data, setData] = useState<any>(null);
useEffect(() => {
// Component mount logic
return () => {
// Cleanup
};
}, []);
{{/if}}
return (
<div className="{{kebabCase componentName}}">
{{#if hasProps}}
<h2>{title}</h2>
{{/if}}
{{#if hasState}}
{isLoading ? (
<div>Loading...</div>
) : (
<div>{children}</div>
)}
{{else}}
{children}
{{/if}}
{{#if hasProps}}
<button onClick={onClick}>Click me</button>
{{/if}}
</div>
);
};
- path: components/{{componentName}}/{{componentName}}.test.tsx
condition: includeTests
template: |
import { render, screen, fireEvent } from '@testing-library/react';
import { {{componentName}} } from './{{componentName}}';
describe('{{componentName}}', () => {
it('renders without crashing', () => {
render(<{{componentName}} />);
});
{{#if hasProps}}
it('displays the title', () => {
render(<{{componentName}} title="Test Title" />);
expect(screen.getByText('Test Title')).toBeInTheDocument();
});
it('calls onClick handler', () => {
const handleClick = jest.fn();
render(<{{componentName}} onClick={handleClick} />);
fireEvent.click(screen.getByText('Click me'));
expect(handleClick).toHaveBeenCalled();
});
{{/if}}
});
```
### Full-Stack App Template
```yaml
name: fullstack-app
description: Complete full-stack application setup
category: app
variables:
- name: appName
type: string
required: true
- name: frontend
type: select
options: [react, vue, angular, nextjs]
default: react
- name: backend
type: select
options: [express, fastify, nestjs, fastapi]
default: express
- name: database
type: select
options: [postgres, mysql, mongodb, sqlite]
default: postgres
files:
# Project structure
- path: .gitignore
template: |
node_modules/
.env
.env.local
dist/
build/
.DS_Store
*.log
.vscode/
.idea/
- path: README.md
template: |
# {{appName}}
Full-stack application built with {{frontend}} and {{backend}}.
## Quick Start
\`\`\`bash
# Install dependencies
npm install
# Setup database
npm run db:setup
# Start development
npm run dev
\`\`\`
## Architecture
- Frontend: {{frontend}}
- Backend: {{backend}}
- Database: {{database}}
- path: docker-compose.yml
template: |
version: '3.8'
services:
backend:
build: ./backend
ports:
- "3001:3001"
environment:
- DATABASE_URL={{database}}://user:pass@db:5432/{{appName}}
depends_on:
- db
frontend:
build: ./frontend
ports:
- "3000:3000"
depends_on:
- backend
db:
image: {{database}}:latest
environment:
{{#if database === 'postgres'}}
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB={{appName}}
{{/if}}
volumes:
- db_data:/var/lib/postgresql/data
volumes:
db_data:
```
## Template Engine
```javascript
class TemplateEngine {
async generate(templateName, variables) {
const template = await this.loadTemplate(templateName);
// Validate required variables
this.validateVariables(template, variables);
// Process each file template
const files = [];
for (const fileTemplate of template.files) {
if (this.shouldGenerate(fileTemplate, variables)) {
const path = this.processPath(fileTemplate.path, variables);
const content = this.processTemplate(fileTemplate.template, variables);
files.push({ path, content });
}
}
return files;
}
  processTemplate(template, variables) {
    // Replace {{variable}} with values. Conditionals ({{#if}}...{{/if}}),
    // loops ({{#each}}...{{/each}}) and case helpers ({{pascalCase}},
    // {{kebabCase}}) are handled by the full engine; shown here is the
    // basic substitution so the method is runnable.
    return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
      name in variables ? String(variables[name]) : match
    );
  }
}
```
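A hypothetical call into the engine above; the template name and variables mirror the CRUD API template defined later in this file, and `loadTemplate` is assumed to resolve templates from the gallery.
```typescript
// Sketch of how the engine might be driven; not the command's actual entry point.
const engine = new TemplateEngine();

async function scaffold() {
  const files = await engine.generate("crud-api", {
    resource: "product",
    database: "postgres",
    authentication: "jwt",
  });
  for (const { path, content } of files) {
    console.log(`would write ${path} (${content.length} chars)`);
  }
}
```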
## Interactive Customization Mode
```
🎨 TEMPLATE CUSTOMIZATION MODE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Template: react-component
[✓] Include TypeScript
[✓] Include tests
[ ] Include Storybook stories
[✓] Include PropTypes
[ ] Include Redux connection
[✓] Include hooks
[ ] Include error boundary
Style system:
○ Plain CSS
● Tailwind CSS
○ Styled Components
○ CSS Modules
State management:
○ None
● useState/useReducer
○ Redux
○ MobX
○ Zustand
File structure:
○ Single file
● Component folder
○ Feature folder
[Generate] [Preview] [Cancel]
```
## Template Hooks
```javascript
hooks: {
preGenerate: async (variables) => {
// Validate environment
// Check dependencies
// Create directories
console.log('🔍 Pre-generation checks...');
if (!fs.existsSync('package.json')) {
console.log('⚠️ No package.json found');
const init = await prompt('Initialize npm project? (y/n)');
if (init === 'y') {
execSync('npm init -y');
}
}
},
postGenerate: async (files, variables) => {
// Install dependencies
// Run formatters
// Update indexes
console.log('📦 Installing dependencies...');
if (variables.frontend === 'react') {
execSync('npm install react react-dom');
}
console.log('🎨 Formatting files...');
execSync('npx prettier --write .');
console.log('✅ Template generated successfully!');
}
}
```
## Success Output
```
✅ Template Generation Complete!
📊 Summary:
━━━━━━━━━━━━━━━━━━━━━━━━
Template: crud-api
Resource: product
Database: postgres
📁 Files Created (7):
✓ api/products/controller.js
✓ api/products/model.js
✓ api/products/routes.js
✓ api/products/validation.js
✓ api/products/service.js
✓ tests/products.test.js
✓ docs/products-api.yaml
📦 Dependencies Added:
+ express@4.18.2
+ pg@8.11.3
+ joi@17.9.2
🚀 Next Steps:
1. Review generated files
2. Update .env with database credentials
3. Run migrations: npm run db:migrate
4. Start server: npm run dev
💡 Tips:
• Customize validation in validation.js
• Add custom business logic in service.js
• Extend tests in products.test.js
```
commands/toduba-test.md Normal file
View File
@@ -0,0 +1,403 @@
---
allowed-tools:
- Bash
- Read
- Grep
- Glob
- Task
argument-hint: "[--watch] [--coverage] [--only <pattern>] [--fail-fast]"
description: "Esegue test suite completa con report coverage e watch mode"
---
# Toduba Test - Test Suite Execution 🧪
## Objective
Run the project's complete test suite with support for watch mode, coverage reporting, and filtering.
## Arguments
- `--watch`: Watch mode for continuous development
- `--coverage`: Generate a detailed coverage report
- `--only <pattern>`: Run only tests matching the pattern
- `--fail-fast`: Stop at the first failure
- `--parallel`: Run tests in parallel
- `--verbose`: Detailed output
Arguments received: $ARGUMENTS
## Progress Tracking
```
🧪 Test Execution Progress
[████████░░░░░░░░] 53% - Running integration tests (27/51)
⏱️ ETA: 2 minutes
✅ Unit: 245/245 | 🔄 Integration: 27/51 | ⏳ E2E: 0/12
```
## Test Process
### Phase 1: Auto-Detect Test Framework
```bash
detect_test_framework() {
echo "🔍 Detecting test framework..."
if [ -f "package.json" ]; then
# Node.js project
if grep -q '"jest"' package.json; then
TEST_RUNNER="jest"
TEST_CMD="npm test"
elif grep -q '"vitest"' package.json; then
TEST_RUNNER="vitest"
TEST_CMD="npm test"
elif grep -q '"mocha"' package.json; then
TEST_RUNNER="mocha"
TEST_CMD="npm test"
elif grep -q '"cypress"' package.json; then
HAS_E2E="true"
E2E_CMD="npm run cypress:run"
elif grep -q '"playwright"' package.json; then
HAS_E2E="true"
E2E_CMD="npm run playwright test"
fi
elif [ -f "pubspec.yaml" ]; then
# Flutter project
TEST_RUNNER="flutter"
TEST_CMD="flutter test"
elif [ -f "requirements.txt" ] || [ -f "setup.py" ]; then
# Python project
if grep -q "pytest" requirements.txt 2>/dev/null; then
TEST_RUNNER="pytest"
TEST_CMD="pytest"
else
TEST_RUNNER="unittest"
TEST_CMD="python -m unittest"
fi
elif [ -f "pom.xml" ]; then
# Java Maven
TEST_RUNNER="maven"
TEST_CMD="mvn test"
elif [ -f "build.gradle" ]; then
# Java Gradle
TEST_RUNNER="gradle"
TEST_CMD="./gradlew test"
elif [ -f "Cargo.toml" ]; then
# Rust
TEST_RUNNER="cargo"
TEST_CMD="cargo test"
elif [ -f "go.mod" ]; then
# Go
TEST_RUNNER="go"
TEST_CMD="go test ./..."
fi
echo "✅ Detected: $TEST_RUNNER"
}
```
### Phase 2: Parse Arguments and Setup
```bash
# Parse arguments
WATCH_MODE=false
COVERAGE=false
PATTERN=""
FAIL_FAST=false
PARALLEL=false
VERBOSE=false
# Reset the positional parameters so `$2` and `shift` work as expected
set -- $ARGUMENTS
while [ $# -gt 0 ]; do
  case $1 in
    --watch) WATCH_MODE=true ;;
    --coverage) COVERAGE=true ;;
    --only) PATTERN=$2; shift ;;
    --fail-fast) FAIL_FAST=true ;;
    --parallel) PARALLEL=true ;;
    --verbose) VERBOSE=true ;;
  esac
  shift
done
```
### Phase 3: Test Execution with Progress
```bash
run_tests_with_progress() {
local total_tests=$(find . -name "*.test.*" -o -name "*.spec.*" | wc -l)
local current=0
echo "🧪 Starting Test Suite Execution"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Unit Tests
if [ -d "src/__tests__" ] || [ -d "test/unit" ]; then
echo "📦 Running Unit Tests..."
if [ "$COVERAGE" = true ]; then
$TEST_CMD -- --coverage
else
$TEST_CMD
fi
UNIT_RESULT=$?
((current+=1))
show_progress $current $total_tests "Unit tests completed"
fi
# Integration Tests
if [ -d "test/integration" ] || [ -d "tests/integration" ]; then
echo "🔗 Running Integration Tests..."
$TEST_CMD integration
INTEGRATION_RESULT=$?
((current+=1))
show_progress $current $total_tests "Integration tests completed"
fi
# E2E Tests
if [ "$HAS_E2E" = true ]; then
echo "🌐 Running E2E Tests..."
$E2E_CMD
E2E_RESULT=$?
((current+=1))
show_progress $current $total_tests "E2E tests completed"
fi
}
show_progress() {
local current=$1
local total=$2
local message=$3
local percent=$((current * 100 / total))
local filled=$((percent / 5))
local empty=$((20 - filled))
printf "\r["
printf "%${filled}s" | tr ' ' '█'
printf "%${empty}s" | tr ' ' '░'
printf "] %d%% - %s\n" $percent "$message"
}
```
### Phase 4: Watch Mode Implementation
```javascript
// If --watch is active
if (WATCH_MODE) {
console.log("👁️ Watch mode activated - Tests will re-run on file changes");
const chokidar = require("chokidar");
const watcher = chokidar.watch(["src/**/*.js", "test/**/*.js"], {
ignored: /node_modules/,
persistent: true,
});
watcher.on("change", async (path) => {
console.clear();
console.log(`📝 File changed: ${path}`);
console.log("🔄 Re-running tests...\n");
// Re-run only affected tests
const affectedTests = findAffectedTests(path);
await runTests(affectedTests);
// Update progress
updateProgress();
});
}
```
### Phase 5: Coverage Report Generation
```bash
generate_coverage_report() {
if [ "$COVERAGE" = true ]; then
echo ""
echo "📊 Coverage Report"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Run with coverage
if [ "$TEST_RUNNER" = "jest" ]; then
npm test -- --coverage --coverageReporters=text
elif [ "$TEST_RUNNER" = "pytest" ]; then
pytest --cov=. --cov-report=term
elif [ "$TEST_RUNNER" = "go" ]; then
go test ./... -cover
fi
# Parse coverage
COVERAGE_PERCENT=$(parse_coverage_output)
# Visual coverage bar
show_coverage_bar $COVERAGE_PERCENT
# Check threshold
if [ $COVERAGE_PERCENT -lt 80 ]; then
echo "⚠️ Coverage below 80% threshold!"
return 1
else
echo "✅ Coverage meets threshold"
fi
fi
}
show_coverage_bar() {
local percent=$1
local filled=$((percent / 5))
local empty=$((20 - filled))
echo ""
echo "Coverage: ["
printf "%${filled}s" | tr ' ' '🟩'
printf "%${empty}s" | tr ' ' '⬜'
printf "] %d%%\n" $percent
if [ $percent -ge 90 ]; then
echo "🏆 Excellent coverage!"
elif [ $percent -ge 80 ]; then
echo "✅ Good coverage"
elif [ $percent -ge 70 ]; then
echo "⚠️ Moderate coverage"
else
echo "❌ Poor coverage - needs improvement"
fi
}
```
### Phase 6: Test Results Summary
````markdown
## 📋 Test Results Summary
**Date**: [TIMESTAMP]
**Duration**: 2m 34s
**Mode**: [watch/single]
### Test Suites
| Type | Passed | Failed | Skipped | Time |
| ----------- | ------- | ------ | ------- | ---------- |
| Unit | 245 | 0 | 2 | 12s |
| Integration | 48 | 2 | 0 | 45s |
| E2E | 12 | 0 | 0 | 1m 37s |
| **Total** | **305** | **2** | **2** | **2m 34s** |
### Failed Tests ❌
1. `integration/api/user.test.js`
- Test: "should handle concurrent updates"
- Error: Timeout after 5000ms
- Line: 145
2. `integration/database/transaction.test.js`
- Test: "rollback on error"
- Error: Expected 0, received 1
- Line: 89
### Coverage Report 📊
```
| File | % Stmts | % Branch | % Funcs | % Lines |
| ----------- | ------- | -------- | ------- | ------- |
| All files | 87.3 | 82.1 | 90.5 | 87.2 |
| src/ | 89.2 | 85.3 | 92.1 | 89.1 |
| api/ | 91.5 | 88.2 | 94.3 | 91.4 |
| components/ | 86.7 | 81.9 | 89.8 | 86.6 |
| services/ | 88.4 | 83.7 | 91.2 | 88.3 |
| utils/ | 92.8 | 90.1 | 95.6 | 92.7 |
```
### Performance Metrics ⚡
- Slowest Test: `e2e/checkout-flow.test.js` (8.2s)
- Fastest Test: `unit/utils/format.test.js` (0.003s)
- Average Time: 0.42s per test
- Parallel Execution: Saved 45s (if enabled)
### Recommendations 💡
1. Fix failing integration tests before deployment
2. Improve coverage in `components/` directory
3. Consider splitting slow E2E test
4. Add missing tests for new payment module
````
## Integration with CI/CD
```yaml
# .github/workflows/test.yml
name: Toduba Test Suite
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Run Toduba Tests
run: |
/toduba-test --coverage --fail-fast
- name: Upload Coverage
uses: codecov/codecov-action@v3
if: success()
```
## Output Examples
### Success Output
```
🧪 Toduba Test Suite
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[████████████████████] 100% - All tests completed
✅ All 309 tests passed!
📊 Coverage: 87.3%
⏱️ Duration: 2m 34s
Run with --watch for continuous testing
```
### Failure Output
```
🧪 Toduba Test Suite
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[████████████░░░░░░░] 65% - Failed at integration tests
❌ 2 tests failed
📊 Coverage: 73.8% (below threshold)
⏱️ Duration: 1m 12s
See details above. Fix and run: /toduba-test --only failed
```
## Advanced Features
### Parallel Execution
```bash
if [ "$PARALLEL" = true ]; then
echo "⚡ Running tests in parallel..."
npm test -- --maxWorkers=4
fi
```
### Test Filtering
```bash
if [ -n "$PATTERN" ]; then
echo "🔍 Running only tests matching: $PATTERN"
npm test -- --testNamePattern="$PATTERN"
fi
```
### Fail Fast
```bash
if [ "$FAIL_FAST" = true ]; then
echo "⚠️ Fail-fast mode: Will stop at first failure"
npm test -- --bail
fi
```
View File
@@ -0,0 +1,374 @@
---
allowed-tools:
- Read
- Grep
- Glob
- WebSearch
- WebFetch
- Task
argument-hint: "[problema o domanda da analizzare]"
description: "Modalità di analisi profonda per problemi complessi"
---
# Toduba Ultra Think - Deep Analysis and Problem Solving 🧠
## Objective
Activate deep analysis mode to explore complex problems from multiple perspectives, generate innovative solutions, and provide strategic recommendations.
## Arguments
Problem or question to analyze: $ARGUMENTS
## Ultra Think Process
### Phase 1: Deep Understanding of the Problem
```
1. PROBLEM PARSING
   - Identify the core challenge
   - Extract explicit and implicit requirements
   - Recognize stakeholders and constraints
   - Map dependencies and interconnections
2. QUESTIONING ASSUMPTIONS
   - What are we taking for granted?
   - Which biases could influence us?
   - Are there similar precedents or patterns?
   - What are the real goals?
```
### Phase 2: Multi-Dimensional Analysis
#### Technical Dimension
```
- Technological feasibility
- Implementation complexity
- Scalability and performance
- Technical debt and maintainability
- Security and reliability
```
#### Business Dimension
```
- Value generated vs cost
- Time to market
- ROI and success metrics
- Risks and opportunities
- Competitive advantage
```
#### User Dimension
```
- User experience and usability
- Learning curve
- Perceived value
- Pain points solved
- Adoption and retention
```
#### Systemic Dimension
```
- Impact on the existing system
- Second- and third-order effects
- Feedback loops
- Emergent behaviors
- Evolutionary path
```
### Phase 3: Creative Solution Generation
```typescript
const generateSolutions = () => {
  const approaches = [];
  // Conventional approach
  approaches.push({
    name: "Standard Industry Solution",
    description: "Follow established best practices",
    pros: ["Low risk", "Documentation available", "Large talent pool"],
    cons: ["No competitive advantage", "Possible limitations"],
    complexity: "Medium",
    timeToImplement: "3-4 months",
    risk: "Low",
  });
  // Innovative approach
  approaches.push({
    name: "Cutting-Edge Technology",
    description: "Use emerging technologies",
    pros: ["Competitive advantage", "Future-proof", "Superior performance"],
    cons: ["High risk", "Learning curve", "Few experts"],
    complexity: "High",
    timeToImplement: "6-8 months",
    risk: "High",
  });
  // Hybrid approach
  approaches.push({
    name: "Phased Hybrid Approach",
    description: "Mix of proven and innovative",
    pros: ["Balanced", "Reduces risk", "Evolutionary"],
    cons: ["Architectural complexity", "Possible compromises"],
    complexity: "Medium-High",
    timeToImplement: "4-6 months",
    risk: "Medium",
  });
  // Minimal approach
  approaches.push({
    name: "MVP First",
    description: "Minimum working product, then iterate",
    pros: ["Fast time to market", "Quick validation", "Low initial cost"],
    cons: ["Possible refactoring", "Limited features initially"],
    complexity: "Low",
    timeToImplement: "1-2 months",
    risk: "Low",
  });
  return approaches;
};
```
### Phase 4: Comparative Analysis and Trade-offs
```markdown
## Decision Matrix
| Criterion       | Weight | Sol. A | Sol. B | Sol. C | Sol. D |
| --------------- | ------ | ------ | ------ | ------ | ------ |
| Performance     | 25%    | 7/10   | 9/10   | 8/10   | 5/10   |
| Cost            | 20%    | 6/10   | 4/10   | 5/10   | 9/10   |
| Time to Market  | 20%    | 5/10   | 3/10   | 6/10   | 9/10   |
| Scalability     | 15%    | 8/10   | 9/10   | 7/10   | 4/10   |
| Maintainability | 10%    | 8/10   | 6/10   | 7/10   | 6/10   |
| Risk            | 10%    | 8/10   | 4/10   | 6/10   | 9/10   |
**Weighted Score:**
- Solution A: 6.8
- Solution B: 6.0
- Solution C: 6.6
- Solution D: 6.9 ⭐
```
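To make the weighting explicit, here is a small sketch of the computation (the variable names are illustrative); with the weights above, Solution B works out to 6.0.
```typescript
// Weighted score = sum(score_i * weight_i); the weights sum to 1.0.
const weights = { performance: 0.25, cost: 0.2, timeToMarket: 0.2, scalability: 0.15, maintainability: 0.1, risk: 0.1 };
type Scores = Record<keyof typeof weights, number>;

const weightedScore = (s: Scores): number =>
  (Object.keys(weights) as (keyof typeof weights)[]).reduce((sum, k) => sum + s[k] * weights[k], 0);

// Solution A: 7*0.25 + 6*0.2 + 5*0.2 + 8*0.15 + 8*0.1 + 8*0.1 = 6.75 ≈ 6.8
console.log(weightedScore({ performance: 7, cost: 6, timeToMarket: 5, scalability: 8, maintainability: 8, risk: 8 }));
```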
### Phase 5: Deep Dive on the Recommended Solution
```
RECOMMENDED SOLUTION: [Name]
## Rationale
[Detailed explanation of why this solution]
## Implementation Plan
1. Phase 1 (Weeks 1-2)
   - Base infrastructure setup
   - Proof of concept of core functionality
   - Approach validation
2. Phase 2 (Weeks 3-6)
   - Development of main features
   - Integration with existing systems
   - Initial testing
3. Phase 3 (Weeks 7-8)
   - Performance optimization
   - Security hardening
   - Documentation
## Success Metrics
- KPI 1: [Specific metric]
- KPI 2: [Specific metric]
- KPI 3: [Specific metric]
## Risk Mitigation
- Risk A → Mitigation strategy
- Risk B → Mitigation strategy
- Risk C → Contingency plan
```
### Phase 6: Lateral Thinking and Alternatives
```
UNCONVENTIONAL ALTERNATIVES:
1. "What if we didn't solve the problem?"
   - Could the problem resolve itself?
   - Can we live with it?
   - Is there value in not acting?
2. "What if we inverted the problem?"
   - Instead of X, do the opposite
   - Turn the bug into a feature
   - Embrace the constraint
3. "What if we delegated it?"
   - Strategic outsourcing
   - Crowdsourcing
   - AI/Automation
4. "What if we changed the rules?"
   - Redefine the problem
   - Change the constraints
   - New paradigm
```
### Phase 7: Synthesis and Recommendations
## Ultra Think Output Report
````markdown
# 🧠 Toduba Ultra Think Analysis
## Executive Summary
[2-3 paragraphs of high-level synthesis]
## The Problem Analyzed
- **Core Challenge**: [Description]
- **Impacted Stakeholders**: [List]
- **Critical Constraints**: [List]
- **Timeline**: [Urgency]
## Multi-Perspective Analysis
### 🔧 Technical Perspective
[Key technical insights]
### 💼 Business Perspective
[Business considerations]
### 👤 User Perspective
[Impact on user experience]
### 🌐 Systemic Perspective
[Effects on the overall system]
## Proposed Solutions
### Option 1: [Name] ⭐ RECOMMENDED
**Description**: [Details]
**Pros**: [List]
**Cons**: [List]
**Implementation**: [Timeline]
**Estimated Cost**: [Range]
**Risk**: [Level]
### Option 2: [Name]
[Same structure]
### Option 3: [Name]
[Same structure]
## Strategic Recommendation
### Recommended Approach
[Detailed description of the recommended strategy]
### Roadmap
```mermaid
gantt
title Implementation Roadmap
dateFormat YYYY-MM-DD
section Phase 1
Foundation :2024-11-01, 14d
section Phase 2
Core Development :14d
section Phase 3
Testing & Optimization :7d
```
### Success Metrics
1. [Metric 1 with target]
2. [Metric 2 with target]
3. [Metric 3 with target]
## Risks and Mitigations
| Risk     | Probability | Impact | Mitigation |
| -------- | ----------- | ------ | ---------- |
| [Risk 1] | High        | High   | [Strategy] |
| [Risk 2] | Medium      | Medium | [Strategy] |
## Final Considerations
### Key Points
- 💡 [Main insight]
- 💡 [Secondary insight]
- 💡 [Important consideration]
### Areas of Uncertainty
- ❓ [Area requiring more data]
- ❓ [Assumption to validate]
### Immediate Next Steps
1. [Action 1]
2. [Action 2]
3. [Action 3]
## Confidence Level
- **Analysis**: 95% confidence
- **Recommendation**: 85% confidence
- **Success Probability**: 75%
---
_Analysis generated by the Toduba Ultra Think System_
_Analysis time: [X] minutes_
_Depth: Maximum_
````
## Integration with the Orchestrator
The orchestrator always uses Ultra Think for complex tasks:
```typescript
// Automaticamente invocato per analisi iniziale
const performUltraThink = async (task) => {
const analysis = await ultraThink.analyze(task);
// Presenta all'utente
await presentAnalysis(analysis);
// Attende conferma
const userFeedback = await getUserConfirmation();
// Itera se necessario
if (userFeedback.requiresChanges) {
return performUltraThink(refinedTask);
}
return analysis;
};
```
## Ultra Think Guiding Principles
1. **First Principles Thinking** - Break down to fundamentals
2. **Systems Thinking** - See the interconnections
3. **Lateral Thinking** - Explore the unexpected
4. **Critical Thinking** - Question everything
5. **Creative Thinking** - Generate novelty
6. **Strategic Thinking** - Long-term vision
7. **Probabilistic Thinking** - Manage uncertainty
8. **Ethical Thinking** - Consider moral implications
View File
@@ -0,0 +1,607 @@
---
allowed-tools:
- Read
- Write
- Edit
- Bash
- Glob
- Grep
argument-hint: "[--check] [--full] [--smart] [--service <name>] [--format md|html|json|pdf]"
description: "Smart incremental updates con cache e multiple export formats per struttura gerarchica V2.0"
---
# Toduba Update Docs V2.0 - Smart Incremental Updates 🔄
## Objective
Intelligent, incremental update of the hierarchical V2.0 documentation (docs/global, docs/services/, docs/operations/) with caching, advanced change detection, and support for multiple export formats.
## Arguments
- `--check`: Show what would be updated without modifying anything
- `--full`: Force full regeneration (equivalent to toduba-init --force)
- `--smart`: Enable cache and AI optimizations (default: on)
- `--service <name>`: Update documentation only for the specified service
- `--format`: Export format (md, html, json, pdf) - default: md
Arguments received: $ARGUMENTS
## Supported Structure (V2.0)
```
docs/
├── .toduba-meta/ # Metadata and tracking
│ ├── project-type.json
│ ├── services.json
│ ├── last-update.json
│ └── service_*.json
├── global/ # Global documentation
│ ├── README.md
│ ├── ARCHITECTURE.md
│ ├── SETUP.md
│ ├── CONTRIBUTING.md
│ └── adr/
├── services/ # Per-service documentation
│ └── [service-name]/
│ ├── README.md
│ ├── SETUP.md
│ ├── ARCHITECTURE.md
│ ├── TECH-STACK.md
│ ├── STYLE-GUIDE.md
│ ├── ENDPOINTS.md (condizionale)
│ ├── DATABASE.md (condizionale)
│ ├── TESTING.md
│ └── TROUBLESHOOTING.md
└── operations/ # DevOps docs
├── DEPLOYMENT.md
├── CI-CD.md
├── MONITORING.md
├── SECURITY.md
└── ENVIRONMENT-VARS.md
```
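The `.toduba-meta/` files are read later in this command via `grep`; here is a hedged sketch of the fields that parsing assumes (the real schema may contain more, and the values shown are illustrative):
```typescript
// Shapes inferred from the parsing commands below — treat as assumptions, not a spec.
const lastUpdate = {
  timestamp: "2024-10-31T15:30:00Z", // ISO timestamp of the last docs update
  git_commit: "abc123def",           // commit the docs were generated from
};

const projectType = { type: "monorepo" }; // hypothetical value

const services = [
  { name: "api", path: "services/api" }, // names and paths are illustrative
];
```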
## Prerequisites
```bash
# Check that docs/ exists
if [ ! -d "docs" ]; then
  echo "❌ Error: documentation not found!"
  echo "   Run first: /toduba-system:toduba-init"
  exit 1
fi
# Check for the V2.0 structure
if [ ! -d "docs/.toduba-meta" ]; then
  echo "⚠️ V1.0 documentation structure detected!"
  echo "   Upgrade to V2.0 with: /toduba-system:toduba-init --force"
  echo "   (The old documentation will be backed up automatically)"
  exit 1
fi
# Check essential metadata
if [ ! -f "docs/.toduba-meta/last-update.json" ]; then
  echo "⚠️ Missing metadata - full regeneration required"
  echo "   Run: /toduba-system:toduba-init --force"
  exit 1
fi
```
## Intelligent Update Process V2.0
### Phase 1: Change Analysis (Hierarchical Structure)
#### 1.1 Read Previous State
```bash
# Read V2.0 metadata
LAST_COMMIT=$(cat docs/.toduba-meta/last-update.json | grep -o '"git_commit": *"[^"]*"' | cut -d'"' -f4)
LAST_UPDATE=$(cat docs/.toduba-meta/last-update.json | grep -o '"timestamp": *"[^"]*"' | cut -d'"' -f4)
PROJECT_TYPE=$(cat docs/.toduba-meta/project-type.json | grep -o '"type": *"[^"]*"' | cut -d'"' -f4)
# Read the service list
SERVICES_LIST=$(cat docs/.toduba-meta/services.json | grep -o '"name": *"[^"]*"' | cut -d'"' -f4)
echo "📊 Stato precedente:"
echo " • Ultimo aggiornamento: $LAST_UPDATE"
echo " • Ultimo commit: ${LAST_COMMIT:0:7}"
echo " • Tipo progetto: $PROJECT_TYPE"
echo " • Servizi: $(echo "$SERVICES_LIST" | wc -l)"
```
#### 1.2 Compute Differences
```bash
# Commits since the last update
COMMITS_COUNT=$(git rev-list --count ${LAST_COMMIT}..HEAD)
# Modified files grouped by category
git diff --name-only ${LAST_COMMIT}..HEAD | while read file; do
case "$file" in
*/api/* | */routes/* | */controllers/*)
echo "API: $file" >> changes_api.txt
;;
*/components/* | */pages/* | */views/*)
echo "FRONTEND: $file" >> changes_frontend.txt
;;
*/models/* | */schemas/* | */migrations/*)
echo "DATABASE: $file" >> changes_db.txt
;;
*.test.* | *.spec.* | */tests/*)
echo "TESTS: $file" >> changes_tests.txt
;;
esac
done
```
### Phase 2: Update Decision (Hierarchical V2.0 Structure)
#### Update Matrix for the V2.0 Structure:
```
Detected changes → Documents to update
────────────────────────────────────────────────────────
GLOBAL SCOPE:
- Root files changes → docs/global/README.md
- Architecture changes → docs/global/ARCHITECTURE.md
- Contributing changes → docs/global/CONTRIBUTING.md
- Setup changes (monorepo) → docs/global/SETUP.md
SERVICE SCOPE (for each modified service):
- Source code changes → docs/services/[name]/ARCHITECTURE.md
- API/Routes changes → docs/services/[name]/ENDPOINTS.md
- Database/models changes → docs/services/[name]/DATABASE.md
- Dependencies changes → docs/services/[name]/TECH-STACK.md
- Test changes → docs/services/[name]/TESTING.md
- Style/conventions → docs/services/[name]/STYLE-GUIDE.md
- Any service changes → docs/services/[name]/README.md
OPERATIONS SCOPE:
- CI/CD config changes → docs/operations/CI-CD.md
- Deployment scripts → docs/operations/DEPLOYMENT.md
- Monitoring config → docs/operations/MONITORING.md
- Security policies → docs/operations/SECURITY.md
- Env vars changes → docs/operations/ENVIRONMENT-VARS.md
METADATA (always):
- docs/.toduba-meta/last-update.json
- docs/.toduba-meta/service_*.json (if the service was modified)
```
#### Detecting the Service from a Modified File
```bash
detect_affected_service() {
local file_path="$1"
# Read the services and their paths
while IFS= read -r service_name; do
service_path=$(cat "docs/.toduba-meta/service_${service_name}.json" | grep -o '"path": *"[^"]*"' | cut -d'"' -f4)
# If the file is within the service's path
if [[ "$file_path" == "$service_path"* ]]; then
echo "$service_name"
return
fi
done <<< "$SERVICES_LIST"
# If not found, it is probably global
echo "global"
}
```
#### Update Thresholds:
- **Minor** (< 5 files): Update only the specific files
- **Medium** (5-20 files): Update the category + INDEX
- **Major** (> 20 files): Consider a full update
- **Structural** (new folders/modules): ARCHITECTURE.md update is mandatory (a classification sketch follows below)
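A hedged sketch of how these thresholds could be applied in code; the function and label names are illustrative, not part of the documented command.
```typescript
type UpdateScope = "minor" | "medium" | "major";

function classifyUpdate(changedFiles: string[], hasStructuralChanges: boolean): UpdateScope {
  // Structural changes (new folders/modules) always force the broader update
  // so that ARCHITECTURE.md is regenerated.
  if (hasStructuralChanges || changedFiles.length > 20) return "major";
  if (changedFiles.length >= 5) return "medium"; // category docs + INDEX
  return "minor"; // only the docs tied to the specific files
}
```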
### Phase 3: Incremental Update
#### 3.1 For API_ENDPOINTS.md:
```javascript
// Analyze only modified endpoints
const modifiedControllers = getModifiedFiles("controllers");
const modifiedRoutes = getModifiedFiles("routes");
// Extract existing endpoints
const existingEndpoints = parseExistingEndpoints("API_ENDPOINTS.md");
// Analyze new/modified ones
const updatedEndpoints = analyzeEndpoints(modifiedControllers, modifiedRoutes);
// Smart merge
const mergedEndpoints = mergeEndpoints(existingEndpoints, updatedEndpoints);
// Regenerate only changed sections
updateSections("API_ENDPOINTS.md", mergedEndpoints);
```
#### 3.2 For COMPONENTS.md:
```javascript
// Similar approach for UI components
const modifiedComponents = getModifiedFiles(["components", "pages"]);
// Update only modified components
for (const component of modifiedComponents) {
const componentDoc = generateComponentDoc(component);
replaceSection("COMPONENTS.md", component.name, componentDoc);
}
```
#### 3.3 For DATABASE_SCHEMA.md:
```javascript
// Detect schema changes
const schemaChanges = detectSchemaChanges();
if (schemaChanges.migrations) {
appendSection(
"DATABASE_SCHEMA.md",
"## Migrazioni Recenti",
schemaChanges.migrations
);
}
if (schemaChanges.newModels) {
updateModelsSection("DATABASE_SCHEMA.md", schemaChanges.newModels);
}
```
### Phase 4: Smart Merge Strategy
#### Preserving Custom Content:
```markdown
<!-- TODUBA:START:AUTO -->
[Automatically generated content]
<!-- TODUBA:END:AUTO -->
<!-- TODUBA:CUSTOM:START -->
[Custom content preserved across updates]
<!-- TODUBA:CUSTOM:END -->
```
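A minimal sketch of the merge rule these markers imply (the helper name is an assumption, not the actual engine): everything outside the AUTO block, including CUSTOM sections, is left untouched.
```typescript
function mergeAutoSection(existing: string, generated: string): string {
  const autoBlock = /<!-- TODUBA:START:AUTO -->[\s\S]*?<!-- TODUBA:END:AUTO -->/;
  const replacement = `<!-- TODUBA:START:AUTO -->\n${generated}\n<!-- TODUBA:END:AUTO -->`;
  // If no AUTO markers exist yet, append a fresh block instead of touching custom content
  return autoBlock.test(existing)
    ? existing.replace(autoBlock, replacement)
    : `${existing}\n${replacement}\n`;
}
```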
#### Conflict Resolution:
1. Always preserve custom sections
2. If a conflict occurs in auto-generated content:
   - Back up the old version
   - Generate the new version
   - Mark conflicts for review
### Phase 5: Validation and Report
#### With `--check`:
```
🔍 Toduba Update Docs - Change Analysis
📊 Summary:
- Commits since last update: 15
- Modified files: 23
- Impacted categories: API, Frontend, Tests
📝 Documents that would be updated:
✓ INDEX.md (always updated)
✓ API_ENDPOINTS.md (8 endpoints modified)
✓ COMPONENTS.md (3 new components)
✓ TESTING.md (new tests added)
○ DATABASE_SCHEMA.md (no changes)
○ ARCHITECTURE.md (no structural changes)
⏱️ Estimated time: ~8 seconds
Run without --check to apply the updates.
```
#### Actual Update:
```
🔄 Toduba Update Docs - Update in Progress...
[===========----------] 55% Analyzing API changes...
[==================---] 85% Updating COMPONENTS.md...
[====================] 100% Done!
✅ Documentation Updated Successfully!
📊 Update Summary:
- Files analyzed: 23
- Documents updated: 4/10
- Time taken: 7.3s
- Savings vs full regeneration: ~45s
📝 Changes applied:
✓ INDEX.md - Statistics updated
✓ API_ENDPOINTS.md - 8 endpoints updated, 2 new
✓ COMPONENTS.md - 3 new components documented
✓ TESTING.md - Coverage updated to 87%
💾 metadata.json updated:
- last_updated: 2024-10-31T15:30:00Z
- commits_since_generation: 0
- git_info.last_commit: abc123def
💡 Tip: Use --check next time for a preview
```
### Phase 6: Auto-Invocation from the Orchestrator
When called automatically:
1. Always run in silent mode (no verbose output)
2. Minimal logging, only on errors
3. Return a status code for the orchestrator
4. On failure, do not block the main task
## Performance Optimizations
1. **Caching**: Cache file analysis for 5 minutes
2. **Parallel Processing**: Analyze categories in parallel
3. **Incremental Parsing**: Parse only diffs, not whole files
4. **Smart Skip**: Skip non-documentable files (.test, .spec)
5. **Batch Updates**: Accumulate changes, write once (see the sketch below)
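A small sketch of the batch-update idea from point 5 (class and method names are assumptions): accumulate section changes in memory and perform a single write per document.
```typescript
import { promises as fs } from "fs";

class BatchedDocWriter {
  private pending = new Map<string, string[]>();

  queue(docPath: string, section: string) {
    const sections = this.pending.get(docPath) ?? [];
    sections.push(section);
    this.pending.set(docPath, sections);
  }

  // One write per document, no matter how many sections were queued
  async flush() {
    for (const [docPath, sections] of this.pending) {
      await fs.appendFile(docPath, sections.join("\n") + "\n");
    }
    this.pending.clear();
  }
}
```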
## Smart Incremental Updates (H1)
### Cache System
```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

class DocumentationCache {
  private cache = new Map<string, { analysis: Analysis; timestamp: number; hash: string }>();
  private maxAge = 5 * 60 * 1000; // 5 minutes

  async getAnalysis(filePath: string): Promise<Analysis | null> {
    const cached = this.cache.get(filePath);
    if (cached && Date.now() - cached.timestamp < this.maxAge) {
      return cached.analysis;
    }
    return null;
  }

  setAnalysis(filePath: string, analysis: Analysis) {
    this.cache.set(filePath, {
      analysis,
      timestamp: Date.now(),
      hash: this.calculateHash(filePath),
    });
  }

  async isValid(filePath: string): Promise<boolean> {
    const cached = this.cache.get(filePath);
    if (!cached) return false;
    return cached.hash === this.calculateHash(filePath);
  }

  // Content hash used to invalidate entries when the file changes on disk
  private calculateHash(filePath: string): string {
    return createHash("sha256").update(readFileSync(filePath)).digest("hex");
  }
}
```
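A brief usage sketch of the cache above; `analyzeFile` is an assumed analysis function, not part of the command.
```typescript
const docCache = new DocumentationCache();

async function analyzeWithCache(filePath: string): Promise<Analysis> {
  if (await docCache.isValid(filePath)) {
    const hit = await docCache.getAnalysis(filePath);
    if (hit) return hit;
  }
  const fresh = await analyzeFile(filePath); // assumed: performs the real analysis
  docCache.setAnalysis(filePath, fresh);
  return fresh;
}
```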
### Smart Change Detection
```javascript
// Use git diff with semantic analysis
const detectSmartChanges = async () => {
const changes = {
breaking: [],
feature: [],
bugfix: [],
refactor: [],
documentation: [],
};
// Analyze the AST to determine the type of change
const diff = await git.diff("--cached", "--name-status");
for (const file of diff) {
const analysis = await analyzeFileChange(file);
// Categorize based on content, not just the path
if (analysis.breaksAPI) changes.breaking.push(file);
else if (analysis.addsFeature) changes.feature.push(file);
else if (analysis.fixesBug) changes.bugfix.push(file);
else if (analysis.refactors) changes.refactor.push(file);
else changes.documentation.push(file);
}
return changes;
};
```
### Dependency Graph Updates
```typescript
// Update only dependent documents
const updateDependentDocs = async (changedFile: string) => {
const dependencyGraph = await loadDependencyGraph();
const affected = dependencyGraph.getDependents(changedFile);
// Update only the documents actually affected
for (const doc of affected) {
await updateSection(doc, changedFile);
}
};
```
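The `dependencyGraph` above is left abstract; a minimal sketch of one possible shape (the paths are illustrative) is a reverse index from source files to the docs that describe them.
```typescript
// Hypothetical reverse index: source file → docs that must be refreshed.
const docDependencies = new Map<string, string[]>([
  ["src/api/products/routes.js", ["docs/services/api/ENDPOINTS.md"]],
  ["src/models/product.js", ["docs/services/api/DATABASE.md", "docs/services/api/ARCHITECTURE.md"]],
]);

function getDependents(changedFile: string): string[] {
  return docDependencies.get(changedFile) ?? [];
}
```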
## Multiple Export Formats (H2)
### Format Converters
```typescript
interface FormatConverter {
convert(markdown: string, options?: any): string | Buffer;
extension: string;
mimeType: string;
}
const converters: Record<string, FormatConverter> = {
html: {
convert: (md) => {
const html = marked.parse(md);
return `
<!DOCTYPE html>
<html>
<head>
<title>Toduba Documentation</title>
<link rel="stylesheet" href="toduba-docs.css">
</head>
<body>${html}</body>
</html>`;
},
extension: ".html",
mimeType: "text/html",
},
json: {
convert: (md) => {
const sections = parseMarkdownToSections(md);
return JSON.stringify(
{
version: "2.0.0",
generated: new Date().toISOString(),
sections,
metadata: getMetadata(),
},
null,
2
);
},
extension: ".json",
mimeType: "application/json",
},
pdf: {
convert: async (md) => {
// Use markdown-pdf or puppeteer
const html = marked.parse(md);
return await generatePDF(html);
},
extension: ".pdf",
mimeType: "application/pdf",
},
};
```
### Export Pipeline
```javascript
const exportDocumentation = async (format: string = "md") => {
const converter = converters[format];
if (!converter) throw new Error(`Format ${format} not supported`);
// Create a directory for the format
const outputDir = `docs/export/${format}`;
await fs.mkdir(outputDir, { recursive: true });
// Convert all documents
for (const file of await glob("docs/*.md")) {
const content = await fs.readFile(file, "utf8");
const converted = await converter.convert(content);
const outputName = path.basename(file, ".md") + converter.extension;
await fs.writeFile(`${outputDir}/${outputName}`, converted);
}
console.log(`✅ Exported to ${outputDir}/`);
};
```
### Format-Specific Templates
```typescript
// Templates for the different formats
const templates = {
html: {
css: `
.toduba-docs {
font-family: 'Inter', sans-serif;
max-width: 1200px;
margin: 0 auto;
}
.sidebar { position: fixed; left: 0; width: 250px; }
.content { margin-left: 270px; }
.code-block { background: #f4f4f4; padding: 1rem; }
`,
},
pdf: {
pageSize: "A4",
margins: { top: "2cm", bottom: "2cm", left: "2cm", right: "2cm" },
header: "🤖 Toduba Documentation",
footer: "Page {page} of {pages}",
},
};
```
## Error Handling
- **Corrupted metadata**: Fall back to full regeneration
- **Lost git history**: Use timestamps to determine changes
- **Merge conflicts**: Create a .backup and proceed
- **Read-only docs**: Alert the user, skip the update
- **Out of sync**: If > 100 commits, suggest --full
- **Invalid cache**: Invalidate and regenerate
- **Failed export**: Keep the original format
## Performance Metrics
```
📊 Smart Update Performance:
- Cache hit rate: 75%
- Average update time: 3.2s (vs 45s full)
- Memory usage: -60% with streaming
- File I/O: -80% with cache
```
## Integration with the Orchestrator
The orchestrator invokes it automatically when:
```javascript
if (modifiedFiles > 10 || majorRefactoring || newModules) {
await invokeCommand("toduba-update-docs --smart");
}
```
It is not invoked for:
- Changes to individual files
- Typo/comment fixes
- Test-only changes
- Configuration operations
## Output with Multiple Formats
```
🔄 Toduba Smart Update - Multiple Formats
📊 Smart Analysis:
- Cache hits: 18/23 (78%)
- Semantic changes detected: 5
- Affected documents: 3
📝 Generating formats:
[====] MD: ✅ 100% (base format)
[====] HTML: ✅ 100% (with styling)
[====] JSON: ✅ 100% (structured data)
[====] PDF: ✅ 100% (print-ready)
✅ Export complete:
- docs/export/html/
- docs/export/json/
- docs/export/pdf/
⚡ Performance:
- Total time: 4.8s
- Savings: 89% vs full regeneration
```