Initial commit

Zhongwei Li
2025-11-30 08:35:09 +08:00
commit 086c1eabf6
14 changed files with 1609 additions and 0 deletions

# Conversion Analyzer Agent
A specialized AI agent that analyzes asyncpg code patterns and determines optimal SQLAlchemy conversion strategies. It handles complex conversion scenarios and edge cases, and provides detailed migration planning.
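For context, a minimal before-and-after sketch of the kind of rewrite this agent plans; the `users` table and the pool/session objects are illustrative, not output of the agent itself:
```python
# Before: a typical asyncpg pattern (hypothetical "users" table)
import asyncpg

async def get_user(pool: asyncpg.Pool, email: str):
    async with pool.acquire() as conn:
        return await conn.fetchrow(
            "SELECT id, email FROM users WHERE email = $1", email
        )

# After: the equivalent SQLAlchemy 2.0 async pattern
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

async def get_user_converted(session: AsyncSession, email: str):
    result = await session.execute(
        text("SELECT id, email FROM users WHERE email = :email"),
        {"email": email},
    )
    return result.mappings().first()
```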
## Capabilities
### Code Pattern Analysis
- Detects complex asyncpg usage patterns that basic scanning would miss
- Analyzes query performance implications
- Identifies conversion complexity and potential issues
- Evaluates dependency chains and import relationships
### Conversion Strategy Planning
- Creates detailed conversion plans with priorities
- Identifies files that require manual intervention
- Suggests optimal SQLAlchemy patterns for specific use cases
- Plans testing and validation strategies
### Risk Assessment
- Evaluates potential breaking changes
- Identifies performance bottlenecks in conversion
- Assesses data loss risks during migration
- Provides rollback strategies
### Optimization Recommendations
- Suggests performance improvements during conversion
- Identifies opportunities for better async patterns
- Recommends Supabase-specific optimizations
- Evaluates connection pooling strategies
## Usage Patterns
### Complex Conversion Analysis
When the detection phase identifies complex asyncpg patterns that require careful analysis:
```bash
# Analyze specific complex files
/agent:conversion-analyzer analyze --file ./src/database.py --complexity high
# Analyze entire project with detailed reporting
/agent:conversion-analyzer analyze --path ./src --detailed-report
```
### Risk Assessment
Before performing large-scale conversions:
```bash
# Assess conversion risks
/agent:conversion-analyzer risk-assessment --path ./src --report-format json
# Generate rollback plan
/agent:conversion-analyzer rollback-plan --backup-path ./backup
```
### Performance Impact Analysis
For performance-critical applications:
```bash
# Analyze performance impact of conversion
/agent:conversion-analyzer performance-analysis --baseline ./current_code
# Generate optimization recommendations
/agent:conversion-analyzer optimize-recommendations --target-profile production
```
## Analysis Features
### Deep Code Analysis
- Understands asyncpg transaction patterns (see the sketch after this list)
- Identifies custom connection pooling logic
- Detects manual query building and optimization
- Analyzes error handling and retry logic
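To make the transaction bullet concrete, a hedged sketch of the asyncpg shape being matched and its SQLAlchemy counterpart; the `accounts` table is hypothetical:
```python
import asyncpg
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

# asyncpg: explicit transaction block around two statements
async def transfer(pool: asyncpg.Pool, src: int, dst: int, amount: int):
    async with pool.acquire() as conn:
        async with conn.transaction():
            await conn.execute(
                "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
                amount, src,
            )
            await conn.execute(
                "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
                amount, dst,
            )

# SQLAlchemy: session.begin() commits on success, rolls back on error
async def transfer_converted(session: AsyncSession, src: int, dst: int, amount: int):
    async with session.begin():
        await session.execute(
            text("UPDATE accounts SET balance = balance - :amt WHERE id = :id"),
            {"amt": amount, "id": src},
        )
        await session.execute(
            text("UPDATE accounts SET balance = balance + :amt WHERE id = :id"),
            {"amt": amount, "id": dst},
        )
```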
### Dependency Mapping
- Maps asyncpg dependencies across modules
- Identifies shared database connection patterns
- Analyzes middleware and dependency injection
- Evaluates testing code dependencies
### Conversion Complexity Scoring
- **Low Complexity**: Simple queries with standard patterns
- **Medium Complexity**: Custom queries that require moderate restructuring
- **High Complexity**: Advanced patterns, custom connection handling
- **Critical**: Complex transaction logic, performance-critical code
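A hypothetical sketch of how such a score could be assigned; the signals and thresholds are illustrative, not the agent's actual rules:
```python
# Hypothetical scoring heuristic; every signal and threshold here is
# an illustrative assumption, not a documented rule of this agent.
def score_complexity(uses_transactions: bool, custom_pooling: bool,
                     raw_sql_count: int, has_retry_logic: bool) -> str:
    if custom_pooling or (uses_transactions and has_retry_logic):
        return "critical"
    if uses_transactions or raw_sql_count > 20:
        return "high"
    if raw_sql_count > 5:
        return "medium"
    return "low"
```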
### Manual Intervention Requirements
- Complex query optimization patterns
- Custom asyncpg extensions or wrappers
- Performance-critical database operations
- Business logic embedded in database operations
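As an illustration of the last point, a hypothetical data-access function that mixes a business rule with raw asyncpg calls and therefore resists mechanical conversion:
```python
import asyncpg

# Hypothetical example: the tier-upgrade rule is buried in the data layer,
# so a converter cannot safely rewrite this without human review.
async def upgrade_tier(conn: asyncpg.Connection, user_id: int):
    spend = await conn.fetchval(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE user_id = $1",
        user_id,
    )
    tier = "gold" if spend > 1000 else "standard"  # business rule in the DAL
    await conn.execute(
        "UPDATE users SET tier = $1 WHERE id = $2", tier, user_id
    )
```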
## Output Reports
### Conversion Plan Report
```json
{
  "conversion_plan": {
    "total_files": 45,
    "complexity_breakdown": {
      "low": 32,
      "medium": 10,
      "high": 2,
      "critical": 1
    },
    "recommended_approach": "incremental",
    "estimated_time": "4-6 hours",
    "manual_intervention_files": ["src/database.py", "src/complex_queries.py"]
  }
}
```
### Risk Assessment Report
```json
{
  "risk_assessment": {
    "overall_risk": "medium",
    "breaking_changes": 3,
    "performance_impact": "minimal",
    "data_loss_risk": "low",
    "rollback_feasibility": "high"
  }
}
```
### Performance Impact Report
```json
{
  "performance_analysis": {
    "query_performance": "maintained_or_improved",
    "connection_efficiency": "improved",
    "memory_usage": "reduced",
    "recommendations": [
      "Implement connection pooling",
      "Add query result caching",
      "Optimize batch operations"
    ]
  }
}
```
## Integration with Other Components
### Works with Detection Skill
- Takes detection results as input for deeper analysis
- Provides detailed conversion strategies for detected patterns
- Prioritizes conversion order based on complexity and dependencies
### Supports Conversion Skill
- Provides detailed conversion guidance
- Suggests optimal SQLAlchemy patterns
- Identifies edge cases that require special handling
### Enhances Validation Skill
- Provides validation criteria for converted code
- Identifies test scenarios based on original patterns
- Suggests performance benchmarks
## Advanced Features
### Machine Learning Pattern Recognition
- Learns from conversion patterns across multiple projects
- Improves complexity scoring over time
- Identifies common pitfalls and optimization opportunities
- Provides pattern-based conversion recommendations
### Multi-Project Analysis
- Can analyze dependencies across multiple services
- Coordinates conversions for microservices architectures
- Manages database schema changes across services
- Coordinates testing across service boundaries
### Custom Rule Engine
- Supports custom conversion rules for specific projects
- Allows organization-specific patterns and conventions
- Integrates with existing code quality tools
- Supports compliance and security requirements
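One hypothetical shape such a rule could take; the `ConversionRule` interface below is illustrative, not a documented API of this agent:
```python
import re
from dataclasses import dataclass

# Hypothetical rule shape for organization-specific conventions.
@dataclass
class ConversionRule:
    name: str
    pattern: re.Pattern        # asyncpg usage to match
    replacement_hint: str      # suggested SQLAlchemy pattern
    severity: str = "medium"

ORG_RULES = [
    ConversionRule(
        name="ban-raw-fetch",
        pattern=re.compile(r"\.fetch\("),
        replacement_hint="Use session.execute(select(...)) instead of conn.fetch()",
    ),
]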
## Best Practices
### When to Use
- Large codebases with complex asyncpg usage
- Performance-critical applications requiring careful conversion
- Projects with custom database logic and optimizations
- Organizations with strict compliance requirements
### Integration Workflow
1. Run detection phase first to identify patterns
2. Use conversion analyzer for complex patterns
3. Follow recommended conversion plan
4. Use validation to ensure successful conversion
### Customization
- Can be configured with project-specific rules
- Supports custom complexity scoring criteria
- Integrates with existing development workflows
- Provides API for integration with CI/CD pipelines

agents/schema-reflector.md
# Schema Reflector Agent
A specialized AI agent that performs comprehensive database schema reflection, analyzes existing database structures, and generates SQLAlchemy model definitions with proper relationships, constraints, and performance optimizations.
## Capabilities
### Database Schema Analysis
- Connects to PostgreSQL/Supabase databases and reflects complete schema
- Analyzes tables, columns, constraints, indexes, and relationships
- Handles complex schemas including inheritance, partitions, and extensions
- Supports multiple schemas and custom types
### Intelligent Model Generation
- Generates SQLAlchemy models with proper type mappings and constraints
- Creates bi-directional relationships with optimal loading strategies
- Handles Supabase-specific features (UUIDs, JSONB, RLS policies)
- Optimizes for performance with lazy loading and efficient querying
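Under the hood, this kind of generation can build on SQLAlchemy's own reflection machinery; a minimal sketch, assuming `DATABASE_URL` points at a reachable PostgreSQL database containing a `users` table:
```python
import os
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.automap import automap_base

# Reflection is a one-time blocking step, so a synchronous engine is fine here.
engine = create_engine(os.environ["DATABASE_URL"])
metadata = MetaData()
metadata.reflect(bind=engine)  # load tables, columns, constraints, indexes

Base = automap_base(metadata=metadata)
Base.prepare()  # generate mapped classes and relationships from the schema

User = Base.classes.users  # reflected classes are keyed by table name
```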
### Schema Documentation
- Creates comprehensive documentation of database structure
- Documents business logic embedded in schema constraints
- Identifies potential issues and optimization opportunities
- Generates visual schema diagrams and relationship maps
### Performance Optimization
- Analyzes query patterns and suggests optimal indexing
- Identifies N+1 query problems and suggests solutions (see the sketch after this list)
- Recommends connection pooling configurations
- Suggests denormalization opportunities for performance
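For the N+1 bullet, a minimal sketch of the standard fix, assuming mapped `User` and `Post` models with a standard (non-dynamic) `User.posts` relationship:
```python
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import selectinload

async def list_users_with_posts(session: AsyncSession):
    # One query for users plus one IN-query for all their posts,
    # instead of one lazy query per user (the N+1 pattern).
    stmt = select(User).options(selectinload(User.posts))
    result = await session.execute(stmt)
    for user in result.scalars():
        print(user.email, [p.id for p in user.posts])  # already loaded
```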
## Usage Patterns
### Complete Schema Reflection
For generating models from existing databases:
```bash
# Reflect entire database
/agent:schema-reflector reflect --connection-string $DATABASE_URL --output ./models/
# Reflect specific schema
/agent:schema-reflector reflect --schema public --output ./models/base.py
# Reflect with Supabase optimizations
/agent:schema-reflector reflect --supabase --rls-aware --output ./models/supabase.py
```
### Incremental Schema Updates
For updating existing models when schema changes:
```bash
# Update existing models
/agent:schema-reflector update --existing-models ./models/ --connection-string $DATABASE_URL
# Generate migration scripts
/agent:schema-reflector generate-migration --from-schema ./current_schema.json --to-schema ./new_schema.json
```
### Schema Analysis and Optimization
For performance tuning and optimization:
```bash
# Analyze performance issues
/agent:schema-reflector analyze-performance --connection-string $DATABASE_URL --report
# Suggest optimizations
/agent:schema-reflector optimize --connection-string $DATABASE_URL --recommendations
# Generate indexing strategy
/agent:schema-reflector indexing-strategy --query-log ./slow_queries.log
```
## Advanced Features
### Multi-Schema Support
- Handles complex databases with multiple schemas
- Maintains schema separation in generated models
- Supports cross-schema relationships
- Handles schema-specific configurations and permissions
### Custom Type Handling
- Maps PostgreSQL custom types to SQLAlchemy types
- Handles enum types and domain constraints
- Supports array types and JSONB operations
- Creates custom type definitions when needed
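A short sketch of the column definitions these mappings produce, using SQLAlchemy's PostgreSQL dialect types; the `Document` model is illustrative:
```python
import enum
from sqlalchemy import Column, Enum, Integer, String
from sqlalchemy.dialects.postgresql import ARRAY, JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Status(enum.Enum):
    draft = "draft"
    published = "published"

class Document(Base):
    __tablename__ = "documents"
    id = Column(Integer, primary_key=True)
    status = Column(Enum(Status, name="document_status"))  # PostgreSQL enum type
    tags = Column(ARRAY(String))                           # text[] array column
    payload = Column(JSONB)                                # binary JSON, indexable
```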
### Supabase Integration
- Handles Supabase-specific table types and extensions
- Integrates with Supabase auth tables
- Understands Supabase RLS policy implications
- Optimizes for Supabase connection pooling
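A hedged connection sketch for the pooling bullet, assuming Supabase's transaction-mode pooler; the host, port, and credentials are placeholders. Disabling asyncpg's prepared-statement cache avoids errors when the pooler shares server connections between clients:
```python
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@db.example.supabase.co:6543/postgres"
    "?prepared_statement_cache_size=0",  # needed under transaction pooling
    pool_pre_ping=True,  # validate pooled connections before use
)
```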
### Performance-Aware Generation
- Generates models optimized for common query patterns
- Implements efficient relationship loading strategies
- Suggests optimal indexing strategies
- Identifies potential performance bottlenecks
## Output Formats
### SQLAlchemy Models
```python
# Generated model with relationships
import uuid
from sqlalchemy import Column, DateTime, String, func
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    email = Column(String(255), unique=True, nullable=False, index=True)
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    # Optimized relationships
    profiles = relationship("Profile", back_populates="user", lazy="selectin")
    posts = relationship("Post", back_populates="author", lazy="dynamic")
```
### Schema Documentation
```markdown
## Database Schema Documentation
### Users Table
- **Purpose**: User authentication and profile management
- **Primary Key**: UUID (auto-generated)
- **Indexes**: Unique index on email, created_at for sorting
- **Relationships**: One-to-many with profiles and posts
- **Constraints**: Email must be in a valid email format
- **Business Logic**: Users can have multiple profiles for different contexts
```
### Performance Analysis Report
```json
{
  "performance_analysis": {
    "query_patterns": {
      "frequent_queries": [
        "SELECT * FROM users WHERE email = ?",
        "SELECT users.*, profiles.* FROM users JOIN profiles ON users.id = profiles.user_id"
      ],
      "recommendations": [
        "Add composite index on (email, created_at)",
        "Implement query result caching for user lookups"
      ]
    },
    "bottlenecks": [
      {
        "table": "posts",
        "issue": "Missing index on author_id for frequent joins",
        "solution": "Add index on posts.author_id"
      }
    ]
  }
}
```
### Migration Scripts
```python
# Alembic migration script
from alembic import op
import sqlalchemy as sa

def upgrade():
    # Add new column
    op.add_column('users', sa.Column('last_login', sa.DateTime(timezone=True), nullable=True))
    # Create index for performance
    op.create_index('ix_users_email_created', 'users', ['email', 'created_at'], unique=False)

def downgrade():
    op.drop_index('ix_users_email_created', table_name='users')
    op.drop_column('users', 'last_login')
```
## Integration with Other Components
### Works with Model Generation Command
- Provides core reflection functionality for model generation
- Handles complex schema scenarios beyond basic reflection
- Generates optimized models with performance considerations
### Supports Validation Agent
- Provides schema validation capabilities
- Identifies inconsistencies between models and database
- Validates relationships and constraints
### Enhances Supabase Integration
- Understands Supabase-specific schema patterns
- Optimizes for Supabase performance characteristics
- Handles Supabase auth and storage integration
## Advanced Configuration
### Custom Type Mappings
```python
# Custom type mapping configuration
TYPE_MAPPINGS = {
    "custom_enum": "sqlalchemy.Enum",
    "vector": "pgvector.Vector",
    "tsvector": "sqlalchemy.dialects.postgresql.TSVECTOR"
}
```
### Relationship Loading Strategies
```python
# Configure optimal loading strategies
RELATIONSHIP_CONFIG = {
    "selectin": "small_result_sets",
    "joined": "always_needed",
    "subquery": "large_result_sets",
    "dynamic": "large_collections"
}
```
### Performance Optimization Rules
```python
# Custom optimization rules
OPTIMIZATION_RULES = {
    "index_foreign_keys": True,
    "add_composite_indexes": True,
    "optimize_date_queries": True,
    "cache_frequent_lookups": True
}
```
## Best Practices
### When to Use
- New projects starting from existing databases
- Migrating projects with complex schemas
- Performance optimization of existing SQLAlchemy models
- Documentation and analysis of legacy databases
### Integration Workflow
1. Connect to database and analyze schema structure
2. Generate initial models with basic relationships
3. Analyze query patterns and optimize models
4. Create migration scripts for schema changes
5. Validate generated models against database
### Performance Considerations
- Use lazy loading strategies appropriate to data sizes
- Implement proper indexing based on query patterns
- Consider connection pooling for high-traffic applications (see the sketch after this list)
- Monitor performance after deployment and optimize as needed
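A minimal pooling sketch for the connection-pooling point; the numbers are illustrative starting values, not universal recommendations:
```python
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/app",
    pool_size=10,        # persistent connections kept open
    max_overflow=20,     # extra connections allowed under burst load
    pool_recycle=1800,   # recycle connections older than 30 minutes
    pool_pre_ping=True,  # validate connections before handing them out
)
```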
### Schema Evolution
- Handle schema changes gracefully with migrations
- Maintain backward compatibility when possible
- Test migrations thoroughly before deployment
- Document schema changes and their implications