Initial commit

Zhongwei Li
2025-11-30 08:52:02 +08:00
commit 65465ede7b
7 changed files with 1984 additions and 0 deletions


@@ -0,0 +1,12 @@
{
  "name": "python-architecture-review",
  "description": "Comprehensive design architecture review for Python backend applications, analyzing scalability, security, performance, and best practices",
  "version": "1.0.0",
  "author": {
    "name": "rknall",
    "email": "zhongweili@tubi.tv"
  },
  "skills": [
    "./"
  ]
}

README.md Normal file

@@ -0,0 +1,3 @@
# python-architecture-review
Comprehensive design architecture review for Python backend applications, analyzing scalability, security, performance, and best practices

SKILL.md Normal file

@@ -0,0 +1,494 @@
---
name: "Python Backend Architecture Review"
description: "Comprehensive design architecture review for Python backend applications. Use this skill when users ask you to review, analyze, or provide feedback on backend architecture designs, system design documents, or Python application architecture. Covers scalability, security, performance, database design, API design, microservices patterns, deployment architecture, and best practices."
---
# Python Backend Architecture Review
This skill provides comprehensive architecture review capabilities for Python backend applications, covering all aspects of system design from infrastructure to code organization.
## When to Use This Skill
Activate this skill when the user requests:
- Review of a backend architecture design document
- Feedback on system design for a Python application
- Analysis of scalability patterns and approaches
- Security review of backend architecture
- Database design evaluation
- API design assessment
- Microservices architecture review
- Performance optimization recommendations
- Cloud infrastructure architecture review
- Code organization and project structure analysis
## Review Framework
### 1. Initial Analysis
When a user provides an architecture document or describes their system, begin by:
1. **Understanding Context**
   - Ask clarifying questions about:
     - Expected scale (users, requests/sec, data volume)
     - Performance requirements (latency, throughput)
     - Security and compliance requirements
     - Team size and expertise
     - Budget constraints
     - Timeline expectations

2. **Document Analysis**
   - If architecture diagrams or documents are provided, analyze:
     - Component relationships and boundaries
     - Data flow patterns
     - External dependencies
     - Technology stack choices
     - Deployment topology
### 2. Comprehensive Review Areas
Evaluate the architecture across these dimensions:
#### A. System Architecture & Design Patterns
**Evaluate:**
- Overall architectural style (monolith, microservices, serverless, hybrid)
- Service boundaries and responsibilities
- Communication patterns (sync/async, REST/GraphQL/gRPC)
- Event-driven architecture components
- CQRS and Event Sourcing patterns where applicable
- Domain-Driven Design principles
- Separation of concerns
- Dependency management
**Provide Feedback On:**
- Whether the chosen architecture matches the scale and complexity
- Over-engineering or under-engineering concerns
- Missing components or services
- Tight coupling issues
- Single points of failure
- Scalability bottlenecks
**Python-Specific Considerations:**
- Framework selection (FastAPI, Django, Flask, etc.)
- ASGI vs WSGI considerations
- Async/await patterns and usage
- Python's GIL impact on architecture decisions
- Multi-processing vs multi-threading strategies
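
To make the GIL point concrete, a minimal sketch (function names are illustrative): pure-Python CPU-bound work gains nothing from threads because only one thread executes bytecode at a time, so it is typically fanned out to processes instead.

```python
import concurrent.futures


def cpu_bound(n: int) -> int:
    # Pure-Python arithmetic holds the GIL, so threads add no parallelism here
    return sum(i * i for i in range(n))


def run_cpu_bound(inputs: list[int]) -> list[int]:
    # ProcessPoolExecutor sidesteps the GIL via separate interpreter processes
    with concurrent.futures.ProcessPoolExecutor() as pool:
        return list(pool.map(cpu_bound, inputs))
```

I/O-bound work, by contrast, is usually fine on threads or an asyncio event loop, since the GIL is released while waiting on the network.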
#### B. Database Architecture
**Evaluate:**
- Database type selection (PostgreSQL, MySQL, MongoDB, Redis, etc.)
- Data modeling approach
- Normalization vs denormalization strategy
- Sharding and partitioning plans
- Read replicas and replication strategy
- Caching layers (Redis, Memcached)
- Database connection pooling
- Transaction management
- Data consistency models (strong, eventual)
**Provide Feedback On:**
- Schema design quality
- Index strategies
- Query optimization patterns
- N+1 query prevention
- Database migration strategy
- Backup and disaster recovery
- Multi-tenancy approaches if applicable
- Data retention and archival strategies
**Python-Specific Considerations:**
- ORM selection (SQLAlchemy, Django ORM, Tortoise ORM, etc.)
- Raw SQL vs ORM tradeoffs
- Async database drivers (asyncpg, motor, etc.)
- Migration tools (Alembic, Django migrations)
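
The N+1 issue can be seen independent of any ORM with stdlib `sqlite3` (schema and table names here are illustrative): fetch all child rows in one `IN (...)` query instead of one query per parent. ORMs expose the same idea as eager loading, e.g. SQLAlchemy's `selectinload`.

```python
import sqlite3


def posts_by_user(conn: sqlite3.Connection, user_ids: list[int]) -> dict[int, list[str]]:
    # One round trip for all users instead of len(user_ids) separate queries
    marks = ",".join("?" * len(user_ids))
    rows = conn.execute(
        f"SELECT user_id, title FROM posts WHERE user_id IN ({marks})",
        user_ids,
    ).fetchall()
    grouped: dict[int, list[str]] = {uid: [] for uid in user_ids}
    for uid, title in rows:
        grouped[uid].append(title)
    return grouped
```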
#### C. API Design & Communication
**Evaluate:**
- API design patterns (RESTful, GraphQL, gRPC)
- Endpoint structure and naming
- Request/response formats
- Versioning strategy
- Authentication and authorization
- Rate limiting and throttling
- API documentation approach
- Contract-first vs code-first design
- WebSocket usage for real-time features
- Message queue integration (RabbitMQ, Kafka, SQS)
**Provide Feedback On:**
- API consistency and conventions
- Error handling and status codes
- Pagination strategies
- Filtering and search capabilities
- Idempotency guarantees
- Backward compatibility approach
- GraphQL schema design if applicable
- gRPC service definitions if applicable
**Python-Specific Considerations:**
- FastAPI automatic OpenAPI generation
- Pydantic validation models
- Django REST Framework serializers
- GraphQL libraries (Strawberry, Graphene, Ariadne)
- gRPC-python code generation
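
A minimal sketch of Pydantic-based request validation (the model and fields are made up for illustration): malformed payloads are rejected, with structured errors, before they reach business logic.

```python
from pydantic import BaseModel, ValidationError


class UserIn(BaseModel):
    email: str
    age: int


def parse_user(payload: dict):
    """Return (model, None) on success or (None, errors) on failure."""
    try:
        return UserIn(**payload), None
    except ValidationError as exc:
        return None, exc.errors()
```

In FastAPI, declaring `UserIn` as a route parameter performs this validation automatically and feeds the generated OpenAPI schema.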
#### D. Security Architecture
**Evaluate:**
- Authentication mechanisms (JWT, OAuth2, session-based)
- Authorization model (RBAC, ABAC, policy-based)
- API security (rate limiting, CORS, CSRF protection)
- Data encryption (at rest and in transit)
- Secrets management approach
- Network security (VPC, security groups, firewall rules)
- Input validation and sanitization
- SQL injection prevention
- XSS and CSRF protections
- Dependency vulnerability scanning
- Security headers implementation
**Provide Feedback On:**
- Authentication/authorization gaps
- Sensitive data exposure risks
- Missing security controls
- Overly permissive access
- Insecure defaults
- Lack of audit logging
- Missing security monitoring
**Python-Specific Considerations:**
- Usage of python-jose, PyJWT for token handling
- Password hashing with bcrypt, argon2
- Environment variable management (python-dotenv)
- Security middleware in frameworks
- SQLAlchemy parameterized queries
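
A stdlib-only sketch of salted password hashing. Production systems commonly use bcrypt or argon2 as noted above; PBKDF2 via `hashlib` shows the same shape (per-user salt, slow KDF, constant-time compare) without extra dependencies.

```python
import hashlib
import hmac
import os
from typing import Optional

ITERATIONS = 600_000  # tune to your hardware; slower for attackers too


def hash_password(password: str, salt: Optional[bytes] = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)  # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, expected)
```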
#### E. Scalability & Performance
**Evaluate:**
- Horizontal vs vertical scaling strategy
- Load balancing approach
- Auto-scaling configuration
- Caching strategy (application, database, CDN)
- Async processing for long-running tasks
- Background job processing (Celery, RQ, Dramatiq)
- Queue-based architectures
- Database read replicas
- Connection pooling
- Resource optimization
**Provide Feedback On:**
- Scalability bottlenecks
- Missing caching layers
- Inefficient data access patterns
- Synchronous operations that should be async
- Missing queue infrastructure
- Poor resource utilization
- Lack of performance monitoring
**Python-Specific Considerations:**
- ASGI server selection (Uvicorn, Hypercorn)
- Gunicorn worker configuration
- Celery worker configuration
- Async framework usage (asyncio best practices)
- Performance profiling tools (cProfile, py-spy)
- GIL workarounds for CPU-bound tasks
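
One common review finding is blocking calls inside async handlers. A small sketch (`blocking_io` stands in for a blocking driver or HTTP client): offload to a thread so the event loop stays responsive.

```python
import asyncio
import time


def blocking_io(x: int) -> int:
    time.sleep(0.01)  # stands in for a blocking driver or HTTP call
    return x * 2


async def handle_many(values: list[int]) -> list[int]:
    # asyncio.to_thread keeps the loop free while blocking work runs in threads
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_io, v) for v in values)
    )
```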
#### F. Observability & Monitoring
**Evaluate:**
- Logging strategy and centralization
- Metrics collection and aggregation
- Distributed tracing implementation
- Error tracking and alerting
- Health check endpoints
- Performance monitoring
- Business metrics tracking
- Log aggregation tools (ELK, Loki, CloudWatch)
- APM tools (DataDog, New Relic, Prometheus)
**Provide Feedback On:**
- Missing observability components
- Insufficient logging detail
- Lack of structured logging
- No distributed tracing
- Missing critical alerts
- No performance baselines
- Inadequate error tracking
**Python-Specific Considerations:**
- Structured logging libraries (structlog, python-json-logger)
- OpenTelemetry Python SDK
- Sentry integration
- StatsD/Prometheus client libraries
- Context propagation in async code
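
A minimal structured-logging sketch using only the stdlib (structlog or python-json-logger offer richer versions): emit one JSON object per log line so aggregators can index fields instead of grepping free text.

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)


def make_logger(name: str) -> logging.Logger:
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```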
#### G. Deployment & Infrastructure
**Evaluate:**
- Containerization strategy (Docker)
- Orchestration approach (Kubernetes, ECS, etc.)
- CI/CD pipeline design
- Environment management (dev, staging, prod)
- Infrastructure as Code (Terraform, CloudFormation)
- Blue-green or canary deployment strategies
- Rollback procedures
- Configuration management
- Secret management in deployment
**Provide Feedback On:**
- Deployment complexity
- Missing automation
- Lack of environment parity
- No rollback strategy
- Insufficient testing in pipeline
- Manual deployment steps
- Missing infrastructure versioning
**Python-Specific Considerations:**
- Docker image optimization (multi-stage builds)
- Dependency management (pip, Poetry, PDM)
- Virtual environment handling in containers
- Python version management
- Compiled dependencies (wheel files)
#### H. Code Organization & Project Structure
**Evaluate:**
- Project directory structure
- Module and package organization
- Dependency injection patterns
- Configuration management
- Environment variable usage
- Testing strategy and organization
- Code reusability patterns
- Package/module boundaries
**Provide Feedback On:**
- Unclear module responsibilities
- Circular dependencies
- Poorly organized code structure
- Lack of separation between layers
- Missing configuration abstraction
- Hard-coded values
- Insufficient test coverage
**Python-Specific Considerations:**
- Package structure (src layout vs flat layout)
- __init__.py organization
- Import patterns and circular import prevention
- Type hints and mypy configuration
- Pydantic settings management
- pytest organization and fixtures
#### I. Data Flow & State Management
**Evaluate:**
- Request lifecycle and data flow
- State management approach
- Session management
- Cache invalidation strategy
- Event flow in event-driven systems
- Data transformation layers
- Data validation points
**Provide Feedback On:**
- Unclear data flow
- State synchronization issues
- Missing validation layers
- Inconsistent data transformation
- Cache coherence problems
- Session management issues
#### J. Resilience & Error Handling
**Evaluate:**
- Retry mechanisms and backoff strategies
- Circuit breaker patterns
- Timeout configurations
- Graceful degradation approach
- Error handling consistency
- Dead letter queue handling
- Bulkhead patterns
- Rate limiting and throttling
**Provide Feedback On:**
- Missing fault tolerance patterns
- Cascading failure risks
- Lack of timeouts
- No circuit breakers for external services
- Inconsistent error handling
- Missing retry logic
- No graceful degradation
**Python-Specific Considerations:**
- tenacity library for retries
- asyncio timeout handling
- Exception hierarchy design
- Context managers for resource cleanup
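
The timeout point can be sketched with stdlib asyncio (`call_external` stands in for a slow downstream service): every outbound await gets a bound, with a fallback instead of a hung request.

```python
import asyncio


async def call_external() -> str:
    await asyncio.sleep(10)  # stands in for a slow downstream service
    return "payload"


async def guarded_call() -> str:
    try:
        return await asyncio.wait_for(call_external(), timeout=0.05)
    except asyncio.TimeoutError:
        # Degrade gracefully instead of hanging the request
        return "fallback"
```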
### 3. Review Output Format
Structure your review as follows:
#### Executive Summary
- Overall architecture assessment (1-3 paragraphs)
- Key strengths identified
- Critical concerns requiring immediate attention
- Overall maturity and readiness assessment
#### Detailed Findings
For each review area, provide:
**[Area Name]**
**Strengths:**
- Bullet points of what's done well
**Concerns:**
- HIGH: Critical issues that must be addressed
- MEDIUM: Important issues that should be addressed
- LOW: Nice-to-have improvements
**Recommendations:**
- Specific, actionable recommendations
- Alternative approaches to consider
- Best practices to follow
- Python-specific library or tool suggestions
#### Architecture Patterns & Best Practices
Suggest proven patterns relevant to their use case:
- Specific design patterns (Repository, Factory, Strategy, etc.)
- Integration patterns
- Python-specific idioms
- Framework-specific best practices
#### Technology Stack Assessment
Review their chosen technologies:
- Appropriateness for the use case
- Team expertise considerations
- Community support and maturity
- Alternative options to consider
- Python package ecosystem recommendations
#### Scalability Roadmap
If the architecture needs to scale:
- Current limitations
- Scaling stages and triggers
- Migration strategies
- Cost projections at different scales
#### Security Checklist
Provide a specific security checklist:
- Authentication/authorization items
- Data protection items
- Network security items
- Compliance considerations (GDPR, HIPAA, etc.)
- Python security best practices
#### Next Steps & Priorities
Rank recommendations by:
1. Must-fix items (blocking issues)
2. Should-fix items (important for production)
3. Nice-to-have items (improvements)
Include estimated effort and dependencies.
### 4. Interactive Review Process
When conducting the review:
1. **Start with clarifying questions** if the architecture description is incomplete
2. **Ask about constraints** (budget, timeline, team size)
3. **Understand the domain** and specific business requirements
4. **Request diagrams or documentation** if not provided
5. **Provide incremental feedback** for large architectures
6. **Offer to dive deeper** into specific areas of concern
7. **Suggest example implementations** or reference architectures
8. **Provide code examples** for recommended patterns
### 5. Reference Resources
When relevant, reference:
- 12-Factor App principles
- Python package recommendations (awesome-python)
- Cloud provider best practices (AWS Well-Architected, etc.)
- Security frameworks (OWASP Top 10)
- Performance benchmarking resources
- Open-source reference implementations
- Python-specific resources such as PEPs (Python Enhancement Proposals)
### 6. Tools & Automation Recommendations
Suggest tools for:
- Static analysis (Ruff, pylint, flake8, mypy)
- Security scanning (Bandit, Safety, Snyk)
- Performance profiling (cProfile, py-spy, scalene)
- Load testing (Locust, Artillery)
- Monitoring (Prometheus, Grafana, DataDog)
- Documentation (Sphinx, MkDocs)
- Dependency management (Poetry, PDM, pip-tools)
- Code formatting (Black, Ruff)
## Communication Style
When providing reviews:
- Be constructive and specific
- Explain the "why" behind recommendations
- Provide examples and code snippets
- Balance criticism with recognition of good practices
- Prioritize issues clearly
- Offer multiple solutions when applicable
- Consider the team's context and constraints
- Use clear, professional language
- Include Python code examples where helpful
- Reference Python documentation and PEPs
## Example Questions to Ask
Before starting a review, consider asking:
1. What is the expected scale of this system (users, requests, data)?
2. What are the critical performance requirements?
3. Are there specific compliance or security requirements?
4. What is the team's experience level with Python backend development?
5. What is the current development stage (design, prototype, production)?
6. Are there any existing systems this needs to integrate with?
7. What is the budget for infrastructure?
8. What is the timeline for deployment?
9. Are there any technology preferences or constraints?
10. What are the most critical features for the initial release?
## Deliverables
At the end of a review, you should have provided:
1. Executive summary with overall assessment
2. Detailed findings across all review areas
3. Prioritized list of recommendations
4. Security checklist
5. Scalability roadmap (if applicable)
6. Technology stack assessment
7. Next steps with effort estimates
8. Optional: Example code or architectural diagrams
9. Optional: Reference links and resources
## Continuous Improvement
After the initial review:
- Offer to review specific areas in more depth
- Provide guidance on implementing recommendations
- Help with specific technical challenges
- Review updated designs
- Answer follow-up questions
Remember: The goal is to help the user build a robust, scalable, secure, and maintainable Python backend system that meets their specific needs and constraints.

architecture-checklist.md Normal file

@@ -0,0 +1,154 @@
# Python Backend Architecture Review Checklist
This checklist serves as a quick reference for conducting comprehensive architecture reviews.
## System Architecture
- [ ] Architecture style matches scale and complexity
- [ ] Service boundaries are well-defined
- [ ] Communication patterns are appropriate
- [ ] No unnecessary over-engineering
- [ ] Single points of failure identified and addressed
- [ ] Dependency management is clear
- [ ] Framework choice is justified (FastAPI/Django/Flask/etc.)
- [ ] Async patterns are properly utilized where needed
## Database Architecture
- [ ] Database type selection is appropriate
- [ ] Schema is properly normalized/denormalized
- [ ] Indexes are strategically placed
- [ ] Sharding/partitioning strategy exists if needed
- [ ] Read replicas planned for scale
- [ ] Caching layer is implemented
- [ ] Connection pooling is configured
- [ ] N+1 query issues are prevented
- [ ] ORM choice is appropriate
- [ ] Migration strategy is defined
- [ ] Backup and DR plans exist
## API Design
- [ ] API design pattern is consistent (REST/GraphQL/gRPC)
- [ ] Endpoints follow naming conventions
- [ ] Versioning strategy is defined
- [ ] Authentication/authorization is implemented
- [ ] Rate limiting exists
- [ ] API documentation is auto-generated
- [ ] Error handling is consistent
- [ ] Pagination is implemented
- [ ] Input validation uses Pydantic or similar
- [ ] OpenAPI/Swagger documentation exists
## Security
- [ ] Authentication mechanism is secure (JWT/OAuth2)
- [ ] Authorization model is well-defined (RBAC/ABAC)
- [ ] CORS is properly configured
- [ ] CSRF protection is enabled where needed
- [ ] Data is encrypted in transit (HTTPS/TLS)
- [ ] Data is encrypted at rest where needed
- [ ] Secrets management solution exists
- [ ] SQL injection is prevented (parameterized queries)
- [ ] XSS protections are in place
- [ ] Security headers are configured
- [ ] Dependency scanning is automated
- [ ] Password hashing uses bcrypt/argon2
- [ ] Audit logging is implemented
- [ ] Rate limiting prevents abuse
- [ ] Input sanitization is thorough
## Scalability & Performance
- [ ] Scaling strategy is defined (horizontal/vertical)
- [ ] Load balancer is configured
- [ ] Auto-scaling rules exist
- [ ] Caching strategy is multi-layered
- [ ] Background jobs use queue system (Celery/RQ)
- [ ] Long-running tasks are async
- [ ] Database connection pooling is optimized
- [ ] ASGI server is production-ready
- [ ] GIL limitations are addressed
- [ ] Performance monitoring is in place
- [ ] Load testing has been conducted
## Observability
- [ ] Structured logging is implemented
- [ ] Log aggregation is configured
- [ ] Metrics are collected (Prometheus/StatsD)
- [ ] Distributed tracing exists (OpenTelemetry)
- [ ] Error tracking is configured (Sentry)
- [ ] Health check endpoints exist
- [ ] Alerting rules are defined
- [ ] Performance baselines are established
- [ ] Business metrics are tracked
- [ ] Dashboards are created
## Deployment & Infrastructure
- [ ] Dockerfile is optimized (multi-stage)
- [ ] Container orchestration is configured
- [ ] CI/CD pipeline is automated
- [ ] Environment parity exists (dev/staging/prod)
- [ ] Infrastructure as Code is used
- [ ] Deployment strategy is safe (blue-green/canary)
- [ ] Rollback procedure is defined
- [ ] Configuration is externalized
- [ ] Secrets are managed securely
- [ ] Dependencies are pinned and managed (Poetry/PDM)
## Code Organization
- [ ] Project structure is clear and logical
- [ ] Module boundaries are well-defined
- [ ] No circular dependencies exist
- [ ] Dependency injection is used appropriately
- [ ] Configuration management is centralized
- [ ] Type hints are used throughout
- [ ] Tests are well-organized (pytest)
- [ ] Code follows PEP 8 standards
- [ ] Linting/formatting is automated (Ruff/Black)
## Resilience
- [ ] Retry logic exists for external calls
- [ ] Circuit breakers protect external services
- [ ] Timeouts are configured appropriately
- [ ] Graceful degradation is implemented
- [ ] Error handling is consistent
- [ ] Dead letter queues exist
- [ ] Bulkhead patterns separate concerns
- [ ] Rate limiting protects resources
## Testing
- [ ] Unit tests exist (>80% coverage)
- [ ] Integration tests cover critical paths
- [ ] API tests validate contracts
- [ ] Load tests verify performance
- [ ] Security tests check vulnerabilities
- [ ] Test fixtures are reusable
- [ ] Mocking is used appropriately
- [ ] CI runs tests automatically
## Documentation
- [ ] API documentation is complete
- [ ] Architecture diagrams exist
- [ ] Setup instructions are clear
- [ ] Configuration is documented
- [ ] Deployment process is documented
- [ ] Code has docstrings
- [ ] README is comprehensive
- [ ] Contributing guidelines exist
## Compliance & Standards
- [ ] GDPR compliance addressed if applicable
- [ ] HIPAA compliance addressed if applicable
- [ ] SOC 2 requirements met if applicable
- [ ] Data retention policies defined
- [ ] Privacy policies implemented
- [ ] Audit trails exist
- [ ] 12-Factor App principles followed

common-patterns.md Normal file

@@ -0,0 +1,672 @@
# Common Python Backend Architecture Patterns
This document provides reference implementations and patterns for common architectural decisions.
## 1. Repository Pattern
```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar, Optional, List
from sqlalchemy.orm import Session

T = TypeVar('T')


class BaseRepository(ABC, Generic[T]):
    """Abstract base repository for data access"""

    @abstractmethod
    async def get(self, id: int) -> Optional[T]:
        pass

    @abstractmethod
    async def list(self, skip: int = 0, limit: int = 100) -> List[T]:
        pass

    @abstractmethod
    async def create(self, obj: T) -> T:
        pass

    @abstractmethod
    async def update(self, id: int, obj: T) -> Optional[T]:
        pass

    @abstractmethod
    async def delete(self, id: int) -> bool:
        pass


class SQLAlchemyRepository(BaseRepository[T]):
    """SQLAlchemy implementation of repository pattern.

    Note: uses a synchronous Session inside async methods for brevity;
    swap in AsyncSession for a truly non-blocking implementation.
    """

    def __init__(self, session: Session, model_class: type):
        self.session = session
        self.model_class = model_class

    async def get(self, id: int) -> Optional[T]:
        return self.session.query(self.model_class).filter_by(id=id).first()

    async def list(self, skip: int = 0, limit: int = 100) -> List[T]:
        return self.session.query(self.model_class).offset(skip).limit(limit).all()

    async def create(self, obj: T) -> T:
        self.session.add(obj)
        self.session.commit()
        self.session.refresh(obj)
        return obj

    async def update(self, id: int, obj: T) -> Optional[T]:
        existing = await self.get(id)
        if existing:
            for key, value in obj.__dict__.items():
                if key.startswith('_'):  # skip SQLAlchemy internal state
                    continue
                setattr(existing, key, value)
            self.session.commit()
            return existing
        return None

    async def delete(self, id: int) -> bool:
        obj = await self.get(id)
        if obj:
            self.session.delete(obj)
            self.session.commit()
            return True
        return False
```
## 2. Service Layer Pattern
```python
from typing import Protocol

from .repositories import UserRepository
from .models import User
from .schemas import UserCreate, UserUpdate


class IUserService(Protocol):
    """Interface for user service"""

    async def get_user(self, user_id: int) -> User:
        ...

    async def create_user(self, user_data: UserCreate) -> User:
        ...


class UserService:
    """Service layer for user business logic"""

    def __init__(self, user_repo: UserRepository):
        self.user_repo = user_repo

    async def get_user(self, user_id: int) -> User:
        user = await self.user_repo.get(user_id)
        if not user:
            raise ValueError(f"User {user_id} not found")
        return user

    async def create_user(self, user_data: UserCreate) -> User:
        # Business logic here
        if await self._email_exists(user_data.email):
            raise ValueError("Email already registered")
        user = User(**user_data.dict())
        return await self.user_repo.create(user)

    async def _email_exists(self, email: str) -> bool:
        # Check if email exists
        return await self.user_repo.find_by_email(email) is not None
```
## 3. Dependency Injection with FastAPI
```python
from typing import Generator

from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

app = FastAPI()


# Database session dependency
# (SessionLocal, UserRepository, UserService come from your own modules)
def get_db() -> Generator[Session, None, None]:
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


# Repository dependency
def get_user_repository(db: Session = Depends(get_db)) -> UserRepository:
    return UserRepository(db)


# Service dependency
def get_user_service(
    user_repo: UserRepository = Depends(get_user_repository)
) -> UserService:
    return UserService(user_repo)


# Route using dependency injection
@app.get("/users/{user_id}")
async def get_user(
    user_id: int,
    user_service: UserService = Depends(get_user_service)
):
    return await user_service.get_user(user_id)
```
## 4. Event-Driven Architecture
```python
import asyncio
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Callable, Dict, List


@dataclass
class Event:
    """Base event class"""
    event_type: str
    data: Dict[str, Any] = field(default_factory=dict)
    # default_factory: stamp each event at creation, not once at class definition
    timestamp: datetime = field(default_factory=datetime.utcnow)


class EventBus:
    """Simple in-memory event bus"""

    def __init__(self):
        self._subscribers: Dict[str, List[Callable]] = {}

    def subscribe(self, event_type: str, handler: Callable):
        """Subscribe to an event type"""
        if event_type not in self._subscribers:
            self._subscribers[event_type] = []
        self._subscribers[event_type].append(handler)

    async def publish(self, event: Event):
        """Publish an event to all subscribers"""
        handlers = self._subscribers.get(event.event_type, [])
        await asyncio.gather(*[handler(event) for handler in handlers])


# Usage
event_bus = EventBus()


@dataclass
class UserCreatedEvent(Event):
    event_type: str = "user.created"


async def send_welcome_email(event: Event):
    print(f"Sending welcome email for user {event.data['user_id']}")


async def create_user_profile(event: Event):
    print(f"Creating profile for user {event.data['user_id']}")


# Subscribe handlers
event_bus.subscribe("user.created", send_welcome_email)
event_bus.subscribe("user.created", create_user_profile)

# Publish event (from synchronous code; inside a coroutine, just `await` it)
asyncio.run(event_bus.publish(UserCreatedEvent(data={"user_id": 123})))
```
## 5. Circuit Breaker Pattern
```python
from enum import Enum
from datetime import datetime, timedelta
from typing import Callable, Any
import asyncio


class CircuitState(Enum):
    CLOSED = "closed"
    OPEN = "open"
    HALF_OPEN = "half_open"


class CircuitBreaker:
    """Circuit breaker for external service calls"""

    def __init__(
        self,
        failure_threshold: int = 5,
        timeout: int = 60,
        recovery_timeout: int = 30
    ):
        self.failure_threshold = failure_threshold
        self.timeout = timeout
        self.recovery_timeout = recovery_timeout
        self.failure_count = 0
        self.last_failure_time = None
        self.state = CircuitState.CLOSED

    async def call(self, func: Callable, *args, **kwargs) -> Any:
        """Execute function with circuit breaker protection"""
        if self.state == CircuitState.OPEN:
            if self._should_attempt_reset():
                self.state = CircuitState.HALF_OPEN
            else:
                raise Exception("Circuit breaker is OPEN")
        try:
            result = await asyncio.wait_for(
                func(*args, **kwargs),
                timeout=self.timeout
            )
            self._on_success()
            return result
        except Exception as e:
            self._on_failure()
            raise e

    def _on_success(self):
        """Handle successful call"""
        self.failure_count = 0
        self.state = CircuitState.CLOSED

    def _on_failure(self):
        """Handle failed call"""
        self.failure_count += 1
        self.last_failure_time = datetime.utcnow()
        if self.failure_count >= self.failure_threshold:
            self.state = CircuitState.OPEN

    def _should_attempt_reset(self) -> bool:
        """Check if enough time has passed to retry"""
        if self.last_failure_time is None:
            return True
        return (
            datetime.utcnow() - self.last_failure_time
        ).total_seconds() >= self.recovery_timeout


# Usage (inside an async context)
circuit_breaker = CircuitBreaker(failure_threshold=3, timeout=5)


async def call_external_api():
    # External API call
    pass


result = await circuit_breaker.call(call_external_api)
```
## 6. CQRS Pattern
```python
from abc import ABC, abstractmethod
from typing import Any, Generic, TypeVar
from pydantic import BaseModel


# Commands
class Command(BaseModel):
    """Base command"""
    pass


class CreateUserCommand(Command):
    email: str
    name: str


# Command Handlers
TCommand = TypeVar('TCommand', bound=Command)


class CommandHandler(ABC, Generic[TCommand]):
    """Abstract command handler"""

    @abstractmethod
    async def handle(self, command: TCommand) -> Any:
        pass


class CreateUserCommandHandler(CommandHandler[CreateUserCommand]):
    """Handler for creating users"""

    def __init__(self, user_repo: UserRepository):
        self.user_repo = user_repo

    async def handle(self, command: CreateUserCommand) -> User:
        user = User(email=command.email, name=command.name)
        return await self.user_repo.create(user)


# Queries
class Query(BaseModel):
    """Base query"""
    pass


class GetUserQuery(Query):
    user_id: int


# Query Handlers
TQuery = TypeVar('TQuery', bound=Query)


class QueryHandler(ABC, Generic[TQuery]):
    """Abstract query handler"""

    @abstractmethod
    async def handle(self, query: TQuery) -> Any:
        pass


class GetUserQueryHandler(QueryHandler[GetUserQuery]):
    """Handler for getting users"""

    def __init__(self, user_repo: UserRepository):
        self.user_repo = user_repo

    async def handle(self, query: GetUserQuery) -> User:
        return await self.user_repo.get(query.user_id)


# Command Bus
class CommandBus:
    """Simple command bus"""

    def __init__(self):
        self._handlers = {}

    def register(self, command_type: type, handler: CommandHandler):
        self._handlers[command_type] = handler

    async def execute(self, command: Command):
        handler = self._handlers.get(type(command))
        if not handler:
            raise ValueError(f"No handler for {type(command)}")
        return await handler.handle(command)
```
## 7. Retry Pattern with Exponential Backoff
```python
import asyncio
from typing import Callable, TypeVar
from functools import wraps

T = TypeVar('T')


def retry_with_backoff(
    max_retries: int = 3,
    base_delay: float = 1.0,
    max_delay: float = 60.0,
    exponential_base: float = 2.0
):
    """Decorator for retry logic with exponential backoff"""
    def decorator(func: Callable[..., T]) -> Callable[..., T]:
        @wraps(func)
        async def wrapper(*args, **kwargs) -> T:
            retries = 0
            while retries < max_retries:
                try:
                    return await func(*args, **kwargs)
                except Exception as e:
                    retries += 1
                    if retries >= max_retries:
                        raise e
                    delay = min(
                        base_delay * (exponential_base ** (retries - 1)),
                        max_delay
                    )
                    print(f"Retry {retries}/{max_retries} after {delay}s")
                    await asyncio.sleep(delay)
            raise Exception("Max retries exceeded")
        return wrapper
    return decorator


# Usage
@retry_with_backoff(max_retries=3, base_delay=1.0)
async def fetch_data_from_api():
    # API call that might fail
    pass
```
## 8. Settings Management
```python
from pydantic_settings import BaseSettings
from functools import lru_cache


class Settings(BaseSettings):
    """Application settings"""

    # API Settings
    api_title: str = "My API"
    api_version: str = "1.0.0"

    # Database
    database_url: str
    db_pool_size: int = 5

    # Redis
    redis_url: str
    redis_ttl: int = 3600

    # Security
    secret_key: str
    algorithm: str = "HS256"
    access_token_expire_minutes: int = 30

    # External Services
    external_api_url: str
    external_api_key: str

    # Observability
    log_level: str = "INFO"
    sentry_dsn: str | None = None

    class Config:
        env_file = ".env"
        case_sensitive = False


@lru_cache()
def get_settings() -> Settings:
    """Cached settings instance"""
    return Settings()


# Usage
settings = get_settings()
```
## 9. Background Task Processing
```python
from celery import Celery
from typing import Any
# Celery app
celery_app = Celery(
"tasks",
broker="redis://localhost:6379/0",
backend="redis://localhost:6379/0"
)
celery_app.conf.update(
task_serializer="json",
accept_content=["json"],
result_serializer="json",
timezone="UTC",
enable_utc=True,
task_track_started=True,
task_time_limit=300, # 5 minutes
task_soft_time_limit=240, # 4 minutes
)
@celery_app.task(bind=True, max_retries=3)
def process_data(self, data: dict) -> Any:
"""Background task with retry logic"""
try:
# Process data
result = heavy_computation(data)
return result
except Exception as exc:
# Retry with exponential backoff
raise self.retry(exc=exc, countdown=2 ** self.request.retries)
# FastAPI integration
from fastapi import FastAPI, BackgroundTasks
app = FastAPI()
@app.post("/process")
async def trigger_processing(data: dict, background_tasks: BackgroundTasks):
# Option 1: FastAPI background tasks (for quick tasks)
background_tasks.add_task(quick_task, data)
# Option 2: Celery (for long-running tasks)
task = process_data.delay(data)
return {"task_id": task.id}
@app.get("/task/{task_id}")
async def get_task_status(task_id: str):
task = celery_app.AsyncResult(task_id)
return {
"task_id": task_id,
"status": task.status,
"result": task.result if task.ready() else None
}
```
## 10. API Versioning
```python
from fastapi import APIRouter, FastAPI
from enum import Enum
class APIVersion(str, Enum):
V1 = "v1"
V2 = "v2"
app = FastAPI()
# Version 1 router
router_v1 = APIRouter(prefix="/api/v1", tags=["v1"])
@router_v1.get("/users/{user_id}")
async def get_user_v1(user_id: int):
return {"id": user_id, "version": "v1"}
# Version 2 router
router_v2 = APIRouter(prefix="/api/v2", tags=["v2"])
@router_v2.get("/users/{user_id}")
async def get_user_v2(user_id: int):
return {
"id": user_id,
"version": "v2",
"additional_field": "new in v2"
}
app.include_router(router_v1)
app.include_router(router_v2)
# Header-based versioning (alternative)
from fastapi import Header
@app.get("/users/{user_id}")
async def get_user(
user_id: int,
api_version: str = Header(default="v1", alias="X-API-Version")
):
if api_version == "v2":
return {"id": user_id, "version": "v2"}
return {"id": user_id, "version": "v1"}
```
## 11. Middleware Patterns
```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from starlette.middleware.base import BaseHTTPMiddleware
import time
import logging
class LoggingMiddleware(BaseHTTPMiddleware):
"""Middleware for request/response logging"""
async def dispatch(self, request: Request, call_next):
start_time = time.time()
# Log request
logging.info(f"Request: {request.method} {request.url}")
response = await call_next(request)
# Log response
process_time = time.time() - start_time
logging.info(
f"Response: {response.status_code} "
f"(took {process_time:.2f}s)"
)
response.headers["X-Process-Time"] = str(process_time)
return response
class RateLimitMiddleware(BaseHTTPMiddleware):
"""Simple rate limiting middleware"""
def __init__(self, app, requests_per_minute: int = 60):
super().__init__(app)
self.requests_per_minute = requests_per_minute
        self.request_counts = {}  # NOTE: unbounded in-memory store; prune old minutes or use Redis in production
async def dispatch(self, request: Request, call_next):
client_ip = request.client.host
current_minute = int(time.time() / 60)
key = f"{client_ip}:{current_minute}"
self.request_counts[key] = self.request_counts.get(key, 0) + 1
if self.request_counts[key] > self.requests_per_minute:
return JSONResponse(
status_code=429,
content={"error": "Rate limit exceeded"}
)
return await call_next(request)
# Add middleware to app
app = FastAPI()
app.add_middleware(LoggingMiddleware)
app.add_middleware(RateLimitMiddleware, requests_per_minute=100)
```
## 12. Structured Logging
```python
import structlog
from typing import Any
import logging
# Configure structlog
structlog.configure(
processors=[
structlog.stdlib.filter_by_level,
structlog.stdlib.add_logger_name,
structlog.stdlib.add_log_level,
structlog.stdlib.PositionalArgumentsFormatter(),
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
structlog.processors.UnicodeDecoder(),
structlog.processors.JSONRenderer()
],
context_class=dict,
logger_factory=structlog.stdlib.LoggerFactory(),
cache_logger_on_first_use=True,
)
# Get logger
logger = structlog.get_logger()
# Usage
logger.info(
"user_created",
user_id=123,
email="user@example.com",
ip_address="192.168.1.1"
)
# Context binding
logger = logger.bind(request_id="abc-123")
logger.info("processing_request")
logger.info("request_completed", duration_ms=150)
```
These patterns provide battle-tested solutions for common architectural challenges in Python backend development.

plugin.lock.json
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:rknall/claude-skills:python-architecture-review",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "34ff50b48c144f759d6f9ff0def868dec04ddfff",
"treeHash": "071ec8f09a274e3e714789fdae9e1beac43e6e712b3e1d34b36b41efcce06fb3",
"generatedAt": "2025-11-28T10:27:58.100698Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "python-architecture-review",
"description": "Comprehensive design architecture review for Python backend applications, analyzing scalability, security, performance, and best practices",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "268961256dafe4674f790b3c95a53d388ad0c96e3d709e595fc0a3f8cae95f8f"
},
{
"path": "SKILL.md",
"sha256": "255256461b8c6caf7ee09070a0172473c971a11bd417c41853e53aafa3ecc52a"
},
{
"path": "common-patterns.md",
"sha256": "ff7db6bfa5dbcb94e9efacec81f3eae6e9bed81e643f74d7efeffa1fb7cee54b"
},
{
"path": "architecture-checklist.md",
"sha256": "b93aaecd48103df433dda23b7014cc1780363420c1475be9c3e9bda53a780392"
},
{
"path": "technology-recommendations.md",
"sha256": "6592e62f19d7018ef399c55f09b4d5f126fa3398062728108af93f72d0421529"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "4227c2590fffe69814b2e9e91a1946e128ba820aa9868607001be3c90008fd8f"
}
],
"dirSha256": "071ec8f09a274e3e714789fdae9e1beac43e6e712b3e1d34b36b41efcce06fb3"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

technology-recommendations.md
# Python Backend Technology Stack Recommendations
This document provides curated recommendations for common technology choices in Python backend development.
## Web Frameworks
### FastAPI (Recommended for Modern APIs)
**Best for:** Microservices, RESTful APIs, async-first applications
**Pros:**
- Automatic OpenAPI/Swagger documentation
- Built-in request/response validation with Pydantic
- High performance (comparable to Node.js/Go)
- Native async/await support
- Type hints and IDE support
- Modern Python features
**Cons:**
- Smaller ecosystem compared to Django
- Less built-in features (need to integrate more libraries)
- Newer framework (less mature)
**When to use:**
- Building new REST or GraphQL APIs
- High-performance requirements
- Microservices architecture
- Team comfortable with modern Python
### Django (Recommended for Full-Featured Applications)
**Best for:** Monolithic applications, admin-heavy apps, rapid development
**Pros:**
- Batteries-included (ORM, admin, auth, forms, etc.)
- Massive ecosystem and community
- Battle-tested and mature
- Excellent documentation
- Built-in admin interface
- Strong security defaults
**Cons:**
- Heavier framework
- Less suitable for microservices
- Async support is newer and limited
- More opinionated structure
**When to use:**
- Building full-featured web applications
- Need admin interface out of the box
- Rapid prototyping
- Team prefers convention over configuration
### Flask (Recommended for Simple APIs)
**Best for:** Small to medium applications, prototypes, simple APIs
**Pros:**
- Lightweight and flexible
- Easy to learn
- Large ecosystem of extensions
- Minimal boilerplate
- Great for prototypes
**Cons:**
- Requires more setup for production
- No built-in async support (requires extensions)
- Need to choose and integrate many components
- Less structure by default
**When to use:**
- Small to medium applications
- Prototypes and MVPs
- Learning Python web development
- Need maximum flexibility
## Async Frameworks
### Comparison
| Feature | FastAPI | Starlette | Quart | aiohttp |
|---------|---------|-----------|-------|---------|
| Performance | Excellent | Excellent | Good | Excellent |
| Documentation | Excellent | Good | Good | Good |
| Ease of Use | Excellent | Good | Excellent | Moderate |
| Community | Growing | Moderate | Small | Moderate |
| Best For | APIs | Custom apps | Flask users | Low-level control |
## Database Solutions
### PostgreSQL (Recommended for Most Cases)
**Best for:** Production applications requiring ACID compliance
**Pros:**
- ACID compliant
- Rich feature set (JSONB, full-text search, etc.)
- Excellent performance
- Strong community
- Great for complex queries
**Libraries:**
- `asyncpg` - Fastest async driver
- `psycopg2` - Traditional sync driver
- `psycopg` (v3) - Modern sync/async driver
### MySQL/MariaDB
**Best for:** Applications with heavy read operations
**Pros:**
- Fast read performance
- Wide hosting support
- Good replication
- Mature ecosystem
**Libraries:**
- `aiomysql` - Async driver
- `mysqlclient` - Sync driver (fastest)
- `PyMySQL` - Pure Python sync driver
### MongoDB
**Best for:** Flexible schema, document-oriented data
**Pros:**
- Schema flexibility
- Horizontal scaling
- Good for rapid development
- Rich query language
**Libraries:**
- `motor` - Async driver (recommended)
- `pymongo` - Sync driver
### Redis
**Best for:** Caching, sessions, real-time features
**Pros:**
- Extremely fast
- Rich data structures
- Pub/sub support
- Good for caching and sessions
**Libraries:**
- `redis-py` - Official client with async support
- `aioredis` - Async-first client (now merged into redis-py)
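Redis is the right tool for shared, cross-process caching; for intuition, the get-or-compute-with-TTL pattern it serves can be sketched in-process with nothing but the stdlib (illustrative only — a per-process dict is not a substitute for Redis):

```python
import time

class TTLCache:
    """Tiny in-process TTL cache (illustrative; use Redis for shared state)."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store: dict = {}  # key -> (expires_at, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return default
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:1", {"name": "Ada"})
print(cache.get("user:1"))  # cached value
time.sleep(0.06)
print(cache.get("user:1"))  # None after expiry
```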
## ORM Solutions
### SQLAlchemy (Recommended)
**Best for:** Complex queries, database agnostic code
**Pros:**
- Most mature Python ORM
- Powerful query API
- Database agnostic
- Great documentation
- Good async support (2.0+)
**Cons:**
- Steeper learning curve
- More verbose than alternatives
```python
# Async SQLAlchemy 2.0
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker

engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")
AsyncSessionLocal = async_sessionmaker(engine, expire_on_commit=False)
```
### Tortoise ORM
**Best for:** Async-first applications, Django-like syntax
**Pros:**
- Built for async from the ground up
- Django-like API
- Easy to learn
- Good documentation
**Cons:**
- Smaller community
- Fewer features than SQLAlchemy
- Limited database support
```python
from tortoise import fields, models
from tortoise.contrib.fastapi import register_tortoise
class User(models.Model):
id = fields.IntField(pk=True)
email = fields.CharField(max_length=255, unique=True)
name = fields.CharField(max_length=255)
```
### Django ORM
**Best for:** Django projects
**Pros:**
- Integrated with Django
- Simple and intuitive
- Excellent documentation
- Good async support (4.1+ for the ORM)
**Cons:**
- Tied to Django
- Less powerful than SQLAlchemy for complex queries
## API Documentation
### OpenAPI/Swagger (FastAPI Built-in)
**Recommended for:** REST APIs
**Libraries:**
- FastAPI includes automatic generation
- `flasgger` for Flask
- `drf-spectacular` for Django REST Framework
### GraphQL
**Strawberry (Recommended for FastAPI)**
```python
import strawberry
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter
@strawberry.type
class Query:
@strawberry.field
def hello(self) -> str:
return "Hello World"
schema = strawberry.Schema(query=Query)
graphql_app = GraphQLRouter(schema)
```
**Other Options:**
- `Graphene` - Mature, works with Django
- `Ariadne` - Schema-first approach
## Authentication & Authorization
### JWT Authentication
**Libraries:**
- `python-jose[cryptography]` - Recommended for JWT
- `PyJWT` - Lightweight alternative
```python
from jose import JWTError, jwt
from datetime import datetime, timedelta, timezone

def create_access_token(data: dict) -> str:
    to_encode = data.copy()
    expire = datetime.now(timezone.utc) + timedelta(minutes=15)
    to_encode.update({"exp": expire})
    # SECRET_KEY and ALGORITHM come from your settings module
    return jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
```
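`python-jose` should do the signing in production; as a sketch of what HS256 actually involves, here is the same operation with only the stdlib (the `hs256_*` names are made up for illustration):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_encode(payload: dict, secret: str) -> str:
    """Minimal JWT HS256 signing, mirroring what python-jose does internally."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def hs256_verify(token: str, secret: str) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

token = hs256_encode({"sub": "user-123"}, "secret")
print(hs256_verify(token, "secret"))  # {'sub': 'user-123'}
```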
### OAuth2
**Libraries:**
- `authlib` - Comprehensive OAuth client/server
- `python-social-auth` - Social authentication
### Password Hashing
**Libraries:**
- `passlib[bcrypt]` - Recommended
- `argon2-cffi` - Most secure (memory-hard)
```python
from passlib.context import CryptContext
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
def hash_password(password: str) -> str:
return pwd_context.hash(password)
def verify_password(plain_password: str, hashed_password: str) -> bool:
return pwd_context.verify(plain_password, hashed_password)
```
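Both libraries above are the production-grade choice; if you ever need a zero-dependency fallback, the stdlib's `hashlib.pbkdf2_hmac` implements the same salted, iterated idea (a sketch, not a passlib replacement):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> str:
    """Salted PBKDF2-HMAC-SHA256; stdlib-only alternative to passlib."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    _, iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(digest.hex(), digest_hex)

stored = hash_password("s3cret")
print(verify_password("s3cret", stored))  # True
print(verify_password("wrong", stored))   # False
```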
## Task Queues
### Celery (Recommended for Most Cases)
**Best for:** Complex workflows, scheduled tasks
**Pros:**
- Mature and battle-tested
- Rich feature set
- Good monitoring tools
- Supports multiple brokers
**Cons:**
- Complex configuration
- Heavy dependency
```python
from celery import Celery
app = Celery('tasks', broker='redis://localhost:6379/0')
@app.task
def process_data(data):
# Long-running task
pass
```
### RQ (Redis Queue)
**Best for:** Simple job queues
**Pros:**
- Simple to use
- Lightweight
- Good for simple tasks
**Cons:**
- Less features than Celery
- Only works with Redis
### Dramatiq
**Best for:** Teams that want a simpler alternative to Celery
**Pros:**
- Simpler than Celery
- Good performance
- Type-safe
**Cons:**
- Smaller community
## Validation & Serialization
### Pydantic (Recommended)
**Best for:** Data validation, settings management
```python
from pydantic import BaseModel, EmailStr, field_validator

class User(BaseModel):
    id: int
    email: EmailStr
    name: str
    age: int

    @field_validator('age')
    @classmethod
    def validate_age(cls, v: int) -> int:
        if v < 0:
            raise ValueError('Age must be positive')
        return v
```
### Marshmallow
**Best for:** Flask applications, complex serialization
## HTTP Clients
### httpx (Recommended)
**Best for:** Modern async/sync HTTP client
```python
import httpx
async with httpx.AsyncClient() as client:
response = await client.get('https://api.example.com')
```
### aiohttp
**Best for:** Async-only applications
### requests
**Best for:** Sync-only applications (legacy)
## Testing
### pytest (Recommended)
**Essential plugins:**
- `pytest-asyncio` - Async test support
- `pytest-cov` - Coverage reporting
- `pytest-mock` - Mocking utilities
- `pytest-xdist` - Parallel testing
```python
import pytest
from httpx import AsyncClient
@pytest.mark.asyncio
async def test_create_user(client: AsyncClient):
response = await client.post("/users", json={"email": "test@example.com"})
assert response.status_code == 201
```
### Other Tools
- `coverage` - Code coverage
- `faker` - Test data generation
- `factory_boy` - Test fixtures
- `responses` - Mock HTTP requests
## Observability
### Logging
**Recommended:**
- `structlog` - Structured logging
- `python-json-logger` - JSON logging
- Built-in `logging` module
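If a dependency is unwanted, a minimal JSON formatter over the built-in `logging` module covers the basics (a sketch; `structlog` adds context binding and processors on top of this):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "time": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
        }
        if record.exc_info:
            payload["exc_info"] = self.formatException(record.exc_info)
        return json.dumps(payload)

# Wire the formatter to a handler (a StringIO stream here, stdout in practice)
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user_created")
line = json.loads(stream.getvalue())
print(line["message"])  # user_created
```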
### Metrics
**Recommended:**
- `prometheus-client` - Prometheus metrics
- `statsd` - StatsD client
### Tracing
**Recommended:**
- `opentelemetry-api` + `opentelemetry-sdk` - OpenTelemetry
- `ddtrace` - DataDog APM
- `sentry-sdk` - Error tracking
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process_data"):
# Your code here
pass
```
## Security
### Security Headers
```python
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(TrustedHostMiddleware, allowed_hosts=["example.com"])
app.add_middleware(
CORSMiddleware,
allow_origins=["https://example.com"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
```
### Security Libraries
- `bandit` - Security linting
- `safety` - Dependency vulnerability scanning
- `python-dotenv` - Environment variable management
## Development Tools
### Code Quality
- `ruff` - Fast linter and formatter (recommended)
- `black` - Code formatter
- `isort` - Import sorting
- `mypy` - Static type checking
- `pylint` - Comprehensive linting
### Dependency Management
- `poetry` - Recommended for modern projects
- `pdm` - Fast alternative to poetry
- `pip-tools` - Minimal approach (pip-compile)
### Pre-commit Hooks
```yaml
# .pre-commit-config.yaml
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.1.6
hooks:
- id: ruff
args: [--fix]
- id: ruff-format
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.7.1
hooks:
- id: mypy
```
## Deployment
### ASGI Servers
- `uvicorn` - Recommended for FastAPI
- `hypercorn` - HTTP/2 support
- `daphne` - Django channels
### WSGI Servers
- `gunicorn` - Recommended for Django/Flask
- `uwsgi` - Alternative option
### Production Setup
```bash
# Uvicorn with Gunicorn (recommended for production)
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker
```
## Configuration Management
### Pydantic Settings (Recommended)
```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    database_url: str
    redis_url: str
    secret_key: str
### python-decouple
Simple alternative for basic configuration
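The same cast-values-from-environment idea can be sketched with only the stdlib, which helps when even `python-decouple` is too much (the variable names here are hypothetical):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Settings pulled from environment variables, stdlib-only."""
    database_url: str
    debug: bool
    pool_size: int

def load_settings(env=os.environ) -> Settings:
    return Settings(
        database_url=env["DATABASE_URL"],  # required: raises KeyError if absent
        debug=env.get("DEBUG", "false").lower() in {"1", "true", "yes"},
        pool_size=int(env.get("POOL_SIZE", "5")),
    )

settings = load_settings({"DATABASE_URL": "postgresql://localhost/app"})
print(settings.pool_size)  # 5
```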
## Message Brokers
### RabbitMQ
**Best for:** Complex routing, guaranteed delivery
**Libraries:**
- `aio-pika` - Async AMQP client
### Apache Kafka
**Best for:** Event streaming, high throughput
**Libraries:**
- `aiokafka` - Async Kafka client
- `confluent-kafka` - High-performance client
### Redis Pub/Sub
**Best for:** Simple pub/sub, low latency
**Libraries:**
- `redis-py` with pub/sub support
## Recommended Stack Combinations
### Modern Microservices Stack
- Framework: FastAPI
- Database: PostgreSQL (asyncpg)
- ORM: SQLAlchemy 2.0 (async)
- Cache: Redis
- Task Queue: Celery or Dramatiq
- Validation: Pydantic
- Testing: pytest + httpx
- Observability: OpenTelemetry + Sentry
- Deployment: Uvicorn + Docker + Kubernetes
### Traditional Monolith Stack
- Framework: Django
- Database: PostgreSQL (psycopg2)
- ORM: Django ORM
- Cache: Redis
- Task Queue: Celery
- API: Django REST Framework
- Testing: pytest-django
- Deployment: Gunicorn + Docker
### Lightweight API Stack
- Framework: Flask or FastAPI
- Database: PostgreSQL or MongoDB
- ORM: SQLAlchemy or Motor
- Testing: pytest
- Deployment: Uvicorn or Gunicorn
## Version Recommendations (as of 2025)
```toml
# pyproject.toml
[tool.poetry.dependencies]
python = "^3.11" # Python 3.11+ recommended for performance
fastapi = "^0.110.0"
uvicorn = {extras = ["standard"], version = "^0.27.0"}
sqlalchemy = "^2.0.25"
asyncpg = "^0.29.0"
pydantic = "^2.5.0"
pydantic-settings = "^2.1.0"
redis = "^5.0.1"
celery = "^5.3.4"
python-jose = {extras = ["cryptography"], version = "^3.3.0"}
passlib = {extras = ["bcrypt"], version = "^1.7.4"}
httpx = "^0.26.0"
structlog = "^24.1.0"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
sentry-sdk = "^1.39.0"
[tool.poetry.group.dev.dependencies]
pytest = "^7.4.4"
pytest-asyncio = "^0.23.3"
pytest-cov = "^4.1.0"
ruff = "^0.1.9"
mypy = "^1.8.0"
```
This technology stack provides battle-tested, production-ready solutions for Python backend development in 2025.