---
description: Migrate API to new version with compatibility layers and automated scripts
shortcut: migrate
---

# Migrate API Version

Orchestrate comprehensive API version migrations with automated compatibility layers, breaking change detection, and zero-downtime deployment strategies. This command manages the complete lifecycle of API evolution, from initial analysis through deployment and deprecation.

## Design Decisions

**Architecture Approach:**
- Version routing at API gateway level for clean separation
- Adapter pattern for backward compatibility transformations
- Feature flags for gradual rollout control
- Automated test generation across all supported versions

**Alternatives Considered:**
- Hard cutover migration (rejected: high risk, no rollback)
- Separate API endpoints per version (rejected: operational complexity)
- GraphQL federation (viable alternative for microservices architectures)
- BFF pattern (viable for client-specific migrations)

## When to Use

**USE when:**
- Introducing breaking changes to API contracts
- Deprecating legacy endpoints with controlled timelines
- Migrating between API paradigms (REST to GraphQL)
- Evolving data models with backward-incompatible changes
- Implementing new authentication mechanisms
- Consolidating multiple API versions

**DON'T USE when:**
- Adding backward-compatible endpoints (use versioned routes)
- Making internal refactoring without contract changes
- Deploying hotfixes or security patches
- Making changes that affect only the implementation, not the interface

## Prerequisites

**Required:**
- Complete OpenAPI/GraphQL schema for both versions
- Comprehensive API test suite with >80% coverage
- Version control with tagged releases
- Deployment pipeline with rollback capability
- API gateway with routing rules support
- Monitoring and alerting infrastructure

**Recommended:**
- Consumer registry with contact information
- Deprecation policy documented and communicated
- Traffic analysis showing endpoint usage patterns
- Backward compatibility test matrix
- Canary deployment environment

## Migration Process

**Step 1: Analysis and Impact Assessment**
- Scan API schemas to detect breaking changes (see the sketch below)
- Analyze usage patterns from API logs
- Identify affected consumers and endpoints
- Calculate complexity score for migration effort
- Generate compatibility matrix between versions
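
The schema scan can be automated by diffing the two OpenAPI documents. A minimal sketch, assuming plain JSON OpenAPI specs on disk; the file names are illustrative, and a real scan would also diff parameters, request bodies, and type changes:

```javascript
// Minimal breaking-change scan between two OpenAPI specs (illustrative sketch).
const fs = require('fs');

function loadSpec(path) {
  return JSON.parse(fs.readFileSync(path, 'utf8'));
}

function detectBreakingChanges(oldSpec, newSpec) {
  const changes = [];

  // Removed endpoints are always breaking
  for (const path of Object.keys(oldSpec.paths || {})) {
    if (!newSpec.paths || !newSpec.paths[path]) {
      changes.push({ endpoint: path, change_type: 'endpoint_removed', severity: 'high' });
    }
  }

  // Removed schema properties break consumers that read them
  const oldSchemas = (oldSpec.components && oldSpec.components.schemas) || {};
  const newSchemas = (newSpec.components && newSpec.components.schemas) || {};
  for (const [name, schema] of Object.entries(oldSchemas)) {
    const newSchema = newSchemas[name];
    if (!newSchema) continue;
    for (const prop of Object.keys(schema.properties || {})) {
      if (!(newSchema.properties || {})[prop]) {
        changes.push({ schema: name, field: prop, change_type: 'field_removed', severity: 'high' });
      }
    }
  }

  return changes;
}

const report = detectBreakingChanges(loadSpec('openapi-v1.json'), loadSpec('openapi-v2.json'));
console.table(report);
```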
**Step 2: Compatibility Layer Generation**
- Create adapter functions for data transformation
- Generate request/response mappers automatically
- Build version-specific validation schemas
- Implement fallback logic for missing fields
- Create deprecation warning middleware (see the middleware sketch below)
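
One concrete piece of the compatibility layer is the deprecation warning middleware. A sketch for an Express-style gateway; the route prefix, sunset date, and migration guide URL are illustrative:

```javascript
// Express-style middleware that flags v1 traffic (illustrative sketch).
function deprecationWarning({ sunsetDate, migrationGuideUrl }) {
  return (req, res, next) => {
    if (req.path.startsWith('/api/v1/')) {
      res.set('Deprecation', 'true');
      res.set('Sunset', sunsetDate);
      res.set('Link', `<${migrationGuideUrl}>; rel="deprecation"`);
      console.warn(`[DEPRECATED] v1 request: ${req.method} ${req.path} from ${req.ip}`);
    }
    next();
  };
}

// Usage:
// app.use(deprecationWarning({
//   sunsetDate: 'Tue, 31 Dec 2024 23:59:59 GMT',
//   migrationGuideUrl: 'https://docs.example.com/api-migration'
// }));
```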
**Step 3: Migration Script Creation**
- Generate database migration scripts for schema changes
- Create data backfill scripts for new required fields (see the sketch below)
- Build rollback procedures for each migration step
- Generate test fixtures for both API versions
- Create automated smoke tests for critical paths
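
Backfill scripts typically process rows in batches so the migration can be paused and resumed. A hedged sketch, assuming a node-postgres style client with an async `query` method; the table and column names follow Example 1 below, and the batch size is illustrative:

```javascript
// Batched backfill sketch: copy flat address columns into user_addresses.
async function backfillAddresses(db, batchSize = 500) {
  let migrated = 0;
  for (;;) {
    // Pick users that still lack a linked address row
    const { rows } = await db.query(
      `SELECT id, street, city, state, zip
         FROM users
        WHERE street IS NOT NULL AND address_id IS NULL
        LIMIT $1`,
      [batchSize]
    );
    if (rows.length === 0) break;

    for (const user of rows) {
      const inserted = await db.query(
        `INSERT INTO user_addresses (user_id, street, city, state, postal_code)
         VALUES ($1, $2, $3, $4, $5) RETURNING id`,
        [user.id, user.street, user.city, user.state, user.zip]
      );
      await db.query(`UPDATE users SET address_id = $1 WHERE id = $2`,
        [inserted.rows[0].id, user.id]);
    }
    migrated += rows.length;
    console.log(`Backfilled ${migrated} users so far`);
  }
}
```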
**Step 4: Routing and Deployment Configuration**
- Configure API gateway version routing rules
- Set up feature flags for gradual rollout
- Implement traffic splitting for canary deployment (see the sketch below)
- Configure monitoring dashboards for version metrics
- Set up deprecation warning headers and logs
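
Traffic splitting can be as simple as a weighted choice in the gateway, keyed on a flag so the rollout percentage can change without a redeploy. A minimal sketch; the flag value and upstream hostnames are illustrative, and hashing on a stable client ID keeps each consumer on a consistent version:

```javascript
// Weighted version routing sketch for a gateway layer (illustrative).
const crypto = require('crypto');

// Percentage of traffic sent to v2; in practice this would come from a flag service.
let v2TrafficPercentage = 10;

function pickVersion(clientId) {
  // Hash the client ID so the same consumer always gets the same version
  const hash = crypto.createHash('sha256').update(String(clientId)).digest();
  const bucket = hash.readUInt16BE(0) % 100;
  return bucket < v2TrafficPercentage ? 'v2' : 'v1';
}

function routeRequest(req) {
  const version = pickVersion(req.headers['x-client-id'] || req.ip);
  return {
    upstream: version === 'v2' ? 'http://user-service-v2' : 'http://user-service-v1',
    version
  };
}
```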
**Step 5: Validation and Monitoring**
- Execute automated test suite across all versions
- Verify backward compatibility with consumer tests
- Monitor error rates and performance metrics
- Track adoption rates for new version
- Schedule deprecation timeline communications

## Output Format

```yaml
migration_plan:
  api_name: "User Service API"
  source_version: "v1"
  target_version: "v2"
  breaking_changes:
    - endpoint: "/users"
      change_type: "field_removed"
      field: "username"
      severity: "high"
      affected_consumers: 15
    - endpoint: "/users/{id}"
      change_type: "response_structure"
      details: "Nested address object"
      severity: "medium"
      affected_consumers: 8

compatibility_layer:
  adapters_generated: 12
  transformation_functions: 8
  fallback_strategies: 5

migration_scripts:
  - script: "001_add_email_unique_constraint.sql"
    type: "database"
    rollback: "001_rollback_email_constraint.sql"
  - script: "002_backfill_address_objects.js"
    type: "data_transformation"
    estimated_time: "15 minutes"

deployment_strategy:
  type: "canary"
  phases:
    - name: "internal_testing"
      traffic_percentage: 0
      duration: "3 days"
    - name: "early_adopters"
      traffic_percentage: 10
      duration: "1 week"
    - name: "general_rollout"
      traffic_percentage: 100
      duration: "2 weeks"

deprecation_timeline:
  warning_start: "2024-01-01"
  support_end: "2024-06-30"
  sunset_date: "2024-12-31"
  notification_plan: "Email + dashboard banner"

monitoring:
  dashboards:
    - "API Version Adoption Metrics"
    - "Error Rate by Version"
    - "Response Time Comparison"
  alerts:
    - "V2 error rate > 5%"
    - "V1 traffic spike (rollback indicator)"
```

## Code Examples

### Example 1: REST API v1 to v2 Migration with Breaking Changes

```javascript
// API v1 → v2 Migration: User endpoint restructure
// BREAKING: Flattened user object becomes a nested structure

// Source: /api/v1/users/{id}
const v1UserResponse = {
  "id": 123,
  "name": "John Doe",
  "email": "john@example.com",
  "street": "123 Main St",
  "city": "San Francisco",
  "state": "CA",
  "zip": "94105"
};

// Target: /api/v2/users/{id}
const v2UserResponse = {
  "id": 123,
  "name": "John Doe",
  "email": "john@example.com",
  "address": {
    "street": "123 Main St",
    "city": "San Francisco",
    "state": "CA",
    "postalCode": "94105"
  },
  "metadata": {
    "createdAt": "2024-01-15T10:30:00Z",
    "version": "v2"
  }
};

// Generated Compatibility Adapter
class UserV1ToV2Adapter {
  transform(v1Response) {
    return {
      id: v1Response.id,
      name: v1Response.name,
      email: v1Response.email,
      address: {
        street: v1Response.street,
        city: v1Response.city,
        state: v1Response.state,
        postalCode: v1Response.zip // Field renamed
      },
      metadata: {
        createdAt: v1Response.created_at || new Date().toISOString(),
        version: "v2"
      }
    };
  }

  reverseTransform(v2Response) {
    // For backward compatibility when v1 clients hit v2
    return {
      id: v2Response.id,
      name: v2Response.name,
      email: v2Response.email,
      street: v2Response.address?.street || "",
      city: v2Response.address?.city || "",
      state: v2Response.address?.state || "",
      zip: v2Response.address?.postalCode || ""
    };
  }
}

// API Gateway Routing Configuration
const routingRules = {
  "/api/v1/users/:id": {
    target: "/api/v2/users/:id",
    adapter: "UserV1ToV2Adapter",
    deprecationWarning: {
      header: "Deprecation",
      value: "API v1 will be sunset on 2024-12-31. Migrate to v2.",
      link: "https://docs.example.com/api-migration"
    }
  }
};
```

```sql
-- Migration Script: Database Schema Evolution
-- 001_restructure_user_addresses.sql
BEGIN;

-- Create new address table for normalized structure
CREATE TABLE user_addresses (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES users(id),
  street VARCHAR(255),
  city VARCHAR(100),
  state VARCHAR(2),
  postal_code VARCHAR(10),
  created_at TIMESTAMP DEFAULT NOW()
);

-- Migrate existing flat data to the new structure
INSERT INTO user_addresses (user_id, street, city, state, postal_code)
SELECT id, street, city, state, zip
FROM users
WHERE street IS NOT NULL;

-- Add foreign key to users table
ALTER TABLE users ADD COLUMN address_id INTEGER REFERENCES user_addresses(id);

UPDATE users u
SET address_id = ua.id
FROM user_addresses ua
WHERE u.id = ua.user_id;

-- Deprecate old columns (don't drop yet for rollback safety)
ALTER TABLE users
  ALTER COLUMN street DROP NOT NULL,
  ALTER COLUMN city DROP NOT NULL,
  ALTER COLUMN state DROP NOT NULL,
  ALTER COLUMN zip DROP NOT NULL;

COMMIT;

-- Rollback Script: 001_rollback_restructure.sql
BEGIN;
UPDATE users u
SET street = ua.street,
    city = ua.city,
    state = ua.state,
    zip = ua.postal_code
FROM user_addresses ua
WHERE u.address_id = ua.id;

ALTER TABLE users DROP COLUMN address_id;
DROP TABLE user_addresses;
COMMIT;
```

### Example 2: GraphQL Schema Evolution

```graphql
# Schema v1 (Deprecated)
type User {
  id: ID!
  username: String!   # DEPRECATED: Replaced by email
  email: String
  fullName: String
}

type Query {
  user(id: ID!): User
  users: [User!]!
}

# Schema v2 (Current)
type Address {
  street: String!
  city: String!
  state: String!
  postalCode: String!
  country: String!
}

type User {
  id: ID!
  email: String!   # Now required, primary identifier
  username: String @deprecated(reason: "Use email instead. Removed in v3.")
  profile: UserProfile!
  address: Address
}

type UserProfile {
  firstName: String!
  lastName: String!
  displayName: String!
  avatar: String
}

type Query {
  user(id: ID, email: String): User   # Multiple lookup options
  users(filter: UserFilter): [User!]!
}

input UserFilter {
  email: String
  city: String
  state: String
}

# Schema v3 (Planned)
type User {
  id: ID!
  email: String!
  profile: UserProfile!
  addresses: [Address!]!   # Now supports multiple addresses
}
```

```javascript
// Migration Resolver Implementation
const resolvers = {
  Query: {
    user: async (_, args, context) => {
      const version = context.apiVersion;

      if (version === 'v1') {
        // Legacy lookup by username
        const user = await db.users.findByUsername(args.id);
        return {
          ...user,
          username: user.username, // Still supported in v1
          fullName: `${user.firstName} ${user.lastName}`
        };
      } else {
        // Modern lookup by email or ID
        const user = await db.users.findOne({
          where: args.email ? { email: args.email } : { id: args.id }
        });
        return user;
      }
    }
  },

  User: {
    // Compatibility field resolver for deprecated username
    username: (user, args, context) => {
      if (context.apiVersion === 'v1') {
        return user.username;
      }
      // Add deprecation warning to response headers
      context.res.set('Deprecation', 'username field is deprecated. Use email.');
      return user.username || user.email.split('@')[0];
    },

    // Transform flat structure to nested for v2+
    profile: (user) => ({
      firstName: user.firstName || user.fullName?.split(' ')[0],
      lastName: user.lastName || user.fullName?.split(' ')[1],
      displayName: user.fullName,
      avatar: user.avatar
    })
  }
};

// Automated Schema Compatibility Tests
describe('GraphQL Schema Migration Tests', () => {
  test('v1 clients can still query with username', async () => {
    const query = `query { user(id: "johndoe") { username email } }`;
    const result = await executeQuery(query, { apiVersion: 'v1' });
    expect(result.data.user.username).toBe('johndoe');
  });

  test('v2 clients receive nested profile structure', async () => {
    const query = `query { user(email: "john@example.com") { profile { firstName lastName } } }`;
    const result = await executeQuery(query, { apiVersion: 'v2' });
    expect(result.data.user.profile.firstName).toBe('John');
  });

  test('deprecated fields trigger warning headers', async () => {
    const query = `query { user(id: "123") { username } }`;
    const { response } = await executeQueryWithHeaders(query, { apiVersion: 'v2' });
    expect(response.headers.get('Deprecation')).toContain('username field is deprecated');
  });
});
```

### Example 3: gRPC Service Versioning

```protobuf
// service_v1.proto (Deprecated)
syntax = "proto3";
package user.v1;

message User {
  int32 id = 1;
  string username = 2;
  string email = 3;
  string full_name = 4;
}

message GetUserRequest {
  int32 id = 1;
}

message GetUserResponse {
  User user = 1;
}

service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}
```

```protobuf
// service_v2.proto (Current)
syntax = "proto3";
package user.v2;

import "google/protobuf/timestamp.proto";

message Address {
  string street = 1;
  string city = 2;
  string state = 3;
  string postal_code = 4;
  string country = 5;
}

message UserProfile {
  string first_name = 1;
  string last_name = 2;
  string display_name = 3;
  string avatar_url = 4;
}

message User {
  int32 id = 1;
  string email = 2;
  UserProfile profile = 3;
  Address address = 4;
  google.protobuf.Timestamp created_at = 5;
  google.protobuf.Timestamp updated_at = 6;
}

message GetUserRequest {
  oneof identifier {
    int32 id = 1;
    string email = 2;
  }
}

message GetUserResponse {
  User user = 1;
}

// ListUsersRequest and ListUsersResponse omitted for brevity
service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
  rpc ListUsers(ListUsersRequest) returns (ListUsersResponse);
}
```

```javascript
// Compatibility Bridge Service
// Implements the user.v1.UserService interface while delegating to the v2 backend
// (generated stubs for user/v1/service.proto and user/v2/service.proto assumed).
class UserServiceBridge {
  constructor() {
    this.v2Service = new user.v2.UserServiceClient('localhost:50052');
  }

  // Implement v1 interface while calling v2 backend
  async GetUser(call, callback) {
    const v1Request = call.request;

    // Transform v1 request to v2 format
    const v2Request = {
      id: v1Request.id
    };

    try {
      const v2Response = await this.v2Service.GetUser(v2Request);
      const v2User = v2Response.user;

      // Transform v2 response back to v1 format
      const v1User = {
        id: v2User.id,
        username: v2User.email.split('@')[0], // Synthesize username
        email: v2User.email,
        full_name: v2User.profile.display_name
      };

      callback(null, { user: v1User });

      // Log deprecation warning
      console.warn(`[DEPRECATED] v1 API used by client ${call.getPeer()}`);

    } catch (error) {
      callback(error);
    }
  }
}
```

```yaml
# Envoy gRPC Gateway Configuration for Version Routing
static_resources:
  listeners:
    - name: user_service_listener
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 50051
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: grpc_json
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: user_service
                      domains: ["*"]
                      routes:
                        # Route v1 requests to compatibility bridge
                        - match:
                            prefix: "/user.v1.UserService"
                          route:
                            cluster: user_service_v1_bridge
                            timeout: 30s
                        # Route v2 requests to native service
                        - match:
                            prefix: "/user.v2.UserService"
                          route:
                            cluster: user_service_v2
                            timeout: 30s

  clusters:
    - name: user_service_v1_bridge
      connect_timeout: 1s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      http2_protocol_options: {}
      load_assignment:
        cluster_name: user_service_v1_bridge
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: user-bridge-service
                      port_value: 50051

    - name: user_service_v2
      connect_timeout: 1s
      type: STRICT_DNS
      lb_policy: ROUND_ROBIN
      http2_protocol_options: {}
      load_assignment:
        cluster_name: user_service_v2
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: user-service-v2
                      port_value: 50052
```

```javascript
// Migration Testing Framework
describe('gRPC API Migration Tests', () => {
  const v1Client = new user.v1.UserServiceClient('localhost:50051');
  const v2Client = new user.v2.UserServiceClient('localhost:50052');

  test('v1 clients receive compatible responses', async () => {
    const request = new user.v1.GetUserRequest({ id: 123 });
    const response = await v1Client.GetUser(request);

    expect(response.user.username).toBeDefined();
    expect(response.user.email).toBeDefined();
    expect(response.user.full_name).toBeDefined();
  });

  test('v2 clients receive enhanced data structures', async () => {
    const request = new user.v2.GetUserRequest({ email: 'john@example.com' });
    const response = await v2Client.GetUser(request);

    expect(response.user.profile).toBeDefined();
    expect(response.user.profile.first_name).toBeDefined();
    expect(response.user.address).toBeDefined();
  });

  test('data consistency between versions', async () => {
    const userId = 123;

    const v1Response = await v1Client.GetUser({ id: userId });
    const v2Response = await v2Client.GetUser({ id: userId });

    // Verify transformed data matches
    expect(v1Response.user.email).toBe(v2Response.user.email);
    expect(v1Response.user.full_name).toBe(v2Response.user.profile.display_name);
  });
});
```

## Configuration Options

**Basic Usage:**
```bash
/migrate-api \
  --source=v1 \
  --target=v2 \
  --api-spec=openapi.yaml \
  --consumers=consumer-registry.json
```

**Available Options:**

`--strategy <type>` - Migration deployment strategy
- `canary` - Gradual traffic shifting (default, safest)
- `blue-green` - Instant switchover with rollback capability
- `rolling` - Progressive deployment across instances
- `feature-flag` - Application-controlled version selection
- `parallel-run` - Run both versions, compare results

`--compatibility-mode <mode>` - Backward compatibility approach
- `adapter` - Transform requests/responses between versions (default)
- `proxy` - Route old endpoints to new implementation
- `shim` - Minimal compatibility layer, consumers must adapt
- `none` - No compatibility, hard cutover (dangerous)

`--deprecation-period <duration>` - Support window for old version
- `3months` - Short deprecation (minor changes)
- `6months` - Standard deprecation (default)
- `12months` - Extended support (major changes)
- `custom:YYYY-MM-DD` - Specific sunset date

`--breaking-changes-policy <policy>` - How to handle breaking changes
- `require-adapters` - Force compatibility layer generation
- `warn-consumers` - Send notifications, allow migration time
- `block-deployment` - Prevent deploy until consumers updated
- `document-only` - Just update documentation

`--traffic-split <percentage>` - Initial new version traffic
- Default: `0` (dark launch)
- Range: 0-100
- Example: `10` for 10% canary deployment

`--rollback-threshold <percentage>` - Error rate trigger for auto-rollback (see the sketch after this list)
- Default: `5` (5% error rate)
- Range: 1-50
- Example: `2` for strict quality requirements

`--test-coverage-required <percentage>` - Minimum test coverage before deploy
- Default: `80`
- Range: 0-100
- Blocks deployment if coverage below threshold

`--generate-migration-guide` - Create consumer migration documentation
- Generates markdown guide with code examples
- Includes breaking change summaries
- Provides timeline and support contacts

`--dry-run` - Simulate migration without making changes
- Analyze breaking changes
- Generate compatibility report
- Estimate migration effort
- No actual deployment
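
The `--rollback-threshold` behaviour amounts to comparing the new version's rolling error rate against the configured limit and cutting the traffic split when it is exceeded. A simplified sketch of that loop; `fetchErrorRate` and `setTrafficSplit` are hypothetical hooks into your metrics system and gateway:

```javascript
// Auto-rollback loop sketch: watch v2 error rate, cut traffic if the threshold is exceeded.
// fetchErrorRate() and setTrafficSplit() are placeholders for real integrations.
async function watchRollbackThreshold({ thresholdPercent = 5, intervalMs = 60_000 }) {
  const timer = setInterval(async () => {
    const errorRate = await fetchErrorRate('v2', { windowMinutes: 5 }); // e.g. 3.2 (%)
    if (errorRate > thresholdPercent) {
      console.error(`v2 error rate ${errorRate}% > ${thresholdPercent}% — rolling back`);
      await setTrafficSplit({ v2: 0 }); // send all traffic back to v1
      clearInterval(timer);
    }
  }, intervalMs);
  return timer;
}
```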

## Error Handling

**Common Errors and Solutions:**

**Error: Breaking changes detected without compatibility layer**
```
ERROR: 15 breaking changes detected in target API version
- Removed field: User.username (affects 12 endpoints)
- Changed type: Order.total (string → number)
- Renamed endpoint: /users/search → /users/find

Solution: Either:
1. Add --compatibility-mode=adapter to generate transformers
2. Create manual adapters in adapters/ directory
3. Use --breaking-changes-policy=warn-consumers for grace period
```

**Error: Consumer test failures in compatibility mode**
```
ERROR: 3 consumer integration tests failed with v2 adapter
- AcmeApp: Expected username field, received null
- BetaCorp: Response schema validation failed
- GammaInc: Authentication token format mismatch

Solution:
1. Review consumer test failures: npm run test:consumers
2. Update adapters to handle edge cases
3. Contact affected consumers for migration coordination
4. Use --traffic-split=0 for dark launch until resolved
```

**Error: Database migration rollback required**
```
ERROR: Migration script 003_add_foreign_keys.sql failed
Constraint violation: user_addresses.user_id references missing users

Solution:
1. Execute rollback: psql -f 003_rollback.sql
2. Fix data inconsistencies: npm run data:cleanup
3. Re-run migration with --validate-data flag
4. Check migration logs: tail -f logs/migration.log
```

**Error: Traffic spike indicating rollback needed**
```
WARNING: v1 traffic increased from 10% to 45% in 5 minutes
Possible rollback from consumers due to v2 issues

Solution:
1. Check v2 error rates: /metrics/api/v2/errors
2. Review recent v2 deployment logs
3. Pause traffic shift: kubectl patch deployment api-gateway --type=json -p='[{"op":"replace","path":"/spec/template/spec/containers/0/env/1/value","value":"10"}]'
4. Investigate root cause before continuing rollout
```

**Error: Incompatible schema versions in distributed system**
```
ERROR: Service A running v2 schema, Service B still on v1
Message deserialization failed: unknown field 'profile'

Solution:
1. Implement a schema registry (e.g. npm install @kafkajs/confluent-schema-registry)
2. Use forward-compatible schemas with optional fields
3. Deploy with version negotiation: --enable-version-negotiation
4. Coordinate deployment order across services
```

## Best Practices

**DO:**
- Start with comprehensive API usage analysis before planning migration
- Generate automated compatibility tests for all breaking changes
- Implement feature flags for granular control over version activation
- Use semantic versioning and clearly communicate breaking changes
- Monitor error rates and latency separately for each API version
- Maintain detailed migration documentation with timelines
- Create rollback procedures and test them before deployment
- Send deprecation warnings months before sunset dates
- Provide sandbox environments for consumers to test migrations
- Use API gateways for centralized version routing and monitoring

**DON'T:**
- Deploy breaking changes without a backward compatibility period
- Remove deprecated endpoints immediately after new version launch
- Skip comprehensive testing of compatibility layers under load
- Assume all consumers will migrate quickly (expect stragglers)
- Make multiple major version jumps simultaneously
- Ignore consumer feedback during migration planning
- Deploy migrations during peak traffic periods
- Use hard deadlines without grace period extensions
- Forget to version your database schema along with the API
- Mix multiple unrelated breaking changes in a single version

**TIPS:**
- Use OpenAPI diff tools to automatically detect breaking changes
- Implement a consumer registry to track who uses which endpoints
- Add an X-API-Version header to all responses for debugging
- Create automated alerts for unexpected version usage patterns
- Use GraphQL deprecation directives for gradual field removal
- Maintain change logs with migration impact assessments
- Build compatibility dashboards showing adoption rates
- Provide SDK updates simultaneously with API versions
- Consider GraphQL for more flexible schema evolution
- Use contract testing to verify consumer compatibility (see the sketch below)
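
Contract testing can start as simply as replaying each consumer's recorded expectations against the compatibility adapter. A small Jest-style sketch; the fixture path, `fetchUserV2` helper, and fixture fields are illustrative:

```javascript
// Consumer contract check sketch: the adapter's v1-shaped output must keep
// satisfying what a v1 consumer recorded. Fixture path is illustrative.
const v1Expectation = require('./contracts/acme-app.user.v1.json');
const { UserV1ToV2Adapter } = require('./adapters/user');

test('AcmeApp v1 contract still satisfied through the adapter', async () => {
  const v2Response = await fetchUserV2(v1Expectation.request.userId); // hypothetical helper
  const v1View = new UserV1ToV2Adapter().reverseTransform(v2Response);

  // Every field the consumer relies on must be present and of the expected type
  for (const [field, expectedType] of Object.entries(v1Expectation.response.fields)) {
    expect(typeof v1View[field]).toBe(expectedType);
  }
});
```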

## Related Commands

- `/api-contract-generator` - Generate OpenAPI specs from code
- `/api-versioning-manager` - Manage multiple API versions
- `/api-documentation-generator` - Update docs for new versions
- `/api-monitoring-dashboard` - Track version adoption metrics
- `/api-security-scanner` - Audit security across versions
- `/api-load-tester` - Performance test both versions
- `/api-sdk-generator` - Create client libraries for v2

## Performance Considerations

**Migration Performance Impact:**
- Compatibility adapters add 5-20ms latency per request
- Dual-write patterns during migration can double database load
- Traffic splitting requires load balancer state management
- Monitoring overhead increases with multiple active versions

**Optimization Strategies:**
- Cache adapter transformation results for identical requests (see the caching sketch below)
- Use asynchronous migration for non-critical data changes
- Implement read-through caches for backward compatibility lookups
- Batch database migrations during low-traffic windows
- Pre-warm caches before traffic switch to new version
- Use connection pooling to handle parallel version load
- Consider edge caching for frequently accessed compatibility transformations
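
Caching adapter output pays off when many requests transform the same upstream payload. A bounded memoization sketch; the cache size and key derivation are illustrative, and the bound matters because unbounded growth is exactly the failure mode called out in the troubleshooting guide:

```javascript
// Bounded cache for adapter transformations (illustrative sketch).
// Keyed on the upstream payload so identical responses are transformed once.
const crypto = require('crypto');

class TransformCache {
  constructor(maxEntries = 10_000) {
    this.maxEntries = maxEntries;
    this.cache = new Map(); // Map keeps insertion order, giving cheap FIFO eviction
  }

  key(payload) {
    return crypto.createHash('sha1').update(JSON.stringify(payload)).digest('hex');
  }

  transform(adapter, payload) {
    const k = this.key(payload);
    if (this.cache.has(k)) return this.cache.get(k);

    const result = adapter.transform(payload);
    if (this.cache.size >= this.maxEntries) {
      // Evict the oldest entry to keep memory bounded
      this.cache.delete(this.cache.keys().next().value);
    }
    this.cache.set(k, result);
    return result;
  }
}
```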

**Capacity Planning:**
- Expect 20-30% overhead during dual-version support period
- Plan for 2x database capacity during migration window
- Allocate extra API gateway resources for routing logic
- Monitor memory usage in compatibility layer services
- Scale horizontally rather than vertically for version isolation

## Security Considerations

**Version Transition Security:**
- Audit authentication mechanisms for compatibility breaks
- Verify authorization rules apply consistently across versions
- Scan for security vulnerabilities in compatibility adapters
- Review CORS policies for new version endpoints
- Update API keys and tokens if authentication changes
- Test rate limiting separately per API version
- Ensure TLS/SSL certificates cover new version domains
- Validate input sanitization in transformation layers
- Check for data leakage in error messages across versions
- Verify that audit logging captures version information

**Security Checklist:**
- [ ] Authentication backward compatible or migration path clear
- [ ] Authorization policies tested with both version payloads
- [ ] Sensitive data transformations don't expose information
- [ ] Rate limiting prevents abuse of compatibility layers
- [ ] API key revocation works across all versions
- [ ] Security headers consistent across versions
- [ ] OWASP Top 10 validated for new endpoints
- [ ] Penetration testing completed for v2 before public launch

## Troubleshooting Guide

**Issue: Consumers report intermittent failures after migration**
- Check load balancer health checks for version endpoints
- Verify DNS propagation completed for new version domains
- Review session affinity settings (sticky sessions may cause issues)
- Confirm database connection pools sized for dual version load
- Check for race conditions in data migration scripts

**Issue: Adapter performance degrading over time**
- Monitor adapter service memory for leaks
- Check for unbounded cache growth in transformation layer
- Review database query performance for compatibility lookups
- Consider pre-computing common transformations
- Profile adapter code for inefficient object mapping

**Issue: Version metrics not appearing in dashboard**
- Verify X-API-Version header added to all responses
- Check logging configuration captures version metadata
- Confirm monitoring agents updated to track new version
- Review metric aggregation rules in monitoring system
- Ensure API gateway properly tags requests by version

**Issue: Rollback triggered unexpectedly**
- Review error rate thresholds (may be too sensitive)
- Check if external service outages affected v2 only
- Verify rollback threshold uses appropriate time windows
- Investigate false positives in health checks
- Review deployment timing vs. traffic pattern changes

## Version History

**v1.0.0** (2024-01-15)
- Initial release with REST API migration support
- Basic compatibility adapter generation
- Canary deployment strategy
- OpenAPI 3.0 diff analysis

**v1.1.0** (2024-02-10)
- Added GraphQL schema evolution support
- Implemented automatic deprecation warning injection
- Enhanced consumer notification system
- Added rollback automation based on error thresholds

**v1.2.0** (2024-03-05)
- gRPC service versioning support with Envoy integration
- Blue-green deployment strategy option
- Database schema migration with automated rollback
- Consumer registry integration for impact analysis

**v1.3.0** (2024-04-20)
- Feature flag integration for gradual rollout control
- Enhanced compatibility testing framework
- Performance optimization for adapter transformations
- Multi-region migration coordination

**v2.0.0** (2024-06-15)
- Complete rewrite of adapter generation engine
- Support for complex data transformation scenarios
- Integration with major API gateway platforms
- Real-time migration dashboard with adoption metrics
- Automated consumer SDK generation for new versions

**v2.1.0** (2024-08-30) - Current
- AI-powered breaking change impact analysis
- Automated migration guide generation
- Enhanced security scanning for version transitions
- Support for WebSocket and Server-Sent Events migration
- Contract testing automation across API versions