---
name: aws-rds-spring-boot-integration
description: Configure AWS RDS (Aurora, MySQL, PostgreSQL) with Spring Boot applications. Use when setting up datasources, connection pooling, security, and production-ready database configuration.
category: aws
tags: [aws, rds, aurora, spring-boot, spring-data-jpa, datasource, configuration, hikari, mysql, postgresql]
version: 1.1.0
allowed-tools: Read, Write, Bash, Glob
---

# AWS RDS Spring Boot Integration

Configure AWS RDS databases (Aurora, MySQL, PostgreSQL) with Spring Boot applications for production-ready connectivity.

## When to Use This Skill

Use this skill when:
- Setting up AWS RDS Aurora with Spring Data JPA
- Configuring datasource properties for Aurora, MySQL, or PostgreSQL endpoints
- Implementing HikariCP connection pooling for RDS
- Setting up environment-specific configurations (dev/prod)
- Configuring SSL connections to AWS RDS
- Troubleshooting RDS connection issues
- Setting up database migrations with Flyway
- Integrating with AWS Secrets Manager for credential management
- Optimizing connection pool settings for RDS workloads
- Implementing read/write split with Aurora

## Prerequisites

Before starting AWS RDS Spring Boot integration:
1. AWS account with RDS access
2. Spring Boot project (3.x)
3. RDS instance created and running (Aurora/MySQL/PostgreSQL)
4. Security group configured for database access
5. Database endpoint information available
6. Database credentials secured (environment variables or Secrets Manager)

## Quick Start

### Step 1: Add Dependencies

**Maven (pom.xml):**
```xml
<dependencies>
    <!-- Spring Data JPA -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

    <!-- Aurora MySQL Driver -->
    <dependency>
        <groupId>com.mysql</groupId>
        <artifactId>mysql-connector-j</artifactId>
        <version>8.2.0</version>
        <scope>runtime</scope>
    </dependency>

    <!-- Aurora PostgreSQL Driver (alternative) -->
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <scope>runtime</scope>
    </dependency>

    <!-- Flyway for database migrations -->
    <dependency>
        <groupId>org.flywaydb</groupId>
        <artifactId>flyway-core</artifactId>
    </dependency>
    <!-- Flyway 9+ ships MySQL support in a separate module; required for Aurora MySQL -->
    <dependency>
        <groupId>org.flywaydb</groupId>
        <artifactId>flyway-mysql</artifactId>
    </dependency>

    <!-- Validation -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-validation</artifactId>
    </dependency>
</dependencies>
```

**Gradle (build.gradle):**
```gradle
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
    implementation 'org.springframework.boot:spring-boot-starter-validation'

    // Aurora MySQL
    runtimeOnly 'com.mysql:mysql-connector-j:8.2.0'

    // Aurora PostgreSQL (alternative)
    runtimeOnly 'org.postgresql:postgresql'

    // Flyway (flyway-mysql is required for MySQL/Aurora MySQL with Flyway 9+)
    implementation 'org.flywaydb:flyway-core'
    implementation 'org.flywaydb:flyway-mysql'
}
```

### Step 2: Basic Datasource Configuration

**application.properties (Aurora MySQL):**
```properties
# Aurora MySQL Datasource - Cluster Endpoint
spring.datasource.url=jdbc:mysql://myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:3306/devops
spring.datasource.username=admin
spring.datasource.password=${DB_PASSWORD}
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

# JPA/Hibernate Configuration
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.show-sql=false
# Hibernate 6 (Spring Boot 3.x) auto-detects the dialect; set it explicitly only if needed
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQLDialect
spring.jpa.properties.hibernate.format_sql=true
spring.jpa.open-in-view=false

# HikariCP Connection Pool
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=1200000

# Flyway Configuration
spring.flyway.enabled=true
spring.flyway.baseline-on-migrate=true
spring.flyway.locations=classpath:db/migration
```

**application.properties (Aurora PostgreSQL):**
```properties
# Aurora PostgreSQL Datasource
spring.datasource.url=jdbc:postgresql://myapp-aurora-pg-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:5432/devops
spring.datasource.username=admin
spring.datasource.password=${DB_PASSWORD}
spring.datasource.driver-class-name=org.postgresql.Driver

# JPA/Hibernate Configuration
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
spring.jpa.open-in-view=false
```

### Step 3: Set Up Environment Variables

```bash
# Production environment variables (single quotes prevent shell history expansion of '!')
export DB_PASSWORD='YourStrongPassword123!'
export SPRING_PROFILES_ACTIVE=prod

# For development
export SPRING_PROFILES_ACTIVE=dev
```
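Spring's property placeholder syntax also supports a default value after a colon, which can keep a local build booting when the variable is unset; the fallback value below is illustrative and should never be a real credential:

```properties
# Falls back to the literal after ':' if DB_PASSWORD is not set (dev convenience only)
spring.datasource.password=${DB_PASSWORD:changeme-dev-only}
```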

## Configuration Examples

### Simple Aurora Cluster (MySQL)

**application.yml:**
```yaml
spring:
  application:
    name: DevOps

  datasource:
    url: jdbc:mysql://myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:3306/devops
    username: admin
    password: ${DB_PASSWORD}
    driver-class-name: com.mysql.cj.jdbc.Driver

    hikari:
      pool-name: AuroraHikariPool
      maximum-pool-size: 20
      minimum-idle: 5
      connection-timeout: 20000
      idle-timeout: 300000
      max-lifetime: 1200000
      leak-detection-threshold: 60000
      connection-test-query: SELECT 1

  jpa:
    hibernate:
      ddl-auto: validate
    show-sql: false
    open-in-view: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQLDialect
        format_sql: true
        jdbc:
          batch_size: 20
          order_inserts: true
          order_updates: true

  flyway:
    enabled: true
    baseline-on-migrate: true
    locations: classpath:db/migration
    validate-on-migrate: true

logging:
  level:
    org.hibernate.SQL: WARN
    com.zaxxer.hikari: INFO
```

### Read/Write Split Configuration

For read-heavy workloads, use separate writer and reader datasources:

**application.properties:**
```properties
# Aurora MySQL - Writer Endpoint
spring.datasource.writer.jdbc-url=jdbc:mysql://myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:3306/devops
spring.datasource.writer.username=admin
spring.datasource.writer.password=${DB_PASSWORD}
spring.datasource.writer.driver-class-name=com.mysql.cj.jdbc.Driver

# Aurora MySQL - Reader Endpoint (Read Replicas)
spring.datasource.reader.jdbc-url=jdbc:mysql://myapp-aurora-cluster.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com:3306/devops
spring.datasource.reader.username=admin
spring.datasource.reader.password=${DB_PASSWORD}
spring.datasource.reader.driver-class-name=com.mysql.cj.jdbc.Driver

# HikariCP for Writer
spring.datasource.writer.hikari.maximum-pool-size=15
spring.datasource.writer.hikari.minimum-idle=5

# HikariCP for Reader
spring.datasource.reader.hikari.maximum-pool-size=25
spring.datasource.reader.hikari.minimum-idle=10
```
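One way to route between the two pools at runtime is Spring's `AbstractRoutingDataSource`. The sketch below is illustrative, not part of this skill's reference configuration: the class name, lookup keys, and the `ThreadLocal` read-only flag are assumptions, and in practice the flag would be set by an interceptor or AOP aspect around `@Transactional(readOnly = true)` methods.

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

import javax.sql.DataSource;
import java.util.Map;

// Illustrative sketch: route to the reader pool when the current thread
// has been marked read-only, otherwise use the writer pool.
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<Boolean> READ_ONLY =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    public static void setReadOnly(boolean readOnly) {
        READ_ONLY.set(readOnly);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return READ_ONLY.get() ? "reader" : "writer";
    }

    // Wire it up with the writer and reader HikariCP datasources defined above
    public static DataSource build(DataSource writer, DataSource reader) {
        ReadWriteRoutingDataSource routing = new ReadWriteRoutingDataSource();
        routing.setTargetDataSources(Map.of("writer", writer, "reader", reader));
        routing.setDefaultTargetDataSource(writer);
        routing.afterPropertiesSet();
        return routing;
    }
}
```

The routing datasource defaults to the writer, so a forgotten flag degrades to correct (if less scalable) behavior rather than sending writes to a replica.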

### SSL Configuration

**Aurora MySQL with SSL:**
```properties
spring.datasource.url=jdbc:mysql://myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:3306/devops?useSSL=true&requireSSL=true&verifyServerCertificate=true
```

**Aurora PostgreSQL with SSL:**
```properties
spring.datasource.url=jdbc:postgresql://myapp-aurora-pg-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:5432/devops?ssl=true&sslmode=require
```

## Environment-Specific Configuration

### Development Profile

**application-dev.properties:**
```properties
# Local MySQL for development
spring.datasource.url=jdbc:mysql://localhost:3306/devops_dev
spring.datasource.username=root
spring.datasource.password=root

# Enable DDL auto-update in development
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true

# Smaller connection pool for local dev
spring.datasource.hikari.maximum-pool-size=5
spring.datasource.hikari.minimum-idle=2
```

### Production Profile

**application-prod.properties:**
```properties
# Aurora Cluster Endpoint (Production)
spring.datasource.url=jdbc:mysql://${AURORA_ENDPOINT}:3306/${DB_NAME}
spring.datasource.username=${DB_USERNAME}
spring.datasource.password=${DB_PASSWORD}

# Validate schema only in production
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.show-sql=false
spring.jpa.open-in-view=false

# Production-optimized connection pool
spring.datasource.hikari.maximum-pool-size=30
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=1200000

# Enable Flyway migrations
spring.flyway.enabled=true
spring.flyway.validate-on-migrate=true
```

## Database Migration Setup

Create migration files for Flyway:

```
src/main/resources/db/migration/
├── V1__create_users_table.sql
├── V2__add_phone_column.sql
└── V3__create_orders_table.sql
```

**V1__create_users_table.sql:**
```sql
CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(255) NOT NULL UNIQUE,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_email (email)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```

## Advanced Features

For advanced configuration, see the reference documents:

- [Multi-datasource, SSL, Secrets Manager integration](references/advanced-configuration.md)
- [Common issues and solutions](references/troubleshooting.md)
## Best Practices

### Connection Pool Optimization

- Use HikariCP with Aurora-optimized settings
- Set appropriate pool sizes based on Aurora instance capacity
- Configure connection timeouts for failover handling
- Enable leak detection

### Security Best Practices

- Never hardcode credentials in configuration files
- Use environment variables or AWS Secrets Manager
- Enable SSL/TLS connections
- Configure proper security group rules
- Use IAM Database Authentication when possible
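With IAM Database Authentication enabled on the cluster, the password is replaced by a short-lived token generated with the AWS SDK for Java 2.x `RdsUtilities`. A minimal sketch (the endpoint and database user below are placeholders, and the cluster must have an IAM-enabled DB user configured):

```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rds.RdsUtilities;
import software.amazon.awssdk.services.rds.model.GenerateAuthenticationTokenRequest;

public class IamAuthTokenExample {

    // Returns a token that is used in place of the DB password (valid ~15 minutes)
    public static String generateToken() {
        RdsUtilities utilities = RdsUtilities.builder()
                .region(Region.US_EAST_1)
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build();

        GenerateAuthenticationTokenRequest request = GenerateAuthenticationTokenRequest.builder()
                .hostname("myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com")
                .port(3306)
                .username("iam_db_user")
                .build();

        return utilities.generateAuthenticationToken(request);
    }
}
```

Because tokens expire, a datasource using IAM auth must refresh the password periodically rather than caching it once at startup.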

### Performance Optimization

- Enable batch operations for bulk data operations
- Disable the open-in-view pattern so connections are not held through view rendering and lazy-loading surprises surface early
- Use appropriate indexing for Aurora queries
- Configure connection pooling for high availability

### Monitoring

- Enable Spring Boot Actuator for database metrics
- Monitor connection pool metrics
- Set up proper logging for debugging
- Configure health checks for database connectivity
## Testing

Create a health check endpoint to test database connectivity:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.sql.DataSource;
import java.sql.Connection;
import java.util.HashMap;
import java.util.Map;

@RestController
@RequestMapping("/api/health")
public class DatabaseHealthController {

    @Autowired
    private DataSource dataSource;

    @GetMapping("/db-connection")
    public ResponseEntity<Map<String, Object>> testDatabaseConnection() {
        Map<String, Object> response = new HashMap<>();

        try (Connection connection = dataSource.getConnection()) {
            response.put("status", "success");
            response.put("database", connection.getCatalog());
            response.put("url", connection.getMetaData().getURL());
            response.put("connected", true);
            return ResponseEntity.ok(response);
        } catch (Exception e) {
            response.put("status", "failed");
            response.put("error", e.getMessage());
            response.put("connected", false);
            return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).body(response);
        }
    }
}
```

**Test with cURL:**
```bash
curl http://localhost:8080/api/health/db-connection
```

## Support

For detailed troubleshooting and advanced configuration, refer to:

- [AWS RDS Aurora Advanced Configuration](references/advanced-configuration.md)
- [AWS RDS Aurora Troubleshooting Guide](references/troubleshooting.md)
- [AWS SDK for Java: Aurora code examples](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/java_aurora_code_examples.html)
- [Baeldung: AWS Aurora RDS with Java](https://www.baeldung.com/aws-aurora-rds-java)

---

# AWS RDS Aurora Advanced Configuration

## Read/Write Split Configuration

For applications with heavy read operations, configure separate datasources:

**Multi-Datasource Configuration Class:**
```java
import javax.sql.DataSource;

import jakarta.persistence.EntityManagerFactory;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class AuroraDataSourceConfig {

    @Primary
    @Bean(name = "writerDataSource")
    @ConfigurationProperties("spring.datasource.writer")
    public DataSource writerDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "readerDataSource")
    @ConfigurationProperties("spring.datasource.reader")
    public DataSource readerDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Primary
    @Bean(name = "writerEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean writerEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("writerDataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.example.domain")
                .persistenceUnit("writer")
                .build();
    }

    @Bean(name = "readerEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean readerEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("readerDataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("com.example.domain")
                .persistenceUnit("reader")
                .build();
    }

    @Primary
    @Bean(name = "writerTransactionManager")
    public PlatformTransactionManager writerTransactionManager(
            @Qualifier("writerEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }

    @Bean(name = "readerTransactionManager")
    public PlatformTransactionManager readerTransactionManager(
            @Qualifier("readerEntityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
```

**Usage in Repository:**
```java
@Repository
public interface UserReadRepository extends JpaRepository<User, Long> {
    // Served by the reader endpoint once bound to readerEntityManagerFactory
}

@Repository
public interface UserWriteRepository extends JpaRepository<User, Long> {
    // Served by the writer endpoint once bound to writerEntityManagerFactory
}
```

Note that routing is not automatic: each repository must be wired to the matching entity manager factory and transaction manager, typically by placing the interfaces in separate packages referenced from two `@EnableJpaRepositories` declarations (`entityManagerFactoryRef` / `transactionManagerRef`).
## SSL/TLS Configuration

Enable SSL for secure connections to Aurora:

**Aurora MySQL with SSL:**
```properties
spring.datasource.url=jdbc:mysql://myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:3306/devops?useSSL=true&requireSSL=true&verifyServerCertificate=true
```

**Aurora PostgreSQL with SSL:**
```properties
spring.datasource.url=jdbc:postgresql://myapp-aurora-pg-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com:5432/devops?ssl=true&sslmode=require
```

**Download RDS Certificate:**
```bash
# Download the RDS CA certificate bundle
wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

# MySQL Connector/J expects a Java truststore, not a raw PEM file - import it first
# (the global bundle contains multiple CAs; a single import is shown for brevity)
keytool -importcert -alias rds-ca -file global-bundle.pem \
        -keystore rds-truststore.jks -storepass changeit -noprompt
```

```properties
# Point the driver at the truststore
spring.datasource.url=jdbc:mysql://...?useSSL=true&trustCertificateKeyStoreUrl=file:///path/to/rds-truststore.jks&trustCertificateKeyStorePassword=changeit
```

## AWS Secrets Manager Integration

**Add AWS SDK Dependency:**
```xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>secretsmanager</artifactId>
    <version>2.20.0</version>
</dependency>
```

**Secrets Manager Configuration:**
```java
import java.util.Map;

import javax.sql.DataSource;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;

@Configuration
public class AuroraDataSourceConfig {

    @Value("${aws.secretsmanager.secret-name}")
    private String secretName;

    @Value("${aws.region}")
    private String region;

    @Bean
    public DataSource dataSource() {
        Map<String, String> credentials = getAuroraCredentials();

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(credentials.get("url"));
        config.setUsername(credentials.get("username"));
        config.setPassword(credentials.get("password"));
        config.setMaximumPoolSize(20);
        config.setMinimumIdle(5);
        config.setConnectionTimeout(20000);

        return new HikariDataSource(config);
    }

    private Map<String, String> getAuroraCredentials() {
        try (SecretsManagerClient client = SecretsManagerClient.builder()
                .region(Region.of(region))
                .build()) {

            GetSecretValueRequest request = GetSecretValueRequest.builder()
                    .secretId(secretName)
                    .build();

            GetSecretValueResponse response = client.getSecretValue(request);
            String secretString = response.secretString();

            // Parse the JSON secret into typed key/value pairs
            ObjectMapper mapper = new ObjectMapper();
            try {
                return mapper.readValue(secretString, new TypeReference<Map<String, String>>() {});
            } catch (Exception e) {
                throw new RuntimeException("Failed to parse secret", e);
            }
        }
    }
}
```

**application.properties (Secrets Manager):**
```properties
aws.secretsmanager.secret-name=prod/aurora/credentials
aws.region=us-east-1
```

## Database Migration with Flyway

### Setup Flyway

**Create Migration Directory:**
```
src/main/resources/db/migration/
├── V1__create_users_table.sql
├── V2__add_phone_column.sql
└── V3__create_orders_table.sql
```

**V1__create_users_table.sql:**
```sql
CREATE TABLE users (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(255) NOT NULL UNIQUE,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_email (email)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```

**V2__add_phone_column.sql:**
```sql
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
```

**Flyway Configuration:**
```properties
spring.jpa.hibernate.ddl-auto=validate
spring.flyway.enabled=true
spring.flyway.baseline-on-migrate=true
spring.flyway.locations=classpath:db/migration
spring.flyway.validate-on-migrate=true
```

## Connection Pool Optimization for Aurora

**Recommended HikariCP Settings:**
```properties
# Aurora-optimized connection pool
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.idle-timeout=300000
spring.datasource.hikari.max-lifetime=1200000
spring.datasource.hikari.leak-detection-threshold=60000
spring.datasource.hikari.connection-test-query=SELECT 1
```

**Formula for Pool Size:**
```
connections = (core_count * 2) + effective_spindle_count

For Aurora: start with 20-30 connections per application instance and tune from there
```
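The rule of thumb above can be expressed as a small helper. The clamp to the 20-30 range is an assumption taken from the per-instance guidance here, not a HikariCP default:

```java
public class PoolSizing {

    /**
     * Classic HikariCP sizing heuristic: cores * 2 plus effective spindles,
     * clamped to the 20-30 range suggested above for Aurora app instances.
     */
    public static int recommendedPoolSize(int coreCount, int effectiveSpindleCount) {
        int raw = coreCount * 2 + effectiveSpindleCount;
        return Math.max(20, Math.min(30, raw));
    }

    public static void main(String[] args) {
        System.out.println(recommendedPoolSize(8, 1));   // 17, clamped up to 20
        System.out.println(recommendedPoolSize(16, 2));  // 34, clamped down to 30
    }
}
```

Whatever the formula suggests, the sum across all application instances must stay below the cluster's `max_connections`.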

## Failover Handling

Aurora automatically handles failover between instances. Configure connection retry:

```properties
# Connection retry configuration
spring.datasource.hikari.connection-timeout=30000
spring.datasource.url=jdbc:mysql://cluster-endpoint:3306/db?failOverReadOnly=false&maxReconnects=3&connectTimeout=30000
```
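Beyond driver-level reconnects, application code can wrap critical calls in a retry helper. The class below is an illustrative sketch (names are not from this skill) of exponential backoff, which gives the cluster endpoint's DNS time to point at the newly promoted writer during a failover:

```java
import java.util.concurrent.Callable;

// Illustrative retry helper: retries a call up to maxAttempts times,
// doubling the delay between attempts (exponential backoff).
public class RetryingCaller {

    public static <T> T callWithRetry(Callable<T> call, int maxAttempts, long initialDelayMillis)
            throws Exception {
        long delay = initialDelayMillis;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // back off before the next attempt
                }
            }
        }
        throw last;
    }
}
```

For example, a `dataSource.getConnection()` call wrapped this way will ride out the brief window in which all connection attempts fail.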

## Read Replica Load Balancing

Use the reader endpoint to distribute read traffic across replicas:

```properties
# Reader endpoint for read-heavy workloads
spring.datasource.reader.url=jdbc:mysql://cluster-ro-endpoint:3306/db
```

## Performance Optimization

**Enable batch operations:**
```properties
spring.jpa.properties.hibernate.jdbc.batch_size=20
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
spring.jpa.properties.hibernate.batch_versioned_data=true
```

**Disable open-in-view pattern:**
```properties
spring.jpa.open-in-view=false
```

**Production logging configuration:**
```properties
# Disable SQL logging in production
logging.level.org.hibernate.SQL=WARN
logging.level.org.springframework.jdbc=WARN

# Enable HikariCP metrics
logging.level.com.zaxxer.hikari=INFO
logging.level.com.zaxxer.hikari.pool=DEBUG
```

**Enable Spring Boot Actuator for metrics:**
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```

```properties
management.endpoints.web.exposure.include=health,metrics,info
management.endpoint.health.show-details=always
```

---

# AWS RDS Aurora Troubleshooting Guide

## Common Issues and Solutions

### Connection Timeout to Aurora Cluster

**Error:** `Communications link failure` or `Connection timed out`

**Solutions:**
- Verify security group inbound rules allow traffic on port 3306 (MySQL) or 5432 (PostgreSQL)
- Check that the Aurora cluster endpoint is correct (cluster vs instance endpoint)
- Ensure your IP/CIDR is allow-listed in the security group
- Verify VPC and subnet configuration
- Check whether the Aurora cluster is in the same VPC or VPC peering is configured

```bash
# Test connectivity from an EC2 instance or local machine
telnet myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com 3306
```

### Access Denied for User

**Error:** `Access denied for user 'admin'@'...'`

**Solutions:**
- Verify the master username and password are correct
- Check whether IAM authentication is required but not configured
- Reset the master password in the Aurora console if needed
- Verify user permissions in the database

```sql
-- Check user permissions
SHOW GRANTS FOR 'admin'@'%';
```

### Database Not Found

**Error:** `Unknown database 'devops'`

**Solutions:**
- Verify an initial database name was specified when the cluster was created
- Create the database manually using a MySQL/PostgreSQL client
- Check that the database name in the JDBC URL matches an existing database

```sql
-- Connect to Aurora and create the database
CREATE DATABASE devops CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```

### SSL Connection Issues

**Error:** `SSL connection error` or `Certificate validation failed`

**Solutions:**
```properties
# Option 1: Disable SSL verification (NOT recommended for production)
spring.datasource.url=jdbc:mysql://...?useSSL=false

# Option 2: Properly configure SSL with the RDS certificate
# (Connector/J expects a Java truststore; import the PEM bundle with keytool first)
spring.datasource.url=jdbc:mysql://...?useSSL=true&requireSSL=true&verifyServerCertificate=true&trustCertificateKeyStoreUrl=file:///path/to/rds-truststore.jks

# Option 3: Trust all certificates (NOT recommended for production)
spring.datasource.url=jdbc:mysql://...?useSSL=true&requireSSL=true&verifyServerCertificate=false
```

### Too Many Connections

**Error:** `Too many connections` or `Connection pool exhausted`

**Solutions:**
- Review the Aurora instance `max_connections` parameter
- Optimize the HikariCP pool size
- Check for connection leaks in application code

```properties
# Reduce pool size
spring.datasource.hikari.maximum-pool-size=15
spring.datasource.hikari.minimum-idle=5

# Enable leak detection
spring.datasource.hikari.leak-detection-threshold=60000
```

**Check Aurora max_connections:**
```sql
SHOW VARIABLES LIKE 'max_connections';
-- Default for Aurora depends on the instance class
-- db.r6g.large: ~1000 connections
```

### Slow Query Performance

**Error:** Queries taking longer than expected

**Solutions:**
- Enable the slow query log in the Aurora parameter group
- Review connection pool settings
- Check Aurora instance metrics in CloudWatch
- Optimize queries and add indexes

```properties
# Enable query logging (development only)
logging.level.org.hibernate.SQL=DEBUG
# Hibernate 6 (Spring Boot 3.x); for Hibernate 5 use org.hibernate.type.descriptor.sql.BasicBinder
logging.level.org.hibernate.orm.jdbc.bind=TRACE
```

### Failover Delays

**Error:** Application freezes during Aurora failover

**Solutions:**
- Configure connection timeouts appropriately
- Use the cluster endpoint (not an instance endpoint)
- Implement connection retry logic

```properties
spring.datasource.hikari.connection-timeout=20000
spring.datasource.hikari.validation-timeout=5000
spring.datasource.url=jdbc:mysql://...?failOverReadOnly=false&maxReconnects=3
```

## Testing Aurora Connection

### Connection Test with Spring Boot Application

**Create a Simple Test Endpoint:**
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.sql.DataSource;
import java.sql.Connection;
import java.util.HashMap;
import java.util.Map;

@RestController
@RequestMapping("/api/health")
public class DatabaseHealthController {

    @Autowired
    private DataSource dataSource;

    @GetMapping("/db-connection")
    public ResponseEntity<Map<String, Object>> testDatabaseConnection() {
        Map<String, Object> response = new HashMap<>();

        try (Connection connection = dataSource.getConnection()) {
            response.put("status", "success");
            response.put("database", connection.getCatalog());
            response.put("url", connection.getMetaData().getURL());
            response.put("connected", true);
            return ResponseEntity.ok(response);
        } catch (Exception e) {
            response.put("status", "failed");
            response.put("error", e.getMessage());
            response.put("connected", false);
            return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).body(response);
        }
    }
}
```

**Test with cURL:**
```bash
curl http://localhost:8080/api/health/db-connection
```

### Verify Aurora Connection with MySQL/PostgreSQL Client

**MySQL Client Connection:**
```bash
# Connect to the Aurora MySQL cluster (prompts for the password)
mysql -h myapp-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com \
      -P 3306 \
      -u admin \
      -p devops

# Verify the connection
SHOW DATABASES;
SELECT @@version;
SHOW VARIABLES LIKE 'aurora_version';
```

**PostgreSQL Client Connection:**
```bash
# Connect to Aurora PostgreSQL
psql -h myapp-aurora-pg-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com \
     -p 5432 \
     -U admin \
     -d devops

# Verify the connection
\l
SELECT version();
```
---
name: aws-sdk-java-v2-bedrock
description: Amazon Bedrock patterns using AWS SDK for Java 2.x. Use when working with foundation models (listing, invoking), text generation, image generation, embeddings, streaming responses, or integrating generative AI with Spring Boot applications.
category: aws
tags: [aws, bedrock, java, sdk, generative-ai, foundation-models]
version: 2.0.0
allowed-tools: Read, Write, Bash
---

# AWS SDK for Java 2.x - Amazon Bedrock

## When to Use

Use this skill when:
- Listing and inspecting foundation models on Amazon Bedrock
- Invoking foundation models for text generation (Claude, Llama, Titan)
- Generating images with AI models (Stable Diffusion)
- Creating text embeddings for RAG applications
- Implementing streaming responses for real-time generation
- Working with multiple AI providers through a unified API
- Integrating generative AI into Spring Boot applications
- Building AI-powered chatbots and assistants

## Overview

Amazon Bedrock provides access to foundation models from leading AI providers through a unified API. This skill covers patterns for working with models such as Claude, Llama, Titan, and Stable Diffusion using the AWS SDK for Java 2.x.

## Quick Start

### Dependencies

```xml
<!-- Versions are typically managed by the AWS SDK for Java 2.x BOM -->
<!-- Bedrock (model management) -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>bedrock</artifactId>
</dependency>

<!-- Bedrock Runtime (model invocation) -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>bedrockruntime</artifactId>
</dependency>

<!-- For JSON processing -->
<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20231013</version>
</dependency>
```
|
||||

### Basic Client Setup

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrock.BedrockClient;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;

// Model management client
BedrockClient bedrockClient = BedrockClient.builder()
        .region(Region.US_EAST_1)
        .build();

// Model invocation client
BedrockRuntimeClient bedrockRuntimeClient = BedrockRuntimeClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

## Core Patterns

### Model Discovery

```java
import software.amazon.awssdk.services.bedrock.model.*;
import java.util.List;

public List<FoundationModelSummary> listFoundationModels(BedrockClient bedrockClient) {
    return bedrockClient.listFoundationModels().modelSummaries();
}
```

### Model Invocation

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.bedrockruntime.model.*;
import org.json.JSONObject;

public String invokeModel(BedrockRuntimeClient client, String modelId, String prompt) {
    JSONObject payload = createPayload(modelId, prompt);

    InvokeModelResponse response = client.invokeModel(request -> request
            .modelId(modelId)
            .body(SdkBytes.fromUtf8String(payload.toString())));

    return extractTextFromResponse(modelId, response.body().asUtf8String());
}

private JSONObject createPayload(String modelId, String prompt) {
    if (modelId.startsWith("anthropic.claude")) {
        return new JSONObject()
                .put("anthropic_version", "bedrock-2023-05-31")
                .put("max_tokens", 1000)
                .put("messages", new JSONObject[]{
                        new JSONObject().put("role", "user").put("content", prompt)
                });
    } else if (modelId.startsWith("amazon.titan")) {
        return new JSONObject()
                .put("inputText", prompt)
                .put("textGenerationConfig", new JSONObject()
                        .put("maxTokenCount", 512)
                        .put("temperature", 0.7));
    } else if (modelId.startsWith("meta.llama")) {
        return new JSONObject()
                .put("prompt", "[INST] " + prompt + " [/INST]")
                .put("max_gen_len", 512)
                .put("temperature", 0.7);
    }
    throw new IllegalArgumentException("Unsupported model: " + modelId);
}
```

### Streaming Responses

Response streaming is an event-stream operation, so it requires the asynchronous client (`BedrockRuntimeAsyncClient`):

```java
public void streamResponse(BedrockRuntimeAsyncClient client, String modelId, String prompt) {
    JSONObject payload = createPayload(modelId, prompt);

    InvokeModelWithResponseStreamRequest streamRequest =
            InvokeModelWithResponseStreamRequest.builder()
                    .modelId(modelId)
                    .body(SdkBytes.fromUtf8String(payload.toString()))
                    .build();

    InvokeModelWithResponseStreamResponseHandler handler =
            InvokeModelWithResponseStreamResponseHandler.builder()
                    .subscriber(InvokeModelWithResponseStreamResponseHandler.Visitor.builder()
                            .onChunk(chunk ->
                                    processChunk(modelId, chunk.bytes().asUtf8String()))
                            .build())
                    .build();

    // Block until the stream completes
    client.invokeModelWithResponseStream(streamRequest, handler).join();
}
```

### Text Embeddings

```java
import org.json.JSONArray;

public double[] createEmbeddings(BedrockRuntimeClient client, String text) {
    String modelId = "amazon.titan-embed-text-v1";

    JSONObject payload = new JSONObject().put("inputText", text);

    InvokeModelResponse response = client.invokeModel(request -> request
            .modelId(modelId)
            .body(SdkBytes.fromUtf8String(payload.toString())));

    JSONObject responseBody = new JSONObject(response.body().asUtf8String());
    JSONArray embeddingArray = responseBody.getJSONArray("embedding");

    double[] embeddings = new double[embeddingArray.length()];
    for (int i = 0; i < embeddingArray.length(); i++) {
        embeddings[i] = embeddingArray.getDouble(i);
    }

    return embeddings;
}
```

### Spring Boot Integration

```java
@Configuration
public class BedrockConfiguration {

    @Bean
    public BedrockClient bedrockClient() {
        return BedrockClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }

    @Bean
    public BedrockRuntimeClient bedrockRuntimeClient() {
        return BedrockRuntimeClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }
}

@Service
public class BedrockAIService {

    private final BedrockRuntimeClient bedrockRuntimeClient;
    private final ObjectMapper objectMapper = new ObjectMapper();

    @Value("${bedrock.default-model-id:anthropic.claude-sonnet-4-5-20250929-v1:0}")
    private String defaultModelId;

    public BedrockAIService(BedrockRuntimeClient bedrockRuntimeClient) {
        this.bedrockRuntimeClient = bedrockRuntimeClient;
    }

    public String generateText(String prompt) {
        return generateText(prompt, defaultModelId);
    }

    public String generateText(String prompt, String modelId) {
        try {
            Map<String, Object> payload = createPayload(modelId, prompt);
            String payloadJson = objectMapper.writeValueAsString(payload);

            InvokeModelResponse response = bedrockRuntimeClient.invokeModel(
                    request -> request
                            .modelId(modelId)
                            .body(SdkBytes.fromUtf8String(payloadJson)));

            return extractTextFromResponse(modelId, response.body().asUtf8String());
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Failed to serialize payload", e);
        }
    }
}
```

## Basic Usage Example

```java
BedrockRuntimeClient client = BedrockRuntimeClient.builder()
        .region(Region.US_EAST_1)
        .build();

String prompt = "Explain quantum computing in simple terms";
String response = invokeModel(client, "anthropic.claude-sonnet-4-5-20250929-v1:0", prompt);
System.out.println(response);
```

## Best Practices

### Model Selection
- **Claude 4.5 Sonnet**: Best for complex reasoning, analysis, and creative tasks
- **Claude 4.5 Haiku**: Fast and affordable for real-time applications
- **Claude 3.7 Sonnet**: Hybrid reasoning with optional extended thinking
- **Llama 3.1**: Open-source alternative, good for general tasks
- **Titan**: AWS native, cost-effective for simple text generation
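
The selection guidance above can be captured in a small routing helper. This is an illustrative sketch (the class, enum, and chosen IDs are assumptions, not part of the SDK); substitute whichever model IDs your account has access to:

```java
// Hypothetical routing helper: maps a coarse task profile to a Bedrock model ID.
// The IDs below mirror the constants listed later in this document.
public class ModelSelector {

    public enum Task { COMPLEX_REASONING, REALTIME_CHAT, SIMPLE_TEXT }

    public static String choose(Task task) {
        switch (task) {
            case COMPLEX_REASONING: return "anthropic.claude-sonnet-4-5-20250929-v1:0";
            case REALTIME_CHAT:     return "anthropic.claude-haiku-4-5-20251001-v1:0";
            case SIMPLE_TEXT:       return "amazon.titan-text-lite-v1";
            default: throw new IllegalArgumentException("Unknown task: " + task);
        }
    }
}
```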

### Performance Optimization
- Reuse client instances (don't create new clients for each request)
- Use async clients for concurrent, I/O-bound workloads
- Implement streaming for long responses
- Cache foundation model lists
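
To make "cache foundation model lists" concrete, here is a minimal TTL-cache sketch in plain Java (the class and defaults are illustrative; a caching library such as Caffeine is the usual production choice):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal time-based cache: reloads the wrapped value once the TTL expires.
public class TtlCache<T> {
    private final Supplier<T> loader;
    private final Duration ttl;
    private T value;
    private Instant loadedAt;

    public TtlCache(Supplier<T> loader, Duration ttl) {
        this.loader = loader;
        this.ttl = ttl;
    }

    public synchronized T get() {
        Instant now = Instant.now();
        if (value == null || loadedAt.plus(ttl).isBefore(now)) {
            // e.g. loader = () -> bedrockClient.listFoundationModels().modelSummaries()
            value = loader.get();
            loadedAt = now;
        }
        return value;
    }
}
```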

### Security
- Never log sensitive prompt data
- Use IAM roles for authentication (never hard-coded access keys)
- Implement rate limiting for public applications
- Sanitize user inputs to prevent prompt injection
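
Input sanitization alone cannot fully prevent prompt injection, but basic hygiene helps. A minimal sketch (the class name and limits are illustrative assumptions):

```java
// Caps length and strips control characters before user text is embedded in a prompt.
// Illustrative only; not a complete prompt-injection defense.
public class PromptSanitizer {
    private static final int MAX_LEN = 4000;

    public static String sanitize(String userInput) {
        String cleaned = userInput.replaceAll("\\p{Cntrl}", " ").trim();
        return cleaned.length() > MAX_LEN ? cleaned.substring(0, MAX_LEN) : cleaned;
    }
}
```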

### Error Handling
- Implement retry logic with exponential backoff for throttling
- Handle model-specific validation errors
- Validate responses before processing
- Use typed exception handling for the different SDK error classes
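
For the throttling case, the usual pattern is exponential backoff with jitter. A sketch of the delay calculation (the base and cap values are illustrative defaults, not SDK settings):

```java
import java.util.Random;

// "Full jitter" backoff: delay is uniform in [0, min(cap, base * 2^attempt)].
public class BackoffDelays {
    public static long delayMillis(int attempt, long baseMillis, long capMillis, Random rng) {
        long exp = Math.min(capMillis, baseMillis * (1L << Math.min(attempt, 20)));
        return rng.nextInt((int) exp + 1);
    }
}
```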

### Cost Optimization
- Use appropriate `max_tokens` limits
- Choose cost-effective models for simple tasks
- Cache embeddings when possible
- Monitor usage and set budget alerts

## Common Model IDs

```java
// Claude Models
public static final String CLAUDE_SONNET_4_5 = "anthropic.claude-sonnet-4-5-20250929-v1:0";
public static final String CLAUDE_HAIKU_4_5 = "anthropic.claude-haiku-4-5-20251001-v1:0";
public static final String CLAUDE_OPUS_4_1 = "anthropic.claude-opus-4-1-20250805-v1:0";
public static final String CLAUDE_3_7_SONNET = "anthropic.claude-3-7-sonnet-20250219-v1:0";
public static final String CLAUDE_OPUS_4 = "anthropic.claude-opus-4-20250514-v1:0";
public static final String CLAUDE_SONNET_4 = "anthropic.claude-sonnet-4-20250514-v1:0";
public static final String CLAUDE_3_5_SONNET_V2 = "anthropic.claude-3-5-sonnet-20241022-v2:0";
public static final String CLAUDE_3_5_HAIKU = "anthropic.claude-3-5-haiku-20241022-v1:0";
public static final String CLAUDE_3_OPUS = "anthropic.claude-3-opus-20240229-v1:0";

// Llama Models
public static final String LLAMA_3_3_70B = "meta.llama3-3-70b-instruct-v1:0";
public static final String LLAMA_3_2_90B = "meta.llama3-2-90b-instruct-v1:0";
public static final String LLAMA_3_2_11B = "meta.llama3-2-11b-instruct-v1:0";
public static final String LLAMA_3_2_3B = "meta.llama3-2-3b-instruct-v1:0";
public static final String LLAMA_3_2_1B = "meta.llama3-2-1b-instruct-v1:0";
public static final String LLAMA_4_MAV_17B = "meta.llama4-maverick-17b-instruct-v1:0";
public static final String LLAMA_4_SCOUT_17B = "meta.llama4-scout-17b-instruct-v1:0";
public static final String LLAMA_3_1_405B = "meta.llama3-1-405b-instruct-v1:0";
public static final String LLAMA_3_1_70B = "meta.llama3-1-70b-instruct-v1:0";
public static final String LLAMA_3_1_8B = "meta.llama3-1-8b-instruct-v1:0";
public static final String LLAMA_3_70B = "meta.llama3-70b-instruct-v1:0";
public static final String LLAMA_3_8B = "meta.llama3-8b-instruct-v1:0";

// Amazon Titan Models
public static final String TITAN_TEXT_EXPRESS = "amazon.titan-text-express-v1";
public static final String TITAN_TEXT_LITE = "amazon.titan-text-lite-v1";
public static final String TITAN_EMBEDDINGS = "amazon.titan-embed-text-v1";
public static final String TITAN_IMAGE_GENERATOR = "amazon.titan-image-generator-v1";

// Stable Diffusion
public static final String STABLE_DIFFUSION_XL = "stability.stable-diffusion-xl-v1";

// Mistral AI Models
public static final String MISTRAL_LARGE_2407 = "mistral.mistral-large-2407-v1:0";
public static final String MISTRAL_LARGE_2402 = "mistral.mistral-large-2402-v1:0";
public static final String MISTRAL_SMALL_2402 = "mistral.mistral-small-2402-v1:0";
public static final String MISTRAL_PIXTRAL_2502 = "mistral.pixtral-large-2502-v1:0";
public static final String MISTRAL_MIXTRAL_8X7B = "mistral.mixtral-8x7b-instruct-v0:1";
public static final String MISTRAL_7B = "mistral.mistral-7b-instruct-v0:2";

// Amazon Nova Models
public static final String NOVA_PREMIER = "amazon.nova-premier-v1:0";
public static final String NOVA_PRO = "amazon.nova-pro-v1:0";
public static final String NOVA_LITE = "amazon.nova-lite-v1:0";
public static final String NOVA_MICRO = "amazon.nova-micro-v1:0";
public static final String NOVA_CANVAS = "amazon.nova-canvas-v1:0";
public static final String NOVA_REEL = "amazon.nova-reel-v1:1";

// Other Models
public static final String COHERE_COMMAND = "cohere.command-text-v14";
public static final String DEEPSEEK_R1 = "deepseek.r1-v1:0";
public static final String DEEPSEEK_V3_1 = "deepseek.v3-v1:0";
```

## Examples

See the [examples directory](examples/) for comprehensive usage patterns.

## Advanced Topics

See the [Advanced Topics](references/advanced-topics.md) for:
- Multi-model service patterns
- Advanced error handling with retries
- Batch processing strategies
- Performance optimization techniques
- Custom response parsing

## Model Reference

See the [Model Reference](references/model-reference.md) for:
- Detailed model specifications
- Payload/response formats for each provider
- Performance characteristics
- Model selection guidelines
- Configuration templates

## Testing Strategies

See the [Testing Strategies](references/testing-strategies.md) for:
- Unit testing with mocked clients
- Integration testing with LocalStack
- Performance testing
- Streaming response testing
- Test data management

## Related Skills

- `aws-sdk-java-v2-core` - Core AWS SDK patterns
- `langchain4j-ai-services-patterns` - LangChain4j integration
- `spring-boot-dependency-injection` - Spring DI patterns
- `spring-boot-test-patterns` - Spring testing patterns

## References

- [AWS Bedrock User Guide](references/aws-bedrock-user-guide.md)
- [AWS SDK for Java 2.x Documentation](references/aws-sdk-java-bedrock-api.md)
- [Bedrock API Reference](references/aws-bedrock-api-reference.md)
- [AWS SDK Examples](references/aws-sdk-examples.md)
- [Official AWS Examples](bedrock_code_examples.md)
- [Supported Models](bedrock_models_supported.md)
- [Runtime Examples](bedrock_runtime_code_examples.md)

skills/aws-java/aws-sdk-java-v2-bedrock/bedrock_code_examples.md

# Amazon Bedrock examples using SDK for Java 2.x

The following code examples show you how to perform actions and implement common scenarios by using the AWS SDK for Java 2.x with Amazon Bedrock.

_Actions_ are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios.

Each example includes a link to the complete source code, where you can find instructions on how to set up and run the code in context.

###### Topics

- [Actions](#actions)

## Actions

The following code example shows how to use `GetFoundationModel`.

**SDK for Java 2.x**

###### Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/bedrock#code-examples).

Get details about a foundation model using the synchronous Amazon Bedrock client.

```java
/**
 * Get details about an Amazon Bedrock foundation model.
 *
 * @param bedrockClient The service client for accessing Amazon Bedrock.
 * @param modelIdentifier The model identifier.
 * @return An object containing the foundation model's details.
 */
public static FoundationModelDetails getFoundationModel(BedrockClient bedrockClient, String modelIdentifier) {
    try {
        GetFoundationModelResponse response = bedrockClient.getFoundationModel(
                r -> r.modelIdentifier(modelIdentifier)
        );

        FoundationModelDetails model = response.modelDetails();

        System.out.println(" Model ID: " + model.modelId());
        System.out.println(" Model ARN: " + model.modelArn());
        System.out.println(" Model Name: " + model.modelName());
        System.out.println(" Provider Name: " + model.providerName());
        System.out.println(" Lifecycle status: " + model.modelLifecycle().statusAsString());
        System.out.println(" Input modalities: " + model.inputModalities());
        System.out.println(" Output modalities: " + model.outputModalities());
        System.out.println(" Supported customizations: " + model.customizationsSupported());
        System.out.println(" Supported inference types: " + model.inferenceTypesSupported());
        System.out.println(" Response streaming supported: " + model.responseStreamingSupported());

        return model;

    } catch (ValidationException e) {
        throw new IllegalArgumentException(e.getMessage());
    } catch (SdkException e) {
        System.err.println(e.getMessage());
        throw new RuntimeException(e);
    }
}
```

Get details about a foundation model using the asynchronous Amazon Bedrock client.

```java
/**
 * Get details about an Amazon Bedrock foundation model.
 *
 * @param bedrockClient The async service client for accessing Amazon Bedrock.
 * @param modelIdentifier The model identifier.
 * @return An object containing the foundation model's details.
 */
public static FoundationModelDetails getFoundationModel(BedrockAsyncClient bedrockClient, String modelIdentifier) {
    try {
        CompletableFuture<GetFoundationModelResponse> future = bedrockClient.getFoundationModel(
                r -> r.modelIdentifier(modelIdentifier)
        );

        FoundationModelDetails model = future.get().modelDetails();

        System.out.println(" Model ID: " + model.modelId());
        System.out.println(" Model ARN: " + model.modelArn());
        System.out.println(" Model Name: " + model.modelName());
        System.out.println(" Provider Name: " + model.providerName());
        System.out.println(" Lifecycle status: " + model.modelLifecycle().statusAsString());
        System.out.println(" Input modalities: " + model.inputModalities());
        System.out.println(" Output modalities: " + model.outputModalities());
        System.out.println(" Supported customizations: " + model.customizationsSupported());
        System.out.println(" Supported inference types: " + model.inferenceTypesSupported());
        System.out.println(" Response streaming supported: " + model.responseStreamingSupported());

        return model;

    } catch (ExecutionException e) {
        if (e.getMessage().contains("ValidationException")) {
            throw new IllegalArgumentException(e.getMessage());
        } else {
            System.err.println(e.getMessage());
            throw new RuntimeException(e);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        System.err.println(e.getMessage());
        throw new RuntimeException(e);
    }
}
```

- For API details, see [GetFoundationModel](https://docs.aws.amazon.com/goto/SdkForJavaV2/bedrock-2023-04-20/GetFoundationModel) in the _AWS SDK for Java 2.x API Reference_.

The following code example shows how to use `ListFoundationModels`.

**SDK for Java 2.x**

###### Note

There's more on GitHub. Find the complete example and learn how to set up and run it in the [AWS Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/bedrock#code-examples).

List the available Amazon Bedrock foundation models using the synchronous Amazon Bedrock client.

```java
/**
 * Lists Amazon Bedrock foundation models that you can use.
 * You can filter the results with the request parameters.
 *
 * @param bedrockClient The service client for accessing Amazon Bedrock.
 * @return A list of objects containing the foundation models' details
 */
public static List<FoundationModelSummary> listFoundationModels(BedrockClient bedrockClient) {
    try {
        ListFoundationModelsResponse response = bedrockClient.listFoundationModels(r -> {});

        List<FoundationModelSummary> models = response.modelSummaries();

        if (models.isEmpty()) {
            System.out.println("No available foundation models in this Region.");
        } else {
            for (FoundationModelSummary model : models) {
                System.out.println("Model ID: " + model.modelId());
                System.out.println("Provider: " + model.providerName());
                System.out.println("Name: " + model.modelName());
                System.out.println();
            }
        }

        return models;

    } catch (SdkClientException e) {
        System.err.println(e.getMessage());
        throw new RuntimeException(e);
    }
}
```

List the available Amazon Bedrock foundation models using the asynchronous Amazon Bedrock client.

```java
/**
 * Lists Amazon Bedrock foundation models that you can use.
 * You can filter the results with the request parameters.
 *
 * @param bedrockClient The async service client for accessing Amazon Bedrock.
 * @return A list of objects containing the foundation models' details
 */
public static List<FoundationModelSummary> listFoundationModels(BedrockAsyncClient bedrockClient) {
    try {
        CompletableFuture<ListFoundationModelsResponse> future = bedrockClient.listFoundationModels(r -> {});

        List<FoundationModelSummary> models = future.get().modelSummaries();

        if (models.isEmpty()) {
            System.out.println("No available foundation models in this Region.");
        } else {
            for (FoundationModelSummary model : models) {
                System.out.println("Model ID: " + model.modelId());
                System.out.println("Provider: " + model.providerName());
                System.out.println("Name: " + model.modelName());
                System.out.println();
            }
        }

        return models;

    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        System.err.println(e.getMessage());
        throw new RuntimeException(e);
    } catch (ExecutionException e) {
        System.err.println(e.getMessage());
        throw new RuntimeException(e);
    }
}
```

- For API details, see [ListFoundationModels](https://docs.aws.amazon.com/goto/SdkForJavaV2/bedrock-2023-04-20/ListFoundationModels) in the _AWS SDK for Java 2.x API Reference_.

skills/aws-java/aws-sdk-java-v2-bedrock/bedrock_models_supported.md

(File diff suppressed because it is too large.)

# Advanced Model Patterns

## Model-Specific Configuration

### Claude Models Configuration

```java
// Claude 3 Sonnet
public String invokeClaude3Sonnet(BedrockRuntimeClient client, String prompt) {
    String modelId = "anthropic.claude-3-sonnet-20240229-v1:0";

    JSONObject payload = new JSONObject()
            .put("anthropic_version", "bedrock-2023-05-31")
            .put("max_tokens", 1000)
            .put("temperature", 0.7)
            .put("top_p", 1.0)
            .put("messages", new JSONObject[]{
                    new JSONObject()
                            .put("role", "user")
                            .put("content", prompt)
            });

    InvokeModelResponse response = client.invokeModel(request -> request
            .modelId(modelId)
            .body(SdkBytes.fromUtf8String(payload.toString())));

    JSONObject responseBody = new JSONObject(response.body().asUtf8String());
    return responseBody.getJSONArray("content")
            .getJSONObject(0)
            .getString("text");
}

// Claude 3 Haiku (faster, cheaper)
public String invokeClaude3Haiku(BedrockRuntimeClient client, String prompt) {
    String modelId = "anthropic.claude-3-haiku-20240307-v1:0";

    JSONObject payload = new JSONObject()
            .put("anthropic_version", "bedrock-2023-05-31")
            .put("max_tokens", 400)
            .put("messages", new JSONObject[]{
                    new JSONObject()
                            .put("role", "user")
                            .put("content", prompt)
            });

    // Same invocation and response parsing as the Sonnet example above
    InvokeModelResponse response = client.invokeModel(request -> request
            .modelId(modelId)
            .body(SdkBytes.fromUtf8String(payload.toString())));

    JSONObject responseBody = new JSONObject(response.body().asUtf8String());
    return responseBody.getJSONArray("content")
            .getJSONObject(0)
            .getString("text");
}
```

### Llama Models Configuration

```java
// Llama 3 70B
public String invokeLlama3_70B(BedrockRuntimeClient client, String prompt) {
    String modelId = "meta.llama3-70b-instruct-v1:0";

    JSONObject payload = new JSONObject()
            .put("prompt", prompt)
            .put("max_gen_len", 512)
            .put("temperature", 0.7)
            .put("top_p", 0.9)
            .put("stop", new String[]{"[INST]", "[/INST]"}); // Custom stop tokens

    InvokeModelResponse response = client.invokeModel(request -> request
            .modelId(modelId)
            .body(SdkBytes.fromUtf8String(payload.toString())));

    JSONObject responseBody = new JSONObject(response.body().asUtf8String());
    return responseBody.getString("generation");
}
```

## Multi-Model Service Layer

```java
@Service
public class MultiModelService {

    private final BedrockRuntimeClient bedrockRuntimeClient;
    private final ObjectMapper objectMapper;

    public MultiModelService(BedrockRuntimeClient bedrockRuntimeClient,
                             ObjectMapper objectMapper) {
        this.bedrockRuntimeClient = bedrockRuntimeClient;
        this.objectMapper = objectMapper;
    }

    public String invokeModel(String modelId, String prompt, Map<String, Object> additionalParams) {
        Map<String, Object> payload = createModelPayload(modelId, prompt, additionalParams);

        try {
            // Serialize before the lambda: writeValueAsString throws a checked
            // exception that cannot escape the request-builder Consumer.
            String payloadJson = objectMapper.writeValueAsString(payload);

            InvokeModelResponse response = bedrockRuntimeClient.invokeModel(
                    request -> request
                            .modelId(modelId)
                            .body(SdkBytes.fromUtf8String(payloadJson)));

            return extractResponseContent(modelId, response.body().asUtf8String());

        } catch (Exception e) {
            throw new RuntimeException("Model invocation failed: " + e.getMessage(), e);
        }
    }

    private Map<String, Object> createModelPayload(String modelId, String prompt,
                                                   Map<String, Object> additionalParams) {
        Map<String, Object> payload = new HashMap<>();

        if (modelId.startsWith("anthropic.claude")) {
            payload.put("anthropic_version", "bedrock-2023-05-31");
            payload.put("messages", List.of(Map.of("role", "user", "content", prompt)));

            // Add common parameters with defaults
            payload.putIfAbsent("max_tokens", 1000);
            payload.putIfAbsent("temperature", 0.7);

        } else if (modelId.startsWith("meta.llama")) {
            payload.put("prompt", prompt);
            payload.putIfAbsent("max_gen_len", 512);
            payload.putIfAbsent("temperature", 0.7);

        } else if (modelId.startsWith("amazon.titan")) {
            payload.put("inputText", prompt);
            payload.putIfAbsent("textGenerationConfig",
                    Map.of("maxTokenCount", 512, "temperature", 0.7));
        }

        // Caller-supplied parameters override the defaults
        if (additionalParams != null) {
            payload.putAll(additionalParams);
        }

        return payload;
    }
}
```

## Advanced Error Handling

```java
// Requires spring-retry on the classpath and @EnableRetry on a configuration class.
// Retry on ThrottlingException directly; catching and wrapping it would prevent
// @Retryable from ever seeing the retryable exception type.
@Component
public class BedrockErrorHandler {

    @Retryable(value = {ThrottlingException.class}, maxAttempts = 3,
               backoff = @Backoff(delay = 1000, multiplier = 2))
    public String invokeWithRetry(BedrockRuntimeClient client, String modelId,
                                  String payloadJson) {
        try {
            InvokeModelResponse response = client.invokeModel(request -> request
                    .modelId(modelId)
                    .body(SdkBytes.fromUtf8String(payloadJson)));
            return response.body().asUtf8String();

        } catch (ValidationException e) {
            throw new IllegalArgumentException("Invalid request: " + e.getMessage(), e);
        } catch (ThrottlingException e) {
            throw e; // Let @Retryable handle it with exponential backoff
        } catch (SdkException e) {
            throw new RuntimeException("AWS SDK error: " + e.getMessage(), e);
        }
    }
}
```

## Batch Processing

```java
@Service
public class BedrockBatchService {

    public List<String> processBatch(BedrockRuntimeClient client, String modelId,
                                     List<String> prompts) {
        return prompts.parallelStream()
                .map(prompt -> invokeModelWithTimeout(client, modelId, prompt, 30))
                .collect(Collectors.toList());
    }

    private String invokeModelWithTimeout(BedrockRuntimeClient client, String modelId,
                                          String prompt, int timeoutSeconds) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(() -> {
            JSONObject payload = new JSONObject()
                    .put("prompt", prompt)
                    .put("max_tokens", 500);

            InvokeModelResponse response = client.invokeModel(request -> request
                    .modelId(modelId)
                    .body(SdkBytes.fromUtf8String(payload.toString())));

            return response.body().asUtf8String();
        });

        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);
            throw new RuntimeException("Model invocation timed out");
        } catch (Exception e) {
            throw new RuntimeException("Batch processing error", e);
        } finally {
            executor.shutdown();
        }
    }
}
```

## Model Performance Optimization

```java
@Configuration
public class BedrockOptimizationConfig {

    @Bean
    public BedrockRuntimeClient optimizedBedrockRuntimeClient() {
        return BedrockRuntimeClient.builder()
                .region(Region.US_EAST_1)
                .overrideConfiguration(ClientOverrideConfiguration.builder()
                        .apiCallTimeout(Duration.ofSeconds(30))
                        .apiCallAttemptTimeout(Duration.ofSeconds(20))
                        .build())
                .httpClient(ApacheHttpClient.builder()
                        .connectionTimeout(Duration.ofSeconds(10))
                        .socketTimeout(Duration.ofSeconds(30))
                        .build())
                .build();
    }
}
```
## Custom Response Parsing

```java
public class BedrockResponseParser {

    public static TextResponse parseTextResponse(String modelId, String responseBody) {
        try {
            switch (getModelProvider(modelId)) {
                case ANTHROPIC:
                    return parseAnthropicResponse(responseBody);
                case META:
                    return parseMetaResponse(responseBody);
                case AMAZON:
                    return parseAmazonResponse(responseBody);
                default:
                    throw new IllegalArgumentException("Unsupported model: " + modelId);
            }
        } catch (Exception e) {
            throw new ResponseParsingException("Failed to parse response for model: " + modelId, e);
        }
    }

    private static ModelProvider getModelProvider(String modelId) {
        if (modelId.startsWith("anthropic.")) return ModelProvider.ANTHROPIC;
        if (modelId.startsWith("meta.")) return ModelProvider.META;
        if (modelId.startsWith("amazon.")) return ModelProvider.AMAZON;
        throw new IllegalArgumentException("Unknown provider for model: " + modelId);
    }

    private static TextResponse parseAnthropicResponse(String responseBody) throws JSONException {
        JSONObject json = new JSONObject(responseBody);
        JSONArray content = json.getJSONArray("content");
        String text = content.getJSONObject(0).getString("text");
        int usage = json.getJSONObject("usage").getInt("input_tokens");

        return new TextResponse(text, usage, "anthropic");
    }

    private static TextResponse parseMetaResponse(String responseBody) throws JSONException {
        JSONObject json = new JSONObject(responseBody);
        String text = json.getString("generation");
        // Note: Meta doesn't provide token usage in basic response

        return new TextResponse(text, 0, "meta");
    }

    private static TextResponse parseAmazonResponse(String responseBody) throws JSONException {
        JSONObject json = new JSONObject(responseBody);
        String text = json.getJSONArray("results").getJSONObject(0).getString("outputText");
        // Titan responses report token counts separately; omitted here for brevity
        return new TextResponse(text, 0, "amazon");
    }

    private enum ModelProvider {
        ANTHROPIC, META, AMAZON
    }

    public record TextResponse(String content, int tokensUsed, String provider) {}
}
```

@@ -0,0 +1,372 @@
# Advanced Amazon Bedrock Topics

This document covers advanced patterns and topics for working with Amazon Bedrock using the AWS SDK for Java 2.x.

## Multi-Model Service Pattern

Create a service that handles multiple foundation models behind a unified interface.

```java
@Service
public class MultiModelAIService {

    private final BedrockRuntimeClient bedrockRuntimeClient;

    public MultiModelAIService(BedrockRuntimeClient bedrockRuntimeClient) {
        this.bedrockRuntimeClient = bedrockRuntimeClient;
    }

    public GenerationResult generate(GenerationRequest request) {
        String modelId = request.getModelId();
        String prompt = request.getPrompt();

        switch (getModelProvider(modelId)) {
            case ANTHROPIC:
                return generateWithAnthropic(modelId, prompt, request.getConfig());
            case AMAZON:
                return generateWithAmazon(modelId, prompt, request.getConfig());
            case META:
                return generateWithMeta(modelId, prompt, request.getConfig());
            default:
                throw new IllegalArgumentException("Unsupported model provider: " + modelId);
        }
    }

    private GenerationProvider getModelProvider(String modelId) {
        if (modelId.startsWith("anthropic.")) return GenerationProvider.ANTHROPIC;
        if (modelId.startsWith("amazon.")) return GenerationProvider.AMAZON;
        if (modelId.startsWith("meta.")) return GenerationProvider.META;
        throw new IllegalArgumentException("Unknown provider for model: " + modelId);
    }
}
```

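The service above assumes `GenerationRequest`/`GenerationResult` carrier types that are not shown. A minimal sketch of what the request type might look like, with illustrative field names that are not part of the original:

```java
import java.util.Map;

// Hypothetical carrier type for MultiModelAIService; fields are illustrative.
class GenerationRequest {
    private final String modelId;
    private final String prompt;
    private final Map<String, Object> config; // e.g. max_tokens, temperature

    GenerationRequest(String modelId, String prompt, Map<String, Object> config) {
        this.modelId = modelId;
        this.prompt = prompt;
        this.config = config;
    }

    public String getModelId() { return modelId; }
    public String getPrompt() { return prompt; }
    public Map<String, Object> getConfig() { return config; }
}
```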
## Advanced Error Handling with Retries

Implement robust error handling with exponential backoff:

```java
import software.amazon.awssdk.core.retry.RetryPolicy;
import software.amazon.awssdk.core.retry.backoff.BackoffStrategy;
import software.amazon.awssdk.core.retry.conditions.RetryOnExceptionsCondition;

import java.util.Set;

public class BedrockWithRetry {

    private final BedrockRuntimeClient client;
    private final RetryPolicy retryPolicy;

    public BedrockWithRetry(BedrockRuntimeClient client) {
        this.client = client;
        // Attach this policy to the client via ClientOverrideConfiguration for it to take effect
        this.retryPolicy = RetryPolicy.builder()
                .numRetries(3)
                .retryCondition(RetryOnExceptionsCondition.create(
                        Set.of(ThrottlingException.class)))
                .backoffStrategy(BackoffStrategy.defaultStrategy())
                .build();
    }

    public String invokeModelWithRetry(String modelId, String payload) {
        try {
            InvokeModelRequest request = InvokeModelRequest.builder()
                    .modelId(modelId)
                    .body(SdkBytes.fromUtf8String(payload))
                    .build();

            InvokeModelResponse response = client.invokeModel(request);
            return response.body().asUtf8String();

        } catch (ThrottlingException e) {
            throw new BedrockThrottledException("Rate limit exceeded for model: " + modelId, e);
        } catch (ValidationException e) {
            throw new BedrockValidationException("Invalid request for model: " + modelId, e);
        }
    }
}
```

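Exponential backoff can also be hand-rolled around any blocking call when you do not want to depend on the SDK's retry machinery. A minimal JDK-only sketch; the class and method names are illustrative, not from the original:

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

class Backoff {
    // Retries `call` up to maxAttempts times, doubling the delay after each
    // failure accepted by `retryable`; any other failure is rethrown immediately.
    static <T> T withRetry(Callable<T> call, Predicate<Exception> retryable,
                           int maxAttempts, long initialDelayMillis) throws Exception {
        long delay = initialDelayMillis;
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts || !retryable.test(e)) throw e;
                Thread.sleep(delay);
                delay *= 2; // exponential backoff
            }
        }
    }
}
```

In a Bedrock context the predicate would match throttling exceptions; here it is generic so the helper stays framework-free.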
## Batch Processing Strategies

Process multiple requests efficiently:

```java
@Service
public class BatchGenerationService {

    private final BedrockRuntimeClient bedrockRuntimeClient;

    public BatchGenerationService(BedrockRuntimeClient bedrockRuntimeClient) {
        this.bedrockRuntimeClient = bedrockRuntimeClient;
    }

    public List<BatchResult> processBatch(List<BatchRequest> requests) {
        // Process in parallel (uses the common ForkJoinPool; size accordingly for blocking I/O)
        return requests.parallelStream()
                .map(this::processSingleRequest)
                .collect(Collectors.toList());
    }

    private BatchResult processSingleRequest(BatchRequest request) {
        try {
            InvokeModelRequest modelRequest = InvokeModelRequest.builder()
                    .modelId(request.getModelId())
                    .body(SdkBytes.fromUtf8String(request.getPayload()))
                    .build();

            InvokeModelResponse response = bedrockRuntimeClient.invokeModel(modelRequest);

            return BatchResult.success(
                    request.getRequestId(),
                    response.body().asUtf8String()
            );

        } catch (Exception e) {
            return BatchResult.failure(request.getRequestId(), e.getMessage());
        }
    }
}
```

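`BatchResult` is not defined in the snippet above. A minimal result carrier with the two factory methods the service calls might look like this (the record shape is illustrative):

```java
// Hypothetical result type for BatchGenerationService; only one of output/error is set.
record BatchResult(String requestId, String output, String error, boolean ok) {
    static BatchResult success(String requestId, String output) {
        return new BatchResult(requestId, output, null, true);
    }
    static BatchResult failure(String requestId, String error) {
        return new BatchResult(requestId, null, error, false);
    }
}
```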
## Performance Optimization Techniques

### Connection Pooling

```java
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;

import java.time.Duration;

public class BedrockClientFactory {

    public static BedrockRuntimeClient createOptimizedClient() {
        // Build the HTTP client once and reuse it so connections are pooled
        SdkHttpClient httpClient = ApacheHttpClient.builder()
                .maxConnections(50)
                .socketTimeout(Duration.ofSeconds(30))
                .connectionTimeout(Duration.ofSeconds(30))
                .build();

        return BedrockRuntimeClient.builder()
                .region(Region.US_EAST_1)
                .httpClient(httpClient)
                .build();
    }
}
```

### Response Caching

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

@Service
public class CachedAIService {

    private final BedrockRuntimeClient bedrockRuntimeClient;
    private final Cache<String, String> responseCache;

    public CachedAIService(BedrockRuntimeClient bedrockRuntimeClient) {
        this.bedrockRuntimeClient = bedrockRuntimeClient;
        this.responseCache = Caffeine.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(1, TimeUnit.HOURS)
                .build();
    }

    public String generateText(String prompt, String modelId) {
        // hashCode() can collide; use the full prompt or a strong hash if correctness matters
        String cacheKey = modelId + ":" + prompt.hashCode();

        return responseCache.get(cacheKey, key -> {
            String payload = createPayload(modelId, prompt);
            InvokeModelRequest request = InvokeModelRequest.builder()
                    .modelId(modelId)
                    .body(SdkBytes.fromUtf8String(payload))
                    .build();

            InvokeModelResponse response = bedrockRuntimeClient.invokeModel(request);
            return response.body().asUtf8String();
        });
    }
}
```

## Custom Response Parsing

Create specialized parsers for different model responses:

```java
public interface ResponseParser {
    String parse(String responseJson);
}

public class AnthropicResponseParser implements ResponseParser {
    @Override
    public String parse(String responseJson) {
        try {
            JSONObject jsonResponse = new JSONObject(responseJson);
            return jsonResponse.getJSONArray("content")
                    .getJSONObject(0)
                    .getString("text");
        } catch (Exception e) {
            throw new ResponseParsingException("Failed to parse Anthropic response", e);
        }
    }
}

public class AmazonTitanResponseParser implements ResponseParser {
    @Override
    public String parse(String responseJson) {
        try {
            JSONObject jsonResponse = new JSONObject(responseJson);
            return jsonResponse.getJSONArray("results")
                    .getJSONObject(0)
                    .getString("outputText");
        } catch (Exception e) {
            throw new ResponseParsingException("Failed to parse Amazon Titan response", e);
        }
    }
}

public class LlamaResponseParser implements ResponseParser {
    @Override
    public String parse(String responseJson) {
        try {
            JSONObject jsonResponse = new JSONObject(responseJson);
            return jsonResponse.getString("generation");
        } catch (Exception e) {
            throw new ResponseParsingException("Failed to parse Llama response", e);
        }
    }
}
```

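The parsers above can be selected by model-ID prefix, mirroring the provider routing used elsewhere in this document. A minimal JDK-only routing helper; the registry shape and provider keys are illustrative:

```java
import java.util.Map;

class ParserRouter {
    // Maps a model-ID prefix to the provider key used to look up a parser.
    private static final Map<String, String> PREFIX_TO_PROVIDER = Map.of(
            "anthropic.", "anthropic",
            "amazon.", "amazon",
            "meta.", "meta");

    static String providerFor(String modelId) {
        return PREFIX_TO_PROVIDER.entrySet().stream()
                .filter(e -> modelId.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("Unknown provider: " + modelId));
    }
}
```

In a real service the map would hold `ResponseParser` instances rather than strings, so `providerFor` returns the parser directly.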
## Metrics and Monitoring

Implement comprehensive monitoring:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

@Service
public class MonitoredAIService {

    private final BedrockRuntimeClient bedrockRuntimeClient;
    private final Timer generationTimer;
    private final Counter errorCounter;

    public MonitoredAIService(BedrockRuntimeClient bedrockRuntimeClient,
                              MeterRegistry meterRegistry) {
        this.bedrockRuntimeClient = bedrockRuntimeClient;
        this.generationTimer = Timer.builder("bedrock.generation.time")
                .description("Time spent generating text with Bedrock")
                .register(meterRegistry);
        this.errorCounter = Counter.builder("bedrock.generation.errors")
                .description("Number of generation errors")
                .register(meterRegistry);
    }

    public String generateText(String prompt, String modelId) {
        return generationTimer.record(() -> {
            try {
                String payload = createPayload(modelId, prompt);
                InvokeModelRequest request = InvokeModelRequest.builder()
                        .modelId(modelId)
                        .body(SdkBytes.fromUtf8String(payload))
                        .build();

                InvokeModelResponse response = bedrockRuntimeClient.invokeModel(request);
                return response.body().asUtf8String();

            } catch (Exception e) {
                errorCounter.increment();
                throw new GenerationException("Failed to generate text", e);
            }
        });
    }
}
```

## Advanced Configuration Management

```java
@Configuration
@ConfigurationProperties(prefix = "bedrock")
public class AdvancedBedrockConfiguration {

    private String defaultRegion = "us-east-1";
    private int maxRetries = 3;
    private Duration timeout = Duration.ofSeconds(30);
    private boolean enableMetrics = true;
    private int maxCacheSize = 1000;
    private Duration cacheExpireAfter = Duration.ofHours(1);

    @Bean
    @Primary
    public BedrockRuntimeClient bedrockRuntimeClient() {
        BedrockRuntimeClient.Builder builder = BedrockRuntimeClient.builder()
                .region(Region.of(defaultRegion));

        if (enableMetrics) {
            // Publish SDK client metrics (requires the cloudwatch-metric-publisher module)
            builder.overrideConfiguration(c -> c.addMetricPublisher(
                    CloudWatchMetricPublisher.create()));
        }

        return builder.build();
    }

    // Getters and setters
}
```

## Streaming Response Handling

Advanced streaming with proper backpressure handling:

```java
@Service
public class StreamingAIService {

    private final BedrockRuntimeAsyncClient bedrockRuntimeAsyncClient;

    public StreamingAIService(BedrockRuntimeAsyncClient bedrockRuntimeAsyncClient) {
        // Response streaming is exposed on the async client
        this.bedrockRuntimeAsyncClient = bedrockRuntimeAsyncClient;
    }

    public Flux<String> streamResponse(String modelId, String prompt) {
        InvokeModelWithResponseStreamRequest request =
                InvokeModelWithResponseStreamRequest.builder()
                        .modelId(modelId)
                        .body(SdkBytes.fromUtf8String(createPayload(modelId, prompt)))
                        .build();

        return Flux.create(sink -> {
            InvokeModelWithResponseStreamResponseHandler handler =
                    InvokeModelWithResponseStreamResponseHandler.builder()
                            // Simplified: subscribe() requests unbounded demand; a production
                            // implementation would bridge sink.onRequest for true backpressure
                            .onEventStream(events -> events.subscribe(event -> {
                                if (event instanceof PayloadPart) {
                                    PayloadPart payloadPart = (PayloadPart) event;
                                    processChunk(payloadPart.bytes().asUtf8String(), sink);
                                }
                            }))
                            .onComplete(sink::complete)
                            .onError(e -> sink.error(new StreamingException("Stream failed", e)))
                            .build();

            bedrockRuntimeAsyncClient.invokeModelWithResponseStream(request, handler);
        });
    }

    private void processChunk(String chunk, FluxSink<String> sink) {
        try {
            JSONObject chunkJson = new JSONObject(chunk);
            if (chunkJson.getString("type").equals("content_block_delta")) {
                String text = chunkJson.getJSONObject("delta").getString("text");
                sink.next(text);
            }
        } catch (Exception e) {
            sink.error(new ChunkProcessingException("Failed to process chunk", e));
        }
    }
}
```

@@ -0,0 +1,18 @@
<!DOCTYPE html>
<!DOCTYPE HTML><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><title>Amazon Bedrock</title><meta xmlns="" name="subtitle" content="API Reference"><meta xmlns="" name="abstract" content="Details about operations and parameters in the Amazon Bedrock API Reference"><meta http-equiv="refresh" content="10;URL=welcome.html"><script type="text/javascript"><!--
var myDefaultPage = "welcome.html";
var myPage = document.location.search.substr(1);
var myHash = document.location.hash;

if (myPage == null || myPage.length == 0) {
  myPage = myDefaultPage;
} else {
  var docfile = myPage.match(/[^=\;\/?:\s]+\.html/);
  if (docfile == null) {
    myPage = myDefaultPage;
  } else {
    myPage = docfile + myHash;
  }
}
self.location.replace(myPage);
--></script></head><body></body></html>
@@ -0,0 +1,18 @@
<!DOCTYPE html>
<!DOCTYPE HTML><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><title>Amazon Bedrock</title><meta xmlns="" name="subtitle" content="User Guide"><meta xmlns="" name="abstract" content="User Guide for the Amazon Bedrock service."><meta http-equiv="refresh" content="10;URL=what-is-bedrock.html"><script type="text/javascript"><!--
var myDefaultPage = "what-is-bedrock.html";
var myPage = document.location.search.substr(1);
var myHash = document.location.hash;

if (myPage == null || myPage.length == 0) {
  myPage = myDefaultPage;
} else {
  var docfile = myPage.match(/[^=\;\/?:\s]+\.html/);
  if (docfile == null) {
    myPage = myDefaultPage;
  } else {
    myPage = docfile + myHash;
  }
}
self.location.replace(myPage);
--></script></head><body></body></html>
File diff suppressed because one or more lines are too long
@@ -0,0 +1,148 @@
<!DOCTYPE HTML>
<html lang="en">
<head>
<!-- Generated by javadoc (23) on Tue Oct 28 00:04:26 UTC 2025 -->
<title>software.amazon.awssdk.services.bedrock (AWS SDK for Java - 2.36.3)</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="dc.created" content="2025-10-28">
<meta name="description" content="declaration: package: software.amazon.awssdk.services.bedrock">
<meta name="generator" content="javadoc/PackageWriter">
<link rel="stylesheet" type="text/css" href="../../../../../resource-files/jquery-ui.min.css" title="Style">
<link rel="stylesheet" type="text/css" href="../../../../../resource-files/stylesheet.css" title="Style">
<link rel="stylesheet" type="text/css" href="../../../../../resource-files/aws-sdk-java-v2-javadoc.css" title="Style">
<script type="text/javascript" src="../../../../../script-files/script.js"></script>
<script type="text/javascript" src="../../../../../script-files/jquery-3.7.1.min.js"></script>
<script type="text/javascript" src="../../../../../script-files/jquery-ui.min.js"></script>
</head>
<body class="package-declaration-page">
<script type="text/javascript">const pathtoroot = "../../../../../";
loadScripts(document, 'script');</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<header role="banner">
<nav role="navigation">
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="top-nav" id="navbar-top">
<div class="nav-content">
<div class="nav-menu-button"><button id="navbar-toggle-button" aria-controls="navbar-top" aria-expanded="false" aria-label="Toggle navigation links"><span class="nav-bar-toggle-icon"> </span><span class="nav-bar-toggle-icon"> </span><span class="nav-bar-toggle-icon"> </span></button></div>
<div class="skip-nav"><a href="#skip-navbar-top" title="Skip navigation links">Skip navigation links</a></div>
<ul id="navbar-top-firstrow" class="nav-list" title="Navigation">
<li><a href="../../../../../index.html">Overview</a></li>
<li class="nav-bar-cell1-rev">Package</li>
<li><a href="../../../../../index-all.html">Index</a></li>
<li><a href="../../../../../search.html">Search</a></li>
<li><a href="../../../../../help-doc.html#package">Help</a></li>
</ul>
<div class="about-language"><h2>AWS SDK for Java API Reference - 2.36.3</h2></div>
</div>
</div>
<div class="sub-nav">
<div class="nav-content">
<ol class="sub-nav-list">
<li><a href="package-summary.html" class="current-selection">software.amazon.awssdk.services.bedrock</a></li>
</ol>
<div class="nav-list-search">
<input type="text" id="search-input" disabled placeholder="Search" aria-label="Search in documentation" autocomplete="off">
<input type="reset" id="reset-search" disabled value="Reset">
</div>
</div>
</div>
<!-- ========= END OF TOP NAVBAR ========= -->
<span class="skip-nav" id="skip-navbar-top"></span></nav>
</header>
<div class="main-grid">
<nav role="navigation" class="toc" aria-label="Table of contents">
<div class="toc-header">Contents</div>
<button class="hide-sidebar"><span>Hide sidebar </span>❮</button><button class="show-sidebar">❯<span> Show sidebar</span></button>
<ol class="toc-list">
<li><a href="#" tabindex="0">Description</a></li>
<li><a href="#related-package-summary" tabindex="0">Related Packages</a></li>
<li><a href="#class-summary" tabindex="0">Classes and Interfaces</a></li>
</ol>
</nav>
<main role="main">
<div class="header">
<h1 title="Package software.amazon.awssdk.services.bedrock" class="title">Package software.amazon.awssdk.services.bedrock</h1>
</div>
<hr>
<div class="horizontal-scroll">
<div class="package-signature">package <span class="element-name">software.amazon.awssdk.services.bedrock</span></div>
<section class="package-description" id="package-description">
<div class="block"><p>
Describes the API operations for creating, managing, fine-tuning, and evaluating Amazon Bedrock models.
</p></div>
</section>
</div>
<section class="summary">
<ul class="summary-list">
<li>
<div id="related-package-summary">
<div class="caption"><span>Related Packages</span></div>
<div class="summary-table two-column-summary">
<div class="table-header col-first">Package</div>
<div class="table-header col-last">Description</div>
<div class="col-first even-row-color"><a href="endpoints/package-summary.html">software.amazon.awssdk.services.bedrock.endpoints</a></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><a href="internal/package-summary.html">software.amazon.awssdk.services.bedrock.internal</a></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><a href="model/package-summary.html">software.amazon.awssdk.services.bedrock.model</a></div>
<div class="col-last even-row-color"> </div>
<div class="col-first odd-row-color"><a href="paginators/package-summary.html">software.amazon.awssdk.services.bedrock.paginators</a></div>
<div class="col-last odd-row-color"> </div>
<div class="col-first even-row-color"><a href="transform/package-summary.html">software.amazon.awssdk.services.bedrock.transform</a></div>
<div class="col-last even-row-color"> </div>
</div>
</div>
</li>
<li>
<div id="class-summary">
<div class="table-tabs" role="tablist" aria-orientation="horizontal"><button id="class-summary-tab0" role="tab" aria-selected="true" aria-controls="class-summary.tabpanel" tabindex="0" onkeydown="switchTab(event)" onclick="show('class-summary', 'class-summary', 2)" class="active-table-tab">All Classes and Interfaces</button><button id="class-summary-tab1" role="tab" aria-selected="false" aria-controls="class-summary.tabpanel" tabindex="-1" onkeydown="switchTab(event)" onclick="show('class-summary', 'class-summary-tab1', 2)" class="table-tab">Interfaces</button><button id="class-summary-tab2" role="tab" aria-selected="false" aria-controls="class-summary.tabpanel" tabindex="-1" onkeydown="switchTab(event)" onclick="show('class-summary', 'class-summary-tab2', 2)" class="table-tab">Classes</button></div>
<div id="class-summary.tabpanel" role="tabpanel" aria-labelledby="class-summary-tab0">
<div class="summary-table two-column-summary">
<div class="table-header col-first">Class</div>
<div class="table-header col-last">Description</div>
<div class="col-first even-row-color class-summary class-summary-tab1"><a href="BedrockAsyncClient.html" title="interface in software.amazon.awssdk.services.bedrock">BedrockAsyncClient</a></div>
<div class="col-last even-row-color class-summary class-summary-tab1">
<div class="block">Service client for accessing Amazon Bedrock asynchronously.</div>
</div>
<div class="col-first odd-row-color class-summary class-summary-tab1"><a href="BedrockAsyncClientBuilder.html" title="interface in software.amazon.awssdk.services.bedrock">BedrockAsyncClientBuilder</a></div>
<div class="col-last odd-row-color class-summary class-summary-tab1">
<div class="block">A builder for creating an instance of <a href="BedrockAsyncClient.html" title="interface in software.amazon.awssdk.services.bedrock"><code>BedrockAsyncClient</code></a>.</div>
</div>
<div class="col-first even-row-color class-summary class-summary-tab1"><a href="BedrockBaseClientBuilder.html" title="interface in software.amazon.awssdk.services.bedrock">BedrockBaseClientBuilder</a><B extends <a href="BedrockBaseClientBuilder.html" title="interface in software.amazon.awssdk.services.bedrock">BedrockBaseClientBuilder</a><B,<wbr>C>,<wbr>C></div>
<div class="col-last even-row-color class-summary class-summary-tab1">
<div class="block">This includes configuration specific to Amazon Bedrock that is supported by both <a href="BedrockClientBuilder.html" title="interface in software.amazon.awssdk.services.bedrock"><code>BedrockClientBuilder</code></a> and
<a href="BedrockAsyncClientBuilder.html" title="interface in software.amazon.awssdk.services.bedrock"><code>BedrockAsyncClientBuilder</code></a>.</div>
</div>
<div class="col-first odd-row-color class-summary class-summary-tab1"><a href="BedrockClient.html" title="interface in software.amazon.awssdk.services.bedrock">BedrockClient</a></div>
<div class="col-last odd-row-color class-summary class-summary-tab1">
<div class="block">Service client for accessing Amazon Bedrock.</div>
</div>
<div class="col-first even-row-color class-summary class-summary-tab1"><a href="BedrockClientBuilder.html" title="interface in software.amazon.awssdk.services.bedrock">BedrockClientBuilder</a></div>
<div class="col-last even-row-color class-summary class-summary-tab1">
<div class="block">A builder for creating an instance of <a href="BedrockClient.html" title="interface in software.amazon.awssdk.services.bedrock"><code>BedrockClient</code></a>.</div>
</div>
<div class="col-first odd-row-color class-summary class-summary-tab2"><a href="BedrockServiceClientConfiguration.html" title="class in software.amazon.awssdk.services.bedrock">BedrockServiceClientConfiguration</a></div>
<div class="col-last odd-row-color class-summary class-summary-tab2">
<div class="block">Class to expose the service client settings to the user.</div>
</div>
<div class="col-first even-row-color class-summary class-summary-tab1"><a href="BedrockServiceClientConfiguration.Builder.html" title="interface in software.amazon.awssdk.services.bedrock">BedrockServiceClientConfiguration.Builder</a></div>
<div class="col-last even-row-color class-summary class-summary-tab1">
<div class="block">A builder for creating a <a href="BedrockServiceClientConfiguration.html" title="class in software.amazon.awssdk.services.bedrock"><code>BedrockServiceClientConfiguration</code></a></div>
</div>
</div>
</div>
</div>
</li>
</ul>
</section>
<footer role="contentinfo">
<hr>
<p class="legal-copy"><small><div style="margin:1.2em;"><h3><a id="fdbk" target="_blank">Provide feedback</a></h3></div> <span id="awsdocs-legal-zone-copyright"></span> <script type="text/javascript">document.addEventListener("DOMContentLoaded",()=>{ var a=document.createElement("meta"),b=document.createElement("meta"),c=document.createElement("script"), h=document.getElementsByTagName("head")[0],l=location.href,f=document.getElementById("fdbk"); a.name="guide-name",a.content="API Reference";b.name="service-name",b.content="AWS SDK for Java"; c.setAttribute("type","text/javascript"),c.setAttribute("src", "https://docs.aws.amazon.com/assets/js/awsdocs-boot.js");h.appendChild(a);h.appendChild(b); h.appendChild(c);f.setAttribute("href", "https://docs-feedback.aws.amazon.com/feedback.jsp?hidden_service_name="+ encodeURI("AWS SDK for Java")+"&topic_url="+encodeURI(l))}); </script></small></p>
</footer>
</main>
</div>
</body>
</html>
@@ -0,0 +1,340 @@
# Model Reference

## Supported Foundation Models

### Amazon Models

#### Amazon Titan Text

**Model ID:** `amazon.titan-text-express-v1`
- **Description:** High-quality text generation model
- **Context Window:** Up to 8K tokens
- **Languages:** English, Spanish, French, German, Italian, Portuguese

**Payload Format:**
```json
{
  "inputText": "Your prompt here",
  "textGenerationConfig": {
    "maxTokenCount": 512,
    "temperature": 0.7,
    "topP": 0.9
  }
}
```

**Response Format:**
```json
{
  "results": [{
    "outputText": "Generated text"
  }]
}
```

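For reference, the Titan payload above can be assembled in plain Java. A sketch using a text block; escaping of the prompt is naive here, so real code should build the JSON with a library:

```java
// Builds the Titan Text payload shown above; prompt escaping is naive (illustrative only).
class TitanPayload {
    static String of(String prompt, int maxTokens, double temperature) {
        return """
                {"inputText": "%s",
                 "textGenerationConfig": {"maxTokenCount": %d, "temperature": %s, "topP": 0.9}}
                """.formatted(prompt, maxTokens, temperature);
    }
}
```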
#### Amazon Titan Text Lite

**Model ID:** `amazon.titan-text-lite-v1`
- **Description:** Cost-effective text generation model
- **Context Window:** Up to 4K tokens
- **Use Case:** Simple text generation tasks

#### Amazon Titan Embeddings

**Model ID:** `amazon.titan-embed-text-v1`
- **Description:** High-quality text embeddings
- **Context Window:** 8K tokens
- **Output:** 1536-dimensional vector

**Payload Format:**
```json
{
  "inputText": "Your text here"
}
```

**Response Format:**
```json
{
  "embedding": [0.1, -0.2, 0.3, ...]
}
```

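Extracting the `embedding` array from the response above is a one-liner with a JSON library; for illustration only, here is a toy extractor that works on the flat shape shown (do not use this in production, where a real JSON parser belongs):

```java
// Toy extraction of the "embedding" array from the response shown above.
class EmbeddingParser {
    static double[] parse(String responseJson) {
        int start = responseJson.indexOf('[') + 1;
        int end = responseJson.indexOf(']');
        String[] parts = responseJson.substring(start, end).split(",");
        double[] vector = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            vector[i] = Double.parseDouble(parts[i].trim());
        }
        return vector;
    }
}
```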
#### Amazon Titan Image Generator

**Model ID:** `amazon.titan-image-generator-v1`
- **Description:** High-quality image generation
- **Image Size:** 512x512, 1024x1024
- **Use Case:** Text-to-image generation

**Payload Format:**
```json
{
  "taskType": "TEXT_IMAGE",
  "textToImageParams": {
    "text": "Your description"
  },
  "imageGenerationConfig": {
    "numberOfImages": 1,
    "quality": "standard",
    "cfgScale": 8.0,
    "height": 512,
    "width": 512,
    "seed": 12345
  }
}
```

### Anthropic Models

#### Claude 3.5 Sonnet

**Model ID:** `anthropic.claude-3-5-sonnet-20241022-v2:0`
- **Description:** High-performance model for complex reasoning, analysis, and creative tasks
- **Context Window:** 200K tokens
- **Languages:** Multiple languages supported
- **Use Case:** Code generation, complex analysis, creative writing, research
- **Features:** Tool use, function calling, JSON mode

**Payload Format:**
```json
{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 1000,
  "messages": [{
    "role": "user",
    "content": "Your message"
  }]
}
```

**Response Format:**
```json
{
  "content": [{
    "text": "Response content"
  }],
  "usage": {
    "input_tokens": 10,
    "output_tokens": 20
  }
}
```

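The Anthropic Messages payload above can likewise be built in plain Java for a single user turn. A sketch using a text block; message escaping is naive, so prefer a JSON library in real code:

```java
// Builds the Anthropic Messages payload shown above for one user turn;
// escaping is naive (illustrative only).
class ClaudePayload {
    static String of(String userMessage, int maxTokens) {
        return """
                {"anthropic_version": "bedrock-2023-05-31",
                 "max_tokens": %d,
                 "messages": [{"role": "user", "content": "%s"}]}
                """.formatted(maxTokens, userMessage);
    }
}
```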
#### Claude 3.5 Haiku

**Model ID:** `anthropic.claude-3-5-haiku-20241022-v1:0`
- **Description:** Fast and affordable model for real-time applications
- **Context Window:** 200K tokens
- **Use Case:** Real-time applications, chatbots, quick responses
- **Features:** Tool use, function calling, JSON mode

#### Claude 3 Opus

**Model ID:** `anthropic.claude-3-opus-20240229-v1:0`
- **Description:** Most capable model
- **Context Window:** 200K tokens
- **Use Case:** Complex reasoning, analysis

#### Claude 3 Sonnet (Legacy)

**Model ID:** `anthropic.claude-3-sonnet-20240229-v1:0`
- **Description:** Previous generation model
- **Context Window:** 200K tokens
- **Use Case:** General purpose applications

### Meta Models

#### Llama 3.1 70B

**Model ID:** `meta.llama3-1-70b-instruct-v1:0`
- **Description:** Latest generation large open-source model
- **Context Window:** 128K tokens
- **Use Case:** General purpose instruction following, complex reasoning
- **Features:** Improved instruction following, larger context window

#### Llama 3.1 8B

**Model ID:** `meta.llama3-1-8b-instruct-v1:0`
- **Description:** Latest generation small fast model
- **Context Window:** 128K tokens
- **Use Case:** Fast inference, lightweight applications

#### Llama 3 70B

**Model ID:** `meta.llama3-70b-instruct-v1:0`
- **Description:** Previous generation large open-source model
- **Context Window:** 8K tokens
- **Use Case:** General purpose instruction following

**Payload Format:**
```json
{
  "prompt": "[INST] Your prompt here [/INST]",
  "max_gen_len": 512,
  "temperature": 0.7,
  "top_p": 0.9
}
```

**Response Format:**
```json
{
  "generation": "Generated text"
}
```

#### Llama 3 8B

**Model ID:** `meta.llama3-8b-instruct-v1:0`
- **Description:** Smaller, faster version
- **Context Window:** 8K tokens
- **Use Case:** Fast inference, lightweight applications

### Stability AI Models

#### Stable Diffusion XL

**Model ID:** `stability.stable-diffusion-xl-v1`
- **Description:** High-quality image generation
|
||||
- **Image Size:** Up to 1024x1024
|
||||
- **Use Case:** Text-to-image generation, art creation
|
||||
|
||||
**Payload Format:**
|
||||
```json
|
||||
{
|
||||
"text_prompts": [{
|
||||
"text": "Your description"
|
||||
}],
|
||||
"style_preset": "photographic",
|
||||
"seed": 12345,
|
||||
"cfg_scale": 10,
|
||||
"steps": 50
|
||||
}
|
||||
```
|
||||
|
||||
**Response Format:**
|
||||
```json
|
||||
{
|
||||
"artifacts": [{
|
||||
"base64": "base64-encoded-image-data",
|
||||
"finishReason": "SUCCESS"
|
||||
}]
|
||||
}
|
||||
```

### Other Models

#### Cohere Command

**Model ID:** `cohere.command-text-v14`
- **Description:** Text generation model
- **Context Window:** 4K tokens
- **Use Case:** Content generation, summarization

#### Mistral Models

**Model ID:** `mistral.mistral-7b-instruct-v0:2`
- **Description:** High-performing open-source model
- **Context Window:** 32K tokens
- **Use Case:** Instruction following, code generation

**Model ID:** `mistral.mixtral-8x7b-instruct-v0:1`
- **Description:** Mixture of experts model
- **Context Window:** 32K tokens
- **Use Case:** Complex reasoning tasks

## Model Selection Guide

### Use Case Recommendations

| Use Case | Recommended Models | Notes |
|----------|-------------------|-------|
| **General Chat/Chatbots** | Claude 3.5 Haiku, Llama 3 8B | Fast response times |
| **Content Creation** | Claude 3.5 Sonnet, Cohere | Creative, coherent outputs |
| **Code Generation** | Claude 3.5 Sonnet, Llama 3.1 70B | Excellent understanding |
| **Analysis & Reasoning** | Claude 3 Opus, Claude 3.5 Sonnet | Complex reasoning |
| **Real-time Applications** | Claude 3.5 Haiku, Titan Lite | Fast inference |
| **Cost-sensitive Apps** | Titan Lite, Claude 3.5 Haiku | Lower cost per token |
| **High Quality** | Claude 3 Opus, Claude 3.5 Sonnet | Premium quality |

### Performance Characteristics

| Model | Speed | Cost | Quality | Context Window |
|-------|-------|------|---------|----------------|
| Claude 3 Opus | Slow | High | Excellent | 200K |
| Claude 3.5 Sonnet | Medium | Medium | Excellent | 200K |
| Claude 3.5 Haiku | Fast | Low | Good | 200K |
| Claude 3 Sonnet (Legacy) | Medium | Medium | Good | 200K |
| Llama 3.1 70B | Medium | Medium | Good | 128K |
| Llama 3.1 8B | Fast | Low | Fair | 128K |
| Llama 3 70B | Medium | Medium | Good | 8K |
| Llama 3 8B | Fast | Low | Fair | 8K |
| Titan Express | Fast | Medium | Good | 8K |
| Titan Lite | Fast | Low | Fair | 4K |

## Model Comparison Matrix

| Feature | Claude 3 | Llama 3 | Titan | Stability |
|---------|----------|---------|-------|-----------|
| **Streaming** | ✅ | ✅ | ✅ | ❌ |
| **Tool Use** | ✅ | ❌ | ❌ | ❌ |
| **Image Generation** | ❌ | ❌ | ✅ | ✅ |
| **Embeddings** | ❌ | ❌ | ✅ | ❌ |
| **Multiple Languages** | ✅ | ✅ | ✅ | ✅ |
| **Context Window** | 200K | 8K | 8K | N/A |
| **Open Source** | ❌ | ✅ | ❌ | ✅ |

## Model Configuration Templates

### Text Generation Template
```java
private static JSONObject createTextGenerationPayload(String modelId, String prompt) {
    JSONObject payload = new JSONObject();

    if (modelId.startsWith("anthropic.claude")) {
        payload.put("anthropic_version", "bedrock-2023-05-31");
        payload.put("max_tokens", 1000);
        payload.put("messages", new JSONArray().put(new JSONObject()
                .put("role", "user")
                .put("content", prompt)));
    } else if (modelId.startsWith("meta.llama")) {
        payload.put("prompt", "[INST] " + prompt + " [/INST]");
        payload.put("max_gen_len", 512);
    } else if (modelId.startsWith("amazon.titan")) {
        payload.put("inputText", prompt);
        payload.put("textGenerationConfig", new JSONObject()
                .put("maxTokenCount", 512)
                .put("temperature", 0.7));
    }

    return payload;
}
```

### Image Generation Template
```java
private static JSONObject createImageGenerationPayload(String modelId, String prompt) {
    JSONObject payload = new JSONObject();

    if (modelId.equals("amazon.titan-image-generator-v1")) {
        payload.put("taskType", "TEXT_IMAGE");
        payload.put("textToImageParams", new JSONObject().put("text", prompt));
        payload.put("imageGenerationConfig", new JSONObject()
                .put("numberOfImages", 1)
                .put("quality", "standard")
                .put("height", 512)
                .put("width", 512));
    } else if (modelId.equals("stability.stable-diffusion-xl-v1")) {
        payload.put("text_prompts", new JSONArray().put(new JSONObject().put("text", prompt)));
        payload.put("style_preset", "photographic");
        payload.put("steps", 50);
        payload.put("cfg_scale", 10);
    }

    return payload;
}
```
@@ -0,0 +1,121 @@
# Model ID Lookup Guide

This document provides quick lookup for the most commonly used model IDs in Amazon Bedrock.

## Text Generation Models

### Claude (Anthropic)
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Claude 4.5 Sonnet | `anthropic.claude-sonnet-4-5-20250929-v1:0` | Latest high-performance model | Complex reasoning, coding, creative tasks |
| Claude 4.5 Haiku | `anthropic.claude-haiku-4-5-20251001-v1:0` | Latest fast model | Real-time applications, chatbots |
| Claude 3.7 Sonnet | `anthropic.claude-3-7-sonnet-20250219-v1:0` | Most advanced reasoning | High-stakes decisions, complex analysis |
| Claude Opus 4.1 | `anthropic.claude-opus-4-1-20250805-v1:0` | Most powerful creative | Advanced creative tasks |
| Claude 3.5 Sonnet v2 | `anthropic.claude-3-5-sonnet-20241022-v2:0` | High-performance model | General use, coding |
| Claude 3.5 Haiku | `anthropic.claude-3-5-haiku-20241022-v1:0` | Fast and affordable | Real-time applications |

### Llama (Meta)
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Llama 3.3 70B | `meta.llama3-3-70b-instruct-v1:0` | Latest generation | Complex reasoning, general use |
| Llama 3.2 90B | `meta.llama3-2-90b-instruct-v1:0` | Large context | Long context tasks |
| Llama 3.2 11B | `meta.llama3-2-11b-instruct-v1:0` | Medium model | Balanced performance |
| Llama 3.2 3B | `meta.llama3-2-3b-instruct-v1:0` | Small model | Fast inference |
| Llama 3.2 1B | `meta.llama3-2-1b-instruct-v1:0` | Ultra-fast | Quick responses |
| Llama 3.1 70B | `meta.llama3-1-70b-instruct-v1:0` | Previous gen | General use |
| Llama 3.1 8B | `meta.llama3-1-8b-instruct-v1:0` | Fast small model | Lightweight applications |

### Mistral AI
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Mistral Large 2407 | `mistral.mistral-large-2407-v1:0` | Latest large model | Complex reasoning |
| Mistral Large 2402 | `mistral.mistral-large-2402-v1:0` | Previous large model | General use |
| Mistral Pixtral 2502 | `mistral.pixtral-large-2502-v1:0` | Multimodal | Text + image understanding |
| Mistral 7B | `mistral.mistral-7b-instruct-v0:2` | Small fast model | Quick responses |

### Amazon
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Titan Text Express | `amazon.titan-text-express-v1` | Fast text generation | Quick responses |
| Titan Text Lite | `amazon.titan-text-lite-v1` | Cost-effective | Budget-sensitive apps |
| Titan Embeddings | `amazon.titan-embed-text-v1` | Text embeddings | Semantic search |

### Cohere
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Command R+ | `cohere.command-r-plus-v1:0` | High performance | Complex tasks |
| Command R | `cohere.command-r-v1:0` | General purpose | Standard use cases |

## Image Generation Models

### Stability AI
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Stable Diffusion 3.5 Large | `stability.sd3-5-large-v1:0` | Latest image gen | High-quality images |
| Stable Diffusion XL | `stability.stable-diffusion-xl-v1` | Previous generation | General image generation |

### Amazon Nova
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Nova Canvas | `amazon.nova-canvas-v1:0` | Image generation | Creative images |
| Nova Reel | `amazon.nova-reel-v1:1` | Video generation | Video content |

## Embedding Models

### Amazon
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Titan Embeddings | `amazon.titan-embed-text-v1` | Text embeddings | Semantic search |
| Titan Embeddings V2 | `amazon.titan-embed-text-v2:0` | Improved embeddings | Better accuracy |

### Cohere
| Model | Model ID | Description | Use Case |
|-------|----------|-------------|----------|
| Embed English | `cohere.embed-english-v3` | English embeddings | English content |
| Embed Multilingual | `cohere.embed-multilingual-v3` | Multi-language | International use |
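
Embedding models return numeric vectors, and semantic search ranks documents by vector similarity. A minimal cosine-similarity sketch; the short vectors below are stand-ins for real Titan or Cohere embeddings, which have hundreds of dimensions:

```java
public class CosineSimilarity {

    // Cosine similarity between two embedding vectors of equal length:
    // dot(a, b) / (|a| * |b|), in the range [-1, 1].
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] query = {0.1, 0.9, 0.2}; // stand-in for an embedding
        double[] doc   = {0.1, 0.8, 0.3};
        System.out.println(cosine(query, doc));
    }
}
```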

## Selection Guide

### By Speed
1. **Fastest**: Llama 3.2 1B, Claude 4.5 Haiku, Titan Lite
2. **Fast**: Mistral 7B, Llama 3.2 3B
3. **Medium**: Claude 3.5 Sonnet, Llama 3.2 11B
4. **Slow**: Claude 4.5 Sonnet, Llama 3.3 70B

### By Quality
1. **Highest**: Claude 4.5 Sonnet, Claude 3.7 Sonnet, Claude Opus 4.1
2. **High**: Claude 3.5 Sonnet, Llama 3.3 70B
3. **Medium**: Mistral Large, Llama 3.2 11B
4. **Basic**: Mistral 7B, Llama 3.2 3B

### By Cost
1. **Most Affordable**: Claude 4.5 Haiku, Llama 3.2 1B
2. **Affordable**: Mistral 7B, Titan Lite
3. **Medium**: Claude 3.5 Haiku, Llama 3.2 3B
4. **Expensive**: Claude 4.5 Sonnet, Llama 3.3 70B

## Common Patterns

### Default Model Selection
```java
// For most applications
String DEFAULT_MODEL = "anthropic.claude-sonnet-4-5-20250929-v1:0";

// For real-time applications
String FAST_MODEL = "anthropic.claude-haiku-4-5-20251001-v1:0";

// For budget-sensitive applications
String CHEAP_MODEL = "amazon.titan-text-lite-v1";

// For complex reasoning
String POWERFUL_MODEL = "anthropic.claude-3-7-sonnet-20250219-v1:0";
```

### Model Fallback Chain
```java
private static final String[] MODEL_CHAIN = {
    "anthropic.claude-sonnet-4-5-20250929-v1:0",  // Primary
    "anthropic.claude-haiku-4-5-20251001-v1:0",   // Fast fallback
    "amazon.titan-text-lite-v1"                   // Cheap fallback
};
```
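
A fallback chain only lists the candidates; a sketch of the loop that walks it is below. The `invoke` function is a stand-in for your actual Bedrock call, and the error handling assumes failures surface as runtime exceptions:

```java
import java.util.function.Function;

public class ModelFallback {

    static final String[] MODEL_CHAIN = {
        "anthropic.claude-sonnet-4-5-20250929-v1:0", // Primary
        "anthropic.claude-haiku-4-5-20251001-v1:0",  // Fast fallback
        "amazon.titan-text-lite-v1"                  // Cheap fallback
    };

    // Try each model in order; rethrow with the last failure if all fail.
    static String invokeWithFallback(Function<String, String> invoke) {
        RuntimeException last = null;
        for (String modelId : MODEL_CHAIN) {
            try {
                return invoke.apply(modelId);
            } catch (RuntimeException e) {
                last = e; // e.g. throttling or model-access errors
            }
        }
        throw new IllegalStateException("All models in the chain failed", last);
    }

    static String thrower() { throw new RuntimeException("throttled"); }

    public static void main(String[] args) {
        // Simulated invoke: the primary model "fails", the fallback succeeds.
        String result = invokeWithFallback(modelId ->
                modelId.startsWith("anthropic.claude-sonnet")
                        ? thrower() : "ok from " + modelId);
        System.out.println(result);
    }
}
```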
@@ -0,0 +1,365 @@
# Testing Strategies

## Unit Testing

### Mocking Bedrock Clients

```java
@ExtendWith(MockitoExtension.class)
class BedrockServiceTest {

    @Mock
    private BedrockRuntimeClient bedrockRuntimeClient;

    @InjectMocks
    private BedrockAIService aiService;

    @Test
    void shouldGenerateTextWithClaude() {
        // Arrange
        String modelId = "anthropic.claude-3-sonnet-20240229-v1:0";
        String prompt = "Hello, world!";
        String expectedResponse = "Hello! How can I help you today?";

        InvokeModelResponse mockResponse = InvokeModelResponse.builder()
                .body(SdkBytes.fromUtf8String(
                        "{\"content\":[{\"text\":\"" + expectedResponse + "\"}]}"))
                .build();

        when(bedrockRuntimeClient.invokeModel(any(InvokeModelRequest.class)))
                .thenReturn(mockResponse);

        // Act
        String result = aiService.generateText(prompt, modelId);

        // Assert
        assertThat(result).isEqualTo(expectedResponse);
        verify(bedrockRuntimeClient).invokeModel(argThat((InvokeModelRequest request) ->
                request.modelId().equals(modelId)));
    }

    @Test
    void shouldHandleThrottling() {
        // Arrange
        when(bedrockRuntimeClient.invokeModel(any(InvokeModelRequest.class)))
                .thenThrow(ThrottlingException.builder()
                        .message("Rate limit exceeded")
                        .build());

        // Act & Assert
        assertThatThrownBy(() -> aiService.generateText("test",
                "anthropic.claude-3-sonnet-20240229-v1:0"))
                .isInstanceOf(RuntimeException.class)
                .hasMessageContaining("Rate limit exceeded");
    }
}
```

### Testing Error Conditions

```java
@Test
void shouldHandleInvalidModelId() {
    String invalidModelId = "invalid.model.id";
    String prompt = "test";

    when(bedrockRuntimeClient.invokeModel(any(InvokeModelRequest.class)))
            .thenThrow(ValidationException.builder()
                    .message("Invalid model identifier")
                    .build());

    assertThatThrownBy(() -> aiService.generateText(prompt, invalidModelId))
            .isInstanceOf(IllegalArgumentException.class)
            .hasMessageContaining("Invalid model identifier");
}
```

### Testing Multiple Models

```java
@ParameterizedTest
@EnumSource(ModelProvider.class)
void shouldSupportAllModels(ModelProvider modelProvider) {
    String prompt = "Hello";
    String modelId = modelProvider.getModelId();
    String expectedResponse = "Response";

    InvokeModelResponse mockResponse = InvokeModelResponse.builder()
            .body(SdkBytes.fromUtf8String(createMockResponse(modelProvider, expectedResponse)))
            .build();

    when(bedrockRuntimeClient.invokeModel(any(InvokeModelRequest.class)))
            .thenReturn(mockResponse);

    String result = aiService.generateText(prompt, modelId);

    assertThat(result).isEqualTo(expectedResponse);
}

private enum ModelProvider {
    CLAUDE("anthropic.claude-3-sonnet-20240229-v1:0"),
    LLAMA("meta.llama3-70b-instruct-v1:0"),
    TITAN("amazon.titan-text-express-v1");

    private final String modelId;

    ModelProvider(String modelId) {
        this.modelId = modelId;
    }

    public String getModelId() {
        return modelId;
    }
}
```

## Integration Testing

### Testcontainers Integration

```java
@Testcontainers
@SpringBootTest(classes = BedrockConfiguration.class)
@ActiveProfiles("test")
class BedrockIntegrationTest {

    @Container
    static LocalStackContainer localStack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:latest"))
            .withServices(LocalStackContainer.EnabledService.named("bedrock-runtime"))
            .withEnv("DEFAULT_REGION", "us-east-1");

    @Autowired
    private BedrockRuntimeClient bedrockRuntimeClient;

    @Autowired
    private BedrockClient bedrockClient; // control-plane client for listing models

    @Test
    void shouldConnectToLocalStack() {
        assertThat(bedrockRuntimeClient).isNotNull();
    }

    @Test
    void shouldListFoundationModels() {
        // listFoundationModels lives on the control-plane BedrockClient,
        // not on BedrockRuntimeClient
        ListFoundationModelsResponse response =
                bedrockClient.listFoundationModels();

        assertThat(response.modelSummaries()).isNotEmpty();
    }
}
```

### LocalStack Configuration

```java
@Configuration
public class LocalStackConfig {

    @Value("${localstack.endpoint:http://localhost:4566}")
    private String localStackEndpoint;

    @Bean
    @ConditionalOnProperty(name = "localstack.enabled", havingValue = "true")
    public AwsCredentialsProvider localStackCredentialsProvider() {
        return StaticCredentialsProvider.create(
                AwsBasicCredentials.create("test", "test"));
    }

    @Bean
    @ConditionalOnProperty(name = "localstack.enabled", havingValue = "true")
    public BedrockRuntimeClient localStackBedrockRuntimeClient(
            AwsCredentialsProvider credentialsProvider) {

        return BedrockRuntimeClient.builder()
                .credentialsProvider(credentialsProvider)
                .endpointOverride(URI.create(localStackEndpoint))
                .region(Region.US_EAST_1)
                .build();
    }
}
```

### Performance Testing

```java
@Test
void shouldPerformWithinTimeLimit() {
    String prompt = "Performance test prompt";
    int iterationCount = 100;

    long startTime = System.currentTimeMillis();

    for (int i = 0; i < iterationCount; i++) {
        bedrockRuntimeClient.invokeModel(request -> request
                .modelId("anthropic.claude-3-sonnet-20240229-v1:0")
                .body(SdkBytes.fromUtf8String(createPayload(prompt))));
    }

    long duration = System.currentTimeMillis() - startTime;
    double avgTimePerRequest = (double) duration / iterationCount;

    assertThat(avgTimePerRequest).isLessThan(5000); // Less than 5 seconds per request
    System.out.println("Average response time: " + avgTimePerRequest + "ms");
}
```
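
A mean over 100 iterations can hide outliers. The following JDK-only sketch computes the mean and p95 (nearest-rank method) from recorded per-request latencies, with no Bedrock calls involved:

```java
import java.util.Arrays;

public class LatencyStats {

    // Mean of recorded latencies in milliseconds.
    static double mean(long[] latenciesMs) {
        return Arrays.stream(latenciesMs).average().orElse(0);
    }

    // p95: the value below which 95% of samples fall (nearest-rank method).
    static long p95(long[] latenciesMs) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(0.95 * sorted.length) - 1;
        return sorted[Math.max(rank, 0)];
    }

    public static void main(String[] args) {
        long[] samples = {120, 130, 110, 900, 125}; // one slow outlier
        System.out.println("mean=" + mean(samples) + "ms, p95=" + p95(samples) + "ms");
    }
}
```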

## Testing Streaming Responses

### Streaming Handler Testing

```java
@Test
void shouldStreamResponse() throws Exception {
    String prompt = "Stream this response";
    StringBuilder streamedContent = new StringBuilder();

    InvokeModelWithResponseStreamRequest streamRequest =
            InvokeModelWithResponseStreamRequest.builder()
                    .modelId("anthropic.claude-3-sonnet-20240229-v1:0")
                    .body(SdkBytes.fromUtf8String(createPayload(prompt)))
                    .build();

    // Collect chunks as they arrive
    InvokeModelWithResponseStreamResponseHandler.Visitor visitor =
            InvokeModelWithResponseStreamResponseHandler.Visitor.builder()
                    .onChunk(chunk -> streamedContent.append(chunk.bytes().asUtf8String()))
                    .build();

    InvokeModelWithResponseStreamResponseHandler handler =
            InvokeModelWithResponseStreamResponseHandler.builder()
                    .onEventStream(stream -> stream.subscribe(event -> event.accept(visitor)))
                    .build();

    // Streaming is only available on the async client; block until it completes
    bedrockRuntimeAsyncClient
            .invokeModelWithResponseStream(streamRequest, handler)
            .get(10, TimeUnit.SECONDS);

    assertThat(streamedContent.toString()).isNotEmpty();
}
```

## Testing Configuration

### Testing Different Regions

```java
@ParameterizedTest
@EnumSource(value = Region.class,
            names = {"US_EAST_1", "US_WEST_2", "EU_WEST_1"})
void shouldWorkInAllRegions(Region region) {
    BedrockRuntimeClient client = BedrockRuntimeClient.builder()
            .region(region)
            .build();

    assertThat(client).isNotNull();
}
```

### Testing Authentication

```java
@Test
void shouldUseIamRoleForAuthentication() {
    // listFoundationModels requires the control-plane BedrockClient
    BedrockClient client = BedrockClient.builder()
            .region(Region.US_EAST_1)
            .build();

    // Test that client can make basic calls
    ListFoundationModelsResponse response = client.listFoundationModels();

    assertThat(response).isNotNull();
}
```

## Test Data Management

### Test Response Fixtures

```java
public class BedrockTestFixtures {

    public static String createClaudeResponse() {
        return "{\"content\":[{\"text\":\"Hello! How can I help you today?\"}]}";
    }

    public static String createLlamaResponse() {
        return "{\"generation\":\"Hello! How can I assist you?\"}";
    }

    public static String createTitanResponse() {
        return "{\"results\":[{\"outputText\":\"Hello! How can I help?\"}]}";
    }

    public static String createPayload(String prompt) {
        return new JSONObject()
                .put("anthropic_version", "bedrock-2023-05-31")
                .put("max_tokens", 1000)
                .put("messages", new JSONArray().put(new JSONObject()
                        .put("role", "user")
                        .put("content", prompt)))
                .toString();
    }
}
```

### Integration Test Suite

```java
@Suite
@SelectClasses({
    BedrockAIServiceTest.class,
    BedrockConfigurationTest.class,
    BedrockStreamingTest.class,
    BedrockErrorHandlingTest.class
})
public class BedrockTestSuite {
    // Integration test suite for all Bedrock functionality
}
```

## Testing Guidelines

### Unit Testing Best Practices

1. **Mock External Dependencies:** Always mock AWS SDK clients in unit tests
2. **Test Error Scenarios:** Include tests for throttling, validation errors, and network issues
3. **Parameterized Tests:** Test multiple models and configurations efficiently
4. **Performance Assertions:** Include basic performance benchmarks
5. **Test Data Fixtures:** Reuse test response data across tests

### Integration Testing Best Practices

1. **Use LocalStack:** Test against LocalStack for local development
2. **Test Multiple Regions:** Verify functionality across different AWS regions
3. **Test Edge Cases:** Include timeout, retry, and concurrent request scenarios
4. **Monitor Performance:** Track response times and error rates
5. **Clean Up Resources:** Ensure proper cleanup after integration tests

### Testing Configuration

```properties
# application-test.properties
localstack.enabled=true
aws.region=us-east-1
bedrock.timeout=5000
bedrock.retry.max-attempts=3
```
660
skills/aws-java/aws-sdk-java-v2-core/SKILL.md
Normal file
@@ -0,0 +1,660 @@
---
name: aws-sdk-java-v2-core
description: Core patterns and best practices for AWS SDK for Java 2.x. Use when configuring AWS service clients, setting up authentication, managing credentials, configuring timeouts, HTTP clients, or following AWS SDK best practices.
category: aws
tags: [aws, java, sdk, core, authentication, configuration]
version: 1.1.0
allowed-tools: Read, Write, Bash
---

# AWS SDK for Java 2.x - Core Patterns

## Overview

Configure AWS service clients, authentication, timeouts, HTTP clients, and implement best practices for AWS SDK for Java 2.x applications. This skill provides essential patterns for building robust, performant, and secure integrations with AWS services.

## When to Use

Use this skill when:
- Setting up AWS SDK for Java 2.x service clients with proper configuration
- Configuring authentication and credential management strategies
- Implementing client lifecycle management and resource cleanup
- Optimizing performance with HTTP client configuration and connection pooling
- Setting up proper timeout configurations for API calls
- Implementing error handling and retry policies
- Enabling monitoring and metrics collection
- Integrating AWS SDK with Spring Boot applications
- Testing AWS integrations with LocalStack and Testcontainers

## Quick Start

### Basic Service Client Setup

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Basic client with region
S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .build();

// Always close clients when done
try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
    // Use client
} // Auto-closed
```

### Basic Authentication

```java
// Uses default credential provider chain
S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .build(); // Automatically detects credentials
```

## Client Configuration

### Service Client Builder Pattern

```java
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.http.apache.ProxyConfiguration;
import software.amazon.awssdk.metrics.publishers.cloudwatch.CloudWatchMetricPublisher;
import software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider;
import java.time.Duration;
import java.net.URI;

// Advanced client configuration
S3Client s3Client = S3Client.builder()
        .region(Region.EU_SOUTH_2)
        .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
        .overrideConfiguration(b -> b
                .apiCallTimeout(Duration.ofSeconds(30))
                .apiCallAttemptTimeout(Duration.ofSeconds(10))
                .addMetricPublisher(CloudWatchMetricPublisher.create()))
        .httpClientBuilder(ApacheHttpClient.builder()
                .maxConnections(100)
                .connectionTimeout(Duration.ofSeconds(5))
                .proxyConfiguration(ProxyConfiguration.builder()
                        .endpoint(URI.create("http://proxy:8080"))
                        .build()))
        .build();
```

### Separate Configuration Objects

```java
ClientOverrideConfiguration clientConfig = ClientOverrideConfiguration.builder()
        .apiCallTimeout(Duration.ofSeconds(30))
        .apiCallAttemptTimeout(Duration.ofSeconds(10))
        .addMetricPublisher(CloudWatchMetricPublisher.create())
        .build();

// ApacheHttpClient.Builder.build() returns the SdkHttpClient interface
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(100)
        .connectionTimeout(Duration.ofSeconds(5))
        .build();

S3Client s3Client = S3Client.builder()
        .region(Region.EU_SOUTH_2)
        .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
        .overrideConfiguration(clientConfig)
        .httpClient(httpClient)
        .build();
```

## Authentication and Credentials

### Default Credentials Provider Chain

```java
// SDK automatically uses the default credential provider chain:
// 1. Java system properties (aws.accessKeyId and aws.secretAccessKey)
// 2. Environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
// 3. Web identity token from AWS_WEB_IDENTITY_TOKEN_FILE
// 4. Shared credentials and config files (~/.aws/credentials and ~/.aws/config)
// 5. Amazon ECS container credentials
// 6. Amazon EC2 instance profile credentials

S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .build(); // Uses default credential provider chain
```

### Explicit Credentials Providers

```java
import software.amazon.awssdk.auth.credentials.*;

// Environment variables
AwsCredentialsProvider envCredentials = EnvironmentVariableCredentialsProvider.create();

// Profile from ~/.aws/credentials
AwsCredentialsProvider profileCredentials = ProfileCredentialsProvider.create("myprofile");

// Static credentials (NOT recommended for production)
AwsCredentialsProvider staticCredentials = StaticCredentialsProvider.create(
        AwsBasicCredentials.create("accessKeyId", "secretAccessKey")
);

// Instance profile (for EC2)
AwsCredentialsProvider instanceProfileCredentials = InstanceProfileCredentialsProvider.create();

// Use with client
S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(profileCredentials)
        .build();
```
|
||||
|
||||
### SSO Authentication Setup
|
||||
|
||||
```properties
|
||||
# ~/.aws/config
|
||||
[default]
|
||||
sso_session = my-sso
|
||||
sso_account_id = 111122223333
|
||||
sso_role_name = SampleRole
|
||||
region = us-east-1
|
||||
output = json
|
||||
|
||||
[sso-session my-sso]
|
||||
sso_region = us-east-1
|
||||
sso_start_url = https://provided-domain.awsapps.com/start
|
||||
sso_registration_scopes = sso:account:access
|
||||
```
|
||||
|
||||
```bash
|
||||
# Login before running application
|
||||
aws sso login
|
||||
|
||||
# Verify active session
|
||||
aws sts get-caller-identity
|
||||
```
|
||||
|
||||
## HTTP Client Configuration

### Apache HTTP Client (Recommended for Sync)

```java
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;

SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(100)
        .connectionTimeout(Duration.ofSeconds(5))
        .socketTimeout(Duration.ofSeconds(30))
        .connectionTimeToLive(Duration.ofMinutes(5))
        .expectContinueEnabled(true)
        .build();

S3Client s3Client = S3Client.builder()
        .httpClient(httpClient)
        .build();
```

### Netty HTTP Client (For Async Operations)

```java
import io.netty.handler.ssl.SslProvider;
import software.amazon.awssdk.http.async.SdkAsyncHttpClient;
import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;

SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
        .maxConcurrency(100)
        .connectionTimeout(Duration.ofSeconds(5))
        .readTimeout(Duration.ofSeconds(30))
        .writeTimeout(Duration.ofSeconds(30))
        .sslProvider(SslProvider.OPENSSL) // Better performance than the JDK provider
        .build();

S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
        .httpClient(httpClient)
        .build();
```

### URL Connection HTTP Client (Lightweight)

```java
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;

SdkHttpClient httpClient = UrlConnectionHttpClient.builder()
        .socketTimeout(Duration.ofSeconds(30))
        .build();
```

## Best Practices

### 1. Reuse Service Clients

**DO:**
```java
@Service
public class S3Service {
    private final S3Client s3Client;

    public S3Service() {
        this.s3Client = S3Client.builder()
                .region(Region.US_EAST_1)
                .build();
    }

    // Reuse s3Client for all operations
}
```

**DON'T:**
```java
public void uploadFile(String bucket, String key) {
    // Creates a new client each time - wastes resources!
    S3Client s3 = S3Client.builder().build();
    s3.putObject(...);
    s3.close();
}
```

### 2. Configure API Timeouts

```java
S3Client s3Client = S3Client.builder()
        .overrideConfiguration(b -> b
                .apiCallTimeout(Duration.ofSeconds(30))
                .apiCallAttemptTimeout(Duration.ofMillis(5000)))
        .build();
```

### 3. Close Unused Clients

```java
// Try-with-resources
try (S3Client s3 = S3Client.builder().build()) {
    s3.listBuckets();
}

// Explicit close
S3Client s3Client = S3Client.builder().build();
try {
    s3Client.listBuckets();
} finally {
    s3Client.close();
}
```

### 4. Close Streaming Responses

```java
try (ResponseInputStream<GetObjectResponse> s3Object =
        s3Client.getObject(GetObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build())) {

    // Read and process the stream immediately
    byte[] data = s3Object.readAllBytes();

} // Stream auto-closed, connection returned to pool
```

### 5. Optimize SSL for Async Clients

**Add dependency:**
```xml
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-tcnative-boringssl-static</artifactId>
    <version>2.0.61.Final</version>
    <scope>runtime</scope>
</dependency>
```

**Configure SSL:**
```java
SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
        .sslProvider(SslProvider.OPENSSL)
        .build();

S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
        .httpClient(httpClient)
        .build();
```

## Spring Boot Integration

### Configuration Properties

```java
@ConfigurationProperties(prefix = "aws")
public record AwsProperties(
        String region,
        String accessKeyId,
        String secretAccessKey,
        S3Properties s3,
        DynamoDbProperties dynamoDb
) {
    public record S3Properties(
            Integer maxConnections,
            Integer connectionTimeoutSeconds,
            Integer apiCallTimeoutSeconds
    ) {}

    public record DynamoDbProperties(
            Integer maxConnections,
            Integer readTimeoutSeconds
    ) {}
}
```

### Client Configuration Beans

```java
@Configuration
@EnableConfigurationProperties(AwsProperties.class)
public class AwsClientConfiguration {

    private final AwsProperties awsProperties;

    public AwsClientConfiguration(AwsProperties awsProperties) {
        this.awsProperties = awsProperties;
    }

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                .region(Region.of(awsProperties.region()))
                .credentialsProvider(credentialsProvider())
                .overrideConfiguration(clientOverrideConfiguration(
                        awsProperties.s3().apiCallTimeoutSeconds()))
                .httpClient(apacheHttpClient(
                        awsProperties.s3().maxConnections(),
                        awsProperties.s3().connectionTimeoutSeconds()))
                .build();
    }

    private AwsCredentialsProvider credentialsProvider() {
        if (awsProperties.accessKeyId() != null &&
                awsProperties.secretAccessKey() != null) {
            return StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(
                            awsProperties.accessKeyId(),
                            awsProperties.secretAccessKey()));
        }
        return DefaultCredentialsProvider.create();
    }

    private ClientOverrideConfiguration clientOverrideConfiguration(
            Integer apiCallTimeoutSeconds) {
        return ClientOverrideConfiguration.builder()
                .apiCallTimeout(Duration.ofSeconds(
                        apiCallTimeoutSeconds != null ? apiCallTimeoutSeconds : 30))
                .apiCallAttemptTimeout(Duration.ofSeconds(10))
                .build();
    }

    private SdkHttpClient apacheHttpClient(
            Integer maxConnections,
            Integer connectionTimeoutSeconds) {
        return ApacheHttpClient.builder()
                .maxConnections(maxConnections != null ? maxConnections : 50)
                .connectionTimeout(Duration.ofSeconds(
                        connectionTimeoutSeconds != null ? connectionTimeoutSeconds : 5))
                .socketTimeout(Duration.ofSeconds(30))
                .build();
    }
}
```

### Application Properties

```yaml
aws:
  region: us-east-1
  s3:
    max-connections: 100
    connection-timeout-seconds: 5
    api-call-timeout-seconds: 30
  dynamo-db:
    max-connections: 50
    read-timeout-seconds: 30
```

## Error Handling

```java
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.core.exception.SdkClientException;
import software.amazon.awssdk.core.exception.SdkServiceException;

try {
    s3Client.getObject(request);

} catch (S3Exception e) {
    // Service-specific exception
    System.err.println("S3 Error: " + e.awsErrorDetails().errorMessage());
    System.err.println("Error Code: " + e.awsErrorDetails().errorCode());
    System.err.println("Status Code: " + e.statusCode());
    System.err.println("Request ID: " + e.requestId());

} catch (SdkServiceException e) {
    // Generic service exception
    System.err.println("AWS Service Error: " + e.getMessage());

} catch (SdkClientException e) {
    // Client-side error (network, timeout, etc.)
    System.err.println("Client Error: " + e.getMessage());
}
```

## Testing Patterns

### LocalStack Integration

```java
@TestConfiguration
public class LocalStackAwsConfig {

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(URI.create("http://localhost:4566"))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test")))
                .build();
    }
}
```

### Testcontainers with LocalStack

```java
@Testcontainers
@SpringBootTest
class S3IntegrationTest {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(LocalStackContainer.Service.S3);

    @DynamicPropertySource
    static void overrideProperties(DynamicPropertyRegistry registry) {
        registry.add("aws.s3.endpoint",
                () -> localstack.getEndpointOverride(LocalStackContainer.Service.S3));
        registry.add("aws.region", () -> localstack.getRegion());
        registry.add("aws.access-key-id", localstack::getAccessKey);
        registry.add("aws.secret-access-key", localstack::getSecretKey);
    }
}
```

## Maven Dependencies

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>bom</artifactId>
            <version>2.25.0</version> <!-- Use the latest stable version -->
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <!-- Core SDK -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>sdk-core</artifactId>
    </dependency>

    <!-- Apache HTTP Client (recommended for sync) -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>apache-client</artifactId>
    </dependency>

    <!-- Netty HTTP Client (for async) -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>netty-nio-client</artifactId>
    </dependency>

    <!-- URL Connection HTTP Client (lightweight) -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>url-connection-client</artifactId>
    </dependency>

    <!-- CloudWatch Metrics -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>cloudwatch-metric-publisher</artifactId>
    </dependency>

    <!-- OpenSSL for better SSL performance -->
    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty-tcnative-boringssl-static</artifactId>
        <version>2.0.61.Final</version>
        <scope>runtime</scope>
    </dependency>
</dependencies>
```

## Gradle Dependencies

```gradle
dependencies {
    implementation platform('software.amazon.awssdk:bom:2.25.0')

    implementation 'software.amazon.awssdk:sdk-core'
    implementation 'software.amazon.awssdk:apache-client'
    implementation 'software.amazon.awssdk:netty-nio-client'
    implementation 'software.amazon.awssdk:cloudwatch-metric-publisher'

    runtimeOnly 'io.netty:netty-tcnative-boringssl-static:2.0.61.Final'
}
```

## Examples

### Basic S3 Upload

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket("my-bucket")
            .key("uploads/file.txt")
            .build();

    s3.putObject(request, RequestBody.fromString("Hello, World!"));
}
```

### S3 List Objects with Pagination

```java
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable;

try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
    ListObjectsV2Request request = ListObjectsV2Request.builder()
            .bucket("my-bucket")
            .build();

    // The paginator transparently fetches additional pages as you iterate
    ListObjectsV2Iterable pages = s3.listObjectsV2Paginator(request);
    pages.contents().forEach(object ->
            System.out.println("Object key: " + object.key()));
}
```

### Async S3 Upload

```java
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

S3AsyncClient s3AsyncClient = S3AsyncClient.builder().build();

PutObjectRequest request = PutObjectRequest.builder()
        .bucket("my-bucket")
        .key("async-upload.txt")
        .build();

CompletableFuture<PutObjectResponse> future = s3AsyncClient.putObject(
        request, AsyncRequestBody.fromString("Hello, Async World!"));

future.thenAccept(response -> {
    System.out.println("Upload completed: " + response.eTag());
}).exceptionally(error -> {
    System.err.println("Upload failed: " + error.getMessage());
    return null;
});
```

## Performance Considerations

1. **Connection pooling**: The default maximum is 50 connections. Increase it for high-throughput applications.
2. **Timeouts**: Always set both `apiCallTimeout` and `apiCallAttemptTimeout`.
3. **Client reuse**: Create clients once and reuse them for the application's lifetime.
4. **Stream handling**: Close streams immediately to prevent connection pool exhaustion.
5. **Async for I/O**: Use async clients for I/O-bound operations.
6. **OpenSSL**: Use OpenSSL with Netty for better SSL performance.
7. **Metrics**: Enable CloudWatch metrics to monitor performance.

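The pooling, timeout, and reuse guidance above can be combined in one factory; a minimal sketch, where the pool size and timeout values are illustrative rather than recommendations:

```java
import java.time.Duration;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class TunedS3ClientFactory {

    // Build once at startup and reuse the returned client everywhere
    public static S3Client create() {
        // Pool sized above the 50-connection default for high-throughput workloads
        SdkHttpClient httpClient = ApacheHttpClient.builder()
                .maxConnections(200)
                .connectionTimeout(Duration.ofSeconds(5))
                .socketTimeout(Duration.ofSeconds(30))
                .build();

        return S3Client.builder()
                .region(Region.US_EAST_1)
                .httpClient(httpClient)
                .overrideConfiguration(b -> b
                        .apiCallTimeout(Duration.ofSeconds(30))         // total across retries
                        .apiCallAttemptTimeout(Duration.ofSeconds(10))) // per attempt
                .build();
    }
}
```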
## Security Best Practices

1. **Never hardcode credentials**: Use credential providers or environment variables.
2. **Use IAM roles**: Prefer IAM roles over access keys when possible.
3. **Rotate credentials**: Implement credential rotation for long-lived keys.
4. **Least privilege**: Grant the minimum required permissions.
5. **Enable SSL**: Always use HTTPS endpoints (the default).
6. **Audit logging**: Enable AWS CloudTrail for API call auditing.

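For item 1, a minimal sketch of reading credentials from the environment instead of hardcoding them; `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are the SDK's standard variable names:

```java
import software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY at request time;
// nothing sensitive ever appears in source control
S3Client s3 = S3Client.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
        .build();
```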
## Related Skills

- `aws-sdk-java-v2-s3` - S3-specific patterns and examples
- `aws-sdk-java-v2-dynamodb` - DynamoDB patterns and examples
- `aws-sdk-java-v2-lambda` - Lambda patterns and examples

## References

See [references/](references/) for detailed documentation:

- [Developer Guide](references/developer-guide.md) - Comprehensive guide and architecture overview
- [API Reference](references/api-reference.md) - Complete API documentation for core classes
- [Best Practices](references/best-practices.md) - In-depth best practices and configuration examples

## Additional Resources

- [AWS SDK for Java 2.x Developer Guide](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/home.html)
- [AWS SDK for Java 2.x API Reference](https://sdk.amazonaws.com/java/api/latest/)
- [Best Practices](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/best-practices.html)
- [GitHub Repository](https://github.com/aws/aws-sdk-java-v2)

258
skills/aws-java/aws-sdk-java-v2-core/references/api-reference.md
Normal file
@@ -0,0 +1,258 @@

# AWS SDK for Java 2.x API Reference

## Core Client Classes

### SdkClient
Base interface for all SDK service clients.

```java
public interface SdkClient extends SdkAutoCloseable {
    // Base client interface
}
```

### AwsClient
AWS-specific client interface; every AWS service client (for example `S3Client`) extends it.

```java
public interface AwsClient extends SdkClient {
    // AWS client marker interface
}
```

## Client Builders

### ClientBuilder
Base builder interface for all AWS service clients.

**Key Methods:**
- `region(Region region)` - Set the AWS region
- `credentialsProvider(AwsCredentialsProvider credentialsProvider)` - Configure authentication
- `overrideConfiguration(ClientOverrideConfiguration overrideConfiguration)` - Override default settings
- `httpClient(SdkHttpClient httpClient)` - Specify the HTTP client implementation
- `build()` - Create the client instance

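Taken together, a typical builder call chains several of these methods; a minimal sketch (region and timeout values are illustrative):

```java
import java.time.Duration;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Every builder method is optional; unset values fall back to SDK defaults
S3Client client = S3Client.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(DefaultCredentialsProvider.create())
        .overrideConfiguration(b -> b.apiCallTimeout(Duration.ofSeconds(30)))
        .build();
```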
## Configuration Classes

### ClientOverrideConfiguration
Controls client-level configuration including timeouts and metrics.

**Key Properties:**
- `apiCallTimeout(Duration)` - Total timeout across all retry attempts
- `apiCallAttemptTimeout(Duration)` - Timeout per individual attempt
- `retryPolicy(RetryPolicy)` - Retry behavior configuration
- `addMetricPublisher(MetricPublisher)` - Enable metrics collection

### Builder Example

```java
ClientOverrideConfiguration config = ClientOverrideConfiguration.builder()
        .apiCallTimeout(Duration.ofSeconds(30))
        .apiCallAttemptTimeout(Duration.ofSeconds(10))
        .addMetricPublisher(CloudWatchMetricPublisher.create())
        .build();
```

## HTTP Client Implementations

### ApacheHttpClient
Synchronous HTTP client with advanced features.

**Builder Configuration:**
- `maxConnections(Integer)` - Maximum concurrent connections
- `connectionTimeout(Duration)` - Connection establishment timeout
- `socketTimeout(Duration)` - Socket read/write timeout
- `connectionTimeToLive(Duration)` - Connection lifetime
- `proxyConfiguration(ProxyConfiguration)` - Proxy settings

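A minimal builder sketch combining these options (the values are illustrative, not recommendations):

```java
import java.time.Duration;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;

SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(100)                         // pool size
        .connectionTimeout(Duration.ofSeconds(5))    // time to establish a connection
        .socketTimeout(Duration.ofSeconds(30))       // read/write timeout
        .connectionTimeToLive(Duration.ofMinutes(5)) // recycle long-lived connections
        .build();
```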
### NettyNioAsyncHttpClient
Asynchronous HTTP client for high-performance applications.

**Builder Configuration:**
- `maxConcurrency(Integer)` - Maximum concurrent requests
- `connectionTimeout(Duration)` - Connection timeout
- `readTimeout(Duration)` - Read operation timeout
- `writeTimeout(Duration)` - Write operation timeout
- `sslProvider(SslProvider)` - SSL/TLS implementation

### UrlConnectionHttpClient
Lightweight HTTP client built on Java's `HttpURLConnection`.

**Builder Configuration:**
- `socketTimeout(Duration)` - Socket timeout
- `connectTimeout(Duration)` - Connection timeout

## Authentication and Credentials

### Credential Providers

#### EnvironmentVariableCredentialsProvider
Reads credentials from environment variables.

```java
AwsCredentialsProvider provider = EnvironmentVariableCredentialsProvider.create();
```

#### SystemPropertyCredentialsProvider
Reads credentials from Java system properties.

```java
AwsCredentialsProvider provider = SystemPropertyCredentialsProvider.create();
```

#### ProfileCredentialsProvider
Reads credentials from the AWS configuration files.

```java
AwsCredentialsProvider provider = ProfileCredentialsProvider.create("profile-name");
```

#### StaticCredentialsProvider
Provides static credentials (not recommended for production).

```java
AwsBasicCredentials credentials = AwsBasicCredentials.create("key", "secret");
AwsCredentialsProvider provider = StaticCredentialsProvider.create(credentials);
```

#### DefaultCredentialsProvider
Implements the default credential provider chain.

```java
AwsCredentialsProvider provider = DefaultCredentialsProvider.create();
```

### SSO Authentication

#### SSO profiles
SSO-based authentication resolves through a profile configured for AWS IAM Identity Center; the `sso` and `ssooidc` modules must be on the classpath.

```java
AwsCredentialsProvider ssoProvider = ProfileCredentialsProvider.create("my-sso-profile");
```

## Error Handling Classes

### SdkClientException
Client-side exceptions (network, timeout, configuration issues).

```java
try {
    awsOperation();
} catch (SdkClientException e) {
    // Handle client-side errors
}
```

### SdkServiceException
Service-side exceptions (AWS service errors). For AWS services the concrete type is `AwsServiceException`, which adds `awsErrorDetails()`.

```java
try {
    awsOperation();
} catch (AwsServiceException e) {
    // Handle service-side errors
    System.err.println("Error Code: " + e.awsErrorDetails().errorCode());
    System.err.println("Request ID: " + e.requestId());
}
```

### S3Exception
S3-specific exceptions.

```java
try {
    s3Operation();
} catch (S3Exception e) {
    // Handle S3-specific errors
    System.err.println("S3 Error: " + e.awsErrorDetails().errorMessage());
}
```

## Metrics and Monitoring

### CloudWatchMetricPublisher
Publishes SDK metrics to Amazon CloudWatch.

```java
CloudWatchMetricPublisher publisher = CloudWatchMetricPublisher.create();
```

### MetricPublisher
Base interface for custom metric publishers.

```java
public interface MetricPublisher {
    void publish(MetricCollection metricCollection);
}
```

## Utility Classes

### Duration and Time
Configure timeouts using `java.time.Duration`.

```java
Duration apiTimeout = Duration.ofSeconds(30);
Duration attemptTimeout = Duration.ofSeconds(10);
```

### Region
AWS regions for service endpoints.

```java
Region region = Region.US_EAST_1;
Region regionEU = Region.EU_WEST_1;
```

### URI
Endpoint overrides and proxy settings.

```java
URI proxyUri = URI.create("http://proxy:8080");
URI endpointOverride = URI.create("http://localhost:4566");
```

## Configuration Best Practices

### Resource Management
Always close clients when they are no longer needed.

```java
try (S3Client s3 = S3Client.builder().build()) {
    // Use client
} // Auto-closed
```

### Connection Pooling
Reuse clients to avoid connection pool overhead.

```java
@Service
public class AwsService {
    private final S3Client s3Client;

    public AwsService() {
        this.s3Client = S3Client.builder().build();
    }

    // Reuse s3Client throughout the application
}
```

### Error Handling
Implement comprehensive error handling for robust applications.

```java
try {
    // AWS operation
} catch (SdkServiceException e) {
    // Handle service errors
} catch (SdkClientException e) {
    // Handle client errors
} catch (Exception e) {
    // Handle other errors
}
```

@@ -0,0 +1,344 @@

# AWS SDK for Java 2.x Best Practices

## Client Configuration

### Timeout Configuration
Always configure both API call and attempt timeouts to prevent hanging requests.

```java
ClientOverrideConfiguration config = ClientOverrideConfiguration.builder()
        .apiCallTimeout(Duration.ofSeconds(30))        // Total for all retries
        .apiCallAttemptTimeout(Duration.ofSeconds(10)) // Per-attempt timeout
        .build();
```

**Best Practices:**
- Set `apiCallTimeout` higher than `apiCallAttemptTimeout`
- Use timeouts appropriate to your service's characteristics
- Account for network latency and typical service response times
- Monitor timeout metrics and adjust as needed

### HTTP Client Selection
Choose the appropriate HTTP client for your use case.

#### For Synchronous Applications (Apache HttpClient)
```java
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(100)
        .connectionTimeout(Duration.ofSeconds(5))
        .socketTimeout(Duration.ofSeconds(30))
        .build();
```

**Best Use Cases:**
- Traditional synchronous applications
- Medium-throughput operations
- When blocking behavior is acceptable

#### For Asynchronous Applications (Netty NIO Client)
```java
SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
        .maxConcurrency(100)
        .connectionTimeout(Duration.ofSeconds(5))
        .readTimeout(Duration.ofSeconds(30))
        .writeTimeout(Duration.ofSeconds(30))
        .sslProvider(SslProvider.OPENSSL)
        .build();
```

**Best Use Cases:**
- High-throughput applications
- I/O-bound operations
- When non-blocking behavior is required
- When improved SSL performance matters

#### For Lightweight Applications (URL Connection Client)
```java
SdkHttpClient httpClient = UrlConnectionHttpClient.builder()
        .socketTimeout(Duration.ofSeconds(30))
        .build();
```

**Best Use Cases:**
- Simple applications with modest requirements
- When minimizing dependencies
- For basic operations

## Authentication and Security

### Credential Management

#### Default Provider Chain
```java
// Use the default chain (recommended)
S3Client s3Client = S3Client.builder().build();
```

**Default Chain Order:**
1. Java system properties (`aws.accessKeyId`, `aws.secretAccessKey`)
2. Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
3. Web identity token from `AWS_WEB_IDENTITY_TOKEN_FILE`
4. Shared credentials file (`~/.aws/credentials`)
5. Config file (`~/.aws/config`)
6. Amazon ECS container credentials
7. Amazon EC2 instance profile credentials

#### Explicit Credential Provider
```java
// Use a specific credential provider
AwsCredentialsProvider credentials = ProfileCredentialsProvider.create("my-profile");

S3Client s3Client = S3Client.builder()
        .credentialsProvider(credentials)
        .build();
```

#### IAM Roles (Preferred for Production)
```java
// Use IAM role credentials from the EC2 instance profile
AwsCredentialsProvider instanceProfile = InstanceProfileCredentialsProvider.create();

S3Client s3Client = S3Client.builder()
        .credentialsProvider(instanceProfile)
        .build();
```

### Security Best Practices

1. **Never hardcode credentials** - Use credential providers or environment variables
2. **Use IAM roles** - Prefer them over access keys when possible
3. **Implement credential rotation** - For long-lived access keys
4. **Apply least privilege** - Grant the minimum required permissions
5. **Enable SSL** - Always use HTTPS (the default behavior)
6. **Monitor access** - Enable AWS CloudTrail for auditing
7. **Use SSO for human users** - Instead of long-term credentials

## Resource Management

### Client Lifecycle
```java
// Option 1: Try-with-resources (recommended)
try (S3Client s3 = S3Client.builder().build()) {
    // Use client
} // Auto-closed

// Option 2: Explicit close
S3Client s3 = S3Client.builder().build();
try {
    // Use client
} finally {
    s3.close();
}
```

### Stream Handling
Close streams immediately to prevent connection pool exhaustion.

```java
try (ResponseInputStream<GetObjectResponse> response =
        s3Client.getObject(GetObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                .build())) {

    // Read and process data immediately
    byte[] data = response.readAllBytes();

} // Stream auto-closed, connection returned to pool
```

## Performance Optimization

### Connection Pooling
```java
// Configure connection pooling
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(100) // Adjust based on your needs
        .connectionTimeout(Duration.ofSeconds(5))
        .socketTimeout(Duration.ofSeconds(30))
        .connectionTimeToLive(Duration.ofMinutes(5))
        .build();
```

**Best Practices:**
- Set `maxConnections` appropriately for the expected load
- Consider the connection time to live (TTL)
- Monitor connection pool metrics
- Use appropriate timeouts

### SSL Optimization
Use OpenSSL with Netty for better SSL performance.

```xml
<!-- Maven dependency -->
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-tcnative-boringssl-static</artifactId>
    <version>2.0.61.Final</version>
    <scope>runtime</scope>
</dependency>
```

```java
// Use OpenSSL for async clients
SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
        .sslProvider(SslProvider.OPENSSL)
        .build();
```

### Async for I/O-Bound Operations
```java
// Use async clients for I/O-bound operations
S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
        .httpClient(httpClient)
        .build();

// Use CompletableFuture for non-blocking operations
CompletableFuture<PutObjectResponse> future = s3AsyncClient.putObject(
        request, AsyncRequestBody.fromString("payload"));
future.thenAccept(response -> {
    // Handle success
}).exceptionally(error -> {
    // Handle error
    return null;
});
```

## Monitoring and Observability

### Enable SDK Metrics
```java
CloudWatchMetricPublisher publisher = CloudWatchMetricPublisher.create();

S3Client s3Client = S3Client.builder()
        .overrideConfiguration(b -> b
                .addMetricPublisher(publisher))
        .build();
```

### CloudWatch Integration
Configure the CloudWatch metric publisher to collect SDK metrics under a custom namespace.

```java
CloudWatchMetricPublisher cloudWatchPublisher = CloudWatchMetricPublisher.builder()
        .namespace("MyApplication")
        .cloudWatchClient(CloudWatchAsyncClient.create())
        .build();
```

### Custom Metrics
Implement a custom publisher for application-specific monitoring.

```java
public class CustomMetricPublisher implements MetricPublisher {
    @Override
    public void publish(MetricCollection metrics) {
        // A MetricCollection iterates over MetricRecord entries
        metrics.forEach(record ->
                System.out.println("Metric: " + record.metric().name()
                        + " = " + record.value()));
    }

    @Override
    public void close() {
        // Release any resources held by the publisher
    }
}
```

## Error Handling

### Comprehensive Error Handling
```java
try {
    awsOperation();
} catch (AwsServiceException e) {
    // Service-side error returned by AWS
    System.err.println("AWS Service Error: " + e.awsErrorDetails().errorMessage());
    System.err.println("Error Code: " + e.awsErrorDetails().errorCode());
    System.err.println("Status Code: " + e.statusCode());
    System.err.println("Request ID: " + e.requestId());

} catch (SdkClientException e) {
    // Client-side error (network, timeout, etc.)
    System.err.println("Client Error: " + e.getMessage());

} catch (Exception e) {
    // Other errors
    System.err.println("Unexpected Error: " + e.getMessage());
}
```

### Retry Configuration
```java
RetryPolicy retryPolicy = RetryPolicy.builder()
        .numRetries(3)
        .retryCondition(RetryCondition.defaultRetryCondition())
        .backoffStrategy(BackoffStrategy.defaultStrategy())
        .build();
```

## Testing Strategies

### Local Testing with LocalStack
```java
@TestConfiguration
public class LocalStackConfig {

    @Bean
    public S3Client s3Client() {
        return S3Client.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(URI.create("http://localhost:4566"))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test")))
                .build();
    }
}
```

### Testcontainers Integration
```java
@Testcontainers
@SpringBootTest
public class AwsIntegrationTest {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(LocalStackContainer.Service.S3);

    @DynamicPropertySource
    static void configProperties(DynamicPropertyRegistry registry) {
        registry.add("aws.endpoint", () -> localstack.getEndpointOverride(LocalStackContainer.Service.S3));
    }
}
```

## Configuration Templates

### High-Throughput Configuration
```java
ApacheHttpClient highThroughputClient = ApacheHttpClient.builder()
        .maxConnections(200)
        .connectionTimeout(Duration.ofSeconds(3))
        .socketTimeout(Duration.ofSeconds(30))
        .connectionTimeToLive(Duration.ofMinutes(10))
        .build();

S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .httpClient(highThroughputClient)
        .overrideConfiguration(b -> b
                .apiCallTimeout(Duration.ofSeconds(45))
                .apiCallAttemptTimeout(Duration.ofSeconds(15)))
        .build();
```

### Low-Latency Configuration
```java
ApacheHttpClient lowLatencyClient = ApacheHttpClient.builder()
        .maxConnections(50)
        .connectionTimeout(Duration.ofSeconds(2))
        .socketTimeout(Duration.ofSeconds(10))
        .build();

S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .httpClient(lowLatencyClient)
        .overrideConfiguration(b -> b
                .apiCallTimeout(Duration.ofSeconds(15))
                .apiCallAttemptTimeout(Duration.ofSeconds(3)))
        .build();
```

# AWS SDK for Java 2.x Developer Guide

## Overview

The AWS SDK for Java 2.x provides a modern, type-safe API for AWS services. Built on Java 8+, it offers improved performance, better error handling, and enhanced security compared to v1.x.

## Key Features

- **Modern Architecture**: Built on Java 8+ with reactive and async support
- **Type Safety**: Comprehensive type annotations and validation
- **Performance Optimized**: Connection pooling, async support, and SSL optimization
- **Enhanced Security**: Better credential management and security practices
- **Extensive Coverage**: Support for all AWS services with regular updates

## Core Concepts

### Service Clients

The primary interface for interacting with AWS services. All clients implement the `SdkClient` interface.

```java
// S3Client example
S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build();
```

### Client Configuration
Configure behavior through builders supporting:
- Timeout settings
- HTTP client selection
- Authentication methods
- Monitoring and metrics

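The options above combine on a single builder; a minimal sketch (the timeout values are illustrative assumptions, not recommendations):

```java
import java.time.Duration;

import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class ClientConfigSketch {
    public static void main(String[] args) {
        // Illustrative values: tune per workload
        S3Client s3 = S3Client.builder()
                .region(Region.US_EAST_1)
                .httpClient(ApacheHttpClient.builder()            // HTTP client selection
                        .connectionTimeout(Duration.ofSeconds(5)) // timeout settings
                        .build())
                .overrideConfiguration(ClientOverrideConfiguration.builder()
                        .apiCallTimeout(Duration.ofSeconds(30))   // total per-call budget
                        .build())
                .build();
        s3.close();
    }
}
```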
### Credential Providers
Multiple authentication methods:
- Environment variables
- System properties
- Shared credential files
- IAM roles
- SSO integration

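For instance, the default chain resolves these sources in order, or a single provider can be pinned explicitly (the "dev" profile name below is a hypothetical example):

```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Default chain: env vars -> system properties -> shared files -> IAM roles / SSO
S3Client defaultChain = S3Client.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(DefaultCredentialsProvider.create())
        .build();

// Pin a specific provider: the hypothetical "dev" profile from ~/.aws/credentials
S3Client fromProfile = S3Client.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(ProfileCredentialsProvider.create("dev"))
        .build();
```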
### HTTP Clients
Choose from three HTTP implementations:
- Apache HttpClient (synchronous)
- Netty NIO Client (asynchronous)
- URL Connection Client (lightweight)

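A sketch of selecting an implementation per client (each lives in its own Maven artifact: `apache-client`, `netty-nio-client`, `url-connection-client`):

```java
import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;

// Lightweight URLConnection client for simple synchronous calls
S3Client lightweight = S3Client.builder()
        .httpClient(UrlConnectionHttpClient.builder().build())
        .build();

// Netty NIO client backs the asynchronous clients
S3AsyncClient async = S3AsyncClient.builder()
        .httpClient(NettyNioAsyncHttpClient.builder().build())
        .build();
```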
## Migration from v1.x

The SDK 2.x is not backward compatible with v1.x. Key changes:
- Builder pattern for client creation
- Different package structure
- Enhanced error handling
- New credential system
- Improved resource management

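The builder-pattern and package changes show up clearly in a side-by-side S3 client creation sketch (the v1 lines are kept as comments):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// v1.x (com.amazonaws.services.s3):
// AmazonS3 s3v1 = AmazonS3ClientBuilder.standard()
//         .withRegion(Regions.US_EAST_1)
//         .build();

// v2.x (software.amazon.awssdk.services.s3): immutable client from a fluent builder
S3Client s3v2 = S3Client.builder()
        .region(Region.US_EAST_1)
        .build();
```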
## Getting Started

Include the BOM (Bill of Materials) for version management:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>bom</artifactId>
      <!-- Use the latest stable version -->
      <version>2.25.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Add service-specific dependencies:

```xml
<dependencies>
  <!-- S3 Service -->
  <dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
  </dependency>

  <!-- Core SDK -->
  <dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>sdk-core</artifactId>
  </dependency>
</dependencies>
```

## Architecture Overview

```
AWS Service Client
├── Configuration Layer
│   ├── Client Override Configuration
│   └── HTTP Client Configuration
├── Authentication Layer
│   ├── Credential Providers
│   └── Security Context
├── Transport Layer
│   ├── HTTP Client (Apache/Netty/URLConn)
│   └── Connection Pool
└── Protocol Layer
    ├── Service Protocol Implementation
    └── Error Handling
```

## Service Discovery

Every available AWS service is exposed through a dedicated, generated client interface, with paginators for list-style APIs.

### Available Services

All AWS services are available through dedicated client interfaces:
- S3 (Simple Storage Service)
- DynamoDB (NoSQL Database)
- Lambda (Serverless Functions)
- EC2 (Compute Cloud)
- RDS (Managed Databases)
- And 200+ other services

For a complete list, see the AWS service documentation.

## Support and Community

- **GitHub Issues**: Report bugs and request features
- **AWS Amplify**: For mobile app developers
- **Migration Guide**: Available for v1.x users
- **Changelog**: Track changes on GitHub
392
skills/aws-java/aws-sdk-java-v2-dynamodb/SKILL.md
Normal file
---
name: aws-sdk-java-v2-dynamodb
description: Amazon DynamoDB patterns using AWS SDK for Java 2.x. Use when creating, querying, scanning, or performing CRUD operations on DynamoDB tables, working with indexes, batch operations, transactions, or integrating with Spring Boot applications.
category: aws
tags: [aws, dynamodb, java, sdk, nosql, database]
version: 1.1.0
allowed-tools: Read, Write, Bash
---

# AWS SDK for Java 2.x - Amazon DynamoDB

## When to Use

Use this skill when:
- Creating, updating, or deleting DynamoDB tables
- Performing CRUD operations on DynamoDB items
- Querying or scanning tables
- Working with Global Secondary Indexes (GSI) or Local Secondary Indexes (LSI)
- Implementing batch operations for efficiency
- Using DynamoDB transactions
- Integrating DynamoDB with Spring Boot applications
- Working with DynamoDB Enhanced Client for type-safe operations

## Dependencies

Add to `pom.xml`:
```xml
<!-- Low-level DynamoDB client -->
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>dynamodb</artifactId>
</dependency>

<!-- Enhanced client (recommended) -->
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>dynamodb-enhanced</artifactId>
</dependency>
```

## Client Setup

### Low-Level Client
```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

DynamoDbClient dynamoDb = DynamoDbClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

### Enhanced Client (Recommended)
```java
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;

DynamoDbEnhancedClient enhancedClient = DynamoDbEnhancedClient.builder()
        .dynamoDbClient(dynamoDb)
        .build();
```

## Entity Mapping

To define DynamoDB entities, use the `@DynamoDbBean` annotation. Note that the enhanced client reads key and attribute annotations from getter methods, not private fields:

```java
@DynamoDbBean
public class Customer {

    private String customerId;
    private String orderId;
    private String name;
    private String email;

    @DynamoDbPartitionKey
    public String getCustomerId() { return customerId; }

    @DynamoDbSortKey
    public String getOrderId() { return orderId; }

    @DynamoDbAttribute("customer_name")
    public String getName() { return name; }

    public String getEmail() { return email; }

    // Setters omitted for brevity (required by TableSchema.fromBean)
}
```

For complex entity mapping with GSIs and custom converters, see [Entity Mapping Reference](references/entity-mapping.md).

## CRUD Operations

### Basic Operations
```java
// Create or update item
DynamoDbTable<Customer> table = enhancedClient.table("Customers", TableSchema.fromBean(Customer.class));
table.putItem(customer);

// Get item
Customer result = table.getItem(Key.builder().partitionValue(customerId).build());

// Update item (returns the updated item)
Customer updated = table.updateItem(customer);

// Delete item
table.deleteItem(Key.builder().partitionValue(customerId).build());
```

### Composite Key Operations
```java
// Get item with composite key
Order order = table.getItem(Key.builder()
        .partitionValue(customerId)
        .sortValue(orderId)
        .build());
```

## Query Operations

### Basic Query
```java
import software.amazon.awssdk.enhanced.dynamodb.model.QueryConditional;

QueryConditional queryConditional = QueryConditional
        .keyEqualTo(Key.builder()
                .partitionValue(customerId)
                .build());

List<Order> orders = table.query(queryConditional).items().stream()
        .collect(Collectors.toList());
```

### Advanced Query with Filters
```java
import software.amazon.awssdk.enhanced.dynamodb.Expression;

// "status" is a DynamoDB reserved word, so alias it with an expression name
Expression filter = Expression.builder()
        .expression("#status = :pending")
        .putExpressionName("#status", "status")
        .putExpressionValue(":pending", AttributeValue.builder().s("PENDING").build())
        .build();

List<Order> pendingOrders = table.query(r -> r
        .queryConditional(queryConditional)
        .filterExpression(filter))
        .items().stream()
        .collect(Collectors.toList());
```

For detailed query patterns, see [Advanced Operations Reference](references/advanced-operations.md).

## Scan Operations

```java
// Scan all items
List<Customer> allCustomers = table.scan().items().stream()
        .collect(Collectors.toList());

// Scan with filter
Expression filter = Expression.builder()
        .expression("points >= :minPoints")
        .putExpressionValue(":minPoints", AttributeValue.builder().n("1000").build())
        .build();

List<Customer> vipCustomers = table.scan(r -> r.filterExpression(filter))
        .items().stream()
        .collect(Collectors.toList());
```

## Batch Operations

### Batch Get
```java
import software.amazon.awssdk.enhanced.dynamodb.model.*;

List<Key> keys = customerIds.stream()
        .map(id -> Key.builder().partitionValue(id).build())
        .collect(Collectors.toList());

ReadBatch.Builder<Customer> batchBuilder = ReadBatch.builder(Customer.class)
        .mappedTableResource(table);

keys.forEach(batchBuilder::addGetItem);

BatchGetResultPageIterable result = enhancedClient.batchGetItem(r ->
        r.addReadBatch(batchBuilder.build()));

List<Customer> customers = result.resultsForTable(table).stream()
        .collect(Collectors.toList());
```

### Batch Write
```java
WriteBatch.Builder<Customer> batchBuilder = WriteBatch.builder(Customer.class)
        .mappedTableResource(table);

customers.forEach(batchBuilder::addPutItem);

enhancedClient.batchWriteItem(r -> r.addWriteBatch(batchBuilder.build()));
```

## Transactions

### Transactional Write
```java
enhancedClient.transactWriteItems(r -> r
        .addPutItem(customerTable, customer)
        .addPutItem(orderTable, order));
```

### Transactional Read
```java
TransactGetItemsEnhancedRequest request = TransactGetItemsEnhancedRequest.builder()
        .addGetItem(customerTable, customerKey)
        .addGetItem(orderTable, orderKey)
        .build();

List<Document> results = enhancedClient.transactGetItems(request);
```

## Spring Boot Integration

### Configuration
```java
@Configuration
public class DynamoDbConfiguration {

    @Bean
    public DynamoDbClient dynamoDbClient() {
        return DynamoDbClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }

    @Bean
    public DynamoDbEnhancedClient dynamoDbEnhancedClient(DynamoDbClient dynamoDbClient) {
        return DynamoDbEnhancedClient.builder()
                .dynamoDbClient(dynamoDbClient)
                .build();
    }
}
```

### Repository Pattern
```java
@Repository
public class CustomerRepository {

    private final DynamoDbTable<Customer> customerTable;

    public CustomerRepository(DynamoDbEnhancedClient enhancedClient) {
        this.customerTable = enhancedClient.table("Customers", TableSchema.fromBean(Customer.class));
    }

    public void save(Customer customer) {
        customerTable.putItem(customer);
    }

    public Optional<Customer> findById(String customerId) {
        Key key = Key.builder().partitionValue(customerId).build();
        return Optional.ofNullable(customerTable.getItem(key));
    }
}
```

For comprehensive Spring Boot integration patterns, see [Spring Boot Integration Reference](references/spring-boot-integration.md).

## Testing

### Unit Testing with Mocks
```java
@ExtendWith(MockitoExtension.class)
class CustomerServiceTest {

    @Mock
    private DynamoDbEnhancedClient enhancedClient;

    @Mock
    private DynamoDbTable<Customer> customerTable;

    @Test
    void saveCustomer_ShouldReturnSavedCustomer() {
        // Arrange: stub the table lookup before constructing the service,
        // since the service's constructor calls enhancedClient.table(...)
        when(enhancedClient.table(anyString(), any(TableSchema.class)))
                .thenReturn(customerTable);
        CustomerService customerService = new CustomerService(enhancedClient);

        Customer customer = new Customer("123", "John Doe", "john@example.com");

        // Act
        Customer result = customerService.saveCustomer(customer);

        // Assert
        assertNotNull(result);
        verify(customerTable).putItem(customer);
    }
}
```

### Integration Testing with LocalStack
```java
@Testcontainers
@SpringBootTest
class DynamoDbIntegrationTest {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(LocalStackContainer.Service.DYNAMODB);

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("aws.endpoint",
                () -> localstack.getEndpointOverride(LocalStackContainer.Service.DYNAMODB).toString());
    }

    @Autowired
    private DynamoDbEnhancedClient enhancedClient;

    @Test
    void testCustomerCRUDOperations() {
        // Test implementation
    }
}
```

For detailed testing strategies, see [Testing Strategies](references/testing-strategies.md).

## Best Practices

1. **Use the Enhanced Client**: Provides type-safe operations with less boilerplate
2. **Design partition keys carefully**: Distribute data evenly across partitions
3. **Use composite keys**: Leverage sort keys for efficient queries
4. **Create GSIs strategically**: Support different access patterns
5. **Use batch operations**: Reduce API calls for multiple items
6. **Implement pagination**: Page through large result sets instead of loading them at once
7. **Use transactions**: For operations that must be atomic
8. **Avoid scans**: Prefer queries with proper indexes
9. **Handle conditional writes**: Prevent race conditions
10. **Use proper error handling**: Handle exceptions like `ProvisionedThroughputExceededException`

## Common Patterns

### Conditional Operations
```java
PutItemEnhancedRequest<Customer> request = PutItemEnhancedRequest.builder(Customer.class)
        .item(customer)
        .conditionExpression(Expression.builder()
                .expression("attribute_not_exists(customerId)")
                .build())
        .build();

table.putItem(request);
```

### Pagination
```java
ScanEnhancedRequest request = ScanEnhancedRequest.builder()
        .limit(100)
        .build();

PageIterable<Customer> results = table.scan(request);
results.stream().forEach(page -> {
    // Process each page of items
    page.items().forEach(customer -> { /* ... */ });
});
```

## Performance Considerations

- Monitor read/write capacity units
- Implement exponential backoff for retries
- Use proper pagination for large datasets
- Consider eventual consistency for reads
- Use `ReturnConsumedCapacity` to monitor capacity usage

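For the last point, a hedged sketch using the low-level client, which surfaces consumed capacity on each response (the table and attribute names are illustrative):

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.dynamodb.model.PutItemResponse;
import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity;

DynamoDbClient dynamoDb = DynamoDbClient.create();

PutItemResponse response = dynamoDb.putItem(PutItemRequest.builder()
        .tableName("Customers") // illustrative table name
        .item(Map.of("customerId", AttributeValue.builder().s("123").build()))
        .returnConsumedCapacity(ReturnConsumedCapacity.TOTAL)
        .build());

// Write capacity units consumed by this call
System.out.println("WCUs: " + response.consumedCapacity().capacityUnits());
```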
## Related Skills

- `aws-sdk-java-v2-core` - Core AWS SDK patterns
- `spring-data-jpa` - Alternative data access patterns
- `unit-test-service-layer` - Service testing patterns
- `unit-test-wiremock-rest-api` - Testing external APIs

## References

- [AWS DynamoDB Documentation](https://docs.aws.amazon.com/dynamodb/)
- [AWS SDK for Java Documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/)
- [DynamoDB Examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/dynamodb)
- [LocalStack for Testing](https://docs.localstack.cloud/user-guide/aws/)

For detailed implementations, see the references folder:
- [Entity Mapping Reference](references/entity-mapping.md)
- [Advanced Operations Reference](references/advanced-operations.md)
- [Spring Boot Integration Reference](references/spring-boot-integration.md)
- [Testing Strategies](references/testing-strategies.md)
# Advanced Operations Reference

This document covers advanced DynamoDB operations and patterns.

## Query Operations

### Key Conditions

#### QueryConditional.keyEqualTo()
```java
QueryConditional equalTo = QueryConditional
        .keyEqualTo(Key.builder()
                .partitionValue("customer123")
                .build());
```

#### QueryConditional.sortBetween()
```java
QueryConditional between = QueryConditional
        .sortBetween(
                Key.builder().partitionValue("customer123").sortValue("2023-01-01").build(),
                Key.builder().partitionValue("customer123").sortValue("2023-12-31").build());
```

#### QueryConditional.sortBeginsWith()
```java
QueryConditional beginsWith = QueryConditional
        .sortBeginsWith(Key.builder()
                .partitionValue("customer123")
                .sortValue("2023-")
                .build());
```

### Filter Expressions

`status` is a DynamoDB reserved word, so any declared expression name must actually be used in the expression string:

```java
Expression filter = Expression.builder()
        .expression("#p >= :minPoints AND #s = :status")
        .putExpressionName("#p", "points")
        .putExpressionName("#s", "status")
        .putExpressionValue(":minPoints", AttributeValue.builder().n("1000").build())
        .putExpressionValue(":status", AttributeValue.builder().s("ACTIVE").build())
        .build();
```

### Projection Expressions

With the enhanced client, limit returned attributes via `attributesToProject` on the request:

```java
QueryEnhancedRequest request = QueryEnhancedRequest.builder()
        .queryConditional(queryConditional)
        .attributesToProject("customerId", "name", "email")
        .build();
```

## Scan Operations

### Pagination
```java
ScanEnhancedRequest request = ScanEnhancedRequest.builder()
        .limit(100)
        .build();

PageIterable<Customer> results = table.scan(request);
results.stream().forEach(page -> {
    // Process each page of results
    page.items().forEach(customer -> { /* ... */ });
});
```

### Conditional Scan
```java
Expression filter = Expression.builder()
        .expression("active = :active")
        .putExpressionValue(":active", AttributeValue.builder().bool(true).build())
        .build();

List<Customer> activeCustomers = table.scan(r -> r
        .filterExpression(filter)
        .limit(50))
        .items().stream()
        .collect(Collectors.toList());
```

## Batch Operations

### Batch Get with Unprocessed Keys
```java
List<Key> keys = customerIds.stream()
        .map(id -> Key.builder().partitionValue(id).build())
        .collect(Collectors.toList());

ReadBatch.Builder<Customer> batchBuilder = ReadBatch.builder(Customer.class)
        .mappedTableResource(table);

keys.forEach(batchBuilder::addGetItem);

BatchGetResultPageIterable result = enhancedClient.batchGetItem(r ->
        r.addReadBatch(batchBuilder.build()));

// Handle unprocessed keys per result page
result.forEach(page -> {
    List<Key> unprocessed = page.unprocessedKeysForTable(table);
    if (!unprocessed.isEmpty()) {
        // Retry logic for unprocessed keys
    }
});
```

### Batch Write with Different Operations
```java
WriteBatch.Builder<Customer> batchBuilder = WriteBatch.builder(Customer.class)
        .mappedTableResource(table);

batchBuilder.addPutItem(customer1);
batchBuilder.addDeleteItem(customer2);
batchBuilder.addPutItem(customer3);

enhancedClient.batchWriteItem(r -> r.addWriteBatch(batchBuilder.build()));
```

## Transactions

### Conditional Writes
```java
PutItemEnhancedRequest<Customer> putRequest = PutItemEnhancedRequest.builder(Customer.class)
        .item(customer)
        .conditionExpression(Expression.builder()
                .expression("attribute_not_exists(customerId)")
                .build())
        .build();

table.putItem(putRequest);
```

### Multiple Table Operations
```java
TransactWriteItemsEnhancedRequest request = TransactWriteItemsEnhancedRequest.builder()
        .addPutItem(customerTable, customer)
        .addPutItem(orderTable, order)
        .addUpdateItem(productTable, product)
        .addDeleteItem(cartTable, cartKey)
        .build();

enhancedClient.transactWriteItems(request);
```

## Conditional Operations

### Condition Expressions

Conditions are expressed with `Expression` and attached to an enhanced request via `conditionExpression`:

```java
// Check that an attribute does not exist
Expression notExists = Expression.builder()
        .expression("attribute_not_exists(customerId)")
        .build();

// Check attribute values
Expression morePoints = Expression.builder()
        .expression("points > :currentPoints")
        .putExpressionValue(":currentPoints", AttributeValue.builder().n("500").build())
        .build();

// Multiple conditions
Expression combined = Expression.builder()
        .expression("points > :min AND active = :active")
        .putExpressionValue(":min", AttributeValue.builder().n("100").build())
        .putExpressionValue(":active", AttributeValue.builder().bool(true).build())
        .build();
```

## Error Handling

### Provisioned Throughput Exceeded
```java
try {
    table.putItem(customer);
} catch (ProvisionedThroughputExceededException e) {
    // Throttled: back off and retry
} catch (ConditionalCheckFailedException e) {
    // Handle conditional check failure
} catch (TransactionCanceledException e) {
    // Handle transaction cancellation
} catch (ResourceNotFoundException e) {
    // Handle table not found
} catch (DynamoDbException e) {
    // Handle other DynamoDB exceptions
}
```

### Exponential Backoff for Retry
```java
int maxRetries = 3;
long baseDelay = 1000; // 1 second

for (int attempt = 0; attempt < maxRetries; attempt++) {
    try {
        operation();
        break;
    } catch (ProvisionedThroughputExceededException e) {
        long delay = baseDelay * (1L << attempt); // 1s, 2s, 4s
        try {
            Thread.sleep(delay);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}
```
# Entity Mapping Reference

This document provides detailed information about entity mapping in the DynamoDB Enhanced Client.

## @DynamoDbBean Annotation

The `@DynamoDbBean` annotation marks a class as a DynamoDB entity:

```java
@DynamoDbBean
public class Customer {
    // Class implementation
}
```

## Field Annotations

### @DynamoDbPartitionKey
Marks a field as the partition key:

```java
@DynamoDbPartitionKey
public String getCustomerId() {
    return customerId;
}
```

### @DynamoDbSortKey
Marks a field as the sort key (used with composite keys):

```java
@DynamoDbSortKey
@DynamoDbAttribute("order_id")
public String getOrderId() {
    return orderId;
}
```

### @DynamoDbAttribute
Maps a field to a DynamoDB attribute with a custom name:

```java
@DynamoDbAttribute("customer_name")
public String getName() {
    return name;
}
```

### @DynamoDbSecondaryPartitionKey
Marks a field as a partition key for a Global Secondary Index:

```java
@DynamoDbSecondaryPartitionKey(indexNames = "category-index")
public String getCategory() {
    return category;
}
```

### @DynamoDbSecondarySortKey
Marks a field as a sort key for a Global Secondary Index:

```java
@DynamoDbSecondarySortKey(indexNames = "category-index")
public BigDecimal getPrice() {
    return price;
}
```

### @DynamoDbConvertedBy
Custom attribute conversion:

```java
@DynamoDbConvertedBy(LocalDateTimeConverter.class)
public LocalDateTime getCreatedAt() {
    return createdAt;
}
```

## Supported Data Types

The enhanced client automatically handles the following data types:

- String → S (String)
- Integer, Long → N (Number)
- BigDecimal → N (Number)
- Boolean → BOOL
- LocalDateTime → S (ISO-8601 format)
- LocalDate → S (ISO-8601 format)
- UUID → S (String)
- Enum → S (String representation)
- Custom types with converters

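A hypothetical entity touching several of these mappings (the `Product` class and its attributes are illustrative; `TableSchema.fromBean` also requires matching setters, omitted here):

```java
import java.math.BigDecimal;
import java.time.LocalDate;
import java.util.UUID;

import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;

@DynamoDbBean
public class Product {

    private String productId;   // stored as S
    private Long quantity;      // stored as N
    private BigDecimal price;   // stored as N
    private Boolean active;     // stored as BOOL
    private LocalDate listedOn; // stored as S (ISO-8601)
    private UUID vendorId;      // stored as S

    @DynamoDbPartitionKey
    public String getProductId() { return productId; }
    public Long getQuantity() { return quantity; }
    public BigDecimal getPrice() { return price; }
    public Boolean getActive() { return active; }
    public LocalDate getListedOn() { return listedOn; }
    public UUID getVendorId() { return vendorId; }

    // Setters omitted for brevity
}
```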
## Custom Converters

Create custom converters for complex data types by implementing the `AttributeConverter` interface:

```java
public class LocalDateTimeConverter implements AttributeConverter<LocalDateTime> {

    @Override
    public AttributeValue transformFrom(LocalDateTime input) {
        return AttributeValue.builder().s(input.toString()).build();
    }

    @Override
    public LocalDateTime transformTo(AttributeValue input) {
        return LocalDateTime.parse(input.s());
    }

    @Override
    public EnhancedType<LocalDateTime> type() {
        return EnhancedType.of(LocalDateTime.class);
    }

    @Override
    public AttributeValueType attributeValueType() {
        return AttributeValueType.S;
    }
}
```
# Spring Boot Integration Reference

This document provides detailed information about integrating DynamoDB with Spring Boot applications.

## Configuration

### Basic Configuration
```java
@Configuration
public class DynamoDbConfiguration {

    @Bean
    @Profile("local")
    public DynamoDbClient dynamoDbClient() {
        return DynamoDbClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }

    @Bean
    @Profile("prod")
    public DynamoDbClient dynamoDbClientProd(
            @Value("${aws.region}") String region,
            @Value("${aws.accessKeyId}") String accessKeyId,
            @Value("${aws.secretAccessKey}") String secretAccessKey) {

        return DynamoDbClient.builder()
                .region(Region.of(region))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
                .build();
    }

    @Bean
    public DynamoDbEnhancedClient dynamoDbEnhancedClient(DynamoDbClient dynamoDbClient) {
        return DynamoDbEnhancedClient.builder()
                .dynamoDbClient(dynamoDbClient)
                .build();
    }
}
```

### Properties Configuration
`application-local.properties`:
```properties
aws.region=us-east-1
```

`application-prod.properties`:
```properties
aws.region=us-east-1
aws.accessKeyId=${AWS_ACCESS_KEY_ID}
aws.secretAccessKey=${AWS_SECRET_ACCESS_KEY}
```

## Repository Pattern Implementation

### Base Repository Interface
```java
public interface DynamoDbRepository<T> {
    void save(T entity);
    Optional<T> findById(Object partitionKey);
    Optional<T> findById(Object partitionKey, Object sortKey);
    void delete(Object partitionKey);
    void delete(Object partitionKey, Object sortKey);
    List<T> findAll();
    List<T> findAll(int limit);
    boolean existsById(Object partitionKey);
    boolean existsById(Object partitionKey, Object sortKey);
}

public interface CustomerRepository extends DynamoDbRepository<Customer> {
    List<Customer> findByEmail(String email);
    List<Customer> findByPointsGreaterThan(Integer minPoints);
}
```

### Generic Repository Implementation
|
||||
```java
|
||||
@Repository
|
||||
public class GenericDynamoDbRepository<T> implements DynamoDbRepository<T> {
|
||||
|
||||
private final DynamoDbTable<T> table;
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
public GenericDynamoDbRepository(DynamoDbEnhancedClient enhancedClient,
|
||||
Class<T> entityClass,
|
||||
String tableName) {
|
||||
this.table = enhancedClient.table(tableName, TableSchema.fromBean(entityClass));
|
||||
}
|
||||
|
||||
@Override
|
||||
public void save(T entity) {
|
||||
table.putItem(entity);
|
||||
}
|
||||
|
||||
@Override
|
||||
public Optional<T> findById(Object partitionKey) {
|
||||
Key key = Key.builder().partitionValue(partitionKey).build();
|
||||
return Optional.ofNullable(table.getItem(key));
|
||||
}
|
||||
|
||||
@Override
|
||||
public Optional<T> findById(Object partitionKey, Object sortKey) {
|
||||
Key key = Key.builder()
|
||||
.partitionValue(partitionKey)
|
||||
.sortValue(sortKey)
|
||||
.build();
|
||||
return Optional.ofNullable(table.getItem(key));
|
||||
}
|
||||
|
||||
@Override
|
||||
public void delete(Object partitionKey) {
|
||||
Key key = Key.builder().partitionValue(partitionKey).build();
|
||||
table.deleteItem(key);
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<T> findAll() {
|
||||
return table.scan().items().stream()
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<T> findAll(int limit) {
|
||||
return table.scan(ScanEnhancedRequest.builder().limit(limit).build())
|
||||
.items().stream()
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Specific Repository Implementation
|
||||
```java
|
||||
@Repository
|
||||
public class CustomerRepositoryImpl implements CustomerRepository {
|
||||
|
||||
private final DynamoDbTable<Customer> customerTable;
|
||||
|
||||
public CustomerRepositoryImpl(DynamoDbEnhancedClient enhancedClient) {
|
||||
this.customerTable = enhancedClient.table(
|
||||
"Customers",
|
||||
TableSchema.fromBean(Customer.class));
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<Customer> findByEmail(String email) {
|
||||
Expression filter = Expression.builder()
|
||||
.expression("email = :email")
|
||||
.putExpressionValue(":email", AttributeValue.builder().s(email).build())
|
||||
.build();
|
||||
|
||||
return customerTable.scan(r -> r.filterExpression(filter))
|
||||
.items().stream()
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
|
||||
@Override
|
||||
public List<Customer> findByPointsGreaterThan(Integer minPoints) {
|
||||
Expression filter = Expression.builder()
|
||||
.expression("points >= :minPoints")
|
||||
.putExpressionValue(":minPoints", AttributeValue.builder().n(minPoints.toString()).build())
|
||||
.build();
|
||||
|
||||
return customerTable.scan(r -> r.filterExpression(filter))
|
||||
.items().stream()
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
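Because `GenericDynamoDbRepository` needs a `Class<T>` and table name at construction time, repositories without a dedicated subclass have to be registered as explicit beans. A minimal configuration sketch (the `Order` entity and `"Orders"` table name are assumptions for illustration):

```java
@Configuration
public class RepositoryConfiguration {

    // Hypothetical Order repository registered as an explicit bean, since
    // component scanning cannot infer the Class<T>/table-name arguments
    @Bean
    public DynamoDbRepository<Order> orderRepository(DynamoDbEnhancedClient enhancedClient) {
        return new GenericDynamoDbRepository<>(enhancedClient, Order.class, "Orders");
    }
}
```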
## Service Layer Implementation

### Service with Transaction Management

```java
// Note: Spring's @Transactional does not manage DynamoDB operations;
// atomicity comes from DynamoDB's own transactWriteItems API.
@Service
public class CustomerService {

    private final CustomerRepository customerRepository;
    private final OrderRepository orderRepository;
    private final DynamoDbEnhancedClient enhancedClient;

    public CustomerService(CustomerRepository customerRepository,
                           OrderRepository orderRepository,
                           DynamoDbEnhancedClient enhancedClient) {
        this.customerRepository = customerRepository;
        this.orderRepository = orderRepository;
        this.enhancedClient = enhancedClient;
    }

    public void createCustomerWithOrder(Customer customer, Order order) {
        // Use a DynamoDB transaction for an atomic multi-table write
        enhancedClient.transactWriteItems(r -> r
                .addPutItem(getCustomerTable(), customer)
                .addPutItem(getOrderTable(), order));
    }

    private DynamoDbTable<Customer> getCustomerTable() {
        return enhancedClient.table("Customers", TableSchema.fromBean(Customer.class));
    }

    private DynamoDbTable<Order> getOrderTable() {
        return enhancedClient.table("Orders", TableSchema.fromBean(Order.class));
    }
}
```

### Async Operations

```java
@Service
public class AsyncCustomerService {

    private final DynamoDbEnhancedClient enhancedClient;

    public AsyncCustomerService(DynamoDbEnhancedClient enhancedClient) {
        this.enhancedClient = enhancedClient;
    }

    public CompletableFuture<Void> saveCustomerAsync(Customer customer) {
        return CompletableFuture.runAsync(() -> {
            DynamoDbTable<Customer> table = enhancedClient.table(
                    "Customers",
                    TableSchema.fromBean(Customer.class));
            table.putItem(customer);
        });
    }

    public CompletableFuture<List<Customer>> findCustomersByPointsAsync(Integer minPoints) {
        return CompletableFuture.supplyAsync(() -> {
            Expression filter = Expression.builder()
                    .expression("points >= :minPoints")
                    .putExpressionValue(":minPoints", AttributeValue.builder().n(minPoints.toString()).build())
                    .build();

            DynamoDbTable<Customer> table = enhancedClient.table(
                    "Customers",
                    TableSchema.fromBean(Customer.class));

            return table.scan(r -> r.filterExpression(filter))
                    .items().stream()
                    .collect(Collectors.toList());
        });
    }
}
```

## Testing with LocalStack

### Test Configuration

```java
@TestConfiguration
public class DynamoDbTestConfig {

    // The DynamoDbClient bean pointing at LocalStack is provided by
    // LocalStackDynamoDbConfig (shown below)
    @Bean
    public DynamoDbEnhancedClient dynamoDbEnhancedClient(DynamoDbClient dynamoDbClient) {
        return DynamoDbEnhancedClient.builder()
                .dynamoDbClient(dynamoDbClient)
                .build();
    }
}

@SpringBootTest
@Import({LocalStackDynamoDbConfig.class, DynamoDbTestConfig.class})
public class CustomerRepositoryIntegrationTest {

    @Autowired
    private DynamoDbEnhancedClient enhancedClient;

    @BeforeEach
    void setUp() {
        // Clean up test data
        clearTestData();
    }

    @Test
    void testCustomerOperations() {
        // Test implementation
    }

    private void clearTestData() {
        // Delete items written by previous tests
    }
}
```

### LocalStack Container Setup

```java
@Testcontainers
public class LocalStackDynamoDbConfig {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(LocalStackContainer.Service.DYNAMODB);

    // @DynamicPropertySource methods must be static and are not beans
    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("aws.region", () -> Region.US_EAST_1.toString());
        registry.add("aws.accessKeyId", () -> localstack.getAccessKey());
        registry.add("aws.secretAccessKey", () -> localstack.getSecretKey());
        registry.add("aws.endpoint",
                () -> localstack.getEndpointOverride(LocalStackContainer.Service.DYNAMODB).toString());
    }

    @Bean
    public DynamoDbClient dynamoDbClient(
            @Value("${aws.region}") String region,
            @Value("${aws.accessKeyId}") String accessKeyId,
            @Value("${aws.secretAccessKey}") String secretAccessKey,
            @Value("${aws.endpoint}") String endpoint) {

        return DynamoDbClient.builder()
                .region(Region.of(region))
                .endpointOverride(URI.create(endpoint))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(accessKeyId, secretAccessKey)))
                .build();
    }
}
```

## Health Check Integration

### Custom Health Indicator

```java
@Component
public class DynamoDbHealthIndicator implements HealthIndicator {

    private final DynamoDbClient dynamoDbClient;

    public DynamoDbHealthIndicator(DynamoDbClient dynamoDbClient) {
        this.dynamoDbClient = dynamoDbClient;
    }

    @Override
    public Health health() {
        try {
            dynamoDbClient.listTables();
            return Health.up()
                    .withDetail("region", dynamoDbClient.serviceClientConfiguration().region())
                    .build();
        } catch (Exception e) {
            return Health.down()
                    .withException(e)
                    .build();
        }
    }
}
```

## Metrics Collection

### Micrometer Integration

```java
@Component
public class DynamoDbMetricsCollector {

    private final MeterRegistry meterRegistry;

    public DynamoDbMetricsCollector(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @EventListener
    public void handleDynamoDbOperation(DynamoDbOperationEvent event) {
        // Record the duration carried by the event; starting and immediately
        // stopping a Timer.Sample here would measure nothing
        Timer.builder("dynamodb.operation")
                .tag("operation", event.getOperation())
                .tag("table", event.getTable())
                .register(meterRegistry)
                .record(event.getDuration(), TimeUnit.MILLISECONDS);
    }
}

public class DynamoDbOperationEvent {
    private String operation;
    private String table;
    private long duration;

    // Getters and setters
}
```

# Testing Strategies for DynamoDB

This document provides comprehensive testing strategies for DynamoDB applications using the AWS SDK for Java 2.x.

## Unit Testing with Mocks

### Mocking DynamoDbClient

```java
@ExtendWith(MockitoExtension.class)
class CustomerServiceTest {

    @Mock
    private DynamoDbClient dynamoDbClient;

    @Mock
    private DynamoDbEnhancedClient enhancedClient;

    @Mock
    private DynamoDbTable<Customer> customerTable;

    @InjectMocks
    private CustomerService customerService;

    @Test
    void saveCustomer_ShouldReturnSavedCustomer() {
        // Arrange
        Customer customer = new Customer("123", "John Doe", "john@example.com");

        when(enhancedClient.table(anyString(), any(TableSchema.class)))
                .thenReturn(customerTable);
        // putItem(T) returns void, so no thenReturn stubbing is needed;
        // Mockito's default answer for void methods is a no-op

        // Act
        Customer result = customerService.saveCustomer(customer);

        // Assert
        assertNotNull(result);
        assertEquals("123", result.getCustomerId());
        verify(customerTable).putItem(customer);
    }

    @Test
    void getCustomer_NotFound_ShouldReturnEmpty() {
        // Arrange
        when(enhancedClient.table(anyString(), any(TableSchema.class)))
                .thenReturn(customerTable);

        when(customerTable.getItem(any(Key.class)))
                .thenReturn(null);

        // Act
        Optional<Customer> result = customerService.getCustomer("123");

        // Assert
        assertFalse(result.isPresent());
        verify(customerTable).getItem(any(Key.class));
    }
}
```

### Testing Query Operations

```java
@Test
void queryCustomersByStatus_ShouldReturnMatchingCustomers() {
    // Arrange
    List<Customer> mockCustomers = List.of(
            new Customer("1", "Alice", "alice@example.com"),
            new Customer("2", "Bob", "bob@example.com")
    );

    DynamoDbTable<Customer> mockTable = mock(DynamoDbTable.class);
    DynamoDbIndex<Customer> mockIndex = mock(DynamoDbIndex.class);

    // DynamoDbIndex.query(...) returns an SdkIterable<Page<T>>;
    // wrap the expected customers in a single result page
    Page<Customer> page = Page.create(mockCustomers);
    SdkIterable<Page<Customer>> pages = () -> List.of(page).iterator();

    when(enhancedClient.table(eq("Customers"), any(TableSchema.class)))
            .thenReturn(mockTable);
    when(mockTable.index("status-index"))
            .thenReturn(mockIndex);
    when(mockIndex.query(any(QueryEnhancedRequest.class)))
            .thenReturn(pages);

    // Act
    List<Customer> result = customerService.findByStatus("ACTIVE");

    // Assert
    assertEquals(2, result.size());
    verify(mockIndex).query(any(QueryEnhancedRequest.class));
}
```

## Integration Testing with Testcontainers

### LocalStack Setup

```java
@Testcontainers
@SpringBootTest
@AutoConfigureMockMvc
class DynamoDbIntegrationTest {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(LocalStackContainer.Service.DYNAMODB);

    @DynamicPropertySource
    static void configureProperties(DynamicPropertyRegistry registry) {
        registry.add("aws.region", () -> Region.US_EAST_1.toString());
        registry.add("aws.accessKeyId", () -> localstack.getAccessKey());
        registry.add("aws.secretAccessKey", () -> localstack.getSecretKey());
        registry.add("aws.endpoint",
                () -> localstack.getEndpointOverride(LocalStackContainer.Service.DYNAMODB).toString());
    }

    @Autowired
    private DynamoDbEnhancedClient enhancedClient;

    @BeforeEach
    void setup() {
        createTestTable();
    }

    @Test
    void testCustomerCRUDOperations() {
        // Test create
        Customer customer = new Customer("test-123", "Test User", "test@example.com");
        enhancedClient.table("Customers", TableSchema.fromBean(Customer.class))
                .putItem(customer);

        // Test read
        Customer retrieved = enhancedClient.table("Customers", TableSchema.fromBean(Customer.class))
                .getItem(Key.builder().partitionValue("test-123").build());

        assertNotNull(retrieved);
        assertEquals("Test User", retrieved.getName());

        // Test update
        customer.setPoints(1000);
        enhancedClient.table("Customers", TableSchema.fromBean(Customer.class))
                .putItem(customer);

        // Test delete
        enhancedClient.table("Customers", TableSchema.fromBean(Customer.class))
                .deleteItem(Key.builder().partitionValue("test-123").build());
    }

    private void createTestTable() {
        DynamoDbClient client = DynamoDbClient.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(localstack.getEndpointOverride(LocalStackContainer.Service.DYNAMODB))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(localstack.getAccessKey(), localstack.getSecretKey())))
                .build();

        CreateTableRequest request = CreateTableRequest.builder()
                .tableName("Customers")
                .keySchema(KeySchemaElement.builder()
                        .attributeName("customerId")
                        .keyType(KeyType.HASH)
                        .build())
                .attributeDefinitions(AttributeDefinition.builder()
                        .attributeName("customerId")
                        .attributeType(ScalarAttributeType.S)
                        .build())
                .provisionedThroughput(ProvisionedThroughput.builder()
                        .readCapacityUnits(5L)
                        .writeCapacityUnits(5L)
                        .build())
                .build();

        client.createTable(request);
        waitForTableActive(client, "Customers");
    }

    private void waitForTableActive(DynamoDbClient client, String tableName) {
        // DynamoDbWaiter polls DescribeTable until the table exists or times out
        try (DynamoDbWaiter waiter = client.waiter()) {
            waiter.waitUntilTableExists(r -> r.tableName(tableName));
        }
    }
}
```

### Testcontainers with PostgreSQL

```java
@SpringBootTest
@Testcontainers
@AutoConfigureDataJpa
class CustomerRepositoryTest {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine")
            .withDatabaseName("testdb")
            .withUsername("test")
            .withPassword("test");

    @DynamicPropertySource
    static void postgresProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Autowired
    private CustomerRepository customerRepository;

    @Test
    void testRepositoryWithRealDatabase() {
        // Test with a real database
        Customer customer = new Customer("123", "Test User", "test@example.com");
        customerRepository.save(customer);

        Customer retrieved = customerRepository.findById("123").orElse(null);
        assertNotNull(retrieved);
        assertEquals("Test User", retrieved.getName());
    }
}
```

## Performance Testing

### Load Testing with Gatling

```java
public class CustomerSimulation extends Simulation {

    HttpProtocolBuilder httpProtocolBuilder = http
            .baseUrl("http://localhost:8080")
            .acceptHeader("application/json");

    ScenarioBuilder scn = scenario("Customer Operations")
            .exec(http("create_customer")
                    .post("/api/customers")
                    .body(StringBody(
                            """
                            {
                                "customerId": "test-123",
                                "name": "Test User",
                                "email": "test@example.com"
                            }"""))
                    .asJson()
                    .check(status().is(201)))
            .exec(http("get_customer")
                    .get("/api/customers/test-123")
                    .check(status().is(200)));

    {
        setUp(
                scn.injectOpen(
                        rampUsersPerSec(10).to(100).during(60),
                        constantUsersPerSec(100).during(120)
                )
        ).protocols(httpProtocolBuilder);
    }
}
```

### Microbenchmark Testing

```java
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 10, time = 1, timeUnit = TimeUnit.SECONDS)
@Fork(1)
@State(Scope.Benchmark)
public class DynamoDbPerformanceBenchmark {

    private DynamoDbEnhancedClient enhancedClient;
    private DynamoDbTable<Customer> customerTable;
    private Customer testCustomer;

    @Setup
    public void setup() {
        enhancedClient = DynamoDbEnhancedClient.builder()
                .dynamoDbClient(DynamoDbClient.builder().build())
                .build();

        customerTable = enhancedClient.table("Customers", TableSchema.fromBean(Customer.class));
        testCustomer = new Customer("benchmark-123", "Benchmark User", "benchmark@example.com");
    }

    @Benchmark
    public void testPutItem() {
        customerTable.putItem(testCustomer);
    }

    @Benchmark
    public void testGetItem() {
        customerTable.getItem(Key.builder().partitionValue("benchmark-123").build());
    }

    @Benchmark
    public void testFullScan() {
        customerTable.scan().items().stream().collect(Collectors.toList());
    }
}
```

## Property-Based Testing

### Using jqwik

```java
@Property
@Report(Reporting.GENERATED)
void customerSerializationShouldBeConsistent(
        @ForAll("customers") Customer customer
) {
    // When
    String serialized = serializeCustomer(customer);
    Customer deserialized = deserializeCustomer(serialized);

    // Then
    assertEquals(customer.getCustomerId(), deserialized.getCustomerId());
    assertEquals(customer.getName(), deserialized.getName());
    assertEquals(customer.getEmail(), deserialized.getEmail());
}

@Provide
Arbitrary<Customer> customers() {
    // Compose arbitraries instead of calling sample() inside map(),
    // so jqwik controls generation and shrinking
    Arbitrary<String> ids = Arbitraries.of("customer-", "user-", "client-")
            .flatMap(prefix -> Arbitraries.integers().between(1000, 9999)
                    .map(n -> prefix + n));
    Arbitrary<String> names = Arbitraries.strings().alpha().ofLength(10);
    Arbitrary<String> emails = names.map(name -> name.toLowerCase() + "@example.com");
    return Combinators.combine(ids, names, emails).as(Customer::new);
}
```

## Test Data Management

### Test Data Factory

```java
@Component
public class TestDataFactory {

    private final DynamoDbEnhancedClient enhancedClient;

    @Autowired
    public TestDataFactory(DynamoDbEnhancedClient enhancedClient) {
        this.enhancedClient = enhancedClient;
    }

    public Customer createTestCustomer(String id) {
        Customer customer = new Customer(
                id != null ? id : UUID.randomUUID().toString(),
                "Test User",
                "test@example.com"
        );
        customer.setPoints(1000);
        customer.setCreatedAt(LocalDateTime.now());

        enhancedClient.table("Customers", TableSchema.fromBean(Customer.class))
                .putItem(customer);

        return customer;
    }

    public void cleanupTestData() {
        // Implementation to clean up test data
    }
}
```

### Test Database Configuration

```java
@TestConfiguration
public class TestDataConfig {

    @Bean
    public TestDataCleaner testDataCleaner(DynamoDbClient dynamoDbClient) {
        return new TestDataCleaner(dynamoDbClient);
    }
}

// Registered via the @Bean method above, so no @Component annotation
// (which would create a duplicate bean)
public class TestDataCleaner {

    private final DynamoDbClient dynamoDbClient;

    public TestDataCleaner(DynamoDbClient dynamoDbClient) {
        this.dynamoDbClient = dynamoDbClient;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void cleanup() {
        // Clean up test data before each test run
    }
}
```

416
skills/aws-java/aws-sdk-java-v2-kms/SKILL.md
Normal file

---
name: aws-sdk-java-v2-kms
description: AWS Key Management Service (KMS) patterns using AWS SDK for Java 2.x. Use when creating/managing encryption keys, encrypting/decrypting data, generating data keys, digital signing, key rotation, or integrating encryption into Spring Boot applications.
category: aws
tags: [aws, kms, java, sdk, encryption, security]
version: 1.1.0
allowed-tools: Read, Write, Bash, WebFetch
---

# AWS SDK for Java 2.x - AWS KMS (Key Management Service)

## Overview

This skill provides comprehensive patterns for AWS Key Management Service (KMS) using the AWS SDK for Java 2.x. It focuses on implementing secure encryption solutions with proper key management, envelope encryption, and Spring Boot integration patterns.

## When to Use

Use this skill when:
- Creating and managing symmetric encryption keys for data protection
- Implementing client-side encryption and envelope encryption patterns
- Generating data keys for local data encryption with KMS-managed keys
- Setting up digital signatures and verification with asymmetric keys
- Integrating encryption capabilities into Spring Boot applications
- Implementing secure key lifecycle management
- Setting up key rotation policies and access controls

## Dependencies

### Maven

```xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>kms</artifactId>
</dependency>
```

### Gradle

```groovy
implementation 'software.amazon.awssdk:kms:2.x.x'
```

## Client Setup

### Basic Synchronous Client

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kms.KmsClient;

KmsClient kmsClient = KmsClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

### Basic Asynchronous Client

```java
import software.amazon.awssdk.services.kms.KmsAsyncClient;

KmsAsyncClient kmsAsyncClient = KmsAsyncClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

### Advanced Client Configuration

```java
KmsClient kmsClient = KmsClient.builder()
        .region(Region.of(System.getenv("AWS_REGION")))
        .credentialsProvider(DefaultCredentialsProvider.create())
        .overrideConfiguration(c -> c.retryPolicy(RetryPolicy.builder()
                .numRetries(3)
                .build()))
        .build();
```

## Basic Key Management

### Create Encryption Key

```java
public String createEncryptionKey(KmsClient kmsClient, String description) {
    CreateKeyRequest request = CreateKeyRequest.builder()
            .description(description)
            .keyUsage(KeyUsageType.ENCRYPT_DECRYPT)
            .build();

    CreateKeyResponse response = kmsClient.createKey(request);
    return response.keyMetadata().keyId();
}
```

### Describe Key

```java
public KeyMetadata getKeyMetadata(KmsClient kmsClient, String keyId) {
    DescribeKeyRequest request = DescribeKeyRequest.builder()
            .keyId(keyId)
            .build();

    return kmsClient.describeKey(request).keyMetadata();
}
```

### Enable/Disable Key

```java
public void toggleKeyState(KmsClient kmsClient, String keyId, boolean enable) {
    if (enable) {
        kmsClient.enableKey(EnableKeyRequest.builder().keyId(keyId).build());
    } else {
        kmsClient.disableKey(DisableKeyRequest.builder().keyId(keyId).build());
    }
}
```

## Basic Encryption and Decryption

### Encrypt Data

```java
public String encryptData(KmsClient kmsClient, String keyId, String plaintext) {
    SdkBytes plaintextBytes = SdkBytes.fromString(plaintext, StandardCharsets.UTF_8);

    EncryptRequest request = EncryptRequest.builder()
            .keyId(keyId)
            .plaintext(plaintextBytes)
            .build();

    EncryptResponse response = kmsClient.encrypt(request);
    return Base64.getEncoder().encodeToString(
            response.ciphertextBlob().asByteArray());
}
```

### Decrypt Data

```java
public String decryptData(KmsClient kmsClient, String ciphertextBase64) {
    byte[] ciphertext = Base64.getDecoder().decode(ciphertextBase64);
    SdkBytes ciphertextBytes = SdkBytes.fromByteArray(ciphertext);

    DecryptRequest request = DecryptRequest.builder()
            .ciphertextBlob(ciphertextBytes)
            .build();

    DecryptResponse response = kmsClient.decrypt(request);
    return response.plaintext().asString(StandardCharsets.UTF_8);
}
```

## Envelope Encryption Pattern

### Generate and Use Data Key

```java
public record DataKeyResult(byte[] encryptedData, byte[] encryptedKey) {}

public DataKeyResult encryptWithEnvelope(KmsClient kmsClient, String masterKeyId, byte[] data) {
    // Generate data key
    GenerateDataKeyRequest keyRequest = GenerateDataKeyRequest.builder()
            .keyId(masterKeyId)
            .keySpec(DataKeySpec.AES_256)
            .build();

    GenerateDataKeyResponse keyResponse = kmsClient.generateDataKey(keyRequest);

    // Encrypt data with the plaintext data key; keep a reference to the array
    // so the same bytes can be zeroed afterwards (asByteArray() returns a
    // copy, so zeroing a fresh copy would achieve nothing)
    byte[] plaintextKey = keyResponse.plaintext().asByteArray();
    byte[] encryptedData = encryptWithAES(data, plaintextKey);

    // Clear the plaintext key from memory
    Arrays.fill(plaintextKey, (byte) 0);

    return new DataKeyResult(
            encryptedData,
            keyResponse.ciphertextBlob().asByteArray());
}

public byte[] decryptWithEnvelope(KmsClient kmsClient,
                                  DataKeyResult encryptedEnvelope) {
    // Decrypt data key
    DecryptRequest keyDecryptRequest = DecryptRequest.builder()
            .ciphertextBlob(SdkBytes.fromByteArray(
                    encryptedEnvelope.encryptedKey()))
            .build();

    DecryptResponse keyDecryptResponse = kmsClient.decrypt(keyDecryptRequest);

    // Decrypt data with the recovered data key
    byte[] plaintextKey = keyDecryptResponse.plaintext().asByteArray();
    byte[] decryptedData = decryptWithAES(
            encryptedEnvelope.encryptedData(),
            plaintextKey);

    // Clear the plaintext key from memory
    Arrays.fill(plaintextKey, (byte) 0);

    return decryptedData;
}
```

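The `encryptWithAES`/`decryptWithAES` helpers called above are application code, not part of the KMS SDK. A minimal sketch using AES-GCM from the JDK's `javax.crypto` (the class name `AesGcmHelper` is an assumption; prepending the IV to the ciphertext is one common convention, not the only one):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class AesGcmHelper {

    private static final int IV_LENGTH = 12;        // 96-bit nonce, recommended for GCM
    private static final int TAG_LENGTH_BITS = 128; // authentication tag size

    // Encrypts with AES-GCM and prepends the random IV to the ciphertext
    public static byte[] encryptWithAES(byte[] data, byte[] key) throws Exception {
        byte[] iv = new byte[IV_LENGTH];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_LENGTH_BITS, iv));
        byte[] ciphertext = cipher.doFinal(data);

        byte[] result = new byte[IV_LENGTH + ciphertext.length];
        System.arraycopy(iv, 0, result, 0, IV_LENGTH);
        System.arraycopy(ciphertext, 0, result, IV_LENGTH, ciphertext.length);
        return result;
    }

    // Splits off the IV and decrypts; doFinal throws if the GCM tag does not match
    public static byte[] decryptWithAES(byte[] ivAndCiphertext, byte[] key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_LENGTH_BITS, ivAndCiphertext, 0, IV_LENGTH));
        return cipher.doFinal(ivAndCiphertext, IV_LENGTH,
                ivAndCiphertext.length - IV_LENGTH);
    }
}
```

GCM both encrypts and authenticates, so a tampered ciphertext fails decryption instead of producing garbage.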
## Digital Signatures
|
||||
|
||||
### Create Signing Key and Sign Data
|
||||
|
||||
```java
|
||||
public String createAndSignData(KmsClient kmsClient, String description, String message) {
|
||||
// Create signing key
|
||||
CreateKeyRequest keyRequest = CreateKeyRequest.builder()
|
||||
.description(description)
|
||||
.keySpec(KeySpec.RSA_2048)
|
||||
.keyUsage(KeyUsageType.SIGN_VERIFY)
|
||||
.build();
|
||||
|
||||
CreateKeyResponse keyResponse = kmsClient.createKey(keyRequest);
|
||||
String keyId = keyResponse.keyMetadata().keyId();
|
||||
|
||||
// Sign data
|
||||
SignRequest signRequest = SignRequest.builder()
|
||||
.keyId(keyId)
|
||||
.message(SdkBytes.fromString(message, StandardCharsets.UTF_8))
|
||||
.signingAlgorithm(SigningAlgorithmSpec.RSASSA_PSS_SHA_256)
|
||||
.build();
|
||||
|
||||
SignResponse signResponse = kmsClient.sign(signRequest);
|
||||
return Base64.getEncoder().encodeToString(
|
||||
signResponse.signature().asByteArray());
|
||||
}
|
||||
```
|
||||
|
||||
### Verify Signature
|
||||
|
||||
```java
|
||||
public boolean verifySignature(KmsClient kmsClient,
|
||||
String keyId,
|
||||
String message,
|
||||
String signatureBase64) {
|
||||
byte[] signature = Base64.getDecoder().decode(signatureBase64);
|
||||
|
||||
VerifyRequest verifyRequest = VerifyRequest.builder()
|
||||
.keyId(keyId)
|
||||
.message(SdkBytes.fromString(message, StandardCharsets.UTF_8))
|
||||
.signature(SdkBytes.fromByteArray(signature))
|
||||
.signingAlgorithm(SigningAlgorithmSpec.RSASSA_PSS_SHA_256)
|
||||
.build();
|
||||
|
||||
VerifyResponse verifyResponse = kmsClient.verify(verifyRequest);
|
||||
return verifyResponse.signatureValid();
|
||||
}
|
||||
```

## Spring Boot Integration

### Configuration Class

```java
@Configuration
public class KmsConfiguration {

    @Bean
    public KmsClient kmsClient() {
        return KmsClient.builder()
            .region(Region.US_EAST_1)
            .build();
    }

    @Bean
    public KmsAsyncClient kmsAsyncClient() {
        return KmsAsyncClient.builder()
            .region(Region.US_EAST_1)
            .build();
    }
}
```

### Encryption Service

```java
@Service
@RequiredArgsConstructor
public class KmsEncryptionService {

    private final KmsClient kmsClient;

    @Value("${kms.encryption-key-id}")
    private String keyId;

    public String encrypt(String plaintext) {
        try {
            EncryptRequest request = EncryptRequest.builder()
                .keyId(keyId)
                .plaintext(SdkBytes.fromString(plaintext, StandardCharsets.UTF_8))
                .build();

            EncryptResponse response = kmsClient.encrypt(request);
            return Base64.getEncoder().encodeToString(
                response.ciphertextBlob().asByteArray());

        } catch (KmsException e) {
            throw new RuntimeException("Encryption failed", e);
        }
    }

    public String decrypt(String ciphertextBase64) {
        try {
            byte[] ciphertext = Base64.getDecoder().decode(ciphertextBase64);

            DecryptRequest request = DecryptRequest.builder()
                .ciphertextBlob(SdkBytes.fromByteArray(ciphertext))
                .build();

            DecryptResponse response = kmsClient.decrypt(request);
            return response.plaintext().asString(StandardCharsets.UTF_8);

        } catch (KmsException e) {
            throw new RuntimeException("Decryption failed", e);
        }
    }
}
```

## Examples

### Basic Encryption Example

```java
public class BasicEncryptionExample {
    public static void main(String[] args) {
        KmsClient kmsClient = KmsClient.builder()
            .region(Region.US_EAST_1)
            .build();

        // Create key
        String keyId = createEncryptionKey(kmsClient, "Example encryption key");
        System.out.println("Created key: " + keyId);

        // Encrypt and decrypt
        String plaintext = "Hello, World!";
        String encrypted = encryptData(kmsClient, keyId, plaintext);
        String decrypted = decryptData(kmsClient, encrypted);

        System.out.println("Original: " + plaintext);
        System.out.println("Decrypted: " + decrypted);
    }
}
```

### Envelope Encryption Example

```java
public class EnvelopeEncryptionExample {
    public static void main(String[] args) {
        KmsClient kmsClient = KmsClient.builder()
            .region(Region.US_EAST_1)
            .build();

        String masterKeyId = "alias/your-master-key";
        String largeData = "This is a large amount of data that needs encryption...";
        byte[] data = largeData.getBytes(StandardCharsets.UTF_8);

        // Encrypt using the envelope pattern
        DataKeyResult encryptedEnvelope = encryptWithEnvelope(
            kmsClient, masterKeyId, data);

        // Decrypt
        byte[] decryptedData = decryptWithEnvelope(
            kmsClient, encryptedEnvelope);

        String result = new String(decryptedData, StandardCharsets.UTF_8);
        System.out.println("Decrypted: " + result);
    }
}
```

## Best Practices

### Security
- **Always use envelope encryption for large data** - Encrypt data locally and only encrypt the data key with KMS
- **Use encryption context** - Add contextual information to track and audit usage
- **Never log sensitive data** - Avoid logging plaintext or encryption keys
- **Implement proper key lifecycle** - Enable automatic rotation and set deletion policies
- **Use separate keys for different purposes** - Don't reuse keys across multiple applications
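
Envelope encryption relies on a fast local symmetric cipher; only the data key itself goes to KMS. A minimal JDK-only sketch of the local half using AES-256-GCM with a random IV prepended to the ciphertext (the class name `LocalAesGcm` is illustrative, not part of any AWS API):

```java
import java.nio.ByteBuffer;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical helper for the local side of envelope encryption.
public class LocalAesGcm {

    private static final SecureRandom RANDOM = new SecureRandom();
    private static final int IV_LEN = 12;    // 96-bit IV, recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    public static byte[] encrypt(byte[] plaintext, byte[] key) {
        try {
            byte[] iv = new byte[IV_LEN];
            RANDOM.nextBytes(iv); // fresh IV for every message
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = cipher.doFinal(plaintext);
            // Prepend the IV so decrypt can recover it
            return ByteBuffer.allocate(IV_LEN + ct.length).put(iv).put(ct).array();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("Local encryption failed", e);
        }
    }

    public static byte[] decrypt(byte[] ivAndCiphertext, byte[] key) {
        try {
            byte[] iv = Arrays.copyOfRange(ivAndCiphertext, 0, IV_LEN);
            byte[] ct = Arrays.copyOfRange(ivAndCiphertext, IV_LEN, ivAndCiphertext.length);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, iv));
            return cipher.doFinal(ct);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("Local decryption failed", e);
        }
    }
}
```

The plaintext data key (e.g. from `GenerateDataKey`) is the `key` argument here; the KMS-encrypted copy of that key is stored alongside the output.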

### Performance
- **Cache encrypted data keys** - Reduce KMS API calls by caching data keys
- **Use async operations** - Leverage async clients for non-blocking I/O
- **Reuse client instances** - Don't create new clients for each operation
- **Implement connection pooling** - Configure proper connection pooling settings

### Error Handling
- **Implement retry logic** - Handle throttling exceptions with exponential backoff
- **Check key states** - Verify the key is enabled before performing operations
- **Use circuit breakers** - Prevent cascading failures during KMS outages
- **Log errors comprehensively** - Include KMS error codes and context

## References

For detailed implementation patterns, advanced techniques, and comprehensive examples:

- @references/technical-guide.md - Complete technical implementation patterns
- @references/spring-boot-integration.md - Spring Boot integration patterns
- @references/testing.md - Testing strategies and examples
- @references/best-practices.md - Security and operational best practices

## Related Skills

- @aws-sdk-java-v2-core - Core AWS SDK patterns and configuration
- @aws-sdk-java-v2-dynamodb - DynamoDB integration patterns
- @aws-sdk-java-v2-secrets-manager - Secrets management patterns
- @spring-boot-dependency-injection - Spring dependency injection patterns

## External References

- [AWS KMS Developer Guide](https://docs.aws.amazon.com/kms/latest/developerguide/)
- [AWS SDK for Java 2.x Documentation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/home.html)
- [KMS Best Practices](https://docs.aws.amazon.com/kms/latest/developerguide/best-practices.html)
550
skills/aws-java/aws-sdk-java-v2-kms/references/best-practices.md
Normal file
@@ -0,0 +1,550 @@
# AWS KMS Best Practices

## Security Best Practices

### Key Management

1. **Use Separate Keys for Different Purposes**
   - Create unique keys for different applications or data types
   - Avoid reusing keys across multiple purposes
   - Use aliases instead of raw key IDs for references

   ```java
   // Good: Create specific keys
   String encryptionKey = kms.createKey("Database encryption key");
   String signingKey = kms.createSigningKey("Document signing key");

   // Bad: Use the same key for everything
   ```

2. **Enable Automatic Key Rotation**
   - Enable automatic key rotation for enhanced security
   - Review rotation schedules based on compliance requirements

   ```java
   public void enableKeyRotation(KmsClient kmsClient, String keyId) {
       EnableKeyRotationRequest request = EnableKeyRotationRequest.builder()
           .keyId(keyId)
           .build();
       kmsClient.enableKeyRotation(request);
   }
   ```

3. **Implement Key Lifecycle Policies**
   - Set key expiration dates based on data retention policies
   - Schedule key deletion when no longer needed
   - Use key policies to enforce lifecycle rules

4. **Use Key Aliases**
   - Always use aliases instead of raw key IDs
   - Create meaningful aliases following naming conventions
   - Regularly review and update aliases

   ```java
   public void createKeyWithAlias(KmsClient kmsClient, String alias, String description) {
       // Create key
       CreateKeyResponse response = kmsClient.createKey(
           CreateKeyRequest.builder()
               .description(description)
               .build());

       // Create alias
       CreateAliasRequest aliasRequest = CreateAliasRequest.builder()
           .aliasName(alias)
           .targetKeyId(response.keyMetadata().keyId())
           .build();
       kmsClient.createAlias(aliasRequest);
   }
   ```
### Encryption Security

1. **Never Log Plaintext or Encryption Keys**
   - Avoid logging sensitive data in any form
   - Ensure proper logging configuration to prevent accidental exposure

   ```java
   // Bad: Logging sensitive data
   logger.info("Encrypted data: {}", encryptedData);

   // Good: Log only metadata
   logger.info("Encryption completed for user: {}", userId);
   ```

2. **Use Encryption Context**
   - Always include encryption context for additional security
   - Use contextual information to verify data integrity

   ```java
   // Note: KMS requires the exact same context on decrypt, so persist the
   // context (including the timestamp) alongside the ciphertext.
   public Map<String, String> createEncryptionContext(String userId, String dataType) {
       return Map.of(
           "userId", userId,
           "dataType", dataType,
           "timestamp", Instant.now().toString()
       );
   }
   ```

3. **Implement Least Privilege IAM Policies**
   - Grant minimal required permissions to KMS keys
   - Use IAM policies to restrict access to specific resources

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-role"},
         "Action": [
           "kms:Encrypt",
           "kms:Decrypt",
           "kms:DescribeKey"
         ],
         "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id",
         "Condition": {
           "StringEquals": {
             "kms:EncryptionContext:userId": "${aws:userid}"
           }
         }
       }
     ]
   }
   ```

4. **Clear Sensitive Data from Memory**
   - Explicitly clear sensitive data from memory after use
   - Use secure memory management practices

   ```java
   public void secureMemoryExample() {
       byte[] sensitiveKey = new byte[32];
       // ... use the key ...

       // Clear sensitive data
       Arrays.fill(sensitiveKey, (byte) 0);
   }
   ```

## Performance Best Practices

1. **Cache Data Keys for Envelope Encryption**
   - Cache encrypted data keys to avoid repeated KMS calls
   - Use appropriate cache eviction policies
   - Monitor cache hit rates

   ```java
   public class DataKeyCache {
       private final Cache<String, byte[]> keyCache;

       public DataKeyCache() {
           this.keyCache = Caffeine.newBuilder()
               .expireAfterWrite(1, TimeUnit.HOURS)
               .maximumSize(1000)
               .build();
       }

       public byte[] getCachedDataKey(String keyId, KmsClient kmsClient) {
           return keyCache.get(keyId, k -> {
               GenerateDataKeyResponse response = kmsClient.generateDataKey(
                   GenerateDataKeyRequest.builder()
                       .keyId(keyId)
                       .keySpec(DataKeySpec.AES_256)
                       .build());
               return response.ciphertextBlob().asByteArray();
           });
       }
   }
   ```

2. **Use Async Operations for Non-Blocking I/O**
   - Leverage async clients for parallel processing
   - Use CompletableFuture for chaining operations

   ```java
   public CompletableFuture<Void> processMultipleAsync(List<String> dataItems) {
       List<CompletableFuture<Void>> futures = dataItems.stream()
           .map(item -> CompletableFuture.runAsync(() ->
               encryptAndStoreItem(item)))
           .collect(Collectors.toList());

       return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
   }
   ```

3. **Implement Connection Pooling**
   - Configure connection pooling for better resource utilization
   - Set appropriate pool sizes based on load

   ```java
   public KmsClient createPooledClient() {
       return KmsClient.builder()
           .region(Region.US_EAST_1)
           .httpClientBuilder(ApacheHttpClient.builder()
               .maxConnections(100)
               .connectionTimeToLive(Duration.ofSeconds(30)))
           .build();
   }
   ```

4. **Reuse KMS Client Instances**
   - Create and reuse client instances rather than creating new ones
   - Use dependency injection for client management

   ```java
   @Service
   @RequiredArgsConstructor
   public class KmsService {
       private final KmsClient kmsClient; // Injected once and reused

       public void performOperation() {
           // Use the same client instance for every call
           kmsClient.listKeys();
       }
   }
   ```

## Cost Optimization

1. **Use Envelope Encryption for Large Data**
   - Generate data keys for encrypting large datasets
   - Only use KMS for encrypting the data key, not the entire dataset

   ```java
   public class EnvelopeEncryption {
       private final KmsClient kmsClient;

       public byte[] encryptLargeData(byte[] largeData) {
           // Generate data key
           GenerateDataKeyResponse response = kmsClient.generateDataKey(
               GenerateDataKeyRequest.builder()
                   .keyId("master-key-id")
                   .keySpec(DataKeySpec.AES_256)
                   .build());

           byte[] encryptedKey = response.ciphertextBlob().asByteArray();
           byte[] plaintextKey = response.plaintext().asByteArray();

           // Encrypt data with the local key
           byte[] encryptedData = localEncrypt(largeData, plaintextKey);

           // Return both encrypted data and encrypted key
           return combine(encryptedKey, encryptedData);
       }
   }
   ```
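
The `combine` helper above is left undefined; a minimal sketch using a hypothetical length-prefixed layout (4-byte key length, then encrypted key, then encrypted data), with the matching `split` needed on the decrypt path:

```java
import java.nio.ByteBuffer;

// Hypothetical envelope wire format: [4-byte key length][encrypted key][encrypted data]
public class EnvelopeFormat {

    public static byte[] combine(byte[] encryptedKey, byte[] encryptedData) {
        return ByteBuffer.allocate(4 + encryptedKey.length + encryptedData.length)
            .putInt(encryptedKey.length)
            .put(encryptedKey)
            .put(encryptedData)
            .array();
    }

    // Returns {encryptedKey, encryptedData}
    public static byte[][] split(byte[] envelope) {
        ByteBuffer buf = ByteBuffer.wrap(envelope);
        int keyLen = buf.getInt();
        byte[] encryptedKey = new byte[keyLen];
        buf.get(encryptedKey);
        byte[] encryptedData = new byte[buf.remaining()];
        buf.get(encryptedData);
        return new byte[][] { encryptedKey, encryptedData };
    }
}
```

Any unambiguous framing works; the only requirement is that decryption can recover the encrypted data key to pass to `kms:Decrypt`.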

2. **Cache Encrypted Data Keys**
   - Cache encrypted data keys to avoid repeated KMS calls
   - Use time-based cache expiration
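
Caffeine (shown earlier) handles this out of the box; a dependency-free sketch of time-based expiry using only the JDK, to illustrate the idea (the `ExpiringKeyCache` class is hypothetical):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal time-based cache: entries are reloaded once their TTL elapses.
public class ExpiringKeyCache<V> {

    private record Entry<V>(V value, Instant expiresAt) {}

    private final Map<String, Entry<V>> cache = new ConcurrentHashMap<>();
    private final Duration ttl;

    public ExpiringKeyCache(Duration ttl) {
        this.ttl = ttl;
    }

    public V get(String keyId, Function<String, V> loader) {
        Entry<V> entry = cache.compute(keyId, (k, existing) -> {
            if (existing != null && existing.expiresAt().isAfter(Instant.now())) {
                return existing; // still fresh, reuse
            }
            // Expired or absent: reload (e.g. call kms.generateDataKey here)
            return new Entry<>(loader.apply(k), Instant.now().plus(ttl));
        });
        return entry.value();
    }
}
```

With the loader wrapping a `GenerateDataKey` call, repeated encrypt operations within the TTL hit the cache instead of KMS.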

3. **Monitor API Usage**
   - Track KMS API calls for billing and optimization
   - Set up CloudWatch alarms for unexpected usage

   ```java
   public class KmsUsageMonitor {
       private final MeterRegistry meterRegistry;

       public KmsUsageMonitor(MeterRegistry meterRegistry) {
           this.meterRegistry = meterRegistry;
       }

       public void recordEncryption() {
           meterRegistry.counter("kms.encryption.count").increment();
           meterRegistry.timer("kms.encryption.time").record(() -> {
               // Perform encryption
           });
       }
   }
   ```

4. **Use Data Key Caching Libraries**
   - Implement proper caching strategies
   - Consider using dedicated caching solutions for data keys

## Error Handling Best Practices

1. **Implement Retry Logic for Throttling**
   - Add retry logic for throttling exceptions
   - Use exponential backoff for retries

   ```java
   public class KmsRetryHandler {
       private static final int MAX_RETRIES = 3;
       private static final long INITIAL_DELAY = 1000; // 1 second

       public <T> T executeWithRetry(Supplier<T> operation) {
           int attempt = 0;
           while (attempt < MAX_RETRIES) {
               try {
                   return operation.get();
               } catch (KmsException e) {
                   if (!isRetryable(e) || attempt == MAX_RETRIES - 1) {
                       throw e;
                   }
                   attempt++;
                   try {
                       Thread.sleep(INITIAL_DELAY * (long) Math.pow(2, attempt));
                   } catch (InterruptedException ie) {
                       Thread.currentThread().interrupt();
                       throw new RuntimeException("Retry interrupted", ie);
                   }
               }
           }
           throw new IllegalStateException("Should not reach here");
       }

       private boolean isRetryable(KmsException e) {
           return "ThrottlingException".equals(e.awsErrorDetails().errorCode());
       }
   }
   ```

2. **Handle Key State Errors Gracefully**
   - Check key state before performing operations
   - Handle key states like PendingDeletion, Disabled, etc.

   ```java
   public void performOperationWithKeyStateCheck(KmsClient kmsClient, String keyId) {
       KeyMetadata metadata = describeKey(kmsClient, keyId);

       switch (metadata.keyState()) {
           case ENABLED:
               // Perform operation
               break;
           case DISABLED:
               throw new IllegalStateException("Key is disabled");
           case PENDING_DELETION:
               throw new IllegalStateException("Key is scheduled for deletion");
           default:
               throw new IllegalStateException("Unknown key state: " + metadata.keyState());
       }
   }
   ```

3. **Log KMS-Specific Error Codes**
   - Implement comprehensive error logging
   - Map KMS error codes to meaningful application errors

   ```java
   public class KmsErrorHandler {
       public String mapKmsErrorToAppError(KmsException e) {
           String errorCode = e.awsErrorDetails().errorCode();
           switch (errorCode) {
               case "NotFoundException":
                   return "Key not found";
               case "DisabledException":
                   return "Key is disabled";
               case "AccessDeniedException":
                   return "Access denied";
               case "InvalidKeyUsageException":
                   return "Invalid key usage";
               default:
                   return "KMS error: " + errorCode;
           }
       }
   }
   ```

4. **Implement Circuit Breakers**
   - Use circuit breakers to handle KMS unavailability
   - Prevent cascading failures during outages

   ```java
   // Sketch using Resilience4j's CircuitBreaker
   public class KmsCircuitBreaker {
       private final CircuitBreaker circuitBreaker;

       public KmsCircuitBreaker() {
           CircuitBreakerConfig config = CircuitBreakerConfig.custom()
               .failureRateThreshold(50)
               .waitDurationInOpenState(Duration.ofSeconds(30))
               .slidingWindowSize(10)
               .permittedNumberOfCallsInHalfOpenState(2)
               .recordExceptions(KmsException.class)
               .build();
           this.circuitBreaker = CircuitBreaker.of("kmsService", config);
       }

       public <T> T executeWithCircuitBreaker(Callable<T> operation) throws Exception {
           // Throws CallNotPermittedException while the breaker is open
           return circuitBreaker.executeCallable(operation);
       }
   }
   ```

## Testing Best Practices

1. **Test with a Mock KMS Client**
   - Use mock clients for unit tests
   - Verify all expected interactions

   ```java
   @Test
   void shouldEncryptWithProperEncryptionContext() {
       // Arrange
       when(kmsClient.encrypt(any(EncryptRequest.class))).thenReturn(...);

       // Act
       String result = encryptionService.encrypt("test", "user123");

       // Assert
       verify(kmsClient).encrypt(argThat(request ->
           request.encryptionContext().containsKey("userId") &&
           request.encryptionContext().get("userId").equals("user123")));
   }
   ```

2. **Test Error Scenarios**
   - Test various error conditions
   - Verify proper error handling and recovery

3. **Performance Testing**
   - Test performance under load
   - Measure latency and throughput
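
A simple JDK-only harness for measuring per-operation latency percentiles; in a real load test the `Runnable` would wrap a KMS encrypt or decrypt call (the `LatencyProbe` class is illustrative):

```java
import java.util.Arrays;

// Runs an operation repeatedly and reports latency percentiles.
public class LatencyProbe {

    // Returns the sorted per-call latencies in nanoseconds.
    public static long[] measure(Runnable operation, int n) {
        long[] samples = new long[n];
        for (int i = 0; i < n; i++) {
            long start = System.nanoTime();
            operation.run();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples;
    }

    // Nearest-rank percentile over an already-sorted sample array.
    public static long percentile(long[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(idx, sorted.length - 1))];
    }
}
```

Comparing p50 against p99 over many samples surfaces tail latency caused by throttling or retries.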

4. **Integration Testing with Local KMS**
   - Test with a local KMS emulator when possible
   - Verify integration with real AWS services

## Monitoring and Observability

1. **Implement Comprehensive Logging**
   - Log all KMS operations with appropriate levels
   - Include correlation IDs for tracing

   ```java
   @Aspect
   @Component
   public class KmsLoggingAspect {
       private static final Logger logger = LoggerFactory.getLogger(KmsLoggingAspect.class);

       @Around("execution(* com.yourcompany.kms..*.*(..))")
       public Object logKmsOperation(ProceedingJoinPoint joinPoint) throws Throwable {
           String operation = joinPoint.getSignature().getName();
           logger.info("Starting KMS operation: {}", operation);

           long startTime = System.currentTimeMillis();
           try {
               Object result = joinPoint.proceed();
               long duration = System.currentTimeMillis() - startTime;
               logger.info("Completed KMS operation: {} in {}ms", operation, duration);
               return result;
           } catch (Exception e) {
               long duration = System.currentTimeMillis() - startTime;
               logger.error("KMS operation {} failed in {}ms: {}", operation, duration, e.getMessage());
               throw e;
           }
       }
   }
   ```

2. **Set Up CloudWatch Alarms**
   - Monitor API call rates
   - Set up alarms for error rates
   - Track key usage patterns

3. **Use Distributed Tracing**
   - Implement tracing for KMS operations
   - Correlate KMS calls with application operations

4. **Monitor Key Usage Metrics**
   - Track how each key is used over time
   - Alert on unusual usage patterns

## Compliance and Auditing

1. **Enable KMS Key Usage Logging**
   - Configure CloudTrail to log KMS operations
   - Enable detailed logging for compliance

2. **Regular Security Audits**
   - Conduct regular audits of KMS key usage
   - Review access policies periodically

3. **Comprehensive Backup Strategy**
   - Implement key backup and recovery procedures
   - Test backup restoration processes

4. **Regular Access Reviews**
   - Regularly review IAM policies for KMS access
   - Remove unnecessary permissions
## Advanced Security Considerations

1. **Multi-Region KMS Keys**
   - Consider multi-region keys for disaster recovery
   - Test failover scenarios

2. **Cross-Account Access**
   - Implement proper cross-account access controls
   - Use resource-based policies for account sharing

3. **Custom Key Stores**
   - Consider custom key stores for enhanced security
   - Implement proper key management in custom stores

4. **External Key Material**
   - Use imported key material for enhanced control
   - Implement proper key rotation for imported keys
## Development Best Practices

1. **Use Dependency Injection**
   - Inject KMS clients rather than creating them directly
   - Use proper configuration management

   ```java
   @Configuration
   @ConfigurationProperties(prefix = "aws.kms")
   public class KmsProperties {
       private String region = "us-east-1";
       private String encryptionKeyId;
       private int maxRetries = 3;

       // Getters and setters
   }
   ```

2. **Proper Configuration Management**
   - Use environment-specific configurations
   - Secure sensitive configuration values

3. **Version Control and Documentation**
   - Keep KMS-related code well documented
   - Track key usage patterns in version control

4. **Code Reviews**
   - Conduct thorough code reviews for KMS-related code
   - Focus on security and error handling

## Implementation Checklists

### Key Setup Checklist
- [ ] Create appropriate KMS keys for different purposes
- [ ] Enable automatic key rotation
- [ ] Set up key aliases
- [ ] Configure IAM policies with least privilege
- [ ] Set up CloudTrail logging

### Implementation Checklist
- [ ] Use envelope encryption for large data
- [ ] Implement proper error handling
- [ ] Add comprehensive logging
- [ ] Set up monitoring and alarms
- [ ] Write comprehensive tests

### Security Checklist
- [ ] Never log sensitive data
- [ ] Use encryption context
- [ ] Implement proper access controls
- [ ] Clear sensitive data from memory
- [ ] Regularly audit access patterns

By following these best practices, you can ensure that your AWS KMS implementation is secure, performant, cost-effective, and maintainable.
@@ -0,0 +1,504 @@
# Spring Boot Integration with AWS KMS

## Configuration

### Basic Configuration

```java
@Configuration
public class KmsConfiguration {

    @Bean
    public KmsClient kmsClient() {
        return KmsClient.builder()
            .region(Region.US_EAST_1)
            .build();
    }

    @Bean
    public KmsAsyncClient kmsAsyncClient() {
        return KmsAsyncClient.builder()
            .region(Region.US_EAST_1)
            .build();
    }
}
```

### Configuration with Custom Settings

```java
@Configuration
@ConfigurationProperties(prefix = "aws.kms")
public class KmsAdvancedConfiguration {

    private Region region = Region.US_EAST_1;
    private String endpoint;
    private Duration timeout = Duration.ofSeconds(10);
    private String accessKeyId;
    private String secretAccessKey;

    @Bean
    public KmsClient kmsClient() {
        KmsClientBuilder builder = KmsClient.builder()
            .region(region)
            .overrideConfiguration(c -> c.retryPolicy(RetryPolicy.builder()
                .numRetries(3)
                .build()));

        if (endpoint != null) {
            builder.endpointOverride(URI.create(endpoint));
        }

        // Add credentials if provided
        if (accessKeyId != null && secretAccessKey != null) {
            builder.credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(accessKeyId, secretAccessKey)));
        }

        return builder.build();
    }

    // Getters and setters
    public Region getRegion() { return region; }
    public void setRegion(Region region) { this.region = region; }
    public String getEndpoint() { return endpoint; }
    public void setEndpoint(String endpoint) { this.endpoint = endpoint; }
    public Duration getTimeout() { return timeout; }
    public void setTimeout(Duration timeout) { this.timeout = timeout; }
    public String getAccessKeyId() { return accessKeyId; }
    public void setAccessKeyId(String accessKeyId) { this.accessKeyId = accessKeyId; }
    public String getSecretAccessKey() { return secretAccessKey; }
    public void setSecretAccessKey(String secretAccessKey) { this.secretAccessKey = secretAccessKey; }
}
```

### Application Properties

```properties
# AWS KMS Configuration
aws.kms.region=us-east-1
aws.kms.endpoint=
aws.kms.timeout=10s
aws.kms.access-key-id=
aws.kms.secret-access-key=

# KMS Key Configuration
kms.encryption-key-id=alias/your-encryption-key
kms.signing-key-id=alias/your-signing-key
```

## Encryption Service

### Basic Encryption Service

```java
@Service
public class KmsEncryptionService {

    private final KmsClient kmsClient;

    @Value("${kms.encryption-key-id}")
    private String keyId;

    public KmsEncryptionService(KmsClient kmsClient) {
        this.kmsClient = kmsClient;
    }

    public String encrypt(String plaintext) {
        try {
            EncryptRequest request = EncryptRequest.builder()
                .keyId(keyId)
                .plaintext(SdkBytes.fromString(plaintext, StandardCharsets.UTF_8))
                .build();

            EncryptResponse response = kmsClient.encrypt(request);

            // Return Base64-encoded ciphertext
            return Base64.getEncoder()
                .encodeToString(response.ciphertextBlob().asByteArray());

        } catch (KmsException e) {
            throw new RuntimeException("Encryption failed", e);
        }
    }

    public String decrypt(String ciphertextBase64) {
        try {
            byte[] ciphertext = Base64.getDecoder().decode(ciphertextBase64);

            DecryptRequest request = DecryptRequest.builder()
                .ciphertextBlob(SdkBytes.fromByteArray(ciphertext))
                .build();

            DecryptResponse response = kmsClient.decrypt(request);

            return response.plaintext().asString(StandardCharsets.UTF_8);

        } catch (KmsException e) {
            throw new RuntimeException("Decryption failed", e);
        }
    }
}
```

### Secure Data Repository

```java
@Repository
public class SecureDataRepository {

    private final KmsEncryptionService encryptionService;
    private final JdbcTemplate jdbcTemplate;

    public SecureDataRepository(KmsEncryptionService encryptionService,
                                JdbcTemplate jdbcTemplate) {
        this.encryptionService = encryptionService;
        this.jdbcTemplate = jdbcTemplate;
    }

    public void saveSecureData(String id, String sensitiveData) {
        String encryptedData = encryptionService.encrypt(sensitiveData);

        jdbcTemplate.update(
            "INSERT INTO secure_data (id, encrypted_value) VALUES (?, ?)",
            id, encryptedData);
    }

    public String getSecureData(String id) {
        String encryptedData = jdbcTemplate.queryForObject(
            "SELECT encrypted_value FROM secure_data WHERE id = ?",
            String.class, id);

        return encryptionService.decrypt(encryptedData);
    }
}
```

### Advanced Envelope Encryption Service

```java
@Service
public class EnvelopeEncryptionService {

    private final KmsClient kmsClient;

    @Value("${kms.master-key-id}")
    private String masterKeyId;

    // Note: the cache holds the plaintext data key in memory so it can be
    // reused; this trades memory exposure for fewer KMS calls.
    private final Cache<String, DataKeyPair> keyCache =
        Caffeine.newBuilder()
            .expireAfterWrite(1, TimeUnit.HOURS)
            .maximumSize(100)
            .build();

    public EnvelopeEncryptionService(KmsClient kmsClient) {
        this.kmsClient = kmsClient;
    }

    public EncryptedEnvelope encryptLargeData(byte[] data) {
        // Check cache for an existing key
        DataKeyPair dataKeyPair = keyCache.getIfPresent(masterKeyId);

        if (dataKeyPair == null) {
            // Generate a new data key
            GenerateDataKeyResponse dataKeyResponse = kmsClient.generateDataKey(
                GenerateDataKeyRequest.builder()
                    .keyId(masterKeyId)
                    .keySpec(DataKeySpec.AES_256)
                    .build());

            dataKeyPair = new DataKeyPair(
                dataKeyResponse.plaintext().asByteArray(),
                dataKeyResponse.ciphertextBlob().asByteArray());

            // Cache the key pair for reuse
            keyCache.put(masterKeyId, dataKeyPair);
        }

        try {
            // Encrypt data with the plaintext data key; do not zero the key
            // here, because the cached pair is reused by later calls
            byte[] encryptedData = encryptWithAES(data, dataKeyPair.plaintext());

            return new EncryptedEnvelope(encryptedData, dataKeyPair.encrypted());

        } catch (Exception e) {
            throw new RuntimeException("Envelope encryption failed", e);
        }
    }

    public byte[] decryptLargeData(EncryptedEnvelope envelope) {
        // Get the data key from cache, or decrypt it via KMS
        DataKeyPair dataKeyPair = keyCache.getIfPresent(masterKeyId);

        if (dataKeyPair == null || !Arrays.equals(dataKeyPair.encrypted(), envelope.encryptedKey())) {
            // Decrypt the data key with KMS
            DecryptResponse decryptResponse = kmsClient.decrypt(
                DecryptRequest.builder()
                    .ciphertextBlob(SdkBytes.fromByteArray(envelope.encryptedKey()))
                    .build());

            dataKeyPair = new DataKeyPair(
                decryptResponse.plaintext().asByteArray(),
                envelope.encryptedKey());

            // Cache for future use
            keyCache.put(masterKeyId, dataKeyPair);
        }

        try {
            // Decrypt data with the plaintext data key; the key stays cached
            // for reuse, so it is not zeroed here
            return decryptWithAES(envelope.encryptedData(), dataKeyPair.plaintext());

        } catch (Exception e) {
            throw new RuntimeException("Envelope decryption failed", e);
        }
    }
|
||||
private byte[] encryptWithAES(byte[] data, byte[] key) throws Exception {
|
||||
SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
|
||||
Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
|
||||
GCMParameterSpec spec = new GCMParameterSpec(128, key, key.length - 16);
|
||||
cipher.init(Cipher.ENCRYPT_MODE, keySpec, spec);
|
||||
return cipher.doFinal(data);
|
||||
}
|
||||
|
||||
private byte[] decryptWithAES(byte[] data, byte[] key) throws Exception {
|
||||
SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
|
||||
Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
|
||||
GCMParameterSpec spec = new GCMParameterSpec(128, key, key.length - 16);
|
||||
cipher.init(Cipher.DECRYPT_MODE, keySpec, spec);
|
||||
return cipher.doFinal(data);
|
||||
}
|
||||
|
||||
public record DataKeyPair(byte[] plaintext, byte[] encrypted) {}
|
||||
public record EncryptedEnvelope(byte[] encryptedData, byte[] encryptedKey) {}
|
||||
}
|
||||
```
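
The IV-prepended GCM framing used by the AES helpers can be exercised in isolation. A minimal, self-contained sketch using only the JDK (`GcmFraming` is a hypothetical name, not part of the AWS SDK):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public final class GcmFraming {
    private static final int IV_LEN = 12;
    private static final int TAG_BITS = 128;

    /** Encrypts with AES-GCM and returns IV || ciphertext || tag. */
    public static byte[] encrypt(byte[] plaintext, byte[] key) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] framed = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, framed, 0, IV_LEN);
        System.arraycopy(ct, 0, framed, IV_LEN, ct.length);
        return framed;
    }

    /** Splits IV || ciphertext and decrypts; throws on tampering. */
    public static byte[] decrypt(byte[] framed, byte[] key) throws Exception {
        byte[] iv = Arrays.copyOfRange(framed, 0, IV_LEN);
        byte[] ct = Arrays.copyOfRange(framed, IV_LEN, framed.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(TAG_BITS, iv));
        return cipher.doFinal(ct);
    }
}
```

Round-tripping a payload through these two calls returns the original bytes; the 16-byte GCM tag plus the 12-byte IV make the frame 28 bytes longer than the plaintext.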

## Data Encryption Interceptor

### SQL Encryption Interceptor

```java
// Generic interceptor sketch; adapt to your JDBC driver's interceptor API
// (for example, MySQL Connector/J exposes com.mysql.cj.interceptors.QueryInterceptor).
public class KmsDataEncryptInterceptor implements StatementInterceptor {

    private final KmsEncryptionService encryptionService;

    public KmsDataEncryptInterceptor(KmsEncryptionService encryptionService) {
        this.encryptionService = encryptionService;
    }

    @Override
    public ResultSet intercept(ResultSet rs, Statement statement, Connection connection) throws SQLException {
        return new EncryptingResultSetWrapper(rs, encryptionService);
    }

    @Override
    public void interceptAfterExecution(Statement statement) {
        // No-op
    }
}

class EncryptingResultSetWrapper implements ResultSet {

    private final ResultSet delegate;
    private final KmsEncryptionService encryptionService;

    public EncryptingResultSetWrapper(ResultSet delegate, KmsEncryptionService encryptionService) {
        this.delegate = delegate;
        this.encryptionService = encryptionService;
    }

    @Override
    public String getString(String columnLabel) throws SQLException {
        String value = delegate.getString(columnLabel);
        if (value == null) return null;

        // Decrypt only columns identified as encrypted
        if (isEncryptedColumn(columnLabel)) {
            return encryptionService.decrypt(value);
        }

        return value;
    }

    private boolean isEncryptedColumn(String columnLabel) {
        // Name-based heuristic; prefer an explicit allow-list in production
        return columnLabel.contains("encrypted") || columnLabel.contains("secure");
    }

    // Delegate all remaining ResultSet methods to the wrapped instance
    @Override
    public boolean next() throws SQLException {
        return delegate.next();
    }

    // ... other ResultSet method implementations
}
```
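
The `isEncryptedColumn` name-matching heuristic above is fragile; an explicit allow-list is safer. A minimal JDK-only sketch (`EncryptedColumnRegistry` is a hypothetical helper, not part of any library):

```java
import java.util.Set;

/** Hypothetical registry of columns whose values are stored encrypted. */
public final class EncryptedColumnRegistry {

    private final Set<String> columns;

    public EncryptedColumnRegistry(Set<String> columns) {
        this.columns = Set.copyOf(columns);
    }

    /** Case-insensitive membership check against the configured allow-list. */
    public boolean isEncrypted(String columnLabel) {
        return columns.stream().anyMatch(c -> c.equalsIgnoreCase(columnLabel));
    }
}
```

The wrapper's `isEncryptedColumn` could then delegate to a registry instance populated from configuration instead of substring matching.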

## Configuration Profiles

### Development Profile

```properties
# src/main/resources/application-dev.properties
aws.kms.region=us-east-1
kms.encryption-key-id=alias/dev-encryption-key
logging.level.com.yourcompany=DEBUG
```

### Production Profile

```properties
# src/main/resources/application-prod.properties
aws.kms.region=${AWS_REGION:us-east-1}
kms.encryption-key-id=${KMS_ENCRYPTION_KEY_ID:alias/production-encryption-key}
logging.level.com.yourcompany=WARN
# Prefer IAM roles (instance profiles, IRSA) over static credentials where possible
spring.cloud.aws.credentials.access-key=${AWS_ACCESS_KEY_ID}
spring.cloud.aws.credentials.secret-key=${AWS_SECRET_ACCESS_KEY}
```

### Test Configuration

```java
import static org.mockito.Mockito.mock;

@Configuration
@Profile("test")
public class KmsTestConfiguration {

    @Bean
    @Primary
    public KmsClient testKmsClient() {
        // Return a mock or test-specific KMS client
        return mock(KmsClient.class);
    }

    @Bean
    public KmsEncryptionService testKmsEncryptionService() {
        return new KmsEncryptionService(testKmsClient());
    }
}
```

## Health Checks and Monitoring

### KMS Health Indicator

```java
@Component
public class KmsHealthIndicator implements HealthIndicator {

    private final KmsClient kmsClient;
    private final String keyId;

    public KmsHealthIndicator(KmsClient kmsClient,
                              @Value("${kms.encryption-key-id}") String keyId) {
        this.kmsClient = kmsClient;
        this.keyId = keyId;
    }

    @Override
    public Health health() {
        try {
            // Test KMS connectivity by describing the key
            DescribeKeyRequest request = DescribeKeyRequest.builder()
                    .keyId(keyId)
                    .build();

            DescribeKeyResponse response = kmsClient.describeKey(request);

            // Check whether the key is in a healthy state
            KeyState keyState = response.keyMetadata().keyState();
            boolean isHealthy = keyState == KeyState.ENABLED;

            if (isHealthy) {
                return Health.up()
                        .withDetail("keyId", keyId)
                        .withDetail("keyState", keyState)
                        .withDetail("keyArn", response.keyMetadata().arn())
                        .build();
            } else {
                return Health.down()
                        .withDetail("keyId", keyId)
                        .withDetail("keyState", keyState)
                        .withDetail("message", "KMS key is not in ENABLED state")
                        .build();
            }

        } catch (KmsException e) {
            return Health.down()
                    .withDetail("keyId", keyId)
                    .withDetail("error", e.awsErrorDetails().errorMessage())
                    .withDetail("errorCode", e.awsErrorDetails().errorCode())
                    .build();
        }
    }
}
```

### Metrics Collection

```java
@Service
public class KmsMetricsCollector {

    private final MeterRegistry meterRegistry;
    private final KmsClient kmsClient;

    private final Counter encryptionCounter;
    private final Counter decryptionCounter;
    private final Timer encryptionTimer;
    private final Timer decryptionTimer;

    public KmsMetricsCollector(MeterRegistry meterRegistry, KmsClient kmsClient) {
        this.meterRegistry = meterRegistry;
        this.kmsClient = kmsClient;

        this.encryptionCounter = Counter.builder("kms.encryption.count")
                .description("Number of encryption operations")
                .register(meterRegistry);

        this.decryptionCounter = Counter.builder("kms.decryption.count")
                .description("Number of decryption operations")
                .register(meterRegistry);

        this.encryptionTimer = Timer.builder("kms.encryption.time")
                .description("Time taken for encryption operations")
                .register(meterRegistry);

        this.decryptionTimer = Timer.builder("kms.decryption.time")
                .description("Time taken for decryption operations")
                .register(meterRegistry);
    }

    public String encryptWithMetrics(String plaintext) {
        encryptionCounter.increment();

        return encryptionTimer.record(() -> {
            try {
                EncryptRequest request = EncryptRequest.builder()
                        .keyId("your-key-id")
                        .plaintext(SdkBytes.fromString(plaintext, StandardCharsets.UTF_8))
                        .build();

                EncryptResponse response = kmsClient.encrypt(request);
                return Base64.getEncoder().encodeToString(
                        response.ciphertextBlob().asByteArray());

            } catch (KmsException e) {
                meterRegistry.counter("kms.encryption.errors")
                        .increment();
                throw e;
            }
        });
    }
}
```

@@ -0,0 +1,639 @@
# AWS KMS Technical Guide

## Key Management Operations

### Create KMS Key

```java
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.*;
import java.util.stream.Collectors;

public String createKey(KmsClient kmsClient, String description) {
    try {
        CreateKeyRequest request = CreateKeyRequest.builder()
                .description(description)
                .keyUsage(KeyUsageType.ENCRYPT_DECRYPT)
                .origin(OriginType.AWS_KMS)
                .build();

        CreateKeyResponse response = kmsClient.createKey(request);

        String keyId = response.keyMetadata().keyId();
        System.out.println("Created key: " + keyId);

        return keyId;

    } catch (KmsException e) {
        System.err.println("Error creating key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### Create Key with Custom Key Store

```java
public String createKeyWithCustomStore(KmsClient kmsClient,
                                       String description,
                                       String customKeyStoreId) {
    CreateKeyRequest request = CreateKeyRequest.builder()
            .description(description)
            .keyUsage(KeyUsageType.ENCRYPT_DECRYPT)
            .origin(OriginType.AWS_CLOUDHSM)
            .customKeyStoreId(customKeyStoreId)
            .build();

    CreateKeyResponse response = kmsClient.createKey(request);

    return response.keyMetadata().keyId();
}
```

### List Keys

```java
import java.util.List;

public List<KeyListEntry> listKeys(KmsClient kmsClient) {
    try {
        ListKeysRequest request = ListKeysRequest.builder()
                .limit(100)
                .build();

        ListKeysResponse response = kmsClient.listKeys(request);

        response.keys().forEach(key -> {
            System.out.println("Key ARN: " + key.keyArn());
            System.out.println("Key ID: " + key.keyId());
            System.out.println();
        });

        return response.keys();

    } catch (KmsException e) {
        System.err.println("Error listing keys: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### List Keys with Pagination (Async)

```java
import software.amazon.awssdk.services.kms.paginators.ListKeysPublisher;
import java.util.concurrent.CompletableFuture;

public CompletableFuture<Void> listAllKeysAsync(KmsAsyncClient kmsAsyncClient) {
    ListKeysRequest request = ListKeysRequest.builder()
            .limit(15)
            .build();

    ListKeysPublisher keysPublisher = kmsAsyncClient.listKeysPaginator(request);

    return keysPublisher
            .subscribe(r -> r.keys().forEach(key ->
                    System.out.println("Key ARN: " + key.keyArn())))
            .whenComplete((result, exception) -> {
                if (exception != null) {
                    System.err.println("Error: " + exception.getMessage());
                } else {
                    System.out.println("Successfully listed all keys");
                }
            });
}
```

### Describe Key

```java
public KeyMetadata describeKey(KmsClient kmsClient, String keyId) {
    try {
        DescribeKeyRequest request = DescribeKeyRequest.builder()
                .keyId(keyId)
                .build();

        DescribeKeyResponse response = kmsClient.describeKey(request);
        KeyMetadata metadata = response.keyMetadata();

        System.out.println("Key ID: " + metadata.keyId());
        System.out.println("Key ARN: " + metadata.arn());
        System.out.println("Key State: " + metadata.keyState());
        System.out.println("Creation Date: " + metadata.creationDate());
        System.out.println("Enabled: " + metadata.enabled());

        return metadata;

    } catch (KmsException e) {
        System.err.println("Error describing key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### Enable/Disable Key

```java
public void enableKey(KmsClient kmsClient, String keyId) {
    try {
        EnableKeyRequest request = EnableKeyRequest.builder()
                .keyId(keyId)
                .build();

        kmsClient.enableKey(request);
        System.out.println("Key enabled: " + keyId);

    } catch (KmsException e) {
        System.err.println("Error enabling key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}

public void disableKey(KmsClient kmsClient, String keyId) {
    try {
        DisableKeyRequest request = DisableKeyRequest.builder()
                .keyId(keyId)
                .build();

        kmsClient.disableKey(request);
        System.out.println("Key disabled: " + keyId);

    } catch (KmsException e) {
        System.err.println("Error disabling key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

## Encryption and Decryption

### Encrypt Data

```java
import software.amazon.awssdk.core.SdkBytes;
import java.nio.charset.StandardCharsets;

public byte[] encryptData(KmsClient kmsClient, String keyId, String plaintext) {
    try {
        SdkBytes plaintextBytes = SdkBytes.fromString(plaintext, StandardCharsets.UTF_8);

        EncryptRequest request = EncryptRequest.builder()
                .keyId(keyId)
                .plaintext(plaintextBytes)
                .build();

        EncryptResponse response = kmsClient.encrypt(request);

        byte[] encryptedData = response.ciphertextBlob().asByteArray();
        System.out.println("Data encrypted successfully");

        return encryptedData;

    } catch (KmsException e) {
        System.err.println("Error encrypting data: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### Decrypt Data

```java
public String decryptData(KmsClient kmsClient, byte[] ciphertext) {
    try {
        SdkBytes ciphertextBytes = SdkBytes.fromByteArray(ciphertext);

        DecryptRequest request = DecryptRequest.builder()
                .ciphertextBlob(ciphertextBytes)
                .build();

        DecryptResponse response = kmsClient.decrypt(request);

        String decryptedText = response.plaintext().asString(StandardCharsets.UTF_8);
        System.out.println("Data decrypted successfully");

        return decryptedText;

    } catch (KmsException e) {
        System.err.println("Error decrypting data: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### Encrypt with Encryption Context

```java
import java.util.Map;

public byte[] encryptWithContext(KmsClient kmsClient,
                                 String keyId,
                                 String plaintext,
                                 Map<String, String> encryptionContext) {
    try {
        EncryptRequest request = EncryptRequest.builder()
                .keyId(keyId)
                .plaintext(SdkBytes.fromString(plaintext, StandardCharsets.UTF_8))
                .encryptionContext(encryptionContext)
                .build();

        EncryptResponse response = kmsClient.encrypt(request);

        return response.ciphertextBlob().asByteArray();

    } catch (KmsException e) {
        System.err.println("Error encrypting with context: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

Ciphertext produced with an encryption context can only be decrypted by supplying the exact same key-value pairs on the `DecryptRequest`; a mismatch fails with `InvalidCiphertextException`.
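
Because context entries are unordered key-value pairs, it helps to canonicalize them when writing audit logs. A minimal JDK-only sketch (`EncryptionContexts` is a hypothetical helper, not an SDK API):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public final class EncryptionContexts {

    /** Renders a context map in a stable, key-sorted key=value form for logging. */
    public static String canonicalize(Map<String, String> context) {
        return new TreeMap<>(context).entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(","));
    }
}
```

A stable rendering makes it easy to grep audit logs for all operations performed under a given context.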

## Data Key Generation (Envelope Encryption)

### Generate Data Key

```java
public record DataKeyPair(byte[] plaintext, byte[] encrypted) {}

public DataKeyPair generateDataKey(KmsClient kmsClient, String keyId) {
    try {
        GenerateDataKeyRequest request = GenerateDataKeyRequest.builder()
                .keyId(keyId)
                .keySpec(DataKeySpec.AES_256)
                .build();

        GenerateDataKeyResponse response = kmsClient.generateDataKey(request);

        byte[] plaintextKey = response.plaintext().asByteArray();
        byte[] encryptedKey = response.ciphertextBlob().asByteArray();

        System.out.println("Data key generated");

        return new DataKeyPair(plaintextKey, encryptedKey);

    } catch (KmsException e) {
        System.err.println("Error generating data key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### Generate Data Key Without Plaintext

```java
public byte[] generateDataKeyWithoutPlaintext(KmsClient kmsClient, String keyId) {
    try {
        GenerateDataKeyWithoutPlaintextRequest request =
                GenerateDataKeyWithoutPlaintextRequest.builder()
                        .keyId(keyId)
                        .keySpec(DataKeySpec.AES_256)
                        .build();

        GenerateDataKeyWithoutPlaintextResponse response =
                kmsClient.generateDataKeyWithoutPlaintext(request);

        return response.ciphertextBlob().asByteArray();

    } catch (KmsException e) {
        System.err.println("Error generating data key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

## Digital Signing

### Create Signing Key

```java
public String createSigningKey(KmsClient kmsClient, String description) {
    try {
        CreateKeyRequest request = CreateKeyRequest.builder()
                .description(description)
                .keySpec(KeySpec.RSA_2048)
                .keyUsage(KeyUsageType.SIGN_VERIFY)
                .origin(OriginType.AWS_KMS)
                .build();

        CreateKeyResponse response = kmsClient.createKey(request);

        return response.keyMetadata().keyId();

    } catch (KmsException e) {
        System.err.println("Error creating signing key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### Sign Data

```java
public byte[] signData(KmsClient kmsClient, String keyId, String message) {
    try {
        SdkBytes messageBytes = SdkBytes.fromString(message, StandardCharsets.UTF_8);

        SignRequest request = SignRequest.builder()
                .keyId(keyId)
                .message(messageBytes)
                .signingAlgorithm(SigningAlgorithmSpec.RSASSA_PSS_SHA_256)
                .build();

        SignResponse response = kmsClient.sign(request);

        byte[] signature = response.signature().asByteArray();
        System.out.println("Data signed successfully");

        return signature;

    } catch (KmsException e) {
        System.err.println("Error signing data: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### Verify Signature

```java
public boolean verifySignature(KmsClient kmsClient,
                               String keyId,
                               String message,
                               byte[] signature) {
    try {
        VerifyRequest request = VerifyRequest.builder()
                .keyId(keyId)
                .message(SdkBytes.fromString(message, StandardCharsets.UTF_8))
                .signature(SdkBytes.fromByteArray(signature))
                .signingAlgorithm(SigningAlgorithmSpec.RSASSA_PSS_SHA_256)
                .build();

        VerifyResponse response = kmsClient.verify(request);

        boolean isValid = response.signatureValid();
        System.out.println("Signature valid: " + isValid);

        return isValid;

    } catch (KmsException e) {
        System.err.println("Error verifying signature: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```
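
For high-volume verification you can avoid a KMS round trip per check: fetch the key's public half once with `GetPublicKey` and verify locally. The local half is plain JDK; a minimal sketch of the RSASSA-PSS/SHA-256 arithmetic (the key pair is generated locally in the usage example purely for illustration, standing in for the KMS public key):

```java
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.MGF1ParameterSpec;
import java.security.spec.PSSParameterSpec;

public final class LocalPssVerifier {

    private static Signature pss() throws GeneralSecurityException {
        // Matches KMS SigningAlgorithmSpec.RSASSA_PSS_SHA_256
        Signature sig = Signature.getInstance("RSASSA-PSS");
        sig.setParameter(new PSSParameterSpec(
                "SHA-256", "MGF1", MGF1ParameterSpec.SHA256, 32, 1));
        return sig;
    }

    public static byte[] sign(PrivateKey privateKey, byte[] message) throws GeneralSecurityException {
        Signature sig = pss();
        sig.initSign(privateKey);
        sig.update(message);
        return sig.sign();
    }

    public static boolean verify(PublicKey publicKey, byte[] message, byte[] signature) throws GeneralSecurityException {
        Signature sig = pss();
        sig.initVerify(publicKey);
        sig.update(message);
        return sig.verify(signature);
    }
}
```

With a KMS-backed key you would call `sign` via KMS as shown above and only run the `verify` half locally against the exported public key.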

### Sign and Verify (Async)

```java
public CompletableFuture<Boolean> signAndVerifyAsync(KmsAsyncClient kmsAsyncClient,
                                                     String message) {
    // Create a signing key
    CreateKeyRequest createKeyRequest = CreateKeyRequest.builder()
            .keySpec(KeySpec.RSA_2048)
            .keyUsage(KeyUsageType.SIGN_VERIFY)
            .origin(OriginType.AWS_KMS)
            .build();

    return kmsAsyncClient.createKey(createKeyRequest)
            .thenCompose(createKeyResponse -> {
                String keyId = createKeyResponse.keyMetadata().keyId();

                SdkBytes messageBytes = SdkBytes.fromString(message, StandardCharsets.UTF_8);
                SignRequest signRequest = SignRequest.builder()
                        .keyId(keyId)
                        .message(messageBytes)
                        .signingAlgorithm(SigningAlgorithmSpec.RSASSA_PSS_SHA_256)
                        .build();

                return kmsAsyncClient.sign(signRequest)
                        .thenCompose(signResponse -> {
                            byte[] signedBytes = signResponse.signature().asByteArray();

                            VerifyRequest verifyRequest = VerifyRequest.builder()
                                    .keyId(keyId)
                                    .message(messageBytes)
                                    .signature(SdkBytes.fromByteArray(signedBytes))
                                    .signingAlgorithm(SigningAlgorithmSpec.RSASSA_PSS_SHA_256)
                                    .build();

                            return kmsAsyncClient.verify(verifyRequest)
                                    .thenApply(VerifyResponse::signatureValid);
                        });
            })
            .exceptionally(throwable -> {
                throw new RuntimeException("Failed to sign or verify", throwable);
            });
}
```

## Key Tagging

### Tag Key

```java
public void tagKey(KmsClient kmsClient, String keyId, Map<String, String> tags) {
    try {
        List<Tag> tagList = tags.entrySet().stream()
                .map(entry -> Tag.builder()
                        .tagKey(entry.getKey())
                        .tagValue(entry.getValue())
                        .build())
                .collect(Collectors.toList());

        TagResourceRequest request = TagResourceRequest.builder()
                .keyId(keyId)
                .tags(tagList)
                .build();

        kmsClient.tagResource(request);
        System.out.println("Key tagged successfully");

    } catch (KmsException e) {
        System.err.println("Error tagging key: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### List Tags

```java
public Map<String, String> listTags(KmsClient kmsClient, String keyId) {
    try {
        ListResourceTagsRequest request = ListResourceTagsRequest.builder()
                .keyId(keyId)
                .build();

        ListResourceTagsResponse response = kmsClient.listResourceTags(request);

        return response.tags().stream()
                .collect(Collectors.toMap(Tag::tagKey, Tag::tagValue));

    } catch (KmsException e) {
        System.err.println("Error listing tags: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

## Advanced Techniques

### Envelope Encryption Service

```java
@Service
public class EnvelopeEncryptionService {

    private static final int GCM_IV_LENGTH = 12;
    private static final int GCM_TAG_BITS = 128;

    private final KmsClient kmsClient;

    @Value("${kms.master-key-id}")
    private String masterKeyId;

    public EnvelopeEncryptionService(KmsClient kmsClient) {
        this.kmsClient = kmsClient;
    }

    public EncryptedEnvelope encryptLargeData(byte[] data) {
        // Generate a data key under the master key
        GenerateDataKeyResponse dataKeyResponse = kmsClient.generateDataKey(
                GenerateDataKeyRequest.builder()
                        .keyId(masterKeyId)
                        .keySpec(DataKeySpec.AES_256)
                        .build());

        byte[] plaintextKey = dataKeyResponse.plaintext().asByteArray();
        byte[] encryptedKey = dataKeyResponse.ciphertextBlob().asByteArray();

        try {
            // Encrypt data with the plaintext data key
            byte[] encryptedData = encryptWithAES(data, plaintextKey);
            return new EncryptedEnvelope(encryptedData, encryptedKey);
        } catch (Exception e) {
            throw new RuntimeException("Envelope encryption failed", e);
        } finally {
            // Clear the plaintext key from memory
            Arrays.fill(plaintextKey, (byte) 0);
        }
    }

    public byte[] decryptLargeData(EncryptedEnvelope envelope) {
        // Decrypt the data key via KMS
        DecryptResponse decryptResponse = kmsClient.decrypt(
                DecryptRequest.builder()
                        .ciphertextBlob(SdkBytes.fromByteArray(envelope.encryptedKey()))
                        .build());

        byte[] plaintextKey = decryptResponse.plaintext().asByteArray();

        try {
            // Decrypt data with the plaintext data key
            return decryptWithAES(envelope.encryptedData(), plaintextKey);
        } catch (Exception e) {
            throw new RuntimeException("Envelope decryption failed", e);
        } finally {
            // Clear the plaintext key from memory
            Arrays.fill(plaintextKey, (byte) 0);
        }
    }

    private byte[] encryptWithAES(byte[] data, byte[] key) throws Exception {
        // Use a fresh random IV per encryption and prepend it to the ciphertext;
        // initializing GCM without an explicit IV would make decryption impossible.
        byte[] iv = new byte[GCM_IV_LENGTH];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(data);
        byte[] framed = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, framed, 0, iv.length);
        System.arraycopy(ciphertext, 0, framed, iv.length, ciphertext.length);
        return framed;
    }

    private byte[] decryptWithAES(byte[] data, byte[] key) throws Exception {
        byte[] iv = Arrays.copyOfRange(data, 0, GCM_IV_LENGTH);
        byte[] ciphertext = Arrays.copyOfRange(data, GCM_IV_LENGTH, data.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new GCMParameterSpec(GCM_TAG_BITS, iv));
        return cipher.doFinal(ciphertext);
    }

    public record EncryptedEnvelope(byte[] encryptedData, byte[] encryptedKey) {}
}
```

### Error Handling Strategies

```java
import java.util.function.Supplier;

public class KmsErrorHandler {

    private static final int MAX_RETRIES = 3;
    private static final long RETRY_DELAY_MS = 1000;

    public <T> T executeWithRetry(Supplier<T> operation, String operationName) {
        int attempt = 0;
        KmsException lastException = null;

        while (attempt < MAX_RETRIES) {
            try {
                return operation.get();
            } catch (KmsException e) {
                lastException = e;
                attempt++;

                // Retry only throttling-style errors, and only while attempts remain
                if (isRetryableError(e) && attempt < MAX_RETRIES) {
                    try {
                        Thread.sleep(RETRY_DELAY_MS);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new RuntimeException("Retry interrupted", ie);
                    }
                } else {
                    // Non-retryable error or max retries exceeded
                    throw e;
                }
            }
        }

        throw new RuntimeException(
                String.format("Failed to execute %s after %d attempts", operationName, MAX_RETRIES),
                lastException);
    }

    public boolean isRetryableError(KmsException e) {
        String errorCode = e.awsErrorDetails().errorCode();
        return "ThrottlingException".equals(errorCode)
                || "TooManyRequestsException".equals(errorCode)
                || "LimitExceededException".equals(errorCode);
    }
}
```
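
A fixed retry delay can cause synchronized retry storms under throttling; AWS generally recommends exponential backoff with jitter instead. A minimal JDK-only sketch (`Backoff` is a hypothetical helper) that could replace the fixed sleep:

```java
import java.util.concurrent.ThreadLocalRandom;

public final class Backoff {

    /** Exponential delay with full jitter: a random value in [0, base * 2^attempt], capped. */
    public static long delayMillis(int attempt, long baseMillis, long capMillis) {
        long exp = Math.min(capMillis, baseMillis * (1L << Math.min(attempt, 20)));
        return ThreadLocalRandom.current().nextLong(exp + 1);
    }
}
```

Inside the retry loop you would then sleep for `Backoff.delayMillis(attempt, RETRY_DELAY_MS, maxDelayMs)` rather than a constant.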

### Connection Pooling Configuration

```java
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kms.KmsClient;
import java.time.Duration;

public class KmsConnectionPool {

    public static KmsClient createPooledClient() {
        // The SDK's Apache-based HTTP client manages its own connection pool;
        // size it via the builder rather than constructing an external HttpClient.
        return KmsClient.builder()
                .region(Region.US_EAST_1)
                .httpClientBuilder(ApacheHttpClient.builder()
                        .maxConnections(100)
                        .connectionAcquisitionTimeout(Duration.ofSeconds(10)))
                .build();
    }
}
```
589
skills/aws-java/aws-sdk-java-v2-kms/references/testing.md
Normal file
@@ -0,0 +1,589 @@

# Testing AWS KMS Integration

## Unit Testing with Mocked Client

### Basic Unit Test

```java
import software.amazon.awssdk.awscore.exception.AwsErrorDetails;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.*;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

import java.nio.charset.StandardCharsets;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

@ExtendWith(MockitoExtension.class)
class KmsEncryptionServiceTest {

    @Mock
    private KmsClient kmsClient;

    @InjectMocks
    private KmsEncryptionService encryptionService;

    @Test
    void shouldEncryptData() {
        // Arrange
        String plaintext = "sensitive data";
        byte[] ciphertext = "encrypted".getBytes();

        when(kmsClient.encrypt(any(EncryptRequest.class)))
                .thenReturn(EncryptResponse.builder()
                        .ciphertextBlob(SdkBytes.fromByteArray(ciphertext))
                        .build());

        // Act
        String result = encryptionService.encrypt(plaintext);

        // Assert
        assertThat(result).isNotEmpty();
        verify(kmsClient).encrypt(any(EncryptRequest.class));
    }

    @Test
    void shouldDecryptData() {
        // Arrange
        String encryptedText = "ciphertext";
        String expectedPlaintext = "sensitive data";

        when(kmsClient.decrypt(any(DecryptRequest.class)))
                .thenReturn(DecryptResponse.builder()
                        .plaintext(SdkBytes.fromString(expectedPlaintext, StandardCharsets.UTF_8))
                        .build());

        // Act
        String result = encryptionService.decrypt(encryptedText);

        // Assert
        assertThat(result).isEqualTo(expectedPlaintext);
        verify(kmsClient).decrypt(any(DecryptRequest.class));
    }

    @Test
    void shouldThrowExceptionOnEncryptionFailure() {
        // Arrange
        when(kmsClient.encrypt(any(EncryptRequest.class)))
                .thenThrow(KmsException.builder()
                        .awsErrorDetails(AwsErrorDetails.builder()
                                .errorCode("KMSDisabledException")
                                .errorMessage("KMS is disabled")
                                .build())
                        .build());

        // Act & Assert
        assertThatThrownBy(() -> encryptionService.encrypt("test"))
                .isInstanceOf(RuntimeException.class)
                .hasMessageContaining("Encryption failed");
    }
}
```

### Parameterized Tests

```java
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class KmsEncryptionParameterizedTest {

    @Mock
    private KmsClient kmsClient;

    @InjectMocks
    private KmsEncryptionService encryptionService;

    @ParameterizedTest
    @CsvSource({
        "hello, world",
        "12345, 67890",
        "special@chars, normal",
        "very long string with multiple words, another string",
        "'', empty-string",
        "null test, null test"
    })
    void shouldEncryptAndDecrypt(String plaintext, String testIdentifier) {
        // Arrange
        byte[] ciphertext = "encrypted".getBytes();

        when(kmsClient.encrypt(any(EncryptRequest.class)))
                .thenReturn(EncryptResponse.builder()
                        .ciphertextBlob(SdkBytes.fromByteArray(ciphertext))
                        .build());

        when(kmsClient.decrypt(any(DecryptRequest.class)))
                .thenReturn(DecryptResponse.builder()
                        .plaintext(SdkBytes.fromString(plaintext, StandardCharsets.UTF_8))
                        .build());

        // Act
        String encrypted = encryptionService.encrypt(plaintext);
        String decrypted = encryptionService.decrypt(encrypted);

        // Assert
        assertThat(decrypted).isEqualTo(plaintext);
    }
}
```

## Integration Testing with Testcontainers

### Local KMS Mock Setup

```java
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestInstance;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kms.KmsClient;

import static org.testcontainers.containers.localstack.LocalStackContainer.Service.KMS;

@Testcontainers
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class KmsIntegrationTest {

    @Container
    private static final LocalStackContainer localStack =
        new LocalStackContainer(DockerImageName.parse("localstack/localstack:latest"))
            .withServices(KMS);

    private KmsClient kmsClient;

    @BeforeAll
    void setup() {
        kmsClient = KmsClient.builder()
            .region(Region.of(localStack.getRegion()))
            .endpointOverride(localStack.getEndpointOverride(KMS))
            .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(localStack.getAccessKey(), localStack.getSecretKey())))
            .build();
    }

    @Test
    void shouldCreateAndManageKeysWithLocalKms() {
        // Create a key (createTestKey, describeKey, and listKeys are thin
        // helpers around the corresponding KmsClient calls)
        String keyId = createTestKey(kmsClient, "test-key");
        assertThat(keyId).isNotEmpty();

        // Describe the key
        KeyMetadata metadata = describeKey(kmsClient, keyId);
        assertThat(metadata.keyState()).isEqualTo(KeyState.ENABLED);

        // List keys
        List<KeyListEntry> keys = listKeys(kmsClient);
        assertThat(keys).hasSizeGreaterThan(0);
    }
}
```

## Testing with Spring Boot Test Slices

### Controller Test with MockMvc

Note that `@SpringBootTest` loads the full application context; for a true web slice, use `@WebMvcTest` instead.

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.http.MediaType;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.web.servlet.MockMvc;

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.*;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;

@SpringBootTest
@AutoConfigureMockMvc
@ActiveProfiles("test")
class KmsControllerIntegrationTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private KmsEncryptionService kmsEncryptionService;

    @Test
    void shouldEncryptData() throws Exception {
        String plaintext = "test data";
        String encrypted = "encrypted-data";

        when(kmsEncryptionService.encrypt(plaintext)).thenReturn(encrypted);

        mockMvc.perform(post("/api/kms/encrypt")
                .contentType(MediaType.APPLICATION_JSON)
                .content("{\"data\":\"" + plaintext + "\"}"))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$.data").value(encrypted));

        verify(kmsEncryptionService).encrypt(plaintext);
    }

    @Test
    void shouldHandleEncryptionErrors() throws Exception {
        when(kmsEncryptionService.encrypt(any()))
            .thenThrow(new RuntimeException("KMS error"));

        mockMvc.perform(post("/api/kms/encrypt")
                .contentType(MediaType.APPLICATION_JSON)
                .content("{\"data\":\"test\"}"))
            .andExpect(status().isInternalServerError());
    }
}
```

### Testing with SpringBootTest and Configuration

```java
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Primary;

@TestConfiguration
class KmsTestConfiguration {

    @Bean
    @Primary
    public KmsClient testKmsClient() {
        // Create a mock KMS client for testing
        KmsClient mockClient = mock(KmsClient.class);

        // Mock key creation
        when(mockClient.createKey(any(CreateKeyRequest.class)))
            .thenReturn(CreateKeyResponse.builder()
                .keyMetadata(KeyMetadata.builder()
                    .keyId("test-key-id")
                    .keyArn("arn:aws:kms:us-east-1:123456789012:key/test-key-id")
                    .keyState(KeyState.ENABLED)
                    .build())
                .build());

        // Mock encryption
        when(mockClient.encrypt(any(EncryptRequest.class)))
            .thenReturn(EncryptResponse.builder()
                .ciphertextBlob(SdkBytes.fromString("encrypted-data", StandardCharsets.UTF_8))
                .build());

        // Mock decryption
        when(mockClient.decrypt(any(DecryptRequest.class)))
            .thenReturn(DecryptResponse.builder()
                .plaintext(SdkBytes.fromString("decrypted-data", StandardCharsets.UTF_8))
                .build());

        return mockClient;
    }
}

@SpringBootTest(classes = {Application.class, KmsTestConfiguration.class})
class KmsServiceWithTestConfigIntegrationTest {

    @Autowired
    private KmsEncryptionService encryptionService;

    @Test
    void shouldUseTestConfiguration() {
        String result = encryptionService.encrypt("test");
        assertThat(result).isNotEmpty();
    }
}
```

## Testing Envelope Encryption

### Envelope Encryption Test

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.*;

class EnvelopeEncryptionServiceTest {

    @Mock
    private KmsClient kmsClient;

    @InjectMocks
    private EnvelopeEncryptionService envelopeEncryptionService;

    @Test
    void shouldEncryptAndDecryptLargeData() {
        // Arrange
        byte[] testData = "large test data".getBytes();
        byte[] encryptedDataKey = "encrypted-data-key".getBytes();
        byte[] dataKey = "0123456789abcdef".getBytes(); // 16 bytes: valid AES-128 key material

        // Mock data key generation
        when(kmsClient.generateDataKey(any(GenerateDataKeyRequest.class)))
            .thenReturn(GenerateDataKeyResponse.builder()
                .plaintext(SdkBytes.fromByteArray(dataKey))
                .ciphertextBlob(SdkBytes.fromByteArray(encryptedDataKey))
                .build());

        // Mock data key decryption
        when(kmsClient.decrypt(any(DecryptRequest.class)))
            .thenReturn(DecryptResponse.builder()
                .plaintext(SdkBytes.fromByteArray(dataKey))
                .build());

        // Act
        EncryptedEnvelope encryptedEnvelope = envelopeEncryptionService.encryptLargeData(testData);
        byte[] decryptedData = envelopeEncryptionService.decryptLargeData(encryptedEnvelope);

        // Assert
        assertThat(encryptedEnvelope.encryptedData()).isNotEmpty();
        assertThat(encryptedEnvelope.encryptedKey()).isEqualTo(encryptedDataKey);
        assertThat(decryptedData).isEqualTo(testData);

        // Verify interactions
        verify(kmsClient).generateDataKey(any(GenerateDataKeyRequest.class));
        verify(kmsClient).decrypt(any(DecryptRequest.class));
    }

    @Test
    void shouldClearSensitiveDataFromMemory() {
        // Arrange
        byte[] testData = "test data".getBytes();
        byte[] encryptedDataKey = "encrypted-key".getBytes();
        byte[] dataKey = "0123456789abcdef".getBytes(); // 16 bytes: valid AES-128 key material

        when(kmsClient.generateDataKey(any(GenerateDataKeyRequest.class)))
            .thenReturn(GenerateDataKeyResponse.builder()
                .plaintext(SdkBytes.fromByteArray(dataKey))
                .ciphertextBlob(SdkBytes.fromByteArray(encryptedDataKey))
                .build());

        when(kmsClient.decrypt(any(DecryptRequest.class)))
            .thenReturn(DecryptResponse.builder()
                .plaintext(SdkBytes.fromByteArray(dataKey))
                .build());

        // Act
        envelopeEncryptionService.encryptLargeData(testData);
        envelopeEncryptionService.decryptLargeData(new EncryptedEnvelope(testData, encryptedDataKey));

        // Note: Memory clearing is difficult to test directly.
        // In real tests, you would verify no sensitive data remains in memory traces.
    }
}
```
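
Since memory clearing is hard to assert from a unit test, it helps to see the pattern the service under test is expected to follow. Below is a minimal, self-contained sketch using plain JCA (no KMS involved; the `DataKeyHygiene` class and its method names are illustrative, not part of the service above): use the plaintext data key for AES-GCM, then overwrite the key bytes with `Arrays.fill` in a `finally` block. In the real flow, the key bytes would come from `kmsClient.generateDataKey(...).plaintext()`.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;

public class DataKeyHygiene {

    // Encrypts plaintext under the given data key (AES/GCM), then zeroes the key bytes.
    public static byte[] encryptThenWipe(byte[] dataKey, byte[] plaintext) {
        try {
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(dataKey, "AES"),   // SecretKeySpec copies the key internally
                new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(plaintext);

            // Prepend the IV so the result is self-describing
            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("Envelope encryption failed", e);
        } finally {
            // Best-effort wipe of the caller's key array
            Arrays.fill(dataKey, (byte) 0);
        }
    }

    // Decrypts an IV-prefixed ciphertext, then zeroes the key bytes.
    public static byte[] decrypt(byte[] dataKey, byte[] ivAndCiphertext) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(dataKey, "AES"),
                new GCMParameterSpec(128, Arrays.copyOfRange(ivAndCiphertext, 0, 12)));
            return cipher.doFinal(Arrays.copyOfRange(ivAndCiphertext, 12, ivAndCiphertext.length));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("Envelope decryption failed", e);
        } finally {
            Arrays.fill(dataKey, (byte) 0);
        }
    }
}
```

Note that wiping the array is best effort only: the JVM may have copied the key bytes elsewhere (e.g. inside `SecretKeySpec`), which is why the test above can only verify observable behavior.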

## Testing Digital Signatures

### Digital Signature Tests

```java
class DigitalSignatureServiceTest {

    @Mock
    private KmsClient kmsClient;

    @InjectMocks
    private DigitalSignatureService signatureService;

    @Test
    void shouldSignAndVerifyData() {
        // Arrange
        String message = "test message";
        byte[] signature = "signature-data".getBytes();

        when(kmsClient.sign(any(SignRequest.class)))
            .thenReturn(SignResponse.builder()
                .signature(SdkBytes.fromByteArray(signature))
                .build());

        when(kmsClient.verify(any(VerifyRequest.class)))
            .thenReturn(VerifyResponse.builder()
                .signatureValid(true)
                .build());

        // Act
        byte[] signedSignature = signatureService.signData(message);
        boolean isValid = signatureService.verifySignature(message, signedSignature);

        // Assert
        assertThat(signedSignature).isEqualTo(signature);
        assertThat(isValid).isTrue();
    }

    @Test
    void shouldDetectInvalidSignature() {
        // Arrange
        String message = "test message";
        byte[] signature = "invalid-signature".getBytes();

        when(kmsClient.verify(any(VerifyRequest.class)))
            .thenReturn(VerifyResponse.builder()
                .signatureValid(false)
                .build());

        // Act & Assert
        assertThatThrownBy(() ->
            signatureService.verifySignature(message, signature))
            .isInstanceOf(SecurityException.class)
            .hasMessageContaining("Invalid signature");
    }
}
```
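
For intuition about what the mocked sign/verify pair stands in for, here is a self-contained JCA equivalent (no KMS involved; the class and method names are illustrative). KMS's `ECDSA_SHA_256` signing algorithm corresponds to `SHA256withECDSA` in the JCA; the main difference is that with KMS the private key never leaves the service.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class LocalSignatureDemo {

    // Local analogue of kmsClient.sign(...) with ECDSA_SHA_256
    public static byte[] sign(PrivateKey privateKey, byte[] message) throws GeneralSecurityException {
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(privateKey);
        signer.update(message);
        return signer.sign();
    }

    // Local analogue of kmsClient.verify(...)
    public static boolean verify(PublicKey publicKey, byte[] message, byte[] signature)
            throws GeneralSecurityException {
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(publicKey);
        verifier.update(message);
        return verifier.verify(signature);
    }

    public static void main(String[] args) throws Exception {
        // Generate an EC P-256 key pair (with KMS this key pair lives inside the service)
        KeyPairGenerator generator = KeyPairGenerator.getInstance("EC");
        generator.initialize(256);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] message = "test message".getBytes(StandardCharsets.UTF_8);
        byte[] signature = sign(keyPair.getPrivate(), message);

        System.out.println(verify(keyPair.getPublic(), message, signature));               // true
        System.out.println(verify(keyPair.getPublic(), "tampered".getBytes(), signature)); // false
    }
}
```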

## Performance Testing

### Performance Test with JMH

JMH benchmarks run outside the Spring context, so Spring annotations such as `@MockBean` and `@Autowired` have no effect here. Wire the mock and the service manually in a `@Setup` method, and keep stubbing out of the measured code path:

```java
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 5, time = 1)
@Fork(1)
@State(Scope.Benchmark)
public class KmsPerformanceTest {

    private KmsClient kmsClient;
    private KmsEncryptionService encryptionService;

    @Setup
    public void setup() {
        kmsClient = mock(KmsClient.class);

        when(kmsClient.encrypt(any(EncryptRequest.class)))
            .thenReturn(EncryptResponse.builder()
                .ciphertextBlob(SdkBytes.fromString("encrypted", StandardCharsets.UTF_8))
                .build());

        when(kmsClient.decrypt(any(DecryptRequest.class)))
            .thenReturn(DecryptResponse.builder()
                .plaintext(SdkBytes.fromString("decrypted", StandardCharsets.UTF_8))
                .build());

        encryptionService = new KmsEncryptionService(kmsClient);
    }

    @Benchmark
    public void testEncryptionPerformance(Blackhole bh) {
        bh.consume(encryptionService.encrypt("performance test data with some content"));
    }

    @Benchmark
    public void testDecryptionPerformance(Blackhole bh) {
        bh.consume(encryptionService.decrypt("encrypted-performance-data"));
    }
}
```

## Testing Error Scenarios

### Error Handling Tests

```java
class KmsErrorHandlingTest {

    @Mock
    private KmsClient kmsClient;

    @InjectMocks
    private KmsEncryptionService encryptionService;

    @Test
    void shouldHandleThrottlingException() {
        // Arrange
        when(kmsClient.encrypt(any(EncryptRequest.class)))
            .thenThrow(KmsException.builder()
                .awsErrorDetails(AwsErrorDetails.builder()
                    .errorCode("ThrottlingException")
                    .errorMessage("Rate exceeded")
                    .build())
                .build());

        // Act & Assert
        assertThatThrownBy(() -> encryptionService.encrypt("test"))
            .isInstanceOf(RuntimeException.class)
            .hasMessageContaining("Rate limit exceeded");
    }

    @Test
    void shouldHandleDisabledKey() {
        // Arrange
        when(kmsClient.encrypt(any(EncryptRequest.class)))
            .thenThrow(KmsException.builder()
                .awsErrorDetails(AwsErrorDetails.builder()
                    .errorCode("DisabledException")
                    .errorMessage("Key is disabled")
                    .build())
                .build());

        // Act & Assert
        assertThatThrownBy(() -> encryptionService.encrypt("test"))
            .isInstanceOf(RuntimeException.class)
            .hasMessageContaining("Key is disabled");
    }

    @Test
    void shouldHandleNotFoundException() {
        // Arrange
        when(kmsClient.encrypt(any(EncryptRequest.class)))
            .thenThrow(KmsException.builder()
                .awsErrorDetails(AwsErrorDetails.builder()
                    .errorCode("NotFoundException")
                    .errorMessage("Key not found")
                    .build())
                .build());

        // Act & Assert
        assertThatThrownBy(() -> encryptionService.encrypt("test"))
            .isInstanceOf(RuntimeException.class)
            .hasMessageContaining("Key not found");
    }
}
```
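
These tests assume the service translates KMS error codes into user-facing messages. A minimal sketch of that translation (the class name, method name, and message strings are assumptions chosen to match the expectations in the tests above):

```java
import java.util.Map;

public class KmsErrorMessages {

    private static final Map<String, String> MESSAGES = Map.of(
        "ThrottlingException", "Rate limit exceeded",
        "DisabledException", "Key is disabled",
        "NotFoundException", "Key not found");

    // Maps an AWS error code (e.g. from KmsException.awsErrorDetails().errorCode())
    // to the message the service surfaces to callers; unknown codes get a generic message.
    public static String toUserMessage(String awsErrorCode) {
        return MESSAGES.getOrDefault(awsErrorCode, "Encryption failed");
    }
}
```

Keeping the mapping in one place makes it trivial to add a test case per error code without touching the service logic.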

## Integration Testing with LocalStack

### Testcontainers KMS Setup

```java
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.services.kms.KmsClient;

import static org.testcontainers.containers.localstack.LocalStackContainer.Service.KMS;

@SpringBootTest
@Testcontainers
class KmsAwsLocalIntegrationTest {

    @Container
    private static final LocalStackContainer localStack =
        new LocalStackContainer(DockerImageName.parse("localstack/localstack:latest"))
            .withServices(KMS)
            .withEnv("DEFAULT_REGION", "us-east-1");

    private KmsClient kmsClient;

    @BeforeEach
    void setup() {
        kmsClient = KmsClient.builder()
            .region(Region.of(localStack.getRegion()))
            .endpointOverride(localStack.getEndpointOverride(KMS))
            .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(localStack.getAccessKey(), localStack.getSecretKey())))
            .build();
    }

    @Test
    void shouldCreateKeyInLocalKms() {
        // This test creates a real key in the local KMS instance
        CreateKeyRequest request = CreateKeyRequest.builder()
            .description("Test key")
            .keyUsage(KeyUsageType.ENCRYPT_DECRYPT)
            .build();

        CreateKeyResponse response = kmsClient.createKey(request);
        assertThat(response.keyMetadata().keyId()).isNotEmpty();
    }
}
```

508
skills/aws-java/aws-sdk-java-v2-lambda/SKILL.md
Normal file
@@ -0,0 +1,508 @@
---
name: aws-sdk-java-v2-lambda
description: AWS Lambda patterns using AWS SDK for Java 2.x. Use when invoking Lambda functions, creating/updating functions, managing function configurations, working with Lambda layers, or integrating Lambda with Spring Boot applications.
category: aws
tags: [aws, lambda, java, sdk, serverless, functions]
version: 1.1.0
allowed-tools: Read, Write, Bash
---

# AWS SDK for Java 2.x - AWS Lambda

## When to Use

Use this skill when:
- Invoking Lambda functions programmatically
- Creating or updating Lambda functions
- Managing Lambda function configurations
- Working with Lambda environment variables
- Managing Lambda layers and aliases
- Implementing asynchronous Lambda invocations
- Integrating Lambda with Spring Boot

## Overview

AWS Lambda is a compute service that runs code without requiring you to manage servers. Code scales up and down automatically with pay-per-use pricing. Use this skill to implement AWS Lambda operations with the AWS SDK for Java 2.x in applications and services.

## Dependencies

```xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>lambda</artifactId>
    <!-- version managed by the AWS SDK BOM -->
</dependency>
```

## Client Setup

To use AWS Lambda, create a LambdaClient with the required region configuration:

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;

LambdaClient lambdaClient = LambdaClient.builder()
    .region(Region.US_EAST_1)
    .build();
```

For asynchronous operations, use LambdaAsyncClient:

```java
import software.amazon.awssdk.services.lambda.LambdaAsyncClient;

LambdaAsyncClient asyncLambdaClient = LambdaAsyncClient.builder()
    .region(Region.US_EAST_1)
    .build();
```

## Invoke Lambda Function

### Synchronous Invocation

Invoke Lambda functions synchronously to get immediate results:

```java
import software.amazon.awssdk.services.lambda.model.*;
import software.amazon.awssdk.core.SdkBytes;

public String invokeLambda(LambdaClient lambdaClient,
                           String functionName,
                           String payload) {
    InvokeRequest request = InvokeRequest.builder()
        .functionName(functionName)
        .payload(SdkBytes.fromUtf8String(payload))
        .build();

    InvokeResponse response = lambdaClient.invoke(request);

    return response.payload().asUtf8String();
}
```

### Asynchronous Invocation

Use asynchronous invocation for fire-and-forget scenarios:

```java
public void invokeLambdaAsync(LambdaClient lambdaClient,
                              String functionName,
                              String payload) {
    InvokeRequest request = InvokeRequest.builder()
        .functionName(functionName)
        .invocationType(InvocationType.EVENT) // Asynchronous
        .payload(SdkBytes.fromUtf8String(payload))
        .build();

    InvokeResponse response = lambdaClient.invoke(request);

    System.out.println("Status: " + response.statusCode());
}
```

### Invoke with JSON Objects

Work with JSON payloads for complex data structures:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public <T> String invokeLambdaWithObject(LambdaClient lambdaClient,
                                         String functionName,
                                         T requestObject) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    String jsonPayload = mapper.writeValueAsString(requestObject);

    InvokeRequest request = InvokeRequest.builder()
        .functionName(functionName)
        .payload(SdkBytes.fromUtf8String(jsonPayload))
        .build();

    InvokeResponse response = lambdaClient.invoke(request);

    return response.payload().asUtf8String();
}
```

### Parse Typed Responses

Parse JSON responses into typed objects:

```java
public <T> T invokeLambdaAndParse(LambdaClient lambdaClient,
                                  String functionName,
                                  Object request,
                                  Class<T> responseType) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    String jsonPayload = mapper.writeValueAsString(request);

    InvokeRequest invokeRequest = InvokeRequest.builder()
        .functionName(functionName)
        .payload(SdkBytes.fromUtf8String(jsonPayload))
        .build();

    InvokeResponse response = lambdaClient.invoke(invokeRequest);

    String responseJson = response.payload().asUtf8String();

    return mapper.readValue(responseJson, responseType);
}
```

## Function Management

### List Functions

List all Lambda functions for the current account:

```java
public List<FunctionConfiguration> listFunctions(LambdaClient lambdaClient) {
    ListFunctionsResponse response = lambdaClient.listFunctions();

    return response.functions();
}
```

### Get Function Configuration

Retrieve function configuration and metadata:

```java
public FunctionConfiguration getFunctionConfig(LambdaClient lambdaClient,
                                               String functionName) {
    GetFunctionRequest request = GetFunctionRequest.builder()
        .functionName(functionName)
        .build();

    GetFunctionResponse response = lambdaClient.getFunction(request);

    return response.configuration();
}
```

### Update Function Code

Update Lambda function code with a new deployment package:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public void updateFunctionCode(LambdaClient lambdaClient,
                               String functionName,
                               String zipFilePath) throws IOException {
    byte[] zipBytes = Files.readAllBytes(Paths.get(zipFilePath));

    UpdateFunctionCodeRequest request = UpdateFunctionCodeRequest.builder()
        .functionName(functionName)
        .zipFile(SdkBytes.fromByteArray(zipBytes))
        .publish(true)
        .build();

    UpdateFunctionCodeResponse response = lambdaClient.updateFunctionCode(request);

    System.out.println("Updated function version: " + response.version());
}
```

### Update Function Configuration

Modify function settings like timeout, memory, and environment variables:

```java
public void updateFunctionConfiguration(LambdaClient lambdaClient,
                                        String functionName,
                                        Map<String, String> environment) {
    Environment env = Environment.builder()
        .variables(environment)
        .build();

    UpdateFunctionConfigurationRequest request = UpdateFunctionConfigurationRequest.builder()
        .functionName(functionName)
        .environment(env)
        .timeout(60)
        .memorySize(512)
        .build();

    lambdaClient.updateFunctionConfiguration(request);
}
```

### Create Function

Create new Lambda functions with code and configuration:

```java
public void createFunction(LambdaClient lambdaClient,
                           String functionName,
                           String roleArn,
                           String handler,
                           String zipFilePath) throws IOException {
    byte[] zipBytes = Files.readAllBytes(Paths.get(zipFilePath));

    FunctionCode code = FunctionCode.builder()
        .zipFile(SdkBytes.fromByteArray(zipBytes))
        .build();

    CreateFunctionRequest request = CreateFunctionRequest.builder()
        .functionName(functionName)
        .runtime(Runtime.JAVA17)
        .role(roleArn)
        .handler(handler)
        .code(code)
        .timeout(60)
        .memorySize(512)
        .build();

    CreateFunctionResponse response = lambdaClient.createFunction(request);

    System.out.println("Function ARN: " + response.functionArn());
}
```

### Delete Function

Remove Lambda functions when no longer needed:

```java
public void deleteFunction(LambdaClient lambdaClient, String functionName) {
    DeleteFunctionRequest request = DeleteFunctionRequest.builder()
        .functionName(functionName)
        .build();

    lambdaClient.deleteFunction(request);
}
```

## Spring Boot Integration

### Configuration

Configure Lambda clients as Spring beans:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LambdaConfiguration {

    @Bean
    public LambdaClient lambdaClient() {
        return LambdaClient.builder()
            .region(Region.US_EAST_1)
            .build();
    }
}
```

### Lambda Invoker Service

Create a service for Lambda function invocation:

```java
import org.springframework.stereotype.Service;
import org.springframework.beans.factory.annotation.Autowired;

@Service
public class LambdaInvokerService {

    private final LambdaClient lambdaClient;
    private final ObjectMapper objectMapper;

    @Autowired
    public LambdaInvokerService(LambdaClient lambdaClient, ObjectMapper objectMapper) {
        this.lambdaClient = lambdaClient;
        this.objectMapper = objectMapper;
    }

    public <T, R> R invoke(String functionName, T request, Class<R> responseType) {
        try {
            String jsonPayload = objectMapper.writeValueAsString(request);

            InvokeRequest invokeRequest = InvokeRequest.builder()
                .functionName(functionName)
                .payload(SdkBytes.fromUtf8String(jsonPayload))
                .build();

            InvokeResponse response = lambdaClient.invoke(invokeRequest);

            if (response.functionError() != null) {
                throw new LambdaInvocationException(
                    "Lambda function error: " + response.functionError());
            }

            String responseJson = response.payload().asUtf8String();

            return objectMapper.readValue(responseJson, responseType);

        } catch (Exception e) {
            throw new RuntimeException("Failed to invoke Lambda function", e);
        }
    }

    public void invokeAsync(String functionName, Object request) {
        try {
            String jsonPayload = objectMapper.writeValueAsString(request);

            InvokeRequest invokeRequest = InvokeRequest.builder()
                .functionName(functionName)
                .invocationType(InvocationType.EVENT)
                .payload(SdkBytes.fromUtf8String(jsonPayload))
                .build();

            lambdaClient.invoke(invokeRequest);

        } catch (Exception e) {
            throw new RuntimeException("Failed to invoke Lambda function async", e);
        }
    }
}
```

### Typed Lambda Client

Create type-safe interfaces for Lambda services:

```java
import org.springframework.beans.factory.annotation.Value;

public interface OrderProcessor {
    OrderResponse processOrder(OrderRequest request);
}

@Service
public class LambdaOrderProcessor implements OrderProcessor {

    private final LambdaInvokerService lambdaInvoker;

    @Value("${lambda.order-processor.function-name}")
    private String functionName;

    public LambdaOrderProcessor(LambdaInvokerService lambdaInvoker) {
        this.lambdaInvoker = lambdaInvoker;
    }

    @Override
    public OrderResponse processOrder(OrderRequest request) {
        return lambdaInvoker.invoke(functionName, request, OrderResponse.class);
    }
}
```

## Error Handling

Implement comprehensive error handling for Lambda operations:

```java
public String invokeLambdaSafe(LambdaClient lambdaClient,
                               String functionName,
                               String payload) {
    try {
        InvokeRequest request = InvokeRequest.builder()
            .functionName(functionName)
            .payload(SdkBytes.fromUtf8String(payload))
            .build();

        InvokeResponse response = lambdaClient.invoke(request);

        // Check for function error
        if (response.functionError() != null) {
            String errorMessage = response.payload().asUtf8String();
            throw new RuntimeException("Lambda error: " + errorMessage);
        }

        // Check status code
        if (response.statusCode() != 200) {
            throw new RuntimeException("Lambda invocation failed with status: " +
                response.statusCode());
        }

        return response.payload().asUtf8String();

    } catch (LambdaException e) {
        System.err.println("Lambda error: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}

public class LambdaInvocationException extends RuntimeException {
    public LambdaInvocationException(String message) {
        super(message);
    }

    public LambdaInvocationException(String message, Throwable cause) {
        super(message, cause);
    }
}
```

## Examples

For comprehensive code examples, see the references section:

- **Basic examples** - Simple invocation patterns and function management
- **Spring Boot integration** - Complete Spring Boot configuration and service patterns
- **Testing examples** - Unit and integration test patterns
- **Advanced patterns** - Complex scenarios and best practices

## Best Practices

1. **Reuse Lambda clients**: Create once and reuse across invocations
2. **Set appropriate timeouts**: Match client timeout to Lambda function timeout
3. **Use async invocation**: For fire-and-forget scenarios
4. **Handle errors properly**: Check for function errors and status codes
5. **Use environment variables**: For function configuration
6. **Implement retry logic**: For transient failures
7. **Monitor invocations**: Use CloudWatch metrics
8. **Version functions**: Use aliases and versions for production
9. **Use VPC**: For accessing resources in private subnets
10. **Optimize payload size**: Keep payloads small for better performance
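
Point 6 (retry logic for transient failures) can be sketched as a small generic helper with exponential backoff. The class and method names below are illustrative; in practice you would wrap the `lambdaClient.invoke(...)` call and scope the catch to retryable SDK exceptions such as `TooManyRequestsException`, rather than all `RuntimeException`s:

```java
import java.util.function.Supplier;

public class RetrySupport {

    // Retries a call up to maxAttempts times, doubling the delay each attempt.
    public static <T> T withRetry(Supplier<T> call, int maxAttempts, long baseDelayMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt == maxAttempts) {
                    break; // exhausted: rethrow the last failure below
                }
                try {
                    // Exponential backoff: base, 2x base, 4x base, ...
                    Thread.sleep(baseDelayMillis * (1L << (attempt - 1)));
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException("Interrupted during retry backoff", ie);
                }
            }
        }
        throw last;
    }
}
```

Usage would look like `RetrySupport.withRetry(() -> invokeLambda(lambdaClient, functionName, payload), 3, 100)`. For production use, consider the SDK's built-in retry policies (configurable via `ClientOverrideConfiguration`) before rolling your own.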

## Testing

Test Lambda services using mocks and test assertions:

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class LambdaInvokerServiceTest {

    @Mock
    private LambdaClient lambdaClient;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private LambdaInvokerService service;

    @Test
    void shouldInvokeLambdaSuccessfully() throws Exception {
        // OrderRequest/OrderResponse come from the typed-client example above
        when(objectMapper.writeValueAsString(any())).thenReturn("{}");
        when(lambdaClient.invoke(any(InvokeRequest.class)))
            .thenReturn(InvokeResponse.builder()
                .statusCode(200)
                .payload(SdkBytes.fromUtf8String("{\"status\":\"OK\"}"))
                .build());
        when(objectMapper.readValue("{\"status\":\"OK\"}", OrderResponse.class))
            .thenReturn(new OrderResponse());

        OrderResponse result = service.invoke("order-fn", new OrderRequest(), OrderResponse.class);

        assertThat(result).isNotNull();
        verify(lambdaClient).invoke(any(InvokeRequest.class));
    }
}
```

## Related Skills

- @aws-sdk-java-v2-core - Core AWS SDK patterns and client configuration
- @spring-boot-dependency-injection - Spring dependency injection best practices
- @unit-test-service-layer - Service testing patterns with Mockito
- @spring-boot-actuator - Production monitoring and health checks

## References

For detailed information and examples, see the following reference files:

- **[Official Documentation](references/official-documentation.md)** - AWS Lambda concepts, API reference, and official guidance
- **[Examples](references/examples.md)** - Complete code examples and integration patterns

## Additional Resources

- [Lambda Examples on GitHub](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/lambda)
- [Lambda API Reference](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/lambda/package-summary.html)
- [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)
544
skills/aws-java/aws-sdk-java-v2-lambda/references/examples.md
Normal file
@@ -0,0 +1,544 @@
# AWS Lambda Java SDK Examples

## Client Setup

### Basic Client Configuration
```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.LambdaAsyncClient;

// Create synchronous client
LambdaClient lambdaClient = LambdaClient.builder()
        .region(Region.US_EAST_1)
        .build();

// Create asynchronous client
LambdaAsyncClient asyncLambdaClient = LambdaAsyncClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

### Client with Configuration
```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.http.apache.ApacheHttpClient;

// The synchronous client needs a blocking HTTP client such as Apache;
// Netty's NettyNioAsyncHttpClient is only for LambdaAsyncClient.
LambdaClient lambdaClient = LambdaClient.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(DefaultCredentialsProvider.create())
        .httpClientBuilder(ApacheHttpClient.builder())
        .build();
```

## Function Invocation Examples

### Synchronous Invocation with String Payload
```java
import software.amazon.awssdk.services.lambda.model.*;
import software.amazon.awssdk.core.SdkBytes;

public String invokeLambdaSync(LambdaClient lambdaClient,
                               String functionName,
                               String payload) {
    InvokeRequest request = InvokeRequest.builder()
            .functionName(functionName)
            .payload(SdkBytes.fromUtf8String(payload))
            .build();

    InvokeResponse response = lambdaClient.invoke(request);

    // Check for function errors
    if (response.functionError() != null) {
        throw new RuntimeException("Lambda function error: " +
                response.payload().asUtf8String());
    }

    return response.payload().asUtf8String();
}
```

### Asynchronous Invocation
```java
import java.util.concurrent.CompletableFuture;

// Client-side async invocation: LambdaAsyncClient.invoke() returns a
// CompletableFuture. (For fire-and-forget delivery, set
// InvocationType.EVENT instead; that variant returns no payload.)
public CompletableFuture<String> invokeLambdaAsync(LambdaAsyncClient asyncLambdaClient,
                                                   String functionName,
                                                   String payload) {
    InvokeRequest request = InvokeRequest.builder()
            .functionName(functionName)
            .payload(SdkBytes.fromUtf8String(payload))
            .build();

    return asyncLambdaClient.invoke(request)
            .thenApply(response -> response.payload().asUtf8String());
}
```

### Invocation with JSON Object
```java
import com.fasterxml.jackson.databind.ObjectMapper;

public <T> String invokeLambdaWithObject(LambdaClient lambdaClient,
                                         String functionName,
                                         T requestObject) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    String jsonPayload = mapper.writeValueAsString(requestObject);

    InvokeRequest request = InvokeRequest.builder()
            .functionName(functionName)
            .payload(SdkBytes.fromUtf8String(jsonPayload))
            .build();

    InvokeResponse response = lambdaClient.invoke(request);

    return response.payload().asUtf8String();
}
```

### Parse Typed Response
```java
import com.fasterxml.jackson.databind.ObjectMapper;

public <T> T invokeLambdaAndParse(LambdaClient lambdaClient,
                                  String functionName,
                                  Object request,
                                  Class<T> responseType) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    String jsonPayload = mapper.writeValueAsString(request);

    InvokeRequest invokeRequest = InvokeRequest.builder()
            .functionName(functionName)
            .payload(SdkBytes.fromUtf8String(jsonPayload))
            .build();

    InvokeResponse response = lambdaClient.invoke(invokeRequest);
    String responseJson = response.payload().asUtf8String();

    return mapper.readValue(responseJson, responseType);
}
```

## Function Management Examples

### List Functions
```java
public List<FunctionConfiguration> listLambdaFunctions(LambdaClient lambdaClient) {
    ListFunctionsResponse response = lambdaClient.listFunctions();
    return response.functions();
}

// List all functions with pagination: a single ListFunctions call returns
// at most one page, so use the paginator, which follows nextMarker for you
public List<FunctionConfiguration> listAllFunctions(LambdaClient lambdaClient) {
    return lambdaClient.listFunctionsPaginator().stream()
            .flatMap(page -> page.functions().stream())
            .toList();
}
```

### Get Function Configuration
```java
public FunctionConfiguration getFunctionConfig(LambdaClient lambdaClient,
                                               String functionName) {
    GetFunctionRequest request = GetFunctionRequest.builder()
            .functionName(functionName)
            .build();

    GetFunctionResponse response = lambdaClient.getFunction(request);
    return response.configuration();
}
```

### Get Function Code
```java
// GetFunction does not return the ZIP bytes directly; its code() field is a
// FunctionCodeLocation containing a presigned S3 URL from which the
// deployment package can be downloaded.
public String getFunctionCodeLocation(LambdaClient lambdaClient,
                                      String functionName) {
    GetFunctionRequest request = GetFunctionRequest.builder()
            .functionName(functionName)
            .build();

    GetFunctionResponse response = lambdaClient.getFunction(request);
    return response.code().location();
}
```

### Update Function Code
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public void updateLambdaFunction(LambdaClient lambdaClient,
                                 String functionName,
                                 String zipFilePath) throws IOException {
    byte[] zipBytes = Files.readAllBytes(Paths.get(zipFilePath));

    UpdateFunctionCodeRequest request = UpdateFunctionCodeRequest.builder()
            .functionName(functionName)
            .zipFile(SdkBytes.fromByteArray(zipBytes))
            .publish(true) // Create new version
            .build();

    UpdateFunctionCodeResponse response = lambdaClient.updateFunctionCode(request);
    System.out.println("Updated function version: " + response.version());
}
```

### Update Function Configuration
```java
import java.util.Map;

public void updateFunctionConfig(LambdaClient lambdaClient,
                                 String functionName,
                                 Map<String, String> environment) {
    Environment env = Environment.builder()
            .variables(environment)
            .build();

    UpdateFunctionConfigurationRequest request = UpdateFunctionConfigurationRequest.builder()
            .functionName(functionName)
            .environment(env)
            .timeout(60)
            .memorySize(512)
            .build();

    lambdaClient.updateFunctionConfiguration(request);
}
```

### Create Function
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

public void createLambdaFunction(LambdaClient lambdaClient,
                                 String functionName,
                                 String roleArn,
                                 String handler,
                                 String zipFilePath) throws IOException {
    byte[] zipBytes = Files.readAllBytes(Paths.get(zipFilePath));

    FunctionCode code = FunctionCode.builder()
            .zipFile(SdkBytes.fromByteArray(zipBytes))
            .build();

    CreateFunctionRequest request = CreateFunctionRequest.builder()
            .functionName(functionName)
            .runtime(Runtime.JAVA17)
            .role(roleArn)
            .handler(handler)
            .code(code)
            .timeout(60)
            .memorySize(512)
            .environment(Environment.builder()
                    .variables(Map.of("ENV", "production"))
                    .build())
            .build();

    CreateFunctionResponse response = lambdaClient.createFunction(request);
    System.out.println("Function ARN: " + response.functionArn());
}
```

### Delete Function
```java
public void deleteLambdaFunction(LambdaClient lambdaClient, String functionName) {
    DeleteFunctionRequest request = DeleteFunctionRequest.builder()
            .functionName(functionName)
            .build();

    lambdaClient.deleteFunction(request);
}
```

## Spring Boot Integration Examples

### Configuration Class
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.LambdaAsyncClient;

@Configuration
public class LambdaConfiguration {

    @Bean
    public LambdaClient lambdaClient() {
        return LambdaClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }

    @Bean
    public LambdaAsyncClient asyncLambdaClient() {
        return LambdaAsyncClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }
}
```

### Lambda Invoker Service
```java
import org.springframework.stereotype.Service;
import org.springframework.beans.factory.annotation.Autowired;

@Service
public class LambdaInvokerService {

    private final LambdaClient lambdaClient;
    private final ObjectMapper objectMapper;

    @Autowired
    public LambdaInvokerService(LambdaClient lambdaClient,
                                ObjectMapper objectMapper) {
        this.lambdaClient = lambdaClient;
        this.objectMapper = objectMapper;
    }

    public <T, R> R invokeFunction(String functionName,
                                   T request,
                                   Class<R> responseType) {
        try {
            String jsonPayload = objectMapper.writeValueAsString(request);

            InvokeRequest invokeRequest = InvokeRequest.builder()
                    .functionName(functionName)
                    .payload(SdkBytes.fromUtf8String(jsonPayload))
                    .build();

            InvokeResponse response = lambdaClient.invoke(invokeRequest);

            if (response.functionError() != null) {
                throw new LambdaInvocationException(
                        "Lambda function error: " + response.functionError());
            }

            String responseJson = response.payload().asUtf8String();
            return objectMapper.readValue(responseJson, responseType);

        } catch (LambdaInvocationException e) {
            throw e; // surface function errors unwrapped
        } catch (Exception e) {
            throw new RuntimeException("Failed to invoke Lambda function", e);
        }
    }

    public void invokeFunctionAsync(String functionName, Object request) {
        try {
            String jsonPayload = objectMapper.writeValueAsString(request);

            InvokeRequest invokeRequest = InvokeRequest.builder()
                    .functionName(functionName)
                    .invocationType(InvocationType.EVENT)
                    .payload(SdkBytes.fromUtf8String(jsonPayload))
                    .build();

            lambdaClient.invoke(invokeRequest);

        } catch (Exception e) {
            throw new RuntimeException("Failed to invoke Lambda function async", e);
        }
    }
}
```

### Typed Lambda Client Interface
```java
public interface OrderProcessor {
    OrderResponse processOrder(OrderRequest request);
    CompletableFuture<OrderResponse> processOrderAsync(OrderRequest request);
}

@Service
public class LambdaOrderProcessor implements OrderProcessor {

    private final LambdaInvokerService lambdaInvoker;
    private final LambdaAsyncClient asyncLambdaClient;

    @Value("${lambda.order-processor.function-name}")
    private String functionName;

    public LambdaOrderProcessor(LambdaInvokerService lambdaInvoker,
                                LambdaAsyncClient asyncLambdaClient) {
        this.lambdaInvoker = lambdaInvoker;
        this.asyncLambdaClient = asyncLambdaClient;
    }

    @Override
    public OrderResponse processOrder(OrderRequest request) {
        return lambdaInvoker.invokeFunction(functionName, request, OrderResponse.class);
    }

    @Override
    public CompletableFuture<OrderResponse> processOrderAsync(OrderRequest request) {
        // Implement async invocation using the async client
        try {
            String jsonPayload = new ObjectMapper().writeValueAsString(request);

            InvokeRequest invokeRequest = InvokeRequest.builder()
                    .functionName(functionName)
                    .payload(SdkBytes.fromUtf8String(jsonPayload))
                    .build();

            return asyncLambdaClient.invoke(invokeRequest)
                    .thenApply(response -> {
                        try {
                            return new ObjectMapper().readValue(
                                    response.payload().asUtf8String(),
                                    OrderResponse.class);
                        } catch (Exception e) {
                            throw new RuntimeException("Failed to parse response", e);
                        }
                    });

        } catch (Exception e) {
            throw new RuntimeException("Failed to invoke Lambda function", e);
        }
    }
}
```

## Error Handling Examples

### Comprehensive Error Handling
```java
public String invokeLambdaWithFullErrorHandling(LambdaClient lambdaClient,
                                                String functionName,
                                                String payload) {
    try {
        InvokeRequest request = InvokeRequest.builder()
                .functionName(functionName)
                .payload(SdkBytes.fromUtf8String(payload))
                .build();

        InvokeResponse response = lambdaClient.invoke(request);

        // Check for function error
        if (response.functionError() != null) {
            String errorMessage = response.payload().asUtf8String();
            throw new LambdaInvocationException(
                    "Lambda function error: " + errorMessage);
        }

        // Check status code
        if (response.statusCode() != 200) {
            throw new LambdaInvocationException(
                    "Lambda invocation failed with status: " + response.statusCode());
        }

        return response.payload().asUtf8String();

    } catch (LambdaException e) {
        System.err.println("AWS Lambda error: " + e.awsErrorDetails().errorMessage());
        throw new LambdaInvocationException(
                "AWS Lambda service error: " + e.awsErrorDetails().errorMessage(), e);
    }
}

public class LambdaInvocationException extends RuntimeException {
    public LambdaInvocationException(String message) {
        super(message);
    }

    public LambdaInvocationException(String message, Throwable cause) {
        super(message, cause);
    }
}
```

## Testing Examples

### Unit Test for Lambda Service
```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import static org.mockito.Mockito.*;
import static org.assertj.core.api.Assertions.*;

@ExtendWith(MockitoExtension.class)
class LambdaInvokerServiceTest {

    @Mock
    private LambdaClient lambdaClient;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private LambdaInvokerService service;

    @Test
    void shouldInvokeLambdaSuccessfully() throws Exception {
        // Given
        OrderRequest request = new OrderRequest("ORDER-123");
        OrderResponse expectedResponse = new OrderResponse("SUCCESS");
        String jsonPayload = "{\"orderId\":\"ORDER-123\"}";
        String jsonResponse = "{\"status\":\"SUCCESS\"}";

        when(objectMapper.writeValueAsString(request))
                .thenReturn(jsonPayload);

        when(lambdaClient.invoke(any(InvokeRequest.class)))
                .thenReturn(InvokeResponse.builder()
                        .statusCode(200)
                        .payload(SdkBytes.fromUtf8String(jsonResponse))
                        .build());

        when(objectMapper.readValue(jsonResponse, OrderResponse.class))
                .thenReturn(expectedResponse);

        // When
        OrderResponse result = service.invokeFunction(
                "order-processor", request, OrderResponse.class);

        // Then
        assertThat(result).isEqualTo(expectedResponse);
        verify(lambdaClient).invoke(any(InvokeRequest.class));
    }

    @Test
    void shouldHandleFunctionError() throws Exception {
        // Given
        OrderRequest request = new OrderRequest("ORDER-123");
        String jsonPayload = "{\"orderId\":\"ORDER-123\"}";
        String errorResponse = "{\"errorMessage\":\"Invalid input\"}";

        when(objectMapper.writeValueAsString(request))
                .thenReturn(jsonPayload);

        when(lambdaClient.invoke(any(InvokeRequest.class)))
                .thenReturn(InvokeResponse.builder()
                        .statusCode(200)
                        .functionError("Unhandled")
                        .payload(SdkBytes.fromUtf8String(errorResponse))
                        .build());

        // When & Then
        assertThatThrownBy(() ->
                service.invokeFunction("order-processor", request, OrderResponse.class))
                .isInstanceOf(LambdaInvocationException.class)
                .hasMessageContaining("Lambda function error");
    }
}
```

## Maven Dependencies
```xml
<!-- AWS SDK for Java v2 Lambda -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>lambda</artifactId>
    <version>2.36.3</version> <!-- Use the latest version available -->
</dependency>

<!-- Jackson for JSON processing -->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>

<!-- Spring Boot support -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```

@@ -0,0 +1,112 @@
# AWS Lambda Official Documentation Reference

## Overview
AWS Lambda is a compute service that runs your code without the need to manage servers, scaling up and down automatically with pay-per-use pricing.

## Common Use Cases
- Stream processing: Process real-time data streams for analytics
- Web applications: Build scalable web apps that adjust capacity automatically
- Mobile backends: Create secure API backends
- IoT backends: Handle web, mobile, IoT, and third-party API requests
- File processing: Process files automatically when uploaded
- Database operations: Respond to database changes and automate data workflows
- Scheduled tasks: Run automated operations on a regular schedule

## How Lambda Works
1. You write and organize your code in Lambda functions
2. You control security through Lambda permissions using execution roles
3. Event sources and AWS services trigger your Lambda functions
4. Lambda runs your code with language-specific runtimes

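Step 4 above can be pictured as a handler contract. The sketch below is a plain-Java stand-in for the real `RequestHandler<I, O>` interface from `aws-lambda-java-core` (which also receives a `Context` argument); `HandlerDemo` and `GREETER` are illustrative names, not part of the SDK:

```java
// Hedged sketch: Lambda hands the deserialized event payload to a handler
// method and serializes the return value back to the caller. The generic
// shape below mirrors RequestHandler<I, O> so the idea is runnable here.
public class HandlerDemo {

    interface Handler<I, O> {
        O handleRequest(I input); // real interface: handleRequest(I input, Context ctx)
    }

    static final Handler<String, String> GREETER =
            input -> "Hello, " + input + "!";

    public static void main(String[] args) {
        // The runtime would call this with the event payload
        System.out.println(GREETER.handleRequest("Lambda"));
    }
}
```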
## Key Features

### Configuration & Security
- Environment variables modify behavior without deployments
- Versions safely test new features while maintaining stable production
- Lambda layers optimize code reuse across multiple functions
- Code signing ensures only approved code reaches production

### Performance
- Concurrency controls manage responsiveness and resource utilization
- Lambda SnapStart reduces cold start times to sub-second performance
- Response streaming delivers large payloads incrementally
- Container images package functions with complex dependencies

### Integration
- VPC networking secures sensitive resources and internal services
- File system integration shares persistent data across function invocations
- Function URLs create public APIs without additional services
- Lambda extensions augment functions with monitoring and operational tools

## AWS Lambda Java SDK API

### Key Classes
- `LambdaClient` - Synchronous service client
- `LambdaAsyncClient` - Asynchronous service client
- `LambdaClientBuilder` - Builder for the synchronous client
- `LambdaAsyncClientBuilder` - Builder for the asynchronous client
- `LambdaServiceClientConfiguration` - Client settings configuration

### Related Packages
- `software.amazon.awssdk.services.lambda.model` - API models
- `software.amazon.awssdk.services.lambda.transform` - Request/response transformations
- `software.amazon.awssdk.services.lambda.paginators` - Pagination utilities
- `software.amazon.awssdk.services.lambda.waiters` - Waiter utilities

### Authentication
Lambda supports AWS Signature Version 4 for API authentication.

### CA Requirements
Clients must trust these certificate authorities:
- Amazon Root CA 1
- Starfield Services Root Certificate Authority - G2
- Starfield Class 2 Certification Authority

## Core API Operations

### Function Management Operations
- `CreateFunction` - Create a new Lambda function
- `DeleteFunction` - Delete an existing function
- `GetFunction` - Retrieve function configuration
- `UpdateFunctionCode` - Update function code
- `UpdateFunctionConfiguration` - Update function settings
- `ListFunctions` - List functions for the account

### Invocation Operations
- `Invoke` - Invoke a Lambda function synchronously
- `Invoke` with `InvocationType.EVENT` - Asynchronous invocation

### Environment & Configuration
- Environment variable management
- Function configuration updates
- Version and alias management
- Layer management

## Examples Overview
The AWS documentation includes examples for:
- Basic Lambda function creation and invocation
- Function configuration and updates
- Environment variable management
- Function listing and cleanup
- Integration patterns

## Best Practices from Official Docs
- Reuse Lambda clients across invocations
- Set appropriate timeouts matching function requirements
- Use async invocation for fire-and-forget scenarios
- Implement proper error handling for function errors and status codes
- Use environment variables for configuration management
- Version functions for production stability
- Monitor invocations using CloudWatch metrics
- Implement retry logic for transient failures
- Use VPC integration for private resources
- Optimize payload sizes for performance

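The "implement retry logic for transient failures" bullet can be sketched without the SDK (the SDK also ships its own configurable retry policies, which are usually preferable); `RetryWithBackoff` and its parameters are illustrative names, not library API:

```java
import java.util.concurrent.Callable;

// Hedged sketch: exponential backoff with a cap. The delay schedule is a
// pure function of the attempt number so it is easy to verify in isolation.
public class RetryWithBackoff {

    /** Delay before retrying attempt n (0-based): base * 2^n, capped at maxDelayMs. */
    static long delayMs(int attempt, long baseMs, long maxDelayMs) {
        long delay = baseMs * (1L << Math.min(attempt, 30)); // clamp shift to avoid overflow
        return Math.min(delay, maxDelayMs);
    }

    static <T> T callWithRetry(Callable<T> task, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(delayMs(attempt, 100, 5_000)); // back off, then retry
            }
        }
        throw last; // all attempts exhausted
    }
}
```

In real code this would wrap only calls whose failures are known to be transient (throttling, timeouts), not function errors returned in the payload.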
## Security Considerations
- Use IAM roles with least privilege
- Implement proper Lambda permissions
- Use environment variables for configuration (prefer Secrets Manager for sensitive data)
- Enable CloudTrail logging
- Monitor security events with CloudWatch
- Use code signing for production deployments
- Implement proper authentication and authorization

310
skills/aws-java/aws-sdk-java-v2-messaging/SKILL.md
Normal file
@@ -0,0 +1,310 @@
---
name: aws-sdk-java-v2-messaging
description: Implement AWS messaging patterns using AWS SDK for Java 2.x for SQS queues and SNS topics. Send/receive messages, manage FIFO queues, implement DLQ, publish messages, manage subscriptions, and build pub/sub patterns.
category: aws
tags: [aws, sqs, sns, java, sdk, messaging, pub-sub, queues, events]
version: 1.1.0
allowed-tools: Read, Write, Bash, Grep
---

# AWS SDK for Java 2.x - Messaging (SQS & SNS)

## Overview

Provide comprehensive AWS messaging patterns using AWS SDK for Java 2.x for both SQS and SNS services. Include client setup, queue management, message operations, subscription management, and Spring Boot integration patterns.

## When to Use

Use this skill when working with:
- Amazon SQS queues for message queuing
- SNS topics for event publishing and notifications
- FIFO queues and standard queues
- Dead Letter Queues (DLQ) for failed-message handling
- SNS subscriptions with email, SMS, SQS, and Lambda endpoints
- Pub/sub messaging patterns and event-driven architectures
- Spring Boot integration with AWS messaging services
- Testing strategies using LocalStack or Testcontainers

## Quick Start

### Dependencies

```xml
<!-- SQS -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>sqs</artifactId>
</dependency>

<!-- SNS -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>sns</artifactId>
</dependency>
```

### Basic Client Setup

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sns.SnsClient;

SqsClient sqsClient = SqsClient.builder()
        .region(Region.US_EAST_1)
        .build();

SnsClient snsClient = SnsClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

## Examples

### Basic SQS Operations

#### Create and Send Message
```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.*;

// Setup SQS client
SqsClient sqsClient = SqsClient.builder()
        .region(Region.US_EAST_1)
        .build();

// Create queue
String queueUrl = sqsClient.createQueue(CreateQueueRequest.builder()
        .queueName("my-queue")
        .build()).queueUrl();

// Send message
String messageId = sqsClient.sendMessage(SendMessageRequest.builder()
        .queueUrl(queueUrl)
        .messageBody("Hello, SQS!")
        .build()).messageId();
```

#### Receive and Delete Message
```java
// Receive messages with long polling
ReceiveMessageResponse response = sqsClient.receiveMessage(ReceiveMessageRequest.builder()
        .queueUrl(queueUrl)
        .maxNumberOfMessages(10)
        .waitTimeSeconds(20)
        .build());

// Process and delete messages
response.messages().forEach(message -> {
    System.out.println("Received: " + message.body());
    sqsClient.deleteMessage(DeleteMessageRequest.builder()
            .queueUrl(queueUrl)
            .receiptHandle(message.receiptHandle())
            .build());
});
```

### Basic SNS Operations

#### Create Topic and Publish
```java
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.*;

// Setup SNS client
SnsClient snsClient = SnsClient.builder()
        .region(Region.US_EAST_1)
        .build();

// Create topic
String topicArn = snsClient.createTopic(CreateTopicRequest.builder()
        .name("my-topic")
        .build()).topicArn();

// Publish message
String messageId = snsClient.publish(PublishRequest.builder()
        .topicArn(topicArn)
        .subject("Test Notification")
        .message("Hello, SNS!")
        .build()).messageId();
```

### Advanced Examples

#### FIFO Queue Pattern
```java
import java.util.Map;
import java.util.UUID;

// Create FIFO queue (the queue name must end in ".fifo")
Map<QueueAttributeName, String> attributes = Map.of(
        QueueAttributeName.FIFO_QUEUE, "true",
        QueueAttributeName.CONTENT_BASED_DEDUPLICATION, "true"
);

String fifoQueueUrl = sqsClient.createQueue(CreateQueueRequest.builder()
        .queueName("my-queue.fifo")
        .attributes(attributes)
        .build()).queueUrl();

// Send FIFO message with group ID
String fifoMessageId = sqsClient.sendMessage(SendMessageRequest.builder()
        .queueUrl(fifoQueueUrl)
        .messageBody("Order #12345")
        .messageGroupId("orders")
        .messageDeduplicationId(UUID.randomUUID().toString())
        .build()).messageId();
```

#### SNS to SQS Subscription
```java
// Create SQS queue for subscription
String subscriptionQueueUrl = sqsClient.createQueue(CreateQueueRequest.builder()
        .queueName("notification-subscriber")
        .build()).queueUrl();

// Get queue ARN
String queueArn = sqsClient.getQueueAttributes(GetQueueAttributesRequest.builder()
        .queueUrl(subscriptionQueueUrl)
        .attributeNames(QueueAttributeName.QUEUE_ARN)
        .build()).attributes().get(QueueAttributeName.QUEUE_ARN);

// Subscribe SQS to SNS (the queue access policy must also allow the topic
// to send messages to this queue)
String subscriptionArn = snsClient.subscribe(SubscribeRequest.builder()
        .protocol("sqs")
        .endpoint(queueArn)
        .topicArn(topicArn)
        .build()).subscriptionArn();
```

### Spring Boot Integration Example

```java
@Service
@RequiredArgsConstructor
public class OrderNotificationService {

    private final SnsClient snsClient;
    private final ObjectMapper objectMapper;

    @Value("${aws.sns.order-topic-arn}")
    private String orderTopicArn;

    public void sendOrderNotification(Order order) {
        try {
            String jsonMessage = objectMapper.writeValueAsString(order);

            snsClient.publish(PublishRequest.builder()
                    .topicArn(orderTopicArn)
                    .subject("New Order Received")
                    .message(jsonMessage)
                    .messageAttributes(Map.of(
                            "orderType", MessageAttributeValue.builder()
                                    .dataType("String")
                                    .stringValue(order.getType())
                                    .build()))
                    .build());

        } catch (Exception e) {
            throw new RuntimeException("Failed to send order notification", e);
        }
    }
}
```

## Best Practices

### SQS Best Practices
- **Use long polling**: Set `waitTimeSeconds` up to 20 seconds (the maximum) to reduce empty responses
- **Batch operations**: Use `sendMessageBatch` for multiple messages to reduce API calls
- **Visibility timeout**: Set appropriately based on message processing time (default 30 seconds)
- **Delete messages**: Always delete messages after successful processing
- **Handle duplicates**: Implement idempotent processing for retries
- **Implement DLQ**: Route failed messages to dead letter queues for analysis
- **Monitor queue depth**: Use CloudWatch alarms for high queue backlog
- **Use FIFO queues**: When message order and deduplication are critical

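The "handle duplicates" bullet above follows from standard queues being at-least-once: the same message can be delivered more than once, so consumers track what they have already processed. A minimal sketch (`IdempotentConsumer` is an illustrative name; the in-memory set would typically be a database or cache keyed by message ID in production):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch: run the handler only the first time a message ID is seen.
public class IdempotentConsumer {

    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** Returns true and runs the handler only on first delivery of messageId. */
    public boolean processOnce(String messageId, Runnable handler) {
        if (!processed.add(messageId)) {
            return false; // duplicate delivery — skip
        }
        handler.run();
        return true;
    }
}
```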
### SNS Best Practices
- **Use filter policies**: Reduce noise by filtering messages at the source
- **Message attributes**: Add metadata for subscription routing decisions
- **Retry logic**: Handle transient failures with exponential backoff
- **Monitor failed deliveries**: Set up CloudWatch alarms for failed notifications
- **Security**: Use IAM policies for access control and data encryption
- **FIFO topics**: Use when order and deduplication are critical
- **Avoid large payloads**: Keep messages under 256 KB (the SNS message size limit)

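The filter-policy bullet above works roughly like this: a policy maps attribute names to allowed values, and a message is delivered to the subscription only if every policy key matches one of its message attributes. SNS evaluates this server-side; the sketch below just shows the matching idea, simplified to string attributes (`FilterPolicyDemo` is an illustrative name):

```java
import java.util.List;
import java.util.Map;

// Hedged sketch of filter-policy matching: every policy key must be present
// on the message with one of the allowed values.
public class FilterPolicyDemo {

    static boolean matches(Map<String, List<String>> policy,
                           Map<String, String> attributes) {
        return policy.entrySet().stream().allMatch(e -> {
            String actual = attributes.get(e.getKey()); // message attribute value
            return actual != null && e.getValue().contains(actual);
        });
    }
}
```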
### General Guidelines
- **Region consistency**: Use the same region for all AWS resources
- **Resource naming**: Use consistent naming conventions for queues and topics
- **Error handling**: Implement proper exception handling and logging
- **Testing**: Use LocalStack for local development and testing
- **Documentation**: Document subscription endpoints and message formats

## Instructions
|
||||
|
||||
### Setup AWS Credentials
|
||||
Configure AWS credentials using environment variables, AWS CLI, or IAM roles:
|
||||
```bash
|
||||
export AWS_ACCESS_KEY_ID=your-access-key
|
||||
export AWS_SECRET_ACCESS_KEY=your-secret-key
|
||||
export AWS_REGION=us-east-1
|
||||
```
|
||||
|

### Configure Clients

```java
// Basic client configuration
SqsClient sqsClient = SqsClient.builder()
        .region(Region.US_EAST_1)
        .build();

// Advanced client with custom configuration
SnsClient snsClient = SnsClient.builder()
        .region(Region.US_EAST_1)
        .credentialsProvider(DefaultCredentialsProvider.create())
        .httpClient(UrlConnectionHttpClient.create())
        .build();
```

### Implement Message Processing

1. **Connect** to SQS/SNS using the AWS SDK clients
2. **Create** queues and topics as needed
3. **Send/receive** messages with appropriate timeout settings
4. **Process** messages in batches for efficiency
5. **Delete** messages after successful processing
6. **Handle** failures with proper error handling and retries
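The receive/process/delete cycle above hinges on the visibility timeout: a received message is only hidden, and reappears if it is never deleted (step 5). A toy in-memory model with a manual clock (a teaching sketch, not an SQS client; all names are illustrative) makes the semantics concrete:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class VisibilityDemo {
    record InFlight(String body, long visibleAt) {}

    private final Deque<String> visible = new ArrayDeque<>();
    private final List<InFlight> inFlight = new ArrayList<>();
    private long now = 0;

    void send(String body) { visible.add(body); }

    /** Receive one message, hiding it for `timeout` ticks. Returns null if none is visible. */
    String receive(long timeout) {
        // Messages whose visibility timeout expired become receivable again.
        inFlight.removeIf(m -> {
            if (m.visibleAt() <= now) { visible.add(m.body()); return true; }
            return false;
        });
        String body = visible.poll();
        if (body != null) inFlight.add(new InFlight(body, now + timeout));
        return body;
    }

    void delete(String body) { inFlight.removeIf(m -> m.body().equals(body)); }

    void tick(long ticks) { now += ticks; }

    public static void main(String[] args) {
        VisibilityDemo q = new VisibilityDemo();
        q.send("a");
        q.receive(30);                     // hidden for 30 ticks
        q.tick(31);                        // consumer crashed: never deleted
        System.out.println(q.receive(30)); // prints a  (redelivered)
    }
}
```

Real SQS deletes by receipt handle rather than by body, and the clock is wall time; the simplification here only illustrates why skipping the delete causes redelivery.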

### Integrate with Spring Boot

1. **Configure** beans for `SqsClient` and `SnsClient` in `@Configuration` classes
2. **Use** `@Value` to inject queue URLs and topic ARNs from properties
3. **Create** service classes with business logic for messaging operations
4. **Implement** error handling with `@Retryable` or custom retry logic
5. **Test** integration using Testcontainers or LocalStack

### Monitor and Debug

- Use AWS CloudWatch for monitoring queue depth and message metrics
- Enable AWS SDK logging for debugging client operations
- Implement proper logging for message processing activities
- Use AWS X-Ray for distributed tracing in production environments

## Troubleshooting

### Common Issues

- **Queue does not exist**: Verify queue URL and permissions
- **Message not received**: Check visibility timeout and consumer logic
- **Permission denied**: Verify IAM policies and credentials
- **Connection timeout**: Check network connectivity and region configuration
- **Rate limiting**: Implement retry logic with exponential backoff

### Performance Optimization

- Use long polling to reduce empty responses
- Batch message operations to minimize API calls
- Adjust visibility timeout based on processing time
- Reuse clients; the SDK pools connections internally
- Use appropriate message sizes to avoid fragmentation

## Detailed References

For comprehensive API documentation and advanced patterns, see:

- [@references/detailed-sqs-operations] - Complete SQS operations reference
- [@references/detailed-sns-operations] - Complete SNS operations reference
- [@references/spring-boot-integration] - Spring Boot integration patterns
- [@references/aws-official-documentation] - Official AWS documentation and best practices
@@ -0,0 +1,158 @@
# AWS SQS & SNS Official Documentation Reference

This file contains reference information extracted from official AWS resources for the AWS SDK for Java 2.x messaging patterns.

## Source Documents

- [AWS Java SDK v2 Examples - SQS](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/sqs)
- [AWS Java SDK v2 Examples - SNS](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2/example_code/sns)
- [AWS SQS Developer Guide](https://docs.aws.amazon.com/sqs/latest/dg/)
- [AWS SNS Developer Guide](https://docs.aws.amazon.com/sns/latest/dg/)

## Amazon SQS Reference

### Core Operations

- **CreateQueue** - Create new SQS queue
- **DeleteMessage** - Delete individual message from queue
- **ListQueues** - List available queues
- **ReceiveMessage** - Receive messages from queue
- **SendMessage** - Send message to queue
- **SendMessageBatch** - Send multiple messages to queue

### Advanced Features

- **Large Message Handling** - Use S3 for messages larger than 256 KB
- **Batch Operations** - Process multiple messages efficiently
- **Long Polling** - Reduce empty responses with `waitTimeSeconds`
- **Visibility Timeout** - Control message visibility during processing
- **Dead Letter Queues (DLQ)** - Handle failed messages
- **FIFO Queues** - Ensure message ordering and deduplication

### Java SDK v2 Key Classes

```java
// Core clients and models
software.amazon.awssdk.services.sqs.SqsClient
software.amazon.awssdk.services.sqs.model.*
software.amazon.awssdk.services.sqs.model.QueueAttributeName
```

## Amazon SNS Reference

### Core Operations

- **CreateTopic** - Create new SNS topic
- **Publish** - Send message to topic
- **Subscribe** - Subscribe endpoint to topic
- **ListSubscriptions** - List topic subscriptions
- **Unsubscribe** - Remove subscription

### Advanced Features

- **Platform Endpoints** - Mobile push notifications
- **SMS Publishing** - Send SMS messages
- **FIFO Topics** - Ordered message delivery with deduplication
- **Filter Policies** - Filter messages based on attributes
- **Message Attributes** - Enrich messages with metadata
- **DLQ for Subscriptions** - Handle failed deliveries

### Java SDK v2 Key Classes

```java
// Core clients and models
software.amazon.awssdk.services.sns.SnsClient
software.amazon.awssdk.services.sns.model.*
software.amazon.awssdk.services.sns.model.MessageAttributeValue
```

## Best Practices from AWS

### SQS Best Practices

1. **Use Long Polling**: Set `waitTimeSeconds` (up to the 20-second maximum) to reduce empty responses
2. **Batch Operations**: Use `SendMessageBatch` for efficiency
3. **Visibility Timeout**: Set appropriately based on processing time
4. **Handle Duplicates**: Implement idempotent processing for retries
5. **Monitor Queue Depth**: Use CloudWatch for monitoring
6. **Implement DLQ**: Route failed messages for analysis

### SNS Best Practices

1. **Use Filter Policies**: Reduce noise by filtering messages
2. **Message Attributes**: Add metadata for routing decisions
3. **Retry Logic**: Handle transient failures gracefully
4. **Monitor Failed Deliveries**: Set up CloudWatch alarms
5. **Security**: Use IAM policies for access control
6. **FIFO Topics**: Use when order and deduplication are critical

## Error Handling Patterns

### Common SQS Errors

- **QueueDoesNotExistException**: Verify queue URL
- **MessageNotInflightException**: Check message visibility
- **OverLimitException**: Implement backoff/retry logic
- **InvalidAttributeValueException**: Validate queue attributes

### Common SNS Errors

- **NotFoundException**: Verify topic ARN
- **InvalidParameterException**: Validate subscription parameters
- **InternalFailureException**: Implement retry logic
- **AuthorizationErrorException**: Check IAM permissions

## Integration Patterns

### Spring Boot Integration

- Use `@Service` classes for business logic
- Inject `SqsClient` and `SnsClient` via constructor injection
- Configure clients with `@Configuration` beans
- Use `@Value` for externalizing configuration

### Testing Strategies

- Use LocalStack for local development
- Mock AWS services with Mockito for unit tests
- Integrate with Testcontainers for integration tests
- Test idempotent operations thoroughly

## Configuration Options

### SQS Configuration

```java
SqsClient sqsClient = SqsClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

### SNS Configuration

```java
SnsClient snsClient = SnsClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

### Advanced Configuration

- Override endpoint for local development
- Configure custom credentials provider
- Set custom HTTP client
- Configure retry policies

## Monitoring and Observability

### SQS Metrics

- ApproximateNumberOfMessagesVisible
- ApproximateNumberOfMessagesNotVisible
- ApproximateNumberOfMessagesDelayed
- NumberOfMessagesSent
- NumberOfMessagesReceived
- NumberOfEmptyReceives

### SNS Metrics

- NumberOfMessagesPublished
- NumberOfNotificationsDelivered
- NumberOfNotificationsFailed
- NumberOfNotificationsFilteredOut
- PublishSize

## Security Considerations

### IAM Permissions

- Grant least privilege access
- Use IAM roles for EC2/ECS
- Implement resource-based policies
- Use condition keys for fine-grained control

### Data Protection

- Encrypt sensitive data in messages
- Use KMS for message encryption
- Implement message signing
- Secure endpoints with HTTPS
@@ -0,0 +1,179 @@
# Detailed SNS Operations Reference

## Topic Management

### Create Standard Topic

```java
public String createTopic(SnsClient snsClient, String topicName) {
    CreateTopicRequest request = CreateTopicRequest.builder()
            .name(topicName)
            .build();

    CreateTopicResponse response = snsClient.createTopic(request);
    return response.topicArn();
}
```

### Create FIFO Topic

```java
public String createFifoTopic(SnsClient snsClient, String topicName) {
    Map<String, String> attributes = new HashMap<>();
    attributes.put("FifoTopic", "true");
    attributes.put("ContentBasedDeduplication", "true");

    CreateTopicRequest request = CreateTopicRequest.builder()
            .name(topicName + ".fifo")
            .attributes(attributes)
            .build();

    CreateTopicResponse response = snsClient.createTopic(request);
    return response.topicArn();
}
```

### Topic Operations

```java
public List<Topic> listTopics(SnsClient snsClient) {
    return snsClient.listTopics().topics();
}

public String getTopicArn(SnsClient snsClient, String topicName) {
    // createTopic is idempotent: it returns the existing topic's ARN if it already exists
    return snsClient.createTopic(CreateTopicRequest.builder()
            .name(topicName)
            .build()).topicArn();
}
```

## Message Publishing

### Publish Basic Message

```java
public String publishMessage(SnsClient snsClient, String topicArn, String message) {
    PublishRequest request = PublishRequest.builder()
            .topicArn(topicArn)
            .message(message)
            .build();

    PublishResponse response = snsClient.publish(request);
    return response.messageId();
}
```

### Publish with Subject

```java
public String publishMessageWithSubject(SnsClient snsClient,
                                        String topicArn,
                                        String subject,
                                        String message) {
    PublishRequest request = PublishRequest.builder()
            .topicArn(topicArn)
            .subject(subject)
            .message(message)
            .build();

    PublishResponse response = snsClient.publish(request);
    return response.messageId();
}
```

### Publish with Attributes

```java
public String publishMessageWithAttributes(SnsClient snsClient,
                                           String topicArn,
                                           String message,
                                           Map<String, String> attributes) {
    Map<String, MessageAttributeValue> messageAttributes = attributes.entrySet().stream()
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    e -> MessageAttributeValue.builder()
                            .dataType("String")
                            .stringValue(e.getValue())
                            .build()));

    PublishRequest request = PublishRequest.builder()
            .topicArn(topicArn)
            .message(message)
            .messageAttributes(messageAttributes)
            .build();

    PublishResponse response = snsClient.publish(request);
    return response.messageId();
}
```

### Publish FIFO Message

```java
public String publishFifoMessage(SnsClient snsClient,
                                 String topicArn,
                                 String message,
                                 String messageGroupId) {
    PublishRequest request = PublishRequest.builder()
            .topicArn(topicArn)
            .message(message)
            .messageGroupId(messageGroupId)
            .messageDeduplicationId(UUID.randomUUID().toString())
            .build();

    PublishResponse response = snsClient.publish(request);
    return response.messageId();
}
```

## Subscription Management

### Subscribe Email to Topic

```java
public String subscribeEmail(SnsClient snsClient, String topicArn, String email) {
    SubscribeRequest request = SubscribeRequest.builder()
            .protocol("email")
            .endpoint(email)
            .topicArn(topicArn)
            .build();

    SubscribeResponse response = snsClient.subscribe(request);
    // Email subscriptions remain "pending confirmation" until the recipient confirms
    return response.subscriptionArn();
}
```

### Subscribe SQS to Topic

```java
public String subscribeSqs(SnsClient snsClient, String topicArn, String queueArn) {
    SubscribeRequest request = SubscribeRequest.builder()
            .protocol("sqs")
            .endpoint(queueArn)
            .topicArn(topicArn)
            .build();

    SubscribeResponse response = snsClient.subscribe(request);
    // The queue's access policy must also allow sns.amazonaws.com to send messages to it
    return response.subscriptionArn();
}
```

### Subscribe Lambda to Topic

```java
public String subscribeLambda(SnsClient snsClient, String topicArn, String lambdaArn) {
    SubscribeRequest request = SubscribeRequest.builder()
            .protocol("lambda")
            .endpoint(lambdaArn)
            .topicArn(topicArn)
            .build();

    SubscribeResponse response = snsClient.subscribe(request);
    // The function's resource policy must allow SNS to invoke it (lambda:AddPermission)
    return response.subscriptionArn();
}
```

### Subscription Operations

```java
public List<Subscription> listSubscriptions(SnsClient snsClient, String topicArn) {
    return snsClient.listSubscriptionsByTopic(ListSubscriptionsByTopicRequest.builder()
            .topicArn(topicArn)
            .build()).subscriptions();
}

public void unsubscribe(SnsClient snsClient, String subscriptionArn) {
    snsClient.unsubscribe(UnsubscribeRequest.builder()
            .subscriptionArn(subscriptionArn)
            .build());
}
```
@@ -0,0 +1,199 @@
# Detailed SQS Operations Reference

## Queue Management

### Create Standard Queue

```java
public String createQueue(SqsClient sqsClient, String queueName) {
    CreateQueueRequest request = CreateQueueRequest.builder()
            .queueName(queueName)
            .build();

    CreateQueueResponse response = sqsClient.createQueue(request);
    return response.queueUrl();
}
```

### Create FIFO Queue

```java
public String createFifoQueue(SqsClient sqsClient, String queueName) {
    Map<QueueAttributeName, String> attributes = new HashMap<>();
    attributes.put(QueueAttributeName.FIFO_QUEUE, "true");
    attributes.put(QueueAttributeName.CONTENT_BASED_DEDUPLICATION, "true");

    CreateQueueRequest request = CreateQueueRequest.builder()
            .queueName(queueName + ".fifo")
            .attributes(attributes)
            .build();

    CreateQueueResponse response = sqsClient.createQueue(request);
    return response.queueUrl();
}
```

### Queue Operations

```java
public String getQueueUrl(SqsClient sqsClient, String queueName) {
    return sqsClient.getQueueUrl(GetQueueUrlRequest.builder()
            .queueName(queueName)
            .build()).queueUrl();
}

public List<String> listQueues(SqsClient sqsClient) {
    return sqsClient.listQueues().queueUrls();
}

public void purgeQueue(SqsClient sqsClient, String queueUrl) {
    sqsClient.purgeQueue(PurgeQueueRequest.builder()
            .queueUrl(queueUrl)
            .build());
}
```

## Message Operations

### Send Basic Message

```java
public String sendMessage(SqsClient sqsClient, String queueUrl, String messageBody) {
    SendMessageRequest request = SendMessageRequest.builder()
            .queueUrl(queueUrl)
            .messageBody(messageBody)
            .build();

    SendMessageResponse response = sqsClient.sendMessage(request);
    return response.messageId();
}
```

### Send Message with Attributes

```java
public String sendMessageWithAttributes(SqsClient sqsClient,
                                        String queueUrl,
                                        String messageBody,
                                        Map<String, String> attributes) {
    Map<String, MessageAttributeValue> messageAttributes = attributes.entrySet().stream()
            .collect(Collectors.toMap(
                    Map.Entry::getKey,
                    e -> MessageAttributeValue.builder()
                            .dataType("String")
                            .stringValue(e.getValue())
                            .build()));

    SendMessageRequest request = SendMessageRequest.builder()
            .queueUrl(queueUrl)
            .messageBody(messageBody)
            .messageAttributes(messageAttributes)
            .build();

    SendMessageResponse response = sqsClient.sendMessage(request);
    return response.messageId();
}
```

### Send FIFO Message

```java
public String sendFifoMessage(SqsClient sqsClient,
                              String queueUrl,
                              String messageBody,
                              String messageGroupId) {
    SendMessageRequest request = SendMessageRequest.builder()
            .queueUrl(queueUrl)
            .messageBody(messageBody)
            .messageGroupId(messageGroupId)
            .messageDeduplicationId(UUID.randomUUID().toString())
            .build();

    SendMessageResponse response = sqsClient.sendMessage(request);
    return response.messageId();
}
```
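Note that a random `UUID` as the deduplication ID makes every send look unique, so a client-side retry of the same payload would be enqueued twice. Deriving the ID from the message content (what SQS's own `ContentBasedDeduplication` does server-side) keeps retries idempotent within the 5-minute deduplication window. A JDK-only sketch; the helper name is illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class DedupId {
    static String forContent(String messageBody) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(messageBody.getBytes(StandardCharsets.UTF_8));
            // 64 hex chars, well within the 128-character MessageDeduplicationId limit
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 is mandatory in every JRE", e);
        }
    }

    public static void main(String[] args) {
        // Same payload -> same deduplication ID, so a retried send is suppressed by SQS
        System.out.println(forContent("order-42").equals(forContent("order-42"))); // prints true
    }
}
```

The result would be passed to `.messageDeduplicationId(DedupId.forContent(messageBody))` in place of the random UUID.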

### Send Batch Messages

```java
public void sendBatchMessages(SqsClient sqsClient,
                              String queueUrl,
                              List<String> messages) {
    // Note: a single SendMessageBatch call accepts at most 10 entries
    List<SendMessageBatchRequestEntry> entries = IntStream.range(0, messages.size())
            .mapToObj(i -> SendMessageBatchRequestEntry.builder()
                    .id(String.valueOf(i))
                    .messageBody(messages.get(i))
                    .build())
            .collect(Collectors.toList());

    SendMessageBatchRequest request = SendMessageBatchRequest.builder()
            .queueUrl(queueUrl)
            .entries(entries)
            .build();

    SendMessageBatchResponse response = sqsClient.sendMessageBatch(request);

    System.out.println("Successful: " + response.successful().size());
    System.out.println("Failed: " + response.failed().size());
}
```
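Because `SendMessageBatch` rejects batches of more than 10 entries, a caller with a longer list needs to partition it first. A small JDK-only helper (illustrative name) that a batching loop could wrap around the method above:

```java
import java.util.ArrayList;
import java.util.List;

public class Batches {
    /** Splits a list into consecutive chunks of at most 10 elements. */
    static <T> List<List<T>> ofTen(List<T> items) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += 10) {
            chunks.add(items.subList(i, Math.min(i + 10, items.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> messages = new ArrayList<>();
        for (int i = 0; i < 23; i++) messages.add(i);
        System.out.println(ofTen(messages).size()); // prints 3  (chunks of 10, 10, 3)
    }
}
```

Each chunk would then be sent with one `sendMessageBatch` call; the 256 KB total-payload limit per batch still applies.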

## Message Processing

### Receive Messages with Long Polling

```java
public List<Message> receiveMessages(SqsClient sqsClient, String queueUrl) {
    ReceiveMessageRequest request = ReceiveMessageRequest.builder()
            .queueUrl(queueUrl)
            .maxNumberOfMessages(10)
            .waitTimeSeconds(20) // Long polling
            .messageAttributeNames("All")
            .build();

    ReceiveMessageResponse response = sqsClient.receiveMessage(request);
    return response.messages();
}
```

### Delete Message

```java
public void deleteMessage(SqsClient sqsClient, String queueUrl, String receiptHandle) {
    DeleteMessageRequest request = DeleteMessageRequest.builder()
            .queueUrl(queueUrl)
            .receiptHandle(receiptHandle)
            .build();

    sqsClient.deleteMessage(request);
}
```

### Delete Batch Messages

```java
public void deleteBatchMessages(SqsClient sqsClient,
                                String queueUrl,
                                List<Message> messages) {
    List<DeleteMessageBatchRequestEntry> entries = messages.stream()
            .map(msg -> DeleteMessageBatchRequestEntry.builder()
                    .id(msg.messageId())
                    .receiptHandle(msg.receiptHandle())
                    .build())
            .collect(Collectors.toList());

    DeleteMessageBatchRequest request = DeleteMessageBatchRequest.builder()
            .queueUrl(queueUrl)
            .entries(entries)
            .build();

    sqsClient.deleteMessageBatch(request);
}
```

### Change Message Visibility

```java
public void changeMessageVisibility(SqsClient sqsClient,
                                    String queueUrl,
                                    String receiptHandle,
                                    int visibilityTimeout) {
    ChangeMessageVisibilityRequest request = ChangeMessageVisibilityRequest.builder()
            .queueUrl(queueUrl)
            .receiptHandle(receiptHandle)
            .visibilityTimeout(visibilityTimeout)
            .build();

    sqsClient.changeMessageVisibility(request);
}
```
@@ -0,0 +1,292 @@
# Spring Boot Integration Reference

## Configuration

### Basic Bean Configuration

```java
@Configuration
public class MessagingConfiguration {

    @Bean
    public SqsClient sqsClient() {
        return SqsClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }

    @Bean
    public SnsClient snsClient() {
        return SnsClient.builder()
                .region(Region.US_EAST_1)
                .build();
    }
}
```

### Configuration Properties

```yaml
# application.yml
aws:
  sqs:
    queue-url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue
  sns:
    topic-arn: arn:aws:sns:us-east-1:123456789012:my-topic
```

## Service Layer Integration

### SQS Message Service

```java
@Service
@RequiredArgsConstructor
public class SqsMessageService {

    private final SqsClient sqsClient;
    private final ObjectMapper objectMapper;

    @Value("${aws.sqs.queue-url}")
    private String queueUrl;

    public <T> void sendMessage(T message) {
        try {
            String jsonMessage = objectMapper.writeValueAsString(message);

            SendMessageRequest request = SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody(jsonMessage)
                    .build();

            sqsClient.sendMessage(request);

        } catch (Exception e) {
            throw new RuntimeException("Failed to send SQS message", e);
        }
    }

    public <T> List<T> receiveMessages(Class<T> messageType) {
        ReceiveMessageRequest request = ReceiveMessageRequest.builder()
                .queueUrl(queueUrl)
                .maxNumberOfMessages(10)
                .waitTimeSeconds(20)
                .build();

        ReceiveMessageResponse response = sqsClient.receiveMessage(request);

        return response.messages().stream()
                .map(msg -> {
                    try {
                        return objectMapper.readValue(msg.body(), messageType);
                    } catch (Exception e) {
                        throw new RuntimeException("Failed to parse message", e);
                    }
                })
                .collect(Collectors.toList());
    }

    public void deleteMessage(String receiptHandle) {
        DeleteMessageRequest request = DeleteMessageRequest.builder()
                .queueUrl(queueUrl)
                .receiptHandle(receiptHandle)
                .build();

        sqsClient.deleteMessage(request);
    }
}
```

### SNS Notification Service

```java
@Service
@RequiredArgsConstructor
public class SnsNotificationService {

    private final SnsClient snsClient;
    private final ObjectMapper objectMapper;

    @Value("${aws.sns.topic-arn}")
    private String topicArn;

    public void publishNotification(String subject, Object message) {
        try {
            String jsonMessage = objectMapper.writeValueAsString(message);

            PublishRequest request = PublishRequest.builder()
                    .topicArn(topicArn)
                    .subject(subject)
                    .message(jsonMessage)
                    .build();

            snsClient.publish(request);

        } catch (Exception e) {
            throw new RuntimeException("Failed to publish SNS notification", e);
        }
    }
}
```

## Message Listener Pattern

### Scheduled Polling

```java
@Service
@RequiredArgsConstructor
public class SqsMessageListener {

    private final SqsClient sqsClient;
    private final ObjectMapper objectMapper;

    @Value("${aws.sqs.queue-url}")
    private String queueUrl;

    // Requires @EnableScheduling on a configuration class
    @Scheduled(fixedDelay = 5000)
    public void pollMessages() {
        ReceiveMessageRequest request = ReceiveMessageRequest.builder()
                .queueUrl(queueUrl)
                .maxNumberOfMessages(10)
                .waitTimeSeconds(20)
                .build();

        ReceiveMessageResponse response = sqsClient.receiveMessage(request);

        response.messages().forEach(this::processMessage);
    }

    private void processMessage(Message message) {
        try {
            // Process message
            System.out.println("Processing: " + message.body());

            // Delete message after successful processing
            deleteMessage(message.receiptHandle());

        } catch (Exception e) {
            // Handle error - the message becomes visible again after the visibility timeout
            System.err.println("Failed to process message: " + e.getMessage());
        }
    }

    private void deleteMessage(String receiptHandle) {
        DeleteMessageRequest request = DeleteMessageRequest.builder()
                .queueUrl(queueUrl)
                .receiptHandle(receiptHandle)
                .build();

        sqsClient.deleteMessage(request);
    }
}
```

## Pub/Sub Pattern Integration

### Configuration for Pub/Sub

```java
@Configuration
@RequiredArgsConstructor
public class PubSubConfiguration {

    private final SnsClient snsClient;
    private final SqsClient sqsClient;

    @Bean
    public String setupPubSub() {
        // Create SNS topic
        String topicArn = snsClient.createTopic(CreateTopicRequest.builder()
                .name("order-events")
                .build()).topicArn();

        // Create SQS queue
        String queueUrl = sqsClient.createQueue(CreateQueueRequest.builder()
                .queueName("order-processor")
                .build()).queueUrl();

        // Get queue ARN
        String queueArn = sqsClient.getQueueAttributes(GetQueueAttributesRequest.builder()
                .queueUrl(queueUrl)
                .attributeNames(QueueAttributeName.QUEUE_ARN)
                .build()).attributes().get(QueueAttributeName.QUEUE_ARN);

        // Subscribe SQS to SNS (the queue's access policy must also allow SNS to send to it)
        snsClient.subscribe(SubscribeRequest.builder()
                .protocol("sqs")
                .endpoint(queueArn)
                .topicArn(topicArn)
                .build());

        return topicArn;
    }
}
```

## Error Handling Patterns

### Retry Mechanism

```java
@Service
@RequiredArgsConstructor
public class RetryableSqsService {

    private final SqsClient sqsClient;
    private final RetryTemplate retryTemplate; // from spring-retry

    public void sendMessageWithRetry(String queueUrl, String messageBody) {
        retryTemplate.execute(context -> {
            try {
                SendMessageRequest request = SendMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .messageBody(messageBody)
                        .build();

                sqsClient.sendMessage(request);
                return null;
            } catch (Exception e) {
                // RetryableException is an application-defined exception type
                throw new RetryableException("Failed to send message", e);
            }
        });
    }
}
```

## Testing Integration

### LocalStack Configuration

```java
@TestConfiguration
public class LocalStackMessagingConfig {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(
                    LocalStackContainer.Service.SQS,
                    LocalStackContainer.Service.SNS
            );

    @Bean
    public SqsClient sqsClient() {
        return SqsClient.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(
                        localstack.getEndpointOverride(LocalStackContainer.Service.SQS))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                localstack.getAccessKey(),
                                localstack.getSecretKey())))
                .build();
    }

    @Bean
    public SnsClient snsClient() {
        return SnsClient.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(
                        localstack.getEndpointOverride(LocalStackContainer.Service.SNS))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                localstack.getAccessKey(),
                                localstack.getSecretKey())))
                .build();
    }
}
```
400
skills/aws-java/aws-sdk-java-v2-rds/SKILL.md
Normal file
@@ -0,0 +1,400 @@
---
name: aws-sdk-java-v2-rds
description: AWS RDS (Relational Database Service) management using AWS SDK for Java 2.x. Use when creating, modifying, monitoring, or managing Amazon RDS database instances, snapshots, parameter groups, and configurations.
category: aws
tags: [aws, rds, database, java, sdk, postgresql, mysql, aurora, spring-boot]
version: 1.1.0
allowed-tools: Read, Write, Bash
---

# AWS SDK for Java v2 - RDS Management

This skill provides comprehensive guidance for working with Amazon RDS (Relational Database Service) using the AWS SDK for Java 2.x, covering database instance management, snapshots, parameter groups, and RDS operations.

## When to Use This Skill

Use this skill when:
- Creating and managing RDS database instances (PostgreSQL, MySQL, Aurora, etc.)
- Taking and restoring database snapshots
- Managing DB parameter groups and configurations
- Querying RDS instance metadata and status
- Setting up Multi-AZ deployments
- Configuring automated backups
- Managing security groups for RDS
- Connecting Lambda functions to RDS databases
- Implementing RDS IAM authentication
- Monitoring RDS instances and metrics

## Getting Started

### RDS Client Setup

The `RdsClient` is the main entry point for interacting with Amazon RDS.

**Basic Client Creation:**
```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rds.RdsClient;

RdsClient rdsClient = RdsClient.builder()
        .region(Region.US_EAST_1)
        .build();

// Use client
describeInstances(rdsClient);

// Always close the client
rdsClient.close();
```

**Client with Custom Configuration:**
```java
import java.time.Duration;

import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.http.apache.ApacheHttpClient;

RdsClient rdsClient = RdsClient.builder()
        .region(Region.US_WEST_2)
        .credentialsProvider(ProfileCredentialsProvider.create("myprofile"))
        .httpClient(ApacheHttpClient.builder()
                .connectionTimeout(Duration.ofSeconds(30))
                .socketTimeout(Duration.ofSeconds(60))
                .build())
        .build();
```
||||
### Describing DB Instances

Retrieve information about existing RDS instances.

**List All DB Instances:**
```java
public static void describeInstances(RdsClient rdsClient) {
    try {
        DescribeDbInstancesResponse response = rdsClient.describeDBInstances();
        List<DBInstance> instanceList = response.dbInstances();

        for (DBInstance instance : instanceList) {
            System.out.println("Instance ARN: " + instance.dbInstanceArn());
            System.out.println("Engine: " + instance.engine());
            System.out.println("Status: " + instance.dbInstanceStatus());
            System.out.println("Endpoint: " + instance.endpoint().address());
            System.out.println("Port: " + instance.endpoint().port());
            System.out.println("---");
        }
    } catch (RdsException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
```
## Key Operations

### Creating DB Instances

Create new RDS database instances with various configurations.

**Create Basic DB Instance:**
```java
public static String createDBInstance(RdsClient rdsClient,
                                      String dbInstanceIdentifier,
                                      String dbName,
                                      String masterUsername,
                                      String masterPassword) {
    try {
        CreateDbInstanceRequest request = CreateDbInstanceRequest.builder()
                .dbInstanceIdentifier(dbInstanceIdentifier)
                .dbName(dbName)
                .engine("postgres")
                .engineVersion("14.7")
                .dbInstanceClass("db.t3.micro")
                .allocatedStorage(20)
                .masterUsername(masterUsername)
                .masterUserPassword(masterPassword)
                .publiclyAccessible(false)
                .build();

        CreateDbInstanceResponse response = rdsClient.createDBInstance(request);
        System.out.println("Creating DB instance: " + response.dbInstance().dbInstanceArn());

        return response.dbInstance().dbInstanceArn();
    } catch (RdsException e) {
        System.err.println("Error creating instance: " + e.getMessage());
        throw e;
    }
}
```
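A common cause of creation failures is an invalid instance identifier. The RDS naming rules (1-63 characters, starts with a letter, letters/digits/hyphens only, no trailing hyphen, no consecutive hyphens) can be pre-checked on the client side. This is an illustrative sketch; class and method names are made up here, and the service remains the authoritative validator:

```java
// Sketch: client-side pre-check of RDS DB instance identifier rules.
// Illustrative only; the RDS service performs the real validation.
public class DbIdentifierCheck {

    public static boolean isValidInstanceIdentifier(String id) {
        if (id == null || id.isEmpty() || id.length() > 63) return false;
        if (!Character.isLetter(id.charAt(0))) return false;   // must start with a letter
        if (id.endsWith("-") || id.contains("--")) return false;
        // Only letters, digits, and hyphens are allowed
        return id.chars().allMatch(c -> Character.isLetterOrDigit(c) || c == '-');
    }
}
```

Running such a check before calling `createDBInstance` turns a round-trip `InvalidParameterValueException` into an immediate local error.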

### Managing DB Parameter Groups

Create and manage custom parameter groups for database configuration.

**Create DB Parameter Group:**
```java
public static void createDBParameterGroup(RdsClient rdsClient,
                                          String groupName,
                                          String description) {
    try {
        CreateDbParameterGroupRequest request = CreateDbParameterGroupRequest.builder()
                .dbParameterGroupName(groupName)
                .dbParameterGroupFamily("postgres15")
                .description(description)
                .build();

        CreateDbParameterGroupResponse response = rdsClient.createDBParameterGroup(request);
        System.out.println("Created parameter group: " + response.dbParameterGroup().dbParameterGroupName());
    } catch (RdsException e) {
        System.err.println("Error creating parameter group: " + e.getMessage());
        throw e;
    }
}
```

### Managing DB Snapshots

Create, restore, and manage database snapshots.

**Create DB Snapshot:**
```java
public static String createDBSnapshot(RdsClient rdsClient,
                                      String dbInstanceIdentifier,
                                      String snapshotIdentifier) {
    try {
        CreateDbSnapshotRequest request = CreateDbSnapshotRequest.builder()
                .dbInstanceIdentifier(dbInstanceIdentifier)
                .dbSnapshotIdentifier(snapshotIdentifier)
                .build();

        CreateDbSnapshotResponse response = rdsClient.createDBSnapshot(request);
        System.out.println("Creating snapshot: " + response.dbSnapshot().dbSnapshotIdentifier());

        return response.dbSnapshot().dbSnapshotArn();
    } catch (RdsException e) {
        System.err.println("Error creating snapshot: " + e.getMessage());
        throw e;
    }
}
```
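Snapshot identifiers must be unique per account and region, so rerunning the same job with a fixed name fails with a name conflict. One simple mitigation is to timestamp the identifier. The naming scheme below is an example, not an AWS convention:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch: build a unique, timestamped snapshot identifier so repeated
// runs do not collide. Illustrative naming scheme only.
public class SnapshotNames {

    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd-HH-mm-ss");

    public static String timestamped(String instanceId, LocalDateTime now) {
        return instanceId + "-snapshot-" + now.format(FMT);
    }
}
```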

## Integration Patterns

### Spring Boot Integration

Refer to [references/spring-boot-integration.md](references/spring-boot-integration.md) for complete Spring Boot integration examples including:

- Spring Boot configuration with application properties
- RDS client bean configuration
- Service layer implementation
- REST controller design
- Exception handling
- Testing strategies

### Lambda Integration

Refer to [references/lambda-integration.md](references/lambda-integration.md) for Lambda integration examples including:

- Traditional Lambda + RDS connections
- Lambda with connection pooling
- Using AWS Secrets Manager for credentials
- Lambda with AWS SDK for RDS management
- Security configuration and best practices

## Advanced Operations

### Modifying DB Instances

Update existing RDS instances.

```java
public static void modifyDBInstance(RdsClient rdsClient,
                                    String dbInstanceIdentifier,
                                    String newInstanceClass) {
    try {
        ModifyDbInstanceRequest request = ModifyDbInstanceRequest.builder()
                .dbInstanceIdentifier(dbInstanceIdentifier)
                .dbInstanceClass(newInstanceClass)
                .applyImmediately(false) // Apply during maintenance window
                .build();

        ModifyDbInstanceResponse response = rdsClient.modifyDBInstance(request);
        System.out.println("Modified instance: " + response.dbInstance().dbInstanceIdentifier());
        System.out.println("New class: " + response.dbInstance().dbInstanceClass());
    } catch (RdsException e) {
        System.err.println("Error modifying instance: " + e.getMessage());
        throw e;
    }
}
```

### Deleting DB Instances

Delete RDS instances with optional final snapshot.

```java
public static void deleteDBInstanceWithSnapshot(RdsClient rdsClient,
                                                String dbInstanceIdentifier,
                                                String finalSnapshotIdentifier) {
    try {
        DeleteDbInstanceRequest request = DeleteDbInstanceRequest.builder()
                .dbInstanceIdentifier(dbInstanceIdentifier)
                .skipFinalSnapshot(false)
                .finalDBSnapshotIdentifier(finalSnapshotIdentifier)
                .build();

        DeleteDbInstanceResponse response = rdsClient.deleteDBInstance(request);
        System.out.println("Deleting instance: " + response.dbInstance().dbInstanceIdentifier());
    } catch (RdsException e) {
        System.err.println("Error deleting instance: " + e.getMessage());
        throw e;
    }
}
```
## Best Practices

### Security

**Always use encryption:**
```java
CreateDbInstanceRequest request = CreateDbInstanceRequest.builder()
        .storageEncrypted(true)
        .kmsKeyId("arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012")
        .build();
```

**Use VPC security groups:**
```java
CreateDbInstanceRequest request = CreateDbInstanceRequest.builder()
        .vpcSecurityGroupIds("sg-12345678")
        .publiclyAccessible(false)
        .build();
```

### High Availability

**Enable Multi-AZ for production:**
```java
CreateDbInstanceRequest request = CreateDbInstanceRequest.builder()
        .multiAZ(true)
        .build();
```

### Backups

**Configure automated backups:**
```java
CreateDbInstanceRequest request = CreateDbInstanceRequest.builder()
        .backupRetentionPeriod(7)
        .preferredBackupWindow("03:00-04:00")
        .build();
```

### Monitoring

**Enable CloudWatch logs:**
```java
CreateDbInstanceRequest request = CreateDbInstanceRequest.builder()
        .enableCloudwatchLogsExports("postgresql", "upgrade")
        .build();
```

### Cost Optimization

**Use appropriate instance class:**
```java
// Development
.dbInstanceClass("db.t3.micro")

// Production
.dbInstanceClass("db.r5.large")
```

### Deletion Protection

**Enable for production databases:**
```java
CreateDbInstanceRequest request = CreateDbInstanceRequest.builder()
        .deletionProtection(true)
        .build();
```

### Resource Management

**Always close clients:**
```java
try (RdsClient rdsClient = RdsClient.builder()
        .region(Region.US_EAST_1)
        .build()) {
    // Use client
} // Automatically closed
```

## Dependencies

### Maven Dependencies

```xml
<dependencies>
    <!-- AWS SDK for RDS -->
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>rds</artifactId>
        <version>2.20.0</version> <!-- use the latest available version -->
    </dependency>

    <!-- PostgreSQL Driver -->
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <version>42.6.0</version> <!-- use the latest available version -->
    </dependency>

    <!-- MySQL Driver -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.33</version>
    </dependency>
</dependencies>
```
### Gradle Dependencies

```gradle
dependencies {
    // AWS SDK for RDS
    implementation 'software.amazon.awssdk:rds:2.20.0'

    // PostgreSQL Driver
    implementation 'org.postgresql:postgresql:42.6.0'

    // MySQL Driver
    implementation 'mysql:mysql-connector-java:8.0.33'
}
```

## Reference Documentation

For detailed API reference, see:
- [API Reference](references/api-reference.md) - Complete API documentation and data models
- [Spring Boot Integration](references/spring-boot-integration.md) - Spring Boot patterns and examples
- [Lambda Integration](references/lambda-integration.md) - Lambda function patterns and best practices

## Error Handling

See [API Reference](references/api-reference.md#error-handling) for comprehensive error handling patterns including common exceptions, error response structure, and pagination support.

## Performance Considerations

- Use connection pooling for multiple database operations
- Implement retry logic for transient failures
- Monitor CloudWatch metrics for performance optimization
- Use appropriate instance types for workload requirements
- Enable Performance Insights for database optimization
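The retry bullet above can be made concrete with capped exponential backoff. Below is a minimal sketch; the class and method names are illustrative, and for SDK calls the built-in retry configuration is usually preferable to hand-rolled loops:

```java
// Illustrative helper: capped exponential backoff for retrying
// transient failures. Not an SDK API; names are examples only.
public class ExponentialBackoff {

    /** Delay before the given attempt (0-based): base * 2^attempt, capped. */
    public static long delayMillis(int attempt, long baseMillis, long capMillis) {
        long delay = baseMillis << Math.min(attempt, 30); // clamp shift to avoid overflow
        return Math.min(delay, capMillis);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            System.out.println("attempt " + attempt + " -> "
                    + delayMillis(attempt, 100, 2000) + " ms");
        }
    }
}
```

In production, adding random jitter to the delay avoids synchronized retry storms across callers.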

## Support

For support with AWS RDS operations using AWS SDK for Java 2.x:
- AWS Documentation: [Amazon RDS User Guide](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html)
- AWS SDK Documentation: [AWS SDK for Java 2.x](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/home.html)
- AWS Support: [AWS Support Center](https://aws.amazon.com/premiumsupport/)

122
skills/aws-java/aws-sdk-java-v2-rds/references/api-reference.md
Normal file
@@ -0,0 +1,122 @@
# AWS RDS API Reference

## Core API Operations

### Describe Operations
- `describeDBInstances` - List database instances
- `describeDBParameterGroups` - List parameter groups
- `describeDBSnapshots` - List database snapshots
- `describeDBSubnetGroups` - List subnet groups

### Instance Management
- `createDBInstance` - Create new database instance
- `modifyDBInstance` - Modify existing instance
- `deleteDBInstance` - Delete database instance

### Parameter Groups
- `createDBParameterGroup` - Create parameter group
- `modifyDBParameterGroup` - Modify parameters
- `deleteDBParameterGroup` - Delete parameter group

### Snapshots
- `createDBSnapshot` - Create database snapshot
- `restoreDBInstanceFromDBSnapshot` - Restore from snapshot
- `deleteDBSnapshot` - Delete snapshot

## Key Data Models

### DBInstance
```java
String dbInstanceIdentifier()    // Instance name
String dbInstanceArn()           // ARN identifier
String engine()                  // Database engine
String engineVersion()           // Engine version
String dbInstanceClass()         // Instance type
int allocatedStorage()           // Storage size in GB
Endpoint endpoint()              // Connection endpoint
String dbInstanceStatus()        // Instance status
boolean multiAZ()                // Multi-AZ enabled
boolean storageEncrypted()       // Storage encrypted
```

### DBParameter
```java
String parameterName()           // Parameter name
String parameterValue()          // Parameter value
String description()             // Description
ApplyMethod applyMethod()        // Apply method (immediate / pending-reboot)
```

### CreateDbInstanceRequest Builder
```java
CreateDbInstanceRequest.builder()
    .dbInstanceIdentifier(identifier)
    .engine("postgres")              // Database engine
    .engineVersion("15.2")           // Engine version
    .dbInstanceClass("db.t3.micro")  // Instance type
    .allocatedStorage(20)            // Storage size
    .masterUsername(username)        // Admin username
    .masterUserPassword(password)    // Admin password
    .publiclyAccessible(false)       // Public access
    .storageEncrypted(true)          // Storage encryption
    .multiAZ(true)                   // High availability
    .backupRetentionPeriod(7)        // Backup retention
    .deletionProtection(true)        // Protection from deletion
    .build()
```

## Error Handling

### Common Exceptions
- `DBInstanceNotFoundFault` - Instance doesn't exist
- `DBSnapshotAlreadyExistsFault` - Snapshot name conflicts
- `InsufficientDBInstanceCapacity` - Instance type unavailable
- `InvalidParameterValueException` - Invalid configuration value
- `StorageQuotaExceeded` - Storage limit reached

### Error Response Structure
```java
try {
    rdsClient.createDBInstance(request);
} catch (RdsException e) {
    // AWS specific error handling
    String errorCode = e.awsErrorDetails().errorCode();
    String errorMessage = e.awsErrorDetails().errorMessage();

    switch (errorCode) {
        case "DBInstanceNotFoundFault":
            // Handle missing instance
            break;
        case "InvalidParameterValueException":
            // Handle invalid parameters
            break;
        default:
            // Generic error handling
    }
}
```

## Pagination Support

### List Instances with Pagination
```java
DescribeDbInstancesRequest request = DescribeDbInstancesRequest.builder()
        .maxRecords(100)  // Limit results per page
        .build();

String marker = null;
do {
    if (marker != null) {
        request = request.toBuilder()
                .marker(marker)
                .build();
    }

    DescribeDbInstancesResponse response = rdsClient.describeDBInstances(request);
    List<DBInstance> instances = response.dbInstances();

    // Process instances

    marker = response.marker();
} while (marker != null);
```
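The same marker loop, isolated from AWS so the control flow is easy to follow end to end. `Page` and `fetchPage` are stand-ins for the response type and service call, not real SDK types:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-alone illustration of marker-based pagination with a stubbed
// "API". Page / fetchPage mimic DescribeDbInstancesResponse /
// describeDBInstances but are local stand-ins.
public class MarkerPaginationDemo {

    record Page(List<String> items, String marker) {}

    // Pretend service: returns 2 items per call, 5 items total.
    static Page fetchPage(String marker) {
        int start = (marker == null) ? 0 : Integer.parseInt(marker);
        List<String> all = List.of("db-1", "db-2", "db-3", "db-4", "db-5");
        int end = Math.min(start + 2, all.size());
        String next = (end < all.size()) ? String.valueOf(end) : null;
        return new Page(all.subList(start, end), next);
    }

    public static List<String> fetchAll() {
        List<String> results = new ArrayList<>();
        String marker = null;
        do {
            Page page = fetchPage(marker);
            results.addAll(page.items());
            marker = page.marker();   // null once the last page is reached
        } while (marker != null);
        return results;
    }
}
```

The v2 SDK also generates paginators (e.g. `rdsClient.describeDBInstancesPaginator()`) that encapsulate this loop entirely.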

@@ -0,0 +1,382 @@
# AWS Lambda Integration with RDS

## Lambda RDS Connection Patterns

### 1. Traditional Lambda + RDS Connection

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RdsLambdaHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent();

        try {
            // Get environment variables
            String host = System.getenv("ProxyHostName");
            String port = System.getenv("Port");
            String dbName = System.getenv("DBName");
            String username = System.getenv("DBUserName");
            String password = System.getenv("DBPassword");

            // Create connection string
            String connectionString = String.format(
                    "jdbc:mysql://%s:%s/%s?useSSL=true&requireSSL=true",
                    host, port, dbName
            );

            // Execute query
            String sql = "SELECT COUNT(*) FROM users";

            try (Connection connection = DriverManager.getConnection(connectionString, username, password);
                 PreparedStatement statement = connection.prepareStatement(sql);
                 ResultSet resultSet = statement.executeQuery()) {

                if (resultSet.next()) {
                    int count = resultSet.getInt(1);
                    response.setStatusCode(200);
                    response.setBody("{\"count\": " + count + "}");
                }
            }
        } catch (Exception e) {
            response.setStatusCode(500);
            response.setBody("{\"error\": \"" + e.getMessage() + "\"}");
        }

        return response;
    }
}
```

### 2. Lambda with Connection Pooling

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

public class RdsLambdaConfig {

    private static DataSource dataSource;

    public static synchronized DataSource getDataSource() {
        if (dataSource == null) {
            HikariConfig config = new HikariConfig();

            String host = System.getenv("ProxyHostName");
            String port = System.getenv("Port");
            String dbName = System.getenv("DBName");
            String username = System.getenv("DBUserName");
            String password = System.getenv("DBPassword");

            config.setJdbcUrl(String.format("jdbc:mysql://%s:%s/%s", host, port, dbName));
            config.setUsername(username);
            config.setPassword(password);

            // Connection pool settings
            config.setMaximumPoolSize(5);
            config.setMinimumIdle(2);
            config.setIdleTimeout(30000);
            config.setConnectionTimeout(20000);
            config.setMaxLifetime(1800000);

            // MySQL-specific settings (connectTimeout is in milliseconds)
            config.addDataSourceProperty("useSSL", "true");
            config.addDataSourceProperty("requireSSL", "true");
            config.addDataSourceProperty("connectTimeout", "30000");

            dataSource = new HikariDataSource(config);
        }
        return dataSource;
    }
}
```

### 3. Using AWS Secrets Manager for Credentials

```java
import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
import com.amazonaws.services.secretsmanager.model.GetSecretValueResult;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class RdsSecretsHelper {

    private static final String SECRET_NAME = "prod/rds/db_credentials";
    private static final String REGION = "us-east-1";

    public static Map<String, String> getRdsCredentials() {
        AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard()
                .withRegion(REGION)
                .build();

        // SDK v1 requests use withXxx() setters rather than a builder
        GetSecretValueRequest request = new GetSecretValueRequest()
                .withSecretId(SECRET_NAME);

        GetSecretValueResult result = client.getSecretValue(request);

        // Parse secret JSON
        ObjectMapper objectMapper = new ObjectMapper();
        Map<String, String> credentials = new HashMap<>();
        try {
            Map<String, Object> secretMap =
                    objectMapper.readValue(result.getSecretString(), HashMap.class);
            secretMap.forEach((key, value) -> credentials.put(key, value.toString()));
        } catch (IOException e) {
            throw new IllegalStateException("Unable to parse secret JSON", e);
        }

        return credentials;
    }
}
```

### 4. Lambda with AWS SDK for RDS

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.*;

public class RdsManagementLambda implements RequestHandler<ApiRequest, ApiResponse> {

    @Override
    public ApiResponse handleRequest(ApiRequest request, Context context) {
        RdsClient rdsClient = RdsClient.builder()
                .region(Region.US_EAST_1)
                .build();

        try {
            switch (request.getAction()) {
                case "list-instances":
                    return listInstances(rdsClient);
                case "create-snapshot":
                    return createSnapshot(rdsClient, request.getInstanceId(), request.getSnapshotId());
                case "describe-instance":
                    return describeInstance(rdsClient, request.getInstanceId());
                default:
                    return new ApiResponse(400, "Unknown action: " + request.getAction());
            }
        } catch (Exception e) {
            context.getLogger().log("Error: " + e.getMessage());
            return new ApiResponse(500, "Error: " + e.getMessage());
        } finally {
            rdsClient.close();
        }
    }

    private ApiResponse listInstances(RdsClient rdsClient) {
        DescribeDbInstancesResponse response = rdsClient.describeDBInstances();
        return new ApiResponse(200, response.toString());
    }

    private ApiResponse createSnapshot(RdsClient rdsClient, String instanceId, String snapshotId) {
        CreateDbSnapshotRequest request = CreateDbSnapshotRequest.builder()
                .dbInstanceIdentifier(instanceId)
                .dbSnapshotIdentifier(snapshotId)
                .build();

        CreateDbSnapshotResponse response = rdsClient.createDBSnapshot(request);
        return new ApiResponse(200, "Snapshot created: " + response.dbSnapshot().dbSnapshotIdentifier());
    }

    private ApiResponse describeInstance(RdsClient rdsClient, String instanceId) {
        DescribeDbInstancesRequest request = DescribeDbInstancesRequest.builder()
                .dbInstanceIdentifier(instanceId)
                .build();

        DescribeDbInstancesResponse response = rdsClient.describeDBInstances(request);
        return new ApiResponse(200, response.toString());
    }
}

class ApiRequest {
    private String action;
    private String instanceId;
    private String snapshotId;
    // getters and setters
}

class ApiResponse {
    private int statusCode;
    private String body;
    // constructor, getters
}
```

## Best Practices for Lambda + RDS

### 1. Security Configuration

**IAM Role:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds:DescribeDBInstances",
        "rds:CreateDBSnapshot"
      ],
      "Resource": "*"
    }
  ]
}
```

Grant only the RDS actions the function actually needs rather than `rds:*`, and narrow `Resource` to specific instance ARNs where possible.
**Security Group:**
- Use security groups to restrict access
- Only allow Lambda function IP ranges
- Use VPC endpoints for private connections

### 2. Environment Variables

```bash
# Environment variables for Lambda
DB_HOST=mydb.abc123.us-east-1.rds.amazonaws.com
DB_PORT=5432
DB_NAME=mydatabase
DB_USERNAME=admin
DB_PASSWORD=${DB_PASSWORD}
DB_CONNECTION_STRING=jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}
```
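Rather than assembling the connection string by hand in several places, the URL can be built from the individual variables. A small sketch; the variable names mirror the example configuration above and are otherwise arbitrary:

```java
import java.util.Map;

// Sketch: derive the JDBC URL from the individual DB_* settings so the
// endpoint, port, and database name are defined in exactly one place.
public class JdbcUrlBuilder {

    public static String postgresUrl(String host, String port, String dbName) {
        return String.format("jdbc:postgresql://%s:%s/%s", host, port, dbName);
    }

    public static String fromEnv(Map<String, String> env) {
        return postgresUrl(env.get("DB_HOST"), env.get("DB_PORT"), env.get("DB_NAME"));
    }
}
```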

### 3. Error Handling

```java
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import software.amazon.awssdk.services.rds.model.RdsException;

public class LambdaErrorHandler {

    public static void handleRdsError(Exception e, LambdaLogger logger) {
        if (e instanceof RdsException) {
            RdsException rdsException = (RdsException) e;
            logger.log("RDS Error: " + rdsException.awsErrorDetails().errorCode());

            switch (rdsException.awsErrorDetails().errorCode()) {
                case "DBInstanceNotFoundFault":
                    logger.log("Database instance not found");
                    break;
                case "InvalidParameterValueException":
                    logger.log("Invalid parameter provided");
                    break;
                case "DBInstanceAlreadyExistsFault":
                    logger.log("Instance already exists");
                    break;
                default:
                    logger.log("Unknown RDS error: " + rdsException.getMessage());
            }
        } else {
            logger.log("Non-RDS error: " + e.getMessage());
        }
    }
}
```

### 4. Performance Optimization

**Cold Start Mitigation:**
```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class RdsConnectionHelper {
    private static DataSource dataSource;
    private static long lastConnectionTime = 0;
    private static final long CONNECTION_TIMEOUT = 300000; // 5 minutes

    public static Connection getConnection() throws SQLException {
        long currentTime = System.currentTimeMillis();

        if (dataSource == null || (currentTime - lastConnectionTime) > CONNECTION_TIMEOUT) {
            dataSource = createDataSource();
            lastConnectionTime = currentTime;
        }

        return dataSource.getConnection();
    }

    private static DataSource createDataSource() {
        // Reuse the pooled DataSource from the connection pooling example above
        return RdsLambdaConfig.getDataSource();
    }
}
```
**Batch Processing:**
```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import java.util.stream.Collectors;

public class RdsBatchProcessor {

    public void processBatch(List<String> userIds) throws SQLException {
        // Build one placeholder per id, e.g. "?,?,?" for three ids,
        // before preparing the statement so every parameter can bind
        String placeholders = userIds.stream()
                .map(id -> "?")
                .collect(Collectors.joining(","));

        String sql = "SELECT * FROM users WHERE user_id IN (" + placeholders + ")";

        try (Connection connection = RdsConnectionHelper.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql)) {

            // Set parameters (JDBC indices are 1-based)
            for (int i = 0; i < userIds.size(); i++) {
                statement.setString(i + 1, userIds.get(i));
            }

            try (ResultSet resultSet = statement.executeQuery()) {
                // Process results
            }
        }
    }
}
```

### 5. Monitoring and Logging

```java
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import java.util.Collections;
import java.util.Date;

public class RdsMetricsPublisher {

    private static final String NAMESPACE = "RDS/Lambda";
    private AmazonCloudWatch cloudWatch;

    public RdsMetricsPublisher() {
        this.cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();
    }

    public void publishMetric(String metricName, double value) {
        MetricDatum datum = new MetricDatum()
                .withMetricName(metricName)
                .withUnit("Count")
                .withValue(value)
                .withTimestamp(new Date());

        PutMetricDataRequest request = new PutMetricDataRequest()
                .withNamespace(NAMESPACE)
                .withMetricData(Collections.singletonList(datum));

        cloudWatch.putMetricData(request);
    }
}
```
@@ -0,0 +1,325 @@
# Spring Boot Integration with AWS RDS

## Configuration

### application.properties
```properties
# AWS Configuration
aws.region=us-east-1
aws.rds.instance-identifier=mydb-instance

# RDS Connection (from RDS endpoint)
spring.datasource.url=jdbc:postgresql://mydb.abc123.us-east-1.rds.amazonaws.com:5432/mydatabase
spring.datasource.username=admin
spring.datasource.password=${DB_PASSWORD}
spring.datasource.driver-class-name=org.postgresql.Driver

# JPA Configuration
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect

# Connection Pool Configuration
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.idle-timeout=30000
spring.datasource.hikari.connection-timeout=20000
```
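For choosing `maximum-pool-size`, a widely cited starting point (from the HikariCP project's pool-sizing guidance) is CPU cores times two plus the effective spindle count. Treat it as a heuristic to validate under real load, not a rule; the helper name below is illustrative:

```java
// Illustrative heuristic for an initial HikariCP pool size:
// cores * 2 + effective spindle count. Tune against real workload.
public class PoolSizing {

    public static int suggestedPoolSize(int cpuCores, int effectiveSpindles) {
        return cpuCores * 2 + effectiveSpindles;
    }
}
```

Oversized pools waste RDS connections (a hard per-instance limit) without improving throughput, so start small and grow only when metrics justify it.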

### AWS Configuration
```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rds.RdsClient;

@Configuration
public class AwsRdsConfig {

    @Value("${aws.region}")
    private String awsRegion;

    @Bean
    public RdsClient rdsClient() {
        return RdsClient.builder()
                .region(Region.of(awsRegion))
                .build();
    }
}
```
### Service Layer
```java
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.*;
import java.util.List;

@Service
public class RdsService {

    private final RdsClient rdsClient;

    public RdsService(RdsClient rdsClient) {
        this.rdsClient = rdsClient;
    }

    public List<DBInstance> listInstances() {
        DescribeDbInstancesResponse response = rdsClient.describeDBInstances();
        return response.dbInstances();
    }

    public DBInstance getInstanceDetails(String instanceId) {
        DescribeDbInstancesRequest request = DescribeDbInstancesRequest.builder()
                .dbInstanceIdentifier(instanceId)
                .build();

        DescribeDbInstancesResponse response = rdsClient.describeDBInstances(request);
        return response.dbInstances().get(0);
    }

    public String createSnapshot(String instanceId, String snapshotId) {
        CreateDbSnapshotRequest request = CreateDbSnapshotRequest.builder()
                .dbInstanceIdentifier(instanceId)
                .dbSnapshotIdentifier(snapshotId)
                .build();

        CreateDbSnapshotResponse response = rdsClient.createDBSnapshot(request);
        return response.dbSnapshot().dbSnapshotArn();
    }

    public void modifyInstance(String instanceId, String newInstanceClass) {
        ModifyDbInstanceRequest request = ModifyDbInstanceRequest.builder()
                .dbInstanceIdentifier(instanceId)
                .dbInstanceClass(newInstanceClass)
                .applyImmediately(true)
                .build();

        rdsClient.modifyDBInstance(request);
    }
}
```
### REST Controller

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import software.amazon.awssdk.services.rds.model.DBInstance;

import java.util.List;

@RestController
@RequestMapping("/api/rds")
public class RdsController {

    private final RdsService rdsService;

    public RdsController(RdsService rdsService) {
        this.rdsService = rdsService;
    }

    @GetMapping("/instances")
    public ResponseEntity<List<DBInstance>> listInstances() {
        return ResponseEntity.ok(rdsService.listInstances());
    }

    @GetMapping("/instances/{id}")
    public ResponseEntity<DBInstance> getInstanceDetails(@PathVariable String id) {
        return ResponseEntity.ok(rdsService.getInstanceDetails(id));
    }

    @PostMapping("/snapshots")
    public ResponseEntity<String> createSnapshot(
            @RequestParam String instanceId,
            @RequestParam String snapshotId) {
        String arn = rdsService.createSnapshot(instanceId, snapshotId);
        return ResponseEntity.ok(arn);
    }

    @PutMapping("/instances/{id}")
    public ResponseEntity<String> modifyInstance(
            @PathVariable String id,
            @RequestParam String instanceClass) {
        rdsService.modifyInstance(id, instanceClass);
        return ResponseEntity.ok("Instance modified successfully");
    }
}
```

### Exception Handling

```java
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestControllerAdvice;
import software.amazon.awssdk.services.rds.model.RdsException;

@RestControllerAdvice
public class RdsExceptionHandler {

    @ExceptionHandler(RdsException.class)
    @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
    public ErrorResponse handleRdsException(RdsException e) {
        return new ErrorResponse(
                "RDS_ERROR",
                e.getMessage(),
                e.awsErrorDetails().errorCode()
        );
    }

    @ExceptionHandler(Exception.class)
    @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
    public ErrorResponse handleGenericException(Exception e) {
        return new ErrorResponse("INTERNAL_ERROR", e.getMessage(), null);
    }
}

class ErrorResponse {
    private final String code;
    private final String message;
    private final String details;

    ErrorResponse(String code, String message, String details) {
        this.code = code;
        this.message = message;
        this.details = details;
    }

    // Getters omitted for brevity
}
```

## Testing

### Unit Tests

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.*;

import java.util.List;

import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.*;

@ExtendWith(MockitoExtension.class)
class RdsServiceTest {

    @Mock
    private RdsClient rdsClient;

    @Test
    void listInstances_shouldReturnInstances() {
        // Arrange
        DescribeDbInstancesResponse response = DescribeDbInstancesResponse.builder()
                .dbInstances(List.of(createTestInstance()))
                .build();

        when(rdsClient.describeDBInstances()).thenReturn(response);

        RdsService service = new RdsService(rdsClient);

        // Act
        List<DBInstance> result = service.listInstances();

        // Assert
        assertEquals(1, result.size());
        verify(rdsClient).describeDBInstances();
    }

    private DBInstance createTestInstance() {
        return DBInstance.builder()
                .dbInstanceIdentifier("test-instance")
                .engine("postgres")
                .dbInstanceStatus("available")
                .build();
    }
}
```

### Integration Tests

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
import software.amazon.awssdk.services.rds.model.DBInstance;

import java.util.List;

import static org.junit.jupiter.api.Assertions.*;
import static org.junit.jupiter.api.Assumptions.assumeTrue;

@SpringBootTest
@ActiveProfiles("test")
class RdsServiceIntegrationTest {

    @Autowired
    private RdsService rdsService;

    @Test
    void listInstances_integrationTest() {
        // This test requires actual AWS credentials and RDS instances,
        // so it should only run with proper test configuration.
        assumeTrue(false, "Integration test disabled");

        List<DBInstance> instances = rdsService.listInstances();
        assertNotNull(instances);
    }
}
```

## Best Practices

### 1. Configuration Management

- Use Spring profiles for different environments
- Externalize sensitive configuration (passwords, keys)
- Use Spring Cloud Config for multi-environment management

### 2. Connection Pooling

```properties
# HikariCP Configuration
spring.datasource.hikari.maximum-pool-size=20
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.connection-test-query=SELECT 1
```

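As a starting point for choosing `maximum-pool-size`, a widely cited HikariCP guideline is `connections = (cores × 2) + effective_spindle_count`. The helper below is a minimal sketch of that rule of thumb; the class and method names are illustrative, not part of any library, and the result should be tuned against real RDS load:

```java
public class PoolSizing {

    // Rough first estimate for a HikariCP pool, based on the commonly
    // cited formula: connections = (cores * 2) + effective spindle count.
    // SSD-backed storage (as on most RDS instances) behaves like ~1 spindle.
    public static int suggestedPoolSize(int cpuCores, int effectiveSpindles) {
        return cpuCores * 2 + effectiveSpindles;
    }

    public static void main(String[] args) {
        // A 4-core app server talking to SSD-backed RDS
        System.out.println(suggestedPoolSize(4, 1)); // prints 9
    }
}
```

Treat the result as a starting value only; the right size depends on query latency and the RDS instance's `max_connections` limit.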
### 3. Retry Logic

```java
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.rds.RdsClient;
import software.amazon.awssdk.services.rds.model.DBInstance;
import software.amazon.awssdk.services.rds.model.RdsException;

import java.util.List;

// Requires spring-retry on the classpath and @EnableRetry on a configuration class
@Service
public class RdsServiceWithRetry {

    private final RdsClient rdsClient;

    public RdsServiceWithRetry(RdsClient rdsClient) {
        this.rdsClient = rdsClient;
    }

    @Retryable(value = { RdsException.class },
               maxAttempts = 3,
               backoff = @Backoff(delay = 1000))
    public List<DBInstance> listInstancesWithRetry() {
        return rdsClient.describeDBInstances().dbInstances();
    }
}
```

### 4. Monitoring

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;
import software.amazon.awssdk.services.rds.RdsClient;

@Component
public class RdsHealthIndicator implements HealthIndicator {

    private final RdsClient rdsClient;

    public RdsHealthIndicator(RdsClient rdsClient) {
        this.rdsClient = rdsClient;
    }

    @Override
    public Health health() {
        try {
            rdsClient.describeDBInstances();
            return Health.up()
                    .withDetail("service", "RDS")
                    .build();
        } catch (Exception e) {
            return Health.down()
                    .withDetail("error", e.getMessage())
                    .build();
        }
    }
}
```

691
skills/aws-java/aws-sdk-java-v2-s3/SKILL.md
Normal file
@@ -0,0 +1,691 @@
---
name: aws-sdk-java-v2-s3
description: Amazon S3 patterns and examples using AWS SDK for Java 2.x. Use when working with S3 buckets, uploading/downloading objects, multipart uploads, presigned URLs, S3 Transfer Manager, object operations, or S3-specific configurations.
category: aws
tags: [aws, s3, java, sdk, storage, objects, transfer-manager, presigned-urls]
version: 1.1.0
allowed-tools: Read, Write, Bash
---

# AWS SDK for Java 2.x - Amazon S3

## When to Use

Use this skill when:
- Creating, listing, or deleting S3 buckets with proper configuration
- Uploading or downloading objects from S3 with metadata and encryption
- Working with multipart uploads for large files (>100MB) with error handling
- Generating presigned URLs for temporary access to S3 objects
- Copying or moving objects between S3 buckets with metadata preservation
- Setting object metadata, storage classes, and access controls
- Implementing S3 Transfer Manager for optimized file transfers
- Integrating S3 with Spring Boot applications for cloud storage
- Setting up S3 event notifications for object lifecycle management
- Managing bucket policies, CORS configuration, and access controls
- Implementing retry mechanisms and error handling for S3 operations
- Testing S3 integrations with LocalStack for development environments

## Dependencies

```xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>2.20.0</version> <!-- Use the latest stable version -->
</dependency>

<!-- For S3 Transfer Manager -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3-transfer-manager</artifactId>
    <version>2.20.0</version> <!-- Use the latest stable version -->
</dependency>

<!-- For async operations -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>netty-nio-client</artifactId>
    <version>2.20.0</version> <!-- Use the latest stable version -->
</dependency>
```

## Client Setup

### Basic Synchronous Client

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .build();
```

### Basic Asynchronous Client

```java
import software.amazon.awssdk.services.s3.S3AsyncClient;

S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
        .region(Region.US_EAST_1)
        .build();
```

### Configured Client with Retry Logic

```java
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.core.retry.RetryPolicy;
import software.amazon.awssdk.core.retry.backoff.FullJitterBackoffStrategy;
import java.time.Duration;

S3Client s3Client = S3Client.builder()
        .region(Region.US_EAST_1)
        .httpClientBuilder(ApacheHttpClient.builder()
                .maxConnections(200)
                .connectionTimeout(Duration.ofSeconds(5)))
        .overrideConfiguration(b -> b
                .apiCallTimeout(Duration.ofSeconds(60))
                .apiCallAttemptTimeout(Duration.ofSeconds(30))
                .retryPolicy(RetryPolicy.builder()
                        .numRetries(3)
                        .backoffStrategy(FullJitterBackoffStrategy.builder()
                                .baseDelay(Duration.ofSeconds(1))
                                .maxBackoffTime(Duration.ofSeconds(30))
                                .build())
                        .build()))
        .build();
```

## Basic Bucket Operations

### Create Bucket

```java
import software.amazon.awssdk.services.s3.model.*;

public void createBucket(S3Client s3Client, String bucketName) {
    try {
        CreateBucketRequest request = CreateBucketRequest.builder()
                .bucket(bucketName)
                .build();

        s3Client.createBucket(request);

        // Wait until the bucket is ready
        HeadBucketRequest waitRequest = HeadBucketRequest.builder()
                .bucket(bucketName)
                .build();

        s3Client.waiter().waitUntilBucketExists(waitRequest);
        System.out.println("Bucket created successfully: " + bucketName);

    } catch (S3Exception e) {
        System.err.println("Error creating bucket: " + e.awsErrorDetails().errorMessage());
        throw e;
    }
}
```

### List All Buckets

```java
import java.util.List;
import java.util.stream.Collectors;

public List<String> listAllBuckets(S3Client s3Client) {
    ListBucketsResponse response = s3Client.listBuckets();

    return response.buckets().stream()
            .map(Bucket::name)
            .collect(Collectors.toList());
}
```

### Check if Bucket Exists

```java
public boolean bucketExists(S3Client s3Client, String bucketName) {
    try {
        HeadBucketRequest request = HeadBucketRequest.builder()
                .bucket(bucketName)
                .build();

        s3Client.headBucket(request);
        return true;

    } catch (NoSuchBucketException e) {
        return false;
    }
}
```

## Basic Object Operations

### Upload File to S3

```java
import software.amazon.awssdk.core.sync.RequestBody;
import java.nio.file.Paths;

public void uploadFile(S3Client s3Client, String bucketName, String key, String filePath) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
    System.out.println("File uploaded: " + key);
}
```

### Download File from S3

```java
import java.nio.file.Paths;

public void downloadFile(S3Client s3Client, String bucketName, String key, String destPath) {
    GetObjectRequest request = GetObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    // Writes the object directly to the destination path
    s3Client.getObject(request, Paths.get(destPath));
    System.out.println("File downloaded: " + destPath);
}
```

### Get Object Metadata

```java
import java.util.Map;

public Map<String, String> getObjectMetadata(S3Client s3Client, String bucketName, String key) {
    HeadObjectRequest request = HeadObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    HeadObjectResponse response = s3Client.headObject(request);
    return response.metadata();
}
```

## Advanced Object Operations

### Upload with Metadata and Encryption

```java
public void uploadWithMetadata(S3Client s3Client, String bucketName, String key,
                               String filePath, Map<String, String> metadata) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .metadata(metadata)
            .contentType("application/pdf")
            .serverSideEncryption(ServerSideEncryption.AES256)
            .storageClass(StorageClass.STANDARD_IA)
            .build();

    PutObjectResponse response = s3Client.putObject(request,
            RequestBody.fromFile(Paths.get(filePath)));

    System.out.println("Upload completed. ETag: " + response.eTag());
}
```

### Copy Object Between Buckets

```java
public void copyObject(S3Client s3Client, String sourceBucket, String sourceKey,
                       String destBucket, String destKey) {
    CopyObjectRequest request = CopyObjectRequest.builder()
            .sourceBucket(sourceBucket)
            .sourceKey(sourceKey)
            .destinationBucket(destBucket)
            .destinationKey(destKey)
            .build();

    s3Client.copyObject(request);
    System.out.println("Object copied: " + sourceKey + " -> " + destKey);
}
```

### Delete Multiple Objects

```java
public void deleteMultipleObjects(S3Client s3Client, String bucketName, List<String> keys) {
    List<ObjectIdentifier> objectIds = keys.stream()
            .map(key -> ObjectIdentifier.builder().key(key).build())
            .collect(Collectors.toList());

    Delete delete = Delete.builder()
            .objects(objectIds)
            .build();

    DeleteObjectsRequest request = DeleteObjectsRequest.builder()
            .bucket(bucketName)
            .delete(delete)
            .build();

    DeleteObjectsResponse response = s3Client.deleteObjects(request);

    response.deleted().forEach(deleted ->
            System.out.println("Deleted: " + deleted.key()));

    response.errors().forEach(error ->
            System.err.println("Failed to delete " + error.key() + ": " + error.message()));
}
```

## Presigned URLs

### Generate Download URL

```java
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.*;
import java.time.Duration;

public String generateDownloadUrl(String bucketName, String key) {
    try (S3Presigner presigner = S3Presigner.builder()
            .region(Region.US_EAST_1)
            .build()) {

        GetObjectRequest getObjectRequest = GetObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10))
                .getObjectRequest(getObjectRequest)
                .build();

        PresignedGetObjectRequest presignedRequest = presigner.presignGetObject(presignRequest);

        return presignedRequest.url().toString();
    }
}
```

### Generate Upload URL

```java
public String generateUploadUrl(String bucketName, String key) {
    try (S3Presigner presigner = S3Presigner.create()) {

        PutObjectRequest putObjectRequest = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(5))
                .putObjectRequest(putObjectRequest)
                .build();

        PresignedPutObjectRequest presignedRequest = presigner.presignPutObject(presignRequest);

        return presignedRequest.url().toString();
    }
}
```

## S3 Transfer Manager

### Upload with Transfer Manager

```java
import software.amazon.awssdk.transfer.s3.*;
import software.amazon.awssdk.transfer.s3.model.*;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;

public void uploadWithTransferManager(String bucketName, String key, String filePath) {
    try (S3TransferManager transferManager = S3TransferManager.create()) {

        UploadFileRequest uploadRequest = UploadFileRequest.builder()
                .putObjectRequest(req -> req
                        .bucket(bucketName)
                        .key(key))
                .source(Paths.get(filePath))
                // Logs transfer progress as the upload proceeds
                .addTransferListener(LoggingTransferListener.create())
                .build();

        FileUpload upload = transferManager.uploadFile(uploadRequest);

        CompletedFileUpload result = upload.completionFuture().join();

        System.out.println("Upload complete. ETag: " + result.response().eTag());
    }
}
```

### Download with Transfer Manager

```java
public void downloadWithTransferManager(String bucketName, String key, String destPath) {
    try (S3TransferManager transferManager = S3TransferManager.create()) {

        DownloadFileRequest downloadRequest = DownloadFileRequest.builder()
                .getObjectRequest(req -> req
                        .bucket(bucketName)
                        .key(key))
                .destination(Paths.get(destPath))
                .build();

        FileDownload download = transferManager.downloadFile(downloadRequest);

        CompletedFileDownload result = download.completionFuture().join();

        System.out.println("Download complete. Size: " + result.response().contentLength());
    }
}
```

## Spring Boot Integration

### Configuration Properties

```java
import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "aws.s3")
public class S3Properties {
    private String accessKey;
    private String secretKey;
    private String region = "us-east-1";
    private String endpoint;
    private String defaultBucket;
    private boolean asyncEnabled = false;
    private boolean transferManagerEnabled = true;

    // Getters and setters
    public String getAccessKey() { return accessKey; }
    public void setAccessKey(String accessKey) { this.accessKey = accessKey; }
    // ... other getters and setters
}
```

### S3 Configuration Class

```java
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.transfer.s3.S3TransferManager;

import java.net.URI;

@Configuration
@EnableConfigurationProperties(S3Properties.class)
public class S3Configuration {

    private final S3Properties properties;

    public S3Configuration(S3Properties properties) {
        this.properties = properties;
    }

    @Bean
    public S3Client s3Client() {
        S3Client.Builder builder = S3Client.builder()
                .region(Region.of(properties.getRegion()));

        if (properties.getAccessKey() != null && properties.getSecretKey() != null) {
            builder.credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(
                            properties.getAccessKey(),
                            properties.getSecretKey())));
        }

        if (properties.getEndpoint() != null) {
            builder.endpointOverride(URI.create(properties.getEndpoint()));
        }

        return builder.build();
    }

    @Bean
    public S3AsyncClient s3AsyncClient() {
        S3AsyncClient.Builder builder = S3AsyncClient.builder()
                .region(Region.of(properties.getRegion()));

        if (properties.getAccessKey() != null && properties.getSecretKey() != null) {
            builder.credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(
                            properties.getAccessKey(),
                            properties.getSecretKey())));
        }

        if (properties.getEndpoint() != null) {
            builder.endpointOverride(URI.create(properties.getEndpoint()));
        }

        return builder.build();
    }

    @Bean
    public S3TransferManager s3TransferManager() {
        // Transfer Manager requires an asynchronous S3 client
        return S3TransferManager.builder()
                .s3Client(s3AsyncClient())
                .build();
    }
}
```

### S3 Service

```java
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.transfer.s3.S3TransferManager;

import java.io.IOException;
import java.nio.file.*;
import java.time.Duration;
import java.util.concurrent.CompletableFuture;

@Service
@RequiredArgsConstructor
public class S3Service {

    private final S3Client s3Client;
    private final S3AsyncClient s3AsyncClient;
    private final S3TransferManager transferManager;
    private final S3Properties properties;

    public CompletableFuture<Void> uploadFileAsync(String key, Path file) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(properties.getDefaultBucket())
                .key(key)
                .build();

        return CompletableFuture.runAsync(() -> {
            s3Client.putObject(request, RequestBody.fromFile(file));
        });
    }

    public CompletableFuture<byte[]> downloadFileAsync(String key) {
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(properties.getDefaultBucket())
                .key(key)
                .build();

        return CompletableFuture.supplyAsync(() -> {
            try (ResponseInputStream<GetObjectResponse> response = s3Client.getObject(request)) {
                return response.readAllBytes();
            } catch (IOException e) {
                throw new RuntimeException("Failed to read S3 object", e);
            }
        });
    }

    public CompletableFuture<String> generatePresignedUrl(String key, Duration duration) {
        return CompletableFuture.supplyAsync(() -> {
            try (S3Presigner presigner = S3Presigner.builder()
                    .region(Region.of(properties.getRegion()))
                    .build()) {

                GetObjectRequest getRequest = GetObjectRequest.builder()
                        .bucket(properties.getDefaultBucket())
                        .key(key)
                        .build();

                GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                        .signatureDuration(duration)
                        .getObjectRequest(getRequest)
                        .build();

                return presigner.presignGetObject(presignRequest).url().toString();
            }
        });
    }

    public Flux<S3Object> listObjects(String prefix) {
        ListObjectsV2Request request = ListObjectsV2Request.builder()
                .bucket(properties.getDefaultBucket())
                .prefix(prefix)
                .build();

        return Flux.create(sink -> {
            s3Client.listObjectsV2Paginator(request)
                    .contents()
                    .forEach(sink::next);
            sink.complete();
        });
    }
}
```

## Examples

### Basic File Upload Example

```java
public class S3UploadExample {
    public static void main(String[] args) {
        // Initialize client
        S3Client s3Client = S3Client.builder()
                .region(Region.US_EAST_1)
                .build();

        String bucketName = "my-example-bucket";
        String filePath = "document.pdf";
        String key = "uploads/document.pdf";

        // bucketExists, createBucket, uploadWithMetadata and generateDownloadUrl
        // are the helper methods defined in the sections above
        if (!bucketExists(s3Client, bucketName)) {
            createBucket(s3Client, bucketName);
        }

        // Upload file
        Map<String, String> metadata = Map.of(
                "author", "John Doe",
                "content-type", "application/pdf",
                "upload-date", java.time.LocalDate.now().toString()
        );

        uploadWithMetadata(s3Client, bucketName, key, filePath, metadata);

        // Generate presigned URL
        String downloadUrl = generateDownloadUrl(bucketName, key);
        System.out.println("Download URL: " + downloadUrl);

        // Close client
        s3Client.close();
    }
}
```

### Batch File Processing Example

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.*;

public class S3BatchProcessing {
    public void processDirectoryUpload(S3Client s3Client, String bucketName, String directoryPath) {
        try (Stream<Path> paths = Files.walk(Paths.get(directoryPath))) {
            List<CompletableFuture<Void>> futures = paths
                    .filter(Files::isRegularFile)
                    .map(path -> {
                        // Object keys should not include the bucket name
                        String key = path.getFileName().toString();
                        return CompletableFuture.runAsync(() ->
                                uploadFile(s3Client, bucketName, key, path.toString()));
                    })
                    .collect(Collectors.toList());

            // Wait for all uploads to complete
            CompletableFuture.allOf(
                    futures.toArray(new CompletableFuture[0])
            ).join();

            System.out.println("All files uploaded successfully");
        } catch (IOException e) {
            throw new RuntimeException("Failed to process directory", e);
        }
    }
}
```

## Best Practices

### Performance Optimization

1. **Use S3 Transfer Manager**: Automatically handles multipart uploads, parallel transfers, and progress tracking for files >100MB
2. **Reuse S3 clients**: Clients are thread-safe and should be reused throughout the application lifecycle
3. **Enable async operations**: Use S3AsyncClient for I/O-bound operations to improve throughput
4. **Configure proper timeouts**: Set appropriate timeouts for large file operations
5. **Use connection pooling**: Configure the HTTP client for optimal connection management

### Security Considerations

1. **Use temporary credentials**: Always use IAM roles or AWS STS for short-lived access tokens
2. **Enable server-side encryption**: Use AES-256 or AWS KMS for sensitive data
3. **Implement access controls**: Use bucket policies and IAM roles instead of access keys in production
4. **Validate object metadata**: Sanitize user-provided metadata to prevent header injection
5. **Use presigned URLs**: Avoid exposing credentials by using temporary access URLs

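Point 4 above can be sketched as a small helper: S3 user metadata travels as `x-amz-meta-*` HTTP headers, so stripping CR/LF and non-printable characters from user-supplied values prevents header injection. This is a minimal sketch; the class name is illustrative, not part of the SDK:

```java
public class MetadataSanitizer {

    // S3 user metadata is sent as x-amz-meta-* HTTP headers, so values
    // must not contain CR/LF or characters outside printable ASCII.
    public static String sanitize(String value) {
        return value
                .replaceAll("[\\r\\n]", "")          // strip header-injection characters
                .replaceAll("[^\\x20-\\x7E]", "");   // keep printable ASCII only
    }
}
```

For non-ASCII metadata you would typically encode the value (for example Base64 or URL-encoding) rather than strip characters, depending on your requirements.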
### Error Handling

1. **Implement retry logic**: Network operations should have exponential backoff retry strategies
2. **Handle throttling**: Implement proper handling of 503 Slow Down throttling responses
3. **Validate object existence**: Check if objects exist before operations that require them
4. **Clean up failed operations**: Abort multipart uploads that fail
5. **Log appropriately**: Log successful operations and errors for monitoring

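The exponential-backoff rule from point 1 can be sketched as a pure delay calculator. The class and method names are illustrative; in production you would normally lean on the SDK's built-in `RetryPolicy` rather than hand-rolling this:

```java
public class BackoffCalculator {

    // Doubles the delay on every attempt, capped at maxMillis.
    // attempt is zero-based: attempt 0 waits baseMillis.
    public static long delayMillis(int attempt, long baseMillis, long maxMillis) {
        long delay = baseMillis << Math.min(attempt, 30); // clamp shift to avoid overflow
        return Math.min(delay, maxMillis);
    }
}
```

Real implementations usually also add random jitter to the computed delay so that many clients retrying at once do not synchronize.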
### Cost Optimization

1. **Use appropriate storage classes**: Choose STANDARD, STANDARD_IA, or INTELLIGENT_TIERING based on access patterns
2. **Implement lifecycle policies**: Automatically transition or expire objects
3. **Enable object versioning**: For important data that needs retention
4. **Monitor usage**: Track data transfer and storage costs
5. **Minimize API calls**: Use batch operations when possible

## Constraints and Limitations

- **File size limits**: Single PUT operations limited to 5GB; use multipart uploads for larger files
- **Batch operations**: Maximum 1000 objects per DeleteObjects operation
- **Metadata size**: User-defined metadata limited to 2KB
- **Concurrent transfers**: Transfer Manager handles up to 100 concurrent transfers by default
- **Region consistency**: Cross-region operations may incur additional costs and latency
- **Consistency model**: S3 provides strong read-after-write consistency, so new objects are visible immediately after upload

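Because DeleteObjects accepts at most 1000 keys per call, larger deletions must be split client-side before invoking the API. A minimal sketch of that batching (the class name is illustrative; each resulting batch would be passed to `deleteMultipleObjects` from the section above):

```java
import java.util.ArrayList;
import java.util.List;

public class S3KeyBatcher {

    // Splits a key list into batches no larger than batchSize,
    // ready to be fed one batch at a time into deleteObjects.
    public static List<List<String>> partition(List<String> keys, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            batches.add(new ArrayList<>(keys.subList(i, Math.min(i + batchSize, keys.size()))));
        }
        return batches;
    }
}
```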
## References

For more detailed information, see:
- [AWS S3 Object Operations Reference](./references/s3-object-operations.md)
- [S3 Transfer Manager Patterns](./references/s3-transfer-patterns.md)
- [Spring Boot Integration Guide](./references/s3-spring-boot-integration.md)
- [AWS S3 Developer Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/)
- [AWS SDK for Java 2.x S3 API](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/package-summary.html)

## Related Skills

- `aws-sdk-java-v2-core` - Core AWS SDK patterns and configuration
- `spring-boot-dependency-injection` - Spring dependency injection patterns
- `unit-test-service-layer` - Testing service layer patterns
- `unit-test-wiremock-rest-api` - Testing external API integrations

@@ -0,0 +1,371 @@
# S3 Object Operations Reference

## Detailed Object Operations

### Advanced Upload Patterns

#### Streaming Upload

```java
public void uploadWithProgress(S3Client s3Client, String bucketName, String key,
                               String filePath) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    // RequestBody is not AutoCloseable, so no try-with-resources is needed here
    s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
}
```

#### Conditional Upload
|
||||
|
||||
```java
|
||||
public void conditionalUpload(S3Client s3Client, String bucketName, String key,
|
||||
String filePath, String expectedETag) {
|
||||
PutObjectRequest request = PutObjectRequest.builder()
|
||||
.bucket(bucketName)
|
||||
.key(key)
|
||||
.ifMatch(expectedETag)
|
||||
.build();
|
||||
|
||||
s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
|
||||
}
|
||||
```
|
||||
|
||||
### Advanced Download Patterns

#### Range Requests for Large Files

```java
public void downloadInChunks(S3Client s3Client, String bucketName, String key,
                             String destPath, int chunkSizeMB) throws IOException {
    long fileSize = getFileSize(s3Client, bucketName, key);
    long chunkSize = (long) chunkSizeMB * 1024 * 1024; // long math avoids int overflow

    try (OutputStream os = new FileOutputStream(destPath)) {
        for (long start = 0; start < fileSize; start += chunkSize) {
            long end = Math.min(start + chunkSize - 1, fileSize - 1);

            GetObjectRequest request = GetObjectRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .range("bytes=" + start + "-" + end)
                    .build();

            try (ResponseInputStream<GetObjectResponse> response =
                         s3Client.getObject(request)) {
                response.transferTo(os);
            }
        }
    }
}

private long getFileSize(S3Client s3Client, String bucketName, String key) {
    // HEAD the object to learn its size before issuing range requests
    return s3Client.headObject(HeadObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build()).contentLength();
}
```
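The chunking loop above reduces to generating HTTP `Range` headers over the object's byte span. Extracting that arithmetic into a small stand-alone helper (hypothetical, not an SDK API) makes the boundary cases easy to verify without touching S3:

```java
import java.util.ArrayList;
import java.util.List;

public class RangePlanner {
    /** Builds "bytes=start-end" headers covering [0, fileSize) in fixed chunks. */
    public static List<String> rangeHeaders(long fileSize, long chunkSize) {
        List<String> ranges = new ArrayList<>();
        for (long start = 0; start < fileSize; start += chunkSize) {
            long end = Math.min(start + chunkSize - 1, fileSize - 1); // end is inclusive
            ranges.add("bytes=" + start + "-" + end);
        }
        return ranges;
    }

    public static void main(String[] args) {
        // A 10-byte object in 4-byte chunks: the final range is shorter
        System.out.println(rangeHeaders(10, 4)); // [bytes=0-3, bytes=4-7, bytes=8-9]
    }
}
```

Note that HTTP byte ranges are inclusive on both ends, which is why the last range ends at `fileSize - 1`.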
### Metadata Management

#### Setting and Retrieving Object Metadata

```java
public void setObjectMetadata(S3Client s3Client, String bucketName, String key,
                              Map<String, String> metadata) {
    // Note: PutObject replaces the object. To change metadata on an existing
    // object without re-uploading its content, copy it onto itself with
    // MetadataDirective.REPLACE instead.
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .metadata(metadata)
            .build();

    s3Client.putObject(request, RequestBody.empty());
}

public Map<String, String> getObjectMetadata(S3Client s3Client,
                                             String bucketName, String key) {
    HeadObjectRequest request = HeadObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    HeadObjectResponse response = s3Client.headObject(request);
    return response.metadata();
}
```
### Storage Classes and Lifecycle

#### Managing Different Storage Classes

```java
public void uploadWithStorageClass(S3Client s3Client, String bucketName, String key,
                                   String filePath, StorageClass storageClass) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .storageClass(storageClass)
            .build();

    s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
}

// Storage class options:
// STANDARD - Default storage class
// STANDARD_IA - Infrequent Access
// ONEZONE_IA - Single-zone infrequent access
// INTELLIGENT_TIERING - Automatically optimizes storage
// GLACIER - Archive storage
// DEEP_ARCHIVE - Long-term archive storage
```
### Object Tagging

#### Adding and Managing Tags

```java
public void addTags(S3Client s3Client, String bucketName, String key,
                    Map<String, String> tags) {
    Tagging tagging = Tagging.builder()
            .tagSet(tags.entrySet().stream()
                    .map(entry -> Tag.builder()
                            .key(entry.getKey())
                            .value(entry.getValue())
                            .build())
                    .collect(Collectors.toList()))
            .build();

    PutObjectTaggingRequest request = PutObjectTaggingRequest.builder()
            .bucket(bucketName)
            .key(key)
            .tagging(tagging)
            .build();

    s3Client.putObjectTagging(request);
}

public Map<String, String> getTags(S3Client s3Client, String bucketName, String key) {
    GetObjectTaggingRequest request = GetObjectTaggingRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    GetObjectTaggingResponse response = s3Client.getObjectTagging(request);

    return response.tagSet().stream()
            .collect(Collectors.toMap(Tag::key, Tag::value));
}
```
### Advanced Copy Operations

#### Server-Side Copy with Metadata

```java
public void copyWithMetadata(S3Client s3Client, String sourceBucket, String sourceKey,
                             String destBucket, String destKey,
                             Map<String, String> metadata) {
    CopyObjectRequest request = CopyObjectRequest.builder()
            .sourceBucket(sourceBucket)
            .sourceKey(sourceKey)
            .destinationBucket(destBucket)
            .destinationKey(destKey)
            .metadata(metadata)
            .metadataDirective(MetadataDirective.REPLACE)
            .build();

    s3Client.copyObject(request);
}
```
## Error Handling Patterns

### Retry Mechanisms

```java
import software.amazon.awssdk.core.retry.RetryPolicy;
import software.amazon.awssdk.core.retry.backoff.FixedDelayBackoffStrategy;
import software.amazon.awssdk.core.retry.conditions.RetryCondition;

import java.time.Duration;

public S3Client createS3ClientWithRetry() {
    return S3Client.builder()
            .overrideConfiguration(b -> b
                    .retryPolicy(RetryPolicy.builder()
                            .numRetries(3)
                            .backoffStrategy(FixedDelayBackoffStrategy.create(
                                    Duration.ofSeconds(1)))
                            .retryCondition(RetryCondition.defaultRetryCondition())
                            .build()))
            .build();
}
```
### Throttling Handling

```java
public void handleThrottling(S3Client s3Client, String bucketName, String key) {
    try {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();

        s3Client.putObject(request, RequestBody.fromString("test"));

    } catch (S3Exception e) {
        // S3 signals throttling with HTTP 503 and the "SlowDown" error code
        if (e.statusCode() == 503) {
            try {
                Thread.sleep(1000);
                // Retry logic here
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        throw e;
    }
}
```
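The fixed one-second sleep above is a placeholder; production retry loops against a throttled bucket typically use exponential backoff with jitter so that concurrent clients do not retry in lockstep. A sketch of the delay schedule (the base and cap below are illustrative values, not AWS defaults):

```java
import java.util.Random;

public class BackoffSchedule {
    static final long BASE_DELAY_MS = 100;   // illustrative base delay
    static final long MAX_DELAY_MS = 20_000; // illustrative cap

    /** Exponential delay for a given attempt, capped at MAX_DELAY_MS. */
    public static long expDelay(int attempt) {
        long delay = BASE_DELAY_MS << Math.min(attempt, 30); // clamp shift to avoid overflow
        return Math.min(delay, MAX_DELAY_MS);
    }

    /** "Full jitter": sleep a uniformly random fraction of the exponential delay. */
    public static long jitteredDelay(int attempt, Random random) {
        long cap = expDelay(attempt);
        return (long) (random.nextDouble() * cap);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            System.out.println("attempt " + attempt + ": up to " + expDelay(attempt) + " ms");
        }
    }
}
```

The SDK's built-in retry policy already applies a similar strategy, so hand-rolled loops like this are mostly useful when retrying above the SDK layer.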
## Performance Optimization

### Batch Operations

#### Batch Delete Objects

```java
public void batchDeleteObjects(S3Client s3Client, String bucketName,
                               List<String> keys) {
    int batchSize = 1000; // S3 limit for batch operations
    int totalBatches = (int) Math.ceil((double) keys.size() / batchSize);

    for (int i = 0; i < totalBatches; i++) {
        List<String> batchKeys = keys.subList(
                i * batchSize,
                Math.min((i + 1) * batchSize, keys.size()));

        List<ObjectIdentifier> objectIdentifiers = batchKeys.stream()
                .map(key -> ObjectIdentifier.builder().key(key).build())
                .collect(Collectors.toList());

        Delete delete = Delete.builder()
                .objects(objectIdentifiers)
                .build();

        DeleteObjectsRequest request = DeleteObjectsRequest.builder()
                .bucket(bucketName)
                .delete(delete)
                .build();

        DeleteObjectsResponse response = s3Client.deleteObjects(request);
        // Partial failures are reported per object, not thrown as an exception
        response.errors().forEach(err ->
                System.err.println("Failed to delete " + err.key() + ": " + err.message()));
    }
}
```
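The `subList` index arithmetic in the loop above is easy to get wrong by one. Pulling it into a small partition helper (a hypothetical utility, not an SDK API) makes the batching logic testable in isolation from S3:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPartitioner {
    /** Splits a list into consecutive batches of at most batchSize elements. */
    public static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            // subList's upper bound is exclusive, so no -1 adjustment is needed
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> keys = List.of(1, 2, 3, 4, 5);
        System.out.println(partition(keys, 2)); // [[1, 2], [3, 4], [5]]
    }
}
```

With this in place, the delete loop becomes `for (List<String> batch : BatchPartitioner.partition(keys, 1000)) { ... }`.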
### Parallel Uploads

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

public void parallelUploads(S3Client s3Client, String bucketName,
                            List<String> keys, ExecutorService executor) {
    List<CompletableFuture<Void>> futures = new ArrayList<>();

    for (String key : keys) {
        CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(key)
                    .build();

            s3Client.putObject(request, RequestBody.fromString("data"));
        }, executor);

        futures.add(future);
    }

    CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
}
```
## Security Considerations

### Access Control

#### Setting Object ACLs

```java
public void setObjectAcl(S3Client s3Client, String bucketName, String key,
                         ObjectCannedACL acl) {
    // SDK v2 models canned ACLs as ObjectCannedACL (v1 used CannedAccessControlList)
    PutObjectAclRequest request = PutObjectAclRequest.builder()
            .bucket(bucketName)
            .key(key)
            .acl(acl)
            .build();

    s3Client.putObjectAcl(request);
}

// ACL options:
// private, public-read, public-read-write, authenticated-read,
// aws-exec-read, bucket-owner-read, bucket-owner-full-control
```
#### Encryption

```java
public void encryptedUpload(S3Client s3Client, String bucketName, String key,
                            String filePath, String kmsKeyId) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .serverSideEncryption(ServerSideEncryption.AWS_KMS)
            .ssekmsKeyId(kmsKeyId)
            .build();

    s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
}
```
## Monitoring and Logging

### Upload Completion Events

```java
public void uploadWithMonitoring(S3Client s3Client, String bucketName, String key,
                                 String filePath) {
    PutObjectRequest request = PutObjectRequest.builder()
            .bucket(bucketName)
            .key(key)
            .build();

    // putObject returns the response directly, not a Response wrapper
    PutObjectResponse response = s3Client.putObject(request,
            RequestBody.fromFile(Paths.get(filePath)));

    System.out.println("Upload completed with ETag: " + response.eTag());
}
```
## Integration Patterns

### Event Notifications

```java
public void setupEventNotifications(S3Client s3Client, String bucketName) {
    NotificationConfiguration configuration = NotificationConfiguration.builder()
            .topicConfigurations(TopicConfiguration.builder()
                    .topicArn("arn:aws:sns:us-east-1:123456789012:my-topic")
                    .events(Event.S3_OBJECT_CREATED_PUT, Event.S3_OBJECT_CREATED_POST)
                    .build())
            .build();

    PutBucketNotificationConfigurationRequest request =
            PutBucketNotificationConfigurationRequest.builder()
                    .bucket(bucketName)
                    .notificationConfiguration(configuration)
                    .build();

    s3Client.putBucketNotificationConfiguration(request);
}
```
@@ -0,0 +1,668 @@
# S3 Spring Boot Integration Reference

## Advanced Spring Boot Configuration

### Multi-Environment Configuration

```java
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableConfigurationProperties(S3Properties.class)
public class S3Configuration {

    private final S3Properties properties;

    public S3Configuration(S3Properties properties) {
        this.properties = properties;
    }

    @Bean
    @ConditionalOnProperty(name = "s3.async-enabled", havingValue = "true")
    public S3AsyncClient s3AsyncClient() {
        S3AsyncClientBuilder builder = S3AsyncClient.builder()
                .region(Region.of(properties.getRegion()))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                properties.getAccessKey(),
                                properties.getSecretKey())));
        // Only override the endpoint when one is configured (e.g. LocalStack)
        if (properties.getEndpoint() != null) {
            builder.endpointOverride(URI.create(properties.getEndpoint()));
        }
        return builder.build();
    }

    @Bean
    @ConditionalOnProperty(name = "s3.sync-enabled", havingValue = "true", matchIfMissing = true)
    public S3Client s3Client() {
        S3ClientBuilder builder = S3Client.builder()
                .region(Region.of(properties.getRegion()))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                properties.getAccessKey(),
                                properties.getSecretKey())));
        if (properties.getEndpoint() != null) {
            builder.endpointOverride(URI.create(properties.getEndpoint()));
        }
        return builder.build();
    }

    @Bean
    @ConditionalOnProperty(name = "s3.transfer-manager-enabled", havingValue = "true")
    public S3TransferManager s3TransferManager(S3AsyncClient s3AsyncClient) {
        // The Transfer Manager is built on the async client, not S3Client
        return S3TransferManager.builder()
                .s3Client(s3AsyncClient)
                .build();
    }

    @Bean
    @ConditionalOnProperty(name = "s3.presigner-enabled", havingValue = "true")
    public S3Presigner s3Presigner() {
        return S3Presigner.builder()
                .region(Region.of(properties.getRegion()))
                .build();
    }
}

@ConfigurationProperties(prefix = "s3")
@Data
public class S3Properties {
    private String accessKey;
    private String secretKey;
    private String region = "us-east-1";
    private String endpoint = null;
    private boolean syncEnabled = true;
    private boolean asyncEnabled = false;
    private boolean transferManagerEnabled = false;
    private boolean presignerEnabled = false;
    private int maxConnections = 100;
    private int connectionTimeout = 5000;
    private int socketTimeout = 30000;
    private String defaultBucket;
}
```
### Profile-Specific Configuration

```properties
# application-dev.properties
s3.access-key=${AWS_ACCESS_KEY}
s3.secret-key=${AWS_SECRET_KEY}
s3.region=us-east-1
s3.endpoint=http://localhost:4566
s3.async-enabled=true
s3.transfer-manager-enabled=true

# application-prod.properties
s3.access-key=${AWS_ACCESS_KEY}
s3.secret-key=${AWS_SECRET_KEY}
s3.region=us-east-1
s3.async-enabled=true
s3.presigner-enabled=true
```
## Advanced Service Patterns

### Generic S3 Service Template

```java
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.services.s3.model.*;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Collectors;

@Service
@RequiredArgsConstructor
public class S3Service {

    private final S3Client s3Client;
    private final S3AsyncClient s3AsyncClient;
    private final S3TransferManager transferManager;
    private final S3Properties s3Properties;

    // Basic Operations
    public Mono<Void> uploadObjectAsync(String key, byte[] data) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(s3Properties.getDefaultBucket())
                .key(key)
                .build();

        // The async client takes an AsyncRequestBody and already returns a
        // CompletableFuture, so no .future() call is needed
        return Mono.fromFuture(() -> s3AsyncClient.putObject(request,
                AsyncRequestBody.fromBytes(data))).then();
    }

    public Mono<byte[]> downloadObjectAsync(String key) {
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(s3Properties.getDefaultBucket())
                .key(key)
                .build();

        // AsyncResponseTransformer.toBytes() buffers the body in memory
        return Mono.fromFuture(() -> s3AsyncClient.getObject(request,
                        AsyncResponseTransformer.toBytes())
                .thenApply(ResponseBytes::asByteArray));
    }

    // Advanced Operations
    public Mono<UploadResult> uploadWithMetadata(String key,
                                                 Path file,
                                                 Map<String, String> metadata) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(s3Properties.getDefaultBucket())
                .key(key)
                .metadata(metadata)
                .contentType(getContentType(file))
                .build();

        return Mono.fromFuture(() -> s3AsyncClient.putObject(request,
                        AsyncRequestBody.fromFile(file))
                .thenApply(response -> new UploadResult(key, response.eTag())));
    }

    public Flux<S3Object> listObjectsWithPrefix(String prefix) {
        ListObjectsV2Request request = ListObjectsV2Request.builder()
                .bucket(s3Properties.getDefaultBucket())
                .prefix(prefix)
                .build();

        // The paginator transparently follows continuation tokens
        return Flux.fromIterable(s3Client.listObjectsV2Paginator(request).contents());
    }

    public Mono<Void> batchDelete(List<String> keys) {
        List<ObjectIdentifier> objectIdentifiers = keys.stream()
                .map(key -> ObjectIdentifier.builder().key(key).build())
                .collect(Collectors.toList());

        Delete delete = Delete.builder()
                .objects(objectIdentifiers)
                .build();

        DeleteObjectsRequest request = DeleteObjectsRequest.builder()
                .bucket(s3Properties.getDefaultBucket())
                .delete(delete)
                .build();

        return Mono.fromFuture(() -> s3AsyncClient.deleteObjects(request)).then();
    }

    // Transfer Manager Operations
    public Mono<UploadResult> uploadWithTransferManager(String key, Path file) {
        UploadFileRequest request = UploadFileRequest.builder()
                .putObjectRequest(req -> req
                        .bucket(s3Properties.getDefaultBucket())
                        .key(key))
                .source(file)
                .build();

        return Mono.fromFuture(() -> transferManager.uploadFile(request)
                .completionFuture()
                .thenApply(result -> new UploadResult(key, result.response().eTag())));
    }

    public Mono<DownloadResult> downloadWithTransferManager(String key, Path destination) {
        DownloadFileRequest request = DownloadFileRequest.builder()
                .getObjectRequest(req -> req
                        .bucket(s3Properties.getDefaultBucket())
                        .key(key))
                .destination(destination)
                .build();

        return Mono.fromFuture(() -> transferManager.downloadFile(request)
                .completionFuture()
                .thenApply(result -> new DownloadResult(destination, result.response().contentLength())));
    }

    // Utility Methods
    private String getContentType(Path file) {
        try {
            return Files.probeContentType(file);
        } catch (IOException e) {
            return "application/octet-stream";
        }
    }

    public String getDefaultBucket() {
        return s3Properties.getDefaultBucket();
    }

    // Records for Results
    public record UploadResult(String key, String eTag) {}
    public record DownloadResult(Path path, long size) {}
}
```
### Event-Driven S3 Operations

```java
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Mono;

import java.time.Duration;

@Service
@RequiredArgsConstructor
public class S3EventService {

    private final S3Service s3Service;
    private final ApplicationEventPublisher eventPublisher;

    public Mono<UploadResult> uploadAndPublishEvent(String key, Path file) {
        // Note: @Transactional is not useful here; the upload completes
        // asynchronously, outside any database transaction.
        return s3Service.uploadWithTransferManager(key, file)
                .doOnSuccess(result ->
                        eventPublisher.publishEvent(new S3UploadEvent(key, result.eTag())))
                .doOnError(error ->
                        eventPublisher.publishEvent(new S3UploadFailedEvent(key, error.getMessage())));
    }

    public Mono<String> generatePresignedUrl(String key) {
        // Presigning is a purely local signing operation; there is no need to
        // download the object first.
        return Mono.fromCallable(() -> {
            try (S3Presigner presigner = S3Presigner.create()) {
                GetObjectRequest request = GetObjectRequest.builder()
                        .bucket(s3Service.getDefaultBucket())
                        .key(key)
                        .build();

                GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                        .signatureDuration(Duration.ofMinutes(10))
                        .getObjectRequest(request)
                        .build();

                return presigner.presignGetObject(presignRequest)
                        .url()
                        .toString();
            }
        });
    }
}

// Event Classes
public class S3UploadEvent extends ApplicationEvent {
    private final String key;
    private final String eTag;

    public S3UploadEvent(String key, String eTag) {
        super(key);
        this.key = key;
        this.eTag = eTag;
    }

    public String getKey() { return key; }
    public String getETag() { return eTag; }
}

public class S3UploadFailedEvent extends ApplicationEvent {
    private final String key;
    private final String errorMessage;

    public S3UploadFailedEvent(String key, String errorMessage) {
        super(key);
        this.key = key;
        this.errorMessage = errorMessage;
    }

    public String getKey() { return key; }
    public String getErrorMessage() { return errorMessage; }
}
```
### Retry and Error Handling

```java
import org.springframework.retry.annotation.*;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.core.exception.SdkClientException;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.model.*;

import java.nio.file.Path;

@Service
@RequiredArgsConstructor
public class ResilientS3Service {

    private final S3Client s3Client;

    // @Retryable only re-invokes synchronous methods that throw. For reactive
    // pipelines, use Mono.retryWhen(Retry.backoff(...)) instead.
    @Retryable(value = {S3Exception.class, SdkClientException.class},
            maxAttempts = 3,
            backoff = @Backoff(delay = 1000, multiplier = 2))
    public PutObjectResponse uploadWithRetry(String key, Path file) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket("my-bucket")
                .key(key)
                .build();

        return s3Client.putObject(request, RequestBody.fromFile(file));
    }

    @Recover
    public PutObjectResponse uploadRecover(S3Exception e, String key, Path file) {
        // Log the failure and potentially send notification
        System.err.println("Upload failed after retries: " + e.getMessage());
        throw new S3UploadException("Upload failed after retries", e);
    }

    @Retryable(value = {S3Exception.class},
            maxAttempts = 5,
            backoff = @Backoff(delay = 2000, multiplier = 2))
    public void copyObjectWithRetry(String sourceKey, String destinationKey) {
        CopyObjectRequest request = CopyObjectRequest.builder()
                .sourceBucket("source-bucket")
                .sourceKey(sourceKey)
                .destinationBucket("destination-bucket")
                .destinationKey(destinationKey)
                .build();

        s3Client.copyObject(request);
    }
}

public class S3UploadException extends RuntimeException {
    public S3UploadException(String message, Throwable cause) {
        super(message, cause);
    }
}
```
## Testing Integration

### Test Configuration with LocalStack

```java
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.test.context.ActiveProfiles;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;

@Testcontainers
@ActiveProfiles("test")
@TestConfiguration
public class S3TestConfig {

    @Container
    static LocalStackContainer localstack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(LocalStackContainer.Service.S3)
            .withEnv("DEFAULT_REGION", "us-east-1");

    @Bean
    public S3Client testS3Client() {
        return S3Client.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(localstack.getEndpointOverride(LocalStackContainer.Service.S3))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                localstack.getAccessKey(),
                                localstack.getSecretKey())))
                .build();
    }

    @Bean
    public S3AsyncClient testS3AsyncClient() {
        return S3AsyncClient.builder()
                .region(Region.US_EAST_1)
                .endpointOverride(localstack.getEndpointOverride(LocalStackContainer.Service.S3))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create(
                                localstack.getAccessKey(),
                                localstack.getSecretKey())))
                .build();
    }
}
```
### Unit Testing with Mocks

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import reactor.test.StepVerifier;
import software.amazon.awssdk.core.async.AsyncRequestBody;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;
import software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable;

import java.util.List;
import java.util.concurrent.CompletableFuture;

import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.ArgumentMatchers.*;
import static org.mockito.Mockito.*;

@ExtendWith(MockitoExtension.class)
class S3ServiceTest {

    @Mock
    private S3Client s3Client;

    @Mock
    private S3AsyncClient s3AsyncClient;

    @Mock
    private S3Properties s3Properties;

    @InjectMocks
    private S3Service s3Service;

    @Test
    void uploadObjectAsync_ShouldComplete() {
        // Arrange: the service delegates to the async client, so that is
        // the collaborator that must be stubbed
        String key = "test-key";
        byte[] data = "test-content".getBytes();

        when(s3Properties.getDefaultBucket()).thenReturn("test-bucket");
        when(s3AsyncClient.putObject(any(PutObjectRequest.class), any(AsyncRequestBody.class)))
                .thenReturn(CompletableFuture.completedFuture(
                        PutObjectResponse.builder().eTag("12345").build()));

        // Act & Assert: StepVerifier actually subscribes, unlike a bare subscribe()
        StepVerifier.create(s3Service.uploadObjectAsync(key, data))
                .verifyComplete();
    }

    @Test
    void listObjectsWithPrefix_ShouldReturnObjectList() {
        // Arrange: the service uses the paginator, so stub listObjectsV2Paginator
        String prefix = "documents/";
        S3Object object1 = S3Object.builder().key("documents/file1.txt").build();
        S3Object object2 = S3Object.builder().key("documents/file2.txt").build();

        ListObjectsV2Iterable iterable = mock(ListObjectsV2Iterable.class);
        when(iterable.contents()).thenReturn(() -> List.of(object1, object2).iterator());
        when(s3Properties.getDefaultBucket()).thenReturn("test-bucket");
        when(s3Client.listObjectsV2Paginator(any(ListObjectsV2Request.class)))
                .thenReturn(iterable);

        // Act & Assert
        StepVerifier.create(s3Service.listObjectsWithPrefix(prefix).collectList())
                .assertNext(objects -> {
                    assertEquals(2, objects.size());
                    assertTrue(objects.stream().allMatch(obj -> obj.key().startsWith(prefix)));
                })
                .verifyComplete();
    }
}
```
### Integration Testing

```java
import org.junit.jupiter.api.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
import reactor.test.StepVerifier;

import java.io.IOException;
import java.nio.file.*;
import java.util.Arrays;
import java.util.Map;

@SpringBootTest
@ActiveProfiles("test")
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class S3IntegrationTest {

    @Autowired
    private S3Service s3Service;

    private static final String TEST_FILE = "test-document.txt";

    @BeforeAll
    static void setup() throws Exception {
        // Create test file
        Files.write(Paths.get(TEST_FILE), "Test content".getBytes());
    }

    @Test
    @Order(1)
    void uploadFile_ShouldSucceed() {
        s3Service.uploadWithMetadata(TEST_FILE, Paths.get(TEST_FILE),
                        Map.of("author", "test", "type", "document"))
                .as(StepVerifier::create)
                .expectNextMatches(result ->
                        result.key().equals(TEST_FILE) && result.eTag() != null)
                .verifyComplete();
    }

    @Test
    @Order(2)
    void downloadFile_ShouldReturnContent() {
        // byte[] equality must be checked with Arrays.equals, not equals()
        s3Service.downloadObjectAsync(TEST_FILE)
                .as(StepVerifier::create)
                .expectNextMatches(bytes -> Arrays.equals(bytes, "Test content".getBytes()))
                .verifyComplete();
    }

    @Test
    @Order(3)
    void listObjects_ShouldReturnFiles() {
        s3Service.listObjectsWithPrefix("")
                .as(StepVerifier::create)
                .expectNextCount(1)
                .verifyComplete();
    }

    @AfterAll
    static void cleanup() {
        try {
            Files.deleteIfExists(Paths.get(TEST_FILE));
        } catch (IOException e) {
            // Ignore
        }
    }
}
```
## Advanced Configuration Patterns

### Environment-Specific Configuration

```java
import org.springframework.boot.autoconfigure.condition.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.auth.credentials.*;
import software.amazon.awssdk.regions.Region;

@Configuration
public class EnvironmentAwareS3Config {

    @Bean
    @ConditionalOnMissingBean
    public AwsCredentialsProvider awsCredentialsProvider(S3Properties properties) {
        // Prefer explicit keys when configured; otherwise fall back to the
        // default provider chain (env vars, profiles, instance roles, ...)
        if (properties.getAccessKey() != null && properties.getSecretKey() != null) {
            return StaticCredentialsProvider.create(
                    AwsBasicCredentials.create(
                            properties.getAccessKey(),
                            properties.getSecretKey()));
        }
        return DefaultCredentialsProvider.create();
    }

    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnProperty(name = "s3.region")
    public Region region(S3Properties properties) {
        return Region.of(properties.getRegion());
    }

    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnProperty(name = "s3.endpoint")
    public String endpoint(S3Properties properties) {
        return properties.getEndpoint();
    }
}
```
### Multi-Bucket Support

```java
import org.springframework.stereotype.Service;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Service
public class MultiBucketS3Service {

    // ConcurrentHashMap so computeIfAbsent is safe under concurrent access
    private final Map<String, S3Client> bucketClients = new ConcurrentHashMap<>();
    private final S3Client defaultS3Client;

    public MultiBucketS3Service(S3Client defaultS3Client) {
        this.defaultS3Client = defaultS3Client;
    }

    public S3Client getClientForBucket(String bucketName) {
        // Reuse the default client's configuration for per-bucket clients
        return bucketClients.computeIfAbsent(bucketName, name ->
                S3Client.builder()
                        .region(defaultS3Client.serviceClientConfiguration().region())
                        .credentialsProvider(
                                defaultS3Client.serviceClientConfiguration().credentialsProvider())
                        .build());
    }

    public Mono<UploadResult> uploadToBucket(String bucketName, String key, Path file) {
        S3Client client = getClientForBucket(bucketName);
        // Upload implementation using the specific client
        return Mono.empty(); // Implementation
    }
}
```
@@ -0,0 +1,473 @@
# S3 Transfer Patterns Reference

## S3 Transfer Manager Advanced Patterns

### Configuration and Optimization

#### Custom Transfer Manager Configuration

```java
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;

// The Transfer Manager wraps an S3AsyncClient; the CRT-based client
// provides the best multipart upload/download throughput.
public S3TransferManager createOptimizedTransferManager() {
    S3AsyncClient s3AsyncClient = S3AsyncClient.crtBuilder()
            .targetThroughputInGbps(20.0)
            .maxConcurrency(200)
            .build();

    return S3TransferManager.builder()
            .s3Client(s3AsyncClient)
            .build();
}
```

#### Parallel Upload Configuration

```java
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;

import java.nio.file.Paths;

public void configureParallelUploads() {
    S3TransferManager transferManager = S3TransferManager.create();

    FileUpload upload = transferManager.uploadFile(
        UploadFileRequest.builder()
            .putObjectRequest(req -> req
                .bucket("my-bucket")
                .key("large-file.bin"))
            .source(Paths.get("large-file.bin"))
            // Track upload progress through a TransferListener
            .addTransferListener(LoggingTransferListener.create())
            .build());

    // Handle completion
    upload.completionFuture().thenAccept(result ->
        System.out.println("Upload completed with ETag: " +
            result.response().eTag()));
}
```

### Advanced Upload Patterns

#### Multipart Upload with Progress Monitoring

```java
public void multipartUploadWithProgress(S3Client s3Client, String bucketName,
                                        String key, String filePath) {
    int partSize = 5 * 1024 * 1024; // 5 MB parts
    File file = new File(filePath);

    CreateMultipartUploadRequest createRequest = CreateMultipartUploadRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    CreateMultipartUploadResponse createResponse = s3Client.createMultipartUpload(createRequest);
    String uploadId = createResponse.uploadId();

    List<CompletedPart> completedParts = new ArrayList<>();
    long uploadedBytes = 0;
    long totalBytes = file.length();

    try (FileInputStream fis = new FileInputStream(file)) {
        byte[] buffer = new byte[partSize];
        int partNumber = 1;

        while (true) {
            int bytesRead = fis.read(buffer);
            if (bytesRead == -1) break;

            byte[] partData = Arrays.copyOf(buffer, bytesRead);

            UploadPartRequest uploadRequest = UploadPartRequest.builder()
                .bucket(bucketName)
                .key(key)
                .uploadId(uploadId)
                .partNumber(partNumber)
                .build();

            UploadPartResponse uploadResponse = s3Client.uploadPart(
                uploadRequest, RequestBody.fromBytes(partData));

            completedParts.add(CompletedPart.builder()
                .partNumber(partNumber)
                .eTag(uploadResponse.eTag())
                .build());

            uploadedBytes += bytesRead;
            partNumber++;

            // Log progress
            double progress = (double) uploadedBytes / totalBytes * 100;
            System.out.printf("Upload progress: %.2f%%%n", progress);
        }

        CompleteMultipartUploadRequest completeRequest =
            CompleteMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(key)
                .uploadId(uploadId)
                .multipartUpload(CompletedMultipartUpload.builder()
                    .parts(completedParts)
                    .build())
                .build();

        s3Client.completeMultipartUpload(completeRequest);

    } catch (Exception e) {
        // Abort on failure
        AbortMultipartUploadRequest abortRequest =
            AbortMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(key)
                .uploadId(uploadId)
                .build();

        s3Client.abortMultipartUpload(abortRequest);
        throw new RuntimeException("Multipart upload failed", e);
    }
}
```

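A standalone sanity check for the part arithmetic above (a hypothetical helper, not part of the AWS SDK): ceiling division gives the part count, and the final part carries the remainder.

```java
public class PartMath {

    // Number of parts needed for totalBytes at the given part size (ceiling division)
    static int partCount(long totalBytes, int partSize) {
        return (int) ((totalBytes + partSize - 1) / partSize);
    }

    // Size of the final part; equal to partSize only when the file divides evenly
    static long lastPartSize(long totalBytes, int partSize) {
        long remainder = totalBytes % partSize;
        return remainder == 0 ? partSize : remainder;
    }
}
```

S3 allows at most 10,000 parts per upload, so for very large files the part size must grow accordingly.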
#### Resume Interrupted Uploads

```java
public void resumeUpload(S3Client s3Client, String bucketName, String key,
                         String filePath, String existingUploadId) {
    ListMultipartUploadsRequest listRequest = ListMultipartUploadsRequest.builder()
        .bucket(bucketName)
        .prefix(key)
        .build();

    ListMultipartUploadsResponse listResponse = s3Client.listMultipartUploads(listRequest);

    // Check if upload already exists
    boolean uploadExists = listResponse.uploads().stream()
        .anyMatch(upload -> upload.key().equals(key) &&
            upload.uploadId().equals(existingUploadId));

    if (uploadExists) {
        // Resume existing upload
        continueExistingUpload(s3Client, bucketName, key, existingUploadId, filePath);
    } else {
        // Start new upload
        multipartUploadWithProgress(s3Client, bucketName, key, filePath);
    }
}

private void continueExistingUpload(S3Client s3Client, String bucketName,
                                    String key, String uploadId, String filePath) {
    // List already uploaded parts
    ListPartsRequest listPartsRequest = ListPartsRequest.builder()
        .bucket(bucketName)
        .key(key)
        .uploadId(uploadId)
        .build();

    ListPartsResponse listPartsResponse = s3Client.listParts(listPartsRequest);

    List<CompletedPart> completedParts = listPartsResponse.parts().stream()
        .map(part -> CompletedPart.builder()
            .partNumber(part.partNumber())
            .eTag(part.eTag())
            .build())
        .collect(Collectors.toList());

    // Upload remaining parts
    // ... implementation of remaining parts upload
}
```

### Advanced Download Patterns

#### Partial File Download

```java
public void downloadPartialFile(S3Client s3Client, String bucketName, String key,
                                String destPath, long startByte, long endByte) throws IOException {
    GetObjectRequest request = GetObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .range("bytes=" + startByte + "-" + endByte)
        .build();

    try (ResponseInputStream<GetObjectResponse> response = s3Client.getObject(request);
         OutputStream outputStream = new FileOutputStream(destPath)) {

        response.transferTo(outputStream);
        System.out.println("Partial download completed: " +
            (endByte - startByte + 1) + " bytes");
    }
}
```

#### Parallel Downloads

```java
import java.util.concurrent.*;
import java.util.stream.*;

public void parallelDownloads(S3Client s3Client, String bucketName,
                              String key, String destPath, int chunkCount) throws IOException {
    long fileSize = getFileSize(s3Client, bucketName, key);
    long chunkSize = fileSize / chunkCount;

    ExecutorService executor = Executors.newFixedThreadPool(chunkCount);
    List<Future<Void>> futures = new ArrayList<>();

    for (int i = 0; i < chunkCount; i++) {
        long start = i * chunkSize;
        long end = (i == chunkCount - 1) ? fileSize - 1 : start + chunkSize - 1;
        int chunkIndex = i; // effectively final copy for the lambda

        futures.add(executor.submit(() -> {
            downloadPartialFile(s3Client, bucketName, key,
                destPath + ".part" + chunkIndex, start, end);
            return null;
        }));
    }

    // Wait for all downloads to complete
    for (Future<Void> future : futures) {
        try {
            future.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException("Download failed", e);
        }
    }

    // Combine chunks
    combineChunks(destPath, chunkCount);

    executor.shutdown();
}

private long getFileSize(S3Client s3Client, String bucketName, String key) {
    return s3Client.headObject(b -> b.bucket(bucketName).key(key)).contentLength();
}

private void combineChunks(String baseName, int chunkCount) throws IOException {
    try (OutputStream outputStream = new FileOutputStream(baseName)) {
        for (int i = 0; i < chunkCount; i++) {
            String chunkFile = baseName + ".part" + i;
            try (InputStream inputStream = new FileInputStream(chunkFile)) {
                inputStream.transferTo(outputStream);
            }
            new File(chunkFile).delete();
        }
    }
}
```

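The byte-range arithmetic used for parallel downloads can be verified in isolation (an illustrative helper, not SDK code); the last chunk absorbs the remainder so every byte is requested exactly once.

```java
import java.util.ArrayList;
import java.util.List;

public class RangeMath {

    // Inclusive HTTP Range headers ("bytes=start-end") covering fileSize bytes in chunkCount chunks
    static List<String> ranges(long fileSize, int chunkCount) {
        long chunkSize = fileSize / chunkCount;
        List<String> out = new ArrayList<>();
        for (int i = 0; i < chunkCount; i++) {
            long start = i * chunkSize;
            long end = (i == chunkCount - 1) ? fileSize - 1 : start + chunkSize - 1;
            out.add("bytes=" + start + "-" + end);
        }
        return out;
    }
}
```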
### Error Handling and Retry

#### Upload with Exponential Backoff

```java
import software.amazon.awssdk.core.retry.RetryPolicy;
import software.amazon.awssdk.core.retry.backoff.FullJitterBackoffStrategy;
import software.amazon.awssdk.core.retry.conditions.RetryCondition;

public void resilientUpload(String bucketName, String key, String filePath) {
    // Configure a client whose retry policy uses exponential backoff with
    // jitter; the default retry condition already covers throttling and
    // 5xx server errors.
    S3Client retryS3Client = S3Client.builder()
        .overrideConfiguration(b -> b
            .retryPolicy(RetryPolicy.builder()
                .numRetries(5)
                .backoffStrategy(FullJitterBackoffStrategy.builder()
                    .baseDelay(Duration.ofSeconds(1))
                    .maxBackoffTime(Duration.ofSeconds(30))
                    .build())
                .retryCondition(RetryCondition.defaultRetryCondition())
                .build()))
        .build();

    PutObjectRequest request = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    retryS3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));
}
```

#### Upload with Checkpoint

```java
import java.nio.file.*;

public void uploadWithCheckpoint(S3Client s3Client, String bucketName,
                                 String key, String filePath) {
    String checkpointFile = filePath + ".checkpoint";
    Path checkpointPath = Paths.get(checkpointFile);

    long startPos = 0;
    if (Files.exists(checkpointPath)) {
        // Read checkpoint
        try {
            startPos = Long.parseLong(Files.readString(checkpointPath));
        } catch (IOException e) {
            throw new RuntimeException("Failed to read checkpoint", e);
        }
    }

    if (startPos > 0) {
        // Resume upload
        continueUploadFromCheckpoint(s3Client, bucketName, key, filePath, startPos);
    } else {
        // Start new upload
        startNewUpload(s3Client, bucketName, key, filePath);
    }

    // Update checkpoint
    long endPos = new File(filePath).length();
    try {
        Files.writeString(checkpointPath, String.valueOf(endPos));
    } catch (IOException e) {
        throw new RuntimeException("Failed to write checkpoint", e);
    }
}

private void continueUploadFromCheckpoint(S3Client s3Client, String bucketName,
                                          String key, String filePath, long startPos) {
    // Implement resume logic
}

private void startNewUpload(S3Client s3Client, String bucketName,
                            String key, String filePath) {
    // Implement initial upload logic
}
```

### Performance Tuning

#### Buffer Configuration

```java
public S3Client configureLargeBuffer() {
    return S3Client.builder()
        .overrideConfiguration(b -> b
            .apiCallAttemptTimeout(Duration.ofMinutes(5))
            .apiCallTimeout(Duration.ofMinutes(10)))
        .build();
}

// Multipart thresholds live on the CRT-based async client that backs
// the Transfer Manager, not on the Transfer Manager builder itself.
public S3TransferManager configureHighThroughput() {
    S3AsyncClient crtClient = S3AsyncClient.crtBuilder()
        .thresholdInBytes(8L * 1024 * 1024)        // switch to multipart above 8 MB
        .minimumPartSizeInBytes(10L * 1024 * 1024) // 10 MB parts
        .build();

    return S3TransferManager.builder()
        .s3Client(crtClient)
        .build();
}
```

#### Network Optimization

```java
public S3Client createOptimizedS3Client() {
    return S3Client.builder()
        .httpClientBuilder(ApacheHttpClient.builder()
            .maxConnections(200)
            .socketTimeout(Duration.ofSeconds(30))
            .connectionTimeout(Duration.ofSeconds(5))
            .connectionAcquisitionTimeout(Duration.ofSeconds(30)))
        .region(Region.US_EAST_1)
        .build();
}
```

### Monitoring and Metrics

#### Upload Progress Tracking

```java
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import software.amazon.awssdk.transfer.s3.progress.LoggingTransferListener;

// The v2 synchronous client exposes no per-request progress callback;
// progress is reported through the Transfer Manager's TransferListener.
public void uploadWithProgressTracking(S3TransferManager transferManager,
                                       String bucketName, String key, String filePath) {
    FileUpload upload = transferManager.uploadFile(
        UploadFileRequest.builder()
            .putObjectRequest(req -> req
                .bucket(bucketName)
                .key(key))
            .source(Paths.get(filePath))
            // Logs transferred bytes and percentage as the upload proceeds
            .addTransferListener(LoggingTransferListener.create())
            .build());

    CompletedFileUpload result = upload.completionFuture().join();
    System.out.println("Upload complete. ETag: " + result.response().eTag());
}
```

#### Throughput Measurement

```java
public void measureUploadThroughput(S3Client s3Client, String bucketName,
                                    String key, String filePath) {
    long startTime = System.currentTimeMillis();
    long fileSize = new File(filePath).length();

    PutObjectRequest request = PutObjectRequest.builder()
        .bucket(bucketName)
        .key(key)
        .build();

    s3Client.putObject(request, RequestBody.fromFile(Paths.get(filePath)));

    long endTime = System.currentTimeMillis();
    long duration = endTime - startTime;
    double throughput = (fileSize * 1000.0) / duration / (1024 * 1024); // MB/s

    System.out.printf("Upload throughput: %.2f MB/s%n", throughput);
}
```

## Testing and Validation

### Upload Validation

```java
public void validateUpload(S3Client s3Client, String bucketName, String key,
                           String localFilePath) throws IOException {
    // Download file from S3 (downloadObject is a helper, not shown, that
    // reads the object fully into a byte array)
    byte[] s3Content = downloadObject(s3Client, bucketName, key);

    // Read local file
    byte[] localContent = Files.readAllBytes(Paths.get(localFilePath));

    // Validate content matches; Arrays.equals also covers the size check
    if (!Arrays.equals(s3Content, localContent)) {
        throw new RuntimeException("Upload validation failed: content mismatch");
    }

    System.out.println("Upload validation successful");
}
```

342
skills/aws-java/aws-sdk-java-v2-secrets-manager/SKILL.md
Normal file
@@ -0,0 +1,342 @@
---
name: aws-sdk-java-v2-secrets-manager
description: AWS Secrets Manager patterns using AWS SDK for Java 2.x. Use when storing/retrieving secrets (passwords, API keys, tokens), rotating secrets automatically, managing database credentials, or integrating secret management into Spring Boot applications.
category: aws
tags: [aws, secrets-manager, java, sdk, security, credentials, spring-boot]
version: 1.1.0
allowed-tools: Read, Write, Glob, Bash
---

# AWS SDK for Java 2.x - AWS Secrets Manager

## When to Use

Use this skill when:
- Storing and retrieving application secrets programmatically
- Managing database credentials securely without hardcoding
- Implementing automatic secret rotation with Lambda functions
- Integrating AWS Secrets Manager with Spring Boot applications
- Setting up secret caching for improved performance
- Creating secure configuration management systems
- Working with multi-region secret deployments
- Implementing audit logging for secret access

## Dependencies

### Maven
```xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>secretsmanager</artifactId>
</dependency>

<!-- For secret caching (recommended for production) -->
<dependency>
    <groupId>com.amazonaws.secretsmanager</groupId>
    <artifactId>aws-secretsmanager-caching-java</artifactId>
    <!-- Use the SDK v2 compatible version -->
    <version>2.0.0</version>
</dependency>
```

### Gradle
```gradle
implementation 'software.amazon.awssdk:secretsmanager'
implementation 'com.amazonaws.secretsmanager:aws-secretsmanager-caching-java:2.0.0'
```

## Quick Start

### Basic Client Setup
```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;

SecretsManagerClient secretsClient = SecretsManagerClient.builder()
    .region(Region.US_EAST_1)
    .build();
```

### Store a Secret
```java
import software.amazon.awssdk.services.secretsmanager.model.*;

public String createSecret(String secretName, String secretValue) {
    CreateSecretRequest request = CreateSecretRequest.builder()
        .name(secretName)
        .secretString(secretValue)
        .build();

    CreateSecretResponse response = secretsClient.createSecret(request);
    return response.arn();
}
```

### Retrieve a Secret
```java
public String getSecretValue(String secretName) {
    GetSecretValueRequest request = GetSecretValueRequest.builder()
        .secretId(secretName)
        .build();

    GetSecretValueResponse response = secretsClient.getSecretValue(request);
    return response.secretString();
}
```

## Core Operations

### Secret Management
- Create secrets with `createSecret()`
- Retrieve secrets with `getSecretValue()`
- Update secrets with `updateSecret()`
- Delete secrets with `deleteSecret()`
- List secrets with `listSecrets()`
- Restore deleted secrets with `restoreSecret()`

### Secret Versioning
- Access specific versions by `versionId`
- Access versions by stage (e.g., "AWSCURRENT", "AWSPENDING")
- Automatically manage version history

### Secret Rotation
- Configure automatic rotation schedules
- Lambda-based rotation functions
- Immediate rotation with `rotateSecret()`

## Caching for Performance

### Setup Cache
```java
import com.amazonaws.secretsmanager.caching.SecretCache;

public class CachedSecrets {
    private final SecretCache cache;

    public CachedSecrets(SecretsManagerClient secretsClient) {
        this.cache = new SecretCache(secretsClient);
    }

    public String getCachedSecret(String secretName) {
        return cache.getSecretString(secretName);
    }
}
```

### Cache Configuration
```java
import com.amazonaws.secretsmanager.caching.SecretCacheConfiguration;

SecretCacheConfiguration config = SecretCacheConfiguration.builder()
    .maxCacheSize(1000)
    .cacheItemTTL(3600000) // 1 hour, in milliseconds
    .build();

// Pass the configuration when constructing the cache
SecretCache cache = new SecretCache(secretsClient, config);
```

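The `cacheItemTTL` value above is a raw millisecond count; deriving it from `java.time.Duration` documents the intent (a small illustrative helper):

```java
import java.time.Duration;

public class CacheTtl {

    // One hour expressed in the milliseconds expected by cacheItemTTL
    static long oneHourMillis() {
        return Duration.ofHours(1).toMillis();
    }
}
```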
## Spring Boot Integration

### Configuration
```java
@Configuration
public class SecretsManagerConfiguration {

    @Value("${aws.secrets.region}")
    private String region;

    @Bean
    public SecretsManagerClient secretsManagerClient() {
        return SecretsManagerClient.builder()
            .region(Region.of(region))
            .build();
    }

    @Bean
    public SecretCache secretCache(SecretsManagerClient secretsClient) {
        return new SecretCache(secretsClient);
    }
}
```

### Service Layer
```java
@Service
public class SecretsService {

    private final SecretCache cache;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public SecretsService(SecretCache cache) {
        this.cache = cache;
    }

    public <T> T getSecretAsObject(String secretName, Class<T> type) {
        String secretJson = cache.getSecretString(secretName);
        try {
            return objectMapper.readValue(secretJson, type);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Failed to parse secret " + secretName, e);
        }
    }
}
```

### Database Configuration
```java
@Configuration
public class DatabaseConfiguration {

    @Bean
    public DataSource dataSource(SecretsService secretsService) {
        // getSecretAsMap is a convenience wrapper around getSecretAsObject
        // that deserializes the secret JSON into a Map
        Map<String, String> credentials = secretsService.getSecretAsMap(
            "prod/database/credentials");

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(credentials.get("url"));
        config.setUsername(credentials.get("username"));
        config.setPassword(credentials.get("password"));

        return new HikariDataSource(config);
    }
}
```

## Examples

### Database Credentials Structure
```json
{
  "engine": "postgres",
  "host": "mydb.us-east-1.rds.amazonaws.com",
  "port": 5432,
  "username": "admin",
  "password": "MySecurePassword123!",
  "dbname": "mydatabase",
  "url": "jdbc:postgresql://mydb.us-east-1.rds.amazonaws.com:5432/mydatabase"
}
```

### API Keys Structure
```json
{
  "api_key": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
  "api_secret": "MySecretKey123!",
  "api_token": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}
```

## Common Patterns

### Error Handling
```java
try {
    String secret = secretsClient.getSecretValue(request).secretString();
} catch (SecretsManagerException e) {
    if (e.awsErrorDetails().errorCode().equals("ResourceNotFoundException")) {
        // Handle missing secret
    }
    throw e;
}
```

### Batch Operations
```java
List<String> secretNames = List.of("secret1", "secret2", "secret3");
Map<String, String> secrets = secretNames.stream()
    .collect(Collectors.toMap(
        Function.identity(),
        name -> cache.getSecretString(name)
    ));
```

## Best Practices

1. **Secret Management**:
   - Use descriptive secret names with hierarchical structure
   - Implement versioning and rotation
   - Add tags for organization and billing

2. **Caching**:
   - Always use caching in production environments
   - Configure appropriate TTL values based on secret sensitivity
   - Monitor cache hit rates

3. **Security**:
   - Never log secret values
   - Use KMS encryption for sensitive secrets
   - Implement least privilege IAM policies
   - Enable CloudTrail logging

4. **Performance**:
   - Reuse SecretsManagerClient instances
   - Use async operations when appropriate
   - Monitor API throttling limits

5. **Spring Boot Integration**:
   - Use `@Value` annotations for secret names
   - Implement proper exception handling
   - Use configuration properties for secret names

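The hierarchical naming recommended above is conventionally slash-delimited, e.g. `prod/database/credentials`; a tiny illustrative helper keeps such names consistent across an application:

```java
public class SecretNames {

    // environment/component/purpose, e.g. "prod/database/credentials"
    static String of(String environment, String component, String purpose) {
        return String.join("/", environment, component, purpose);
    }
}
```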
## Testing Strategies

### Unit Testing
```java
@ExtendWith(MockitoExtension.class)
class SecretsServiceTest {

    @Mock
    private SecretCache cache;

    @InjectMocks
    private SecretsService secretsService;

    @Test
    void shouldGetSecret() {
        when(cache.getSecretString("test-secret")).thenReturn("secret-value");

        String result = secretsService.getSecret("test-secret");

        assertEquals("secret-value", result);
    }
}
```

### Integration Testing
```java
@SpringBootTest(classes = TestSecretsConfiguration.class)
class SecretsManagerIntegrationTest {

    @Autowired
    private SecretsService secretsService;

    @Test
    void shouldRetrieveSecret() {
        String secret = secretsService.getSecret("test-secret");
        assertNotNull(secret);
    }
}
```

## Troubleshooting

### Common Issues
- **Access Denied**: Check IAM permissions
- **Resource Not Found**: Verify secret name and region
- **Decryption Failure**: Ensure KMS key permissions
- **Throttling**: Implement retry logic and backoff

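The throttling advice above usually means exponential backoff with jitter, which the SDK's retry policy already implements; the delay schedule itself is simple enough to sketch standalone (full jitter: a uniform draw below an exponentially growing, capped ceiling):

```java
import java.util.Random;

public class Backoff {

    // Full-jitter delay: uniform in [0, min(cap, base * 2^attempt))
    static long delayMillis(int attempt, long baseMillis, long capMillis, Random rng) {
        long ceiling = Math.min(capMillis, baseMillis * (1L << attempt));
        return (long) (rng.nextDouble() * ceiling);
    }
}
```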
### Debug Commands
```bash
# Check secret exists
aws secretsmanager describe-secret --secret-id my-secret

# List all secrets
aws secretsmanager list-secrets

# Get secret value (CLI)
aws secretsmanager get-secret-value --secret-id my-secret
```

## References

For detailed information and advanced patterns, see:

- [API Reference](./references/api-reference.md) - Complete API documentation
- [Caching Guide](./references/caching-guide.md) - Performance optimization strategies
- [Spring Boot Integration](./references/spring-boot-integration.md) - Complete Spring integration patterns

## Related Skills

- `aws-sdk-java-v2-core` - Core AWS SDK patterns and best practices
- `aws-sdk-java-v2-kms` - KMS encryption and key management
- `spring-boot-dependency-injection` - Spring dependency injection patterns
@@ -0,0 +1,38 @@
import com.amazonaws.secretsmanager.caching.SecretCache;
import com.amazonaws.secretsmanager.caching.SecretCacheConfiguration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class {{ConfigClass}} {

    @Value("${aws.secrets.region}")
    private String region;

    @Bean
    public SecretsManagerClient secretsManagerClient() {
        // Uses the default credentials provider chain; never embed static
        // access keys as string literals in configuration classes
        return SecretsManagerClient.builder()
            .region(Region.of(region))
            .build();
    }

    @Bean
    public SecretCache secretCache(SecretsManagerClient secretsClient) {
        SecretCacheConfiguration config = SecretCacheConfiguration.builder()
            .maxCacheSize(100)
            .cacheItemTTL(3600000) // 1 hour
            .build();

        return new SecretCache(secretsClient, config);
    }
}
@@ -0,0 +1,126 @@
# AWS Secrets Manager API Reference

## Overview
AWS Secrets Manager is a service for storing, managing, and retrieving secrets (API version 2017-10-17).

## Core Classes

### SecretsManagerClient
- **Purpose**: Synchronous client for AWS Secrets Manager
- **Location**: `software.amazon.awssdk.services.secretsmanager.SecretsManagerClient`
- **Builder**: `SecretsManagerClient.builder()`

### SecretsManagerAsyncClient
- **Purpose**: Asynchronous client for AWS Secrets Manager
- **Location**: `software.amazon.awssdk.services.secretsmanager.SecretsManagerAsyncClient`
- **Builder**: `SecretsManagerAsyncClient.builder()`

## Configuration Classes

### SecretsManagerClientBuilder
- Methods:
  - `region(Region region)` - Set AWS region
  - `credentialsProvider(AwsCredentialsProvider credentialsProvider)` - Set credentials
  - `build()` - Create client instance

### SecretsManagerServiceClientConfiguration
- Service client settings and configuration

## Request Types

### CreateSecretRequest
- **Fields**:
  - `name(String name)` - Secret name (required)
  - `secretString(String secretString)` - Secret value
  - `secretBinary(SdkBytes secretBinary)` - Binary secret value
  - `description(String description)` - Secret description
  - `kmsKeyId(String kmsKeyId)` - KMS key for encryption
  - `tags(List<Tag> tags)` - Tags for organization

### GetSecretValueRequest
- **Fields**:
  - `secretId(String secretId)` - Secret name or ARN
  - `versionId(String versionId)` - Specific version ID
  - `versionStage(String versionStage)` - Version stage (e.g., "AWSCURRENT")

### UpdateSecretRequest
- **Fields**:
  - `secretId(String secretId)` - Secret name or ARN
  - `secretString(String secretString)` - New secret value
  - `secretBinary(SdkBytes secretBinary)` - New binary secret value
  - `kmsKeyId(String kmsKeyId)` - KMS key for encryption

### DeleteSecretRequest
- **Fields**:
  - `secretId(String secretId)` - Secret name or ARN
  - `recoveryWindowInDays(Long recoveryWindowInDays)` - Recovery period
  - `forceDeleteWithoutRecovery(Boolean forceDeleteWithoutRecovery)` - Immediate deletion

### RotateSecretRequest
- **Fields**:
  - `secretId(String secretId)` - Secret name or ARN
  - `rotationLambdaARN(String rotationLambdaARN)` - Lambda ARN for rotation
  - `rotationRules(RotationRulesType rotationRules)` - Rotation configuration (schedule and window)

## Response Types

### CreateSecretResponse

- **Fields**:
  - `arn()` - Secret ARN
  - `name()` - Secret name
  - `versionId()` - Version ID

### GetSecretValueResponse

- **Fields**:
  - `arn()` - Secret ARN
  - `name()` - Secret name
  - `versionId()` - Version ID
  - `secretString()` - Secret value as string
  - `secretBinary()` - Secret value as binary
  - `versionStages()` - Version stages
### UpdateSecretResponse

- **Fields**:
  - `arn()` - Secret ARN
  - `name()` - Secret name
  - `versionId()` - New version ID

### DeleteSecretResponse

- **Fields**:
  - `arn()` - Secret ARN
  - `name()` - Secret name
  - `deletionDate()` - Deletion date/time

### RotateSecretResponse

- **Fields**:
  - `arn()` - Secret ARN
  - `name()` - Secret name
  - `versionId()` - New version ID
## Paginated Operations

### ListSecretsRequest

- **Fields**:
  - `maxResults(Integer maxResults)` - Maximum results per page
  - `nextToken(String nextToken)` - Token for next page
  - `filters(List<Filter> filters)` - Filter criteria

### ListSecretsResponse

- **Fields**:
  - `secretList()` - List of secrets
  - `nextToken()` - Token for next page
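With the SDK v2 sync client, `nextToken` handling can be delegated to the built-in paginator. A minimal sketch (the page size of 50 is arbitrary):

```java
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.ListSecretsRequest;

public class ListSecretsExample {

    // Iterates every page automatically; the paginator re-issues the request
    // with the returned nextToken until the listing is exhausted.
    public static void printSecretNames(SecretsManagerClient client) {
        ListSecretsRequest request = ListSecretsRequest.builder()
                .maxResults(50)
                .build();

        client.listSecretsPaginator(request).stream()
                .flatMap(page -> page.secretList().stream())
                .forEach(secret -> System.out.println(secret.name()));
    }
}
```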
## Error Handling

### SecretsManagerException

- Common error codes:
  - `ResourceNotFoundException` - Secret not found
  - `InvalidParameterException` - Invalid parameters
  - `MalformedPolicyDocumentException` - Invalid policy document
  - `InternalServiceErrorException` - Internal service error
  - `InvalidRequestException` - Invalid request
  - `DecryptionFailure` - Decryption failed
  - `ResourceExistsException` - Resource already exists
  - `ResourceConflictException` - Resource conflict
  - `ValidationException` - Validation failed
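In SDK v2 these error codes surface as typed exceptions, so the common cases can be caught individually. One defensive pattern, sketched below (returning null for a missing secret is a design choice of this example, not SDK behavior):

```java
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.ResourceNotFoundException;
import software.amazon.awssdk.services.secretsmanager.model.SecretsManagerException;

public class SecretErrorHandling {

    // Returns the secret value, or null when the secret does not exist.
    // Other service errors are rethrown for the caller to handle.
    public static String getSecretOrNull(SecretsManagerClient client, String secretId) {
        try {
            GetSecretValueRequest request = GetSecretValueRequest.builder()
                    .secretId(secretId)
                    .build();
            return client.getSecretValue(request).secretString();
        } catch (ResourceNotFoundException e) {
            return null;
        } catch (SecretsManagerException e) {
            // Never log the secret value; the error code alone is safe to surface
            throw new IllegalStateException(
                    "Secrets Manager call failed: " + e.awsErrorDetails().errorCode(), e);
        }
    }
}
```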
# AWS Secrets Manager Caching Guide

## Overview
The AWS Secrets Manager Java caching client enables in-process caching of secrets for Java applications, reducing API calls and improving performance.

## Prerequisites
- Java 8+ development environment
- AWS account with Secrets Manager access
- Appropriate IAM permissions

## Installation

### Maven Dependency
```xml
<dependency>
    <groupId>com.amazonaws.secretsmanager</groupId>
    <artifactId>aws-secretsmanager-caching-java</artifactId>
    <!-- Use the latest version compatible with AWS SDK for Java 2.x -->
    <version>2.0.0</version>
</dependency>
```

### Gradle Dependency
```gradle
implementation 'com.amazonaws.secretsmanager:aws-secretsmanager-caching-java:2.0.0'
```
## Basic Usage

### Simple Cache Setup
```java
import com.amazonaws.secretsmanager.caching.SecretCache;

public class SimpleCacheExample {

    private final SecretCache cache = new SecretCache();

    public String getSecret(String secretId) {
        return cache.getSecretString(secretId);
    }
}
```

### Cache with Custom SecretsManagerClient
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;

public class ClientAwareCacheExample {

    private final SecretCache cache;

    public ClientAwareCacheExample(SecretsManagerClient secretsClient) {
        this.cache = new SecretCache(secretsClient);
    }

    public String getSecret(String secretId) {
        return cache.getSecretString(secretId);
    }
}
```
## Cache Configuration

### SecretCacheConfiguration
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import com.amazonaws.secretsmanager.caching.SecretCacheConfiguration;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;

public class ConfiguredCacheExample {

    private final SecretCache cache;

    public ConfiguredCacheExample(SecretsManagerClient secretsClient) {
        SecretCacheConfiguration config = new SecretCacheConfiguration()
                .withMaxCacheSize(1000)     // Maximum number of cached secrets
                .withCacheItemTTL(3600000); // 1 hour TTL in milliseconds

        this.cache = new SecretCache(secretsClient, config);
    }
}
```

### Configuration Options

| Property | Type | Default | Description |
|----------|------|---------|-------------|
| `maxCacheSize` | Integer | 1000 | Maximum number of cached secrets |
| `cacheItemTTL` | Long | 300000 (5 min) | Cache item TTL in milliseconds |
| `cacheSizeEvictionPercentage` | Integer | 10 | Percentage of items to evict when cache is full |
## Advanced Caching Patterns

### Multi-Layer Cache
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import java.util.concurrent.ConcurrentHashMap;

public class MultiLayerCache {

    private static final class CachedEntry {
        final String value;
        final long cachedAt;

        CachedEntry(String value, long cachedAt) {
            this.value = value;
            this.cachedAt = cachedAt;
        }
    }

    private final SecretCache secretsManagerCache;
    private final ConcurrentHashMap<String, CachedEntry> localCache;
    private final long localCacheTtl = 30000; // 30 seconds

    public MultiLayerCache(SecretsManagerClient secretsClient) {
        this.secretsManagerCache = new SecretCache(secretsClient);
        this.localCache = new ConcurrentHashMap<>();
    }

    public String getSecret(String secretId) {
        // Check the local cache first, honoring its TTL
        CachedEntry cached = localCache.get(secretId);
        if (cached != null && System.currentTimeMillis() - cached.cachedAt < localCacheTtl) {
            return cached.value;
        }

        // Fall back to the Secrets Manager cache
        String secret = secretsManagerCache.getSecretString(secretId);
        if (secret != null) {
            localCache.put(secretId, new CachedEntry(secret, System.currentTimeMillis()));
        }

        return secret;
    }
}
```
### Cache Statistics
```java
import com.amazonaws.secretsmanager.caching.SecretCache;

public class CacheStatsExample {

    private final SecretCache cache;

    public CacheStatsExample(SecretCache cache) {
        this.cache = cache;
    }

    public void demonstrateCacheStats() {
        // Get cache statistics
        long hitCount = cache.getHitCount();
        long missCount = cache.getMissCount();
        double hitRatio = cache.getHitRatio();

        System.out.println("Cache Hit Ratio: " + hitRatio);
        System.out.println("Hits: " + hitCount + ", Misses: " + missCount);

        // Clear cache statistics
        cache.clearCacheStats();
    }
}
```
## Error Handling and Cache Management

### Cache Refresh Strategy
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CacheRefreshManager {

    private final SecretCache cache;
    private final ScheduledExecutorService scheduler;

    public CacheRefreshManager(SecretsManagerClient secretsClient) {
        this.cache = new SecretCache(secretsClient);
        this.scheduler = Executors.newScheduledThreadPool(1);
    }

    public void startRefreshSchedule() {
        // Refresh the cache every hour
        scheduler.scheduleAtFixedRate(this::refreshCache, 1, 1, TimeUnit.HOURS);
    }

    private void refreshCache() {
        System.out.println("Refreshing cache...");
        cache.refresh();
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```
### Fallback Mechanism
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

public class FallbackCacheExample {

    private final SecretCache cache;
    private final SecretsManagerClient fallbackClient;

    public FallbackCacheExample(SecretsManagerClient primaryClient, SecretsManagerClient fallbackClient) {
        this.cache = new SecretCache(primaryClient);
        this.fallbackClient = fallbackClient;
    }

    public String getSecretWithFallback(String secretId) {
        try {
            // Try the cached value first
            return cache.getSecretString(secretId);
        } catch (Exception e) {
            // Fall back to a direct API call
            return getSecretDirect(secretId);
        }
    }

    private String getSecretDirect(String secretId) {
        GetSecretValueRequest request = GetSecretValueRequest.builder()
                .secretId(secretId)
                .build();

        return fallbackClient.getSecretValue(request).secretString();
    }
}
```
## Performance Optimization

### Batch Secret Retrieval
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchSecretRetrieval {

    private final SecretCache cache;

    public BatchSecretRetrieval(SecretCache cache) {
        this.cache = cache;
    }

    public List<String> getMultipleSecrets(List<String> secretIds) {
        List<String> results = new ArrayList<>();

        for (String secretId : secretIds) {
            String secret = cache.getSecretString(secretId);
            results.add(secret != null ? secret : "NOT_FOUND");
        }

        return results;
    }

    public Map<String, String> getSecretsAsMap(List<String> secretIds) {
        Map<String, String> secretMap = new HashMap<>();

        for (String secretId : secretIds) {
            String secret = cache.getSecretString(secretId);
            if (secret != null) {
                secretMap.put(secretId, secret);
            }
        }

        return secretMap;
    }
}
```
## Monitoring and Debugging

### Cache Monitoring
```java
import com.amazonaws.secretsmanager.caching.SecretCache;

public class CacheMonitor {

    private final SecretCache cache;

    public CacheMonitor(SecretCache cache) {
        this.cache = cache;
    }

    public void monitorCachePerformance() {
        // Monitor the cache hit rate
        double hitRatio = cache.getHitRatio();
        System.out.println("Cache Hit Ratio: " + hitRatio);

        // Monitor the cache size
        long currentSize = cache.size();
        System.out.println("Current Cache Size: " + currentSize);

        // Monitor cache hits and misses
        long hits = cache.getHitCount();
        long misses = cache.getMissCount();
        System.out.println("Cache Hits: " + hits + ", Misses: " + misses);
    }

    public void printCacheContents() {
        // Note: SecretCache doesn't provide direct access to all cached items.
        // This is a security feature to prevent accidental exposure of secrets.
        System.out.println("Cache contents are protected and cannot be directly inspected");
    }
}
```
## Best Practices

1. **Cache Size Configuration**:
   - Adjust `maxCacheSize` based on available memory
   - Monitor memory usage and adjust accordingly
   - Consider using heap analysis tools

2. **TTL Configuration**:
   - Balance between performance and freshness
   - Use a shorter TTL for frequently changing secrets
   - Use a longer TTL for stable secrets

3. **Error Handling**:
   - Implement fallback mechanisms
   - Handle cache misses gracefully
   - Log errors without exposing sensitive information

4. **Security Considerations**:
   - Never log secret values
   - Use appropriate IAM permissions
   - Consider encryption at rest for cached data

5. **Memory Management**:
   - Monitor memory usage
   - Consider cache eviction strategies
   - Implement proper cleanup in shutdown hooks
# AWS Secrets Manager Spring Boot Integration

## Overview
Integrate AWS Secrets Manager with Spring Boot applications using the caching library for optimal performance and security.

## Dependencies

### Required Dependencies
```xml
<!-- AWS Secrets Manager -->
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>secretsmanager</artifactId>
</dependency>

<!-- AWS Secrets Manager Caching -->
<dependency>
    <groupId>com.amazonaws.secretsmanager</groupId>
    <artifactId>aws-secretsmanager-caching-java</artifactId>
    <!-- Use the latest version compatible with AWS SDK for Java 2.x -->
    <version>2.0.0</version>
</dependency>

<!-- Spring Boot Starter -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

<!-- Jackson for JSON processing -->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>

<!-- Connection Pooling -->
<dependency>
    <groupId>com.zaxxer</groupId>
    <artifactId>HikariCP</artifactId>
</dependency>
```
## Configuration Properties

### application.yml
```yaml
spring:
  application:
    name: aws-secrets-manager-app
  datasource:
    url: jdbc:postgresql://localhost:5432/mydb
    username: ${db.username}
    password: ${db.password}
    hikari:
      maximum-pool-size: 10
      minimum-idle: 5

aws:
  secrets:
    region: us-east-1
    # Secret names for different environments
    database-credentials: prod/database/credentials
    api-keys: prod/external-api/keys
    redis-config: prod/redis/config

app:
  external-api:
    secret-name: prod/external/credentials
    base-url: https://api.example.com
```
## Core Components

### SecretsManager Configuration
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import com.amazonaws.secretsmanager.caching.SecretCacheConfiguration;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;

@Configuration
public class SecretsManagerConfiguration {

    @Value("${aws.secrets.region}")
    private String region;

    @Bean
    public SecretsManagerClient secretsManagerClient() {
        return SecretsManagerClient.builder()
                .region(Region.of(region))
                .build();
    }

    @Bean
    public SecretCache secretCache(SecretsManagerClient secretsClient) {
        SecretCacheConfiguration config = new SecretCacheConfiguration()
                .withMaxCacheSize(100)
                .withCacheItemTTL(3600000); // 1 hour

        return new SecretCache(secretsClient, config);
    }
}
```
### Secrets Service
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.stereotype.Service;
import java.util.Map;

@Service
public class SecretsService {

    private final SecretCache secretCache;
    private final ObjectMapper objectMapper;

    public SecretsService(SecretCache secretCache, ObjectMapper objectMapper) {
        this.secretCache = secretCache;
        this.objectMapper = objectMapper;
    }

    /**
     * Get secret as string
     */
    public String getSecret(String secretName) {
        try {
            return secretCache.getSecretString(secretName);
        } catch (Exception e) {
            throw new RuntimeException("Failed to retrieve secret: " + secretName, e);
        }
    }

    /**
     * Get secret as object of specified type
     */
    public <T> T getSecretAsObject(String secretName, Class<T> type) {
        try {
            String secretJson = secretCache.getSecretString(secretName);
            return objectMapper.readValue(secretJson, type);
        } catch (Exception e) {
            throw new RuntimeException("Failed to parse secret: " + secretName, e);
        }
    }

    /**
     * Get secret as Map
     */
    public Map<String, String> getSecretAsMap(String secretName) {
        try {
            String secretJson = secretCache.getSecretString(secretName);
            return objectMapper.readValue(secretJson,
                    new TypeReference<Map<String, String>>() {});
        } catch (Exception e) {
            throw new RuntimeException("Failed to parse secret map: " + secretName, e);
        }
    }

    /**
     * Get secret with fallback
     */
    public String getSecretWithFallback(String secretName, String defaultValue) {
        try {
            String secret = secretCache.getSecretString(secretName);
            return secret != null ? secret : defaultValue;
        } catch (Exception e) {
            return defaultValue;
        }
    }
}
```
## Database Configuration Integration

### Dynamic DataSource Configuration
```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import javax.sql.DataSource;
import java.util.Map;

@Configuration
public class DatabaseConfiguration {

    private final SecretsService secretsService;

    @Value("${aws.secrets.database-credentials}")
    private String dbSecretName;

    public DatabaseConfiguration(SecretsService secretsService) {
        this.secretsService = secretsService;
    }

    @Bean
    public DataSource dataSource() {
        Map<String, String> credentials = secretsService.getSecretAsMap(dbSecretName);

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(credentials.get("url"));
        config.setUsername(credentials.get("username"));
        config.setPassword(credentials.get("password"));
        config.setMaximumPoolSize(10);
        config.setMinimumIdle(5);
        config.setConnectionTimeout(30000);
        config.setIdleTimeout(600000);
        config.setMaxLifetime(1800000);
        config.setLeakDetectionThreshold(15000);

        return new HikariDataSource(config);
    }
}
```
### Configuration Properties with Secrets
```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "app")
public class AppProperties {

    private final SecretsService secretsService;

    @Value("${app.external-api.secret-name}")
    private String apiSecretName;

    public AppProperties(SecretsService secretsService) {
        this.secretsService = secretsService;
    }

    public String getExternalApiSecretName() {
        return apiSecretName;
    }

    private String apiKey;

    public String getApiKey() {
        // Lazily resolve the API key from Secrets Manager on first access
        if (apiKey == null) {
            apiKey = secretsService.getSecret(apiSecretName);
        }
        return apiKey;
    }

    // Additional application properties
    private String externalApiBaseUrl;

    public String getExternalApiBaseUrl() {
        return externalApiBaseUrl;
    }

    public void setExternalApiBaseUrl(String externalApiBaseUrl) {
        this.externalApiBaseUrl = externalApiBaseUrl;
    }
}
```
## Property Source Integration

### Custom Property Source
```java
import jakarta.annotation.PostConstruct;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.EnumerablePropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.stereotype.Component;

@Component
public class SecretsManagerPropertySource extends PropertySource<SecretsService> {

    public static final String SECRETS_MANAGER_PROPERTY_SOURCE_NAME = "secretsManagerPropertySource";
    private static final String SECRET_PROPERTY_PREFIX = "aws.secrets.";

    private final SecretsService secretsService;
    private final ConfigurableEnvironment environment;

    public SecretsManagerPropertySource(SecretsService secretsService, ConfigurableEnvironment environment) {
        super(SECRETS_MANAGER_PROPERTY_SOURCE_NAME, secretsService);
        this.secretsService = secretsService;
        this.environment = environment;
    }

    @PostConstruct
    public void loadSecrets() {
        // Pre-load secrets whose names are configured under aws.secrets.* in
        // application.yml. Environment does not expose property names directly,
        // so walk the enumerable property sources (note: this is simplified).
        environment.getPropertySources().forEach(source -> {
            if (source instanceof EnumerablePropertySource<?> enumerable) {
                for (String propertyName : enumerable.getPropertyNames()) {
                    if (propertyName.startsWith(SECRET_PROPERTY_PREFIX)) {
                        String secretName = environment.getProperty(propertyName);
                        if (secretName != null) {
                            secretsService.getSecret(secretName); // warms the cache
                        }
                    }
                }
            }
        });
    }

    @Override
    public Object getProperty(String name) {
        // Resolve properties of the form aws.secrets.<secret-name>
        if (name.startsWith(SECRET_PROPERTY_PREFIX)) {
            String secretName = name.substring(SECRET_PROPERTY_PREFIX.length());
            return secretsService.getSecret(secretName);
        }
        return null;
    }
}
```
## API Integration

### REST Client with Secrets
```java
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
import java.util.Map;

@Service
public class ExternalApiClient {

    private final SecretsService secretsService;
    private final RestTemplate restTemplate;
    private final AppProperties appProperties;

    public ExternalApiClient(SecretsService secretsService,
                             RestTemplate restTemplate,
                             AppProperties appProperties) {
        this.secretsService = secretsService;
        this.restTemplate = restTemplate;
        this.appProperties = appProperties;
    }

    public String callExternalApi(String endpoint) {
        Map<String, String> apiCredentials = secretsService.getSecretAsMap(
                appProperties.getExternalApiSecretName());

        HttpHeaders headers = new HttpHeaders();
        headers.set("Authorization", "Bearer " + apiCredentials.get("api_token"));
        headers.set("X-API-Key", apiCredentials.get("api_key"));
        headers.set("Content-Type", "application/json");

        HttpEntity<String> entity = new HttpEntity<>(headers);

        ResponseEntity<String> response = restTemplate.exchange(
                endpoint,
                HttpMethod.GET,
                entity,
                String.class);

        return response.getBody();
    }
}
```
### Configuration for REST Template
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfiguration {

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
```
## Security Configuration

### Security Setup
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
@EnableWebSecurity
public class SecurityConfiguration {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/secrets/**").hasRole("ADMIN")
                .anyRequest().permitAll()
            )
            .httpBasic(Customizer.withDefaults())
            .csrf(csrf -> csrf.disable());

        return http.build();
    }
}
```
## Testing Configuration

### Test Configuration
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Primary;
import org.springframework.mock.env.MockEnvironment;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;

import static org.mockito.Mockito.*;

@TestConfiguration
public class TestSecretsConfiguration {

    @Bean
    @Primary
    public SecretsManagerClient secretsManagerClient() {
        SecretsManagerClient mockClient = mock(SecretsManagerClient.class);

        // Mock successful secret retrieval
        when(mockClient.getSecretValue(any(GetSecretValueRequest.class)))
                .thenReturn(GetSecretValueResponse.builder()
                        .secretString("{\"username\":\"test\",\"password\":\"testpass\"}")
                        .build());

        return mockClient;
    }

    @Bean
    @Primary
    public SecretCache secretCache(SecretsManagerClient mockClient) {
        SecretCache mockCache = mock(SecretCache.class);
        when(mockCache.getSecretString(anyString()))
                .thenReturn("{\"username\":\"test\",\"password\":\"testpass\"}");
        return mockCache;
    }

    @Bean
    public MockEnvironment mockEnvironment() {
        MockEnvironment env = new MockEnvironment();
        env.setProperty("aws.secrets.region", "us-east-1");
        env.setProperty("aws.secrets.database-credentials", "test-db-credentials");
        return env;
    }
}
```
### Unit Tests
```java
import com.amazonaws.secretsmanager.caching.SecretCache;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import java.util.Map;

import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.Mockito.*;

@ExtendWith(MockitoExtension.class)
class SecretsServiceTest {

    @Mock
    private SecretCache secretCache;

    private SecretsService secretsService;

    @BeforeEach
    void setUp() {
        // Use a real ObjectMapper so the JSON parsing paths are exercised
        secretsService = new SecretsService(secretCache, new ObjectMapper());
    }

    @Test
    void shouldGetSecret() {
        String secretName = "test-secret";
        String expectedValue = "secret-value";

        when(secretCache.getSecretString(secretName))
                .thenReturn(expectedValue);

        String result = secretsService.getSecret(secretName);

        assertEquals(expectedValue, result);
        verify(secretCache).getSecretString(secretName);
    }

    @Test
    void shouldGetSecretAsMap() {
        String secretName = "test-secret";
        String secretJson = "{\"key\":\"value\"}";
        Map<String, String> expectedMap = Map.of("key", "value");

        when(secretCache.getSecretString(secretName))
                .thenReturn(secretJson);

        Map<String, String> result = secretsService.getSecretAsMap(secretName);

        assertEquals(expectedMap, result);
    }
}
```
## Best Practices

1. **Environment-Specific Configuration**:
   - Use different secret names for development, staging, and production
   - Implement proper environment variable management
   - Use Spring profiles for environment-specific configurations

2. **Security Considerations**:
   - Never log secret values
   - Use appropriate IAM roles and policies
   - Enable encryption in transit and at rest
   - Implement proper access controls

3. **Performance Optimization**:
   - Use caching for frequently accessed secrets
   - Configure appropriate TTL values
   - Monitor cache hit rates and adjust accordingly
   - Use connection pooling for database connections

4. **Error Handling**:
   - Implement fallback mechanisms for critical secrets
   - Handle partial secret retrieval gracefully
   - Provide meaningful error messages without exposing sensitive information
   - Implement circuit breakers for external API calls

5. **Monitoring and Logging**:
   - Monitor secret retrieval performance
   - Track cache hit/miss ratios
   - Log secret access patterns (without values)
   - Set up alerts for abnormal secret access patterns