Initial commit

Zhongwei Li
2025-11-29 18:28:34 +08:00
commit 390afca02b
220 changed files with 86013 additions and 0 deletions

---
name: langchain4j-spring-boot-integration
description: Integration patterns for LangChain4j with Spring Boot. Auto-configuration, dependency injection, and Spring ecosystem integration. Use when embedding LangChain4j into Spring Boot applications.
category: ai-development
tags: [langchain4j, spring-boot, ai, llm, rag, chatbot, integration, configuration, java]
version: 1.1.0
allowed-tools: Read, Write, Bash, Grep
---
# LangChain4j Spring Boot Integration
This guide covers integrating LangChain4j with Spring Boot: auto-configuration, declarative AI Services, chat models, embedding stores, and production-ready patterns for building AI-powered applications.
## When to Use
Use this skill when:
- Integrating LangChain4j into existing Spring Boot applications
- Building AI-powered microservices with Spring Boot
- Setting up auto-configuration for AI models and services
- Creating declarative AI Services with Spring dependency injection
- Configuring multiple AI providers (OpenAI, Azure, Ollama, etc.)
- Implementing RAG systems with Spring Boot
- Setting up observability and monitoring for AI components
- Building production-ready AI applications with Spring Boot
## Overview
LangChain4j Spring Boot integration provides declarative AI Services through Spring Boot starters, enabling automatic configuration of AI components based on properties. The integration combines the power of Spring dependency injection with LangChain4j's AI capabilities, allowing developers to create AI-powered applications using interface-based definitions with annotations.
## Core Concepts
For a basic setup of LangChain4j with Spring Boot:
**Add Dependencies:**
```xml
<!-- Core LangChain4j -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-spring-boot-starter</artifactId>
<version>1.8.0</version> <!-- use the latest released version -->
</dependency>
<!-- OpenAI Integration -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
<version>1.8.0</version>
</dependency>
```
**Configure Properties:**
```properties
# application.properties
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o-mini
langchain4j.open-ai.chat-model.temperature=0.7
```
**Create Declarative AI Service:**
```java
@AiService
interface CustomerSupportAssistant {
@SystemMessage("You are a helpful customer support agent for TechCorp.")
String handleInquiry(String customerMessage);
}
```
## Configuration
Spring Boot configuration options for LangChain4j:
**Property-Based Configuration:** Configure AI models through application properties for different providers.
**Manual Bean Configuration:** For advanced configurations, define beans manually using @Configuration.
**Multiple Providers:** Support for multiple AI providers with explicit wiring when needed.
## Declarative AI Services
Interface-based AI service definitions:
**Basic AI Service:** Create interfaces with @AiService annotation and define methods with message templates.
**Streaming AI Service:** Implement streaming responses with `TokenStream` or Project Reactor's `Flux`.
**Explicit Wiring:** Specify which model to use with @AiService(wiringMode = EXPLICIT, chatModel = "modelBeanName").
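As a sketch of the streaming pattern (interface and method names here are illustrative, not part of the library): with a streaming chat model configured, an `@AiService` method may return `TokenStream`, or a `Flux<String>` when `langchain4j-reactor` is on the classpath.

```java
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.TokenStream;
import dev.langchain4j.service.spring.AiService;
import reactor.core.publisher.Flux;

@AiService
interface StreamingAssistant {

    // Token-by-token callback style
    @SystemMessage("You are a concise assistant.")
    TokenStream streamChat(String message);

    // Reactive style (requires the langchain4j-reactor module)
    @SystemMessage("You are a concise assistant.")
    Flux<String> streamChatReactive(String message);
}
```

A caller subscribes to the `TokenStream` via its callback methods and then calls `start()`, or consumes the `Flux` with any Reactor operator.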
## RAG Implementation
To implement a RAG system:
**Embedding Stores:** Configure various embedding stores (PostgreSQL/pgvector, Neo4j, Pinecone, etc.).
**Document Ingestion:** Implement document processing and embedding generation.
**Content Retrieval:** Set up content retrieval mechanisms for knowledge augmentation.
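A minimal sketch tying these pieces together (bean names are illustrative; the in-memory store stands in for pgvector, Pinecone, etc.): when a `ContentRetriever` bean is present, the starter wires it into `@AiService` instances so retrieved content augments the prompt automatically.

```java
@Configuration
class RagConfiguration {

    // In-memory store for illustration; swap for a persistent store in production
    @Bean
    EmbeddingStore<TextSegment> embeddingStore() {
        return new InMemoryEmbeddingStore<>();
    }

    @Bean
    ContentRetriever contentRetriever(EmbeddingStore<TextSegment> store,
                                      EmbeddingModel embeddingModel) {
        return EmbeddingStoreContentRetriever.builder()
                .embeddingStore(store)
                .embeddingModel(embeddingModel)
                .maxResults(3)
                .minScore(0.6)
                .build();
    }
}

// Retrieved segments are injected into this assistant's prompts automatically
@AiService
interface KnowledgeAssistant {
    String answer(String question);
}
```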
## Tool Integration
To integrate tools with AI services:
**Spring Component Tools:** Define tools as Spring components with @Tool annotations.
**Database Access Tools:** Create tools for database operations and business logic.
**Tool Registration:** Automatically register tools with AI services.
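To illustrate automatic registration (the weather tool is hypothetical): any `@Tool` method on a Spring bean is picked up by `@AiService` instances unless explicit wiring is used.

```java
@Component
class WeatherTools {

    @Tool("Returns the current temperature in Celsius for a city")
    double currentTemperature(@P("city name") String city) {
        // Placeholder: call a real weather API here
        return 21.0;
    }
}

// No explicit wiring needed; the tool bean above is discovered automatically
@AiService
interface WeatherAssistant {
    String chat(String message);
}
```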
## Examples
For implementation patterns, see the comprehensive examples in [references/examples.md](references/examples.md).
## Best Practices
For production-ready AI applications:
- **Use Property-Based Configuration:** External configuration over hardcoded values
- **Implement Proper Error Handling:** Graceful degradation and meaningful error responses
- **Use Profiles for Different Environments:** Separate configurations for development, testing, and production
- **Implement Proper Logging:** Debug AI service calls and monitor performance
- **Secure API Keys:** Use environment variables and never commit to version control
- **Handle Failures:** Implement retry mechanisms and fallback strategies
- **Monitor Performance:** Add metrics and health checks for observability
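As one way to implement the retry recommendation above (a self-contained sketch; production code would more likely use Spring Retry or Resilience4j):

```java
import java.util.function.Supplier;

// Retries a call with exponential backoff; intended to wrap model calls
// that may fail transiently (rate limits, timeouts).
public final class Retry {

    public static <T> T withBackoff(Supplier<T> call, int maxAttempts, long initialDelayMs) {
        long delay = initialDelayMs;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(delay);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw e;
                    }
                    delay *= 2; // double the wait between attempts
                }
            }
        }
        throw last;
    }
}
```

Usage: `String answer = Retry.withBackoff(() -> assistant.chat(question), 3, 500);`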
## References
For detailed API references, advanced configurations, and additional patterns, refer to:
- [API Reference](references/references.md) - Complete API reference and configurations
- [Examples](references/examples.md) - Comprehensive implementation examples
- [Configuration Guide](references/configuration.md) - Deep dive into configuration options

# LangChain4j Spring Boot Integration - Configuration Guide
Detailed configuration options and advanced setup patterns for LangChain4j with Spring Boot.
## Property-Based Configuration
### Core Configuration Properties
**application.yml**
```yaml
langchain4j:
  # OpenAI Configuration
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o-mini
      temperature: 0.7
      max-tokens: 1000
      log-requests: true
      log-responses: true
      timeout: PT60S
      max-retries: 3
      organization: ${OPENAI_ORGANIZATION:}
    embedding-model:
      api-key: ${OPENAI_API_KEY}
      model-name: text-embedding-3-small
      dimensions: 1536
      timeout: PT60S
    streaming-chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o-mini
      temperature: 0.7
      max-tokens: 2000
  # Azure OpenAI Configuration
  azure-open-ai:
    chat-model:
      endpoint: ${AZURE_OPENAI_ENDPOINT}
      api-key: ${AZURE_OPENAI_KEY}
      deployment-name: gpt-4o
      service-version: 2024-02-15-preview
      temperature: 0.7
      max-tokens: 1000
      log-requests-and-responses: true
    embedding-model:
      endpoint: ${AZURE_OPENAI_ENDPOINT}
      api-key: ${AZURE_OPENAI_KEY}
      deployment-name: text-embedding-3-small
      dimensions: 1536
  # Anthropic Configuration
  anthropic:
    chat-model:
      api-key: ${ANTHROPIC_API_KEY}
      model-name: claude-3-5-sonnet-20241022
      max-tokens: 4000
      temperature: 0.7
    streaming-chat-model:
      api-key: ${ANTHROPIC_API_KEY}
      model-name: claude-3-5-sonnet-20241022
  # Ollama Configuration
  ollama:
    chat-model:
      base-url: http://localhost:11434
      model-name: llama3.1
      temperature: 0.8
      timeout: PT60S
  # Memory Configuration
  memory:
    store-type: in-memory # in-memory, postgresql, mysql, mongodb
    max-messages: 20
    window-size: 10
  # Vector Store Configuration
  vector-store:
    type: in-memory # in-memory, pinecone, weaviate, qdrant, postgresql
    pinecone:
      api-key: ${PINECONE_API_KEY}
      index-name: my-index
      namespace: production
    qdrant:
      host: localhost
      port: 6333
      collection-name: documents
    weaviate:
      host: localhost
      port: 8080
      collection-name: Documents
    postgresql:
      table: document_embeddings
      dimension: 1536
```
### Spring Profiles Configuration
**application-dev.yml**
```yaml
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY_DEV}
      model-name: gpt-4o-mini
      temperature: 0.8 # Higher temperature for experimentation
      log-requests: true
      log-responses: true
  vector-store:
    type: in-memory
```
**application-prod.yml**
```yaml
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY_PROD}
      model-name: gpt-4o
      temperature: 0.3 # Lower temperature for consistency
      log-requests: false
      log-responses: false
  vector-store:
    type: pinecone
    pinecone:
      api-key: ${PINECONE_API_KEY_PROD}
      index-name: production-knowledge-base
```
## Manual Bean Configuration
### Advanced Chat Model Configuration
```java
@Configuration
@Profile("custom-openai")
@RequiredArgsConstructor
public class CustomOpenAiConfiguration {
private final Environment env; // needed for the conditional flags below
@Bean
@Primary
public ChatModel customOpenAiChatModel(
@Value("${custom.openai.api.key}") String apiKey,
@Value("${custom.openai.model}") String model,
@Value("${custom.openai.temperature}") Double temperature) {
var builder = OpenAiChatModel.builder()
.apiKey(apiKey)
.modelName(model)
.temperature(temperature);
if (Boolean.TRUE.equals(env.getProperty("custom.openai.log-requests", Boolean.class))) {
builder.logRequests(true);
}
if (Boolean.TRUE.equals(env.getProperty("custom.openai.log-responses", Boolean.class))) {
builder.logResponses(true);
}
return builder.build();
}
@Bean
@ConditionalOnProperty(name = "custom.openai.proxy.enabled", havingValue = "true")
public ChatModel proxiedChatModel(ChatModel delegate) {
return new ProxiedChatModel(delegate,
env.getProperty("custom.openai.proxy.url"),
env.getProperty("custom.openai.proxy.username"),
env.getProperty("custom.openai.proxy.password"));
}
}
class ProxiedChatModel implements ChatModel {
private final ChatModel delegate;
private final String proxyUrl;
private final String username;
private final String password;
public ProxiedChatModel(ChatModel delegate, String proxyUrl, String username, String password) {
this.delegate = delegate;
this.proxyUrl = proxyUrl;
this.username = username;
this.password = password;
}
@Override
public ChatResponse chat(ChatRequest request) {
// Route the request through the configured proxy (details omitted),
// then delegate to the wrapped model
return delegate.chat(request);
}
}
```
### Multiple Provider Configuration
```java
@Configuration
@RequiredArgsConstructor
public class MultiProviderConfiguration {
private final Environment env;
@Bean("openAiChatModel")
public ChatModel openAiChatModel(
@Value("${openai.api.key}") String apiKey,
@Value("${openai.model.name}") String modelName) {
return OpenAiChatModel.builder()
.apiKey(apiKey)
.modelName(modelName)
.temperature(0.7)
.logRequests(env.acceptsProfiles(Profiles.of("dev")))
.build();
}
@Bean("anthropicChatModel")
public ChatModel anthropicChatModel(
@Value("${anthropic.api.key}") String apiKey,
@Value("${anthropic.model.name}") String modelName) {
return AnthropicChatModel.builder()
.apiKey(apiKey)
.modelName(modelName)
.maxTokens(4000)
.build();
}
@Bean("ollamaChatModel")
@ConditionalOnProperty(name = "ollama.enabled", havingValue = "true")
public ChatModel ollamaChatModel(
@Value("${ollama.base-url}") String baseUrl,
@Value("${ollama.model.name}") String modelName) {
return OllamaChatModel.builder()
.baseUrl(baseUrl)
.modelName(modelName)
.temperature(0.8)
.build();
}
}
```
### Explicit Wiring Configuration
```java
@AiService(wiringMode = EXPLICIT, chatModel = "productionChatModel")
interface ProductionAssistant {
@SystemMessage("You are a production-grade AI assistant providing high-quality, reliable responses.")
String chat(String message);
}
@AiService(wiringMode = EXPLICIT, chatModel = "developmentChatModel")
interface DevelopmentAssistant {
@SystemMessage("You are a development assistant helping with code and debugging. " +
"Be experimental and creative in your responses.")
String chat(String message);
}
@AiService(wiringMode = EXPLICIT,
chatModel = "specializedChatModel",
tools = "businessTools")
interface SpecializedAssistant {
@SystemMessage("You are a specialized assistant with access to business tools. " +
"Use the available tools to provide accurate information.")
String chat(String message);
}
@Component("businessTools")
public class BusinessLogicTools {
@Tool("Calculate discount based on customer status")
public BigDecimal calculateDiscount(
@P("Purchase amount") BigDecimal amount,
@P("Customer status") String customerStatus) {
return switch (customerStatus.toLowerCase()) {
case "vip" -> amount.multiply(new BigDecimal("0.15"));
case "premium" -> amount.multiply(new BigDecimal("0.10"));
case "standard" -> amount.multiply(new BigDecimal("0.05"));
default -> BigDecimal.ZERO;
};
}
}
```
## Embedding Store Configuration
### PostgreSQL with pgvector
```java
@Configuration
@RequiredArgsConstructor
public class PostgresEmbeddingStoreConfiguration {
@Bean
public EmbeddingStore<TextSegment> postgresEmbeddingStore(
DataSource dataSource,
@Value("${spring.datasource.schema}") String schema) {
return PgVectorEmbeddingStore.builder()
.dataSource(dataSource)
.table("document_embeddings")
.dimension(1536)
.initializeSchema(true)
.schema(schema)
.indexName("document_embeddings_idx")
.build();
}
@Bean
public ContentRetriever postgresContentRetriever(
EmbeddingStore<TextSegment> embeddingStore,
EmbeddingModel embeddingModel) {
return EmbeddingStoreContentRetriever.builder()
.embeddingStore(embeddingStore)
.embeddingModel(embeddingModel)
.maxResults(5)
.minScore(0.7)
.build();
}
}
```
### Pinecone Configuration
```java
@Configuration
@Profile("pinecone")
public class PineconeConfiguration {
@Bean
public EmbeddingStore<TextSegment> pineconeEmbeddingStore(
@Value("${pinecone.api.key}") String apiKey,
@Value("${pinecone.index.name}") String indexName,
@Value("${pinecone.namespace}") String namespace) {
// The Pinecone index (with a matching dimension, e.g. 1536) must already
// exist; create it via the Pinecone console or admin API beforehand.
return PineconeEmbeddingStore.builder()
.apiKey(apiKey)
.indexName(indexName)
.namespace(namespace)
.build();
}
}
```
### Custom Embedding Store
```java
// Simplified in-memory sketch; the full EmbeddingStore interface has
// additional methods (removal, filtered search) omitted here.
@Component
public class CustomEmbeddingStore implements EmbeddingStore<TextSegment> {
private final Map<UUID, TextSegment> embeddings = new ConcurrentHashMap<>();
private final Map<UUID, float[]> vectors = new ConcurrentHashMap<>();
@Override
public void add(Embedding embedding, TextSegment textSegment) {
UUID id = UUID.randomUUID();
embeddings.put(id, textSegment);
vectors.put(id, embedding.vector());
}
@Override
public void addAll(List<Embedding> embeddings, List<TextSegment> textSegments) {
for (int i = 0; i < embeddings.size(); i++) {
add(embeddings.get(i), textSegments.get(i));
}
}
@Override
public List<EmbeddingMatch<TextSegment>> findRelevant(Embedding embedding, int maxResults) {
return vectors.entrySet().stream()
// highest similarity first
.sorted(Comparator.comparingDouble(
(Map.Entry<UUID, float[]> e) -> cosineSimilarity(e.getValue(), embedding.vector())).reversed())
.limit(maxResults)
.map(e -> new EmbeddingMatch<>(
cosineSimilarity(e.getValue(), embedding.vector()),
e.getKey().toString(),
Embedding.from(e.getValue()),
embeddings.get(e.getKey())))
.collect(Collectors.toList());
}
private double cosineSimilarity(float[] a, float[] b) {
double dot = 0, normA = 0, normB = 0;
for (int i = 0; i < a.length; i++) {
dot += a[i] * b[i];
normA += a[i] * a[i];
normB += b[i] * b[i];
}
return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
}
```
## Memory Configuration
### Chat Memory Store Configuration
```java
@Configuration
@RequiredArgsConstructor
public class MemoryConfiguration {
private final Environment env;
@Bean
@Profile("in-memory")
public ChatMemoryStore inMemoryChatMemoryStore() {
return new InMemoryChatMemoryStore();
}
@Bean
@Profile("database")
public ChatMemoryStore databaseChatMemoryStore(ChatMessageRepository messageRepository) {
return new DatabaseChatMemoryStore(messageRepository);
}
@Bean
public ChatMemoryProvider chatMemoryProvider(ChatMemoryStore memoryStore) {
return memoryId -> MessageWindowChatMemory.builder()
.id(memoryId)
.maxMessages(getMaxMessages())
.chatMemoryStore(memoryStore)
.build();
}
private int getMaxMessages() {
return env.getProperty("langchain4j.memory.max-messages", int.class, 20);
}
}
```
### Database Chat Memory Store
```java
@Component
@RequiredArgsConstructor
public class DatabaseChatMemoryStore implements ChatMemoryStore {
private final ChatMessageRepository repository;
@Override
public List<ChatMessage> getMessages(Object memoryId) {
return repository.findByMemoryIdOrderByCreatedAtAsc(memoryId.toString())
.stream()
.map(this::toMessage)
.collect(Collectors.toList());
}
@Override
public void updateMessages(Object memoryId, List<ChatMessage> messages) {
String id = memoryId.toString();
repository.deleteByMemoryId(id);
List<ChatMessageEntity> entities = messages.stream()
.map(msg -> toEntity(id, msg))
.collect(Collectors.toList());
repository.saveAll(entities);
}
private ChatMessage toMessage(ChatMessageEntity entity) {
return switch (entity.getMessageType()) {
case USER -> UserMessage.from(entity.getContent());
case AI -> AiMessage.from(entity.getContent());
case SYSTEM -> SystemMessage.from(entity.getContent());
};
}
private ChatMessageEntity toEntity(String memoryId, ChatMessage message) {
ChatMessageEntity entity = new ChatMessageEntity();
entity.setMemoryId(memoryId);
entity.setContent(message.text());
entity.setCreatedAt(LocalDateTime.now());
entity.setMessageType(determineMessageType(message));
return entity;
}
private MessageType determineMessageType(ChatMessage message) {
if (message instanceof UserMessage) return MessageType.USER;
if (message instanceof AiMessage) return MessageType.AI;
if (message instanceof SystemMessage) return MessageType.SYSTEM;
throw new IllegalArgumentException("Unknown message type: " + message.getClass());
}
}
```
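The repository and entity referenced above are not shown; a minimal JPA sketch matching how the store uses them (names are assumed, not prescribed) could look like:

```java
enum MessageType { USER, AI, SYSTEM }

@Entity
@Table(name = "chat_messages")
@Data
class ChatMessageEntity {
    @Id
    @GeneratedValue
    private Long id;
    private String memoryId;
    @Enumerated(EnumType.STRING)
    private MessageType messageType;
    @Column(columnDefinition = "text")
    private String content;
    private LocalDateTime createdAt;
}

interface ChatMessageRepository extends JpaRepository<ChatMessageEntity, Long> {
    List<ChatMessageEntity> findByMemoryIdOrderByCreatedAtAsc(String memoryId);

    @Modifying
    @Transactional
    void deleteByMemoryId(String memoryId);
}
```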
## Observability Configuration
### Monitoring and Metrics
```java
@Configuration
public class ObservabilityConfiguration {
@Bean
public ChatModelListener chatModelListener(MeterRegistry meterRegistry) {
return new MonitoringChatModelListener(meterRegistry);
}
@Bean
public HealthIndicator aiHealthIndicator(ChatModel chatModel, EmbeddingModel embeddingModel) {
return new AiHealthIndicator(chatModel, embeddingModel);
}
}
class MonitoringChatModelListener implements ChatModelListener {
private final MeterRegistry meterRegistry;
private final Counter requestCounter;
private final Timer responseTimer;
public MonitoringChatModelListener(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
this.requestCounter = Counter.builder("ai.requests.total")
.description("Total AI requests")
.register(meterRegistry);
this.responseTimer = Timer.builder("ai.response.duration")
.description("AI response time")
.register(meterRegistry);
}
@Override
public void onRequest(ChatModelRequestContext requestContext) {
requestCounter.increment();
// The listener API carries no timing information, so stash the start
// time in the per-call attributes map and read it back in onResponse
requestContext.attributes().put("startNanos", System.nanoTime());
}
@Override
public void onResponse(ChatModelResponseContext responseContext) {
Object start = responseContext.attributes().get("startNanos");
if (start instanceof Long startNanos) {
responseTimer.record(Duration.ofNanos(System.nanoTime() - startNanos));
}
recordTokenUsage(responseContext);
}
private void recordTokenUsage(ChatModelResponseContext responseContext) {
TokenUsage usage = responseContext.chatResponse().tokenUsage();
if (usage != null && usage.totalTokenCount() != null) {
// Per-request values belong in a distribution summary, not a gauge
meterRegistry.summary("ai.response.tokens").record(usage.totalTokenCount());
}
}
}
```
### Custom Health Check
```java
@RequiredArgsConstructor
public class AiHealthIndicator implements HealthIndicator {
public class AiHealthIndicator implements HealthIndicator {
private final ChatModel chatModel;
private final EmbeddingModel embeddingModel;
@Override
public Health health() {
try {
Health.Builder builder = Health.up();
// Test chat model
String chatResponse = chatModel.chat("ping");
if (chatResponse == null || chatResponse.trim().isEmpty()) {
return Health.down().withDetail("reason", "Empty chat response").build();
}
builder.withDetail("chat_model", "healthy");
// Test embedding model
Embedding embedding = embeddingModel.embed("ping").content();
if (embedding == null || embedding.vector().length == 0) {
return Health.down().withDetail("reason", "No embedding generated").build();
}
builder.withDetail("embedding_model", "healthy")
.withDetail("embedding_dimension", embedding.vector().length);
return builder.build();
} catch (Exception e) {
return Health.down()
.withDetail("error", e.getMessage())
.withDetail("exception_class", e.getClass().getSimpleName())
.build();
}
}
}
```
## Security Configuration
### API Key Security
```java
@Configuration
@EnableWebSecurity
public class SecurityConfig {
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
http
.csrf(AbstractHttpConfigurer::disable)
.authorizeHttpRequests(auth -> auth
.requestMatchers("/api/ai/**").hasRole("AI_USER")
.requestMatchers("/actuator/ai/**").hasRole("AI_ADMIN")
.anyRequest().permitAll())
.httpBasic(Customizer.withDefaults());
return http.build();
}
@Bean
public ApiKeyAuthenticationFilter apiKeyAuthenticationFilter() {
return new ApiKeyAuthenticationFilter("/api/ai/**");
}
}
class ApiKeyAuthenticationFilter extends OncePerRequestFilter {
private final String pathPrefix;
public ApiKeyAuthenticationFilter(String pathPrefix) {
this.pathPrefix = pathPrefix;
}
@Override
protected void doFilterInternal(HttpServletRequest request,
HttpServletResponse response,
FilterChain filterChain) throws ServletException, IOException {
if (request.getRequestURI().startsWith(pathPrefix)) {
String apiKey = request.getHeader("X-API-Key");
if (apiKey == null || !isValidApiKey(apiKey)) {
response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Invalid API key");
return;
}
}
filterChain.doFilter(request, response);
}
private boolean isValidApiKey(String apiKey) {
// Placeholder only: validate against a database or secrets service;
// never ship a filter that accepts every key
return true;
}
}
```
### Configuration Validation
```java
@Component
@RequiredArgsConstructor
@Slf4j
public class AiConfigurationValidator implements InitializingBean {
private final AiProperties properties;
@Override
public void afterPropertiesSet() {
validateConfiguration();
}
private void validateConfiguration() {
if (properties.getOpenai() != null) {
validateOpenAiConfiguration();
}
if (properties.getAzureOpenAi() != null) {
validateAzureConfiguration();
}
if (properties.getAnthropic() != null) {
validateAnthropicConfiguration();
}
log.info("AI configuration validation completed successfully");
}
private void validateOpenAiConfiguration() {
OpenAiProperties openAi = properties.getOpenai();
if (openAi.getChatModel() != null &&
(openAi.getChatModel().getApiKey() == null ||
openAi.getChatModel().getApiKey().isEmpty())) {
log.warn("OpenAI chat model API key is not configured");
}
if (openAi.getChatModel() != null &&
openAi.getChatModel().getMaxTokens() != null &&
openAi.getChatModel().getMaxTokens() > 8192) {
log.warn("OpenAI max tokens {} exceeds recommended limit of 8192",
openAi.getChatModel().getMaxTokens());
}
}
private void validateAzureConfiguration() {
AzureOpenAiProperties azure = properties.getAzureOpenAi();
if (azure.getChatModel() != null &&
(azure.getChatModel().getEndpoint() == null ||
azure.getChatModel().getApiKey() == null)) {
log.error("Azure OpenAI endpoint or API key is not configured");
}
}
private void validateAnthropicConfiguration() {
AnthropicProperties anthropic = properties.getAnthropic();
if (anthropic.getChatModel() != null &&
(anthropic.getChatModel().getApiKey() == null ||
anthropic.getChatModel().getApiKey().isEmpty())) {
log.warn("Anthropic chat model API key is not configured");
}
}
}
@Configuration
@ConfigurationProperties(prefix = "langchain4j")
@Validated
@Data
public class AiProperties {
private OpenAiProperties openai;
private AzureOpenAiProperties azureOpenAi;
private AnthropicProperties anthropic;
private MemoryProperties memory;
private VectorStoreProperties vectorStore;
// Validation annotations for properties
}
@Data
@Validated
public class OpenAiProperties {
@Valid
@NotNull
private ChatModelProperties chatModel;
private EmbeddingModelProperties embeddingModel;
private StreamingChatModelProperties streamingChatModel;
}
```
## Environment-Specific Configurations
### Development Configuration
```yaml
# application-dev.yml
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY_DEV}
      model-name: gpt-4o-mini
      temperature: 0.8
      log-requests: true
      log-responses: true
  memory:
    store-type: in-memory
    max-messages: 10
  vector-store:
    type: in-memory
logging:
  level:
    dev.langchain4j: DEBUG
```
### Production Configuration
```yaml
# application-prod.yml
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY_PROD}
      model-name: gpt-4o
      temperature: 0.3
      log-requests: false
      log-responses: false
      max-tokens: 4000
  memory:
    store-type: postgresql
    max-messages: 5
  vector-store:
    type: pinecone
    pinecone:
      index-name: production-knowledge-base
      namespace: prod
logging:
  level:
    dev.langchain4j: WARN
management:
  endpoints:
    web:
      exposure:
        include: health, metrics, info
  endpoint:
    health:
      show-details: when-authorized
```
This configuration guide provides comprehensive options for setting up LangChain4j with Spring Boot, covering various providers, storage backends, monitoring, and security considerations.

# LangChain4j Spring Boot Integration - Examples
Comprehensive implementation examples for Spring Boot integration with LangChain4j.
## 1. Basic Setup Example
### Complete Spring Boot Application
```java
@SpringBootApplication
public class Langchain4jApplication {
public static void main(String[] args) {
SpringApplication.run(Langchain4jApplication.class, args);
}
}
@Configuration
public class AiConfiguration {
@Bean
@Profile("openai")
public ChatModel openAiChatModel(@Value("${langchain4j.open-ai.chat-model.api-key}") String apiKey) {
return OpenAiChatModel.builder()
.apiKey(apiKey)
.modelName("gpt-4o-mini")
.temperature(0.7)
.maxTokens(1000)
.logRequests(true)
.logResponses(true)
.build();
}
@Bean
public EmbeddingModel openAiEmbeddingModel(@Value("${langchain4j.open-ai.embedding-model.api-key}") String apiKey) {
return OpenAiEmbeddingModel.builder()
.apiKey(apiKey)
.modelName("text-embedding-3-small")
.dimensions(1536)
.build();
}
}
@AiService
interface CustomerSupportAssistant {
@SystemMessage("You are a helpful customer support agent for TechCorp. " +
"Be polite, professional, and try to resolve customer issues efficiently. " +
"If you cannot resolve an issue, escalate to a human agent.")
String handleInquiry(String customerMessage);
@UserMessage("Analyze this customer feedback and extract sentiment: {{feedback}}")
@SystemMessage("Return only: POSITIVE, NEGATIVE, or NEUTRAL")
String analyzeSentiment(@V("feedback") String feedback);
@UserMessage("Extract key entities from this text: {{text}}")
@SystemMessage("Return a JSON object with entities as keys and their types as values")
String extractEntities(@V("text") String text);
}
@RestController
@RequestMapping("/api/support")
@RequiredArgsConstructor
public class CustomerSupportController {
private final CustomerSupportAssistant assistant;
@PostMapping("/inquiry")
public ResponseEntity<SupportResponse> handleInquiry(@RequestBody @Valid SupportRequest request) {
String response = assistant.handleInquiry(request.message());
return ResponseEntity.ok(new SupportResponse(response, Instant.now()));
}
@PostMapping("/sentiment")
public ResponseEntity<SentimentResponse> analyzeSentiment(@RequestBody @Valid SentimentRequest request) {
String sentiment = assistant.analyzeSentiment(request.feedback());
return ResponseEntity.ok(new SentimentResponse(sentiment, Instant.now()));
}
@PostMapping("/entities")
public ResponseEntity<EntitiesResponse> extractEntities(@RequestBody @Valid EntitiesRequest request) {
String entities = assistant.extractEntities(request.text());
return ResponseEntity.ok(new EntitiesResponse(entities, Instant.now()));
}
}
// DTO Classes
record SupportRequest(String message) {}
record SupportResponse(String response, Instant timestamp) {}
record SentimentRequest(String feedback) {}
record SentimentResponse(String sentiment, Instant timestamp) {}
record EntitiesRequest(String text) {}
record EntitiesResponse(String entities, Instant timestamp) {}
```
## 2. Custom AI Service Bean Configuration
**Scenario**: Configure AI services as Spring beans.
```java
@Configuration
public class AiConfig {
@Bean
public ChatModel chatModel() {
return OpenAiChatModel.builder()
.apiKey(System.getenv("OPENAI_API_KEY"))
.modelName("gpt-4o-mini")
.temperature(0.7)
.build();
}
@Bean
public EmbeddingModel embeddingModel() {
return OpenAiEmbeddingModel.builder()
.apiKey(System.getenv("OPENAI_API_KEY"))
.modelName("text-embedding-3-small")
.build();
}
@Bean
public DocumentAssistant documentAssistant(ChatModel chatModel) {
return AiServices.builder(DocumentAssistant.class)
.chatModel(chatModel)
.chatMemory(MessageWindowChatMemory.withMaxMessages(10))
.build();
}
}
interface DocumentAssistant {
String chat(String message);
}
```
## 3. REST API with AI Service
**Scenario**: Expose AI functionality via REST endpoints.
```java
@RestController
@RequestMapping("/api/chat")
public class ChatController {
private final ChatAssistant assistant;
private final StreamingChatAssistant streamingAssistant;
@Autowired
public ChatController(ChatAssistant assistant, StreamingChatAssistant streamingAssistant) {
this.assistant = assistant;
this.streamingAssistant = streamingAssistant;
}
@PostMapping
public ResponseEntity<ChatResponse> chat(@RequestBody ChatRequest request) {
try {
String response = assistant.chat(request.getMessage());
return ResponseEntity.ok(new ChatResponse(response));
} catch (Exception e) {
return ResponseEntity.internalServerError()
.body(new ChatResponse("Error: " + e.getMessage()));
}
}
@PostMapping(value = "/stream", produces = MediaType.TEXT_PLAIN_VALUE)
public ResponseEntity<StreamingResponseBody> streamChat(@RequestBody ChatRequest request) {
return ResponseEntity.ok(outputStream -> {
// streamingAssistant is an @AiService whose method returns TokenStream
// (requires a streaming chat model to be configured)
TokenStream stream = streamingAssistant.streamChat(request.getMessage());
stream.onPartialResponse(token -> {
try {
outputStream.write(token.getBytes(StandardCharsets.UTF_8));
outputStream.flush();
} catch (IOException e) {
throw new UncheckedIOException(e);
}
})
.onCompleteResponse(response -> { })
.onError(Throwable::printStackTrace)
.start();
});
}
}
interface StreamingChatAssistant {
TokenStream streamChat(String message);
}
@Data
class ChatRequest {
private String message;
}
@Data
@AllArgsConstructor
class ChatResponse {
private String response;
}
```
## 4. Service with RAG Integration
**Scenario**: Service layer with document search and retrieval.
```java
@Service
public class KnowledgeBaseService {
private final DocumentAssistant assistant;
private final EmbeddingStore<TextSegment> embeddingStore;
private final EmbeddingModel embeddingModel;
@Autowired
public KnowledgeBaseService(
DocumentAssistant assistant,
EmbeddingStore<TextSegment> embeddingStore,
EmbeddingModel embeddingModel) {
this.assistant = assistant;
this.embeddingStore = embeddingStore;
this.embeddingModel = embeddingModel;
}
public void ingestDocument(String content, Map<String, Object> metadata) {
var document = Document.from(content, Metadata.from(metadata));
var ingestor = EmbeddingStoreIngestor.builder()
.embeddingModel(embeddingModel)
.embeddingStore(embeddingStore)
.documentSplitter(DocumentSplitters.recursive(500, 50))
.build();
ingestor.ingest(document);
}
public String answerQuestion(String question) {
return assistant.answerAbout(question);
}
}
interface DocumentAssistant {
String answerAbout(String question);
}
```
## 5. Scheduled Task for Document Updates
**Scenario**: Periodically update knowledge base.
```java
@Service
@Slf4j
public class DocumentUpdateService {
private final EmbeddingStore<TextSegment> embeddingStore;
private final EmbeddingModel embeddingModel;
@Autowired
public DocumentUpdateService(
EmbeddingStore<TextSegment> embeddingStore,
EmbeddingModel embeddingModel) {
this.embeddingStore = embeddingStore;
this.embeddingModel = embeddingModel;
}
@Scheduled(fixedRate = 86400000) // Daily
public void updateDocuments() {
var documents = fetchLatestDocuments();
var ingestor = EmbeddingStoreIngestor.builder()
.embeddingModel(embeddingModel)
.embeddingStore(embeddingStore)
.build();
documents.forEach(ingestor::ingest);
log.info("Documents updated successfully");
}
private List<Document> fetchLatestDocuments() {
// Fetch from database or external API
return Collections.emptyList();
}
}
```
## 6. Controller with Tool Integration
**Scenario**: AI service with business logic tools.
```java
@Service
public class BusinessLogicService {
@Tool("Get user by ID")
public User getUser(@P("user ID") String userId) {
// Implementation
return new User(userId);
}
@Tool("Calculate discount")
public double calculateDiscount(@P("purchase amount") double amount) {
if (amount > 1000) return 0.15;
if (amount > 500) return 0.10;
return 0.05;
}
}
@Service
public class ToolAssistant {
private final ChatModel chatModel;
private final BusinessLogicService businessLogic;
@Autowired
public ToolAssistant(ChatModel chatModel, BusinessLogicService businessLogic) {
this.chatModel = chatModel;
this.businessLogic = businessLogic;
}
private Assistant assistant;
@PostConstruct
void init() {
// Build the AI service once; constructing it on every request is wasteful
this.assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.tools(businessLogic)
.build();
}
public String processRequest(String request) {
return assistant.chat(request);
}
}
interface Assistant {
String chat(String message);
}
```
## 7. Error Handling with Spring Exception Handler
**Scenario**: Centralized error handling for AI services.
```java
@ControllerAdvice
public class AiExceptionHandler {
    private static final Logger logger = LoggerFactory.getLogger(AiExceptionHandler.class);
@ExceptionHandler(IllegalArgumentException.class)
public ResponseEntity<ErrorResponse> handleBadRequest(IllegalArgumentException e) {
return ResponseEntity.badRequest()
.body(new ErrorResponse("Invalid input: " + e.getMessage()));
}
@ExceptionHandler(Exception.class)
public ResponseEntity<ErrorResponse> handleError(Exception e) {
logger.error("Error in AI service", e);
return ResponseEntity.internalServerError()
.body(new ErrorResponse("An error occurred: " + e.getMessage()));
}
}
@Data
@AllArgsConstructor
class ErrorResponse {
    private String message;
}
```
## 8. Configuration Properties
**Scenario**: Externalize AI configuration.
```java
@Configuration
@ConfigurationProperties(prefix = "app.ai")
@Data
public class AiProperties {
private String openaiApiKey;
private String openaiModel = "gpt-4o-mini";
private double temperature = 0.7;
private int maxTokens = 2000;
private String embeddingModel = "text-embedding-3-small";
private int memorySize = 10;
private String vectorStoreType = "in-memory";
}
```

```yaml
# application.yml
app:
  ai:
    openai-api-key: ${OPENAI_API_KEY}
    openai-model: gpt-4o-mini
    temperature: 0.7
    max-tokens: 2000
    embedding-model: text-embedding-3-small
    memory-size: 10
    vector-store-type: pinecone
```
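To consume these properties, a model bean can be wired from them (a sketch; `AiModelConfig` is an illustrative name, and the `OpenAiChatModel` builder comes from the OpenAI starter):

```java
@Configuration
@EnableConfigurationProperties(AiProperties.class)
public class AiModelConfig {

    // Builds the chat model from the externalized settings above
    @Bean
    public ChatModel chatModel(AiProperties props) {
        return OpenAiChatModel.builder()
                .apiKey(props.getOpenaiApiKey())
                .modelName(props.getOpenaiModel())
                .temperature(props.getTemperature())
                .maxTokens(props.getMaxTokens())
                .build();
    }
}
```

With this in place, changing the model or temperature requires only an `application.yml` edit, not a code change.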
## 9. Integration Testing
**Scenario**: Test AI services with Spring Boot Test.
```java
@SpringBootTest
class ChatServiceTest {
@MockBean
private ChatModel chatModel;
@Autowired
private ChatService chatService;
@Test
void testChat() {
when(chatModel.chat("Hello"))
.thenReturn("Hi there!");
String response = chatService.chat("Hello");
assertEquals("Hi there!", response);
}
}
```
## 10. Async Processing with CompletableFuture
**Scenario**: Non-blocking AI service calls.
```java
@Service
// Note: @EnableAsync must be declared on a @Configuration class for @Async to work
public class AsyncChatService {
private final ChatModel chatModel;
@Autowired
public AsyncChatService(ChatModel chatModel) {
this.chatModel = chatModel;
}
@Async
public CompletableFuture<String> chatAsync(String message) {
try {
String response = chatModel.chat(message);
return CompletableFuture.completedFuture(response);
} catch (Exception e) {
return CompletableFuture.failedFuture(e);
}
}
}
// Usage in controller
@RestController
public class AsyncController {
@Autowired
private AsyncChatService asyncChatService;
@PostMapping("/chat/async")
public CompletableFuture<ResponseEntity<String>> chatAsync(@RequestBody ChatRequest request) {
return asyncChatService.chatAsync(request.getMessage())
.thenApply(ResponseEntity::ok)
.exceptionally(e -> ResponseEntity.internalServerError().build());
}
}
```
## Configuration Examples
### Maven Dependency
```xml
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-spring-boot-starter</artifactId>
<version>0.27.0</version>
</dependency>
```
### Gradle
```gradle
implementation 'dev.langchain4j:langchain4j-spring-boot-starter:0.27.0'
```

# LangChain4j Spring Boot Integration - API References
Complete API reference for Spring Boot integration with LangChain4j.
## Spring Boot Starter Dependencies
### Maven
```xml
<!-- Core Spring Boot LangChain4j integration -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-spring-boot-starter</artifactId>
<version>0.27.0</version>
</dependency>
<!-- OpenAI integration -->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
<version>0.27.0</version>
</dependency>
```
### Gradle
```gradle
implementation 'dev.langchain4j:langchain4j-spring-boot-starter:0.27.0'
implementation 'dev.langchain4j:langchain4j-open-ai-spring-boot-starter:0.27.0'
```
## Auto-Configuration Properties
### OpenAI Configuration
```yaml
langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      model-name: gpt-4o-mini
      temperature: 0.7
      top-p: 1.0
      max-tokens: 2000
      timeout: 60s
      log-requests: true
      log-responses: true
    embedding-model:
      api-key: ${OPENAI_API_KEY}
      model-name: text-embedding-3-small
      timeout: 60s
### Vector Store Configuration
```yaml
langchain4j:
vector-store:
type: in-memory # or pinecone, weaviate, qdrant, etc.
# Pinecone
pinecone:
api-key: ${PINECONE_API_KEY}
index-name: my-index
namespace: production
# Qdrant
qdrant:
host: localhost
port: 6333
collection-name: documents
# Weaviate
weaviate:
host: localhost
port: 8080
collection-name: Documents
```
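When no external store is configured, a fallback bean can be declared in code (a sketch; `EmbeddingStoreConfig` is an illustrative name, while `InMemoryEmbeddingStore` ships with the core library):

```java
@Configuration
public class EmbeddingStoreConfig {

    // Used only when no other EmbeddingStore bean is present,
    // matching the "in-memory" vector-store type above
    @Bean
    @ConditionalOnMissingBean
    public EmbeddingStore<TextSegment> embeddingStore() {
        return new InMemoryEmbeddingStore<>();
    }
}
```

This keeps local development dependency-free while production profiles swap in Pinecone, Qdrant, or Weaviate via their starters.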
## Spring Configuration Annotations
### @Configuration
```java
@Configuration
public class AiConfig {
@Bean
public ChatModel chatModel() {
// Bean definition
}
@Bean
@ConditionalOnMissingBean
public EmbeddingModel embeddingModel() {
// Fallback bean
}
}
```
### @ConditionalOnProperty
```java
@Configuration
@ConditionalOnProperty(
prefix = "app.ai",
name = "enabled",
havingValue = "true"
)
public class AiFeatureConfig {
// Configuration only if enabled
}
```
### @EnableConfigurationProperties
```java
@Configuration
@EnableConfigurationProperties(AiProperties.class)
public class AiConfig {
@Autowired
private AiProperties aiProperties;
}
```
## Dependency Injection
### Constructor Injection (Recommended)
```java
@Service
public class ChatService {
private final ChatModel chatModel;
private final EmbeddingModel embeddingModel;
public ChatService(ChatModel chatModel, EmbeddingModel embeddingModel) {
this.chatModel = chatModel;
this.embeddingModel = embeddingModel;
}
}
```
### Field Injection (Discouraged)
```java
@Service
public class ChatService {
@Autowired
private ChatModel chatModel; // Not recommended
}
```
### Setter Injection
```java
@Service
public class ChatService {
private ChatModel chatModel;
@Autowired
public void setChatModel(ChatModel chatModel) {
this.chatModel = chatModel;
}
}
```
## REST Annotations
### @RestController with RequestMapping
```java
@RestController
@RequestMapping("/api/chat")
public class ChatController {
@PostMapping
public ResponseEntity<Response> chat(@RequestBody ChatRequest request) {
// Implementation
}
@GetMapping("/{id}")
public ResponseEntity<Response> getChat(@PathVariable String id) {
// Implementation
}
}
```
### RequestBody Validation
```java
@PostMapping
public ResponseEntity<Response> chat(@Valid @RequestBody ChatRequest request) {
// Validates request object
}
public class ChatRequest {
@NotBlank(message = "Message cannot be blank")
private String message;
    @Min(1)
    @Max(4000)
    private int maxTokens = 2000;
}
```
## Exception Handling
### @ControllerAdvice
```java
@ControllerAdvice
public class GlobalExceptionHandler {
@ExceptionHandler(IllegalArgumentException.class)
public ResponseEntity<ErrorResponse> handleBadRequest(IllegalArgumentException e) {
return ResponseEntity.badRequest()
.body(new ErrorResponse(400, e.getMessage()));
}
@ExceptionHandler(Exception.class)
public ResponseEntity<ErrorResponse> handleGlobalException(Exception e) {
return ResponseEntity.internalServerError()
.body(new ErrorResponse(500, "Internal server error"));
}
}
```
### ResponseStatusException
```java
if (!authorized) {
throw new ResponseStatusException(
HttpStatus.FORBIDDEN,
"User not authorized"
);
}
```
## Async and Reactive
### @Async
```java
@Service
// Note: @EnableAsync must be declared on a @Configuration class
public class AsyncService {
@Async
public CompletableFuture<String> processAsync(String input) {
String result = processSync(input);
return CompletableFuture.completedFuture(result);
}
}
```
### @Scheduled
```java
@Component
public class ScheduledTasks {
@Scheduled(fixedRate = 60000) // Every minute
public void performTask() {
// Task implementation
}
    @Scheduled(cron = "0 0 0 * * *") // Daily at midnight
public void dailyTask() {
// Daily task
}
}
```
## Testing
### @SpringBootTest
```java
@SpringBootTest
class ChatServiceTest {
@Autowired
private ChatService chatService;
@Test
void testChat() {
// Test implementation
}
}
```
### @WebMvcTest
```java
@WebMvcTest(ChatController.class)
class ChatControllerTest {
@Autowired
private MockMvc mockMvc;
@MockBean
private ChatService chatService;
@Test
void testChatEndpoint() throws Exception {
mockMvc.perform(post("/api/chat")
.contentType(MediaType.APPLICATION_JSON)
.content("{\"message\": \"Hello\"}"))
.andExpect(status().isOk());
}
}
```
### @DataJpaTest
```java
@DataJpaTest
class DocumentRepositoryTest {
@Autowired
private DocumentRepository repository;
@Test
void testFindByUserId() {
// Test implementation
}
}
```
## Logging Configuration
### application.yml
```yaml
logging:
level:
root: INFO
dev.langchain4j: DEBUG
org.springframework: WARN
pattern:
console: "%d{yyyy-MM-dd HH:mm:ss} - %msg%n"
file: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
file:
name: logs/app.log
```
## Health Checks
### Custom Health Indicator
```java
@Component
public class AiHealthIndicator extends AbstractHealthIndicator {
    private final ChatModel chatModel;

    public AiHealthIndicator(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @Override
    protected void doHealthCheck(Health.Builder builder) {
try {
// Check AI service availability
chatModel.chat("ping");
builder.up();
} catch (Exception e) {
builder.down().withDetail("reason", e.getMessage());
}
}
}
```
## Actuator Integration
### Maven Dependency
```xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```
### Configuration
```yaml
management:
endpoints:
web:
exposure:
include: health, metrics, info
endpoint:
health:
show-details: always
```
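Beyond the built-in endpoints, AI call latency can be tracked with Micrometer, which Actuator pulls in (a sketch; the `ai.chat.latency` metric name and `MeteredChatService` are illustrative):

```java
@Service
public class MeteredChatService {
    private final ChatModel chatModel;
    private final Timer chatTimer;

    public MeteredChatService(ChatModel chatModel, MeterRegistry registry) {
        this.chatModel = chatModel;
        this.chatTimer = registry.timer("ai.chat.latency");
    }

    // Records how long each model call takes; the timing then
    // appears under /actuator/metrics/ai.chat.latency
    public String chat(String message) {
        return chatTimer.record(() -> chatModel.chat(message));
    }
}
```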
## Security Configuration
### @EnableWebSecurity
```java
@Configuration
@EnableWebSecurity
public class SecurityConfig {
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/public/**").permitAll()
                .requestMatchers("/api/private/**").authenticated())
            .httpBasic(Customizer.withDefaults());
return http.build();
}
}
```
## Bean Lifecycle
### @PostConstruct and @PreDestroy
```java
@Service
public class AiService {
    private EmbeddingStore<TextSegment> embeddingStore;

    @PostConstruct
    public void init() {
        // Initialize resources
        embeddingStore = createEmbeddingStore();
    }

    @PreDestroy
    public void cleanup() {
        // Release resources; the EmbeddingStore interface has no close(),
        // so shut down any underlying client or connection here
        embeddingStore = null;
    }
}
```
## Best Practices
1. **Use Constructor Injection**: Explicitly declare dependencies
2. **Externalize Configuration**: Use application.yml for settings
3. **Handle Exceptions**: Use @ControllerAdvice for consistent error handling
4. **Implement Caching**: Cache AI responses when appropriate
5. **Use Async Processing**: For long-running AI operations
6. **Add Health Checks**: Implement custom health indicators
7. **Log Appropriately**: Debug AI service calls in development
8. **Test Thoroughly**: Use @SpringBootTest and @WebMvcTest
9. **Secure APIs**: Implement authentication and authorization
10. **Monitor Performance**: Track AI service metrics
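Best practice 4 might look like this with Spring's caching abstraction (a sketch; assumes `@EnableCaching` on a configuration class and that a cache named `ai-responses` is acceptable for your data-freshness needs):

```java
@Service
public class CachedChatService {
    private final ChatModel chatModel;

    public CachedChatService(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    // Identical prompts are served from the "ai-responses" cache
    // instead of triggering another (slow, billed) model call
    @Cacheable("ai-responses")
    public String chat(String message) {
        return chatModel.chat(message);
    }
}
```

Pair this with a TTL-capable cache provider (Caffeine, Redis) so stale answers expire rather than living for the application's lifetime.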