
Spring Boot Integration

LangChain4j provides Spring Boot starters for popular integrations and for declarative AI Services.

Spring Boot Starters

Spring Boot starters help with creating and configuring language models, embedding models, embedding stores, and other core LangChain4j components through properties.

To use one of the Spring Boot starters, import the corresponding dependency.

The naming convention for the Spring Boot starter dependency is: langchain4j-{integration-name}-spring-boot-starter.

For example, for OpenAI (langchain4j-open-ai), the dependency name would be langchain4j-open-ai-spring-boot-starter:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
    <version>1.12.1-beta21</version>
</dependency>

Then, you can configure model parameters in the application.properties file as follows:

langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o
langchain4j.open-ai.chat-model.log-requests=true
langchain4j.open-ai.chat-model.log-responses=true
...

In this case, an instance of OpenAiChatModel (an implementation of a ChatModel) will be automatically created, and you can autowire it where needed:

@RestController
public class ChatController {

    private final ChatModel chatModel;

    public ChatController(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/chat")
    public String model(@RequestParam(value = "message", defaultValue = "Hello") String message) {
        return chatModel.chat(message);
    }
}

If you need an instance of a StreamingChatModel, use the streaming-chat-model properties instead of the chat-model properties:

langchain4j.open-ai.streaming-chat-model.api-key=${OPENAI_API_KEY}
...
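The StreamingChatModel delivers tokens through callbacks rather than returning a complete String. The following is a minimal sketch of consuming it with LangChain4j's StreamingChatResponseHandler; the controller and endpoint names are illustrative, and the partial responses are simply printed here rather than pushed to the HTTP client:

```java
import dev.langchain4j.model.chat.StreamingChatModel;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class StreamingChatController {

    private final StreamingChatModel streamingChatModel;

    StreamingChatController(StreamingChatModel streamingChatModel) {
        this.streamingChatModel = streamingChatModel;
    }

    @GetMapping("/stream")
    public void stream(@RequestParam(value = "message", defaultValue = "Hello") String message) {
        // Tokens arrive asynchronously through the handler callbacks
        streamingChatModel.chat(message, new StreamingChatResponseHandler() {

            @Override
            public void onPartialResponse(String partialResponse) {
                System.out.print(partialResponse);
            }

            @Override
            public void onCompleteResponse(ChatResponse completeResponse) {
                System.out.println();
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```

To stream partial responses back to the HTTP client instead, see the Flux section below.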

Spring Boot starter for declarative AI Services

LangChain4j provides a Spring Boot starter for auto-configuring AI Services, RAG, Tools etc.

Assuming you have already imported one of the integrations starters (see above), import langchain4j-spring-boot-starter:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-spring-boot-starter</artifactId>
    <version>1.12.1-beta21</version>
</dependency>

You can now define an AI Service interface and annotate it with @AiService:

@AiService
interface Assistant {

    @SystemMessage("You are a polite assistant")
    String chat(String userMessage);
}

Think of it as a standard Spring Boot @Service, but with AI capabilities.

When the application starts, the LangChain4j starter will scan the classpath and find all interfaces annotated with @AiService. For each AI Service found, it will create an implementation of this interface using all LangChain4j components available in the application context and will register it as a bean, so you can autowire it where needed:

@RestController
class AssistantController {

    @Autowired
    Assistant assistant;

    @GetMapping("/chat")
    public String chat(String message) {
        return assistant.chat(message);
    }
}

Automatic Component Wiring

The following components will be automatically wired into the AI Service if available in the application context:

  • ChatModel
  • StreamingChatModel
  • ChatMemory
  • ChatMemoryProvider
  • ContentRetriever
  • RetrievalAugmentor
  • ToolProvider
  • All methods of any @Component or @Service class that are annotated with @Tool

An example:

@Component
public class BookingTools {

    private final BookingService bookingService;

    public BookingTools(BookingService bookingService) {
        this.bookingService = bookingService;
    }

    @Tool
    public Booking getBookingDetails(String bookingNumber, String customerName, String customerSurname) {
        return bookingService.getBookingDetails(bookingNumber, customerName, customerSurname);
    }

    @Tool
    public void cancelBooking(String bookingNumber, String customerName, String customerSurname) {
        bookingService.cancelBooking(bookingNumber, customerName, customerSurname);
    }
}
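The other components in the list above are wired the same way: declare them as beans and the starter picks them up. As one hedged sketch, a ChatMemory bean could be provided like this (the configuration class name is illustrative; MessageWindowChatMemory is part of LangChain4j core):

```java
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class ChatMemoryConfig {

    // A memory keeping the last 10 messages; because it is present
    // in the application context, it will be wired into the AI Service
    @Bean
    ChatMemory chatMemory() {
        return MessageWindowChatMemory.withMaxMessages(10);
    }
}
```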
Note: if multiple components of the same type are present in the application context, the application will fail to start. In this case, use the explicit wiring mode (explained below).

Explicit Component Wiring

If you have multiple AI Services and want to wire different LangChain4j components into each of them, you can specify which components to use with explicit wiring mode (@AiService(wiringMode = EXPLICIT)).

Let's say we have two ChatModels configured:

# OpenAI
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o-mini

# Ollama
langchain4j.ollama.chat-model.base-url=http://localhost:11434
langchain4j.ollama.chat-model.model-name=llama3.1

@AiService(wiringMode = EXPLICIT, chatModel = "openAiChatModel")
interface OpenAiAssistant {

    @SystemMessage("You are a polite assistant")
    String chat(String userMessage);
}

@AiService(wiringMode = EXPLICIT, chatModel = "ollamaChatModel")
interface OllamaAssistant {

    @SystemMessage("You are a polite assistant")
    String chat(String userMessage);
}
Note: in this case, you must explicitly specify all components.

More details can be found here.

Listening for AI Service Registration Events

Once you have defined AI Services declaratively, you can listen for the AiServiceRegisteredEvent by implementing the ApplicationListener<AiServiceRegisteredEvent> interface. This event is triggered when an AI Service is registered in the Spring context, allowing you to obtain information about all registered AI Services and their tools at runtime. Here is an example:

@Component
class AiServiceRegisteredEventListener implements ApplicationListener<AiServiceRegisteredEvent> {

    @Override
    public void onApplicationEvent(AiServiceRegisteredEvent event) {
        Class<?> aiServiceClass = event.aiServiceClass();
        List<ToolSpecification> toolSpecifications = event.toolSpecifications();
        for (int i = 0; i < toolSpecifications.size(); i++) {
            System.out.printf("[%s]: [Tool-%s]: %s%n", aiServiceClass.getSimpleName(), i + 1, toolSpecifications.get(i));
        }
    }
}

Flux

When streaming, you can use Flux<String> as the return type of an AI Service method:

@AiService
interface Assistant {

    @SystemMessage("You are a polite assistant")
    Flux<String> chat(String userMessage);
}

For this, import the langchain4j-reactor module. See more details here.
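Such a Flux can be handed straight to Spring WebFlux. The following is a minimal sketch, assuming the Assistant interface above; the controller name, endpoint path, and media type choice are illustrative:

```java
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class StreamingAssistantController {

    private final Assistant assistant;

    StreamingAssistantController(Assistant assistant) {
        this.assistant = assistant;
    }

    // Each partial response is emitted to the client as a server-sent event
    @GetMapping(value = "/chat-stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    Flux<String> chat(@RequestParam(value = "message", defaultValue = "Hello") String message) {
        return assistant.chat(message);
    }
}
```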

Observability

To enable observability for a ChatModel or StreamingChatModel bean, you need to declare one or more ChatModelListener beans:

@Configuration
class MyConfiguration {

    @Bean
    ChatModelListener chatModelListener() {
        return new ChatModelListener() {

            private static final Logger log = LoggerFactory.getLogger(ChatModelListener.class);

            @Override
            public void onRequest(ChatModelRequestContext requestContext) {
                log.info("onRequest(): {}", requestContext.chatRequest());
            }

            @Override
            public void onResponse(ChatModelResponseContext responseContext) {
                log.info("onResponse(): {}", responseContext.chatResponse());
            }

            @Override
            public void onError(ChatModelErrorContext errorContext) {
                log.info("onError(): {}", errorContext.error().getMessage());
            }
        };
    }
}

Every ChatModelListener bean in the application context will be automatically injected into all ChatModel and StreamingChatModel beans created by one of our Spring Boot starters.

Micrometer Metrics

Add the langchain4j-micrometer-metrics dependency to your project:

For Maven:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-micrometer-metrics</artifactId>
    <version>1.12.1-beta21</version>
</dependency>

For Gradle:

implementation 'dev.langchain4j:langchain4j-micrometer-metrics:1.12.1-beta21'

Micrometer (Actuator) Configuration

You should also have the necessary Actuator dependency in your project. For example, if you are using Spring Boot, you can add the following dependencies to your pom.xml:

For Maven:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

For Gradle:

implementation 'org.springframework.boot:spring-boot-starter-actuator'

Enable the /metrics Actuator endpoint in your properties.

application.properties:

management.endpoints.web.exposure.include=metrics

application.yaml:

management:
  endpoints:
    web:
      exposure:
        include: metrics

Configure the MicrometerMetricsChatModelListener bean

In a Spring Boot application, you can define the listener as a bean and inject the MeterRegistry:

import dev.langchain4j.micrometer.metrics.listeners.MicrometerMetricsChatModelListener;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {

    @Bean
    public MicrometerMetricsChatModelListener listener(MeterRegistry meterRegistry) {
        return new MicrometerMetricsChatModelListener(meterRegistry);
    }
}

View the Metrics

You can view the metrics by visiting the /actuator/metrics endpoint of your application.

For example, if you are running your application on localhost:8080, you can visit http://localhost:8080/actuator/metrics to view the metrics.

Token Usage Metric

View the token usage metric at:

http://localhost:8080/actuator/metrics/gen_ai.client.token.usage

Filtering by Token Type

The gen_ai.token.type tag indicates whether the tokens were used for input or output:

| Token Type    | Endpoint                                                                 |
| ------------- | ------------------------------------------------------------------------ |
| Input tokens  | /actuator/metrics/gen_ai.client.token.usage?tag=gen_ai.token.type:input  |
| Output tokens | /actuator/metrics/gen_ai.client.token.usage?tag=gen_ai.token.type:output |

Note: The gen_ai.client.token.usage metric is a histogram (DistributionSummary). The endpoint without any tags shows aggregated statistics (count, total, max) across all token types, models, and providers.
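The same data can also be read programmatically through the injected MeterRegistry. A hedged sketch, assuming the metric and tag names above; the reporter class and method names are illustrative:

```java
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
class TokenUsageReporter {

    private final MeterRegistry meterRegistry;

    TokenUsageReporter(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    // Looks up the input-token DistributionSummary; it is null
    // until at least one chat call has been recorded
    double totalInputTokens() {
        DistributionSummary summary = meterRegistry.find("gen_ai.client.token.usage")
                .tag("gen_ai.token.type", "input")
                .summary();
        return summary == null ? 0 : summary.totalAmount();
    }
}
```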

Micrometer Observation API

The langchain4j-observation module implements ChatModelListener using the Micrometer Observation API, allowing transparent generation of metrics and traces. Add the following dependency:

For Maven:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-observation</artifactId>
</dependency>

For Gradle:

implementation 'dev.langchain4j:langchain4j-observation'

Then instantiate the Observation listener as follows.

Configure the ObservationChatModelListener bean

@Configuration
public class ObservationConfig {

    @Bean
    public ObservationChatModelListener listener(ObservationRegistry observationRegistry, MeterRegistry meterRegistry) {
        return new ObservationChatModelListener(observationRegistry, meterRegistry);
    }
}

This dependency requires the Spring Boot Actuator configuration described above.

For additional observability requirements in a Spring Boot application, please follow: Building Your First Observed Application

For more details about the langchain4j-observation library, please check the Observability documentation.

Supported versions

LangChain4j Spring Boot integration requires Java 17 and Spring Boot 3.5, in line with the Spring Boot OSS support policy.

Support for Spring Boot 4.x is not available yet in LangChain4j, but it's planned for a future release.
