Class OllamaModelProvider
- All Implemented Interfaces:
ModelProvider
This provider enables integration with Ollama, a tool for running large language models locally. Ollama supports a wide variety of models including Llama, Mistral, Code Llama, and many others, providing both chat and embedding capabilities.
The provider supports all three model types: embedding models for creating vector representations of text, chat models for conversational AI, and streaming chat models for real-time response generation.
Configuration is managed through MicroProfile Config with the following property:
- ollama.base.url - Base URL for the Ollama service (defaults to http://localhost:11434)
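The property can be supplied through any MicroProfile Config source, such as a microprofile-config.properties file or a system property. A minimal sketch (the host and port below are illustrative, not defaults from this provider):

  # microprofile-config.properties
  ollama.base.url=http://ollama-host.example.com:11434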
Example usage:
@Inject
@Named("Ollama")
ModelProvider ollamaProvider;
EmbeddingModel embeddingModel = ollamaProvider.getEmbeddingModel("nomic-embed-text");
ChatModel chatModel = ollamaProvider.getChatModel("llama3.1:8b");
StreamingChatModel streamingModel = ollamaProvider.getStreamingChatModel("llama3.1:8b");
- Since:
- 25.09
- Author:
- Aleks Seovic 2025.07.04
Constructor Summary

  OllamaModelProvider(org.eclipse.microprofile.config.Config config, ConfigRepository jsonConfig)
      Default constructor for CDI initialization.
Method Summary

  dev.langchain4j.model.chat.ChatModel getChatModel(String sName)
      Returns a chat model instance for the specified model name.
  dev.langchain4j.model.embedding.EmbeddingModel getEmbeddingModel(String sName)
      Returns an embedding model instance for the specified model name.
  dev.langchain4j.model.chat.StreamingChatModel getStreamingChatModel(String sName)
      Returns a streaming chat model instance for the specified model name.
  protected String ollamaBaseUrl()
      Returns the Ollama base URL from configuration.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface com.oracle.coherence.rag.ModelProvider
getScoringModel
Constructor Details

OllamaModelProvider
@Inject
public OllamaModelProvider(org.eclipse.microprofile.config.Config config, ConfigRepository jsonConfig)
Default constructor for CDI initialization.
Method Details

getEmbeddingModel
public dev.langchain4j.model.embedding.EmbeddingModel getEmbeddingModel(String sName)
Returns an embedding model instance for the specified model name. Embedding models convert text into vector representations that can be used for semantic similarity search and document retrieval operations.
Creates an Ollama embedding model for generating vector representations of text. Supports models like nomic-embed-text, mxbai-embed-large, and other embedding-capable models available in Ollama.
Specified by:
getEmbeddingModel in interface ModelProvider
Parameters:
sName - the name of the embedding model to create
Returns:
a configured OllamaEmbeddingModel instance
Throws:
IllegalArgumentException - if the model name is null or empty
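A minimal usage sketch, assuming the nomic-embed-text model has been pulled into the local Ollama instance and using the embed method of the langchain4j EmbeddingModel interface (the input string is illustrative):

  EmbeddingModel embeddingModel = ollamaProvider.getEmbeddingModel("nomic-embed-text");
  Response<Embedding> response = embeddingModel.embed("Coherence is a distributed data grid");  // illustrative input
  float[] vector = response.content().vector();  // vector representation, usable for similarity search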
getChatModel
public dev.langchain4j.model.chat.ChatModel getChatModel(String sName)
Returns a chat model instance for the specified model name. Chat models provide conversational AI capabilities for generating responses to user queries in a blocking manner. These models are suitable for synchronous chat interactions.
Creates an Ollama chat model for conversational AI. Supports a wide variety of models including Llama, Mistral, Code Llama, and others.
Specified by:
getChatModel in interface ModelProvider
Parameters:
sName - the name of the chat model to create (e.g., "llama3.1:8b")
Returns:
a configured OllamaChatModel instance
Throws:
IllegalArgumentException - if the model name is null or empty
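A minimal usage sketch, assuming the llama3.1:8b model is available locally and using the blocking chat method of the langchain4j ChatModel interface (the prompt is illustrative):

  ChatModel chatModel = ollamaProvider.getChatModel("llama3.1:8b");
  String answer = chatModel.chat("What is retrieval-augmented generation?");  // blocks until the full response is generated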
getStreamingChatModel
public dev.langchain4j.model.chat.StreamingChatModel getStreamingChatModel(String sName)
Returns a streaming chat model instance for the specified model name. Streaming chat models provide conversational AI capabilities with streaming response generation, allowing for real-time token-by-token response delivery. This is useful for creating responsive chat interfaces.
Creates an Ollama streaming chat model for real-time response generation. Enables progressive response streaming suitable for interactive applications.
Specified by:
getStreamingChatModel in interface ModelProvider
Parameters:
sName - the name of the streaming chat model to create (e.g., "llama3.1:8b")
Returns:
a configured OllamaStreamingChatModel instance
Throws:
IllegalArgumentException - if the model name is null or empty
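A sketch of consuming the stream, assuming the callback-based chat method of the langchain4j StreamingChatModel interface and its StreamingChatResponseHandler (the prompt and handler bodies are illustrative):

  StreamingChatModel streamingModel = ollamaProvider.getStreamingChatModel("llama3.1:8b");
  streamingModel.chat("Explain vector embeddings in one paragraph.", new StreamingChatResponseHandler() {
      @Override
      public void onPartialResponse(String token) {
          System.out.print(token);    // render each token as it arrives
      }

      @Override
      public void onCompleteResponse(ChatResponse response) {
          System.out.println();       // full response received
      }

      @Override
      public void onError(Throwable error) {
          error.printStackTrace();
      }
  });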
ollamaBaseUrl
protected String ollamaBaseUrl()
Returns the Ollama base URL from configuration.
Returns:
the base URL for the Ollama service; defaults to http://localhost:11434