Generative AI Inference

oci.generative_ai_inference.GenerativeAiInferenceClient OCI Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases for text generation, summarization, and text embeddings.
oci.generative_ai_inference.GenerativeAiInferenceClientCompositeOperations This class provides a wrapper around GenerativeAiInferenceClient and offers convenience methods for operations that would otherwise need to be chained together.
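A minimal sketch of constructing the client, assuming the `oci` package is installed and a standard SDK configuration file exists at the default location (`~/.oci/config`). The import is guarded so the snippet degrades gracefully where the SDK is unavailable:

```python
# Sketch: build a GenerativeAiInferenceClient from the default config file.
# Assumes `oci` is installed and configured; nothing here is a network call
# until a service operation (e.g. client.chat) is invoked.
try:
    import oci
except ImportError:  # SDK not installed in this environment
    oci = None

def make_client():
    """Create the inference client from the default OCI config file."""
    if oci is None:
        return None
    config = oci.config.from_file()  # reads ~/.oci/config by default
    return oci.generative_ai_inference.GenerativeAiInferenceClient(config)
```

The client's operations (`chat`, `embed_text`, `generate_text`, `summarize_text`) take the request-detail models listed below.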

Models

oci.generative_ai_inference.models.BaseChatRequest Base class for chat inference requests.
oci.generative_ai_inference.models.BaseChatResponse Base class for chat inference responses.
oci.generative_ai_inference.models.ChatChoice Represents a single instance of the chat response.
oci.generative_ai_inference.models.ChatContent The base class for the chat content.
oci.generative_ai_inference.models.ChatDetails Details of the conversation for the model to respond to.
oci.generative_ai_inference.models.ChatResult The response to the chat conversation.
oci.generative_ai_inference.models.Choice Represents a single instance of generated text.
oci.generative_ai_inference.models.Citation A section of the generated reply which cites external knowledge.
oci.generative_ai_inference.models.CohereChatRequest Details for the chat request for Cohere models.
oci.generative_ai_inference.models.CohereChatResponse The response to the chat conversation.
oci.generative_ai_inference.models.CohereLlmInferenceRequest Details for the text generation request for Cohere models.
oci.generative_ai_inference.models.CohereLlmInferenceResponse The generated text result to return.
oci.generative_ai_inference.models.CohereMessage A message that represents a single turn of a chat dialogue.
oci.generative_ai_inference.models.DedicatedServingMode The model’s serving mode is dedicated serving and has an endpoint on a dedicated AI cluster.
oci.generative_ai_inference.models.EmbedTextDetails Details for the request to embed texts.
oci.generative_ai_inference.models.EmbedTextResult The generated embeddings result to return.
oci.generative_ai_inference.models.GenerateTextDetails Details for the request to generate text.
oci.generative_ai_inference.models.GenerateTextResult The generated text result to return.
oci.generative_ai_inference.models.GeneratedText The text generated during each run.
oci.generative_ai_inference.models.GenericChatRequest Details for the chat request.
oci.generative_ai_inference.models.GenericChatResponse The response to the chat conversation.
oci.generative_ai_inference.models.LlamaLlmInferenceRequest Details for the text generation request for Llama models.
oci.generative_ai_inference.models.LlamaLlmInferenceResponse The generated text result to return.
oci.generative_ai_inference.models.LlmInferenceRequest The base class for the inference requests.
oci.generative_ai_inference.models.LlmInferenceResponse The base class for inference responses.
oci.generative_ai_inference.models.Logprobs Token log probabilities, returned when the logarithmic-probabilities option is set.
oci.generative_ai_inference.models.Message A message that represents a single turn of a chat dialogue.
oci.generative_ai_inference.models.OnDemandServingMode The model’s serving mode is on-demand serving on a shared infrastructure.
oci.generative_ai_inference.models.SearchQuery The generated search query.
oci.generative_ai_inference.models.ServingMode The model’s serving mode, which could be on-demand serving or dedicated serving.
oci.generative_ai_inference.models.SummarizeTextDetails Details for the request to summarize text.
oci.generative_ai_inference.models.SummarizeTextResult The summarized text result to return to the caller.
oci.generative_ai_inference.models.TextContent Represents a single instance of text chat content.
oci.generative_ai_inference.models.TokenLikelihood An object that contains the returned token and its corresponding likelihood.
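The request models above compose together: a `ChatDetails` object carries a compartment OCID, a serving mode, and a model-specific chat request. A sketch of assembling one for a Cohere model, with no network call involved; the compartment OCID and model name passed in are illustrative placeholders, and the import is guarded in case the `oci` package is absent:

```python
# Sketch: assemble ChatDetails with an on-demand serving mode and a
# CohereChatRequest. Construction only -- sending it requires a configured
# GenerativeAiInferenceClient and a chat() call.
try:
    import oci
except ImportError:  # SDK not installed in this environment
    oci = None

def build_chat_details(compartment_id, model_id, message):
    """Assemble ChatDetails for a Cohere model (construction only)."""
    if oci is None:
        return None
    chat_request = oci.generative_ai_inference.models.CohereChatRequest(
        message=message,      # the user's turn of the dialogue
        max_tokens=200,       # illustrative generation limit
        temperature=0.3,      # illustrative sampling temperature
    )
    return oci.generative_ai_inference.models.ChatDetails(
        compartment_id=compartment_id,
        serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
            model_id=model_id  # on-demand serving on shared infrastructure
        ),
        chat_request=chat_request,
    )
```

Swapping `OnDemandServingMode` for `DedicatedServingMode` (with a dedicated AI cluster endpoint) targets the same `chat` operation against dedicated capacity.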