| oci.generative_ai_inference.models.Annotation | An annotation attached to the assistant’s message, used to represent additional metadata such as citations. | 
| oci.generative_ai_inference.models.ApplyGuardrailsDetails | Details for applying guardrails to the input text. | 
| oci.generative_ai_inference.models.ApplyGuardrailsResult | The result of applying guardrails to the input text. | 
| oci.generative_ai_inference.models.ApproximateLocation | An approximate user location used to refine search results by geography; for example, city and region are free-text strings, like “Minneapolis” and “Minnesota”. | 
| oci.generative_ai_inference.models.AssistantMessage | Represents a single instance of an assistant message. | 
| oci.generative_ai_inference.models.AudioContent | Represents a single instance of chat audio content. | 
| oci.generative_ai_inference.models.AudioUrl | Provide base64-encoded audio, or an audio URI if it’s supported. | 
| oci.generative_ai_inference.models.BaseChatRequest | The base class to use for the chat inference request. | 
| oci.generative_ai_inference.models.BaseChatResponse | The base class that creates the chat response. | 
| oci.generative_ai_inference.models.CategoryScore | A category with its score. | 
| oci.generative_ai_inference.models.ChatChoice | Represents a single instance of the chat response. | 
| oci.generative_ai_inference.models.ChatContent | The base class for the chat content. | 
| oci.generative_ai_inference.models.ChatDetails | Details of the conversation for the model to respond to. | 
| oci.generative_ai_inference.models.ChatResult | The response to the chat conversation. | 
| oci.generative_ai_inference.models.Choice | Represents a single instance of the generated text. | 
| oci.generative_ai_inference.models.Citation | A section of the generated response which cites the documents that were used for generating the response. | 
| oci.generative_ai_inference.models.CohereChatBotMessage | A message that represents a single chat dialog with the CHATBOT role. | 
| oci.generative_ai_inference.models.CohereChatRequest | Details for the chat request for Cohere models. | 
| oci.generative_ai_inference.models.CohereChatResponse | The response to the chat conversation. | 
| oci.generative_ai_inference.models.CohereLlmInferenceRequest | Details for the text generation request for Cohere models. | 
| oci.generative_ai_inference.models.CohereLlmInferenceResponse | The generated text result to return. | 
| oci.generative_ai_inference.models.CohereMessage | A message that represents a single chat dialog. | 
| oci.generative_ai_inference.models.CohereParameterDefinition | A definition of a tool parameter. | 
| oci.generative_ai_inference.models.CohereResponseFormat | Specifies the format that the model output is guaranteed to follow. | 
| oci.generative_ai_inference.models.CohereResponseJsonFormat | The JSON object format for the model’s structured output. | 
| oci.generative_ai_inference.models.CohereResponseTextFormat | The text format for a Cohere model response. | 
| oci.generative_ai_inference.models.CohereSystemMessage | A message that represents a single chat dialog with the SYSTEM role. | 
| oci.generative_ai_inference.models.CohereTool | A definition of a tool (function). | 
| oci.generative_ai_inference.models.CohereToolCall | A tool call generated by the model. | 
| oci.generative_ai_inference.models.CohereToolMessage | A message that represents a single chat dialog with the TOOL role. | 
| oci.generative_ai_inference.models.CohereToolResult | The result from invoking tools recommended by the model in the previous chat turn. | 
| oci.generative_ai_inference.models.CohereUserMessage | A message that represents a single chat dialog with the USER role. | 
| oci.generative_ai_inference.models.CompletionTokensDetails | Breakdown of tokens used in a completion. | 
| oci.generative_ai_inference.models.ContentModerationConfiguration | Configuration for content moderation. | 
| oci.generative_ai_inference.models.ContentModerationResult | The result of content moderation. | 
| oci.generative_ai_inference.models.DedicatedServingMode | The model’s serving mode is dedicated serving and has an endpoint on a dedicated AI cluster. | 
| oci.generative_ai_inference.models.DeveloperMessage | Developer-provided instructions that the model should follow, regardless of messages sent by the user. | 
| oci.generative_ai_inference.models.Document | The input of the document to rerank. | 
| oci.generative_ai_inference.models.DocumentRank | An object that contains a relevance score, an index and the text for a document. | 
| oci.generative_ai_inference.models.EmbedTextDetails | Details for the request to embed texts. | 
| oci.generative_ai_inference.models.EmbedTextResult | The generated embedded result to return. | 
| oci.generative_ai_inference.models.FunctionCall | The function call generated by the model. | 
| oci.generative_ai_inference.models.FunctionDefinition | A function the model may call. | 
| oci.generative_ai_inference.models.GenerateTextDetails | Details for the request to generate text. | 
| oci.generative_ai_inference.models.GenerateTextResult | The generated text result to return. | 
| oci.generative_ai_inference.models.GeneratedText | The text generated during each run. | 
| oci.generative_ai_inference.models.GenericChatRequest | Details for the chat request. | 
| oci.generative_ai_inference.models.GenericChatResponse | The response for a chat conversation. | 
| oci.generative_ai_inference.models.GroundingChunk | An object containing the source. | 
| oci.generative_ai_inference.models.GroundingMetadata | Grounding metadata. | 
| oci.generative_ai_inference.models.GroundingSupport | A chunk that connects the model’s response text to its source in groundingChunk. | 
| oci.generative_ai_inference.models.GroundingSupportSegment | A segment within groundingSupport. | 
| oci.generative_ai_inference.models.GroundingWebChunk | An object containing the web source. | 
| oci.generative_ai_inference.models.GuardrailConfigs | Additional configuration for each guardrail. | 
| oci.generative_ai_inference.models.GuardrailsInput | The input data for applying guardrails. | 
| oci.generative_ai_inference.models.GuardrailsResults | The results of applying each guardrail. | 
| oci.generative_ai_inference.models.GuardrailsTextInput | Represents a single instance of text in the guardrails input. | 
| oci.generative_ai_inference.models.ImageContent | Represents a single instance of chat image content. | 
| oci.generative_ai_inference.models.ImageUrl | Provide a base64-encoded image, or an image URI if it’s supported. | 
| oci.generative_ai_inference.models.JsonObjectResponseFormat | Enables JSON mode, which ensures the message the model generates is valid JSON. | 
| oci.generative_ai_inference.models.JsonSchemaResponseFormat | Enables Structured Outputs, which ensures that the model’s output will match your supplied JSON schema. | 
| oci.generative_ai_inference.models.LlamaLlmInferenceRequest | Details for the text generation request for Llama models. | 
| oci.generative_ai_inference.models.LlamaLlmInferenceResponse | The generated text result to return. | 
| oci.generative_ai_inference.models.LlmInferenceRequest | The base class for the inference requests. | 
| oci.generative_ai_inference.models.LlmInferenceResponse | The base class for inference responses. | 
| oci.generative_ai_inference.models.Logprobs | Includes the logarithmic probabilities for the most likely output tokens and the chosen tokens. | 
| oci.generative_ai_inference.models.Message | A message that represents a single chat dialog. | 
| oci.generative_ai_inference.models.OnDemandServingMode | The model’s serving mode is on-demand serving on a shared infrastructure. | 
| oci.generative_ai_inference.models.PersonallyIdentifiableInformationConfiguration | Configuration for personally identifiable information detection. | 
| oci.generative_ai_inference.models.PersonallyIdentifiableInformationResult | An item of personally identifiable information. | 
| oci.generative_ai_inference.models.Prediction | Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. | 
| oci.generative_ai_inference.models.PromptInjectionConfiguration | Configuration for prompt injection protection. | 
| oci.generative_ai_inference.models.PromptInjectionProtectionResult | The result of prompt injection protection. | 
| oci.generative_ai_inference.models.PromptTokensDetails | Breakdown of tokens used in the prompt. | 
| oci.generative_ai_inference.models.RerankTextDetails | Details required for a rerank request. | 
| oci.generative_ai_inference.models.RerankTextResult | The rerank response to return to the caller. | 
| oci.generative_ai_inference.models.ResponseFormat | An object specifying the format that the model must output. | 
| oci.generative_ai_inference.models.ResponseJsonSchema | The JSON schema definition to be used in JSON_SCHEMA response format. | 
| oci.generative_ai_inference.models.SearchEntryPoint | Contains the HTML and CSS to render the required Search Suggestions. | 
| oci.generative_ai_inference.models.SearchQuery | The generated search query. | 
| oci.generative_ai_inference.models.ServingMode | The model’s serving mode, which is either on-demand serving or dedicated serving. | 
| oci.generative_ai_inference.models.StaticContent | Static predicted output content, such as the content of a text file that is being regenerated. | 
| oci.generative_ai_inference.models.StreamOptions | Options for streaming response. | 
| oci.generative_ai_inference.models.SummarizeTextDetails | Details for the request to summarize text. | 
| oci.generative_ai_inference.models.SummarizeTextResult | The summarize text result to return to the caller. | 
| oci.generative_ai_inference.models.SystemMessage | Represents a single instance of a system message. | 
| oci.generative_ai_inference.models.TextContent | Represents a single instance of text in the chat content. | 
| oci.generative_ai_inference.models.TextResponseFormat | Enables TEXT mode. | 
| oci.generative_ai_inference.models.TokenLikelihood | An object that contains the returned token and its corresponding likelihood. | 
| oci.generative_ai_inference.models.ToolCall | The tool call generated by the model, such as a function call. | 
| oci.generative_ai_inference.models.ToolChoice | The tool choice for a tool. | 
| oci.generative_ai_inference.models.ToolChoiceAuto | The model can pick between generating a message or calling one or more tools. | 
| oci.generative_ai_inference.models.ToolChoiceFunction | The tool choice for a function. | 
| oci.generative_ai_inference.models.ToolChoiceNone | The model will not call any tool and instead generates a message. | 
| oci.generative_ai_inference.models.ToolChoiceRequired | The model must call one or more tools. | 
| oci.generative_ai_inference.models.ToolDefinition | A tool the model may call. | 
| oci.generative_ai_inference.models.ToolMessage | Represents a single instance of a tool message. | 
| oci.generative_ai_inference.models.UrlCitation | Contains metadata for a cited URL included in the assistant’s response. | 
| oci.generative_ai_inference.models.Usage | Usage statistics for the completion request. | 
| oci.generative_ai_inference.models.UserMessage | Represents a single instance of a user message. | 
| oci.generative_ai_inference.models.VideoContent | Represents a single instance of chat video content. | 
| oci.generative_ai_inference.models.VideoUrl | The base64-encoded video data, or a video URI if it’s supported. | 
| oci.generative_ai_inference.models.WebSearchOptions | Options for performing a web search to augment the response. |