Class | Description |
---|---|
BaseChatRequest | Base class for chat inference requests. Note: Objects should always be created or deserialized using the Builder. |
BaseChatResponse | Base class for chat inference responses. Note: Objects should always be created or deserialized using the Builder. |
ChatChoice | Represents a single instance of the chat response. |
ChatChoice.Builder | |
ChatContent | The base class for the chat content. |
ChatDetails | Details of the conversation for the model to respond to. |
ChatDetails.Builder | |
ChatResult | The response to the chat conversation. |
ChatResult.Builder | |
Choice | Represents a single instance of generated text. |
Choice.Builder | |
Citation | A section of the generated reply that cites external knowledge. |
Citation.Builder | |
CohereChatRequest | Details for the chat request for Cohere models. |
CohereChatRequest.Builder | |
CohereChatResponse | The response to the chat conversation. |
CohereChatResponse.Builder | |
CohereLlmInferenceRequest | Details for the text generation request for Cohere models. |
CohereLlmInferenceRequest.Builder | |
CohereLlmInferenceResponse | The generated text result to return. |
CohereLlmInferenceResponse.Builder | |
CohereMessage | A message that represents a single dialogue in a chat. Note: Objects should always be created or deserialized using the CohereMessage.Builder. |
CohereMessage.Builder | |
DedicatedServingMode | The model’s serving mode is dedicated serving, with an endpoint on a dedicated AI cluster. |
DedicatedServingMode.Builder | |
EmbedTextDetails | Details for the request to embed texts. |
EmbedTextDetails.Builder | |
EmbedTextResult | The generated embedding result to return. |
EmbedTextResult.Builder | |
GeneratedText | The text generated during each run. |
GeneratedText.Builder | |
GenerateTextDetails | Details for the request to generate text. |
GenerateTextDetails.Builder | |
GenerateTextResult | The generated text result to return. |
GenerateTextResult.Builder | |
GenericChatRequest | Details for the chat request. |
GenericChatRequest.Builder | |
GenericChatResponse | The response to the chat conversation. |
GenericChatResponse.Builder | |
LlamaLlmInferenceRequest | Details for the text generation request for Llama models. |
LlamaLlmInferenceRequest.Builder | |
LlamaLlmInferenceResponse | The generated text result to return. |
LlamaLlmInferenceResponse.Builder | |
LlmInferenceRequest | The base class for the inference requests. |
LlmInferenceResponse | The base class for inference responses. |
Logprobs | Returned when logarithmic probabilities are requested. |
Logprobs.Builder | |
Message | A message that represents a single dialogue in a chat. Note: Objects should always be created or deserialized using the Message.Builder. |
Message.Builder | |
OnDemandServingMode | The model’s serving mode is on-demand serving on shared infrastructure. |
OnDemandServingMode.Builder | |
SearchQuery | The generated search query. |
SearchQuery.Builder | |
ServingMode | The model’s serving mode, which can be on-demand serving or dedicated serving. |
SummarizeTextDetails | Details for the request to summarize text. |
SummarizeTextDetails.Builder | |
SummarizeTextResult | The summarized text result to return to the caller. |
SummarizeTextResult.Builder | |
TextContent | Represents a single instance of text chat content. |
TextContent.Builder | |
TokenLikelihood | An object that contains the returned token and its corresponding likelihood. |
TokenLikelihood.Builder | |
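The note repeated throughout the table — objects should always be created or deserialized using the Builder — reflects the nested-builder pattern these model classes follow. Below is a minimal, self-contained sketch of that pattern; `ChatRequestSketch`, its fields, and the default temperature are illustrative stand-ins, not the SDK's actual API.

```java
// Illustrative sketch of the nested-builder pattern used by the model
// classes above (e.g. CohereChatRequest.Builder). This class is a
// stand-in, not a real SDK type.
final class ChatRequestSketch {
    private final String message;
    private final double temperature;

    // Private constructor: instances are only created through the Builder.
    private ChatRequestSketch(Builder b) {
        this.message = b.message;
        this.temperature = b.temperature;
    }

    public String getMessage() { return message; }
    public double getTemperature() { return temperature; }

    public static Builder builder() { return new Builder(); }

    public static final class Builder {
        private String message;
        private double temperature = 0.75; // hypothetical default

        public Builder message(String message) {
            this.message = message;
            return this;
        }

        public Builder temperature(double temperature) {
            this.temperature = temperature;
            return this;
        }

        public ChatRequestSketch build() {
            return new ChatRequestSketch(this);
        }
    }
}
```

Usage follows the fluent style the SDK's builders share: `ChatRequestSketch.builder().message("Hello").temperature(0.2).build()`.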
Enum | Description |
---|---|
BaseChatRequest.ApiFormat | The API format for the model’s request. |
BaseChatResponse.ApiFormat | The API format for the model’s response. |
ChatContent.Type | The type of the content. |
CohereChatResponse.FinishReason | Why the generation was completed. |
CohereLlmInferenceRequest.ReturnLikelihoods | Specifies how, and whether, token likelihoods are returned with the response. |
CohereLlmInferenceRequest.Truncate | For an input that’s longer than the maximum token length, specifies which part of the input text is truncated. |
CohereMessage.Role | One of CHATBOT or USER, identifying who the message is coming from. |
EmbedTextDetails.InputType | Specifies the input type. |
EmbedTextDetails.Truncate | For an input that’s longer than the maximum token length, specifies which part of the input text is truncated. |
LlmInferenceRequest.RuntimeType | The runtime of the provided model. |
LlmInferenceResponse.RuntimeType | The runtime of the provided model. |
ServingMode.ServingType | The serving mode type, which can be on-demand serving or dedicated serving. |
SummarizeTextDetails.Extractiveness | Controls how close the summary stays to the original text. |
SummarizeTextDetails.Format | Indicates the style in which the summary is delivered: a free-form paragraph or bullet points. |
SummarizeTextDetails.Length | Indicates the approximate length of the summary. |
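The two Truncate enums above describe the same idea: when an input exceeds the model's maximum token length, either the start or the end of the text is dropped. The following sketch illustrates that behavior; the enum constants and the word-based tokenization are assumptions for illustration, not the SDK's actual handling.

```java
import java.util.Arrays;

// Illustrative stand-in for the Truncate enums above; the SDK's actual
// constants and tokenization may differ.
enum TruncateMode { NONE, START, END }

final class TruncateSketch {
    // Truncate a whitespace-tokenized input to at most maxTokens tokens.
    static String truncate(String input, int maxTokens, TruncateMode mode) {
        String[] tokens = input.trim().split("\\s+");
        if (tokens.length <= maxTokens || mode == TruncateMode.NONE) {
            return input;
        }
        String[] kept = (mode == TruncateMode.START)
                // START: the start of the input is truncated; keep the tail.
                ? Arrays.copyOfRange(tokens, tokens.length - maxTokens, tokens.length)
                // END: the end of the input is truncated; keep the head.
                : Arrays.copyOfRange(tokens, 0, maxTokens);
        return String.join(" ", kept);
    }
}
```

For example, with a four-word input and a two-token limit, `END` keeps the first two words while `START` keeps the last two.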
Copyright © 2016–2024. All rights reserved.