**Packages that use `CohereLlmInferenceRequest.Truncate`**

Package | Description |
---|---|
`com.oracle.bmc.generativeaiinference.model` | |
**Methods in `com.oracle.bmc.generativeaiinference.model` that return `CohereLlmInferenceRequest.Truncate`**

Modifier and Type | Method and Description |
---|---|
`static CohereLlmInferenceRequest.Truncate` | `CohereLlmInferenceRequest.Truncate.create(String key)` |
`CohereLlmInferenceRequest.Truncate` | `CohereLlmInferenceRequest.getTruncate()` For an input that’s longer than the maximum token length, specifies which part of the input text will be truncated. |
`static CohereLlmInferenceRequest.Truncate` | `CohereLlmInferenceRequest.Truncate.valueOf(String name)` Returns the enum constant of this type with the specified name. |
`static CohereLlmInferenceRequest.Truncate[]` | `CohereLlmInferenceRequest.Truncate.values()` Returns an array containing the constants of this enum type, in the order they are declared. |
**Methods in `com.oracle.bmc.generativeaiinference.model` with parameters of type `CohereLlmInferenceRequest.Truncate`**

Modifier and Type | Method and Description |
---|---|
`CohereLlmInferenceRequest.Builder` | `CohereLlmInferenceRequest.Builder.truncate(CohereLlmInferenceRequest.Truncate truncate)` For an input that’s longer than the maximum token length, specifies which part of the input text will be truncated. |
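The builder method above returns the builder itself, so `truncate` can be chained with the request's other setters. A minimal, self-contained sketch of that fluent-builder shape (the `Request`/`Builder` classes and field names below are simplified stand-ins for illustration, not the real SDK types):

```java
public class BuilderDemo {
    enum Truncate { None, Start, End }

    // Mirrors the immutable-request-plus-builder shape of the SDK class.
    static final class Request {
        final String prompt;
        final Integer maxTokens;
        final Truncate truncate;

        private Request(String prompt, Integer maxTokens, Truncate truncate) {
            this.prompt = prompt;
            this.maxTokens = maxTokens;
            this.truncate = truncate;
        }

        static Builder builder() { return new Builder(); }

        static final class Builder {
            private String prompt;
            private Integer maxTokens;
            private Truncate truncate;

            Builder prompt(String prompt) { this.prompt = prompt; return this; }
            Builder maxTokens(Integer maxTokens) { this.maxTokens = maxTokens; return this; }
            // Counterpart of CohereLlmInferenceRequest.Builder.truncate(...).
            Builder truncate(Truncate truncate) { this.truncate = truncate; return this; }
            Request build() { return new Request(prompt, maxTokens, truncate); }
        }
    }

    public static void main(String[] args) {
        Request request = Request.builder()
                .prompt("Summarize the following text ...")
                .maxTokens(200)
                .truncate(Truncate.End) // truncate the end of over-long input
                .build();
        System.out.println(request.truncate); // End
    }
}
```

The fluent builder is also why the all-arguments constructor listed below is deprecated: named setter calls are far less error-prone than a thirteen-argument positional constructor.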
**Constructors in `com.oracle.bmc.generativeaiinference.model` with parameters of type `CohereLlmInferenceRequest.Truncate`**

Constructor and Description |
---|
`CohereLlmInferenceRequest(String prompt, Boolean isStream, Integer numGenerations, Boolean isEcho, Integer maxTokens, Double temperature, Integer topK, Double topP, Double frequencyPenalty, Double presencePenalty, List<String> stopSequences, CohereLlmInferenceRequest.ReturnLikelihoods returnLikelihoods, CohereLlmInferenceRequest.Truncate truncate)` Deprecated. |
Copyright © 2016–2024. All rights reserved.