Class LlamaLlmInferenceRequest
  Details for the text generation request for Llama models.
  Inheritance
    object
    LlmInferenceRequest
    LlamaLlmInferenceRequest

  Assembly: OCI.DotNetSDK.Generativeaiinference.dll
  Syntax
  
    public class LlamaLlmInferenceRequest : LlmInferenceRequest
   
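  The following sketch shows one way to populate this request type. The property values are illustrative only, and the Oci.GenerativeaiinferenceService.Models namespace is an assumption based on the usual OCI .NET SDK naming conventions.

    using System.Collections.Generic;
    using Oci.GenerativeaiinferenceService.Models; // assumed namespace

    // Illustrative values; none of these are documented defaults.
    var request = new LlamaLlmInferenceRequest
    {
        Prompt = "Write a haiku about the ocean.",
        MaxTokens = 200,
        Temperature = 0.7,
        TopP = 0.9,
        NumGenerations = 1,
        IsStream = false
    };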
  Properties
  
  FrequencyPenalty
  
  
  Declaration
  
    [JsonProperty(PropertyName = "frequencyPenalty")]
public double FrequencyPenalty { get; set; }
   
  Property Value

  | Type | Description |
  | --- | --- |
  | double | To reduce repetitiveness of generated tokens, this number penalizes new tokens based on their frequency in the generated text so far. Values greater than 0 encourage the model to use new tokens, and values less than 0 encourage it to repeat tokens. |

  IsEcho
  
  
  Declaration
  
    [JsonProperty(PropertyName = "isEcho")]
public bool? IsEcho { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | bool? | Whether or not to return the user prompt in the response. Applies only to non-stream results. |
  
  
  IsStream
  
  
  Declaration
  
    [JsonProperty(PropertyName = "isStream")]
public bool? IsStream { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | bool? | Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available. |
  
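  When IsStream is set, output arrives as data-only server-sent events. The sketch below shows what consuming such a stream looks like at the raw HTTP level; the endpoint URL is a placeholder and authentication is omitted, since in practice the SDK client manages the connection.

    using System;
    using System.IO;
    using System.Net.Http;

    // Placeholder endpoint; real calls go through the authenticated SDK client.
    using var http = new HttpClient();
    using var body = await http.GetStreamAsync("https://example.invalid/generateTextStream");
    using var reader = new StreamReader(body);

    string? line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        // Data-only SSE frames: each payload line starts with "data: ".
        if (line.StartsWith("data: "))
            Console.WriteLine(line.Substring("data: ".Length)); // one partial-progress chunk
    }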
  
  LogProbs
  
  
  Declaration
  
    [JsonProperty(PropertyName = "logProbs")]
public int? LogProbs { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | int? | Includes the log probabilities of the most likely output tokens and the chosen tokens. For example, if logProbs is 5, the API returns a list of the 5 most likely tokens. The API also returns the log probability of the sampled token, so there might be up to logProbs + 1 elements in the response. |
  
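  As a worked example of the sizing rule above, with illustrative values:

    var request = new LlamaLlmInferenceRequest
    {
        Prompt = "The capital of France is",
        LogProbs = 5,  // report the 5 most likely tokens at each position
        MaxTokens = 1
    };

    // The response also carries the log probability of the token that was
    // actually sampled, so each position may hold up to 5 + 1 = 6 entries.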
  
  MaxTokens
  
  
  Declaration
  
    [JsonProperty(PropertyName = "maxTokens")]
public int? MaxTokens { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | int? | The maximum number of tokens that can be generated per output sequence. The token count of the prompt plus maxTokens cannot exceed the model's context length. |
  
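  A sketch of budgeting output tokens against the context length. The 4096-token window and the whitespace split are stand-ins; real counts depend on the model's tokenizer.

    using System;

    const int contextLength = 4096;              // assumed context window
    string prompt = "Summarize the following report ...";
    int promptTokens = prompt.Split(' ').Length; // crude stand-in for a real tokenizer

    var request = new LlamaLlmInferenceRequest
    {
        Prompt = prompt,
        // Keep prompt tokens + MaxTokens within the model's context length.
        MaxTokens = Math.Min(500, contextLength - promptTokens)
    };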
  
  NumGenerations
  
  
  Declaration
  
    [JsonProperty(PropertyName = "numGenerations")]
public int? NumGenerations { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | int? | The number of generated texts that will be returned. |
  
  
  PresencePenalty
  
  
  Declaration
  
    [JsonProperty(PropertyName = "presencePenalty")]
public double PresencePenalty { get; set; }
   
  Property Value

  | Type | Description |
  | --- | --- |
  | double | To reduce repetitiveness of generated tokens, this number penalizes new tokens based on whether they have already appeared in the generated text. Values greater than 0 encourage the model to use new tokens, and values less than 0 encourage it to repeat tokens. |

  Prompt
  
  
  Declaration
  
    [JsonProperty(PropertyName = "prompt")]
public string Prompt { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | string | Represents the prompt to be completed. The trailing white spaces are trimmed before completion. |
  
  
  Stop
  
  
  Declaration
  
    [JsonProperty(PropertyName = "stop")]
public List<string> Stop { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | List<string> | List of strings that stop the generation if they are generated for the response text. The returned output will not contain the stop strings. |
  
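  For example, continuing the request object from the earlier sketches, generation can be stopped at a blank line or an arbitrary end marker:

    using System.Collections.Generic;

    // Generation halts as soon as either string would be produced; the
    // returned text does not include the matched stop string.
    request.Stop = new List<string> { "\n\n", "<END>" };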
  
  Temperature
  
  
  Declaration
  
    [JsonProperty(PropertyName = "temperature")]
public double Temperature { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | double | A number that sets the randomness of the generated output. A lower temperature means less random generations. Use lower numbers for tasks with a correct answer, such as question answering or summarizing. High temperatures can generate hallucinations or factually incorrect information. Start with temperatures lower than 1.0 and increase the temperature for more creative outputs, as you regenerate the prompts to refine the outputs. |
  
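  To see why lower temperatures make output less random, here is a self-contained sketch of temperature-scaled softmax over made-up token scores:

    using System;
    using System.Linq;

    double[] logits = { 2.0, 1.0, 0.5 };  // made-up raw token scores
    double temperature = 0.5;             // lower => sharper, less random

    double[] weights = logits.Select(l => Math.Exp(l / temperature)).ToArray();
    double total = weights.Sum();
    double[] probs = weights.Select(w => w / total).ToArray();

    // At 0.5 the top token dominates (~0.84); at 2.0 the distribution
    // flattens, which is what makes high-temperature output more random.
    Console.WriteLine(string.Join(", ", probs.Select(p => p.ToString("0.000"))));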
  
  TopK
  
  
  Declaration
  
    [JsonProperty(PropertyName = "topK")]
public int? TopK { get; set; }
   
  Property Value
  
    
      
  | Type | Description |
  | --- | --- |
  | int? | An integer that sets up the model to use only the top k most likely tokens in the generated output. A higher k introduces more randomness into the output, making the output text sound more natural. The default value is -1, which means to consider all tokens. Setting to 0 disables this method and considers all tokens. If also using top p, then the model considers only the top tokens whose probabilities add up to p percent and ignores the rest of the k tokens. For example, if k is 20 but the probabilities of the top 10 add up to .75, then only the top 10 tokens are chosen. |
  
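  The top k / top p interaction can be mirrored client-side for intuition. This sketch uses made-up, descending probabilities and is not the service's actual sampler:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Made-up probabilities of the most likely tokens, sorted descending.
    double[] probs = { 0.30, 0.15, 0.10, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03, 0.02 };
    int k = 8;        // top-k cutoff
    double p = 0.60;  // top-p probability mass

    // First restrict to the k most likely tokens...
    var candidates = probs.Take(k).ToList();

    // ...then keep only the leading tokens whose probabilities accumulate
    // to p; the rest of the k tokens are ignored.
    var kept = new List<double>();
    double mass = 0.0;
    foreach (double prob in candidates)
    {
        kept.Add(prob);
        mass += prob;
        if (mass >= p) break;
    }

    // Here 0.30 + 0.15 + 0.10 + 0.08 = 0.63 >= 0.60, so only 4 of the
    // 8 top-k candidates remain in play.
    Console.WriteLine($"Kept {kept.Count} of {candidates.Count} candidates (mass {mass:0.00})");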
  
  TopP
  
  
  Declaration
  
    [JsonProperty(PropertyName = "topP")]
public double TopP { get; set; }
   
  Property Value

  | Type | Description |
  | --- | --- |
  | double | If set to a probability 0.0 < p < 1.0, ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step. |