xAI Grok 4.20

The xAI Grok 4.20 model offers reasoning and non-reasoning variants with industry-leading speed and agentic tool-calling support. It's designed to reduce hallucinations and follow prompts closely, producing more reliable and precise responses.

Learn about Grok 4.20

Regions for this Model

Important

For supported regions, endpoint types (on-demand or dedicated AI clusters), and hosting (OCI Generative AI or external calls) for this model, see the Models by Region page. For details about the regions, see the Generative AI Regions page.

Overview

The xAI Grok 4.20 model is offered as two separate models: a reasoning model and a non-reasoning model. Use the following table to decide which model to select.

Mode | Model Name | When to Use
Reasoning | xai.grok-4.20-0309-reasoning | Complex logic and math, scientific/technical analysis, multi-step investigations, or higher-stakes tasks where accuracy matters more than lowest latency.
Non-Reasoning | xai.grok-4.20-0309-non-reasoning | Routine Q&A, general information retrieval, and high-throughput scenarios where response speed is the priority.

Key Features

  • Model names in OCI Generative AI:
    Reasoning
    • xai.grok-4.20-0309-reasoning
    • xai.grok-4.20-reasoning (an alias that points to xai.grok-4.20-0309-reasoning)
    Non-Reasoning
    • xai.grok-4.20-0309-non-reasoning
    • xai.grok-4.20-non-reasoning (an alias that points to xai.grok-4.20-0309-non-reasoning)
  • Available On-Demand: Access this model on-demand through the Console playground or the API.
  • Multimodal support: Input text and images and get a text output.
  • Context Length: 2 million tokens (the combined prompt and response can total up to 2 million tokens). In the playground, the response length is capped at 131,000 tokens for each run, but the full 2 million token context window still applies.
  • Modes: Operates in two modes: "reasoning" for complex tasks and "non-reasoning" for speed-critical, straightforward requests.
  • Function Calling: Yes, through the API.
  • Structured Outputs: Yes.
  • Cached Input Tokens: Yes

    Important note: The cached-input feature is available in both the playground and the API, but cached-token counts can be retrieved only through the API.

  • Knowledge Cutoff: Not available
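The two variants share one request shape; only the model name changes. The following sketch shows how a client might select a variant per task. The payload fields (`model`, `messages`, `max_tokens`) are assumptions modeled on common chat-completion schemas, not the exact OCI Generative AI API.

```python
# Illustrative sketch of choosing between the reasoning and non-reasoning
# Grok 4.20 variants. The request body below is a hypothetical, generic
# chat-style payload, not the exact OCI Generative AI schema.

REASONING_MODEL = "xai.grok-4.20-0309-reasoning"
NON_REASONING_MODEL = "xai.grok-4.20-0309-non-reasoning"

def build_chat_request(prompt: str, complex_task: bool, max_tokens: int = 600) -> dict:
    """Pick the variant by task type and assemble an example request body."""
    model = REASONING_MODEL if complex_task else NON_REASONING_MODEL
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # the playground default is 600
    }
```

For a multi-step math proof you would pass `complex_task=True` to get the reasoning model; for routine Q&A, `complex_task=False` selects the faster non-reasoning model.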

Limits

Tokens per minute (TPM)
For the TPM limit increase, use the following limit names:
  • For the reasoning model: grok-4-2-reasoning-tokens-per-minute-count (for 200,000 tokens)
  • For the non-reasoning model: grok-4-2-non-reasoning-tokens-per-minute-count (for 200,000 tokens)

See Requesting a Service Limit Increase.

Image Inputs
  • Console: Upload one or more .png or .jpg images, each 5 MB or smaller.
  • API: Only JPG/JPEG and PNG file formats are supported. Submit a base64-encoded version of each image, ensuring that each converted image is between 256 and 1,792 tokens. For example, a 512 x 512 image typically converts to around 1,610 tokens. There's no stated maximum number of images that can be uploaded, but the combined token count for text and images must fit within the model's overall context window of 2 million tokens.
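The encoding step above can be sketched as follows. The format and 5 MB checks mirror the limits listed above; the `data:image/...` URL prefix is an assumption common to chat-style image APIs, not a documented OCI format.

```python
import base64

MAX_CONSOLE_IMAGE_BYTES = 5 * 1024 * 1024  # 5 MB per-image Console limit

def encode_image(image_bytes: bytes, fmt: str) -> str:
    """Base64-encode a PNG/JPEG image for an API request, enforcing the
    documented format restriction and the Console size limit. The data-URL
    prefix is an illustrative assumption, not a documented OCI format."""
    if fmt.lower() not in {"png", "jpg", "jpeg"}:
        raise ValueError("Only PNG and JPG/JPEG formats are supported.")
    if len(image_bytes) > MAX_CONSOLE_IMAGE_BYTES:
        raise ValueError("Image exceeds the 5 MB Console limit.")
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:image/{fmt.lower()};base64,{b64}"
```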

On-Demand Mode

Note

The Grok models are available only in the on-demand mode.
Model Name: xAI Grok 4.20
OCI Model Names:
  • xai.grok-4.20-0309-reasoning
  • xai.grok-4.20-0309-non-reasoning

Model Parameters

To tune the model responses, change the values of the following parameters in the playground or the API.

Maximum output tokens

The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt and each response doesn't necessarily use up the maximum allocated tokens.

Tip

For large inputs with difficult problems, set a high value for the maximum output tokens parameter. See Troubleshooting.
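The four-characters-per-token rule of thumb above makes it easy to sanity-check whether a prompt or expected response fits a token budget. A minimal estimator, assuming only that heuristic:

```python
import math

def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token rule of
    thumb from the docs. A real tokenizer's counts will differ."""
    return math.ceil(len(text) / 4)
```

For example, a 2,400-character prompt estimates to about 600 tokens, exactly the playground's default maximum output, which is one reason long or complex tasks need a higher setting.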
Temperature

The level of randomness used to generate the output text. Min: 0, Max: 2
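To illustrate what this parameter controls, here is a minimal temperature-scaled sampling sketch (a conceptual illustration, not the model's actual decoder): temperature 0 is treated as greedy argmax, and higher values flatten the distribution, increasing randomness.

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float,
                            rng: random.Random) -> str:
    """Sample a token from temperature-scaled logits. Temperature 0 is
    treated as greedy (argmax); values toward 2 flatten the distribution,
    matching the 0-2 range described above. Illustrative sketch only."""
    if temperature == 0:
        return max(logits, key=logits.get)
    scaled = {tok, } if False else {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding
```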

Top p

A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0.05 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
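The cumulative-probability rule above (often called nucleus sampling) can be sketched as a filter over a token distribution; this illustrates the concept, not the service's implementation:

```python
def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches p, then renormalize. Setting p to 1 keeps every
    token, as described above. Conceptual sketch only."""
    kept, cumulative = {}, 0.0
    for tok, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {tok: prob / total for tok, prob in kept.items()}
```

With `probs = {"a": 0.5, "b": 0.3, "c": 0.2}` and `p = 0.75`, only "a" and "b" survive and their probabilities are renormalized to sum to 1.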

Troubleshooting

Issue: The Grok 4.20 model doesn't respond.

Cause: The Maximum output tokens parameter in the playground or the max_tokens parameter in the API is likely set too low. For example, this parameter defaults to 600 tokens in the playground, which might be too low for complex tasks.

Action: Increase the maximum output tokens parameter.