xAI Grok 3 Fast

The xai.grok-3-fast model excels at enterprise use cases such as data extraction, coding, and summarizing text. This model has deep domain knowledge in finance, healthcare, law, and science.

The xai.grok-3 and xai.grok-3-fast models both use the same underlying model and deliver identical response quality. The difference lies in how they're served: the xai.grok-3-fast model is served on faster infrastructure, offering significantly faster response times than the standard xai.grok-3 model. The increased speed comes at a higher cost per output token.

Because both names point to the same underlying model, select xai.grok-3-fast for latency-sensitive applications and xai.grok-3 for reduced cost.

Available in This Region

  • US Midwest (Chicago) (on-demand only)
Important

Cross-Region Calls

When a user submits an inference request to this model in Chicago, the Generative AI service in Chicago forwards the request to this model hosted in Salt Lake City and then returns the model's response to Chicago, where the request originated. See Pretrained Models with Cross-Region Calls.

Key Features

  • Model name in OCI Generative AI: xai.grok-3-fast
  • Available On-Demand: Access this model on-demand, through the Console playground or the API (see the example request after this list).
  • Text-Mode Only: Input text and get a text output. (No image support.)
  • Knowledge: Has deep domain knowledge in finance, healthcare, law, and science.
  • Context Length: 131,072 tokens (maximum prompt + response length is 131,072 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
  • Excels at These Use Cases: Data extraction, coding, and summarizing text
  • Function Calling: Yes, through the API.
  • Structured Outputs: Yes.
  • Has Reasoning: No.
  • Knowledge Cutoff: November 2024
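
The following is a minimal sketch of an on-demand API call, assuming the OCI Python SDK (oci) and its generative_ai_inference client. The class names, regional endpoint, and compartment OCID are assumptions or placeholders; verify them against your SDK version and tenancy before use.

```python
# Minimal on-demand chat request to xai.grok-3-fast with the OCI Python SDK.
# Assumptions: class names match your installed oci SDK version; the endpoint
# and compartment OCID below are placeholders that you must replace.
import oci

config = oci.config.from_file()  # reads credentials from ~/.oci/config

client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

models = oci.generative_ai_inference.models

chat_request = models.GenericChatRequest(
    api_format=models.BaseChatRequest.API_FORMAT_GENERIC,
    messages=[
        models.Message(
            role="USER",
            content=[models.TextContent(text="Summarize the key points of this contract: ...")],
        )
    ],
    max_tokens=600,
)

chat_details = models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..<placeholder>",
    serving_mode=models.OnDemandServingMode(model_id="xai.grok-3-fast"),
    chat_request=chat_request,
)

response = client.chat(chat_details)
# For the GENERIC API format, the generated text is nested under choices.
print(response.data.chat_response.choices[0].message.content[0].text)
```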

Release Date

  • Model: xai.grok-3-fast
  • Beta Release Date: 2025-05-22
  • General Availability Release Date: 2025-06-24
  • On-Demand Retirement Date: Tentative
  • Dedicated Mode Retirement Date: This model isn't available for the dedicated mode.
Important

For a list of all model time lines and retirement details, see Retiring the Models.

Model Parameters

To change the model's responses, you can adjust the values of the following parameters in the playground or the API.

Maximum output tokens

The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response length depends on the prompt, and each response doesn't necessarily use the full token allocation. The maximum prompt + output length is 131,072 tokens for each run. In the playground, the maximum output length is capped at 16,000 tokens for each run. A rough token-budget check is sketched below.
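
The following sketch applies the four-characters-per-token estimate against the 131,072-token combined limit before sending a request. It's an illustrative approximation, not an exact tokenizer; real token counts depend on the model's tokenization.

```python
# Rough token budgeting for xai.grok-3-fast.
# Assumption: ~4 characters per token, per the estimate above; actual counts
# depend on the tokenizer and can differ noticeably.
CONTEXT_LIMIT = 131_072  # combined prompt + response tokens per run

def estimate_tokens(text: str) -> int:
    """Approximate token count from character count (4 characters per token)."""
    return max(1, len(text) // 4)

prompt = "Summarize the following earnings call transcript: ..."
prompt_tokens = estimate_tokens(prompt)
output_budget = CONTEXT_LIMIT - prompt_tokens
print(f"~{prompt_tokens} prompt tokens; up to {output_budget} tokens remain for the response")
```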

Temperature

The level of randomness used to generate the output text. Min: 0, Max: 2

Tip

Start with the temperature set to 0 or a value less than 1, and increase the temperature as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.

Top p

A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1. For example, enter 0.75 to sample only from the tokens that make up the top 75 percent of cumulative probability. Set p to 1 to consider all tokens.

Frequency penalty

A penalty that's assigned to a token when that token appears frequently. High penalties encourage fewer repeated tokens and produce a more random output.

This penalty can be positive or negative. Positive numbers encourage the model to use new tokens and negative numbers encourage the model to repeat the tokens. Min: -2, Max: 2. Set to 0 to disable.

Presence penalty

A penalty that's assigned to a token each time it appears in the output, encouraging the model to generate outputs with tokens that haven't been used yet. Min: -2, Max: 2. Set to 0 to disable.
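
The parameters above map to fields on the chat request. The following sketch shows illustrative values on a GenericChatRequest, reusing the assumed OCI Python SDK classes from the earlier example; the values shown aren't recommendations.

```python
# Setting the generation parameters described above on a chat request.
# Assumptions: same OCI Python SDK classes as in the earlier sketch; the
# parameter values are illustrative only.
import oci

models = oci.generative_ai_inference.models

chat_request = models.GenericChatRequest(
    api_format=models.BaseChatRequest.API_FORMAT_GENERIC,
    messages=[
        models.Message(
            role="USER",
            content=[models.TextContent(text="Extract every invoice number from this text: ...")],
        )
    ],
    max_tokens=1024,        # maximum output tokens for this response
    temperature=0.2,        # low randomness suits extraction tasks (range 0-2)
    top_p=0.75,             # sample from the top 75 percent of cumulative probability
    frequency_penalty=0.0,  # 0 disables the penalty (range -2 to 2)
    presence_penalty=0.0,   # 0 disables the penalty (range -2 to 2)
)
```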