xAI Grok 3 Mini Fast (Beta)

Important

Pre-General Availability: 2025-05-22

The xai.grok-3-mini-fast model is a lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that don't require deep domain knowledge. The raw thinking traces are accessible.

The xai.grok-3-mini and xai.grok-3-mini-fast models both use the same underlying model and deliver identical response quality. The difference lies in how they're served: the xai.grok-3-mini-fast model is served on faster infrastructure, offering response times that are significantly faster than the standard xai.grok-3-mini model. The increased speed comes at a higher cost per output token.

Select xai.grok-3-mini-fast for latency-sensitive applications, and select xai.grok-3-mini for reduced cost.

Available in This Region

  • US Midwest (Chicago) (on-demand only)

Key Features

  • Model name in OCI Generative AI: xai.grok-3-mini-fast
  • Available On-Demand: Access this model on-demand, through the Console playground or the API (see the sketch after this list).
  • Text-Mode Only: Input text and get a text output. (No image support.)
  • Fast: Great for logic-based tasks that don't require deep domain knowledge.
  • Context Length: 131,072 tokens (maximum prompt + response length is 131,072 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
  • Function Calling: Yes, through the API.
  • Structured Outputs: Yes.
  • Has Reasoning: Yes. See the reasoning_effort parameter in the Model Parameters section.
  • Knowledge Cutoff: November 2024
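
The following is a minimal sketch of an on-demand API call using the OCI Python SDK (the oci package). The Chicago inference endpoint, the compartment OCID placeholder, and the prompt text are assumptions for illustration; verify the model class and field names against your SDK version and the OCI Generative AI API reference.

    import oci

    # Minimal on-demand chat call sketch for xai.grok-3-mini-fast.
    # Assumes a configured ~/.oci/config profile; the endpoint below is the
    # assumed US Midwest (Chicago) inference endpoint.
    config = oci.config.from_file()
    client = oci.generative_ai_inference.GenerativeAiInferenceClient(
        config,
        service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    )

    models = oci.generative_ai_inference.models
    chat_details = models.ChatDetails(
        compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
        serving_mode=models.OnDemandServingMode(model_id="xai.grok-3-mini-fast"),
        chat_request=models.GenericChatRequest(
            messages=[
                models.UserMessage(
                    content=[models.TextContent(text="Is 97 a prime number?")]
                )
            ],
            max_tokens=400,
        ),
    )

    response = client.chat(chat_details)
    print(response.data)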

Release Date

Model                         Beta Release Date   On-Demand Retirement Date   Dedicated Mode Retirement Date
xai.grok-3-mini-fast (Beta)   2025-05-22          Tentative                   This model isn't available for the dedicated mode.
Important

For a list of all model timelines and retirement details, see Retiring the Models.

Model Parameters

To adjust the model's responses, you can change the values of the following parameters in the playground or the API.

Maximum output tokens

The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use the maximum allocated tokens. The maximum prompt + output length is 131,072 tokens for each run. In the playground, the output length is capped at 16,000 tokens for each run.
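
As a rough illustration of the four-characters-per-token estimate, a sketch like the following can ballpark how much of the 131,072-token budget a prompt consumes; actual tokenizer counts will differ.

    def estimate_tokens(text: str) -> int:
        # Ballpark only: the ~4 characters/token heuristic described above.
        # Actual tokenizer counts vary by content and language.
        return max(1, len(text) // 4)

    prompt = "Explain the difference between latency and throughput."
    print(estimate_tokens(prompt))  # ~13 tokens by this heuristic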

Temperature

The level of randomness used to generate the output text. Min: 0, Max: 2.

Tip

Start with the temperature set to 0 or a value less than one, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.

Top p

A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Min: 0, Max: 1.

Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
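
The following is a hedged sketch of how temperature and top p might be set together on the same chat request shown in the Key Features section; the field names follow the OCI Python SDK and should be verified against your SDK version.

    import oci

    models = oci.generative_ai_inference.models

    # Sampling configuration only; see the Key Features sketch for the full call.
    chat_request = models.GenericChatRequest(
        messages=[
            models.UserMessage(
                content=[models.TextContent(text="Name three sorting algorithms.")]
            )
        ],
        max_tokens=600,   # maximum output tokens for this run
        temperature=0.2,  # low randomness; raise toward 2 for more creative output
        top_p=0.75,       # consider tokens within the top 75 percent cumulative probability
    )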

Reasoning Effort

The reasoning_effort parameter, available through the API and not the Console, controls how much time the model spends thinking before responding. You must set it to one of these values:

  • low: Minimal thinking time, using fewer tokens for quick responses.
  • high: Maximum thinking time, leveraging more tokens for complex problems.

Choosing the correct level depends on your task: use low for simple queries that complete quickly, and high for harder problems where response latency is less important. Learn about this parameter in the xAI guides.
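
Because reasoning_effort is available only through the API, a request-body sketch is the clearest illustration. The JSON below follows the general shape of the OCI Generative AI chat API; the reasoningEffort field name and its casing are assumptions based on this page's parameter name, so verify them against the current API reference.

    import json

    # Hedged request-body sketch for a reasoning call; field casing is assumed.
    payload = {
        "compartmentId": "ocid1.compartment.oc1..example",  # placeholder OCID
        "servingMode": {"servingType": "ON_DEMAND", "modelId": "xai.grok-3-mini-fast"},
        "chatRequest": {
            "apiFormat": "GENERIC",
            "messages": [
                {"role": "USER", "content": [{"type": "TEXT", "text": "Solve: 17 * 24"}]}
            ],
            "maxTokens": 400,
            "reasoningEffort": "low",  # "low" for quick answers, "high" for hard problems
        },
    }
    print(json.dumps(payload, indent=2))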