xAI Grok 3 Mini Fast (Beta)
Pre-General Availability: 2025-05-22
This documentation is in pre-General Availability status and is intended for demonstration and preliminary use only. It may not be specific to the hardware on which you are using the software. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to this documentation and will not be responsible for any loss, costs, or damages incurred due to the use of this documentation.
This documentation is not a commitment by Oracle to deliver any material, code, functionality or services. This documentation, and Oracle Pre-GA programs and services are subject to change at any time without notice and, accordingly, should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality for Oracle’s Pre-GA programs and services remains at the sole discretion of Oracle. All release dates or other predictions of future events are subject to change. The future availability of any future Oracle program or service should not be relied on in entering into any license or service agreement with Oracle.
The `xai.grok-3-mini-fast` model is a lightweight model that thinks before responding: fast, smart, and great for logic-based tasks that don't require deep domain knowledge. The raw thinking traces are accessible.
The `xai.grok-3-mini` and `xai.grok-3-mini-fast` models both use the same underlying model and deliver identical response quality. The difference lies in how they're served: the `xai.grok-3-mini-fast` model is served on faster infrastructure, offering response times that are significantly faster than the standard `xai.grok-3-mini` model. The increased speed comes at a higher cost per output token.
Select `xai.grok-3-mini-fast` for latency-sensitive applications, and select `xai.grok-3-mini` for reduced cost.
Available in This Region
- US Midwest (Chicago) (on-demand only)
Key Features
- Model name in OCI Generative AI: `xai.grok-3-mini-fast`
- Available On-Demand: Access this model on-demand through the Console playground or the API (see the request sketch after this list).
- Text-Mode Only: Input text and get a text output. (No image support.)
- Fast: Great for logic-based tasks that don't require deep domain knowledge.
- Context Length: 131,072 tokens (maximum prompt + response length is 131,072 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
- Function Calling: Yes, through the API.
- Structured Outputs: Yes.
- Has Reasoning: Yes. See the `reasoning_effort` parameter in the Model Parameters section.
- Knowledge Cutoff: November 2024
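For API access, the following is a minimal sketch of an on-demand chat request using the OCI Python SDK. It assumes the Chicago regional endpoint (the only region listed above) and uses a placeholder compartment OCID; adjust both for your tenancy.

```python
import oci

# Load credentials from the default OCI config file (~/.oci/config).
config = oci.config.from_file()

# Inference endpoint for US Midwest (Chicago), where this model is offered.
endpoint = "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config, service_endpoint=endpoint
)

models = oci.generative_ai_inference.models

# Build a single user message in the generic (text) chat format.
content = models.TextContent()
content.text = "In one sentence, what is a mutex?"

message = models.Message()
message.role = "USER"
message.content = [content]

chat_request = models.GenericChatRequest()
chat_request.api_format = models.BaseChatRequest.API_FORMAT_GENERIC
chat_request.messages = [message]
chat_request.max_tokens = 600  # the playground caps responses at 16,000 tokens; the API allows more

chat_details = models.ChatDetails()
chat_details.serving_mode = models.OnDemandServingMode(model_id="xai.grok-3-mini-fast")
chat_details.chat_request = chat_request
chat_details.compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder OCID

response = client.chat(chat_details)
print(response.data.chat_response.choices[0].message.content[0].text)
```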
Release Date
| Model | Beta Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date |
| --- | --- | --- | --- |
| xai.grok-3-mini-fast (Beta) | 2025-05-22 | Tentative | This model isn't available for the dedicated mode. |
Model Parameters
To change the model responses, you can change the values of the following parameters in the playground or the API.
- Maximum output tokens: The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use the maximum allocated tokens. The maximum prompt + output length is 131,072 tokens for each run. In the playground, the maximum output tokens value is capped at 16,000 tokens for each run.
- Temperature: The level of randomness used to generate the output text. Min: 0, Max: 2.
  Tip: Start with the temperature set to 0 or a value less than one, and increase the temperature as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
- Top p: A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Min: 0, Max: 1. Assign `p` a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set `p` to 1 to consider all tokens.
- Reasoning Effort: The `reasoning_effort` parameter, available through the API and not the Console, controls how much time the model spends thinking before responding. You must set it to one of these values:
  - `low`: Minimal thinking time, using fewer tokens for quick responses.
  - `high`: Maximum thinking time, using more tokens for complex problems.
  Choosing the correct level depends on your task: use `low` for simple queries that complete quickly, and `high` for harder problems where response latency is less important. Learn about this parameter in the xAI guides. A parameter sketch follows this list.
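As a sketch of setting these parameters through the API, the snippet below extends the request from the earlier example. Whether your installed OCI Python SDK version exposes a `reasoning_effort` attribute on `GenericChatRequest` is an assumption based on this page; verify the exact attribute name and accepted values against the SDK reference and the xAI guides.

```python
import oci

models = oci.generative_ai_inference.models

# A logic-style prompt that benefits from thinking time.
content = models.TextContent()
content.text = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
message = models.Message()
message.role = "USER"
message.content = [content]

chat_request = models.GenericChatRequest()
chat_request.api_format = models.BaseChatRequest.API_FORMAT_GENERIC
chat_request.messages = [message]

chat_request.max_tokens = 1000  # roughly 4 characters per token; prompt + output must fit in 131,072 tokens
chat_request.temperature = 0    # low randomness to start; the allowed range is 0 to 2
chat_request.top_p = 1          # 1 considers all tokens; 0.75 would keep only the top 75 percent

# Assumption: attribute name and values ("low"/"high") follow this page;
# confirm that your SDK version supports this field before relying on it.
chat_request.reasoning_effort = "high"
```

Send the request with the same client and `ChatDetails` scaffolding as in the earlier sketch. Choose `low` instead when latency matters more than reasoning depth.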