xAI Grok Code Fast 1 (Beta)
Pre-General Availability: 2025-09-12
This documentation is in pre-General Availability status and is intended for demonstration and preliminary use only. It may not be specific to the hardware on which you are using the software. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to this documentation and will not be responsible for any loss, costs, or damages incurred due to the use of this documentation.
This documentation is not a commitment by Oracle to deliver any material, code, functionality or services. This documentation, and Oracle Pre-GA programs and services are subject to change at any time without notice and, accordingly, should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality for Oracle’s Pre-GA programs and services remains at the sole discretion of Oracle. All release dates or other predictions of future events are subject to change. The future availability of any future Oracle program or service should not be relied on in entering into any license or service agreement with Oracle.
See Oracle Legal Notices.
The xAI Grok Code Fast 1 model is a coding-focused AI model that excels at common, high-volume coding tasks such as debugging and editing, and is designed specifically for agentic coding workflows. With its speed, efficiency, and low cost, this model is versatile across the software development stack and proficient in TypeScript, Python, Java, Rust, C++, and Go. Use this model for building zero-to-one projects, answering codebase questions, performing bug fixes, and agentic coding.
Available in These Regions
- US East (Ashburn) (on-demand only)
- US Midwest (Chicago) (on-demand only)
- US West (Phoenix) (on-demand only)
External Calls
The xAI Grok models are hosted in an OCI data center, in a tenancy provisioned for xAI. These models, which you access through the OCI Generative AI service, are managed by xAI.
Key Features
- Model Name in OCI Generative AI: xai.grok-code-fast-1
- Available On-Demand: Access this model on demand through the Console playground or the API (see the API sketch after this list).
- Text Mode Only: Enter text input and get text output. Images and file inputs such as audio, video, and document files aren't supported.
- Knowledge: Has deep domain knowledge in finance, healthcare, law, and science.
- Context Length: 256,000 tokens (maximum prompt + response length is 256,000 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
- Excels at These Use Cases: Agentic coding
- Function Calling: Yes, through the API.
- Structured Outputs: Yes.
- Has Reasoning: No.
- Knowledge Cutoff: No known cutoff date
For key feature details, see the Grok Code Fast 1 documentation and model card.
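As a concrete illustration of on-demand API access, here is a minimal Python sketch of a chat call to this model through the OCI Python SDK (oci.generative_ai_inference). Treat it as a sketch under assumptions, not a definitive implementation: the service endpoint and compartment OCID are placeholders, and the model class names (ChatDetails, OnDemandServingMode, GenericChatRequest, UserMessage, TextContent) should be verified against the SDK reference for your SDK version.

```python
# Minimal sketch: on-demand chat call to xai.grok-code-fast-1 through the
# OCI Python SDK. The endpoint region and compartment OCID are placeholders;
# substitute your own values.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default

client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    # Assumed endpoint for US Midwest (Chicago); use one of the listed regions.
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="xai.grok-code-fast-1"
    ),
    chat_request=oci.generative_ai_inference.models.GenericChatRequest(
        messages=[
            oci.generative_ai_inference.models.UserMessage(
                content=[
                    oci.generative_ai_inference.models.TextContent(
                        text="Why does this Python function raise "
                             "UnboundLocalError?\ndef f():\n    x += 1\n    return x"
                    )
                ]
            )
        ],
        max_tokens=600,
    ),
)

response = client.chat(chat_details)
# Response shape assumed from the SDK's generic chat models; verify in the
# SDK reference for your version.
print(response.data.chat_response.choices[0].message.content[0].text)
```

Function calling and structured outputs are likewise requested through additional fields on the chat request; check the SDK reference for your version for the exact tool-definition and response-format models.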
On-Demand Mode
You can reach the pretrained foundational models in Generative AI through two modes: on-demand and dedicated.
The Grok models are available only in the on-demand mode.
Here are key features for the on-demand mode:
- You pay as you go for each inference call when you use the models in the playground or when you call the models through the API.
- Low barrier to start using Generative AI.
- Great for experimentation, proof of concept, and model evaluation.
- Available for the pretrained models in regions not listed as (dedicated AI cluster only).
Model Name | OCI Model Name | Getting Access
---|---|---
xAI Grok Code Fast 1 (Beta) | xai.grok-code-fast-1 | Contact Oracle Beta Programs
Release Date
Model | General Availability Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date
---|---|---|---
xai.grok-code-fast-1 | 2025-09-12 | Tentative | This model isn't available for the dedicated mode.
Model Parameters
To tune the model responses, you can change the values of the following parameters in the playground or the API.
- Maximum output tokens: The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token; for example, a 2,000-character prompt is roughly 500 tokens. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use up the maximum allocated tokens. The maximum prompt + output length is 256,000 tokens for each run.
Tip: For large inputs with difficult problems, set a high value for the maximum output tokens parameter.
- Temperature: The level of randomness used to generate the output text. Min: 0, Max: 2.
Tip: Start with the temperature set to 0 or a value less than 1, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
- Top p: A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
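To show how these parameters map onto an API call, here is a hedged sketch of a chat request with all three set. The field names (max_tokens, temperature, top_p) are assumed from the OCI Python SDK's GenericChatRequest model; verify them against the SDK reference for your version.

```python
# Sketch: setting the model parameters described above on a generic chat
# request. Pass this request in ChatDetails to client.chat() as in the
# earlier on-demand example; field names are assumed from the OCI SDK models.
import oci

models = oci.generative_ai_inference.models

chat_request = models.GenericChatRequest(
    messages=[
        models.UserMessage(
            content=[models.TextContent(text="Rewrite this loop as a list comprehension: ...")]
        )
    ],
    max_tokens=4000,   # cap on generated tokens; prompt + output <= 256,000
    temperature=0.2,   # low randomness; raise toward 2 for more creative output
    top_p=0.75,        # sample only from the top 75% of cumulative probability
)
```

With temperature at 0, sampling is effectively deterministic; setting top_p to 1 widens the candidate pool to all tokens, matching the playground behavior described above.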