xAI Grok Code Fast 1
Released in late August 2025, xAI Grok Code Fast 1 is a coding-focused AI model that excels at common, high-volume coding tasks and is designed especially for agentic coding workflows. Fast, efficient, and low-cost, the model is built to handle the loop of modern software development (planning, writing, testing, and debugging), offers a real-time, summarized trace of its reasoning, and is proficient in TypeScript, Python, Java, Rust, C++, and Go. Use this model for building zero-to-one projects, answering codebase questions, performing bug fixes, and agentic coding.
Key Capabilities and Features
Available in These Regions
- US East (Ashburn) (on-demand only)
- US Midwest (Chicago) (on-demand only)
- US West (Phoenix) (on-demand only)
External Calls
The xAI Grok models are hosted in an OCI data center, in a tenancy provisioned for xAI. These models are accessed through the OCI Generative AI service and are managed by xAI.
Key Features
- Model Name in OCI Generative AI: xai.grok-code-fast-1
- Available On-Demand: Access this model on-demand, through the Console playground or the API (see the first sketch after this list).
- Text Mode Only: Enter text input and get text output. Images and file inputs such as audio, video, and document files aren't supported.
- Knowledge: Has deep domain knowledge in finance, healthcare, law, and science.
- Context Length: 256,000 tokens (the combined prompt and response length can be at most 256,000 tokens). In the playground, the response length is capped at 16,000 tokens for each run, but the context window remains 256,000 tokens.
- Excels at These Use Cases: Agentic coding. Unlike general models that are trained only to write code, this model is optimized for tool use. It's trained to autonomously use the terminal, for example, to run a grep command to find files, and to perform multi-step edits across a repository.
- Massive Throughput: At the time of its release, this model was one of the fastest in its class, delivering roughly 90–100 tokens per second. In many IDE integrations such as Cursor or GitHub Copilot, this model can perform dozens of tool calls and edits before you finish reading its initial plan.
- Summarized Thinking Traces: One of its standout features is visibility into its reasoning. As it works, the model provides a real-time, summarized trace of its thinking. You can see it reason through a bug before it starts writing the fix, which helps you catch logic errors early.
- Function Calling: Yes, through the API (see the tool-definition sketch after this list).
- Structured Outputs: Yes.
- Has Reasoning: Yes.
- Cached Input Tokens: Yes
  - Token count: See the cachedTokens attribute in the PromptTokensDetails API reference.
  - Pricing: See the Pricing Page.

  Important note: The cached-input feature is available in both the playground and the API. However, the cached token count can be retrieved only through the API.
- Knowledge Cutoff: No known cutoff date
- Low Cost: At the time of its release, it was cheaper than other flagship models.
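
To illustrate on-demand access through the API, here is a minimal sketch using the OCI Python SDK's Generative AI inference client. The service endpoint, compartment OCID, prompt, and parameter values are placeholders, and the exact model class names may vary by SDK version; check the SDK reference for your installed version.

```python
import oci

# Loads credentials from ~/.oci/config (assumption: a DEFAULT profile exists).
config = oci.config.from_file()

# The regional endpoint is an assumption; use the endpoint for your region,
# for example US East (Ashburn) below.
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-ashburn-1.oci.oraclecloud.com",
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="xai.grok-code-fast-1"
    ),
    chat_request=oci.generative_ai_inference.models.GenericChatRequest(
        messages=[
            oci.generative_ai_inference.models.UserMessage(
                content=[
                    oci.generative_ai_inference.models.TextContent(
                        text="Write a Go function that reverses a linked list."
                    )
                ]
            )
        ],
        max_tokens=4000,  # response cap for this run
        temperature=0,    # low randomness suits code generation
    ),
)

response = client.chat(chat_details)
print(response.data.chat_response.choices[0].message.content[0].text)
```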
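Because the model is optimized for tool use, a typical agentic setup hands it tool definitions that it can call autonomously. The sketch below assumes the OpenAI-compatible xAI API rather than the OCI endpoint; grep_repo is a hypothetical tool name, and XAI_API_KEY is an assumed environment variable.

```python
import os
from openai import OpenAI

# Assumption: an OpenAI-compatible client pointed at the xAI API.
client = OpenAI(base_url="https://api.x.ai/v1", api_key=os.environ["XAI_API_KEY"])

# grep_repo is a hypothetical tool: the model decides when to call it,
# mirroring the "run a grep command to find files" workflow described above.
tools = [{
    "type": "function",
    "function": {
        "name": "grep_repo",
        "description": "Search the repository for a regex and return matching file paths.",
        "parameters": {
            "type": "object",
            "properties": {
                "pattern": {"type": "string", "description": "Regex to search for."},
                "path": {"type": "string", "description": "Directory to search in."},
            },
            "required": ["pattern"],
        },
    },
}]

response = client.chat.completions.create(
    model="grok-code-fast-1",
    messages=[{"role": "user", "content": "Where is the retry logic implemented?"}],
    tools=tools,
)

# If the model chose to call a tool, its name and arguments arrive here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```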
On-Demand Mode
You can reach the pretrained foundational models in Generative AI through two modes: on-demand and dedicated.
The Grok models are available only in the on-demand mode.
Here are key features for the on-demand mode:
- You pay as you go for each inference call when you use the models in the playground or when you call the models through the API.
- Low barrier to start using Generative AI.
- Great for experimentation, proof of concept, and model evaluation.
- Available for the pretrained models in regions not listed as (dedicated AI cluster only).
| Model Name | OCI Model Name | Pricing Page Product Name |
|---|---|---|
| xAI Grok Code Fast 1 | xai.grok-code-fast-1 | xAI – Grok-Code-Fast-1 |
Release Date
| Model | General Availability Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date |
|---|---|---|---|
| xai.grok-code-fast-1 | 2025-10-01 | Tentative | This model isn't available for the dedicated mode. |
Model Parameters
To change the model responses, you can change the values of the following parameters in the playground or the API.
- Maximum output tokens

  The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use up the maximum allocated tokens. The maximum prompt + output length is 256,000 tokens for each run.

  Tip: For large inputs with difficult problems, set a high value for the maximum output tokens parameter (see the budgeting sketch after this list).

- Temperature

  The level of randomness used to generate the output text. Min: 0, Max: 2.

  Tip: Start with the temperature set to 0 or a low value, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.

- Top p

  A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
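
As a rough illustration of the four-characters-per-token estimate, the sketch below budgets the maximum output tokens for a large prompt. The 16,000-token cap mirrors the playground's per-run response limit, and the input file name is a placeholder.

```python
CONTEXT_LIMIT = 256_000           # maximum prompt + output tokens per run
PLAYGROUND_RESPONSE_CAP = 16_000  # playground per-run response cap

def estimate_tokens(text: str) -> int:
    # Heuristic from this page: roughly four characters per token.
    return max(1, len(text) // 4)

with open("big_module.py") as f:  # placeholder input file
    prompt = f.read()

prompt_tokens = estimate_tokens(prompt)
# Leave the rest of the context window for the response.
max_output = min(PLAYGROUND_RESPONSE_CAP, CONTEXT_LIMIT - prompt_tokens)
print(f"~{prompt_tokens} prompt tokens; set maximum output tokens to {max_output}")
```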
API Parameter for Summarized Thinking Traces
- reasoning_content

  To use summarized thinking traces in the xAI API, you primarily interact with the reasoning_content field. Unlike the final answer, this field contains the model's internal logic and is streamed back to you in real time. In streaming mode, you can read the thinking trace through chunk.choices[0].delta.reasoning_content (see the sketch below). See For developers building coding agents via the xAI API.
Summarized thinking traces are available only when you use the streaming mode.
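
Here is a minimal streaming sketch that separates the summarized thinking trace from the final answer, again assuming the OpenAI-compatible xAI API, an XAI_API_KEY environment variable, and a placeholder prompt.

```python
import os
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key=os.environ["XAI_API_KEY"])

stream = client.chat.completions.create(
    model="grok-code-fast-1",
    messages=[{"role": "user", "content": "Why might this binary search loop forever?"}],
    stream=True,  # summarized thinking traces require streaming mode
)

for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    # The summarized reasoning arrives on reasoning_content; the answer on content.
    reasoning = getattr(delta, "reasoning_content", None)
    if reasoning:
        print(f"[thinking] {reasoning}", end="", flush=True)
    if delta.content:
        print(delta.content, end="", flush=True)
```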