OpenAI gpt-oss-120b (Beta)

Important

See Oracle Legal Notices.

The openai.gpt-oss-120b model is an open-weight, text-only language model designed for powerful reasoning and agentic tasks.

Available in These Regions

  • Germany Central (Frankfurt) (on-demand only)
  • Japan Central (Osaka) (on-demand only)
  • US Midwest (Chicago) (on-demand only)

Key Features

  • Model Name in OCI Generative AI: openai.gpt-oss-120b
  • Model Size: 117 billion parameters
  • Available On-Demand: Access this model on-demand through the Console playground or the API.
  • Text Mode Only: Input text and get a text output. Images and file inputs such as audio, video, and document files aren't supported.
  • Knowledge: Specialized in advanced reasoning and text-based tasks across a wide range of subjects.
  • Context Length: 128,000 tokens (maximum prompt + response length is 128,000 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
  • Excels at These Use Cases: Because of its training data, this model is especially strong in STEM (science, technology, engineering, and mathematics), coding, and general knowledge. Suitable for high-reasoning, production-level tasks.
  • Function Calling: Yes, through the API.
  • Has Reasoning: Yes.
  • Knowledge Cutoff: June 2024

For key feature details, see the OpenAI gpt-oss documentation.

On-Demand Mode

You can reach the pretrained foundational models in Generative AI through two modes: on-demand and dedicated. Here are key features for the on-demand mode:

  • You pay as you go for each inference call when you use the models in the playground or when you call the models through the API (see the call sketch after this list).
  • Low barrier to start using Generative AI.
  • Great for experimenting, proofs of concept, and evaluating the models.
  • Available for the pretrained models in regions not listed as (dedicated AI cluster only).
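For illustration, here's a minimal on-demand chat call sketch using the OCI Python SDK (the oci package). The endpoint URL and compartment OCID are placeholders, and the generic chat request and response shapes assume a recent SDK version; treat this as a starting point rather than the only way to call the model.

    import oci

    # Placeholders: replace with your region's endpoint and your compartment OCID.
    ENDPOINT = "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"
    COMPARTMENT_ID = "ocid1.compartment.oc1..example"

    config = oci.config.from_file()  # reads ~/.oci/config, DEFAULT profile
    client = oci.generative_ai_inference.GenerativeAiInferenceClient(
        config, service_endpoint=ENDPOINT)

    # Build a single user message in the generic (OpenAI-style) chat format.
    content = oci.generative_ai_inference.models.TextContent()
    content.text = "Summarize the key trade-offs between BFS and DFS."
    message = oci.generative_ai_inference.models.Message()
    message.role = "USER"
    message.content = [content]

    chat_request = oci.generative_ai_inference.models.GenericChatRequest()
    chat_request.api_format = (
        oci.generative_ai_inference.models.BaseChatRequest.API_FORMAT_GENERIC)
    chat_request.messages = [message]
    chat_request.max_tokens = 1000   # maximum output tokens for this call
    chat_request.temperature = 1.0   # default randomness
    chat_request.top_p = 1.0         # consider all tokens

    chat_details = oci.generative_ai_inference.models.ChatDetails()
    chat_details.serving_mode = (
        oci.generative_ai_inference.models.OnDemandServingMode(
            model_id="openai.gpt-oss-120b"))
    chat_details.compartment_id = COMPARTMENT_ID
    chat_details.chat_request = chat_request

    response = client.chat(chat_details)
    print(response.data.chat_response.choices[0].message.content[0].text)

The max_tokens, temperature, and top_p fields correspond to the model parameters described later in this topic.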
Important

Dynamic Throttling Limit Adjustment for On-Demand Mode

For optimized allocation of resources to tenants and to ensure that tenants receive fair access to the models, OCI Generative AI regularly adjusts the request throttling limit for each active tenancy based on model demand and system capacity. This adjustment depends on the following factors:

  • The current maximum throughput supported by the target model.
  • Any unused system capacity at the time of adjustment.
  • Each tenancy's historical throughput usage and any specified override limits set for that tenancy.
Tip

Because of the dynamic throttling limit adjustment, we recommend implementing a back-off strategy, which involves delaying requests after a rejection. Without one, repeated rapid requests can lead to further rejections over time, increased latency, and potentially a temporary block of the client by the Generative AI service. A back-off strategy, such as exponential back-off, distributes requests more evenly, reduces load, and improves retry success, following industry best practices and enhancing the overall stability and performance of your integration with the service.
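As a concrete sketch of that recommendation, the following Python helper retries a throttled call with exponential back-off and full jitter. The send_request callable is a hypothetical stand-in for one inference call; the only assumption about failures is that throttling rejections carry an HTTP 429 status (the OCI SDK's ServiceError exposes this as a status attribute).

    import random
    import time

    def call_with_backoff(send_request, max_retries=6, base_delay=1.0, max_delay=60.0):
        """Retry a throttled call with exponential back-off and full jitter.

        send_request is a hypothetical zero-argument callable that performs one
        inference request and raises an exception carrying a `status` attribute
        on failure; throttling rejections are HTTP 429.
        """
        for attempt in range(max_retries):
            try:
                return send_request()
            except Exception as err:
                status = getattr(err, "status", None)
                if status != 429 or attempt == max_retries - 1:
                    raise  # not a throttle rejection, or out of retries
                # Full jitter: sleep a random amount up to the exponential cap.
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(random.uniform(0, delay))

The OCI Python SDK also ships configurable retry strategies in its oci.retry module that implement this pattern for you.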

Note

The OpenAI gpt-oss-120b (Beta) model is available only in the on-demand mode.
Model Name                    OCI Model Name         Getting Access
OpenAI gpt-oss-120b (Beta)    openai.gpt-oss-120b    Contact Oracle Beta Programs

Release Date

Model                 Beta Release Date    On-Demand Retirement Date    Dedicated Mode Retirement Date
openai.gpt-oss-120b   2025-09-09           Tentative                    This model isn't available for the dedicated mode.
Important

To learn about OCI Generative AI model deprecation and retirement, see Retiring the Models.

Model Parameters

To change the model responses, you can change the values of the following parameters in the playground or the API.

Maximum output tokens

The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use up the maximum allocated tokens. The maximum prompt + output length is 128,000 tokens for each run. In the playground, the maximum output tokens value is capped at 16,000 for each run.

Tip

For large inputs with difficult problems, set a high value for the maximum output tokens parameter.
Temperature

The level of randomness used to generate the output text. Min: 0, Max: 2, Default: 1

Tip

Start with the temperature set to 0 or a value less than 1, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
Top p

A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1. For example, enter 0.75 to consider only the most likely tokens whose probabilities add up to 75 percent, dropping the rest. Set p to 1 to consider all tokens. Default: 1
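To make the cutoff concrete, here's a toy nucleus-sampling sketch in Python. It illustrates the general top-p technique over a made-up distribution, not the service's internal implementation.

    import random

    def top_p_filter(token_probs, p):
        """Keep the most-probable tokens whose cumulative probability reaches p."""
        ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
        kept, cumulative = [], 0.0
        for token, prob in ranked:
            kept.append((token, prob))
            cumulative += prob
            if cumulative >= p:
                break
        total = sum(prob for _, prob in kept)
        return {token: prob / total for token, prob in kept}  # renormalize

    # Toy next-token distribution.
    probs = {"the": 0.5, "a": 0.3, "an": 0.15, "every": 0.05}
    nucleus = top_p_filter(probs, p=0.75)  # keeps "the" and "a" (0.5 + 0.3 >= 0.75)
    tokens, weights = zip(*nucleus.items())
    print(random.choices(tokens, weights=weights, k=1)[0])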

Frequency penalty

A penalty that's assigned to a token when that token appears frequently. High penalties discourage repeated tokens and produce more varied output. Set to 0 to disable. Default: 0

Presence penalty

A penalty that's assigned to a token each time it appears in the output, encouraging the model to generate outputs with tokens that haven't been used yet. Set to 0 to disable. Default: 0
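To illustrate how these two penalties differ, here's a sketch of the common OpenAI-style formulation (an illustration of the general technique, not a statement about this service's internals): each candidate token's score is reduced by the frequency penalty times the number of times the token has already appeared, plus the presence penalty once if it has appeared at all.

    from collections import Counter

    def apply_penalties(logits, generated_tokens,
                        frequency_penalty=0.0, presence_penalty=0.0):
        """Adjust next-token logits with frequency and presence penalties.

        logits: dict mapping token -> raw score.
        generated_tokens: tokens emitted so far in the output.
        """
        counts = Counter(generated_tokens)
        adjusted = {}
        for token, logit in logits.items():
            count = counts.get(token, 0)
            adjusted[token] = (
                logit
                - frequency_penalty * count               # scales with repetitions
                - presence_penalty * (1 if count else 0)  # flat, once a token has appeared
            )
        return adjusted

    # "cat" already appeared twice, so it's penalized more than the unseen "dog":
    # cat: 2.0 - 0.5*2 - 0.4 = 0.6, dog: 1.8 (unchanged)
    print(apply_penalties({"cat": 2.0, "dog": 1.8}, ["cat", "cat"], 0.5, 0.4))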