Supported Microsoft Models

You can import large language models from Hugging Face and OCI Object Storage buckets into OCI Generative AI, create endpoints for those models, and use them in the Generative AI service.

Microsoft Phi models, such as Phi-3 and Phi-4, are known for their efficiency and compactness and are designed for scalable, flexible performance. See the Phi documentation on Hugging Face.

Phi

Supported Phi Models
| Hugging Face Model ID | Model Capability | Recommended Dedicated AI Cluster Unit Size |
| --- | --- | --- |
| microsoft/phi-4 | TEXT_TO_TEXT | A100_80G_X1 |
| microsoft/Phi-3-mini-4k-instruct | TEXT_TO_TEXT | A100_80G_X1 |
| microsoft/Phi-3-mini-128k-instruct | TEXT_TO_TEXT | A100_80G_X1 |
| microsoft/Phi-3-small-8k-instruct | TEXT_TO_TEXT | A100_80G_X1 |
| microsoft/Phi-3-medium-4k-instruct | TEXT_TO_TEXT | A100_80G_X1 |
| microsoft/Phi-3-vision-128k-instruct | IMAGE_TEXT_TO_TEXT | H100_X1 |
Important

  • While you can import any chat, embedding, or fine-tuned model supported by the Open Model Engine (with the vLLM or SGLang runtime), only the models explicitly listed on this page are supported for this model family. Unlisted models might have compatibility issues; we recommend testing any unlisted model before using it in production. Learn about OCI Generative AI Imported Model Architecture.

  • Imported models support the native context length specified by the model provider. However, the effective maximum context length is also limited by OCI Generative AI's underlying hardware setup. To take full advantage of a model's native context length, you might need to provision more hardware resources.
  • Fine-tuned models are supported only if they match the supported base model's transformer version and have a parameter count within ±10% of the original.
  • For available hardware and steps on how to deploy the imported models, see Managing Imported Models.
  • If the recommended unit shape isn't available in the region, select a higher-tier option. For example, if A100 isn't available, select H100.
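The parameter-count rule for fine-tuned models and the unit-shape fallback above can be sketched as simple pre-import checks. This is a minimal illustration in plain Python; the function names and the shape tier ordering are assumptions for this sketch, not part of the OCI SDK or CLI:

```python
# Pre-import sanity checks, sketched as plain Python.
# These helpers are illustrative only; they are not OCI APIs.

def within_parameter_tolerance(base_params: int, tuned_params: int,
                               tolerance: float = 0.10) -> bool:
    """Return True if the fine-tuned model's parameter count is within
    +/-10% of the base model's, as required for imported fine-tuned models."""
    return abs(tuned_params - base_params) <= tolerance * base_params


def pick_unit_shape(recommended: str, available: set[str],
                    tiers: tuple[str, ...] = ("A100_80G_X1", "H100_X1")) -> str:
    """Pick the recommended shape if available; otherwise fall back to the
    next higher tier (for example, H100 when A100 isn't available). The
    tier order is assumed from the shapes listed on this page."""
    start = tiers.index(recommended)
    for shape in tiers[start:]:
        if shape in available:
            return shape
    raise ValueError(f"No suitable shape available at or above {recommended}")


# Example: Phi-3-mini has roughly 3.8B parameters; a 4.0B fine-tune passes.
print(within_parameter_tolerance(3_800_000_000, 4_000_000_000))  # True
print(pick_unit_shape("A100_80G_X1", {"H100_X1"}))               # H100_X1
```

The fallback helper mirrors the last bullet: it walks the tier list starting at the recommended shape and returns the first shape actually available in the region.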