Using the Large Language Models (LLMs) in Generative AI
The playground is an interface in the Oracle Cloud Console for exploring the hosted pretrained and custom models in OCI Generative AI without writing a single line of code. Use the playground to test your use cases and refine prompts and parameters. When you're happy with the results, integrate the generated code into your applications.
In addition to the playground, you can chat, generate text, and summarize by using the Generative AI Inference API operations and the Generative AI Inference CLI commands; a brief SDK sketch follows the task list below.
You can perform the following tasks with the OCI Generative AI large language models (LLMs):
- Chat: Ask questions and get conversational responses through an AI chatbot.
- Generate text: Run prompts to generate or classify text, or to extract information from text. (On-demand models for this endpoint are retired. Use the chat option instead.)
- Summarize: Summarize any type of text, such as documents that are too long to read, into free-form paragraphs or bullet points, and optionally specify the tone. Use it as an executive summary generator. (On-demand models for this endpoint are retired. Use the chat option instead.)
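For example, the chat task maps to the chat operation of the Generative AI Inference API. The following is a minimal sketch that assumes the OCI Python SDK; the region endpoint, compartment OCID, and model ID shown are placeholders, so check the SDK reference and the list of currently supported chat models for the values that apply to your tenancy.

```python
# Minimal sketch: chat request through the Generative AI Inference API (OCI Python SDK).
# The endpoint, compartment OCID, and model ID are placeholders for illustration only.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default

client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config=config,
    # Region-specific inference endpoint (example shown for US Midwest (Chicago)).
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

# Request body for a Cohere-family chat model; other model families use
# different request classes (see the SDK reference).
chat_request = oci.generative_ai_inference.models.CohereChatRequest(
    message="Summarize the benefits of dedicated AI clusters in two sentences.",
    max_tokens=300,
    temperature=0.3,
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="cohere.command-r-plus"  # placeholder; use a currently supported chat model
    ),
    chat_request=chat_request,
)

response = client.chat(chat_details)
# Cohere chat responses return the reply text in chat_response.text.
print(response.data.chat_response.text)
```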
Important
- Not available on-demand: All OCI Generative AI foundational pretrained models that use the text generation and summarization APIs (including in the playground) and are offered through the on-demand serving mode are now retired. We recommend that you use the chat models instead.
- Can be hosted on clusters: If you host a summarization or generation model, such as cohere.command, on a dedicated AI cluster (dedicated serving mode), you can continue to use that model until it's retired. When hosted on a dedicated AI cluster, these models are available only in US Midwest (Chicago). See Deprecated APIs in Generative AI for the dates on which the APIs are no longer available.
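For a model hosted on a dedicated AI cluster, requests target the hosting endpoint instead of an on-demand model. The following is a hedged sketch, assuming the same OCI Python SDK client as in the earlier example and a hypothetical endpoint OCID created when the model is hosted on the cluster.

```python
# Sketch: calling a generation model hosted on a dedicated AI cluster
# (dedicated serving mode). Reuses the `oci` import and `client` from the
# previous sketch; OCIDs below are placeholders.
generate_details = oci.generative_ai_inference.models.GenerateTextDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.DedicatedServingMode(
        endpoint_id="ocid1.generativeaiendpoint.oc1.us-chicago-1.example"  # placeholder
    ),
    inference_request=oci.generative_ai_inference.models.CohereLlmInferenceRequest(
        prompt="Write a one-paragraph product description for a smart thermostat.",
        max_tokens=200,
        temperature=0.5,
    ),
)

response = client.generate_text(generate_details)
print(response.data)  # the generated text is nested in the inference response
```

The only structural difference from the on-demand example is the serving mode: DedicatedServingMode points the request at your hosted endpoint rather than naming a shared on-demand model.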