Create Text Embeddings in Generative AI

Use the cohere.embed models in OCI Generative AI to convert text into vector embeddings for use in applications such as semantic search, text classification, or text clustering.

    1. In the navigation bar of the Console, select a region with Generative AI, for example, US Midwest (Chicago). If you don't know which region to select, see Regions with Generative AI.
    2. Open the navigation menu and click Analytics & AI. Under AI Services, click Generative AI.
    3. Select a compartment that you have permission to work in. If you don't see the playground, ask an administrator to give you access to Generative AI resources and then return to the following steps.
    4. Click Playground.
    5. Click Embedding.
    6. Select a model for creating text embeddings by performing one of the following actions:
      • In the Model list, select an embedding model, such as a cohere.embed model.
      • Click View model details, and then click Choose model.
    7. (Optional) To use an example from the Example list, use the following steps:
      1. Select an example from the Example list.
      2. Click Run to generate embeddings for the example.
      3. Review a two-dimensional version of the output in the Output vector projection section.

        To visualize the output with embeddings, output vectors are projected into two dimensions and plotted as points. Points that are close together correspond to phrases that the model considers similar.

      4. Click Clear to remove all the sentences and start generating embeddings for new sentences.
    8. In the Sentence input area, enter text in one of the following ways:
      • Type a sentence in the box labeled 1, and then click Add sentence to add more sentences.
      • Click Upload file and select a file with text that you want to add.
      Note

      Only files with a .txt extension are allowed. Separate each input sentence, phrase, or paragraph with a newline character. A maximum of 96 inputs is allowed for each run, and each input must be less than 512 tokens. You can add sentences manually or upload more than one file until you reach the maximum number of inputs. For a way to check these limits locally, see the first sketch after these steps.
    9. For the Truncate parameter, choose whether to truncate tokens from the start or the end of an input when it exceeds the maximum of 512 tokens.
      Tip

      If an input exceeds 512 tokens and the Truncate parameter is set to None, you get an error message. For such input, set the Truncate parameter to Start or End before you run the model.
    10. Click Run.
    11. Review a two-dimensional version of the output in the Output vector projection section.
      To visualize the outputs with embeddings, output vectors are projected into two dimensions and plotted as points. Points that are close together correspond to phrases that the model considers similar.
    12. When you're happy with the result, click Export embeddings to JSON to get a JSON file that contains a 1024-dimensional vector for each input. For a way to compare exported vectors, see the second sketch after these steps.
    13. (Optional) Click View code, select a programming language, click Copy code, and paste the code into a file. Ensure that the file maintains the format of the pasted code.
      Tip

      If you're using the code in your applications, ensure that you authenticate your code.
    14. (Optional) Click Clear to remove all the sentences and start generating embeddings for new sentences.
      Note

      When you click Clear, the Truncate parameter resets to its default value of None.
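
The input limits in step 8 (newline-separated inputs, at most 96 for each run, each under 512 tokens) are easy to check locally before you upload a file. The following minimal Python sketch assumes a hypothetical sentences.txt file and approximates the token limit with a character count, because exact token counts depend on the model's tokenizer.

    # prepare_inputs.py - illustrative only; the file name and character limit are assumptions
    MAX_INPUTS = 96      # maximum number of inputs for each run
    MAX_CHARS = 2000     # rough stand-in for the 512-token limit; the real limit is token-based

    with open("sentences.txt", encoding="utf-8") as f:
        # one sentence, phrase, or paragraph per line, separated by newline characters
        inputs = [line.strip() for line in f if line.strip()]

    long_inputs = [s for s in inputs if len(s) > MAX_CHARS]
    if long_inputs:
        print(f"{len(long_inputs)} input(s) may exceed 512 tokens; "
              "set Truncate to Start or End, or shorten them.")

    # split the inputs into runs of at most 96 each
    runs = [inputs[i:i + MAX_INPUTS] for i in range(0, len(inputs), MAX_INPUTS)]
    print(f"{len(inputs)} inputs across {len(runs)} run(s)")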
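
After you export the embeddings in step 12, you can compare the vectors yourself. The exact layout of the exported JSON file isn't covered here, so this sketch assumes that you have already read two vectors out of it as Python lists of floats. Cosine similarity is one common measure; higher values correspond to points that plot close together in the Output vector projection.

    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two embedding vectors of the same length."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    # Toy 3-dimensional vectors for illustration; the exported vectors have 1024 dimensions.
    print(cosine_similarity([0.1, 0.2, 0.3], [0.12, 0.18, 0.33]))
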
  • To create embeddings for text, use the embed-text-result operation.

    Enter the following command to get a list of options for creating text embeddings.

    oci generative-ai-inference embed-text-result embed-text -h

    For a complete list of parameters and values for the OCI Generative AI CLI commands, see Generative AI Inference CLI and Generative AI Management CLI.
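
    For example, a complete command might look like the following sketch. The flag names are assumptions based on the operation's parameters, and the compartment OCID and model ID are placeholders, so confirm them against the -h output and the CLI reference before you rely on them.

    # Illustrative only; verify flag names and values with the -h output above.
    oci generative-ai-inference embed-text-result embed-text \
      --compartment-id ocid1.compartment.oc1..exampleuniqueID \
      --serving-mode '{"servingType": "ON_DEMAND", "modelId": "cohere.embed-english-v3.0"}' \
      --inputs '["What day is it today?", "Which day of the week is it?"]' \
      --truncate END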

  • Run the EmbedText operation to create text embeddings.

    For information about using the API and signing requests, see REST API documentation and Security Credentials. For information about SDKs, see SDKs and the CLI.
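
    As a rough sketch of calling EmbedText with the Python SDK, the following assumes that the SDK is installed, that a configuration file exists at ~/.oci/config, and that you replace the placeholder compartment OCID, model ID, and regional endpoint. Class and field names follow the Generative AI Inference SDK, but check the SDK reference if they differ.

    import oci

    # Authenticate with a configuration file; see Security Credentials for other options.
    config = oci.config.from_file()

    # Placeholder endpoint for the Chicago region; use the endpoint for your region.
    client = oci.generative_ai_inference.GenerativeAiInferenceClient(
        config,
        service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    )

    details = oci.generative_ai_inference.models.EmbedTextDetails(
        inputs=["What day is it today?", "Which day of the week is it?"],
        serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
            model_id="cohere.embed-english-v3.0"   # example embedding model ID
        ),
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",
        truncate="END",   # or "START"; "NONE" fails for inputs longer than 512 tokens
    )

    response = client.embed_text(details)
    embeddings = response.data.embeddings   # one vector for each input
    print(len(embeddings), len(embeddings[0]))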