Select AI Concepts
Explores the concepts and terms related to Select AI.
- Actions
- AI Profile
- AI Provider
- Conversations
- Database Credentials
- Hallucination in LLM
- IAM
- Iterative Refinement
- Large Language Model (LLM)
- MapReduce
- Metadata
- Metadata Clone
- Natural Language Prompts
- Network Access Control List (ACL)
- Retrieval Augmented Generation (RAG)
- Semantic Similarity Search
- Vector Distance
- Vector Index
- Vector Store
Actions
An action in Select AI is a keyword that determines how Select AI processes a prompt. By specifying an action, users can instruct Select AI to generate SQL code from their natural language prompt, respond to a chat prompt, narrate the output, display the generated SQL statement, or explain the SQL code, leveraging LLMs to interact efficiently with the data in their database environment.
See Use AI Keyword to Enter Prompts for supported Select AI actions.
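The role of an action keyword can be sketched as a dispatch over prompt-handling behaviors. This is a conceptual illustration only, not the Oracle Select AI implementation; the handler bodies are hypothetical placeholders.

```python
# Conceptual sketch: an action keyword selects the behavior applied to a prompt.
# The action names follow the concepts above; the handlers are placeholders.

def select_ai(action: str, prompt: str) -> str:
    handlers = {
        "showsql": lambda p: f"SQL generated for: {p}",        # display the SQL statement
        "runsql": lambda p: f"result rows for: {p}",           # run the generated SQL
        "narrate": lambda p: f"narrated answer for: {p}",      # describe the result in prose
        "chat": lambda p: f"chat response to: {p}",            # general chat with the LLM
        "explainsql": lambda p: f"explanation of SQL for: {p}",  # explain the SQL code
    }
    if action not in handlers:
        raise ValueError(f"unsupported action: {action}")
    return handlers[action](prompt)
```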
AI Profile
AI Provider
An AI Provider in Select AI is the service that supplies the LLM, the transformer model, or both for processing natural language prompts and generating responses. These providers offer models that can interpret and convert natural language for the use cases highlighted under the LLM concept. See Select your AI Provider and LLMs for the supported providers.
Conversations
A conversation in Select AI is an interactive exchange between the user and the system, enabling users to query or interact with the database through a series of natural language prompts. Through session-based conversations, Select AI incorporates up to 10 previous prompts into the current request, creating an augmented prompt that is sent to the LLM. Select AI supports multiple customizable conversations, which you can configure through the conversation APIs in the DBMS_CLOUD_AI package. See Select AI Conversations.
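The rolling history described above can be sketched as a fixed-size window over recent prompts. This is a conceptual illustration, not the DBMS_CLOUD_AI implementation; the class and method names are hypothetical.

```python
from collections import deque

# Conceptual sketch: keep the 10 most recent prompts (as Select AI does for
# session-based conversations) and build an augmented prompt from that history.

class Conversation:
    def __init__(self, window: int = 10):
        # Older prompts fall off automatically once the window is full.
        self.history = deque(maxlen=window)

    def augmented_prompt(self, prompt: str) -> str:
        # Prior prompts form the context that precedes the current request.
        context = "\n".join(self.history)
        self.history.append(prompt)
        return f"{context}\n{prompt}".lstrip()
```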
Database Credentials
Hallucination in LLM
IAM
Iterative Refinement
Iterative refinement is a process of gradually improving a solution or a model through repeated cycles of adjustments based on feedback or evaluation. It starts with an initial approximation, refines it step by step, and continues until the desired accuracy or outcome is achieved. Each iteration builds on the previous one, incorporating corrections or optimizations to move closer to the goal.
In text summary generation, iterative refinement is useful for processing large files or documents. The process splits the text into manageable chunks, for example, sized to fit within an LLM's token limits, generates a summary for the first chunk, and then refines that summary by sequentially incorporating each subsequent chunk.
Use cases for iterative refinement:
- Best suited for situations where contextual accuracy and coherence are critical, such as when summarizing complex or highly interconnected texts where each part builds on the previous.
- Ideal for smaller-scale tasks where sequential processing is acceptable.
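The sequential refinement loop described above can be sketched as follows. This is a conceptual illustration; `summarize` is a hypothetical stand-in for an LLM call.

```python
# Conceptual sketch of iterative refinement for summarization.
# `summarize` is a placeholder for an LLM summarization call.

def refine_summary(chunks, summarize):
    """Summarize the first chunk, then refine by folding in each next chunk."""
    summary = summarize(chunks[0])
    for chunk in chunks[1:]:
        # Each iteration builds on the previous summary plus the new chunk,
        # so later chunks are summarized in the context of earlier ones.
        summary = summarize(summary + " " + chunk)
    return summary
```

Because each step depends on the previous one, the chunks are processed strictly in order, which is why this approach favors coherence over throughput.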
Large Language Model (LLM)
A Large Language Model (LLM) is an advanced type of artificial intelligence model trained on massive amounts of text data, supporting a range of use cases that depend on its training data. This includes understanding and generating human-like language as well as software code and database queries. These models can perform a wide range of natural language processing tasks, including text generation, translation, summarization, question answering, and sentiment analysis. LLMs are typically based on sophisticated deep learning neural networks that learn patterns, context, and semantics from the input data, enabling them to generate coherent and contextually relevant text.
MapReduce
MapReduce is a programming model that processes large data sets in parallel in two phases:
- Map: Processes input data and transforms it into key-value pairs.
- Reduce: Aggregates and summarizes the mapped data based on keys.
In the case of Select AI Summarize, MapReduce partitions text into multiple chunks and processes them in parallel and independently, generating individual summaries for each chunk. These summaries are then combined to form a cohesive overall summary.
Use cases for MapReduce:
- Best suited for large-scale, parallel tasks where speed and scalability are priorities, such as summarizing very large data sets or documents.
- Ideal for situations where chunk independence is acceptable, and the summaries can be aggregated later.
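The map and reduce phases described above can be sketched for summarization as follows. This is a conceptual illustration; `summarize` is a hypothetical stand-in for an LLM call.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of MapReduce-style summarization: each chunk is summarized
# independently (map), then the partial summaries are combined (reduce).
# `summarize` is a placeholder for an LLM summarization call.

def mapreduce_summary(chunks, summarize):
    with ThreadPoolExecutor() as pool:
        # Map: chunks are independent, so they can run in parallel.
        partial = list(pool.map(summarize, chunks))
    # Reduce: combine the per-chunk summaries into one cohesive summary.
    return summarize(" ".join(partial))
```

In contrast to iterative refinement, no chunk sees the others' content, which is what makes the map phase parallelizable.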
Metadata
Metadata Clone
Natural Language Prompts
Network Access Control List (ACL)
Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (RAG) augments a prompt with additional relevant content before sending it to the LLM. Most commonly, RAG involves vector search, but more generally it includes augmenting a prompt with database content (either manually or automatically), such as schema metadata for SQL generation or database content explicitly queried. Other forms of augmentation can involve technologies such as graph analytics and traditional machine learning.
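The retrieve-then-augment pattern can be sketched as follows. This is a conceptual illustration only; the word-overlap scoring is a toy stand-in for a real vector similarity search, and the function names are hypothetical.

```python
# Conceptual RAG sketch: retrieve the documents most relevant to the question
# and prepend them to the prompt sent to the LLM.

def retrieve(question, documents, k=2):
    # Toy relevance score: count of shared words (a real system would use
    # vector embeddings and a similarity search instead).
    q = set(question.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def augment_prompt(question, documents):
    # The retrieved content becomes context that grounds the LLM's answer.
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"
```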
Semantic Similarity Search
Vector Distance
Vector Index
Vector Store