Components in an Agent Builder

Components, also referred to as nodes, are the building blocks in an Agent Builder flow.

This section explains when to use each node, describes its inputs and outputs, and shows how to configure it so you can build reliable flows.

Common data types across nodes include:

Core Orchestration

LLM

The LLM node executes prompts using a Large Language Model configured in LLM Management. This node serves as the primary reasoning and completion engine for many flows.

LLM Component

When to use?

How to add and configure?

  1. Drag the LLM node onto the canvas.
  2. Choose a configured model from LLM Management.
  3. Supply prompts or instructions via an upstream Prompt or Message node.
  4. Set the temperature to a low value for more predictable and deterministic responses from the LLM, or to a high value for more varied and creative output.
  5. Connect the output Message as input to Agent, Output, or Processing components per the flow logic.

Inputs:

Outputs:

Agent

An Agent node uses an LLM configured in LLM Management to carry out instructions, answer questions, and orchestrate complex multi-step tasks using hierarchical manager/worker orchestration and sub-agents.

Agent Component

When to use?

How to add and configure?

  1. Drag the Agent node onto the canvas.
  2. Select a base LLM from LLM Management.
  3. Optionally, attach relevant Tools.
  4. Optionally, connect other Agent nodes to the Sub-agents connector to set up manager/worker orchestration.
  5. Provide instructions through a Prompt or Message input.
  6. Optionally, set the temperature to a low value for more predictable and deterministic responses from the LLM, or to a high value for more varied and creative output.
  7. Connect the output Message as input to Agent, Output, or Processing components per the flow logic.

Inputs:

Outputs:

Select AI

Use these nodes to design Select AI flows with components created in Utilities. See Configure Select AI Utilities for Your Database to learn how to set up your database with Select AI utilities, such as Profiles with RAG or NL2SQL.
Refer to Select AI Workflows for sample workflows created using Select AI nodes.

In-Database Agent

Use this node to choose an existing in-database agent or to create a new agent using a selected database and profile.

Select AI

Select AI

When to use?
Use the Select AI node when you need to perform an AI-driven action, such as chat, run SQL, or other supported interactions, within a specific database and profile context. This node is ideal for executing discrete AI actions and retrieving results as part of your Select AI workflow. See Select AI Demos for sample workflows created for each action.

How to add and configure?

  1. Drag the Select AI node onto the canvas.
  2. Select the database to provide context for the action.
  3. Select a profile from the dropdown to link the node to the desired profile, or click the Create Profile icon at the bottom of the node to create a new profile.
  4. Choose the specific AI Action you want to perform (e.g., Chat, Run SQL, ShowSQL, ExplainSQL, Narrate, Show Prompt).
  5. Enter the prompt or message to be sent to the AI model for processing. You can supply prompts or instructions via an upstream Prompt or Message node.
  6. Click the View or Delete icons to view or delete the profile.

Inputs: Message (prompt)

Outputs: LLM/Action Result

In-Database Task

Use this node to choose or define an in-database task. You can create new tasks using one or more connected tools or select from available ones.

In-Database Team

Use the In-Database Team node when you need to orchestrate multiple AI agents to work together as a team within a defined database context. This is useful for workflows that require collaborative or sequential agent actions, either by selecting an existing team or creating a new one with custom settings.

In-Database Tool

Use this node to define or select a tool—such as RAG, SQL, or database functions—bound to a specific database, profile, or function. You can create new tools by type or select from available ones.

Tools

Bug Tools

The Bug Tools node is a specialized component for interacting with your organization’s bug or issue trackers.

Bug Tools Component

When to use?

How to add and configure?

  1. Drag the Agent node (or another compatible node that supports tool calls) onto the canvas.
  2. Attach Bug Tools in the Agent’s Tools configuration.
  3. Provide the required credentials or configuration for your bug tracking system.
  4. Use prompts or instructions within your Agent to invoke Bug Tools.

Inputs: Tool parameters supplied by Agent prompts or upstream data

Outputs: Message summaries or structured JSON results returned by the tool

Calculator

The Calculator tool evaluates mathematical expressions safely and returns the computed result. The node supports arithmetic operations, parentheses, and standard numeric functions.

Calculator Component

When to use?

How to add and configure?

  1. Drag the Calculator node into the workflow.
  2. Enter a mathematical expression in the Expression field.
  3. You can include:
    • numbers
    • operators
    • parentheses
    • values from previous nodes
  4. Verify the expression syntax.
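For intuition, safe evaluation of arithmetic expressions can be sketched in Python with the standard ast module. This is an illustrative model only, not the Calculator node's implementation, and it omits the node's named numeric functions for brevity.

```python
import ast
import operator

# Illustrative sketch of safe expression evaluation (not the actual node code).
# Supports numbers, + - * / **, unary minus, and parentheses; named numeric
# functions are omitted for brevity.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("(2 + 3) * 4"))  # 20
```

Because only whitelisted node types are accepted, input such as function calls or attribute access is rejected rather than executed.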

Inputs:

Outputs:

MCP Server

The MCP Server node integrates an MCP server to expose external capabilities as tools to Agents. It is a specialized component for interfacing with AI models using the Model Context Protocol (MCP) with Server-Sent Events (SSE) transport.

MCP Server Component

When to use?

How to add and configure?

  1. Configure your MCP Server and build the server URL. See Build Your Own MCP Server.

  2. Drag and drop an MCP Server node from the Tools category.

  3. Select the MCP Server from the dropdown. See Add MCP Server to add and configure your MCP server.

  4. Optionally, specify a timeout value in seconds, after which the MCP tool call times out. This is useful when the MCP Server takes some time to respond. The default is 45 seconds.

  5. Connect the MCP Server node’s Tools output to downstream Agent nodes that consume the discovered MCP tools. Trigger tool usage through Agent prompts or tool-calling logic.

Inputs: Agent instructions and any necessary tool arguments

Outputs: JSON responses from MCP tools and message summaries generated by the Agent

See FAQs and Troubleshooting for MCP server related questions.

REST API Tools

The REST API node allows you to call REST APIs directly from your flows, often through an Agent.

REST API Component

When to use?

How to add and configure?

  1. Configure REST API connection. See Add REST API Data Source.
  2. Drag the REST API node onto the canvas and connect it to an Agent or any other compatible tool-enabled node.
  3. Specify the endpoints, headers, authentication, and HTTP method (GET, POST, PUT, or DELETE).
  4. Use Agent prompts to determine which endpoint and parameters to use.

Inputs: URL/endpoint and parameters or body data, often passed from previous node outputs

Outputs: JSON responses from the API and message summaries created by the Agent

Select AI Bridge

Use the Select AI Bridge node to configure Select AI in either Profile mode or Team mode. The node outputs a Server tool with prompt-only input, and the tool name and description are personalized based on the selected profile or team information.

Text Combiner

The Text Combiner node combines two text inputs into a single output, optionally separated by a delimiter. It is useful for merging strings, building prompts, or formatting content: joining values, adding labels or separators, and creating structured text (for example, full names, sentences, headings, or CSV lines).

Text Combiner Component

When to use?

How to add and configure?

  1. Drag the Text Combiner node into your workflow.
  2. Enter a delimiter (optional) such as a space, comma, newline, dash, or pipe.
  3. Add text to Text 1 and Text 2, or connect outputs from previous nodes.
  4. Leave the delimiter blank to join text directly.
  5. Connect the combined output to the next node.
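Conceptually, the node performs a simple delimited join, sketched below (an assumed model, for illustration only):

```python
def combine_text(text1: str, text2: str, delimiter: str = "") -> str:
    # Conceptual model of the Text Combiner node: join two inputs,
    # optionally separated by a delimiter.
    return f"{text1}{delimiter}{text2}"

print(combine_text("John", "Doe", " "))     # John Doe
print(combine_text("name", "value", ": "))  # name: value
print(combine_text("ab", "cd"))             # abcd
```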

Inputs:

Outputs:

Wikipedia Search

The Wikipedia Search node searches Wikipedia and returns both structured JSON results and a human-readable text summary.

Wikipedia Search Component

When to use?

How to add and configure?

  1. Drag Wikipedia Search from the Tools category into your flow.
  2. Configure the fields:
    • Search Query (required): Text to search on Wikipedia
    • Language (optional, default: en): e.g., en, de, fr
    • Number of Results (optional, default: 4, range: 1–10)
  3. You can enter a query directly or connect a Message output from an upstream node.
  4. Connect outputs as needed:
    • JSON → parsing, filtering, RAG, or structured processing
    • Message → chat response, preview, logging, or LLM input

Inputs:

Outputs:

Inputs

Chat Input

The Chat Input node captures conversational user input in a chat interface.

Note: A custom flow can contain a maximum of one Chat Input component.

Chat Input Component

When to use?

How to add and configure?

  1. Drag the Chat Input node onto the canvas.
  2. Connect its Message output to downstream nodes such as Prompt, Agent, or LLM.

Inputs: None, as the user provides input at runtime

Outputs: Message containing the user’s text

Text Input

The Text Input node captures plain text input to pass to the next node.

Text Input Component

When to use?

How to add and configure?

  1. Drag the Text Input node onto the canvas.
  2. Optionally, set a label or placeholder for the input field.
  3. Connect its Message output to downstream nodes.

Inputs: None, as input is provided by the user

Outputs: Message containing the entered text

Prompt

The Prompt node creates and formats textual instructions for an LLM or Agent, often combining fixed instructions with dynamic values from upstream nodes.

Prompt Component

When to use?

How to add and configure?

  1. Drag the Prompt node onto the canvas.
  2. Write your instructions, optionally using placeholders to reference outputs from upstream nodes.
  3. Connect the Prompt node to an LLM or Agent node.

Inputs: Message or JSON data for variable interpolation

Outputs: Message containing the final prompt text

Outputs

Chat Output

The Chat Output node renders or returns model/agent responses in a chat interface.

Chat Output Component

When to use?

How to add and configure?

  1. Drag the Chat Output node onto the canvas.
  2. Connect Message output from the LLM or Agent (or a formatted message from a Parser node) to the Chat Output node.

Inputs: Message

Outputs: None, as the response is rendered directly in the UI

Email Output

The Email Output node sends generated content via email. It accepts comma-separated email addresses, so the content can be sent to multiple recipients.

Email Output Component

When to use?

How to add and configure?

  1. Make sure SMTP is configured. See SMTP Configuration.
  2. Drag the Email Output node onto the canvas.
  3. Provide a subject, specify recipients as comma-separated email addresses to send to multiple people, and connect the Message output from an upstream node to serve as the email body.
  4. Optionally, attach any supported files using upstream file or data nodes.

Inputs:

Outputs: None (the node sends the email)

Data

Read CSV File

The Read CSV node imports and parses CSV files for batch or tabular data processing.

Read CSV Component

When to use?

How to add and configure?

  1. Drag the Read CSV node into your workflow.
  2. Select or upload a CSV file.
  3. Adjust delimiter and encoding settings if necessary.
  4. Connect the node’s outputs to subsequent steps in your workflow.
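The parsing step can be pictured with Python's csv module. The node's actual output format may differ, but the idea of turning CSV text into rows of key/value pairs is the same.

```python
import csv
import io

# Illustration of CSV parsing into a list of dictionaries, one per row.
sample = "name,stock\nWidget,12\nGadget,3\n"
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows)  # [{'name': 'Widget', 'stock': '12'}, {'name': 'Gadget', 'stock': '3'}]
```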

Inputs: File path or uploaded CSV file

Outputs:

File Upload

The File Upload node allows users to upload files for processing or analysis.

File Upload Component

When to use?

How to add and configure?

  1. Drag the File Upload component into your flow
  2. Select the file to upload
  3. Connect the node to parsers, LLMs, or data converters

Inputs: None

Outputs: File handles/paths and/or Message/JSON, depending on integration

SQL Query

The SQL Query node executes SQL statements against configured databases and returns the results for use in workflows.

Note: Only SELECT-like queries are supported for security and governance.

SQL Query Component

When to use?

How to add and configure?

  1. Ensure the database connection is set up. See Database Data Source.
  2. Drag the SQL Query node into your workflow.
  3. Enter your SQL query text. This can be static or parameterized using data from upstream nodes.
  4. Connect the output to nodes such as Parser, Type Convert, or Combine JSON.
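The parameterized, read-only style of query the node runs can be illustrated with SQLite, used here purely so the example is self-contained; your configured database and driver will differ.

```python
import sqlite3

# Self-contained illustration of a parameterized SELECT (SQLite used
# only for demonstration; the SQL Query node targets your configured database).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT, stock INTEGER)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?, ?)",
    [(1, "Widget", 12), (2, "Gadget", 3)],
)

# Placeholders keep upstream values out of the SQL text itself.
rows = conn.execute(
    "SELECT name, stock FROM products WHERE stock > ?", (5,)
).fetchall()
print(rows)  # [('Widget', 12)]
```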

Inputs: Message or JSON containing the query and its parameters

Outputs:

URL to Markdown Content

The URL to Markdown Content node fetches data from one or more URLs accessible within the same network, provided they do not require authentication.

URL to Markdown Component

When to use?

How to add and configure?

  1. Ensure the URLs are accessible on the application network.
  2. Confirm that the URLs do not require authentication.
  3. Drag the URL to Markdown Content node into your flow.
  4. Add the URL(s) and connect the node to a Prompt node or LLM node.

Inputs: Single URL, comma-separated URLs, or a paragraph containing URLs

Outputs: Content from the URL page in Markdown format

Processing

Condition

The Condition node enables simple if-else decision logic in agent flows. Use the Condition node to compare a Text Input against a Match Text using an operator. If the condition evaluates to true, the node outputs a configurable true message and follows the “true” branch; if false, it outputs a false message and follows the “false” branch. Use it for basic branching without complex orchestration.

Condition Component

When to use?

How to add and configure?

  1. Drag the Condition node onto the canvas.

  2. Configure the following settings:
    • Text Input: The value you want to check.
    • Match Text: The value to compare against.
    • Operator: Choose from equals, not equals, contains, not contains, greater than, less than, or regex match.
    • True Message (optional): Message to return if the condition is met.
    • False Message (optional): Message to return if the condition is not met.
  3. Connect the True and False outputs to their respective next steps.
  4. Test the Condition node using sample inputs.
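The comparison logic can be modeled as follows. This is an assumed sketch based on the operators listed above; details such as numeric coercion may differ in the actual node.

```python
import re

def evaluate_condition(text_input: str, match_text: str, operator: str) -> bool:
    # Assumed model of the Condition node's operators (illustrative only).
    if operator == "equals":
        return text_input == match_text
    if operator == "not equals":
        return text_input != match_text
    if operator == "contains":
        return match_text in text_input
    if operator == "not contains":
        return match_text not in text_input
    if operator == "greater than":
        return float(text_input) > float(match_text)  # numeric coercion assumed
    if operator == "less than":
        return float(text_input) < float(match_text)  # numeric coercion assumed
    if operator == "regex match":
        return re.search(match_text, text_input) is not None
    raise ValueError(f"unknown operator: {operator}")

print(evaluate_condition("hello world", "world", "contains"))  # True
print(evaluate_condition("10", "9", "greater than"))           # True
```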

Regex Extractor

The node extracts text from input using regular expressions (regex) to identify and return specific values such as emails, IDs, dates, URLs, or structured data fragments.

Regex Extractor Component

When to use?

How to add and configure?

  1. Ensure the input text contains the data you want to extract.
  2. Drag the Regex Extractor node into your workflow.
  3. Add the input text directly, or connect a node that outputs text.
  4. Enter the desired regex pattern.
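A couple of typical extraction patterns, shown with Python's re module for illustration; the node applies your pattern to the input text in the same spirit.

```python
import re

# Illustrative extraction patterns of the kind the Regex Extractor node applies.
text = "Contact alice@example.com or bob@example.org before 2025-01-31."

emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
dates = re.findall(r"\d{4}-\d{2}-\d{2}", text)

print(emails)  # ['alice@example.com', 'bob@example.org']
print(dates)   # ['2025-01-31']
```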

Inputs:

Outputs:

Type Convert

The Type Convert node converts data for downstream nodes that require a specific structural type: Message to string, JSON to dictionary or list, and DataFrame to list of dictionaries. It outputs all three types simultaneously, so downstream nodes can use whichever type they require.

The Type Convert node supports robust JSON parsing from text:

Type Convert Component

When to use?

How to add and configure?

  1. Drag and drop a Type Convert node from the Processing category.
  2. Connect any of Message, JSON, or DataFrame from the upstream node as input.
  3. Test the Type Convert node by connecting the output(s) (string, dictionary, or list of dictionaries) to the appropriate downstream node(s). Execute the flow by selecting Playground.
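Lenient JSON parsing from LLM text can be pictured as below. This is an assumed sketch of the behavior, not the node's actual parsing rules: it strips Markdown code fences, then falls back to the first JSON-looking object or array in the text.

```python
import json
import re

def parse_json_from_text(text: str):
    # Assumed sketch of lenient JSON extraction from an LLM message;
    # the Type Convert node's actual rules may differ.
    cleaned = re.sub(r"```(?:json)?", "", text).strip()
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        match = re.search(r"[\[{].*[\]}]", cleaned, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise

print(parse_json_from_text('Model said: ```json\n{"a": 1}\n```'))  # {'a': 1}
```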

Conversion Examples:

Inputs: Message, JSON, or DataFrame

Outputs (provided simultaneously):

For example flows that use the Type Convert node together with the Parser and Combine JSON Data nodes, see Example Flows Using Combine JSON Data and Example Flow Using Parser.

Combine JSON Data

The Combine JSON Data node merges multiple JSON payloads into a single object. It allows you to filter top-level keys and define how to resolve conflicts between keys. The node outputs both a JSON object and a Message containing the stringified JSON.

Combine JSON Component

When to use?

How to add and configure?

  1. Drag and drop a Combine JSON Data node from the Processing category.

  2. Connect one or more upstream JSON sources to the JSON inputs inlet.

  3. Optionally, connect the output of a previous Combine JSON node to the Initial accumulator field for multi-stage merging. Leave it empty for a filter-only operation.

  4. Enter the top-level keys to include from the input sources in the Keys to include field. Supported formats are a JSON array (["a","b"]), a comma-separated list (a, b), or a space-separated list (a b). Leave it empty to include all top-level keys.

  5. Select the Merge mode: Deep or Replace. Use Deep merge to recursively extend lists, replace scalars, and combine keys from all JSONs. Use Replace mode to overwrite keys with values from the JSON of the last node connected to the JSON inputs inlet.

  6. Test the Combine JSON Data node by connecting two or more JSON inputs to the JSON inlet and the stringified JSON or merged dictionary JSON to a Chat Output node. Execute the flow by selecting Playground, typing a space in the Chat input, and pressing Enter.

Inputs: Multiple JSON or Message-with-JSON

Outputs:

Example Flows

Example 1: Using the Combine JSON Data Component in Deep Merge Mode:

Example: Combine JSON Data in Deep Merge Mode

The output JSON {"Title": "Final Report", "report": {"topics": ["Physics", "Maths", "Chemistry", "Biology"], "Date": "24/11/25", "other_details2": "Rainy weather"}, "other_key1": "Other Report"} includes only the keys specified in the ‘Keys to include’ field (report and other_key1) of the Combine JSON Data node. In Deep Merge mode, list values are combined for matching keys (key=topics), whereas scalar values (key=Date) are replaced.

The Initial Accumulator can be any JSON, with or without keys that overlap with the incoming JSONs, and it will appear in the final JSON output. Any matching key in the Initial Accumulator is replaced by the incoming value in the final JSON output. In this example, the Initial Accumulator is {"Title": "Final Report"}.
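The Deep merge behavior described above (lists extended, nested objects merged, scalars replaced) can be sketched in Python. This is an assumed model of the semantics described in this section, with the Keys to include filtering omitted.

```python
def deep_merge(acc: dict, incoming: dict) -> dict:
    # Assumed sketch of Deep merge: nested dicts merge recursively,
    # lists extend, scalars are replaced by the incoming value.
    result = dict(acc)
    for key, value in incoming.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        elif key in result and isinstance(result[key], list) and isinstance(value, list):
            result[key] = result[key] + value
        else:
            result[key] = value
    return result

acc = {"Title": "Final Report"}  # initial accumulator
a = {"report": {"topics": ["Physics", "Maths"], "Date": "23/11/25"}}
b = {"report": {"topics": ["Chemistry", "Biology"], "Date": "24/11/25",
                "other_details2": "Rainy weather"}}
merged = deep_merge(deep_merge(acc, a), b)
print(merged["report"]["topics"])  # ['Physics', 'Maths', 'Chemistry', 'Biology']
```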

Example 2: Using the Combine JSON Data Component in Replace Merge Mode:

Example: Combine JSON Data in Replace Merge Mode


The output JSON {"Title": "Final Report", "report": {"topics": ["Chemistry", "Biology"], "Date": "24/11/25", "other_details2": "Rainy weather"}, "other_key1": "Other Report"} includes the keys that were entered in the ‘Keys to include’ field (report and other_key1) of the Combine JSON Data node. In Replace merge mode, values for matching keys (key=topics) are replaced: the value from the last input overwrites the previous values, so nodes connected later to Combine JSON Data overwrite those connected earlier.

The Initial Accumulator can be any JSON, with or without keys that overlap with the incoming JSONs and will appear in the final JSON output. Any matching key in the Initial Accumulator will be replaced by the incoming value in the final JSON output. In this example, the Initial Accumulator is {"Title": "Final Report"}.

Parser

The Parser node extracts, formats, and combines text from structured inputs, such as JSON or DataFrame. It transforms these inputs into a clean, readable message output that can be displayed or passed to downstream steps.

Parser Component

When to use?

How to add and configure?

  1. Drag and drop a Parser node from the Processing category.

  2. Connect JSON or Message outputs from upstream nodes to the JSON or DataFrame inlet. Alternatively, input a JSON in the text box.

  3. Enter an optional dot-path in the Root path field to navigate into the input before applying the template. Examples: records, records.0, outer.inner.items.

  4. Select the Mode: Parser or Stringify. Parser mode applies the template to each of the selected items or objects. Stringify mode ignores the template and converts the selection(s) to a raw string.

  5. For parser mode, enter a Template: a format string with placeholders referencing keys or columns from objects, or {text} for entire string inputs.

    Examples:

    • Name: {name}, Age: {age}
    • {PRODUCT_ID}: {PRODUCT_NAME} — stock {STOCK_QUANTITY}
    • Text: {text}
  6. Enter a Separator, the string used to join multiple rendered items when the input is a list. The default is \n.

  7. Test the Parser node by connecting a JSON input to its JSON inlet and routing the output message to a chat output node. Execute the flow by selecting Playground.
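The parser-mode steps above can be modeled roughly as follows (an assumed sketch, not the node's implementation):

```python
def render_items(data, template: str, root_path: str = "", separator: str = "\n") -> str:
    # Assumed model of parser mode: follow the optional dot-path,
    # then apply the template to each item (or to the single object).
    for part in [p for p in root_path.split(".") if p]:
        data = data[int(part)] if part.isdigit() else data[part]
    items = data if isinstance(data, list) else [data]
    rendered = []
    for item in items:
        if isinstance(item, dict):
            rendered.append(template.format(**item))
        else:
            rendered.append(template.format(text=item))
    return separator.join(rendered)

data = {"records": [{"name": "Ana", "age": 31}, {"name": "Raj", "age": 28}]}
print(render_items(data, "Name: {name}, Age: {age}", root_path="records"))
# Name: Ana, Age: 31
# Name: Raj, Age: 28
```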

Inputs: JSON or DataFrame

Outputs: Message containing the final rendered text

Example Flow

The flow below demonstrates grouping the Jira issues into a ‘Primary Group’ and storing the remaining Jira issues as ‘Remaining Jiras Primary’.

Parser Example


Using the following prompt, the LLM generates a JSON with two keys: primary_groups and remaining_jiras_primary.

You are a JIRA analysis expert focusing ONLY on cluster-based grouping.

INPUT DATA: Problem Set:
<problem-set>
{{problem_set_data}}
</problem-set>

TASK: Group JIRAs based ONLY on cluster IDs.
STRICT RULES:
ONLY group by cluster ID matches
MANDATORY : EXACT cluster ID should match
MANDATORY : Each group MUST have minimum 2 JIRAs in the jira_list
If a group has fewer than 2 JIRAs, dissolve that group and ALL JIRAs from that group MUST be moved to remaining_jiras_primary
Order JIRAs chronologically within groups
DO NOT group by any other attributes
NO partial matches allowed
NO inferred relationships
VERIFY all groups meet the minimum 2 JIRAs requirement BEFORE outputting
DO NOT explain your reasoning or corrections
DO NOT output any intermediate results
MANDATORY: Use double quotes for ALL strings in JSON (not single quotes)
MANDATORY: Do NOT add “JIRA” prefix to any ID - use the exact ID format from the input data

OUTPUT FORMAT:
{{
    "primary_groups": [
        {{
            "title": "Cluster - <Full Cluster ID>",
            "summary": "Clear description of cluster impact",
            "jira_list": ["EXACSOPS-1", "EXACSOPS2-2"]
        }}
    ],
    "remaining_jiras_primary": ["EXACSOPS-3", "EXACSOPS-4"]
}}

CRITICAL INSTRUCTION: Return ONLY the final VALID JSON with proper double quotes for all strings. Do not include ANY explanations, reasoning, comments, corrections, or text of any kind outside the JSON structure. The response must begin with {{ and end with }} without any other characters.

Now, if you want to send the data in primary_groups directly for review and route remaining_jiras_primary through another round of analysis, you can use two Parser nodes to parse the JSON generated by the LLM, one extracting the data for each key. Select Playground to execute the flow.

Parser Example Output


You can use the two outputs from the two Parser nodes for different purposes in a complex workflow.

Utilities

Sticky Notes

The Sticky Notes node keeps track of important information or instructions within your flow.

Sticky Notes Component

When to use?

How to add and configure?

  1. Drag the Sticky Notes node onto the canvas.
  2. Click the note to edit its text.
  3. Resize or move the note as needed.
  4. Choose your preferred color.

Inputs: Raw text

Outputs: Not applicable