Plan Your Deployment

Deploy this architecture using the following basic steps:

  • Map the architectural building blocks to Oracle Cloud Infrastructure services
  • Plan the initial implementation with a focus on agent orchestration
  • Enhance the initial implementation by adding agents and integrating advanced LLM reasoning

Map OCI Services

Oracle Cloud Infrastructure (OCI) provides all the building blocks needed to implement this system in a cloud-native, scalable way.

Map each component to OCI services as follows.

  • Orchestrator implementation

    You can run the MCP orchestrator on an Oracle Linux virtual machine (VM) on OCI Compute, or as a container on OCI Kubernetes Engine. The orchestrator hosts the Gen AI Toolbox-based server that listens for agent requests and routes them to tools. OCI's scalability lets the orchestrator handle multiple concurrent fraud investigations. Optionally, you can use OCI API Gateway to expose a secure REST endpoint for the orchestrator so that external systems or demo clients can initiate the workflow.

    The orchestrator hosts all of the agents that interface with the UI and also routes requests to tools. You can also host the MCP server in Oracle Cloud Infrastructure Data Science, where tools are exposed over the streamable HTTP transport.

    Download the architecture diagram: mcp-architecture-oracle.zip

  • Agents as serverless functions

    Each agent's logic can be deployed as an OCI function (serverless microservice) or as a lightweight container. For instance, the data retrieval agent can be an OCI function that accepts parameters (IDs, query type) and returns JSON data from Autonomous Database; a minimal sketch of such a function appears after this list. The fraud analyzer could be another function that takes data and returns a score and message. Using OCI Functions for agents simplifies deployment and provides elasticity to automatically scale out if many fraud analyses run in parallel. The MCP orchestrator calls the REST endpoints of these functions either through OCI API Gateway or by using internal calls. In phase 1, tool execution is simplified by running the tools within the orchestrator process (as Gen AI Toolbox does) so that the orchestrator itself can execute the database query tools. However, designing agents as independent OCI functions provides modularity and helps with demonstrations by showing each function triggered in sequence.

  • Data storage and processing

    Oracle Autonomous Transaction Processing (ATP) is the secure repository for all relevant financial data, such as transaction logs, account data, policy information, and historical fraud cases. Autonomous Database provides built-in autoscaling, encryption, and structured query language (SQL) analytics capabilities, all of which are crucial for real-world financial services workloads. The data agent retrieves data from the database by using SQL through an Oracle client driver or a REST data application programming interface (API). You can also use tools like Oracle Machine Learning for advanced scoring if the fraud model is trained in-database. For streaming transaction data, you can use OCI Streaming or Oracle GoldenGate to feed data into the system. For a demonstration scenario, a simple direct query, like the one in the function sketch after this list, is sufficient.

  • AI and machine learning services

    OCI offers multiple options for implementing the fraud-detection logic. In phase 1, simple rules or anomaly checks are coded directly. Phase 2 introduces Oracle AI services:

    • OCI Generative AI provides access to large language models for the fraud analyzer agent's narrative generation and reasoning. OCI Generative AI is a fully managed service offering pretrained LLMs that can be easily integrated into applications. The fraud analyzer can call this service by using its software development kit (SDK) and API, sending a prompt that contains the transaction data and receiving a fraud explanation text in response (a sketch of such a call appears after this list).
    • OCI Anomaly Detection scores transactions for anomalies in real time, with a high score indicating potential fraud. After a model is trained on historical transaction data, the fraud analyzer agent simply invokes the anomaly detection API to get an anomaly score for a given transaction. Similarly, OCI offers Data Science and Oracle Machine Learning for training custom fraud models, such as gradient boosting or graph algorithms. You can deploy models, such as an XGBoost model for fraud, as endpoints by using Data Science model deployment so that an agent can invoke them. To simplify a demonstration, you can bypass the complex model and use a small rule set or synthetic scoring function directly (see the scoring sketch after this list). The architecture supports swapping in a sophisticated machine learning model later without changing the orchestration.
    • Oracle Cloud Infrastructure Language provides text analytics in cases where you also need to analyze unstructured notes or communications in a fraud case. For the primary use case, however, structured data and the LLM provide the needed functionality.

  • Networking and integration

    Virtual cloud network (VCN) configuration and an OCI service gateway ensure that the orchestrator and the agents running on OCI Functions and OCI Compute instances can securely talk to the database and AI services without exposing data over the public internet. OCI Identity and Access Management (IAM) controls access so that only the orchestrator and agents can invoke each other's APIs and access the database. This is important for maintaining security, particularly in a financial context. For demonstration purposes, you can also set up monitoring by using OCI Logging to track agent function executions and OCI Application Performance Monitoring traces to show the end-to-end flow latency.

  • Client interface

    The user interacts with the system through a simple web or mobile front end that calls OCI API Gateway to trigger an analysis, or through an Oracle Digital Assistant (chatbot) interface for a more interactive demonstration. For example, an analyst could interact with the fraud analyzer agent by providing a chatbot prompt such as “Investigate transaction #123”, and the system responds with the analysis. Oracle Digital Assistant can be an optional addition to showcase a conversational front end, but the core use case might simply display the results on a dashboard or send an email alert by using OCI Notifications.
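
The following sketch shows how the data retrieval agent described above could be implemented as an OCI function in Python. It is a minimal illustration only: the fdk handler pattern and the python-oracledb driver are standard, but the environment variable names, table, and column names are assumptions that you would adapt to your own schema and connection setup.

    # Hypothetical data retrieval agent packaged as an OCI function (Python).
    # DB_USER, DB_PASSWORD, DB_DSN, and the TRANSACTIONS table are placeholders.
    import io
    import json
    import os

    import oracledb
    from fdk import response


    def handler(ctx, data: io.BytesIO = None):
        # The orchestrator passes query parameters as a JSON payload.
        payload = json.loads(data.getvalue()) if data else {}
        account_id = payload.get("account_id")
        limit = int(payload.get("limit", 20))

        with oracledb.connect(
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            dsn=os.environ["DB_DSN"],
        ) as conn:
            with conn.cursor() as cur:
                cur.execute(
                    """SELECT txn_id, txn_time, amount, merchant, channel
                         FROM transactions
                        WHERE account_id = :acct
                        ORDER BY txn_time DESC
                        FETCH FIRST :lim ROWS ONLY""",
                    acct=account_id,
                    lim=limit,
                )
                cols = [c[0].lower() for c in cur.description]
                rows = [dict(zip(cols, r)) for r in cur.fetchall()]

        # Return JSON that the orchestrator can pass to the fraud analyzer agent.
        return response.Response(
            ctx,
            response_data=json.dumps(
                {"account_id": account_id, "transactions": rows}, default=str
            ),
            headers={"Content-Type": "application/json"},
        )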
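
The next sketch illustrates how the fraud analyzer agent could request a narrative explanation from OCI Generative AI by using the OCI Python SDK's chat API. The service endpoint, compartment and model OCIDs, and prompt wording are placeholders; verify the model and request format against the current SDK documentation for your region.

    # Illustrative OCI Generative AI chat call for the fraud explanation narrative.
    # The endpoint, OCIDs, and prompt are placeholders; adapt them to your tenancy.
    import oci

    config = oci.config.from_file()  # or a resource principal signer inside OCI Functions
    llm_client = oci.generative_ai_inference.GenerativeAiInferenceClient(
        config,
        service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    )


    def explain_fraud(transactions_json: str, fraud_score: float) -> str:
        prompt = (
            "You are a fraud analyst. Given the transactions below and a fraud score "
            f"of {fraud_score:.2f}, explain in plain language whether the activity looks "
            f"fraudulent and why.\n\nTransactions:\n{transactions_json}"
        )
        models = oci.generative_ai_inference.models
        details = models.ChatDetails(
            compartment_id="ocid1.compartment.oc1..example",
            serving_mode=models.OnDemandServingMode(
                model_id="ocid1.generativeaimodel.oc1..example"
            ),
            chat_request=models.GenericChatRequest(
                messages=[models.UserMessage(content=[models.TextContent(text=prompt)])],
                max_tokens=600,
                temperature=0.2,
            ),
        )
        result = llm_client.chat(details)
        # The first choice of the generic chat response carries the generated explanation.
        return result.data.chat_response.choices[0].message.content[0].text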
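
As noted for the AI services, a demonstration can start with a small synthetic rule set instead of a trained model. The sketch below is one such stand-in; the thresholds and field names are invented for illustration, and the function can later be replaced by an OCI Anomaly Detection call or a Data Science model endpoint without touching the orchestration.

    # Phase 1 stand-in for a trained fraud model: a tiny synthetic rule set.
    # Thresholds and field names are illustrative only.
    def score_transaction(txn: dict, history: list[dict]) -> dict:
        score = 0.0
        reasons = []

        avg_amount = sum(t["amount"] for t in history) / len(history) if history else 0.0
        if avg_amount and txn["amount"] > 5 * avg_amount:
            score += 0.5
            reasons.append("amount is more than 5x the account's recent average")
        if txn.get("country") not in {t.get("country") for t in history}:
            score += 0.3
            reasons.append("transaction comes from a country not seen before on this account")
        if txn.get("channel") == "card_not_present" and txn["amount"] > 1000:
            score += 0.2
            reasons.append("large card-not-present payment")

        return {"fraud_score": min(score, 1.0), "reasons": reasons}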

Phase 1: Implement Orchestration by Using MCP

In the initial implementation, the goal is to ensure that the system can orchestrate multiple agents and can integrate with Oracle systems, prior to adding complex AI logic.

To accomplish this goal, the plan emphasizes agent orchestration mechanics and uses Google's MCP Toolbox for Databases (also called Gen AI Toolbox) as a starting point for the Model Context Protocol (MCP) orchestrator. The toolbox, an open-source code base on GitHub, is an MCP server designed to connect LLM-based agents with SQL databases. This architecture adapts the toolbox for Oracle Cloud Infrastructure (OCI) by plugging in Oracle-specific tools and deploying it on OCI.

Phase 1 of this plan creates a working orchestration backbone on OCI that successfully integrates with an Oracle database and includes a central MCP server (Gen AI Toolbox) orchestrating at least two agents (data fetch and analysis). Phase 2 of this plan introduces more advanced reasoning.

  • Tool Definitions

    In Gen AI Toolbox, a tool is an action that an agent can perform, such as running a specific SQL query. A tool is defined declaratively, for example by using YAML, and is accessed through the MCP server. The current use case creates a similar tool registry for actions such as “Fetch recent transactions for user” or “Get policy details”, with each action mapped to an Oracle database query or to an OCI function call (a simplified registry sketch appears after this list). The registry allows an agent to invoke these tools by name. The benefit of a configuration-driven approach is flexibility: you can update or add new tools without redeploying the whole application. For demonstration purposes, the orchestration in phase 1 can be more scripted, with tools called in sequence, but the architecture lays the groundwork for phase 2, where the agent can dynamically decide which tools to use.

  • Orchestrator Framework

    The Gen AI Toolbox works with agent orchestration frameworks such as LangChain/LangGraph and LlamaIndex AgentWorkflow. In this implementation, the MCP server coordinates the sequence of agent calls. You can script the orchestration as a simple workflow: call the data agent, then call the fraud agent, then provide the decision (a sketch of such a scripted workflow appears after this list). You could do this with custom code or by using an existing workflow library. It is useful to leverage any Gen AI Toolbox client libraries or patterns for maintaining context. The orchestrator uses the result from the data agent as the input to the fraud agent's prompt or input structure. Phase 1 ensures that the workflow functions correctly and that context is preserved accurately between steps. The result is an orchestration layer that simplifies running a system of one or more agents, maintains context across steps, and supports multi-agent workflows similar to AgentWorkflow, but implemented in this case with OCI services.

  • Observability and Logging

    Gen AI Toolbox comes with integrated OpenTelemetry support for monitoring tool usage. OCI Logging records each agent and tool call to aid in debugging and to provide visibility into what each agent does during demonstrations. In phase 1, you can use the OCI Console or logs to show how the orchestrator calls a function to query the database, what was returned, and how the fraud analysis was performed. This transparency is appealing to stakeholders and helps build trust in the AI decisions.
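
To make the tool registry idea concrete, the following Python sketch shows a configuration-driven registry and dispatcher. It is deliberately simplified: in the actual Gen AI Toolbox, tools are declared in YAML and served over MCP, whereas here the registry is an in-process dictionary, and the table, column, and environment variable names are assumptions.

    # Simplified, illustrative tool registry: each tool maps a name to a
    # parameterized SQL statement. Table and column names are placeholders.
    import os
    import oracledb

    TOOLS = {
        "get_recent_transactions": {
            "description": "Fetch the most recent transactions for an account.",
            "sql": """SELECT txn_id, txn_time, amount, merchant
                        FROM transactions
                       WHERE account_id = :account_id
                       ORDER BY txn_time DESC
                       FETCH FIRST 20 ROWS ONLY""",
            "parameters": ["account_id"],
        },
        "get_policy_details": {
            "description": "Return the policy record for a policy number.",
            "sql": "SELECT * FROM policies WHERE policy_no = :policy_no",
            "parameters": ["policy_no"],
        },
    }


    def invoke_tool(name: str, **params):
        # Look up the tool by name and run its query with only the declared parameters.
        tool = TOOLS[name]
        binds = {key: params[key] for key in tool["parameters"]}
        with oracledb.connect(
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            dsn=os.environ["DB_DSN"],
        ) as conn, conn.cursor() as cur:
            cur.execute(tool["sql"], binds)
            cols = [c[0].lower() for c in cur.description]
            return [dict(zip(cols, row)) for row in cur.fetchall()]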
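
The scripted phase 1 workflow can then be a short function that calls the data agent, feeds its result to the fraud agent, and returns a decision. The sketch below composes the invoke_tool and score_transaction sketches shown elsewhere in this section; the escalation threshold is an arbitrary example.

    # Phase 1 scripted orchestration: data agent -> fraud agent -> decision.
    # Reuses the invoke_tool and score_transaction sketches; the 0.7 threshold is arbitrary.
    def investigate(account_id: str) -> dict:
        # Step 1: data retrieval agent.
        transactions = invoke_tool("get_recent_transactions", account_id=account_id)
        if not transactions:
            return {"account_id": account_id, "decision": "no_data"}

        # Step 2: fraud analyzer agent, with the data agent's result as its input.
        latest, history = transactions[0], transactions[1:]
        analysis = score_transaction(latest, history)

        # Step 3: decision, preserving the context from both steps for logging and review.
        return {
            "account_id": account_id,
            "transaction": latest,
            "fraud_score": analysis["fraud_score"],
            "reasons": analysis["reasons"],
            "decision": "escalate" if analysis["fraud_score"] >= 0.7 else "clear",
        }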

Phase 2: Implement Enhanced LLM Intelligence

The second phase of this plan integrates advanced large language model (LLM) reasoning and additional agents to make the system more autonomous and insightful.

The foundational architecture remains largely the same as in phase 1, but you simply replace or augment the internals of agents and add new ones. The MCP server continues to be the integration glue, brokering between the LLM agent and the OCI database, functions, and so on. This phased approach shows incremental progress by first demonstrating the backbone in OCI with simple logic and then by plugging in a powerful LLM to make the system far more intelligent in explaining and handling fraud cases.

  • LLM-driven agent reasoning

    Instead of performing actions in a fixed sequence, the fraud analyzer agent in phase 2 uses OCI Generative AI and an LLM to dynamically decide which tools to call and when. For example, given an open-ended instruction (“Investigate this claim for fraud”), the agent's prompt could enumerate the available tools (database queries, sanction check, and so on), and the LLM could plan a sequence of calls. This is akin to a ReAct-style agent or to using LangChain's planning abilities (a minimal ReAct-style loop is sketched after this list). The MCP orchestrator facilitates these agent-to-tool interactions, looping the LLM's decisions through tool executions and returning results. The Oracle Cloud Infrastructure Generative AI Agents service emphasizes tool orchestration to handle complex workflows. In the current architecture, we implement that concept by combining the Gen AI Toolbox approach with OCI's LLM. You can incorporate frameworks like LangChain (supported in Oracle Cloud Infrastructure Data Science) to help manage prompts, such as “Tool: GetRecentTransactions(user_id=123)”, and to parse LLM outputs that the orchestrator can execute. This capability makes the fraud analyzer a more cognitive agent that is capable of multistep reasoning and enables more complex fraud investigation dialogues.

  • Additional agents and tools

    You can introduce additional agents to broaden the system's capabilities. For example, a graph analysis agent could use network analytics, such as a graph database or an ML model, to find relationships between entities, such as common emails or devices across accounts used by fraud rings. Another example is an explanation agent that specifically checks the outputs for compliance or simplifies the language for a customer-facing explanation. Each agent uses a specific OCI service; a graph agent, for example, might use Oracle Graph or a network analysis library in Data Science. The MCP orchestrator can coordinate these agents in parallel or in sequence as needed. For example, after the fraud analyzer agent concludes, the orchestrator could trigger a notification agent that uses OCI Email Delivery or OCI Notifications to send a report to investigators (a notification agent sketch appears after this list). The orchestrator functions as a conductor: adding more agents enriches the analysis without requiring a rewrite of the core logic.

  • Machine learning feedback loop

    Phase 2 can incorporate learning over time. All outcomes, whether fraud is confirmed or not, can be fed into an Oracle Autonomous Data Warehouse and used to retrain models through Data Science pipelines. Although this feedback loop is not part of the real-time agent orchestration, it closes the loop in the solution's lifecycle. By demonstrating that OCI can not only perform detection but also improve it by using historical data with AutoML or Oracle Machine Learning, you can show continuous improvement to stakeholders.
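
To illustrate the LLM-driven reasoning described for the fraud analyzer agent, the following sketch shows a minimal ReAct-style loop: the prompt enumerates the registered tools, the LLM replies with either a tool call or a final answer, and the orchestrator executes the call and feeds the observation back. It assumes the TOOLS registry and invoke_tool dispatcher from the phase 1 sketch and an llm_chat(prompt) helper that wraps the OCI Generative AI call shown earlier; the Action/Final Answer convention is an illustrative protocol, not a fixed API.

    # Minimal ReAct-style loop: the LLM picks the next tool, the orchestrator runs it.
    # Assumes TOOLS/invoke_tool from the phase 1 sketch and an llm_chat(prompt) helper.
    import json
    import re


    def run_agent(task: str, llm_chat, max_steps: int = 5) -> str:
        tool_list = "\n".join(f"- {name}: {t['description']}" for name, t in TOOLS.items())
        transcript = (
            "You are a fraud investigation agent. You may use these tools:\n"
            + tool_list
            + '\nReply with either Action: {"tool": "<name>", "params": {...}} '
            + "on one line, or Final Answer: <your conclusion>.\n\n"
            + "Task: " + task + "\n"
        )

        for _ in range(max_steps):
            reply = llm_chat(transcript)
            if "Final Answer:" in reply:
                return reply.split("Final Answer:", 1)[1].strip()

            match = re.search(r"Action:\s*(\{.*\})", reply, re.DOTALL)
            if not match:
                return reply  # the model answered directly without requesting a tool

            call = json.loads(match.group(1))
            observation = invoke_tool(call["tool"], **call.get("params", {}))
            transcript += (
                "\n" + reply + "\nObservation: " + json.dumps(observation, default=str) + "\n"
            )

        return "Stopped: step limit reached without a final answer."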
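
A notification agent of the kind mentioned above can be a very small wrapper around OCI Notifications. The sketch below publishes the investigation result to a topic by using the OCI Python SDK; the topic OCID is a placeholder, and the signer setup depends on where the agent runs (for example, a resource principal inside OCI Functions).

    # Hypothetical notification agent: publish a fraud case summary to an
    # OCI Notifications topic so that subscribed investigators receive it.
    import json
    import oci


    def notify_investigators(case: dict, topic_id: str = "ocid1.onstopic.oc1..example"):
        config = oci.config.from_file()  # placeholder; use an appropriate signer in production
        ons_client = oci.ons.NotificationDataPlaneClient(config)
        ons_client.publish_message(
            topic_id,
            oci.ons.models.MessageDetails(
                title=f"Fraud alert for account {case['account_id']}",
                body=json.dumps(case, indent=2, default=str),
            ),
        )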