FAQs for Oracle Analytics Generative AI

This topic provides answers to frequently asked questions about how generative AI is used with Oracle Analytics.

Does Oracle Analytics generative AI use more than one model or a system of interdependent models?

Oracle Analytics does not implement chained or interdependent generative AI models. Each model operates as an independent component, which simplifies performance evaluation, access controls, and risk management processes. Oracle continuously evaluates the most suitable models and architectures for a wide range of analytics use cases and may update or change its preferred models and architecture over time.

What category of models are used in the product? Is the model developed in-house or by a third party?

Oracle Analytics generative AI capabilities leverage foundation models from established AI providers, configured for enterprise deployments. For a current list of our generative AI models, see Pretrained Generative AI Models.

Is the model monitored and tested on an ongoing basis? How frequently?

Models are re-validated with each new release, and performance issues are addressed as they're identified. During development, Oracle uses standard machine learning metrics, including precision, recall, and F1 scores, to validate AI models before deployment. Oracle Analytics evaluates models using synthetic data combined with a set of manually curated data. Model evaluation focuses on accuracy and drift by assessing the model's ability to generate responses that match the established ground truth. Results are compared against established benchmarks to identify mismatches (previously successful utterances that now fail) and matches (previously failed utterances that now succeed). Any mismatch is classified as a regression and serves as a control gate for code changes, model revisions, and deployment changes.
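To make the control gate concrete, the following is a minimal, hypothetical sketch of this kind of regression check; it is illustrative only and not Oracle's actual tooling. It assumes each evaluation run records, per utterance, whether the generated response matched the established ground truth:

```python
# Hypothetical sketch of a release-gate regression check (not Oracle's
# implementation). Each run maps an utterance ID to whether the generated
# response matched the established ground truth.

def regression_gate(baseline: dict[str, bool], current: dict[str, bool]) -> bool:
    """Return True if the gate passes, that is, no regressions were found."""
    # Mismatches: utterances that previously matched ground truth but now fail.
    regressions = [
        utterance
        for utterance, passed in baseline.items()
        if passed and not current.get(utterance, False)
    ]
    # Matches: utterances that previously failed but now succeed.
    improvements = [
        utterance
        for utterance, passed in current.items()
        if passed and not baseline.get(utterance, False)
    ]
    print(f"{len(regressions)} regression(s), {len(improvements)} improvement(s)")
    # Any regression blocks the code change, model revision, or deployment.
    return not regressions

# Example: 'q2' previously passed and now fails, so the gate blocks release.
baseline = {"q1": True, "q2": True, "q3": False}
current = {"q1": True, "q2": False, "q3": True}
assert regression_gate(baseline, current) is False
```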

Does Oracle have processes to communicate changes to their models and output?

As part of the Oracle Analytics release process, we notify you of changes to AI models or outputs through What’s New for Oracle Analytics Cloud. Tenancy administrators can also enable user subscriptions, ensuring broader awareness within the organization. In addition, anyone can stay informed about new features and upcoming releases by subscribing to the Oracle Analytics Weekly Email via the Oracle Analytics Community site.

Oracle Analytics also uses the standard Oracle Cloud change management policies that are documented in Oracle Cloud Hosting and Delivery Policies.

Does Oracle's AI Policy include a review process for legal and risk functions?

Product development teams within Oracle follow guidance and mandatory directives from Global Product Security, which maintains the Oracle Secure Coding Standards (SCS). A section of these standards is devoted to AI/ML and contains a number of security directives, split into the following categories:

  • AI Governance - directives cover steps teams must take to establish proper oversight procedures for their machine learning models.
  • AI Infrastructure - directives apply to teams configuring and using the infrastructure needed for machine learning model building and deployment.
  • AI Development - directives apply to teams involved in the development of machine learning models, providing guidance on best practices for model development, testing, and deployment.
  • AI Data - directives cover security steps for teams involved in collecting, processing, and managing data used in machine learning models.

As part of these mandated Secure Coding Standards, Oracle product development teams regularly assess their projects for AI-specific security risks and vulnerabilities.

Security for the model, the Oracle Analytics AI architecture, and its integration into Oracle Analytics has undergone a Cloud Security, Standards, and Architecture Program (CSSAP) security review. CSSAP is a comprehensive security review process developed by Corporate Security Architecture, Global Information Security, Global Product Security, Oracle Global IT, and Oracle's IT organizations to provide a thorough information security management assessment. For additional information, see Corporate Security Architecture Oversight.

Does Oracle collect user data or similar metrics to measure differences for input and output data relative to test environments?

Currently, Oracle Analytics does not capture or collect explicit user feedback for this purpose. Because the model is not tuned or trained on customer data, testing is limited to the default model benchmarks noted above. To summarize: models are re-validated with each new release, and any identified performance issues are addressed accordingly. During development, Oracle applies standard machine learning metrics, such as precision, recall, and F1 scores, to validate AI models prior to deployment.
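For reference, the following is a generic sketch of how these standard metrics are computed for a binary pass/fail evaluation (for example, whether a response matches the ground truth). It is illustrative only and does not represent Oracle's evaluation tooling:

```python
# Generic computation of precision, recall, and F1 over binary labels,
# where 1 means the response matched the ground truth. Illustrative only.

def precision_recall_f1(y_true: list[int], y_pred: list[int]) -> tuple[float, float, float]:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: four test utterances, one false negative and one false positive.
print(precision_recall_f1([1, 1, 1, 0], [1, 1, 0, 1]))  # ≈ (0.667, 0.667, 0.667)
```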

Does Oracle incorporate any external inputs or third-party tools with its model?

Oracle Analytics does not incorporate any external inputs in its interactions with the model. Model interactions are strictly confined to direct communication between Oracle Analytics and the model itself.

Is the model dependent on any third-party tools or solutions that could make it difficult to migrate the model to a different environment?

The model is deployed as an Oracle Cloud Infrastructure service, with the same foundation model available across multiple service instances. No model migration is required between instances.

How does Oracle respond to AI system incidents?

Oracle Analytics uses the standard Oracle Cloud Incident Response policies that are documented in the Service Level Agreement section of Oracle Cloud Hosting and Delivery Policies.

How does Oracle test the quality of system explanations?

Oracle Analytics subjects all code changes, model revisions, and deployment changes to a release gate that assesses whether generated responses match the established ground truth in our evaluation benchmark. The ground truth in the benchmark includes all system explanations generated as part of the response. Any mismatch is classified as a regression and serves as a control gate.

How does Oracle assess system outputs for trustworthiness and fairness?

Oracle Analytics relies on the OCI Gen AI infrastructure for its foundation models and does not explicitly train the models. OCI Gen AI uses best practices to ensure trustworthiness and prevent bias in its foundation models.

Additionally, Oracle Analytics currently offers coarse-grained control over the Large Language Model's (LLM) contribution to the responses surfaced to end users. This mechanism prevents the LLM from surfacing information directly to end users, so the generated responses are produced wholly by Oracle Analytics and can therefore be trusted. The Oracle Analytics service administrator can also disable any or all AI-driven features at the individual feature level. For more information, see About Generative AI Configuration.

Is there a disaster recovery and contingency plan for instances when the model is unavailable?

Oracle Analytics relies on the OCI Gen AI infrastructure for its foundation models. Resiliency and fault tolerance of the OCI infrastructure are documented in Oracle Cloud Hosting and Delivery Policies. For more details about OCI Gen AI dedicated clusters, see Creating a Dedicated AI Cluster in Generative AI for Hosting Models and the Oracle PaaS and IaaS Public Cloud Services Pillar Document.

How does Oracle test the model for consistency in different environments?

All customer and development models are deployed on the same Oracle Cloud Infrastructure framework. Internal testing environments, including pre-production and production testing environments, maintain the same configuration state as customer environments.

Does Oracle have an established governance policy for the model?

Oracle Analytics leverages foundation models deployed through Oracle Cloud Infrastructure's Generative AI Service. These models are used in their native state without modification; Oracle Analytics does not perform training, fine-tuning, or model customization on the underlying foundation models. Accordingly, the model governance policies used by the OCI Gen AI infrastructure also apply to Oracle Analytics.

Has Oracle established policies and procedures that define roles and responsibilities for human oversight of deployed models?

Oracle Analytics has a robust process for model assessment using synthetic data that serves as a control gate for any code changes, model revisions, or deployment changes. Model assessment runs are evaluated by a combination of automation and human oversight. Beyond that, Oracle Analytics does not perform human oversight of deployed models.