1 Enterprise Deployment Overview

This chapter provides an overview of the enterprise topology for Oracle Business Intelligence.

Important:

Oracle strongly recommends that you read Oracle Fusion Middleware Release Notes for any additional installation and deployment considerations before starting the setup process.

This chapter contains the following topics:

  • Section 1.1, "What Is an Enterprise Deployment?"

  • Section 1.2, "Terminology"

  • Section 1.3, "Benefits of Oracle Recommendations"

  • Section 1.4, "Hardware Requirements"

  • Section 1.5, "Enterprise Deployment Reference Topology"

1.1 What Is an Enterprise Deployment?

This Enterprise Deployment Guide defines an architectural blueprint that captures Oracle's recommended best practices for a highly available and secure Oracle Business Intelligence deployment. The best practices described in this blueprint use Oracle products from across the technology stack, including Oracle Database, Oracle Fusion Middleware, and Oracle Enterprise Manager. The resulting enterprise deployment can be readily scaled out to support increasing capacity requirements.

In particular, an Oracle Business Intelligence enterprise deployment:

  • Considers various business service level agreements (SLAs) to make high-availability best practices as widely applicable as possible

  • Leverages database grid servers and storage grids with low-cost storage to provide highly resilient, lower-cost infrastructure

  • Uses results from extensive performance impact studies for different configurations to ensure that the high-availability architecture is optimally configured to perform and scale to business needs

  • Enables control over the length of time to recover from an outage and the amount of acceptable data loss from a natural disaster

  • Uses Oracle best practices and recommended architecture that are independent of hardware and operating systems

For more information on high availability practices, go to:

http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm

Note:

This document focuses on enterprise deployments in Linux environments, but enterprise deployments can also be implemented in UNIX and Windows environments.

1.2 Terminology

The following terms are used in this document:

  • Oracle home: Contains installed files necessary to host a specific product. For example, the Oracle Business Intelligence Oracle home contains the binary and library files for Oracle Business Intelligence. An Oracle home resides within the directory structure of the Middleware home. Each Oracle home can be associated with multiple Oracle instances or Oracle WebLogic Server domains.

  • WebLogic Server home: Contains installed files necessary to host an Oracle WebLogic Server. The WebLogic Server home directory is a peer of the Oracle home directories and resides within the directory structure of the Middleware home.

  • Middleware home: Consists of the WebLogic Server home, and, optionally, one or more Oracle homes. A Middleware home can reside on a local file system or on a remote shared disk that is accessible through NFS.

  • Oracle instance: Contains one or more active middleware system components, such as Oracle BI Server, Oracle BI Presentation Services, Oracle HTTP Server, or Oracle Internet Directory. You determine which components are part of an instance, either at install time or by creating and configuring an instance at a later time. An Oracle instance contains files that can be updated, such as configuration files, log files, and temporary files.

  • failover: The process by which a high availability system continues offering services to its consumers after one of its members fails unexpectedly (unplanned downtime). If the system is an active-passive system, the passive member is activated during the failover operation and consumers are directed to it instead of the failed member. The failover process can be performed manually, or it can be automated by setting up hardware cluster services to detect failures and move cluster resources from the failed node to the standby node. If the system is an active-active system, the failover is performed by the load balancer entity serving requests to the active members. If an active member fails, the load balancer detects the failure and automatically redirects requests for the failed member to the surviving active members. See Oracle Fusion Middleware High Availability Guide for information on active-active and active-passive systems.

  • failback: The process that occurs after a system undergoes a successful failover operation. In the failback process, the original failed member is repaired over time and is then reintroduced into the system as a standby member. Optionally, a failback process can be initiated to activate this member and deactivate the other. This process reverts the system back to its pre-failure configuration.

  • hardware cluster: A collection of computers that provides a single view of network services (for example, an IP address) or application services (for example, databases and Web servers) to clients of these services. Each node in a hardware cluster is a standalone server that runs its own processes. These processes can communicate with one another to form what looks like a single system that cooperatively provides applications, system resources, and data to users.

    A hardware cluster achieves high availability and scalability with specialized hardware (cluster interconnect, shared storage) and software (health monitors, resource monitors). (The cluster interconnect is a private link used by the hardware cluster for heartbeat information to detect node death.) Due to the need for specialized hardware and software, hardware clusters are commonly provided by hardware vendors such as Sun, HP, IBM, and Dell. While the number of nodes that can be configured in a hardware cluster is vendor dependent, for the purpose of Oracle Fusion Middleware high availability, only two nodes are required. Hence, this document assumes a two-node hardware cluster for high availability solutions employing a hardware cluster.

  • cluster agent: The software that runs on a node member of a hardware cluster that coordinates availability and performance operations with other nodes. Clusterware provides resource grouping, monitoring, and the ability to move services. A cluster agent can automate the service failover.

  • clusterware: Software that manages the operations of the members of a cluster as a system. It enables you to define a set of resources and services to monitor through a heartbeat mechanism between cluster members and to move these resources and services to a different member in the cluster as efficiently and transparently as possible.

  • shared storage: The storage subsystem that is accessible by all the computers in the enterprise deployment. Among other things, the following are located on the shared disk:

    • Middleware home software

    • Administration Server domain home

    • JMS persistent stores

    • Transaction logs (Tlogs), where applicable

    Managed Server homes can also optionally be located on the shared disk. The shared storage can be Network Attached Storage (NAS), a Storage Area Network (SAN), or any other storage system that multiple nodes can access simultaneously for both reading and writing.

  • primary node: The node that is actively running an Oracle Fusion Middleware instance at any given time and that has been configured to have a backup/secondary node. If the primary node fails, the applicable Oracle Fusion Middleware components are failed over to the secondary node. This failover can be manual, or automated by using clusterware for the Administration Server. For a server migration-based scenario, WebLogic Whole Server Migration is used for automated failover.

  • secondary node: The node that is the backup node for an Oracle Fusion Middleware instance. This is where the active instance fails over when the primary node is no longer available. See the definition for primary node in this section.

  • network host name: A name assigned to an IP address either through the /etc/hosts file or through DNS resolution. This name is visible in the network where the computer to which it refers is connected. Often, the network host name and physical host name are identical. However, each computer has only one physical host name, but may have multiple network host names. Thus, a computer's network host name may not always be its physical host name.

  • physical host name: The "internal name" of the current computer. On UNIX, this is the name returned by the hostname command. Note that this document differentiates between the terms physical host name and network host name.

    Oracle Fusion Middleware uses the physical host name to reference the local host. During installation, the installer automatically retrieves the physical host name from the current computer and stores it in the Oracle Fusion Middleware configuration metadata on disk. A short sketch contrasting the physical host name with a network host name follows this terminology list.

  • physical IP: The IP address of a computer on the network. It is normally associated with the physical host name of the computer (see the definition for physical host name). In contrast to a virtual IP, it is always associated with the same computer on a network.

  • switchover: A process that occurs during normal operation when active members of a system might require maintenance or upgrading. A switchover process can be initiated to enable a substitute member to take over the workload performed by the member that requires maintenance or upgrading, which undergoes planned downtime. The switchover operation ensures continued service to consumers of the system.

  • switchback: The process that occurs after a system undergoes a successful switchover operation, in which a member of the system is deactivated for maintenance or upgrading. When the maintenance or upgrade is completed, the system can undergo a switchback operation to activate the upgraded member and bring the system back to the pre-switchover configuration.

  • virtual host name: A network addressable host name that maps to one or more physical computers through a load balancer or a hardware cluster. For load balancers, the name "virtual server name" is used interchangeably with virtual host name in this document. A load balancer can hold a virtual host name on behalf of a set of servers, and clients communicate indirectly with the computers using the virtual host name. A virtual host name in a hardware cluster is a network host name assigned to a cluster virtual IP. Because the cluster virtual IP is not permanently attached to any particular node of a cluster, the virtual host name is not permanently attached to any particular node either.

    Note:

    Whenever the term "virtual host name" is used in this document, it is assumed to be associated with a virtual IP address. In cases where just the IP address is needed or used, it is explicitly stated.

  • virtual IP: An IP address that can be assigned to a hardware cluster or a load balancer. To present a single system view of a cluster to network clients, a virtual IP serves as an entry point IP address to the group of servers that are members of the cluster. A virtual IP is also called a cluster virtual IP or a load balancer virtual IP.

    A hardware cluster uses a cluster virtual IP to present to the outside world the entry point into the cluster (it can also be set up on a standalone computer). The hardware cluster's software manages the movement of this IP address between the two physical nodes of the cluster while clients connect to this IP address without the need to know which physical node this IP address is currently active on. In a typical two-node hardware cluster configuration, each computer has its own physical IP address and physical host name, while there could be several cluster IP addresses. These cluster IP addresses float or migrate between the two nodes. The node with current ownership of a cluster IP address is active for that address.

    A load balancer also uses a virtual IP as the entry point to a set of servers. These servers tend to be active at the same time. This virtual IP address is not assigned to any individual server but to the load balancer which acts as a proxy between servers and their clients.
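
The difference between a physical host name and a network host name can be pictured with a short Python sketch. This sketch is illustrative only and is not part of the configuration procedure; the network host name in it is a placeholder that is assumed to resolve through /etc/hosts or DNS on the computer where the sketch runs.

    import socket

    # The physical host name is the OS-level name of the local computer
    # (compare the name returned by the UNIX hostname command).
    print("physical host name:", socket.gethostname())

    # A network host name is any name that resolves to one of this
    # computer's IP addresses through /etc/hosts or DNS. The name below
    # is a placeholder; substitute a name defined in your environment.
    network_name = "APPHOST1.mycompany.com"
    print(network_name, "resolves to", socket.gethostbyname(network_name))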

In addition to the terms defined in this section, this Enterprise Deployment Guide assumes knowledge of general Oracle Fusion Middleware and Oracle WebLogic Server concepts and architecture. See Oracle Fusion Middleware Administrator's Guide for more information.

1.3 Benefits of Oracle Recommendations

The Oracle Fusion Middleware configurations discussed in this document are designed to ensure security of all invocations, maximize hardware resources, and provide a reliable, standards-compliant system for Oracle Business Intelligence.

The security and high availability benefits of the Oracle Fusion Middleware configurations are realized through isolation in firewall zones and replication of software components.

This section contains the following topics:

  • Section 1.3.1, "Built-in Security"

  • Section 1.3.2, "High Availability"

1.3.1 Built-in Security

The Enterprise Deployment architectures are secure because every functional group of software components is isolated in its own demilitarized zone (DMZ), and all traffic is restricted by protocol and port. A DMZ is a perimeter network that exposes external services to a larger untrusted network.

The following characteristics ensure security at all needed levels and a high level of standards compliance:

  • External load balancers are configured to redirect all external communication received on port 80 to port 443. (A minimal sketch of this redirect behavior follows this list.)

    Note:

    You can find a list of validated load balancers and their configuration on the Oracle Technology Network at:

    http://www.oracle.com/technology/products/ias/hi_av/Tested_LBR_FW_SSLAccel.html

  • Communication from external clients does not go beyond the Load Balancing Router level.

  • No direct communication from the Load Balancing Router to the data tier is allowed.

  • Components are separated into different protection zones: the Web tier, the application tier, and the data tier.

  • Direct communication across two firewalls at any one time is prohibited.

  • If a communication begins in one firewall zone, it must end in the next firewall zone.

  • Oracle Internet Directory is isolated in the data tier.

  • Identity Management components are in a separate subnet.

  • All communication between components across protection zones is restricted by port and protocol, according to firewall rules.
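
The first item in this list, the port 80 to port 443 redirect, can be illustrated with a minimal Python sketch. In the enterprise deployment this redirect is configured on the external load balancer itself; the sketch below only mimics the behavior, and the fallback host name in it is a placeholder.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectHandler(BaseHTTPRequestHandler):
        """Answer every plain-HTTP request with a redirect to HTTPS."""
        def do_GET(self):
            host = self.headers.get("Host", "bi.mycompany.com").split(":")[0]
            self.send_response(301)
            self.send_header("Location", "https://%s%s" % (host, self.path))
            self.end_headers()

    # Port 8080 stands in for port 80, which requires root privileges.
    HTTPServer(("", 8080), RedirectHandler).serve_forever()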

1.3.2 High Availability

The enterprise deployment architectures are highly available, because each component or functional group of software components is replicated on a different computer, and configured for component-level high availability.

1.4 Hardware Requirements

Typical hardware requirements for the Enterprise Deployment on Linux operating systems are listed in Table 1-1.

For detailed requirements, or for requirements for other platforms, see the Oracle Fusion Middleware Installation Guide for that platform.

Table 1-1 Typical Hardware Requirements

Server      Processor                                   Disk            Memory   TMP Directory   Swap
Database    4 or more X Pentium, 1.5 GHz or greater    n x m (*)       6-8 GB   Default         Default
WEBHOSTn    2 or more X Pentium, 1.5 GHz or greater    10 GB           4 GB     Default         Default
APPHOSTn    1 dual-core Pentium, 2.8 GHz or greater    20 GB or more   8 GB     Default         Default

(*) n = number of disks, at least 4 (striped as one disk); m = size of each disk (minimum of 30 GB)


1.5 Enterprise Deployment Reference Topology

The instructions and diagrams in this document describe a reference topology, to which variations may be applied.

This document provides configuration instructions for a reference enterprise topology that uses Oracle Business Intelligence with Oracle Access Manager, as shown in Figure 1-1.

Figure 1-1 MyBICompany Topology with Oracle Access Manager


This section covers the following topics:

  • Section 1.5.1, "Oracle Identity Management"

  • Section 1.5.2, "Web Tier"

  • Section 1.5.3, "Application Tier"

  • Section 1.5.4, "Data Tier"

  • Section 1.5.5, "What to Install"

  • Section 1.5.6, "Unicast Requirement"

1.5.1 Oracle Identity Management

Integration with the Oracle Identity Management system is an important aspect of the enterprise deployment architecture. This integration provides features such as single sign-on, integration with Oracle Platform Security Services (OPSS), a centralized identity and credential store, and authentication for the WebLogic domain. The Identity Management enterprise deployment is separate from the one described in this guide and resides in its own domain. For more information on identity management in an enterprise deployment context, see Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management.

The primary interfaces to the Identity Management deployment are the LDAP traffic to the LDAP servers, the Oracle Access Protocol (OAP) traffic to the OAM Access Servers, and the HTTP redirection of authentication requests.

1.5.2 Web Tier

Nodes in the Web tier are located in the DMZ public zone.

In this tier, two nodes, WEBHOST1 and WEBHOST2, run Oracle HTTP Server configured with WebGate and mod_wl_ohs.

Through mod_wl_ohs, which allows requests to be proxied from Oracle HTTP Server to Oracle WebLogic Server, Oracle HTTP Server forwards requests to the Oracle WebLogic Server instances running in the application tier.

WebGate (an Oracle Access Manager component) in Oracle HTTP Server uses Oracle Access Protocol (OAP) to communicate with Oracle Access Manager running on OAMHOST2, in the Identity Management DMZ. WebGate and Oracle Access Manager are used to perform operations such as user authentication and querying user groups.

The Web tier also includes a load balancer router to handle external requests. External requests are sent to the virtual host names configured on the load balancer. The load balancer then forwards the requests to Oracle HTTP Server.

On the firewall protecting the Web tier, only the HTTP ports are open: 443 for HTTPS, and 80 for HTTP.

1.5.2.1 Load Balancer Requirements

This enterprise topology uses an external load balancer. This external load balancer should have the following features:

  • Ability to load-balance traffic to a pool of real servers through a virtual host name

    Clients access services using the virtual host name (instead of using actual host names). The load balancer can then load balance requests to the servers in the pool.

  • Port translation configuration

    This feature is necessary so that incoming requests on the virtual host name and port are directed to a different port on the back-end servers.

  • Monitoring of ports on the servers in the pool to determine the availability of a service (a minimal sketch of such a port check follows this list)

  • Ability to configure virtual server names and ports

    Note the following requirements:

    • The load balancer should allow configuration of multiple virtual servers. For each virtual server, the load balancer should allow configuration of traffic management on multiple ports. For example, for Oracle HTTP Server in the Web tier, the load balancer must be configured with a virtual server and ports for HTTP and HTTPS traffic.

    • The virtual server names must be associated with IP addresses and be part of your DNS. Clients must be able to access the external load balancer through the virtual server names.

  • Ability to detect node failures and immediately stop routing traffic to the failed node

  • Fault-tolerant mode

    It is highly recommended that you configure the load balancer to be in fault-tolerant mode.

  • Ability to configure the virtual server to return immediately to the calling client

    It is highly recommended that you configure the load balancer virtual server to return immediately to the calling client when the back-end services to which it forwards traffic are unavailable. This is preferred over the client disconnecting on its own after a timeout based on the TCP/IP settings on the client computer.

  • Sticky routing capability

    Sticky routing capability is the ability to route a client's requests to the same back-end server for the duration of a session. Examples of this include cookie-based persistence and IP-based persistence.

  • SSL acceleration

    The load balancer should be able to terminate SSL requests at the load balancer and forward traffic to the back-end real servers using the equivalent non-SSL protocol (for example, HTTPS to HTTP). Typically, this feature is called SSL acceleration and it is required for this EDG.
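
The port-monitoring and failure-detection items in the list above can be pictured with a short Python sketch. This is a simplified stand-in for logic that a real load balancer implements internally; the host names and the port number are placeholders.

    import socket

    # Hypothetical back-end pool; a real pool is defined on the load balancer.
    POOL = [("APPHOST1.mycompany.com", 9704), ("APPHOST2.mycompany.com", 9704)]

    def is_alive(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds in time."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def healthy_members(pool):
        """Route requests only to members whose service port answers."""
        return [member for member in pool if is_alive(*member)]

    print(healthy_members(POOL))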

1.5.3 Application Tier

Nodes in the application tier are located in the DMZ secure zone.

APPHOST1 and APPHOST2 run the Oracle WebLogic Server Administration Console and Oracle Enterprise Manager Fusion Middleware Control in an active-passive configuration. You can fail over the Administration Server manually (see Section 5.14, "Manually Failing Over the Administration Server to APPHOST2"); alternatively, you can configure the Administration Server with CFC/CRS to fail over automatically on a separate hardware cluster (not shown in this architecture).

The Oracle Business Intelligence Cluster Controller and Oracle BI Scheduler system components run on APPHOST1 and APPHOST2 in an active-passive configuration. The other Oracle Business Intelligence system components, Oracle BI Server, Oracle BI JavaHost, and Oracle BI Presentation Services, run on APPHOST1 and APPHOST2 in an active-active configuration. All system components are managed by Oracle Process Manager and Notification Server (OPMN) and do not run in the Managed Servers.

The Oracle Business Intelligence Java components, including Oracle Real-Time Decisions, Oracle BI Publisher, Oracle BI for Microsoft Office, and the Oracle BI Enterprise Edition Analytics application, run in the two Managed Servers on APPHOST1 and APPHOST2. Oracle Web Services Manager (Oracle WSM) is another Java component; it provides a policy framework to manage and secure Web services in the EDG topology. The Oracle WSM Policy Manager runs in an active-active configuration in the two Managed Servers on APPHOST1 and APPHOST2.

1.5.4 Data Tier

Nodes in the data tier are located in the most secured network zone (the intranet).

In this tier, an Oracle RAC database runs on the nodes CUSTDBHOST1 and CUSTDBHOST2. The database contains the schemas needed by the Oracle Business Intelligence components. The Oracle Business Intelligence components running in the application tier access this database.

On the firewall protecting the data tier, the database listener port (typically, 1521) must be open. The LDAP ports (typically, 389 and 636) must also be open for traffic to the LDAP store in the Identity Management deployment.
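
As an illustration of the listener-port rule above, the following hypothetical Python fragment connects to the database through port 1521. The host, service name, schema name, and password are placeholders, and the cx_Oracle client library is an assumption; the Oracle Business Intelligence components themselves connect through their configured data sources, not through code like this.

    import cx_Oracle  # assumes the Oracle client libraries are installed

    # Placeholder connect descriptor; only the listener port (1521) needs
    # to be open on the firewall protecting the data tier.
    dsn = cx_Oracle.makedsn("CUSTDBHOST1.mycompany.com", 1521,
                            service_name="biedg.mycompany.com")
    connection = cx_Oracle.connect("DEV_BIPLATFORM", "password", dsn)
    print(connection.version)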

1.5.5 What to Install

Table 1-2 identifies the source for installation of each software component. For more information, see Oracle Fusion Middleware Installation Guide for Oracle Business Intelligence.

Table 1-2 Components and Installation Sources

Component                            Distribution Medium
Oracle Database 10g or 11g           Oracle Database CD (in 10g series, 10.2.0.4 or higher; in 11g series, 11.1.0.7 or higher)
Repository Creation Utility (RCU)    Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.1.0) DVD
Oracle WebLogic Server (WLS)         Oracle WebLogic Server (10.3.1) DVD
Oracle HTTP Server                   Oracle Fusion Middleware WebTier and Utilities 11g (11.1.1.1.0) DVD
Oracle Business Intelligence         Oracle Business Intelligence 11g (11.1.1.5.0) DVD
Oracle Access Manager 10g Webgate    Oracle Access Manager 10g Webgates (10.1.4.3.0) DVD; OAM OHS 11g Webgates per platform
Oracle Virtual Directory (OVD)       Oracle Identity Management 11g (11.1.1.1.0) DVD

1.5.6 Unicast Requirement

Oracle recommends that the nodes in the MyBICompany topology communicate using unicast. Unlike multicast communication, unicast does not require cross-network configuration, and it also reduces the potential network errors that can occur from multicast address conflicts. (A WLST sketch for setting the cluster messaging mode appears at the end of this section.)

The following considerations apply when using unicast to handle cluster communications:

  • All members of a WebLogic cluster must use the same message type. Mixing between multicast and unicast messaging is not allowed.

  • Individual cluster members cannot override the cluster messaging type.

  • The entire cluster must be shut down and restarted to change the message modes (from unicast to multicast or from multicast to unicast).

  • JMS topics configured for multicasting can access WebLogic clusters configured for unicast because a JMS topic publishes messages on its own multicast address that is independent of the cluster address. However, the following considerations apply:

    • The router hardware configurations that allow unicast clusters may not allow JMS multicast subscribers to work.

    • JMS multicast subscribers must be in a network hardware configuration that allows multicast accessibility. (That is, JMS subscribers must be in a multicast-enabled network to access multicast topics.)
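
As an illustration of how the messaging mode is set, the following WebLogic Scripting Tool (WLST) sketch switches a cluster to unicast. The administration URL, credentials, and cluster name are placeholders for your environment, and, per the restart consideration above, the change takes effect only after the entire cluster is shut down and restarted.

    # Run with wlst.sh; connection details and names are placeholders.
    connect('weblogic', 'password', 't3://ADMINVHN.mycompany.com:7001')
    edit()
    startEdit()
    cd('/Clusters/bi_cluster')               # hypothetical cluster name
    cmo.setClusterMessagingMode('unicast')   # 'unicast' or 'multicast'
    save()
    activate()
    disconnect()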