2 Introduction to Oracle Identity and Access Management on Exalogic

This chapter describes Exalogic and the characteristics of an Oracle Identity and Access Management deployment in an Exalogic environment.

This chapter includes the following topics:

  • Understanding Exalogic

  • Understanding Oracle Traffic Director

  • About Exalogic Optimizations for WebLogic

2.1 Understanding Exalogic

This section provides an overview of how Exalogic functions in an Oracle Identity and Access Management enterprise deployment.

2.1.1 What is Exalogic?

Oracle Exalogic is an integrated hardware and software system designed to provide a complete platform for a wide range of application types and widely varied workloads. Exalogic is intended for large-scale, performance-sensitive, mission-critical application deployments. It combines Oracle Fusion Middleware software and industry-standard Sun hardware to enable a high degree of isolation between concurrently deployed applications, which have varied security, reliability, and performance requirements. With Exalogic, you can develop a single environment that can support end-to-end consolidation of your applications.

Exalogic includes the following components:

  • Servers (compute nodes)

  • Shared storage (Sun ZFS Storage Appliance)

  • Integrated Networking (wires and switches)

For more information about Exalogic, see "Introduction to Exalogic Machine" in the Oracle Exalogic Elastic Cloud Machine Owner's Guide.

2.1.1.1 About the Exalogic Hardware Architecture

This section describes the Oracle Exalogic hardware architecture.

Oracle's Exalogic was tested extensively on a wide range of hardware configurations to arrive at the optimal configuration for middleware-type deployments. Design considerations included high availability, compute density, state-of-the-art components, balanced system design, field serviceability, centralized storage, and high-performance networking.

This section contains the following topics:

  • About Compute Nodes

  • About Exalogic Storage

  • About Exalogic Networking

  • Understanding Exalogic Components

2.1.1.1.1 About Compute Nodes

Processing in Exalogic is performed by compute nodes, which are much like traditional servers. Each compute node contains CPUs, network interfaces, and internal flash storage.

A full rack of Exalogic has 30 compute nodes, a half-rack has 16 compute nodes, a quarter-rack has 8 compute nodes, and a one-eighth rack has 4 compute nodes.

The compute node resembles traditional server hardware and is designed to be a general-purpose processing unit, although its hardware and software have been specifically constructed and tuned to run Java-based middleware software.

Compute nodes are pre-loaded with the Exalogic Linux base image. They can be re-imaged with either Solaris or Oracle VM Server. You can run any type of application on a compute node, provided the application is supported on the operating system.

Compute nodes balance high performance with high density, where density is a measure of computing power within a given amount of data center floor space. You can deploy multiple applications on a single compute node, and you can configure a backup compute node for each compute node.

Compute nodes are the physical computing resources (servers) within the Exalogic rack. Compute nodes can either be used directly as in a physical deployment or configured to host a virtual environment in the case of a virtual Exalogic deployment.

In either case, the compute nodes must have the following (a verification sketch follows this list):

  • The correct packages installed

  • The correct kernel parameter settings

  • A time server configured

  • Storage mounted

  • Up-to-date Exalogic bundle patches applied
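
The following minimal Python sketch illustrates one way to spot-check these prerequisites on a compute node. The kernel parameter, mount point, and use of ntpq are illustrative assumptions, not values or tools mandated by Exalogic; take the authoritative values from your Exalogic configuration and bundle-patch documentation.

  #!/usr/bin/env python
  # Sketch: spot-check compute-node prerequisites (hypothetical values).
  import subprocess

  # Illustrative placeholders; substitute values from your configuration.
  EXPECTED_KERNEL_PARAMS = {"net.core.rmem_max": "4194304"}
  REQUIRED_MOUNTS = ["/u01/oracle"]  # a share mounted from the ZFS appliance

  def check_kernel_params():
      for name, expected in EXPECTED_KERNEL_PARAMS.items():
          path = "/proc/sys/" + name.replace(".", "/")
          actual = open(path).read().strip()
          print("%s = %s (expected %s)" % (name, actual, expected))

  def check_mounts():
      with open("/proc/mounts") as mounts:
          mounted = [line.split()[1] for line in mounts]
      for mount_point in REQUIRED_MOUNTS:
          print("%s mounted: %s" % (mount_point, mount_point in mounted))

  def check_time_sync():
      # ntpq is part of the standard NTP packages on Linux.
      subprocess.call(["ntpq", "-p"])

  if __name__ == "__main__":
      check_kernel_params()
      check_mounts()
      check_time_sync()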

For information about hardware requirements, see Section 3.6.2, "Exalogic Machine Requirements."

2.1.1.1.2 About Exalogic Storage

Shared storage is provided by a Sun ZFS Storage 7320 appliance, which is accessible by all the compute nodes. The appliance provides compression, performance, and reliability optimizations, and is built into the Exalogic machine. The storage has been specifically engineered to hold the binaries and configuration for both middleware and applications, thereby reducing the number of installations and simplifying configuration management on the Exalogic system.

The Exalogic storage subsystem consists of two physically separate storage heads in an active/standby configuration and a large shared disk array. Each of the storage heads is directly attached to the I/O fabric with redundant QDR InfiniBand. The storage subsystem is accelerated with two types of solid-state memory that are used as read and write caches, respectively, in order to increase system performance. The storage heads transparently integrate the many Serial Attached SCSI disks in the disk array into a single ZFS cluster, which is then made available to Exalogic compute nodes through standard network file systems supported by the compute node's operating system.

By ensuring that all Fusion Middleware software and configuration information is stored on the ZFS appliance, you make it easier to use the ZFS integrated snapshot and remote mirroring capabilities to ensure the integrity of the configuration.

To organize the enterprise deployment software on the appliance, you create a project, and then create shares within that project so you can mount the shares to each compute node.
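
For illustration only, the following sketch prints the NFS mount commands a compute node might run for each share. The appliance host name, project name, and share names are hypothetical; the supported configuration procedure is in Section 7.5.

  #!/usr/bin/env python
  # Sketch: print NFS mount commands for shares in a ZFS project.
  # The appliance host, project, and share names below are hypothetical.
  APPLIANCE = "zfs-internal.example.com"   # appliance address on IPoIB
  PROJECT = "idm"                          # hypothetical project name
  SHARES = ["binaries", "config", "logs"]  # hypothetical shares

  for share in SHARES:
      export = "/export/%s/%s" % (PROJECT, share)
      mount_point = "/u01/%s" % share
      print("mkdir -p %s" % mount_point)
      print("mount -t nfs %s:%s %s" % (APPLIANCE, export, mount_point))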

For more information and specific instructions for configuring the Sun ZFS Storage Appliance, see Section 7.5, "Configuring Exalogic Storage for Oracle Identity Management."

2.1.1.1.3 About Exalogic Networking

Exalogic systems have three networks: management, IP over InfiniBand (IPoIB), and Ethernet over InfiniBand (EoIB).

  • IPoIB Network - This network is used for communication within the Exalogic rack. It is the fastest network available, but it cannot be accessed from outside of the Exalogic machine rack.

  • Management Network - This Ethernet network allows administrators to connect to the individual compute nodes from the public Ethernet network. It is used for management and setup only. This network should not be used for regular Ethernet communications.

  • EoIB Network - You can configure this network manually to allow communication between compute nodes and the external public network. This network would be used when:

    • You wish the external load balancer to communicate with the Oracle Traffic Director instances installed on compute nodes or vServers within the Exalogic rack.

    • You wish your compute nodes/virtual servers to communicate with an external database.

    • You wish external Web servers (Oracle HTTP Server) to communicate with the WebLogic managed servers running on the compute nodes/virtual servers.

InfiniBand and Ethernet switches enable network communication in Exalogic. InfiniBand provides reliable delivery, security, and quality of service at the physical layer of the networking stack, with a maximum bandwidth of 40 Gb/s and latencies measured in microseconds. The compute and storage nodes include InfiniBand network adapters, which are also referred to as host channel adapters (HCAs). The dual-port InfiniBand HCA provides a private internal network connecting the compute nodes and storage nodes to the system's I/O fabric.

The operating system images shipped with Exalogic are bundled with a suite of InfiniBand drivers and utilities called the OpenFabrics Enterprise Distribution (OFED). Oracle Fusion Middleware software contains optimizations that leverage OFED to provide higher performance over InfiniBand.
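
As a quick illustration, the following Python sketch, which assumes OFED's standard sysfs layout, reports the state and rate of each InfiniBand port on a compute node.

  #!/usr/bin/env python
  # Sketch: report InfiniBand port state and rate via the OFED sysfs
  # layout (/sys/class/infiniband), which OFED-based images expose.
  import glob
  import os

  for port_dir in sorted(glob.glob("/sys/class/infiniband/*/ports/*")):
      hca = port_dir.split(os.sep)[4]
      port = os.path.basename(port_dir)
      state = open(os.path.join(port_dir, "state")).read().strip()
      rate = open(os.path.join(port_dir, "rate")).read().strip()
      # Typical output on QDR hardware: "4: ACTIVE", "40 Gb/sec (4X QDR)"
      print("%s port %s: %s, %s" % (hca, port, state, rate))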

InfiniBand networking is used for all communications and data transfers within the Exalogic machine, and it can be used to connect multiple Oracle Engineered Systems together to create a very high-performance, multi-purpose computing environment.

Although the hardware within Exalogic uses an InfiniBand fabric, the rest of your data center, along with the outside world, still speaks only Ethernet. This includes your application clients, such as web browsers, as well as legacy enterprise information systems with which components running within Exalogic may need to communicate. Exalogic's switches and nodes enable this communication through the Ethernet over InfiniBand (EoIB) protocol. As the name suggests, EoIB gives InfiniBand devices the ability to emulate an Ethernet connection using InfiniBand hardware.

2.1.1.1.4 Understanding Exalogic Components

Oracle Exalogic is delivered as a rack of hardware that consists of the following components:

  • Servers (compute nodes)

  • Shared storage (Sun ZFS Storage Appliance)

  • Integrated Networking (wires and switches)

In addition to the hardware components, Exalogic comprises Oracle Exalogic Elastic Cloud software, which consists of pre-integrated, standard technologies including the operating system, virtualization technology, networking software, device drivers, and firmware.

For more information about Exalogic, see "Introduction to Exalogic Machine" in the Oracle Exalogic Elastic Cloud Machine Owner's Guide.

2.1.2 Understanding Types of Deployment

This section describes the types of Exalogic deployment.

This section includes the following topics:

  • About a Physical Exalogic Configuration

  • About a Virtual Exalogic Configuration

  • About Choosing a Type of Deployment

2.1.2.1 About a Physical Exalogic Configuration

In a physical Exalogic configuration, the application software is deployed on compute nodes. Each compute node runs its own single operating system. All applications, including WebLogic Server, Coherence, and Tuxedo, then share this OS kernel and the local compute node resources.

The Exalogic compute nodes are engineered servers and thus provide extreme performance for Java-based middleware software deployed on the compute nodes.

This configuration does not include Oracle VM server virtualization. In addition, applications running on the Exalogic platform are deployed and managed in very much the same way as they are on traditional platforms; new deployments are associated with appropriate physical compute, storage, memory, and I/O resources. Enterprise Manager is the primary administration tool.

2.1.2.2 About a Virtual Exalogic Configuration

The purpose of server virtualization is to fundamentally isolate the operating system and applications stack from the constraints and boundaries of the underlying physical servers. By doing this, multiple virtual machines can be presented with the impression that they are each running on their own physical hardware when, in fact, they are sharing a physical server with other virtual machines. This allows server consolidation in order to maximize the utilization of server hardware, while minimizing the costs associated with the proliferation of physical servers, namely hardware, cooling, and real estate expenses.

This hardware isolation is accomplished either through software-based sharing or through direct device assignment (where an I/O device is directly assigned to a VM). Software-based sharing is achieved by inserting a very thin layer of software between the OS in the virtual machine and the underlying hardware, either to directly emulate the hardware or to otherwise manage everything from CPU scheduling across the multiple VMs, to I/O management, to error handling.

The challenge with virtualization is to achieve a consolidation ratio high enough to deliver the cost benefits you need, while still providing the exceptional, predictable performance required by your core applications.

2.1.2.2.1 About the Oracle Exalogic Elastic Cloud

In the latest version of Oracle Exalogic, Oracle has virtualized the InfiniBand connectivity in Exabus, using state-of-the-art, standards-based technology to permit the consolidation of multiple virtual machines per physical server with no impact on performance. This is known as the Oracle Exalogic Elastic Cloud. Converting an Exalogic rack to an Oracle Exalogic Elastic Cloud rack is optional and involves commissioning the Exalogic rack with the Oracle Exalogic Elastic Cloud software.

Oracle Exalogic Elastic Cloud includes support for a highly optimized version of the Oracle VM Hypervisor, which can be used to subdivide a physical compute node into multiple virtual servers (vServers), each of which may run a separate Oracle Linux operating system instance and applications.

Oracle VM for Exalogic uses a technology called Single Root I/O Virtualization (SR-IOV), which is designed to eliminate virtualization overhead and thereby provide maximum performance and scalability.

The logical vServers can have specific amounts of physical compute, storage, memory, and I/O resources, optionally pre-configured with middleware and applications. This approach allows for maximum levels of resource sharing and agility, as vServers can share physical resources and can be provisioned in minutes. Pre-configured Oracle VM templates for Oracle Applications are available for download.

Oracle Exalogic Elastic Cloud Architecture

Oracle Exalogic Elastic Cloud is Oracle's first engineered system for enterprise Java.

Figure 2-1 Oracle Exalogic Elastic Cloud

Oracle has made unique optimizations and enhancements to Exalogic components, as well as to Oracle Fusion Middleware and Oracle applications. These include on-chip network virtualization, high-performance Remote Direct Memory Access (RDMA) at the operating system and Java Virtual Machine (JVM) layers, and Exalogic-aware workload management in Oracle WebLogic Server (Oracle's Java EE application server), to meet the highest standards of reliability, availability, scalability, and performance.

Exalogic Elastic Cloud comprises Exabus, a set of hardware, firmware, and software optimizations that enable the operating system, middleware components, and even certain Oracle applications to make full use of the InfiniBand fabric, as well as Oracle Traffic Director.

The InfiniBand network fabric, as discussed in the previous section, offers extremely high bandwidth and low latency, which provides major performance gains in communication between the application server and the database server, and between different application server instances running within the Exalogic system.

The current release of the Exalogic Elastic Cloud Software includes a tightly integrated server virtualization layer with unique capabilities that allow the consolidation of multiple, separate virtual machines containing applications or middleware on each server node, while introducing essentially no I/O virtualization overhead to the Exabus InfiniBand network and storage fabric.

Physically, Oracle Exalogic Elastic Cloud can be viewed as a rack of physical server machines plus centralized storage, which all have been designed together to cater to typical high-performance Java application use cases.

Understanding Exalogic Elastic Cloud Architecture

The Exalogic system consists of the following two major elements:

  • Exalogic X4-2 - A high-performance hardware system, assembled by Oracle, that integrates storage and compute resources using a high-performance I/O subsystem called Exabus, which is built on Oracle's Quad Data Rate (QDR) InfiniBand.

  • Exalogic Elastic Cloud Software - An essential package of Exalogic-specific software, device drivers and firmware that is pre-integrated with Oracle Linux and Solaris, enabling Exalogic's advanced performance and Infrastructure-as-a-Service (IaaS) capability, server and network virtualization, storage and cloud management capabilities.

    Figure 2-1 shows the Middleware software that is part of the Elastic Cloud and contains Exalogic-specific optimizations. Exalogic-specific optimizations in some of the Oracle Fusion Middleware products are described below.

    • WebLogic Server - Session replication uses the SDP layer of InfiniBand networking to maximize the performance of large-scale data operations, as this avoids some of the typical TCP/IP network processing overhead (see the WLST sketch after this list). When processing HTTP requests, WebLogic Server makes native use of the SDP protocol when called by Oracle Traffic Director, or when making HTTP requests to it. Through its Active GridLink for RAC feature, WebLogic Server JDBC connections and connection pools can be configured to use the low-level SDP protocol when communicating natively with Exadata over the InfiniBand fabric.

    • Coherence - Cluster communication has been dramatically redesigned to further minimize network latency when processing data sets across caches. Its Elastic Data feature increases performance in conjunction with the compute nodes' built-in solid-state drives by optimizing both the use of RAM and garbage-collection processing to minimize network and memory use. When sending data between caches, Coherence uses only an RDMA-level InfiniBand verb set, thus avoiding nearly all of the TCP/IP network processing overhead.

    • Tuxedo - Tuxedo has been similarly enhanced to make increasing use of SDP and RDMA protocols in order to optimize the performance of inter-process communications within and between compute nodes.
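
    To make the session-replication point concrete, the following WLST (Jython) sketch creates a replication channel on a managed server and enables SDP on it. The server name, channel name, addresses, and credentials are hypothetical assumptions; see Section 15.7 for the supported procedure.

      # WLST sketch (hypothetical names): create an SDP-enabled
      # replication channel bound to a managed server's IPoIB address.
      # Run with: java weblogic.WLST create_channel.py
      connect('weblogic', 'password', 't3://adminhost.example.com:7001')
      edit()
      startEdit()
      cd('/Servers/WLS_OAM1')
      create('ReplicationChannel', 'NetworkAccessPoint')
      cd('NetworkAccessPoints/ReplicationChannel')
      set('Protocol', 't3')
      set('ListenAddress', 'oam1-ipoib.example.com')  # IPoIB interface
      set('ListenPort', 7005)
      set('SDPEnabled', true)
      save()
      activate()
      disconnect()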

2.1.2.3 About Choosing a Type of Deployment

Both of these Exalogic implementation styles can support the creation of a private cloud. In a virtualized system, Exalogic Control is used to define, manage and monitor cloud users and services. In a physical system, equivalent functionality is provided by Enterprise Manager with the Cloud Management Pack.

Among the benefits of the virtualized approach are application consolidation, tenant isolation (provisioning secure Exalogic resources to multiple tenants), and deployment simplification, including scaling up or down. With the advent of Exalogic Elastic Cloud technology, the impact of virtualization on application throughput and latency has been reduced to a negligible level. Applications running in Exalogic vServers perform on par with deployments on bare metal, but retain all of the manageability and efficiency benefits that come with server virtualization.

2.2 Understanding Oracle Traffic Director

Oracle Traffic Director (OTD) serves as a load balancer and a Web router. OTD is not a fully functional Web server, but it can perform many of the tasks that a Web server performs. It is made up of an administration server, instances, and failover groups.

For information about installing and configuring Oracle Traffic Director, see Chapter 12, "Installing and Configuring Oracle Traffic Director for an Enterprise Deployment."

This section contains the following topics:

  • About Oracle Traffic Director in a Standard Exalogic Deployment

  • About Oracle Traffic Director in a Deployment with Oracle HTTP Server

  • About Oracle Traffic Director Failover Groups

  • About Oracle Traffic Director and the Load Balancer

  • About Oracle Traffic Director and Identity and Access Management

2.2.1 About Oracle Traffic Director in a Standard Exalogic Deployment

Oracle Traffic Director is supported only with Exalogic deployments. It is used to load balance requests to:

  • LDAP

  • Internal callbacks

Oracle Traffic Director also proxies Web requests to WebLogic Servers. It receives requests from the load balancer on the EoIB network, and sends requests to WebLogic servers and to Oracle Unified Directory using the IPoIB network.

For more information about the standard Exalogic topology described in this guide, see Section 3.2.1, "Primary Topologies."

2.2.2 About Oracle Traffic Director in a Deployment with Oracle HTTP Server

Oracle Traffic Director is supported only with Exalogic deployments. It is used to load balance requests to:

  • LDAP

  • Internal callbacks

Oracle Traffic Director sends requests to Oracle HTTP Server (OHS), which resides on commodity servers on the corporate Ethernet network. It sends requests to the WebLogic servers and to Oracle Unified Directory using the IPoIB network. OHS sends requests to the WebLogic servers using the EoIB network.

For more information about using Oracle HTTP Server as a Web tier instead of Oracle Traffic Director, see Section 3.2.2.1, "Using an External Oracle HTTP Server Web Tier Instead of Oracle Traffic Director."

2.2.3 About Oracle Traffic Director Failover Groups

Oracle Traffic Director manages floating IP addresses for LDAP and internal callbacks on either network. When you create a failover group, you specify the IP address and netmask you wish to use, along with a primary node and a failover node. Oracle Traffic Director uses a heartbeat between instances to detect failures.

For information about creating and configuring failover groups, see Section 12.12, "Creating a Failover Group for Virtual Hosts."

2.2.4 About Oracle Traffic Director and the Load Balancer

You can configure the load balancer to point to Oracle Traffic Director instances directly. However, load balancer failure detection is slower than that provided by OTD failover groups. Therefore, Oracle recommends creating an external failover group for each instance and pointing the load balancer at the failover groups.

2.2.5 About Oracle Traffic Director and Identity and Access Management

Oracle Traffic Director has its own WebGate, which is used for authentication. After WebGate is installed and configured, Oracle Traffic Director intercepts requests for the consoles and forwards them to Access Manager for validation.

Internal callbacks go back to failover groups to make efficient use of the InfiniBand network.

2.3 About Exalogic Optimizations for WebLogic

Oracle Exalogic includes performance optimizations for Oracle WebLogic Server that improve input/output, thread management, and request-handling efficiency. You can configure a WebLogic Server domain to enable domain-wide input/output optimizations. These optimizations include multi-core architectural enhancements that improve thread management and request processing and reduce lock contention.

Additional optimizations include reduced buffer copies, which result in more efficient input/output. Finally, session replication performance and CPU utilization are improved through lazy deserialization, which avoids performing extra work on every session update; that work is necessary only when a server fails.

You can configure WebLogic Server clusters with cluster-wide optimizations that further improve server-to-server communication. The first optimization enables multiple replication channels, which improve network throughput among WebLogic Server cluster nodes. The second cluster optimization enables InfiniBand support for Sockets Direct Protocol, which reduces CPU utilization as network traffic bypasses the TCP stack.
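
For reference, enabling the domain-wide optimizations typically amounts to setting a single domain attribute. The following WLST sketch, with hypothetical credentials and URL, shows the idea; the supported procedure is in Section 15.7.

  # WLST sketch (hypothetical credentials and URL): enable the
  # domain-wide Exalogic optimizations described above.
  connect('weblogic', 'password', 't3://adminhost.example.com:7001')
  edit()
  startEdit()
  cd('/')
  cmo.setExalogicOptimizationsEnabled(true)
  save()
  activate()
  disconnect()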

For more information about, and procedures for, Exalogic optimizations, see Section 15.7, "Enabling Exalogic Optimizations."