10 Active-Passive Topologies for Oracle Fusion Middleware High Availability

This chapter describes how to configure and manage active-passive topologies. It contains the following sections:

10.1 Oracle Fusion Middleware Cold Failover Cluster Topology Concepts

Oracle Fusion Middleware provides an active-passive model for all its components using Oracle FMW Cold Failover Clusters. In an Oracle FMW Cold Failover Cluster configuration, two or more application server instances are configured to serve the same application workload but only one is active at any particular time.

A two-node Oracle FMW Cold Failover Cluster can be used to achieve active-passive availability for Oracle Application Server middle-tier components. In an Oracle FMW Cold Failover Cluster, one node is active while the other is passive, on standby. In the event that the active node fails, the standby node is activated, and the middle-tier components continue servicing clients from that node. All middle-tier components are failed over to the new active node. No middle-tier components run on the failed node after the failover.

The most common properties of an Oracle FMW Cold Failover Cluster configuration include:

  • Shared Storage: The passive Oracle Fusion Middleware instance in an active-passive configuration has access to the same Oracle binaries, configuration files, domain directory, and data as the active instance. You configure this access by placing these artifacts in storage that all participating nodes in the Cold Failover Cluster configuration can access. Typically, the active node has the shared storage mounted, while on the passive node it is unmounted but accessible if that node becomes active. The shared storage can be a dual-ported disk device accessible to both nodes, or device-based storage such as NAS or SAN. You can install shared storage on a regular file system. With Cold Failover Clusters, you mount the volume on one node at a time. Shared storage is a key property of a Cold Failover Cluster deployment.

  • Virtual hostname: In the Cold Failover Cluster solution, a virtual hostname and a virtual IP are shared between the two nodes (the virtual hostname maps to the virtual IP, and the two are used interchangeably in this guide). However, only one node, the active node, can use this virtual IP at any one time. When the active node fails and the standby node is made active, the virtual IP moves to the new active node. The new active node now services all requests through the virtual IP. The virtual hostname provides a single system view of the deployment. A Cold Failover Cluster deployment is configured to listen on this virtual IP. For example, if the two physical hostnames of the hardware cluster are node1.mycompany.com and node2.mycompany.com, the single view of this cluster can be provided by the name cfcvip.mycompany.com. In the DNS, cfcvip.mycompany.com maps to the virtual IP, which floats between node1 and node2. When a hardware cluster is used, it manages the failover of the virtual IP without the middle-tier clients detecting which physical node is active and actually servicing requests.

  • Hardware Cluster: A hardware cluster is typically used for Cold Failover Cluster deployments. The hardware cluster addresses the management of shared storage and the virtual IP in its architecture, and plays a role in reliable failover of these shared resources, providing a robust active-passive solution. Most Cold Failover Clusters are deployed on hardware clusters that include the following:

    • Two nodes that are in the same subnet

    • A high speed private interconnect between the two nodes

    • Public network interfaces, on which the client requests are served and the virtual IP is enabled.

    • Shared storage accessible by the two nodes. This includes shared storage that acts as a quorum device as well as shared storage for Oracle Fusion Middleware and database installs.

    • Clusterware running to manage node and component failures

  • Planned Switchover and Unplanned Failover: The typical Cold Failover Cluster deployment is a two-node hardware cluster. To maximize utilization, both of these nodes typically have some elements of the deployment running, with the other node acting as a backup node for the appropriate element if needed. For example, a deployment may have the application tier (WebLogic container) running on one node and the Web tier (Oracle HTTP Server) running on the other node. If either node is brought down for hardware or software maintenance, or if either node crashes, the surviving node is used to host the services of the down node while continuing to host its current services.

    The high-level steps for switchover to the standby node are as follows (a scripted sketch appears after these steps):

    1. Stop the middle-tier service on the primary node (if the node is still available).

    2. Fail over the virtual IP from the current active node to the passive node. This involves bringing it down on the current node and bringing it up on the passive node.

    3. Fail over the shared disk from the current active node to the passive node. This involves unmounting the shared disk from the current node and mounting it on the passive node.

    4. Start the middle-tier service on the passive node, which becomes active.
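
    The following is a minimal shell sketch of these four steps on Linux, assuming an example virtual IP of 130.35.46.17 on the interface alias eth0:1 and a shared volume /dev/sdb1 (a placeholder device) mounted at /u01/app/oracle; run the commands as root and adapt the names to your environment:

    # 1. Stop the middle-tier service on the primary node (if still available)
    DOMAIN_HOME/bin/stopWebLogic.sh

    # 2. Fail over the virtual IP
    /sbin/ifconfig eth0:1 down                                  # on the old active node
    /sbin/ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0    # on the new active node
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17                # refresh ARP caches

    # 3. Fail over the shared disk
    umount /u01/app/oracle                                      # on the old active node
    mount /dev/sdb1 /u01/app/oracle                             # on the new active node

    # 4. Start the middle-tier service on the new active node
    DOMAIN_HOME/bin/startWebLogic.sh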

    For failover management, there are two possible approaches:

    • Automated failover using a cluster manager facility

      The cluster manager offers services that allow development of packages to monitor the state of a service. If the service or the node is found to be down (whether due to a planned or unplanned operation), the package automatically fails over the service from one node to the other. The package can be developed to attempt to restart the service on a given node before failing over. Similarly, when the active node itself goes down, the clusterware can detect the node failure and bring the service up on the designated passive node. Any clusterware can be used to provide failover in a completely automated fashion (a generic action-script sketch appears after this list).

      Oracle Fusion Middleware provides this capability using Oracle Clusterware. Refer to Chapter 11, "Using Oracle Cluster Ready Services" for details of managing a Cold Failover Cluster environment with Oracle Cluster Ready Services. However, you can use any available clusterware to manage cluster failover.

    • Manual failover

      For this approach, the planned switchover steps are executed manually. Both the detection of the failure and the failover itself are manual; therefore, this method may result in a longer period of service unavailability.
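
      As an illustration of the automated approach, the following is a minimal sketch of the kind of action script a cluster manager invokes with start, stop, and check arguments (Oracle Clusterware application resources follow this pattern); the domain path and the use of curl against the console URL for the health check are assumptions for illustration only:

      #!/bin/sh
      # Generic cluster-manager action script (sketch).
      # The cluster manager calls this with "start", "stop", or "check";
      # a non-zero exit from "check" triggers a restart or failover.
      DOMAIN_HOME=/u01/app/oracle/product/fmw/user_projects/domains/MyDomain

      case "$1" in
        start)
          "$DOMAIN_HOME/bin/startWebLogic.sh" > /dev/null 2>&1 &
          ;;
        stop)
          "$DOMAIN_HOME/bin/stopWebLogic.sh"
          ;;
        check)
          # Healthy only if the console answers on the virtual IP
          curl -s -o /dev/null http://cfcvip.mycompany.com:7001/console || exit 1
          ;;
      esac
      exit 0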

In active-passive deployments, services are typically down for a short period of time. This is the time taken to either restart the instance on the same node, or to failover the instance to the passive node.

Active-Passive Topologies: Advantages

  • Increased availability

    If the active instance fails for any reason or must be taken offline, an identically configured passive instance is prepared to take over at any time. This provides a higher level of availability than a normal single-instance deployment. Active-passive deployments are also used to provide availability and protection during planned and unplanned maintenance operations on the hardware. When a node must be brought down for planned maintenance, such as applying a patch or upgrading the operating system, the middleware services can be brought up on the passive node. Switchback to the old node can be done at an appropriate time.

  • Reduced operating costs

    In an active-passive configuration only one set of processes is up and serving requests. Management of the active instance is generally less costly than managing an array of active instances.

  • Cold Failover Clusters are less difficult to implement because they do not require a load balancer, which is required in active-active topologies.

  • Cold Failover Clusters are less difficult to implement than active-active topologies because you are not required to configure options such as load balancing algorithms, clustering, and replication.

  • Active-passive topologies simulate a one-instance topology more closely than active-active topologies do.

  • Application independence

    Some applications may not be suited to an active-active configuration. This includes applications that rely heavily on application state or on information stored locally. Singleton applications, by their very nature, are more suitable for active-passive deployments. An active-passive configuration has only one instance serving requests at any particular time.

Active-Passive Topologies: Disadvantages

  • Active-passive topologies do not scale as well as active-active topologies. You cannot add nodes to the topology to increase capacity.

  • State information from HTTP session state and EJB stateful session beans is not replicated, and is therefore lost when a node terminates unexpectedly. Such state can be persisted to the database or to a file system residing on shared storage; however, this requires additional overhead that may impact performance of the single-node Cold Failover Cluster deployment.

  • Active-passive deployments have shorter downtime than a single-node deployment. However, downtime is much shorter in an active-active deployment.

10.2 Configuring Oracle Fusion Middleware for Active-Passive Deployments

Oracle Fusion Middleware includes both Java EE container-deployed components and non-Java EE components. Components such as Oracle Internet Directory, Oracle Virtual Directory, Oracle HTTP Server, Oracle Web Cache, and the runtime engines of Oracle Forms and Oracle Reports are system components. Components such as Oracle SOA Suite, Oracle WebCenter Suite, and Oracle Identity Management components such as ODSM and DIP are Java EE components that are deployed to Oracle WebLogic Server.

WebLogic Server Administration Console and Oracle Enterprise Manager Fusion Middleware Control are also deployed to the WebLogic container. Both Java EE and system components can be deployed to Cold Failover Cluster environments. They can co-exist on the same system or on different systems. When on the same system, you can configure them to failover as a unit, sharing the same virtual IP, or failover independently using separate virtual IPs. In most Oracle Fusion Middleware deployments, a database is used either for the component metadata created using Repository Creation Utility (RCU), or for application data. In many cases, a Cold Failover Cluster middle tier deployment uses a Cold Failover Cluster database, both deployed to the same cluster. The typical deployment has the two components configured as separate failover units using different VIPs and different shared disks on the same hardware cluster.

For Oracle Fusion Middleware, the recommended procedure to create an active-passive topology is:

  • Install the component as a single instance configuration. If you plan to transform this instance to a Cold Failover Cluster deployment, install it using a shared disk. This means that the Middleware home, the Instance home (in the case of system components), and the domain directory (in the case of a WebLogic deployment) are on a shared disk. Everything that fails over as a unit should be on a shared disk.

  • After the installation, transform the deployment into a Cold Failover Cluster deployment and configure it to listen on a virtual IP. The virtual IP is configured on the current active node. It fails over, along with the Oracle Fusion Middleware deployment, to the passive node when a failure occurs.

This general procedure also applies to the Cold Failover Cluster Oracle database. For example, the Oracle database instance is installed as a single instance deployment and subsequently transformed for Cold Failover Clusters. A Cold Failover Cluster Oracle Fusion Middleware deployment can also use an Oracle Real Application Clusters (RAC) database.

The following sections describe the procedures for post-installation configuration to transform a single instance deployment to a Cold Failover Cluster deployment.

The rest of this chapter describes how to transform Cold Failover Clusters for each of the individual components in the Oracle Fusion Middleware suite. The first section details the procedure for the basic infrastructure components, and the subsequent sections do so for the individual Oracle Fusion Middleware components. Any given deployment, for example an Oracle instance or domain, has more than one of these present on a given machine. To transform the entire instance or domain:

  • Decide which components form a unit of failover.

  • Deploy them on the same shared disk.

    Note:

    For details about installing and deploying Oracle Fusion Middleware components, see the installation guide for the specific Fusion Middleware component.
  • Determine a virtual IP to use for this unit of failover. Typically, a single virtual IP is used for all the components, but separate IPs can be used as long as all of them fail over together.

  • Apply the transformation procedure to each of the individual components to transform the deployment as a whole. Since more than one of these sections will apply for Cold Failover Clusters transformation of an installation, the order of transformation should always be as follows:

    • Transform the administration server or Enterprise Manager instance (if applicable).

    • Transform all managed servers in the deployment.

    • Transform the Oracle instances (non-Java EE deployments).

10.2.1 General Requirements for Cold Failover Clusters

As described earlier, a Cold Failover Cluster deployment has at least two nodes. The installation is done on one of these nodes; the other node is the passive node. The requirements for these two nodes are as follows:

  • The nodes should be similar in all respects at the operating system level. For example, they should run the same operating system, the same version, and the same patch level.

  • The nodes should be similar in terms of hardware characteristics. This ensures predictable performance during normal operations and on failover. Oracle suggests designing each node with the capacity to handle both its normal role and the additional load required in a failover scenario. However, if the SLA indicates that reduced performance is acceptable during outage scenarios, this is not required.

  • The nodes should have the same mount point free, so that the shared storage can be mounted at the same mount point during normal operations and failover conditions.

  • The user ID and group ID on the two nodes should be similar, and the user ID and group ID of the user owning the instance must be the same on both nodes.

  • The oraInventory location must be the same on both nodes, with the same accessibility for the instance or domain owner. The locations of the oraInst.loc file and the beahomelist file should also be the same.

  • Since a given instance uses the same listen ports irrespective of the machine on which it is currently active, ensure that the ports used by the Cold Failover Cluster instance are free on both nodes (see the example after this list).
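
For example, a quick check that the instance's ports are unused on a node (the ports 7001 and 8888 below are example values) is:

    # run on both nodes; no output means the ports are free
    netstat -an | grep LISTEN | grep -E ':(7001|8888) '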

Note:

Before beginning the transformation, back up your entire domain. Oracle also recommends creating a local backup copy of any file before editing it. For details on backing up your domain, see the Oracle Fusion Middleware Administrator's Guide. Oracle recommends backing up:
  • All domain directories

  • All Instance homes

  • Optionally, the database repository and the Middleware homes

  • For all the sections below, make a local backup copy of any file before editing it.

10.2.1.1 Terminology for Directories and Directory Environment Variables

The following list describes the directories and variables used in this chapter:

  • ORACLE_BASE: This environment variable and related directory path refers to the base directory under which Oracle products are installed.

  • MW_HOME: This environment variable and related directory path refers to the location where Fusion Middleware (FMW) resides.

  • WL_HOME: This environment variable and related directory path contains installed files necessary to host a WebLogic Server.

  • ORACLE_HOME: This environment variable and related directory path refers to the location where Oracle FMW SOA Suite is installed.

  • DOMAIN Directory: This directory path refers to the location where the Oracle WebLogic Domain information (configuration artifacts) is stored.

  • ORACLE_INSTANCE: An Oracle instance contains one or more system components, such as Oracle Web Cache, Oracle HTTP Server, or Oracle Internet Directory. An Oracle instance directory contains updateable files, such as configuration files, log files, and temporary files.

The values used and recommended for consistency for these directories are:

  • ORACLE_BASE: /u01/app/oracle

  • MW_HOME (Apptier): ORACLE_BASE/product/fmw

  • WL_HOME: MW_HOME/wlserver_10.3

Component             ORACLE_HOME        Domain Name     Domain Directory
Identity Management   MW_HOME/idm        IDMDomain       MW_HOME/user_projects/domains/IDMDomain
Oracle SOA            MW_HOME/soa        SOADomain       MW_HOME/user_projects/domains/SOADomain
WebCenter             MW_HOME/wc         WCDomain        MW_HOME/user_projects/domains/WCDomain
Oracle Portal         MW_HOME/portal     PortalDomain    MW_HOME/user_projects/domains/PortalDomain
Oracle Forms          MW_HOME/forms      FormsDomain     MW_HOME/user_projects/domains/FormsDomain
Oracle Reports        MW_HOME/reports    ReportsDomain   MW_HOME/user_projects/domains/ReportsDomain
Oracle Discoverer     MW_HOME/disco      DiscoDomain     MW_HOME/user_projects/domains/DiscoDomain
Web Tier              MW_HOME/web        -               -
Directory Tier        MW_HOME/idm        -               -

Location for Applications Directory: ORACLE_BASE/admin/domain_name/apps

Location for Oracle Instance: ORACLE_BASE/admin/instance_name

All entities on disk that fail over as a unit should preferably be on the same mount point. They can, however, be on separate shared storage with separate mount points. Oracle recommends that the mount point for the shared storage be ORACLE_BASE. In most cases, this ensures that all the persistent artifacts of a failover unit are on the same shared storage. When more than one Cold Failover Cluster exists on a node, and each fails over independently of the other, a different mount point exists for each such failover unit.
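
For illustration, with ORACLE_BASE as the mount point, the shared volume is mounted on the active node only (the device name /dev/sdb1 is a placeholder):

    # on the active node only
    mount /dev/sdb1 /u01/app/oracle
    df -h /u01/app/oracle    # verify the shared volume backs ORACLE_BASE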

10.2.2 Transforming Oracle Fusion Middleware Infrastructure Components

An Oracle Fusion Middleware deployment is made up of basic infrastructure components that are common across all the product sets. This section describes Cold Failover Clusters transformation steps for these components.

There are two administration server topologies supported for Cold Failover Clusters configuration. The following sections describe these two topologies and provide installation and configuration steps to prepare the administration server for Cold Failover Clusters transformation.

10.2.2.1 Administration Server Topology 1

Figure 10-1 illustrates the first supported topology for Oracle Cold Failover Clusters.

Figure 10-1 Administration Server Cold Failover Cluster Topology 1

In Figure 10-1, the Administration Server runs on a two-node hardware cluster: Node 1 and Node 2. The Administration Server listens on the virtual IP or hostname. The Middleware home and the domain directory are on a shared disk that is mounted on Node 1 or Node 2 at any given point. Both the Middleware home and the domain directory should be on the same shared disk, or on shared disks that can fail over together. If an enterprise has multiple Fusion Middleware domains for multiple applications or environments, this topology is well suited for Administration Server high availability. A single hardware cluster can be deployed to host these multiple Administration Servers. Each Administration Server can use its own virtual IP and set of shared disks to provide high availability of domain services.

10.2.2.1.1 Topology 1 Installation Procedure

To install and configure Cold Failover Clusters for the application server in this topology:

Install the Middleware Home

This installation includes the Oracle home, the WebLogic home, and the Domain home on a shared disk. This disk must be mountable by all the nodes that act as the failover destination for the administration server. Depending on the storage sub-system used, the shared disk may be mountable only on one node at a time. This is the preferred configuration, even when the storage sub-system allows simultaneous mounts on more than one node. The installation is done as a regular single-instance installation. Refer to the component chapters for details on installing the administration server (and Enterprise Manager) alone. The overall procedure for each suite is as follows:

For Oracle SOA or Oracle WebCenter:

  1. Install the WebLogic Server software.

    See the Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

  2. Install the Oracle Home for Oracle SOA or WebCenter.

    See the Oracle Fusion Middleware Installation Guide for Oracle SOA Suite or the Oracle Fusion Middleware Installation Guide for Oracle WebCenter.

  3. Invoke the Configuration Wizard and create a domain with just the administration server.

    In the Select Domain Source screen, select the following:

    • Generate a domain configured automatically to support the following products

    • Select Enterprise Manager and Oracle JRF.

For Oracle Identity Management:

  1. Install the WebLogic Server software.

    See the Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

  2. Using the Oracle Identity Management 11g Installer, install and configure the IDM Domain using the create domain option. In the Configure Components screen, de-select everything except Enterprise Manager (it is selected by default).

    See the Oracle Fusion Middleware Installation Guide for Oracle Identity Management.

For Oracle Portal, Forms, Reports and Discoverer:

  1. Install the WebLogic Server software.

  2. Using the Oracle Fusion Middleware 11g Portal, Forms, Reports, and Discoverer Installer, install and configure the Classic Domain using the create domain option. In the Configure Components screen, make sure that Enterprise Manager is also selected.

Note:

In this case, at least one more Managed Server for the product components is also installed in this process (the Administration Server by itself cannot be installed). This Managed Server must also be transformed to Cold Failover Clusters using the specific procedure for the component. It is part of the same failover unit as the administration server.

Configuring the Administration Server for Cold Failover Clusters

To configure the administration server for Cold Failover Clusters:

  1. Provision the Virtual IP using the following commands as root user:

    For Linux

    /sbin/ifconfig interface:index IP_Address netmask netmask
    /sbin/arping -q -U -c 3 -I interface IP_Address
    

    Where IP_Address is the virtual IP address and netmask is the associated netmask. In the following example, the IP address is being enabled on the interface eth0.

    ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17
    

    For Windows:

    netsh interface ip add address interface IP_Address netmask
    

    Where IP_Address is the virtual IP address and netmask is the associated netmask. In the example below, the IP address is being enabled on the interface 'Local Area Connection'.

    netsh interface ip add address "Local Area connection" 130.35.46.17 255.255.224.0
    
  2. Transform the administration server instance to Cold Failover Clusters following the procedure in Section 10.2.2.3, "Transforming the Administration Server for Cold Failover Clusters."

  3. Validate the administration server transformation by accessing the consoles on the virtual IP.

    http://cfcvip.mycompany.com:7001/console

    http://cfcvip.mycompany.com:7001/em

  4. Fail over the administration server manually to the second node using the following procedure:

    1. Stop the administration server process (and any other process running out of a given Middleware Home)

    2. Unmount the shared storage from Node1, where the Middleware home and domain directory exist.

    3. Mount the shared storage on Node2, using storage-specific commands.

    4. Disable the Virtual IP on Node1 using the following command as root user:

      For Linux:

      /sbin/ifconfig interface:index down 
      

      Where interface:index is the interface alias on which the virtual IP was enabled. In the example below, the virtual IP is being disabled on the interface eth0.

      /sbin/ifconfig eth0:1 down
      

      On Windows:

      netsh interface ip delete address interface IP_Address
      

      Where IP_Address is the virtual IP address. In the example below, the IP address is being removed from the interface 'Local Area Connection'.

      netsh interface ip delete address "Local Area connection" 130.35.46.17
      
    5. Enable the virtual IP on Node2 using similar commands as in Step 1.

    6. Start the administration server process.

      DOMAIN_HOME/bin/startWebLogic.sh
      

      Where DOMAIN_HOME is the location of your domain directory.

    7. Validate access to both the administration server and Enterprise Manager console.

10.2.2.2 Administration Server Topology 2

Figure 10-2 illustrates the second supported administration server topology for Oracle Cold Failover Clusters.

Figure 10-2 Administration Server Cold Failover Cluster Topology 2

In Figure 10-2, the administration server runs on a two-node hardware cluster: Node 1 and Node 2. The administration server listens on the virtual IP or hostname. The domain directory used by the administration server is on a shared disk. This is mandatory. This shared disk is mounted on Node 1 or Node 2 at any given point. The Middleware homes, which contain the software (the WebLogic home and the Oracle home), are not necessarily on a shared disk; they can be on local disks as well. The administration server uses the Middleware home on Node1 when it is running on Node1, and the Middleware home on Node2 when it is running on Node2. These two Middleware homes must be kept identical in terms of deployed products, Oracle homes, and patches. In both cases, the administration server uses the configuration available in the shared domain directory/domain home. Since this directory is shared, the same configuration is used before and after failover.

This shared domain directory may also have other Managed Servers running. It may also be used exclusively for the administration server. If the domain directory is shared with other managed servers, appropriate consideration must be made for their failover when the administration server fails over. Some of these considerations are:

  1. If the shared storage can be mounted as read/write on multiple nodes simultaneously, the administration server domain directory can be shared with other Managed Servers. In addition, the administration server can be failed over independently of the Managed Servers: the administration server can fail over while the Managed Servers continue to run on their designated nodes. This is possible because the administration server in this case requires only failover of the VIP, and does not require failover of the shared disk. The domain directory/domain home remains available to the Managed Servers. Examples of such storage include a NAS, or a SAN/direct-attached storage with a cluster file system.

  2. If only one node can mount the shared storage at a time, sharing the administration server domain directory with a Managed Server implies that when the administration server fails over, the Managed Server that runs off the same domain directory must be shut down.

A hardware cluster may be used in this topology. The cluster helps automate failover (when used with properly configured clusterware). However, it is not required. A single hardware cluster can be deployed to host these multiple administration servers. Each administration server can use its own virtual IP and set of shared disks to provide domain services high availability.

This topology is supported for Oracle SOA Suite and Oracle WebCenter Suite only.

Note:

For Oracle Identity Management, an alternate topology is also supported for Cold Failover Clusters. See the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management for more details.
10.2.2.2.1 Topology 2 Installation Procedure

To install and configure Cold Failover Clusters for the administration server in this topology:

Install the Middleware Home

Install the Middleware home, including the Oracle home and the WebLogic home, separately on the two nodes of the domain. The administration server domain directory is created on a shared disk. This disk must be mountable by all the nodes that act as the failover destination for the administration server. Depending on the storage sub-system used, the shared disk may be mountable only on one node at a time. This is a regular single-instance installation. Refer to the product suite chapters for details on installing the administration server and Enterprise Manager alone. To install the Middleware home:

For Oracle SOA Suite or Oracle WebCenter:

  1. Install the Oracle WebLogic Server software on Node 1.

  2. Install the Oracle Home for SOA or WebCenter on Node 1.

  3. Repeat steps 1 and 2 on Node 2.

  4. Start the Configuration Wizard on Node 1 and create a domain with just the administration server.

    In the Select Domain Source screen, select the following:

    • Generate a domain configured automatically to support the following products.

    • Select Enterprise Manager and Oracle JRF.

  5. In the Specify Domain Name and Location screen, enter the domain name, and be sure the domain directory matches the directory and shared storage mount point.

Configuring the Middleware Home for Cold Failover Clusters

To configure the Middleware Home for Cold Failover Clusters:

  1. Provision the Virtual IP. For example:

    For Linux:

    ifconfig eth0:1 IP_Address netmask netmask
    /sbin/arping -q -U -c 3 -I eth0 IP_Address
    

    Where IP_Address is the virtual IP address and the netmask is the associated netmask. In the following example, the IP address is being enabled on the interface eth0.

    ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17
    

    For Windows:

    netsh interface ip add address interface IP_Address netmask
    

    Where IP_Address is the virtual IP address and the netmask is the associated netmask. In the example below, the IP address is being enabled on the interface "Local Area Connection".

    netsh interface ip add address "Local Area connection" 130.35.46.17 255.255.224.0
    
  2. Transform the administration server instance to Cold Failover Clusters using the procedure in Section 10.2.2.3, "Transforming the Administration Server for Cold Failover Clusters."

  3. Validate the administration server by accessing the consoles on the virtual IP.

    http://cfcvip.mycompany.com:7001/console

    http://cfcvip.mycompany.com:7001/em

  4. Fail over the administration server manually to the second node:

    1. Stop the administration server process (and any other process running out of a given Middleware Home).

    2. Unmount the shared storage from Node1, where the domain directory exists.

    3. Mount the shared storage on Node2, following storage-specific commands.

    4. Disable the virtual IP on Node1:

      For Linux:

      ifconfig interface:index down 
      

      In the following example, the IP address is being disabled on the interface eth0.

      ifconfig eth0:1 down
      

      For Windows:

      netsh interface ip delete address interface addr=IP_Address
      

      Where IP_Address is the virtual IP address. In the following example, the IP address is being removed from the interface 'Local Area Connection'.

      netsh interface ip delete address 'Local Area connection' addr=130.35.46.17

    5. Enable the virtual IP on Node 2.

    6. Start up the administration server process using the following command:

      DOMAIN_HOME/bin/startWebLogic.sh
      

      Where DOMAIN_HOME is the location of your domain directory.

    7. Validate access to both administration server and Enterprise Manager console.

10.2.2.3 Transforming the Administration Server for Cold Failover Clusters

To transform the Administration Server installed on a shared disk from Node 1, follow the steps in this section. These steps transform the container; therefore, both the WebLogic Server Administration Console and Oracle Enterprise Manager Fusion Middleware Control are transformed for Cold Failover Clusters. This also makes other components deployed to this container, such as OWSM-PM, Cold Failover Clusters ready. The address for all of these services becomes cfcvip.mycompany.com. After installation, to transform a non-Cold Failover Cluster instance to a Cold Failover Cluster instance:

  1. Log in to the WebLogic Server Administration Console.

  2. Create a Machine for the Virtual Host

    1. Select Environment, and then Machines.

    2. Click Lock & Edit.

    3. Click New.

    4. In the Name field, enter cfcvip.mycompany.com

    5. Select the appropriate operating system.

    6. Click OK.

    7. Click the Servers tab.

    8. Click Add.

    9. Select an existing server, and associate it with this machine.

      In the Select server drop-down list, ensure AdminServer is selected.

    10. Click Activate Changes.

  3. Configure the Admin Server to Listen on cfcvip.mycompany.com.

    1. Select Environment, and then Servers from the Domain Structure menu.

    2. Click Lock & Edit from the Change Center.

    3. Click on the administration server (AdminServer)

    4. Change the Listen Address to cfcvip.mycompany.com

    5. Click Save.

    6. Click Activate Changes.

    7. Restart the administration server.

      Note:

      Transformation to Cold Failover Cluster for the administration server is typically done at domain creation time, so no other changes to other parts of the domain are expected. If this change happens after domain creation, and other components are installed in the domain, follow the steps in the client-side configuration sections below.

Changing Client Side Configuration for Administration Server

Any existing entities in the domain must communicate with the administration server using the new address. For example, when starting the Managed Servers manually, the administration server address should be specified as cfcvip.mycompany.com.

In the instance.properties file, located in the INSTANCE_HOME/OPMN/opmn directory, make the following change:

adminHost=cfcvip.mycompany.com

If the Oracle Instance is to be registered or re-registered with a Cold Failover Clusters administration server using the OPMN registration commands, the AdminHost location in the opmnctl command should reference the new location of the administration server (cfcvip.mycompany.com).
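
For example, a registration command would take the following general form; this is a sketch, with the port and username as example values:

    ORACLE_INSTANCE/bin/opmnctl registerinstance -adminHost cfcvip.mycompany.com \
      -adminPort 7001 -adminUsername weblogic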

Changing Client Side configuration for Oracle Enterprise Manager

Since the Enterprise Manager is part of the same container where the administration server runs, transforming the administration server to Cold Failover Clusters also transforms the Enterprise Manager. If there are existing Enterprise Manager Agents configured to be part of the domain, these agent configurations must use the new location for the Enterprise Manager. To configure the new location for Enterprise Manager, use the following steps for each agent:

  1. Set the directory to ORACLE_INSTANCE/EMAGENT/emagent_asinst_1/sysman/config.

  2. In the emd.properties file, change node1.mycompany.com to cfcvip.mycompany.com in the following attributes:

    • REPOSITORY_URL

    • EmdWalletSrcUrl

  3. Stop and restart the agent using the following commands:

    cd INSTANCE_HOME/EMAGENT/emagent_dir/bin 
    ./emctl stop agent 
    ./emctl start agent 
    ./emctl status agent 
    

    This shows the Repository URL and it should now point to the new host.

10.2.2.4 Transforming Oracle WebLogic Managed Servers

All Oracle Fusion Middleware components are deployed to a Managed Server. An important step to convert an application or component that is deployed to Oracle WebLogic Server to Cold Failover Clusters is to change its listen address to the virtual IP being used. This change is done for the specific Managed Server to which the component has been deployed. You can make this change using the WebLogic Server Administration Console or using WLST commands.

The following example describes the generic steps for Cold Failover Clusters transformation of a Managed Server named WLS_EXMPL. These steps apply to any Managed Server in the Fusion Middleware components.

10.2.2.4.1 Transforming an Oracle WebLogic Managed Server using the Fusion Middleware Administration Console

For this procedure, the WebLogic Server Administration Console must be running. In the following example, cfcvip.mycompany.com is the virtual IP used for the Cold Failover Clusters, and WLS_EXMPL is the managed server to be transformed.

  1. Log into the WebLogic Server Administration Console.

  2. Create a machine for the virtual host:

    1. Select Environment > Machines.

    2. Click Lock & Edit.

    3. Click New.

    4. For the Name field, enter cfcvip.mycompany.com

    5. For the Machine OS field, select the appropriate operating system.

    6. Click OK.

    7. Click the newly created Machine.

    8. Click Node Manager tab.

    9. Update the Listen Address to cfcvip.mycompany.com.

    10. Click Save.

    11. Click Activate Changes.

  3. Stop the WLS_EXMPL Managed server:

    1. Choose Environment > Servers.

    2. Click Control.

    3. Select WLS_EXMPL.

    4. Select Force Shutdown Now in the Shutdown drop-down menu.

  4. Associate the WLS_EXMPL Managed Server with the VirtualHost Machine:

    1. Choose Environment > Servers.

    2. Click Lock & Edit.

    3. Click Configuration.

    4. Select WLS_EXMPL.

    5. For Machine, select the newly created machine from the pull-down menu.

    6. For Listen Address, enter cfcvip.mycompany.com.

    7. Click Save.

    8. Click Activate Changes.

  5. Start the WLS_EXMPL Managed Server:

    1. Choose Environment > Servers.

    2. Click Control.

    3. Select WLS_EXMPL.

    4. Click Start.

10.2.2.4.2 Transforming an Oracle WebLogic Managed Server using the WLST Command Line

You can transform an Oracle WebLogic managed server using WLST commands as well.

Oracle recommends shutting down the managed server you are transforming before performing these steps.

To transform a Managed Server using the WLST command line in online mode (with the WebLogic Server administration server up):

  1. In the command line, enter:

    WL_HOME/server/bin/setWLSEnv.sh
    WL_HOME/common/bin/wlst.sh
    
  2. In WLST, enter the following commands:

    wls:/offline>connect(<username>,<password>,<AdminServer location>)
    

    For example:

    wls:/offline>connect('WebLogic', 'welcome1', 't3://admin.mycompany.com:7001')
    
    wls:/DomainName/serverConfig> edit()
    wls:/DomainName/edit> startEdit()
    wls:/DomainName/edit !> create('cfcvip.mycompany.com','Machine')
    wls:/DomainName/edit !> cd('Machines/cfcvip.mycompany.com/NodeManager/cfcvip.mycompany.com')
    wls:/DomainName/edit !> set('ListenAddress', 'cfcvip.mycompany.com')
    wls:/DomainName/edit !>cd ('Servers')
    wls:/DomainName/edit/Servers !>cd ('WLS_EXMPL')
    wls:/DomainName/edit/Servers/WLS_EXMPL !>set('Machine','cfcvip.mycompany.com')
    wls:/DomainName/edit/Servers/WLS_EXMPL !>set('ListenAddress','cfcvip.mycompany.com')
    wls:/DomainName/edit/Servers/WLS_EXMPL !> save()
    wls:/DomainName/edit/Servers/WLS_EXMPL !> activate()
    wls:/DomainName/edit/Servers/WLS_EXMPL> exit()
    

    Stop (if not already down) and start the Managed Server.

    Once the Managed Server transformation is complete, all references to it should use the new listen address, cfcvip.mycompany.com. If Oracle HTTP Server serves as a front end to this Managed Server, then any mod_wl_ohs configuration with mount points referring to applications in this Managed Server should be changed to route to the new listening end point.
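
    For example, a mod_wl_ohs mount point routing to the transformed Managed Server might look like the following sketch, where the /exmpl location and port 8001 are placeholders for your application path and the Managed Server's listen port:

    <Location /exmpl>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort 8001
    </Location>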

10.2.2.5 Transforming Node Manager

Node Manager can be used in a Cold Failover Cluster environment. The possible configurations are:

  • Using a Node Manager that also listens on the virtual IP and fails over with the rest of the Cold Failover Clusters stack. With ASCRS-based deployments, the Node Manager must be part of the same Middleware home as the one in which the Cold Failover Clusters Fusion Middleware instance is running. This Node Manager is assumed to be dedicated to this Fusion Middleware instance. In this case, Oracle recommends configuring additional network channels that listen on localhost. Other Node Managers may co-exist on the machine, listening on other ports. For more details, see Chapter 11, "Using Oracle Cluster Ready Services."

  • Using a Node Manager that does not fail over with the rest of the Cold Failover Cluster stack. In this case, Node Manager is not configured for Cold Failover Cluster and listens on all IPs on the machine, not specifically on the virtual IP for Cold Failover Cluster. The failover nodes also have a similarly configured Node Manager already available. The Machine associated with the WebLogic instance communicates with the Node Manager on localhost. For more details, see the Oracle Fusion Middleware Node Manager Administrator's Guide for Oracle WebLogic Server.

For Cold Failover Cluster in general, port usage should be planned so that there are no port conflicts when failover occurs.

To convert the Node Manager to Cold Failover Clusters:

  1. If Node Manager is running, stop it.

    The nodemanager.properties file is created only after the first start of Node Manager; if the file does not exist yet, start and then stop Node Manager to generate it.

  2. In the nodemanager.properties file, located in the WL_HOME/common/nodemanager/ directory, set the ListenAddress to the virtual IP.

    For example:

    ListenAddress=cfcvip.mycompany.com
    
  3. Restart Node Manager using the startNodeManager.sh file, located in the WL_HOME/server/bin directory. For ASCRS-based deployments, start Node Manager using WL_HOME/server/bin/cfcStartNodemanager.sh.

    See Chapter 11, "Using Oracle Cluster Ready Services." for more details.

    Note:

    For ASCRS-based deployments, always start Node Manager using the cfcStartNodemanager.sh script instead of the startNodeManager.sh script.

10.2.2.6 Transforming Oracle Process Management and Notification Server

Oracle Process Management and Notification Server (OPMN) is used for Process Management of system components and is part of the application server instance.

Oracle recommends keeping the default OPMN configuration in a Cold Failover Cluster environment. No further steps are necessary for Cold Failover Cluster transformation of the OPMN process itself.
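
As an optional check after transformation or failover, you can confirm that OPMN-managed components are running with the standard status command:

    ORACLE_INSTANCE/bin/opmnctl status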

If you are transforming an Oracle instance for Cold Failover Clusters and it has already been registered with an administration server, make the following change in the topology.xml file, located in the DOMAIN_HOME/opmn directory of the administration server domain: change the host name entries for this specific Oracle instance (being transformed to Cold Failover Clusters) to cfcvip.mycompany.com.

For example, for an Oracle HTTP Server instance transformed to Cold Failover Clusters, set the following in the topology.xml file:

<property name="HTTPMachine" value="cfcvip.mycompany.com"/>

For the instance itself:

<ias-instance id="asinst " instance-home="/11gr1as3/MW/asinst" host="cfcvip.mycompany.com" port="6701">

10.2.2.7 Transforming Oracle Enterprise Manager for an Oracle Instance

When an Oracle instance (such as Oracle Internet Directory, Oracle Virtual Directory, Web Tier components, or Oracle Portal, Forms, Reports, and Discoverer components) is transformed to Cold Failover Clusters, the Enterprise Manager agent that is part of this Oracle instance must be transformed to Cold Failover Clusters as well.

To transform the Enterprise Manager agent:

  1. Stop the Enterprise Manager agent using the following command:

    cd INSTANCE_HOME/EMAGENT/emagent_dir/bin 
    ./emctl stop agent 
    
  2. Set the directory to ORACLE_INSTANCE/EMAGENT/emagent_instance_name/sysman/config.

  3. In the emd.properties file, change node1.mycompany.com to cfcvip.mycompany.com for the emd_url attribute.

  4. Change the targets.xml file on the agent side:

    cd INSTANCE_HOME/EMAGENT/emagent_dir/sysman/emd 
    cp targets.xml targets.xml.org 
    

    Modify targets.xml so that it has only targets related to the host and oracle_emd. Remove all other entries. For example:

    <Targets AGENT_TOKEN="ad4e5899e7341bfe8c36ac4459a4d569ddbf03bc"> 
           <Target TYPE="oracle_emd" NAME="cfcvip.mycompany.com:<port>"/> 
           <Target TYPE="host" NAME="cfcvip.mycompany.com" DISPLAY_NAME="cfcvip.mycompany.com"/> 
    </Targets>
    
  5. Restart the agent:

    cd INSTANCE_HOME/EMAGENT/emagent_dir/bin 
    ./emctl start agent
    

Make the following changes for the Enterprise Manager server in the administration server domain directory:

  1. Set your directory to MW_HOME/user_projects/domains/domain_name/sysman/state.

  2. In the targets.xml file in this directory, modify the hostname from node1.mycompany.com to cfcvip.mycompany.com (for example, as shown below).
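
For example, on Linux the substitution can be made as follows, after taking the local backup recommended earlier:

    cd MW_HOME/user_projects/domains/domain_name/sysman/state
    cp targets.xml targets.xml.org
    sed -i 's/node1.mycompany.com/cfcvip.mycompany.com/g' targets.xml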

10.2.2.8 Transforming Web Tier Components and Clients

The Web tier is made up of two primary components, Oracle HTTP Server and Oracle Web Cache. The next two sections describe how to transform Oracle HTTP Server and Oracle Web Cache for Cold Failover Clusters.

10.2.2.8.1 Transforming Oracle HTTP Server

To transform Oracle HTTP Server for Cold Failover Clusters:

In INSTANCE_HOME/config/OHS/component_name/httpd.conf, change the following attributes:

Listen cfcvip.mycompany.com:<port> #OHS_LISTEN_PORT
Listen cfcvip.mycompany.com:<port> #OHS_PROXY_PORT
ServerName cfcvip.mycompany.com
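
For the change to take effect, restart Oracle HTTP Server; for example, assuming the component is named ohs1:

    ORACLE_INSTANCE/bin/opmnctl restartproc ias-component=ohs1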

Clients of Oracle HTTP Server

If an Oracle Web Cache instance routes to an Oracle HTTP Server that has been transformed to Cold Failover Clusters, change the following attributes in INSTANCE_HOME/config/WebCache/component_name/webcache.xml:

Change node1.mycompany.com to cfcvip.mycompany.com, where node1.mycompany.com is the previous address of the Oracle HTTP Server before transformation. For example:

HOST ID="h1" NAME="cfcvip.mycompany.com" PORT="8888" LOADLIMIT="100"
 OSSTATE="ON"/>
<HOST ID="h2" NAME="cfcvip.mycompany.com" PORT="8890" LOADLIMIT="100" OSSTATE="ON"
 SSLENABLED="SSL"/>
10.2.2.8.2 Transforming Oracle Web Cache

To transform an Oracle Web Cache for Cold Failover Clusters:

  1. Set up an alias to the physical hostname on both nodes of the cluster in /etc/hosts.

    This is an alias to the IP address of the node. Set this in /etc/hosts on UNIX and in the Windows hosts file location on Windows. The alias name is wcprfx.mycompany.com. For example, on node Node1, the /etc/hosts entry (on UNIX) would be:

    n.n.n.n node1 node1.mycompany.com wcprfx wcprfx.mycompany.com

    On the failover node Node2, the /etc/hosts entry (on UNIX) would be:

    n.n.n.m node2 node2.mycompany.com wcprfx wcprfx.mycompany.com

    On Windows, make this change on all nodes in the file C:\SystemRoot\system32\drivers\etc\hosts, where SystemRoot is either Winnt or Windows.

  2. In INSTANCE_HOME/config/WebCache/wc1/webcache.xml:

    • Change node1.mycompany.com to cfcvip.mycompany.com, where node1.mycompany.com is the host where Oracle Web Cache was installed and the address it was listening on before transformation. For example:

      SITE NAME="cfcvip.mycompany.com"
      
    • Change the Virtual Host Name entries to be cfcvip.mycompany.com for the SSL and non-SSL ports. For example:

      <HOST SSLENABLED="NONE" ISPROXY="NO" OSSTATE="ON" NUMRETRY="5"
       PINGINTERVAL="10" PINGURL="/" LOADLIMIT="100" PORT="8888"
       NAME="cfcvip.mycompany.com" ID="h0"/>
              <HOST SSLENABLED="SSL" ISPROXY="NO" OSSTATE="ON" NUMRETRY="5"
       PINGINTERVAL="10" PINGURL="/" LOADLIMIT="100" PORT="8890"
       NAME="cfcvip.mycompany.com" ID="h3"/>
              <VIRTUALHOSTMAP PORT="8094" NAME="cfcvip.mycompany.com">
                  <HOSTREF HOSTID="h3"/>
              </VIRTUALHOSTMAP>
              <VIRTUALHOSTMAP PORT="8090" NAME="cfcvip.mycompany.com">
                  <HOSTREF HOSTID="h0"/>
              </VIRTUALHOSTMAP> 
      
    • Change cache name entries to be based on wcprfx.mycompany.com, where wcprfx.mycompany.com is the alias created in /etc/hosts on all nodes of the cluster. For example:

      <CACHE WCDEBUGON="NO" CAPACITY="30" VOTES="1" INSTANCENAME="asinst_1"
       COMPONENTNAME="wc1" ORACLEINSTANCE="/mnt1/ Oracle/Middleware/asinst_1"
       HOSTNAME="wcprfx.mycompany.com" ORACLEHOME="/mnt1/ Oracle/Middleware/as
      _1" NAME=" wcprfx.mycompany.com-WebCache">
      
    • In the MULTIPORT section, change IPADDR from ANY to cfcvip.mycompany.com for the following:

      PORTTYPE="NORM"
      SSLENABLED="SSL" PORTTYPE="NORM"
      PORTTYPE="ADMINISTRATION"
      PORTTYPE="INVALIDATION"
      PORTTYPE="STATISTICS"
      

      For example:

       <MULTIPORT>
           <LISTEN PORTTYPE="NORM" PORT="8090" IPADDR="cfcvip.mycompany.com"/>
           <LISTEN SSLENABLED="SSL" PORTTYPE="NORM" PORT="8094" IPADDR="cfcvip.mycompany.com">
             <WALLET>/mnt1/Oracle/Middleware/asinst_1/config/WebCache/wc1/keystores/default</WALLET>
           </LISTEN>
           <LISTEN PORTTYPE="ADMINISTRATION" PORT="8091" IPADDR="cfcvip.mycompany.com"/>
           <LISTEN PORTTYPE="INVALIDATION" PORT="8093" IPADDR="cfcvip.mycompany.com"/>
           <LISTEN PORTTYPE="STATISTICS" PORT="8092" IPADDR="cfcvip.mycompany.com"/>
       </MULTIPORT>
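
After saving webcache.xml, restart Oracle Web Cache for the changes to take effect; for example, assuming the component is named wc1:

    ORACLE_INSTANCE/bin/opmnctl restartproc ias-component=wc1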
      

10.2.2.9 Instance-Specific Considerations

In a Cold Failover Clusters environment, a failover node (node2.mycompany.com) must be equivalent to the install machine (node1.mycompany.com) in all respects. To make the failover node equivalent to the installation node, perform the following procedure on the failover instance:

10.2.2.9.1 UNIX Platforms

For UNIX platforms follow these steps:

  1. Fail over the Middleware home from Node 1 (the installation node) to the failover node (Node 2). Do this manually, following the mount/unmount procedure described earlier.

  2. As root, do the following:

    • Create an oraInst.loc file in the /etc directory, identical to the one on Node1 (see the sketch after this list).

    • Run the oracleRoot.sh file located in the ORACLE_HOME directory on Node2, if required and available for the product suite.

  3. Create the oraInventory on the second node by using the attachHome command, ORACLE_HOME/oui/bin/attachHome.sh, as shown in the sketch below.
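
As a sketch, the /etc/oraInst.loc file on Node 2 contains the same entries as the one on Node 1 (the inventory path and group below are example values), after which attachHome.sh registers the Middleware home in the new inventory:

    # /etc/oraInst.loc on Node 2 (example values; copy from Node 1)
    inventory_loc=/u01/app/oracle/oraInventory
    inst_group=oinstall

    # as the instance owner on Node 2
    ORACLE_HOME/oui/bin/attachHome.sh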

10.2.2.9.2 Windows Platform

For Windows Platforms, follow these steps:

For AS instances (Web Tier, OID/OVD for IDM installs, Oracle Portal, Forms, Reports, and Discoverer):

  1. To create the OPMN service on Machine 2, run the following command:

    sc create OracleProcessManager_instance_name  binPath= "ORACLE_HOME\opmn\bin\opmn.exe -S -I Instance_Home" 
    

    For example:

    sc create OracleProcessManager_asinst binPath= "X:\Middleware\im_oh\opmn\bin\opmn.exe -S -I X:\Middleware\asinst" 
    
  2. On both Machine 1 and Machine 2, set the OracleProcessManager_instance_name service to start manually:

    sc config OracleProcessManager_instance_name start= demand
    

For all installations:

To copy the Start Menu from Machine 1 to Machine 2, copy the files under C:\Documents and Settings\All Users\Start Menu\Programs\<Menu> from Machine 1 to Machine 2. You can do this with any Windows backup tool or Zip utility. For example, using the Windows Backup utility:

  1. Invoke the Windows Backup Utility by selecting Accessories, System Tools, and then Backup.

  2. Select C:\Documents and Settings\All Users\Start Menu\Programs\<Menu>.

  3. Copy the backup file to the second node.

  4. Restore the backup to the same location: C:\Documents and Settings\All Users\Start Menu\Programs\

For Node Manager, use the standard WebLogic Server procedure to configure Node Manager as a service.

Node Manager is configured as a service using the WL_HOME\server\bin\installNodeMgrSvc.cmd command. To configure Node Manager:

  • It must be set on both Machines.

  • Node Manager should be set to start manually on both nodes.

To configure Node Manager on a machine:

  1. Ensure that the Middleware Home is available on the machine.

  2. Install the service by running the above command.

  3. Set it to be started manually using the following command:

    sc config nodemanager_service_name start= demand
    

10.2.3 Transforming Oracle Fusion Middleware Components

This section describes the considerations and the transformation steps for each of the Oracle product suites. For detailed explanation on the product components, see the appropriate component chapter in this guide.

10.2.3.1 Transforming Oracle Internet Directory and Its Clients

This section describes how to transform Oracle Internet Directory and its clients.

10.2.3.1.1 Transforming Oracle Internet Directory

Follow these steps to transform an Oracle Internet Directory server:

  1. In a text editor, create a file named oidcfc.ldif that sets up the virtual IP cfcvip.mycompany.com for the Oracle Internet Directory server:

    dn: cn=oid1,cn=osdldapd,cn=subconfigsubentry 
    changetype: modify 
    replace: orclhostname 
    orclhostname: cfcvip.mycompany.com
    
  2. Run the following command:

    ORACLE_HOME/bin/ldapmodify -p <oidPort> -h <oidHost> -D cn=orcladmin -w
    <adminPasswd> -f oidcfc.ldif
    

    Oracle Internet Directory must be up when you run this command. The oidHost used in the command is the physical hostname listening endpoint with which Oracle Internet Directory was installed.

  3. Stop and restart the Oracle Internet Directory server using opmnctl:

    For example:

    ORACLE_INSTANCE/bin/opmnctl stopproc ias-component=oid1
    ORACLE_INSTANCE/bin/opmnctl startproc ias-component=oid1
    
  4. Re-register Oracle Internet Directory with the administration server:

    ORACLE_INSTANCE/bin/opmnctl updatecomponentregistration -adminHost
     myAdminHost -adminPort 7001 -adminUsername weblogic -componentType OID
     -componentName oid1 -Host cfcvip.mycompany.com
    
10.2.3.1.2 Transforming Oracle Internet Directory Clients

To transform an Oracle Internet Directory client:

  1. All clients of the Oracle Internet Directory server should use the virtual IP cfcvip.mycompany.com to access that Oracle Internet Directory server.

  2. Any Oracle Directory Integration Platform installation that uses the Oracle Internet Directory server must access it using the virtual IP. To do this:

    1. In a text editor, open the dip-config.xml file located in the DOMAIN_HOME/servers/wls_ods1/stage/DIP/11.1.1.1.0/DIP/configuration directory.

      For example:

      MW_HOME/user_projects/domains/IDMDomain/servers/wls_ods1/stage/DIP/
      11.1.1.1.0/DIP/configuration/dip-config.xml
      
    2. Enter the following value to set the LDAP address to the virtual IP:

      <OID_NODE_HOST>cfcvip.mycompany.com</OID_NODE_HOST>
      
  3. An Oracle Directory Services Manager instance managing the Oracle Internet Directory server transformed in Section 10.2.3.1.1, "Transforming Oracle Internet Directory" must use the virtual IP to connect to the Oracle Internet Directory server.

    For example, from the Oracle Directory Services Manager screen, verify that you can connect to Oracle Internet Directory using the virtual server by following these steps:

    1. Select the Connect to a directory > Create A New Connection link in the upper right-hand corner.

    2. In the New Connection screen, fill in the connection information below and click Connect:

      • Directory Type: OID

      • Name: OIDHA

      • Server: cfcvip.mycompany.com

      • Port: 389

      • SSL Enabled: Leave blank

      • User Name: cn=orcladmin

      • Password: ********

      • Start Page: Home (default)

10.2.3.2 Transforming Oracle Virtual Directory and Its Clients

This section describes how to transform Oracle Virtual Directory and its clients.

10.2.3.2.1 Transforming Oracle Virtual Directory

Follow these steps to transform an Oracle Virtual Directory server:

  1. In a text editor, open the listeners.os_xml file in the ORACLE_INSTANCE/config/OVD/componentname directory.

  2. Enter the following value to set the LDAP address to the virtual IP:

    <host>cfcvip.mycompany.com</host>
    
  3. Restart the Oracle Virtual Directory server using opmnctl.

    For example:

    ORACLE_INSTANCE/bin/opmnctl stopproc ias-component=ovd1
    ORACLE_INSTANCE/bin/opmnctl startproc ias-component=ovd1
    
10.2.3.2.2 Transforming Oracle Virtual Directory Clients

All clients of Oracle Virtual Directory must use the virtual IP cfcvip.mycompany.com to access Oracle Virtual Directory. For example, when using Oracle Directory Services Manager to administer a Cold Failover Cluster Oracle Virtual Directory instance, create a connection using cfcvip.mycompany.com as the location of the Oracle Virtual Directory instance.
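
For example, a quick connectivity check against the root DSE through the virtual hostname; a sketch, assuming the Oracle LDAP command-line tools and your configured Oracle Virtual Directory listener port and bind credentials:

    ORACLE_HOME/bin/ldapsearch -h cfcvip.mycompany.com -p <ovdPort> -D <bindDN>
    -w <bindPassword> -b "" -s base "objectclass=*"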

10.2.3.3 Transforming Oracle Directory Integration Platform and Oracle Directory Services Manager and Their Clients

This section describes how to transform Oracle Directory Integration Platform, Oracle Directory Services Manager, and their clients.

10.2.3.3.1 Transforming Oracle Directory Integration Platform and Oracle Directory Services Manager

Oracle Directory Integration Platform and Oracle Directory Services Manager are deployed to a Managed Server. The procedure for CFC transformation is to configure the Managed Server to which they are deployed to listen on the cfcvip.mycompany.com virtual IP. Follow the steps in Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" to configure the WLS_ODS managed server to listen on the cfcvip.mycompany.com virtual IP.

10.2.3.3.2 Transforming Oracle Directory Integration Platform and Oracle Directory Services Manager Clients

Follow these steps to transform Oracle Directory Integration Platform and Oracle Directory Services Manager clients:

  1. Clients of Oracle Directory Integration Platform and Oracle Directory Services Manager must use the virtual IP cfcvip.mycompany.com to access these applications.

  2. When Oracle HTTP Server is the front end for Oracle Directory Services Manager, the WebLogic configuration for Oracle Directory Services Manager must specify the virtual IP cfcvip.mycompany.com as the address for the WLS_ODS Managed Server. To do this, open the mod_wl_ohs.conf file in a text editor and make these edits for the mount points used by Oracle HTTP Server and Oracle Directory Services Manager:

    #Oracle Directory Services Manager
    <Location /odsm>
    SetHandler weblogic-handler
    WebLogicHost  cfcvip.mycompany.com:<port>
    </Location>
    

10.2.3.4 Transforming Oracle Identity Federation and Its Clients

This section describes how to transform Oracle Identity Federation and its clients.

10.2.3.4.1 Transforming Oracle Identity Federation

Oracle Identity Federation is a component that is deployed to a Managed Server. The procedure for Cold Failover Clusters transformation is to configure the Managed Server to which it is deployed to listen on the cfcvip.mycompany.com virtual IP. Follow the steps in Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" to configure the WLS_OIF Managed Server to listen on the cfcvip.mycompany.com virtual IP. Since Oracle Identity Federation Cold Failover Clusters deployments are likely to be split into Service Provider and Identity Provider, more than one instance of WLS_OIF is likely to exist in a given deployment. Use the same Cold Failover Clusters procedure for both WLS_OIF instances.

After configuring the Managed Server to listen on the cfcvip.mycompany.com virtual IP, log into the Oracle Enterprise Manager Fusion Middleware Control and perform these steps:

  1. Navigate to Farm > Identity and Access > OIF.

  2. In the right frame, navigate to Oracle Identity Federation > Administration and then make these changes:

    1. Server Properties: change the host to cfcvip.mycompany.com

    2. Identity Provider > Common: change the providerId to cfcvip.mycompany.com

    3. Service Provider > Common: change the providerId to cfcvip.mycompany.com

    4. Data Stores: If LDAP is the data store, then replace the value of Connection URL for User Data Store and Federation Data Store with cfcvip.mycompany.com

    5. Authentication Engine > LDAP Directory: Set the ConnectionURL to cfcvip.mycompany.com

Regenerate the federation metadata after making these changes.
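
One way to retrieve the regenerated metadata is over HTTP through the virtual hostname. A sketch, assuming the provider metadata URLs typically published by Oracle Identity Federation 11g (verify the exact paths for your deployment):

    curl http://cfcvip.mycompany.com:<port>/fed/idp/metadata > idp-metadata.xml
    curl http://cfcvip.mycompany.com:<port>/fed/sp/metadata > sp-metadata.xml

Confirm that the URLs inside the downloaded metadata reference cfcvip.mycompany.com rather than a physical hostname.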

10.2.3.4.2 Transforming Oracle Identity Federation Clients

Follow these steps to transform Oracle Identity Federation clients:

  1. Clients of Oracle Identity Federation must use the virtual IP cfcvip.mycompany.com to access these applications.

  2. When Oracle HTTP Server is the front end for Oracle Identity Federation, the WebLogic configuration for Oracle Identity Federation must specify the virtual IP cfcvip.mycompany.com as the address for the WLS_OIF Managed Server. To do this, open the mod_wl_ohs.conf file in a text editor and make these edits for the mount points used by Oracle HTTP Server and Oracle Identity Federation:

    #Oracle Identity Federation
    <Location /oif>
    SetHandler weblogic-handler
    WebLogicHost  cfcvip.mycompany.com:<port>
    </Location>
    

10.2.3.5 Transforming an Oracle SOA Suite

Oracle SOA Suite is made up of Java EE components deployed to managed servers. The typical configuration has deployments of OWSM-PM applications and SOA applications. The SOA applications are always deployed to a separate managed server. In many Cold Failover Clusters deployments OWSM-PM is deployed to the Administration Server; however, it can be deployed to its own managed server, for example, WLS_OWSM, or to the SOA managed server. In all cases, since these are Java EE components, the Cold Failover Clusters transformation procedure involves configuring the managed servers to which they are deployed to listen on the virtual IP. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for information on transforming the WLS_OWSM and WLS_SOA managed servers.

In addition, follow these steps to transform a SOA component managed server:

  1. Set the front-end host for the WLS_SOA managed server to cfcvip.mycompany.com (a WLST sketch that automates these substeps appears at the end of this procedure).

    1. Log into Oracle WebLogic Server Administration Console.

    2. In the Environment section, select Servers.

    3. Select the name of the managed server.

    4. Select Protocols, then select HTTP.

    5. In the Frontend Host field, enter the host name as cfcvip.mycompany.com.

    6. Set the Frontend Port to the HTTP port.

    7. Click Save.

    8. Activate the changes.

    9. Restart the Managed Server.

  2. When using Oracle HTTP Server as the front end, the mod WebLogic configuration for the applications deployed to WLS_OWSM and WLS_SOA should provide the VIP cfcvip.mycompany.com as the address of these managed servers. This configuration change is done in mod_wl_ohs.conf for the mount points used by SOA components. For example:

    #SOA soa-infra app
    <Location /soa-infra>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com:<port>
    </Location>
    
    # Worklist
    <Location /integration/>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com:<port>
    </Location>
    
    # B2B
    <Location /b2b>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com:<port>
    </Location>
    
    # UMS prefs
    <Location /sdpmessaging/userprefs-ui >
        SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com:<port>
    </Location>
    
    # WSM
    <Location /wsm-pm>
              SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com:<port>
    </Location>
    
    # workflow
    <Location /workflow>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com:<port>
    </Location>
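
The front-end host and port set in step 1 can also be scripted instead of set through the Administration Console. A minimal WLST sketch, assuming the administration server listens at t3://cfcvip.mycompany.com:7001 and that 8001 is the WLS_SOA HTTP port (both are assumptions; adjust for your domain):

    # connect to the administration server through the virtual hostname
    connect('weblogic', '<password>', 't3://cfcvip.mycompany.com:7001')
    edit()
    startEdit()
    # the WebServer MBean of the managed server holds the front-end settings
    cd('/Servers/WLS_SOA/WebServer/WLS_SOA')
    cmo.setFrontendHost('cfcvip.mycompany.com')
    cmo.setFrontendHTTPPort(8001)
    save()
    activate()
    disconnect()

Run the script with wlst.sh (for example, from MW_HOME/oracle_common/common/bin) and restart the Managed Server afterward, as in the console procedure.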
    

10.2.3.6 Transforming an Oracle WebCenter Suite

Oracle WebCenter Suite is made up of Java EE components. The typical WebCenter Suite deployment has three managed servers, though more are possible when you have WebCenter custom applications. Typical managed server deployments include:

  • WLS_SPACES

  • WLS_PORTLETS

  • WLS_SERVICES

Because these are Java EE components, the procedure for Cold Failover Clusters transformation is to configure the Managed Servers to which they are deployed to listen on the virtual IP. In many Cold Failover Cluster deployments of Oracle WebCenter, OWSM-PM may be deployed to the WLS_PORTLETS and WLS_SPACES containers. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for information on transforming Oracle WebCenter Suite managed servers.

In addition, when you use Oracle HTTP Server as the front end, the mod_weblogic configuration for the WebCenter Suite applications deployed to the managed servers should provide the VIP cfcvip.mycompany.com as the address of these managed servers. This configuration change is done in mod_wl_ohs.conf for the mount points used by WebCenter components. For example:

LoadModule weblogic_module "${ORACLE_HOME}/ohs/modules/mod_wl_ohs.so"
 
# Spaces
 
<Location /webcenter>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>
 
<Location /webcenterhelp>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>
 
<Location /rss>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>
 
# Portlet
 
<Location /portalTools>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>
 
<Location /wsrp-tools>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>
 
# WSM
 
<Location /wsm-pm>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>
 
# Discussions and Wiki
 
<Location /owc_discussions>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>
 
<Location /owc_wiki>
          SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>

10.2.3.7 Transforming Oracle Portal, Forms, Reports, and Discoverer

Oracle Portal, Forms, Reports, and Discoverer is made up of four separate components. The Cold Failover Clusters configuration varies for each of the Oracle Portal, Forms, Reports, and Discoverer components. This section documents the steps for Cold Failover Clusters transformation of Oracle Portal, Forms, Reports, and Discoverer.

10.2.3.7.1 Transforming Oracle Forms for Cold Failover Clusters

Oracle Forms is made up of both Java EE and system components. To configure Oracle Forms for Cold Failover Clusters:

Note:

The transformation process requires that the domain be shut down before editing these files.

  1. Transform the WLS_FORMS managed server. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for this procedure. Start the domain administration server to do the Managed Server transformation, and shut it down once the Managed Server transformation is finished.

  2. Transform the Oracle Forms OPMN. See Section 10.2.2.6, "Transforming Oracle Process Management and Notification Server" for this procedure.

  3. Transform Enterprise Manager. See Section 10.2.2.7, "Transforming Oracle Enterprise Manager for an Oracle Instance" for this procedure.

  4. Transform Oracle Forms Web tier components. See Section 10.2.2.8, "Transforming Web Tier Components and Clients" for this procedure.

  5. Reregister Single Sign-On using the steps described in the Section 12.6.4.8.4, "Enable Single Sign On."

  6. In the forms.conf file located in the INSTANCE_HOME/config/OHS/ohs1/moduleconf directory, change the mod weblogic configuration:

    <Location /Forms>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com:<port>
        DynamicServerList OFF
    </Location>
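
    Start the administration server and Managed Servers in the WebLogic domain, as well as the Oracle instances, to validate the Cold Failover Cluster installation. A sketch of this start sequence, assuming the default script locations and server names (adjust paths for your environment):

    DOMAIN_HOME/bin/startWebLogic.sh
    DOMAIN_HOME/bin/startManagedWebLogic.sh WLS_FORMS t3://cfcvip.mycompany.com:7001
    ORACLE_INSTANCE/bin/opmnctl startall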
    
10.2.3.7.2 Transforming Oracle Reports for Cold Failover Clusters

Oracle Reports is made up of both Java EE and system components. To configure Oracle Reports for Cold Failover Clusters:

  1. Transform the WLS_REPORTS managed server. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for this procedure. Start the Domain administration server to do the Managed Server transformation, and shut it down once the Managed Server transformation is finished.

  2. Transform the Oracle Reports OPMN. See Section 10.2.2.6, "Transforming Oracle Process Management and Notification Server" for this procedure

    In addition, do the following:

    1. In the opmn.xml file, located in the INSTANCE_HOME/config/OPMN/opmn directory, change the Reports server name in ias-component to reflect its new name.

      For example:

      <ias-component id="ReportsServer_virtual_hostname_instance_name">
      <process-set id="ReportsServer_virtual_hostname_instance_name"
       restart-on-death="true" numprocs="1">
      </ias-component>
      

      For example:

      <ias-component id="ReportsServer_cfcvip_asinst1">
         <process-set id="ReportsServer_cfcvip_asinst1" restart-on-death="true"
         numprocs="1">
      </ias-component>
      
    2. In DOMAIN_HOME/opmn/topology.xml of the administration server domain home, change all occurrences of the Reports server name from ReportsServer_physical_hostname_instance_name (for example: ReportServer_node1_asinst) to ReportsServer_virtual_hostname_instance_name (for example: ReportServer_cfcvip_asinst). A scripted sketch of these bulk renames appears after this procedure.

  3. Transform Oracle Enterprise Manager. See Section 10.2.2.7, "Transforming Oracle Enterprise Manager for an Oracle Instance" for this procedure.

  4. Transform Oracle Reports Web tier components. See Section 10.2.2.8, "Transforming Web Tier Components and Clients" for this procedure.

  5. Reregister Single Sign-On. To do this perform the steps described in the Section 12.6.4.8.4, "Enable Single Sign On."

  6. In the reports_ohs.conf file located in the INSTANCE_HOME/config/OHS/ohs1/moduleconf directory, change the following:

    <Location /reports>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.mycompany.com
        WebLogicPort 9001
    </Location>
    
  7. In the directory INSTANCE_HOME/config/ReportsServerComponent, rename the following directory:

    ReportsServer_physical_hostname_instance_name (for example: ReportServer_node1_asinst)
    

    To the following:

    ReportsServer_VIP_instance_name (for example: ReportServer_cfcvip_asinst)
    

    Note:

    All new logs for the Reports server go to ReportsServer_Virtual_Hostname_Instance_Name after the instance is restarted.

  8. In the renamed directory, replace the physical hostname with the virtual hostname in the following files:

    • component-logs.xml

    • logging.xml

  9. In the targets.xml file, located in the INSTANCE_HOME/EMAGENT/emagent_dir/sysman/emd directory, change the following:

    1. Change all hostnames from node1.mycompany.com to cfcvip.mycompany.com

    2. Change the report server name from ReportsServer_physical_hostname_instance_name (for example: ReportServer_node1_asinst) to ReportsServer_VIP_instance_name (for example: ReportServer_cfcvip_asinst) for the following two elements:

      Target TYPE="oracle_repserv"

      Target TYPE="oracle_repapp"

  10. In the INSTANCE_HOME/reports directory, replace the physical hostname with a virtual hostname in reports_install.properties.

  11. In the INSTANCE_HOME/reports/server directory, rename the following files:

    • Rename reportsserver_physical_hostname_instance_name.dat to reportsserver_virtual_hostname_instance_name.dat

    • Rename rep_wls_reports_physical_hostname_instance_name.dat to rep_wls_reports_virtual_hostname_instance_name.dat

    Start the administration server and Managed Servers in the WebLogic Domain, as well as the Oracle instances, to validate the Cold Failover Cluster installation.

  12. In the DOMAIN_HOME/servers/WLS_REPORTS/stage/reports/reports/configuration directory, replace the physical hostname with the virtual hostname in the rwservlet.properties file (including changes to any occurrence of the Reports server name).

  13. In the DOMAIN_HOME/servers/WLS_REPORTS/stage/reports/reports/META-INF directory, replace the physical hostname with the virtual hostname in the mbeans.xml file (including changes to any occurrence of the Reports server name).
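
The bulk hostname and Reports server name substitutions in steps 2, 12, and 13 can be scripted. A sketch, assuming GNU sed on Linux and the example names used above (back up each file before editing):

    for f in DOMAIN_HOME/opmn/topology.xml \
             DOMAIN_HOME/servers/WLS_REPORTS/stage/reports/reports/configuration/rwservlet.properties \
             DOMAIN_HOME/servers/WLS_REPORTS/stage/reports/reports/META-INF/mbeans.xml; do
        cp "$f" "$f.bak"    # keep a backup copy of each file
        sed -i 's/ReportServer_node1_asinst/ReportServer_cfcvip_asinst/g; s/node1.mycompany.com/cfcvip.mycompany.com/g' "$f"
    done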

10.2.3.7.3 Transforming Oracle Discoverer for Cold Failover Clusters

Oracle Discoverer is made up of both Java EE and system components. To configure Oracle Discoverer for Cold Failover Clusters:

  1. Transform the WLS_DISCO managed server. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for this procedure. Start the domain administration server to do the Managed Server transformation, and shut it down once the Managed Server transformation is finished.

  2. Transform the Oracle Discoverer OPMN. See Section 10.2.2.6, "Transforming Oracle Process Management and Notification Server" for this procedure.

  3. Transform Oracle Enterprise Manager. See Section 10.2.2.7, "Transforming Oracle Enterprise Manager for an Oracle Instance" for this procedure.

  4. Transform Oracle Discoverer Web tier components. See Section 10.2.2.8, "Transforming Web Tier Components and Clients" for this procedure.

  5. Reregister Single Sign-On. To do this perform the steps described in the Section 12.6.4.8.4, "Enable Single Sign On."

  6. In the reports_ohs.conf file located in the INSTANCE_HOME/config/OHS/ohs1/moduleconf directory, change the following:

    <IfModule mod_weblogic.c>
    <Location /discoverer>
    SetHandler weblogic-handler
    WebLogicCluster cfcvip.mycompany.com:<port>
    DynamicServerList ON
    </Location>
    </IfModule>
    
  7. In the configuration.xml file located in the DOMAIN_HOME/servers/WLS_DISCO/stage/discoverer/11.1.1.1.0/discoverer/configuration/ directory, change the following:

    applicationURL="http://cfcvip.mycompany.com:8090/discoverer">
    

    The port in the applicationURL above is the Oracle HTTP Server port (assuming the Oracle HTTP Server has also been transformed to Cold Failover Clusters).

    Start the administration server and Managed Servers in the WebLogic Domain, as well as the Oracle Instances to validate the Cold Failover Cluster installation.

10.2.3.7.4 Transforming Oracle Portal for Cold Failover Clusters

Oracle Portal is made up of both Java EE and system components. To configure Oracle Portal for Cold Failover Clusters:

  1. Transform the WLS_PORTAL managed server. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for this procedure. Start the domain administration server to do the Managed Server transformation, and shut it down once the Managed Server transformation is finished.

  2. Transform the Oracle Portal OPMN. See Section 10.2.2.6, "Transforming Oracle Process Management and Notification Server" for this procedure.

  3. Transform Oracle Enterprise Manager. See Section 10.2.2.7, "Transforming Oracle Enterprise Manager for an Oracle Instance" for this procedure.

  4. Transform Oracle Portal Web tier components. See Section 10.2.2.8, "Transforming Web Tier Components and Clients" for this procedure.

    Additionally, reset the invalidation and Administrator password:

    1. Log in to the Oracle Enterprise Manager Fusion Middleware Console and, in the navigator window, expand the Web Tier tree.

    2. Click on the Web Cache component, for example, wc1.

    3. From the drop-down list at the top of the page, select Administration and then Passwords:

      Enter a new invalidation password, confirm it, and click Apply.

      Enter a new administrator password, confirm it, and click Apply.

  5. Reregister Single Sign-On. To do this, perform the steps described in the Section 12.6.4.8.4, "Enable Single Sign On."

  6. In the portal.conf file located in the INSTANCE_HOME/config/OHS/ohs1/moduleconf directory, change the mod weblogic configuration:

    # WLS routing configuration
    <Location /portal>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort <port>
    </Location>
    
    <Location /portalTools>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort <port>
    </Location>
    
    <Location /wsrp-tools>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort <port>
    </Location>
    
    <Location /richtextportlet>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort <port>
    </Location>
    
    <Location /jpdk>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort <port>
    </Location>
    
    <Location /portalHelp>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort <port>
    </Location>
    
    <Location /portalHelp2>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com
    WebLogicPort <port>
    </Location> 
    
  7. Rewire the Portal Repository:

    1. Log into the domain WebLogic Server Enterprise Manager using the following URL:

      http://cfcvip.mycompany.com:7001/em

    2. Change the invalidation and administrator passwords.

      In the Navigator Window Expand the Web Tier tree.

      Click the component wc1.

      From the drop-down list at the top of the page, select Administration - Passwords.

      Enter a new invalidation password, confirm it, and click Apply.

      Enter a new administrator password, confirm it, and click Apply.

    3. Expand the Fusion Middleware menu in the left pane.

    4. Select Classic.

    5. Click Portal

      The Portal Domain information page appears.

    6. Right-click on Portal and select Settings, and then Wire Configuration.

    7. Enter the following information for Portal Midtier:

      Host: Enter the Cold Failover Clusters Virtual IP name of the Web Cache host cfcvip.mycompany.com.

      Port: Enter the Web Cache port being used (HTTP or HTTPS).

      SSL Protocol: Enter this value if appropriate.

    8. Enter the following information for Web Cache:

      Host: Enter the Cold Failover Clusters Virtual IP name of the Web Cache host, cfcvip.mycompany.com.

      Invalidation port: Enter the Invalidation port, for example, 9401.

      Invalidation User Name: Enter the user name for Portal invalidations.

      Invalidation Password: Enter the password for this account.

    9. Click Apply to start the rewire.

    10. When the rewire is complete, click the Portal menu option again, and ensure that the Portal URL is now the following:

      https://cfcvip.mycompany.com:WCHTTPPort/portal/pls/portal

  8. Change Host Assertion in Oracle WebLogic Server.

    Because the Oracle HTTP Server acts as a proxy for WebLogic, by default certain CGI environment variables, including the host and port, are not passed through to WebLogic. To ensure that WebLogic is aware that it is using a virtual site name and port so that it can generate internal URLs appropriately:

    1. Log in to the WebLogic Server Administration Console using the following URL:

      http://cfcvip.mycompany.com:7001/console

    2. Select WLS_PORTAL from the home page or select Environment, and then Clusters from the Domain Structure menu.

    3. Click Lock & Edit in the Change Center window to enable editing.

    4. Click Protocols, and select HTTP.

    5. Enter the following values:

      Frontend Host: cfcvip.mycompany.com

      Frontend HTTP Port: WCHTTPPort (8090)

      Frontend HTTPS Port: WCHTTPSPort (8094)

      This ensures that any HTTPS URLs created from within WebLogic are directed to port 443 on the load balancer.

    6. Click Activate Changes in the Change Center window to apply the changes.

    7. Restart the WLS_PORTAL managed server.
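
A quick way to check the host assertion is to request the Portal URL through the virtual hostname and the Web Cache HTTP port; a sketch, assuming the example port 8090:

    curl -I http://cfcvip.mycompany.com:8090/portal/pls/portal

The response headers, including any redirects, should reference cfcvip.mycompany.com rather than a physical hostname.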

10.2.3.7.5 Transforming Oracle Business Activity Monitoring (BAM)

Oracle BAM is made up of Java EE components deployed to a managed server. Since these are Java EE components, the Cold Failover Clusters transformation procedure involves configuring the managed server to which they are deployed to listen on the virtual IP. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for information on transforming the WLS_BAM managed server.

When using Oracle HTTP Server as the front end, the mod WebLogic configuration for the applications deployed to WLS_BAM should provide the VIP cfcvip.mycompany.com as the address of this managed server. This configuration change is done in the mod_wl_ohs.conf file for the mount points used by Oracle BAM components. For example:

# BAM Web Application
<Location /OracleBAM>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>

10.2.3.7.6 Transforming a Custom ADF Deployment

For a deployment that uses a custom ADF application, Cold Failover Clusters can be used in the same way as for any other Oracle Fusion Middleware deployment. The domain in this case is created using the installation from the Oracle Application Developer DVD. Since the deployment consists primarily of Java EE components, the Cold Failover Clusters transformation procedure involves configuring the managed server to which they are deployed to listen on the virtual IP. See Section 10.2.2.4, "Transforming Oracle WebLogic Managed Servers" for information on transforming managed servers.

When using Oracle HTTP Server as the front end, the mod WebLogic configuration for the applications deployed to WLS_ADF (the name of the managed server for the custom application) should provide the VIP cfcvip.mycompany.com as the address of this managed server. This configuration change is done in mod_wl_ohs.conf for the mount points used by the application. For example:

# Custom ADF application
<Location /ADFApplicationMountPoint>
    SetHandler weblogic-handler
    WebLogicHost cfcvip.mycompany.com:<port>
</Location>

10.2.3.8 Single Sign-On Re-registration (If required)

Single Sign-On (SSO) re-registration typically applies only to Oracle Portal, Forms, Reports, and Discoverer. Once the front end listening endpoint on Oracle HTTP Server for this tier has been changed to the Virtual IP, it becomes necessary to do SSO re-registration so that the URL to be protected is configured with the virtual IP. To re-register SSO, perform these steps on the 10.1.x installation of Identity Management where the SSO server resides:

  1. Set the ORACLE_HOME variable to the SSO ORACLE_HOME location.

  2. Execute ORACLE_HOME/sso/bin/ssoreg.sh (ssoreg.bat for Windows) with the following parameters:

    -site_name cfcvip.mycompany.com:port
    -mod_osso_url http://cfcvip.mycompany.com 
    -config_mod_osso TRUE
    -oracle_home_path ORACLE_HOME
    -config_file /tmp/osso.conf
    -admin_info cn=orcladmin
    -virtualhost
    -remote_midtier
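
    For example, assembled as a single command (the port 80 here is an assumption for illustration):

    ORACLE_HOME/sso/bin/ssoreg.sh -site_name cfcvip.mycompany.com:80 \
      -mod_osso_url http://cfcvip.mycompany.com \
      -config_mod_osso TRUE \
      -oracle_home_path $ORACLE_HOME \
      -config_file /tmp/osso.conf \
      -admin_info cn=orcladmin \
      -virtualhost \
      -remote_midtier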
    
  3. Copy the /tmp/osso.conf file to the mid-tier home location:

    ORACLE_INSTANCE/config/OHS/ohs1

  4. Restart Oracle HTTP Server by issuing the following command from the ORACLE_HOME/opmn/bin directory:

    opmnctl restartproc process-type=OHS
    
  5. Log in to the SSO server through the following URL:

    http://login.mycompany.com/pls/orasso
    
  6. On the Administration page, select Administer Partner Applications, and then delete the entry for node1.mycompany.com.

10.2.4 Transforming an Oracle Database

In an Oracle Fusion Middleware deployment, the Oracle database plays an important role. In a typical Cold Failover Clusters deployment of Oracle Fusion Middleware, the database is also deployed as a cold failover cluster. This section describes the steps to transform a single-instance Oracle database into a Cold Failover Cluster database. This transformation should be done before seeding the database using RCU and before subsequent Fusion Middleware installations that use this seeded database. To enable the database for Cold Failover Clusters:

  1. Change the listener configuration used by the database instance.

    This requires changes to the listener.ora file. Ensure that the HOST name in the listener configuration has the value of the virtual hostname. In addition, ensure that the listener port is not in use by any other process (Oracle or third party).

    <listener_name>  =
     (DESCRIPTION_LIST =
       (DESCRIPTION =
             (ADDRESS = (PROTOCOL = TCP)(HOST = <virtual hostname>)(PORT = <port>))
       )
    )
    

    For example:

    LISTENER_CFCDB =
     (DESCRIPTION_LIST =
       (DESCRIPTION =
             (ADDRESS = (PROTOCOL = TCP)(HOST = cfcdbhost.mycompany.com)(PORT = 1521))
       )
     )
    
  2. Change the tnsnames.ora file.

    Change an existing tns service alias entry or create a new one:

    <tns alias name> =
     (DESCRIPTION =
       (ADDRESS_LIST =
         (ADDRESS = (PROTOCOL = TCP)(HOST = <virtual hostname>)(PORT = <port>))
       )
       (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = <db service name>)
          (INSTANCE_NAME = <db instance name>)
       )
     )
    
    

    For example:

    CFCDB =
     (DESCRIPTION =
       (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = cfcdbhost.mycompany.com)(PORT = 1521))
       )
       (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = cfcdb)
          (INSTANCE_NAME = cfcdb)
       )
     )
    
    
  3. Change the server parameter file (spfile) to update the local_listener parameter of the instance.

    Log in as SYSDBA using SQL*Plus and run:

    SQL> alter system set local_listener='<tns alias name>' scope=both;
    

    For example:

    SQL> alter system set local_listener='CFCDB' scope=both;
    
  4. Shutdown and restart the listener.

  5. Shutdown and restart the database instance.

  6. Create the database service for the application server.

    Oracle recommends a dedicated service, separate from the default database service, for use with Oracle Application Server. To create this service, execute the following SQL*Plus commands:

    SQL> execute DBMS_SERVICE.CREATE_SERVICE(
       '<cfc db service name>', '<cfc db network name>');
    

    For example:

    SQL> execute DBMS_SERVICE.CREATE_SERVICE(
       'cfcdb_asservice', 'cfcdb_asservice');
     
    SQL> execute DBMS_SERVICE.START_SERVICE('cfcdb_asservice');
    

    Additional parameters for this service may be set depending on the needs of the installation. See the Oracle Database PL/SQL Packages and Types Reference for details about the DBMS_SERVICE package.
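
To verify the transformation, restart the listener and connect through the new service name over the virtual hostname. A sketch, assuming the example names above and SQL*Plus EZConnect syntax:

    lsnrctl stop LISTENER_CFCDB
    lsnrctl start LISTENER_CFCDB
    sqlplus system@//cfcdbhost.mycompany.com:1521/cfcdb_asservice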

10.3 Oracle Fusion Middleware Cold Failover Cluster Example Topologies

This section illustrates some example Cold Failover Clusters topologies. Since there are many possible combinations of topologies, these topologies are illustrative only. To achieve these topologies, more than one of the transformation steps applies. Refer to the steps described earlier to configure the transformation.

10.3.1 Example Topology 1

Figure 10-3 shows an Oracle WebCenter Cold Failover Clusters deployment. Both the administration server and the WebCenter Managed Servers are in the domain and fail over as a unit. Therefore, they share the same virtual IP and are installed together on the same shared disk. There may be an Oracle HTTP Server front ending this topology. It is on a separate node in the example topology. It can also be on the same node, and can be part of the Cold Failover Clusters deployment. In this example, the database is also on a separate node. However, it is equally likely that the database is on the same cluster and is also Cold Failover Clusters-based (using its own virtual IP and shared disk).

Figure 10-3 Cold Failover Cluster Example Topology 1


10.3.2 Example Topology 2

Figure 10-4 shows an example SOA Cold Failover Clusters deployment. In this example, only the SOA instance is deployed as Cold Failover Clusters, and the administration server is on a separate node. The database is also on a separate node in this example topology. Oracle HTTP Server in this case is part of the Cold Failover Clusters deployment, and part of the same failover unit as the SOA Managed Servers. Important variants of this topology include a Cold Failover Clusters administration server on the same hardware cluster. It may share the same virtual IP and shared disk as the SOA Managed Servers (SOA and administration server are part of the same failover unit) or use a separate virtual IP, and shared disk (administration server fails over independently). Similarly, depending on the machine capacity, the database instance can also reside on the same hardware cluster.

Figure 10-4 Cold Failover Cluster Example Topology 2


10.3.3 Example Topology 3

Figure 10-5 shows an Oracle Identity Management deployment. In this example topology, all components are on a two-node hardware cluster. Identity Management fails over as a unit, and both the Java EE (administration server and WLS_ODS Managed Server) and system components are part of the same failover unit. They share the same virtual IP and shared disk (cfcvip1.mycompany.com). The database is also on the same hardware cluster. It uses a different virtual IP (cfcvip2.mycompany.com) and a different set of shared disks. During normal operations, the database runs on Node2 and the IDM stack runs on Node1. The other node acts as a backup for each.

This topology is the recommended topology for most Cold Failover Clusters deployments. The example is for Identity Management, but this is true for the Oracle SOA, Oracle WebCenter, and Oracle Portal, Forms, Reports, and Discoverer suites as well. In this recommended architecture, Oracle Fusion Middleware runs on one node of the hardware cluster. The Oracle database runs on the other node. Each node is a backup for the other. The Oracle Fusion Middleware instance and the database instance fail over independently of each other, using different shared disks and different VIPs. This architecture also ensures that the hardware cluster resources are optimally utilized.

Figure 10-5 Cold Failover Cluster Example Topology 3
