15 Active-Passive Topologies for Oracle Fusion Middleware High Availability

This chapter describes how to configure and manage active-passive topologies.

15.1 Oracle Fusion Middleware Cold Failover Cluster Topology Concepts

Oracle Fusion Middleware provides an active-passive model for all its components using Oracle Fusion Middleware Cold Failover Cluster (Cold Failover Cluster). In a Cold Failover Cluster configuration, two or more managed server instances are configured to serve the same application workload but only one is active at any particular time.

You can use a two-node Cold Failover Cluster to achieve active-passive availability for middle-tier components. In a Cold Failover Cluster, one node is active while the other is passive, on standby. If the active node fails, the standby node activates and the middle-tier components continue servicing clients from that node. All middle-tier components fail over to the new active node. No middle-tier components run on the failed node after the failover.

The most common properties of a Cold Failover Cluster configuration include:

  • Shared Storage: Shared storage is a key property of a Cold Failover Cluster. The passive Oracle Fusion Middleware instance in an active-passive configuration has access to the same Oracle binaries, configuration files, domain directory, and data as the active instance. You configure this access by placing these artifacts on storage that all participating nodes in the Cold Failover Cluster configuration can access. Typically, the shared storage is mounted only on the active node; the passive node mounts it when that node becomes active. The shared storage can be a dual-ported disk device accessible to both nodes, or device-based storage such as a NAS or a SAN. You can install shared storage on a regular file system. With Cold Failover Cluster, you mount the volume on one node at a time.

  • Virtual hostname: In a Cold Failover Cluster solution, two nodes share a virtual hostname and a virtual IP. (The virtual hostname maps to the virtual IP, and the two terms are used interchangeably in this guide.) However, only the active node can use the virtual IP at any one time. When the active node fails and the standby node is made active, the virtual IP moves to the new active node, which then services all requests through it. The virtual hostname provides a single system view of the deployment, and a Cold Failover Cluster deployment is configured to listen on this virtual IP. For example, if the two physical hostnames of the hardware cluster are node1.example.com and node2.example.com, the name cfcvip.example.com provides the single view of this cluster. In the DNS, cfcvip.example.com maps to the virtual IP, which floats between node1 and node2. When a hardware cluster is used, it manages failover of the virtual IP, so middle-tier clients never detect which physical node is active and actually servicing requests.

  • Hardware Cluster: Typically, a Cold Failover Cluster deployment uses a hardware cluster. The hardware cluster addresses the management of shared storage and the virtual IP in its architecture, and plays a role in reliable failover of these shared resources, providing a robust active-passive solution. Most Cold Failover Cluster deployments use hardware clusters that include the following:

    • Two nodes that are in the same subnet.

    • A high-speed, private interconnect between the two nodes.

    • Public network interfaces, on which the client requests are served and the virtual IP is enabled.

    • Shared storage accessible by the two nodes. This includes shared storage that acts as a quorum device and shared storage for Oracle Fusion Middleware and database installs.

    • Clusterware running to manage node and component failures.

  • Planned Switchover and Unplanned Failover: A typical Cold Failover Cluster deployment is a two-node hardware cluster. To maximize utilization, both nodes typically have some elements of the deployment running, with the other node acting as a backup node for the appropriate element if needed. For example, a deployment may have the application tier (WebLogic container) running on one node and the Web tier (Oracle HTTP Server) running on the other node. If either node is brought down for maintenance or if either node fails, the surviving node hosts the services of the down node while continuing to host its current services.

    The high-level steps for a planned switchover to the standby node are as follows:

    1. Stop the middle-tier service on the primary node if the node is still available.

    2. Fail over the virtual IP from the current active node to the passive node: bring it down on the current node, then enable it and bring it up on the passive node.

    3. Fail over the shared disk from the current active node to the passive node. This involves unmounting the shared disk from the current node and mounting it on the passive node.

    4. Start the middle-tier service on the passive node, which becomes active.

    To manage an unplanned failover, you run the same switchover steps manually; both detection of the failure and the failover itself are manual. The sequence is sketched below.
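    The following is a minimal shell sketch of this sequence on Linux. It assumes the virtual IP and netmask used in examples later in this chapter (130.35.46.17 on interface eth0), a placeholder shared device /dev/sdb1 mounted at /u01/app/oracle, and a WebLogic domain under DOMAIN_HOME; adjust all names for your site.

    # On the node being taken out of service (run the IP and mount commands as root):
    DOMAIN_HOME/bin/stopWebLogic.sh                  # stop the middle-tier service
    /sbin/ifconfig eth0:1 down                       # release the virtual IP
    umount /u01/app/oracle                           # release the shared volume

    # On the node becoming active:
    mount /dev/sdb1 /u01/app/oracle                  # acquire the shared volume
    /sbin/ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0    # enable the virtual IP
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17     # refresh neighbor ARP caches
    DOMAIN_HOME/bin/startWebLogic.sh                 # start the middle-tier service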

In active-passive deployments, services are typically down for a short period of time. This is the time taken to either restart the instance on the same node, or to fail over the instance to the passive node.

Active-Passive Topologies: Advantages

  • Increased availability

    If the active instance fails or must be taken offline, an identically configured passive instance is ready to take over at any time, providing a higher level of availability than a normal single-instance deployment. Active-passive deployments also provide availability and protection during planned and unplanned maintenance operations on the hardware. If you must bring down a node for planned maintenance, you can bring up the middleware services on the passive node and switch back to the original node at an appropriate time.

  • Reduced operating costs

    In an active-passive configuration only one set of processes is up and serving requests. Managing the active instance is generally less costly than managing an array of active instances.

  • A Cold Failover Cluster topology is less difficult to implement because it does not require a load balancer, which is required in active-active topologies.

  • A Cold Failover Cluster topology is less difficult to implement than active-active topologies because you are not required to configure options such as load balancing algorithms, clustering, and replication.

  • Active-passive topologies better simulate a one-instance topology than active-active topologies.

  • Application independence

    Some applications may not be suited to an active-active configuration. This may include applications that rely heavily on application state or on information stored locally. Singleton applications are more suitable for active-passive deployments. An active-passive configuration has only one instance serving requests at any particular time.

Active-Passive Topologies: Disadvantages

  • Active-passive topologies do not scale as well as active-active topologies. You cannot add nodes to the topology to increase capacity.

  • State information from HTTP sessions and EJB stateful session beans is not replicated, and is lost when a node terminates unexpectedly. Such state can be persisted to the database or to a file system residing on shared storage; however, this adds overhead that may affect performance of the single-node Cold Failover Cluster deployment.

  • Active-passive deployments have shorter downtime than a single-node deployment; however, downtime is much shorter still in an active-active deployment.

15.2 Configuring Oracle Fusion Middleware for Active-Passive Deployments

Oracle Fusion Middleware comprises both Java EE components, which are deployed to a container, and non-Java EE (system) components. Oracle Internet Directory, Oracle Virtual Directory, and Oracle Reports are system components. Oracle SOA Suite and Oracle WebCenter Portal are Java EE components that are deployed to Oracle WebLogic Server.

Administration Console and Oracle Enterprise Manager Fusion Middleware Control also deploy to the WebLogic container. You can deploy both Java EE and system components to Cold Failover Cluster environments; they can co-exist on the same system or on different systems. When on the same system, you can configure them to fail over as a unit, sharing the same virtual IP, or to fail over independently using separate virtual IPs. In most Oracle Fusion Middleware deployments, a database is used either for the component metadata created using Repository Creation Utility (RCU), or for application data. In many cases, a Cold Failover Cluster middle tier deployment uses a Cold Failover Cluster database, both deployed to the same cluster. The typical deployment has the two components configured as separate failover units using different VIPs and different shared disks on the same hardware cluster.

To create an active-passive topology for Oracle Fusion Middleware:

  1. Install the component as a single-instance configuration. If you plan to transform this instance to a Cold Failover Cluster deployment, install it using a shared disk. That is, place the Middleware home, the Instance home (for system components), and the domain directory (for a WebLogic deployment) on a shared disk. Everything that fails over as a unit should be on a shared disk.

  2. After the installation, transform the deployment into a Cold Failover Cluster deployment and configure it to listen on a virtual IP. The virtual IP is configured on the current active node. It fails over, along with the Oracle Fusion Middleware deployment, to the passive node when a failure occurs.

This general procedure applies to the Cold Failover Cluster Oracle database. For example, the Oracle database instance is installed as a single instance deployment and subsequently transformed for Cold Failover Cluster. A Cold Failover Cluster Oracle Fusion Middleware deployment can also use an Oracle Real Application Clusters (Oracle RAC) database.

The following sections describe the procedures for post-installation configuration to transform a single instance deployment to a Cold Failover Cluster deployment.

The rest of this chapter describes how to transform each individual component in the Oracle Fusion Middleware suite for Cold Failover Cluster. The first section details the procedure for the basic infrastructure components; subsequent sections do so for the individual Oracle Fusion Middleware components. Any given deployment, for example an Oracle instance or domain, typically has more than one of these components on a machine. To transform the entire instance or domain:

  • Decide which components form a unit of failover.

  • Deploy them on the same shared disk.

    Note:

    For details about installing and deploying Oracle Fusion Middleware components, see the installation guide for the specific Fusion Middleware component.

  • Determine a virtual IP to use for this unit of failover. Typically, a single virtual IP is used for all the components, but separate IPs can be used as long as all of them fail over together.

  • Apply the transformation procedure to each of the individual components to transform the deployment as a whole. Since more than one of these sections will apply for Cold Failover Cluster transformation of an installation, the order of transformation should always be as follows:

    • Transform the Administration Server or Enterprise Manager instance (if applicable).

    • Transform all managed servers in the deployment.

    • Transform the Oracle instances (non-Java EE deployments).


15.2.1 Cold Failover Cluster Requirements

A Cold Failover Cluster deployment has at least two nodes. You install on one of these nodes; the other node is the passive node. Requirements for both nodes are as follows:

  • The nodes should be the same in all respects at the operating system level. For example, they should run the same operating system, version, and patch level.

  • The nodes should have similar hardware characteristics. This ensures predictable performance during normal operations and on failover. Oracle suggests designing each node with the capacity to handle both its normal role and the additional load of a failover scenario. This is not required if the service-level agreement indicates that reduced performance is acceptable during outage scenarios.

  • The same mount point should be free on both nodes, so that the shared storage can be mounted at the same location during normal operation and after a failover.

  • The user ID and group ID of the user owning the instance must be the same on both nodes.

  • The oraInventory location is the same on both nodes and is equally accessible to the instance or domain owner. The locations of the oraInst.loc file and the beahomelist file should also be the same.

  • Since a given instance uses the same listen ports regardless of the machine on which it is currently active, ensure that the ports the Cold Failover Cluster instance uses are free on both nodes. A few quick pre-checks are sketched below.
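    As a quick sanity check of these requirements, you can run commands such as the following on both nodes before starting the transformation. This is an illustrative sketch; the user name, path, and port are examples:

    id oracle                        # user and group IDs must match on both nodes
    cat /etc/oraInst.loc             # oraInventory location must be identical
    df -h /u01/app/oracle            # the same mount point must be available on each node
    netstat -an | grep 7001          # listen ports used by the CFC instance must be free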

Note:

Before you start the transformation, back up the entire domain. Oracle also recommends that you create a local backup file before editing the source files. For more information, see the Oracle Fusion Middleware Administrator's Guide. Oracle recommends backing up:

  • All domain directories

  • All Instance homes

  • The database repository (optional)

  • Middleware homes (optional)

15.2.2 Directories and Environment Variables Terminology

The following list describes the directories and variables used in this chapter:

  • ORACLE_BASE: This environment variable and related directory path refers to the base directory under which Oracle products are installed.

  • MW_HOME: This environment variable and related directory path refers to the location where Fusion Middleware (FMW) resides.

  • WL_HOME: This environment variable and related directory path contains installed files necessary to host a WebLogic Server.

  • ORACLE_HOME: This environment variable and related directory path refers to the location where a specific Oracle FMW Suite, such as SOA, is installed.

  • ORACLE_COMMON_HOME: The Oracle home that contains the binary and library files that are common to all the Oracle Fusion Middleware software suites. In particular, the Oracle Common home includes the files required for Oracle Enterprise Manager Fusion Middleware Control (which is used to manage the Fusion Middleware) and the Oracle Java Required Files (JRF).

  • DOMAIN_HOME: This directory path refers to the location where the Oracle WebLogic Domain information (configuration artifacts) is stored.

  • ORACLE_INSTANCE: An Oracle instance contains one or more system components, such as Oracle Web Cache, Oracle HTTP Server, or Oracle Internet Directory. An Oracle instance directory contains updatable files, such as configuration files, log files, and temporary files.

  • /localdisk: The root of the directory tree when the FMW install (either MW_HOME or DOMAIN_HOME) is on a local disk. It is used to represent the MW_HOME on local disk.

  • /shareddisk: Root of the directory tree when the FMW install (either MW_HOME or DOMAIN_HOME) is on a shared storage system that is mounted by any one node of a CFC configuration. It is used to represent the MW_HOME on shared disk.

The example values this guide uses and that Oracle recommends for consistency are:

  • ORACLE_BASE: /u01/app/oracle

  • MW_HOME (Apptier): ORACLE_BASE/product/fmw

  • ORACLE_COMMON_HOME: MW_HOME/oracle_common

  • WL_HOME: MW_HOME/wlserver_10.3
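For example, a shell profile using these recommended values might look like the following sketch; adjust the paths to your installation:

  export ORACLE_BASE=/u01/app/oracle
  export MW_HOME=$ORACLE_BASE/product/fmw
  export ORACLE_COMMON_HOME=$MW_HOME/oracle_common
  export WL_HOME=$MW_HOME/wlserver_10.3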

The following table includes examples of Oracle home, domain home, and domain directory values used for some of the Oracle Fusion Middleware components:

Component             ORACLE_HOME        DOMAIN_HOME     Domain Directory
Identity Management   MW_HOME/idm        IDMDomain       MW_HOME/user_projects/domains/IDMDomain
Oracle SOA            MW_HOME/soa        SOADomain       MW_HOME/user_projects/domains/SOADomain
WebCenter             MW_HOME/wc         WCDomain        MW_HOME/user_projects/domains/WCDomain
WebCenter Content     MW_HOME/wcc        WCCDomain       MW_HOME/user_projects/domains/WCCDomain
Oracle Portal         MW_HOME/portal     PortalDomain    MW_HOME/user_projects/domains/PortalDomain
Oracle Forms          MW_HOME/forms      FormsDomain     MW_HOME/user_projects/domains/FormsDomain
Oracle Reports        MW_HOME/reports    ReportsDomain   MW_HOME/user_projects/domains/ReportsDomain
Oracle Discoverer     MW_HOME/disco      DiscoDomain     MW_HOME/user_projects/domains/DiscoDomain
Web Tier              MW_HOME/web        Not applicable  Not applicable
Directory Tier        MW_HOME/idm        Not applicable  Not applicable


Example location for Applications Directory: ORACLE_BASE/admin/domain_name/apps

Example location for Oracle Instance: ORACLE_BASE/admin/instance_name

Oracle recommends ORACLE_BASE as the shared storage mount point. In most cases, this location helps to ensure that all the persistent bits of a failover unit are on the same shared storage. When more than one Cold Failover Cluster exists on a node, and each one fails over independently, different mount points will exist for each failover unit.
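For example, on Linux the active node alone mounts the failover unit's volume at ORACLE_BASE. This is a sketch; the device name /dev/sdb1 and the NAS export are placeholders for your shared storage:

  # Local shared device:
  mount /dev/sdb1 /u01/app/oracle
  # Or, for a NAS export:
  # mount -t nfs filer:/vol/fmw /u01/app/oracle
  mount | grep /u01/app/oracle     # verify which node currently owns the volume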

15.2.3 Transforming Oracle Fusion Middleware Infrastructure Components

An Oracle Fusion Middleware deployment comprises basic infrastructure components that are common across all the product sets. This section describes Cold Failover Cluster transformation steps for these components.

There are two Administration Server topologies supported for Cold Failover Cluster configuration. The following sections describe these two topologies and provide installation and configuration steps to prepare the Administration Server for Cold Failover Cluster transformation.


15.2.3.1 Administration Server Topology 1

Figure 15-1 shows the first supported topology for Oracle Cold Failover Cluster.

Figure 15-1 Administration Server Cold Failover Cluster Topology 1


In Figure 15-1, the Administration Server runs on a two-node hardware cluster: Node 1 and Node 2. The Administration Server listens on the Virtual IP or hostname. The Middleware Home and the domain directory are on a shared disk that is mounted on either Node 1 or Node 2 at any given point. Both the Middleware home and the domain directory should be on the same shared disk or shared disks that can fail over together. If an enterprise has multiple Fusion Middleware domains for multiple applications or environments, this topology is well suited for Administration Server high availability. You can deploy a single hardware cluster to host these multiple Administration Servers. Each Administration Server can use its own virtual IP and set of shared disks to provide high availability of domain services.

15.2.3.2 Topology 1 Installation Procedure

To install and configure Cold Failover Cluster for the Administration Server in this topology:

Install the Middleware Home

This installation includes the Oracle home, the WebLogic home, and the Domain home on a shared disk. This disk should be mountable by all the nodes that act as a failover destination for the Administration Server. Depending on the storage subsystem used, the shared disk may be mountable on only one node at a time. This is the preferred configuration, even when the storage subsystem enables simultaneous mounts on more than one node. This is done as a regular single-instance installation. See the component chapters to install the Administration Server (and Enterprise Manager) alone. The overall procedure for each suite is as follows:

For Administration Server only:

  1. Install the WebLogic Server software.

    See the Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

  2. Invoke the Configuration Wizard and create a domain with just the Administration Server.

    In the Select Domain Source screen, select the following:

    • Generate a domain configured automatically to support the following products.

    • Select Enterprise Manager and Oracle JRF.

For Oracle SOA, Oracle WebCenter Portal or Oracle WebCenter Content:

  1. Install the WebLogic Server software.

    See the Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

  2. Install the Oracle Home for Oracle SOA, Oracle WebCenter Portal or Oracle WebCenter Content.

    See the Oracle Fusion Middleware Installation Guide for Oracle SOA Suite, the Oracle Fusion Middleware Installation Guide for Oracle WebCenter Portal, or the Oracle WebCenter Content Installation Guide.

For Oracle Identity Management:

  1. Install the WebLogic Server software.

    See the Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

  2. Using Oracle Identity Management 11g Installer, install and configure the IDM Domain using the create domain option. In the Configure Components screen, de-select everything except Enterprise Manager (selected by default).

    See the Oracle Fusion Middleware Installation Guide for Oracle Identity Management.

For Oracle Portal, Forms, Reports and Discoverer:

  1. Install the WebLogic Server software.

    See the Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

  2. Using Oracle Fusion Middleware 11g Portal, Forms, Reports, and Discoverer Installer, install and configure the Classic Domain using the create domain option. In the Configure Components Screen, ensure that Enterprise Manager is selected.

Note:

In this case, at least one more Managed Server for the product components is also installed in this process; you cannot install the Administration Server on its own. You must transform this Managed Server to CFC using the specific procedure for the component. It is part of the same failover unit as the Administration Server.

For Oracle Business Intelligence:

  1. Install the WebLogic Server software.

    See the Oracle Fusion Middleware Installation Guide for Oracle WebLogic Server.

  2. Using Oracle Fusion Middleware 11g Business Intelligence Installer, install and configure the Business Intelligence domain using the create new BI system option.

Note:

In this case, at least one more Managed Server for the product components is also installed in this process; you cannot install the Administration Server on its own. You must transform this Managed Server to CFC using the specific procedure for the component. It is part of the same failover unit as the Administration Server.

Configuring the Administration Server for Cold Failover Cluster

To configure the Administration Server for Cold Failover Cluster:

  1. Provision the Virtual IP using the following commands as root user:

    /sbin/ifconfig interface:index IP_Address netmask netmask
    /sbin/arping -q -U -c 3 -I interface IP_Address
    

    Where IP_Address is the virtual IP address and netmask is the associated netmask. In the following example, the IP address is enabled on interface eth0.

    /sbin/ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17
    
  2. Transform the Administration Server instance to Cold Failover Cluster following the procedure in Section 15.2.3.5, "Transforming the Administration Server for Cold Failover Cluster."

  3. Validate the Administration Server transformation by accessing the consoles on the virtual IP. This virtual IP address is the new address; Oracle recommends that you use only this virtual IP address.

    http://cfcvip.example.com:7001/console

    http://cfcvip.example.com:7001/em

  4. Fail over the Administration Server manually to the second node using the following procedure:

    Note:

    If the Administration Server is managed by a Node Manager, you must enable the Node Manager for Cold Failover Cluster.

    1. Stop the Administration Server process (and any other process running out of a given Middleware Home).

    2. Unmount the shared storage from Node1, where the Middleware Home and domain directory exist.

    3. Mount the shared storage on Node2, using storage-specific commands.

    4. Disable the Virtual IP on Node1 using the following command as root user:

      /sbin/ifconfig interface:index down 
      

      In the following example, the virtual IP address is disabled on interface eth0.

      /sbin/ifconfig eth0:1 down
      
    5. Enable the virtual IP on Node2 using the same commands as in Step 1.

    6. Start the Administration Server process.

      DOMAIN_HOME/bin/startWebLogic.sh
      

      Where DOMAIN_HOME is the location of your domain directory.

    7. Validate access to both the Administration Server and Enterprise Manager console.
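    You can script this validation with simple HTTP checks against the virtual IP, for example (a sketch, assuming the default port 7001):

    # Expect HTTP 200 (or a 302 redirect to the login page) from both consoles:
    curl -s -o /dev/null -w "%{http_code}\n" http://cfcvip.example.com:7001/console
    curl -s -o /dev/null -w "%{http_code}\n" http://cfcvip.example.com:7001/em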

15.2.3.3 Administration Server Topology 2

Figure 15-2 shows the second supported Administration Server topology for Oracle Cold Failover Cluster.

Figure 15-2 Administration Server Cold Failover Cluster Topology 2


In Figure 15-2, the Administration Server runs on a two-node hardware cluster: Node 1 and Node 2. The Administration Server listens on the Virtual IP or hostname. The domain directory used by the Administration Server is on a shared disk. This is mandatory. This shared disk is mounted on Node 1 or Node 2 at any given point. The Middleware Homes, which contain the software (the WebLogic Home and the Oracle Home), need not be on a shared disk; they can be on local disks as well. The Administration Server uses the Middleware Home on Node1 for the software when it runs on Node1, and the Middleware Home on Node2 when it runs on Node2. You must keep the two Middleware Homes identical in terms of deployed products, Oracle Homes, and patches. In both cases, the Administration Server uses the configuration available in the shared Domain Directory/Domain Home. Since this directory is shared, the same configuration is used before and after failover.

This shared domain directory may also have other Managed Servers running. It may also be used exclusively for the Administration Server. If the domain directory is shared with other managed servers, appropriate consideration must be made for their failover when the Administration Server fails over. Some of these considerations are:

  1. If the shared storage can be mounted as read/write on multiple nodes simultaneously, the Administration Server domain directory can be shared with other managed servers, and the Administration Server can fail over independently of the Managed Servers. The Administration Server can fail over while the Managed Servers continue to run independently on their designated nodes. This is possible because the Administration Server in this case requires only failover of the VIP, and does not require failover of the shared disk; the domain directory/domain home remains available to the Managed Servers. Examples of such storage include a NAS, or a SAN/direct-attached storage with a cluster file system.

  2. If only one node can mount the shared storage at a time, sharing the Administration Server domain directory with a Managed Server implies that when the Administration Server fails over, the Managed Server that runs off the same domain directory must be shut down.

In this topology, you can use a hardware cluster to help automate failover (when used with properly configured clusterware). However, it is not required. A single hardware cluster can be deployed to host these multiple Administration Servers. Each Administration Server can use its own virtual IP and set of shared disks to provide domain services high availability.

This topology is supported for Oracle SOA Suite and Oracle WebCenter Portal Suite only.

Note:

For Oracle Identity Management, an alternate topology is also supported for Cold Failover Cluster. See the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management for more details.

15.2.3.4 Topology 2 Installation Procedure

To install and configure Cold Failover Cluster for the Administration Server in this topology:

Install the Middleware Home

Install the Middleware Home, including the Oracle Home and WebLogic Home, separately on the two nodes of the domain. The Administration Server domain directory is created on a shared disk. This disk should be mountable by all the nodes that act as a failover destination for the Administration Server. Depending on the storage subsystem used, the shared disk may be mountable on only one node at a time. This is a regular single-instance installation. Refer to the product suite's installation documentation for details on installing the Administration Server and Enterprise Manager alone.

To install the Middleware Home for Oracle SOA Suite, Oracle WebCenter Portal, or Oracle WebCenter Content:

  1. Install the Oracle WebLogic Server software on Node 1.

  2. Install the Oracle Home for SOA or WebCenter on Node 1.

  3. Repeat steps 1 and 2 on Node 2.

  4. Start the Configuration Wizard on Node 1 and create a domain with just the Administration Server.

    In the Select Domain Source screen, select the following:

    • Generate a domain configured automatically to support the following products.

    • Select Enterprise Manager and Oracle JRF.

  5. In the Specify Domain Name and Location screen, enter the domain name, and make sure the domain directory is located under the shared storage mount point.

Configuring the Middleware Home for Cold Failover Cluster

To configure the Middleware Home for Cold Failover Cluster:

  1. Provision the Virtual IP. For example:

    /sbin/ifconfig eth0:1 IP_Address netmask netmask
    /sbin/arping -q -U -c 3 -I eth0 IP_Address
    

    Where IP_Address is the virtual IP address and netmask is the associated netmask. In the following example, the IP address is enabled on interface eth0.

    /sbin/ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17
    
  2. Transform the Administration Server instance to Cold Failover Cluster using the procedure in Section 15.2.3.5, "Transforming the Administration Server for Cold Failover Cluster."

  3. Validate the Administration Server by accessing the consoles on the virtual IP.

    http://cfcvip.example.com:7001/console

    http://cfcvip.example.com:7001/em

  4. Fail over the Administration Server manually to the second node:

    1. Stop the Administration Server process and any other process running out of a given Middleware Home.

    2. Unmount the shared storage from Node1, where the Middleware Home or domain directory exists.

    3. Mount the shared storage on Node2, using storage-specific commands.

    4. Disable the virtual IP on Node1:

      /sbin/ifconfig interface:index down 
      

      In the following example, the virtual IP address is disabled on interface eth0.

      /sbin/ifconfig eth0:1 down
      
    5. Enable the virtual IP on Node 2.

    6. Start the Administration Server process using the following command:

      DOMAIN_HOME/bin/startWebLogic.sh
      

      Where DOMAIN_HOME is the location of your domain directory.

    7. Validate access to both Administration Server and Enterprise Manager console.

15.2.3.5 Transforming the Administration Server for Cold Failover Cluster

To transform the Administration Server installed on a shared disk from Node 1, follow the steps in this section. These steps transform the container; therefore, both the Administration Console and Oracle Enterprise Manager Fusion Middleware Control are transformed for Cold Failover Cluster. As a result, other components deployed to this container, such as OWSM-PM, become Cold Failover Cluster ready as well. The address for all of these services becomes cfcvip.example.com. To transform a non-Cold Failover Cluster instance to Cold Failover Cluster after installation:

  1. Log into the Administration Console.

  2. Create a machine for the virtual host

    1. Select Environment, and then Machines.

    2. In the Change Center, click Lock & Edit then New.

    3. In the Name field, enter cfcvip.example.com

    4. For the Machine OS field, select the appropriate operating system.

    5. Click Next then Finish.

      Note:

      Keep the Listen Address field set to localhost; the CFC solution relies on this setting. Do not change it to the virtual IP_Address or any other value.

    6. On the Summary of Machines tab, click the machine name just created.

    7. Click the Servers tab then click Add.

    8. In the select server drop down list ensure AdminServer is selected.

    9. Click Finish.

    10. Click Activate Changes.

  3. Configure the Administration Server to listen on cfcvip.example.com:

    1. Select Environment, and then Servers from the Domain Structure menu.

    2. In the Change Center, click Lock & Edit.

    3. Click on the Administration Server (AdminServer)

    4. Change the Listen Address to cfcvip.example.com

    5. Click Save.

    6. Click Activate Changes.

    7. Restart the Administration Server.

      Note:

      Typically, Administration Server transformation to Cold Failover Cluster takes place at domain creation time, and no other changes to other parts of the domain are expected. If this change happens after domain creation and other components are installed in the domain, also follow the client-side configuration steps in the sections that follow. A WLST sketch of the same transformation appears after this note.
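You can also script this transformation with WLST, following the pattern shown in Section 15.2.3.6.2 for managed servers. The following sketch assumes the default server name AdminServer and placeholder credentials; run it while the Administration Server is still reachable on its current address:

WL_HOME/common/bin/wlst.sh <<'EOF'
connect('weblogic', 'welcome1', 't3://node1.example.com:7001')
edit()
startEdit()
create('cfcvip.example.com', 'Machine')
cd('Servers/AdminServer')
set('Machine', 'cfcvip.example.com')
set('ListenAddress', 'cfcvip.example.com')
save()
activate()
exit()
EOF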

Changing Client Side Configuration for Administration Server

Any existing entities in the domain must communicate with the Administration Server using the new address. For example, when you start the Managed Servers manually, the Administration Server address should be specified as cfcvip.example.com.

In the instance.properties file, located in the ORACLE_INSTANCE/config/OPMN/opmn directory, make the following change:

adminHost=cfcvip.example.com

If the Oracle Instance is to be registered or reregistered with a Cold Failover Cluster Administration Server using the OPMN registration commands, the AdminHost location in the opmnctl command should reference the new location of the Administration Server (cfcvip.example.com).
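For example, the following sketch updates the registered Administration Server host for an instance and re-registers a component with it. The component name, type, and port are placeholders, and the opmnctl flags shown should be verified against opmnctl help for your version:

sed -i.bak 's/^adminHost=.*/adminHost=cfcvip.example.com/' \
    ORACLE_INSTANCE/config/OPMN/opmn/instance.properties
ORACLE_INSTANCE/bin/opmnctl updatecomponentregistration \
    -adminHost cfcvip.example.com -adminPort 7001 \
    -componentType OHS -componentName ohs1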

Changing Client Side Configuration for Oracle Enterprise Manager

Since Enterprise Manager is part of the same container in which the Administration Server runs, transforming the Administration Server to Cold Failover Cluster also transforms Enterprise Manager. If existing Enterprise Manager Agents are configured to be part of the domain, these agent configurations must use the new location for Enterprise Manager. To configure the new location, complete the following steps for each agent:

  1. Set the directory to ORACLE_INSTANCE/EMAGENT/emagent_dir/sysman/config.

  2. In the emd.properties file, change node1.example.com to cfcvip.example.com in the following attributes:

    • REPOSITORY_URL

    • emdWalletSrcUrl

  3. Stop and restart the agent using the following commands:

    cd ORACLE_INSTANCE/EMAGENT/emagent_dir/bin 
    ./emctl stop agent 
    ./emctl start agent 
    ./emctl status agent 
    

    The status output shows the Repository URL, which should now point to the new host.

15.2.3.6 Transforming Oracle WebLogic Managed Servers

All Oracle Fusion Middleware components are deployed to a Managed Server. An important step to convert an application or component that is deployed to Oracle WebLogic Server to Cold Failover Cluster is to change its listen address to the virtual IP being used. This change is done for the specific Managed Server to which the component has been deployed. You can make this change using the Administration Console or using WLST commands.

The following example describes the generic steps for Cold Failover Cluster transformation of a Managed Server named WLS_EXMPL. These steps apply to any Managed Server in the Fusion Middleware components.

This section includes the following topics:

15.2.3.6.1 Transforming an Oracle WebLogic Managed Server using the Fusion Middleware Administration Console

In the following procedure, cfcvip.example.com is the virtual IP used for the Cold Failover Cluster and WLS_EXMPL is the managed server to be transformed.

  1. Log into the Administration Console.

  2. Create a machine for the virtual host:

    1. Select Environment > Machines.

    2. In the Change Center, click Lock & Edit then click New.

      Note:

      Create a new machine only if this managed server uses a VIP for which no machine has been created yet. If the machine already exists, skip the machine-creation steps and continue with associating the server with it.

    3. For the Name field, enter cfcvip.example.com

    4. For the Machine OS field, select the appropriate operating system.

    5. Click Next and then click Finish.

      If you are transforming the Node Manager, you must complete the following steps:

    6. Click the newly created machine.

    7. Click Node Manager tab.

    8. Update Listen Address: cfcvip.example.com.

    9. Click Save.

    10. Click Activate Changes.

    11. Complete the steps in Section 15.2.3.7, "Transforming Node Manager."

  3. Stop the WLS_EXMPL Managed server:

    1. Choose Environment > Servers.

    2. Click Control.

    3. Select WLS_EXMPL.

    4. Select Force Shutdown Now in the Shutdown drop-down menu.

  4. Associate the WLS_EXMPL Managed Server with the VirtualHost Machine:

    1. Choose Environment > Servers.

    2. In the Change Center, click Lock & Edit.

    3. Click Configuration.

    4. Select WLS_EXMPL.

    5. For Machine, select the newly created machine from the pull-down menu.

    6. For Listen Address, enter cfcvip.example.com.

    7. Click Save.

    8. Click Activate Changes.

  5. Start the WLS_EXMPL Managed Server.

Note:

There are several ways to start and stop a server instance. Use the method you have used previously to start and stop server instances.

15.2.3.6.2 Transforming an Oracle WebLogic Managed Server using the WLST Command Line

You can transform an Oracle WebLogic managed server using WLST commands as well.

Oracle recommends shutting down the managed server you are transforming before performing these steps.

To transform a Managed Server using the WLST command line in online mode (with the WebLogic Server Administration Server up):

  1. In the command line, enter:

    WL_HOME/server/bin/setWLSEnv.sh
    WL_HOME/common/bin/wlst.sh
    
  2. In WLST, enter the following commands:

    wls:/offline>connect(<username>,<password>,<AdminServer location>)
    

    For example:

    wls:/offline>connect('WebLogic', 'welcome1', 't3://cfcvip.example.com:7001')
    
    wls:/DomainName/serverConfig> edit()
    wls:/DomainName/edit> startEdit()
    wls:/DomainName/edit !> create('cfcvip.example.com','Machine')
    wls:/DomainName/edit !> cd('Machines/cfcvip.example.com/NodeManager/cfcvip.example.com')
    wls:/DomainName/edit !> set('ListenAddress','cfcvip.example.com')
    wls:/DomainName/edit !> cd('Servers')
    wls:/DomainName/edit/Servers !> cd('WLS_EXMPL')
    wls:/DomainName/edit/Servers/WLS_EXMPL !> set('Machine','cfcvip.example.com')
    wls:/DomainName/edit/Servers/WLS_EXMPL !> set('ListenAddress','cfcvip.example.com')
    wls:/DomainName/edit/Servers/WLS_EXMPL !> save()
    wls:/DomainName/edit/Servers/WLS_EXMPL !> activate()
    wls:/DomainName/edit/Servers/WLS_EXMPL> exit()
    
  3. Stop the Managed Server (if it is not already down) and start it again.

After the Managed Server transformation completes, verify that all references to it use the new listen address cfcvip.example.com. If Oracle HTTP Server serves as a front end to this Managed Server, change any mod_wl_ohs configuration with mount points referring to applications in this Managed Server to route to the new listen endpoint, as sketched below.
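For example, the following sketch locates and rewrites stale routing entries in a mod_wl_ohs configuration and then restarts Oracle HTTP Server. The component name ohs1 and the old address node1.example.com are placeholders:

OHS_CONF=ORACLE_INSTANCE/config/OHS/ohs1/mod_wl_ohs.conf
grep -n "node1.example.com" "$OHS_CONF"      # find stale WebLogicHost entries
sed -i.bak 's/node1\.example\.com/cfcvip.example.com/g' "$OHS_CONF"
ORACLE_INSTANCE/bin/opmnctl restartproc process-type=OHS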

Note:

When transforming an Oracle FMW SOA server with existing or deployed composites, the old server's listen address may still appear in a few locations. When the server is front-ended by OHS or a load balancer (LBR), those references remain valid because they do not reflect the server's listen address. Check the following locations; a search sketch follows this list.

  • Deployed composites may include specific endpoint references. For example, they can use callbackServerURL specified as a binding property for the specific reference. These references must be updated to the new server's VIP.

  • Composites rely on the SOA properties specified for the soa-infra application in Enterprise Manager: ServerURL and CallbackURL must be updated if they were modified from the default null values.

  • The system uses the front-end address set at the server level, so it must be updated to reflect the new VIP.
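A simple way to locate leftover references is to search the domain configuration for the old listen address (a sketch; the old hostname is an example):

grep -rn "node1.example.com" DOMAIN_HOME/config 2>/dev/null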

15.2.3.7 Transforming Node Manager

You can use Node Manager in a Cold Failover Cluster environment in two ways. In the first, Node Manager does not fail over with the rest of the Cold Failover Cluster stack. In this case, Node Manager is not configured for Cold Failover Cluster and listens on all IPs on the machine, not specifically on the virtual IP for Cold Failover Cluster. The failover node also has a similarly configured Node Manager available. The WebLogic instance communicates with the Node Manager on localhost. For more details, see the Oracle Fusion Middleware Node Manager Administrator's Guide for Oracle WebLogic Server. Alternatively, you can configure Node Manager itself for Cold Failover Cluster, as described below.

For Cold Failover Cluster in general, port usage should be planned so that there are no port conflicts when failover occurs.

To convert the Node Manager to Cold Failover Cluster:

  1. If Node Manager is running, stop it.

    The nodemanager.properties file is created only after the first start of Node Manager. If the file does not exist yet, start Node Manager once and then stop it.

  2. In the nodemanager.properties file located in the WL_HOME/common/nodemanager/ directory, set the ListenAddress to the virtual IP.

    For example:

    ListenAddress=cfcvip.example.com
    
  3. Restart the Node Manager using the startNodeManager.sh file, located in the WL_HOME/server/bin directory.

    Note:

    For WebLogic Managed Servers and Administration Servers, hostname verification may be enabled or disabled in a given installation. For a CFC installation where hostname verification is enabled and Node Manager manages these instances, hostname verification should use certificates for the virtual hostname cfcvip.example.com.

15.2.3.8 Transforming Oracle Process Management and Notification Server

Oracle Process Management and Notification Server (OPMN) is used for process management of system components and is part of the Oracle instance.

Oracle recommends keeping the default OPMN configuration in a Cold Failover Cluster environment. No further steps are necessary for Cold Failover Cluster transformation of the OPMN process itself.

If you are transforming an Oracle Instance for Cold Failover Cluster and it is registered with an Administration Server, make the following changes in these files:

  1. In the topology.xml file in the DOMAIN_HOME/opmn directory of the Administration Server domain, change hostname entries for this specific Oracle instance (being transformed to Cold Failover Cluster) to cfcvip.example.com.

    For example, for an Oracle HTTP Server instance transformed to Cold Failover Cluster, set the following in the topology.xml file

    <property name="HTTPMachine" value="cfcvip.example.com"/>
    

    For the instance itself:

    <ias-instance id="asinst" instance-home="/11gas3/MW/asinst" host="cfcvip.example.com" port="6701">
    
  2. In the instance.properties file in the ORACLE_INSTANCE/config/OPMN/opmn directory, change adminHost=physical_hostname to adminHost=cfcvip.example.com.

  3. Restart all OPMN components.
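The following sketch applies both edits with sed and then restarts the OPMN-managed processes. Back up both files first; the hostnames are examples:

sed -i.bak 's/node1\.example\.com/cfcvip.example.com/g' \
    DOMAIN_HOME/opmn/topology.xml
sed -i.bak 's/^adminHost=.*/adminHost=cfcvip.example.com/' \
    ORACLE_INSTANCE/config/OPMN/opmn/instance.properties
ORACLE_INSTANCE/bin/opmnctl stopall
ORACLE_INSTANCE/bin/opmnctl startall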

15.2.3.9 Transforming Oracle Enterprise Manager for an Oracle Instance

When an Oracle instance is transformed to Cold Failover Cluster, you must also transform the Enterprise Manager agent that is part of this Oracle instance. This topic describes how to transform the agent and the server.

To transform the Enterprise Manager agent:

  1. Stop the Enterprise Manager agent using the following command:

    cd ORACLE_INSTANCE/EMAGENT/emagent_dir/bin
    ./emctl stop agent 
    
  2. Set the directory to ORACLE_INSTANCE/EMAGENT/emagent_dir/sysman/config.

  3. Replace the physical hostname with the virtual hostname in the emoms.properties managed bean, for example: emoms.props:Location=AdminServer,name=emoms.properties,type=Properties,Application=em

    To do this for each attribute:

    1. Log in to Enterprise Manager at http://cfcvip.example.com:7001/em

      Note:

      This step assumes that you transformed the Administration Server listen address to listen on the virtual host name cfcvip.example.com. See Section 15.2.3.5, "Transforming the Administration Server for Cold Failover Cluster" for more information.

    2. Expand the WebLogic domain.

    3. Right click on the domain name and select System MBean browser.

    4. Select the search icon (binoculars). Enter emoms.props:Location to bring up the managed bean.

    5. Select the Attributes tab.

    6. Select Properties. The host name is in the Element listings. For example:

      +Element_ElementNumber
       key example.sysman.emSDK.svlt.ConsoleServerName 
      value hostname.example.com:7001_Management_Service
      
       +Element_ElementNumber 
       key example.sysman.emSDK.svlt.ConsoleServerHost 
       value hostname.example.com 
      
    7. Select the Operations tab then the setProperty link.

    8. Enter the key name and new value. Select Invoke.

  4. In the emd.properties file, change node1.example.com to cfcvip.example.com for the EMD_URL attribute.

  5. Change the targets.xml file on the agent side:

    cd ORACLE_INSTANCE/EMAGENT/emagent_dir/sysman/emd
    cp targets.xml targets.xml.org
    

    Modify targets.xml so that it has only targets related to the host and oracle_emd. Remove all other entries. For example:

    <Targets AGENT_TOKEN="ad4e5899e7341bfe8c36ac4459a4d569ddbf03bc">
           <Target TYPE="oracle_emd" NAME="cfcvip.example.com:port"/>
           <Target TYPE="host" NAME="cfcvip.example.com" DISPLAY_NAME="cfcvip.example.com"/>
    </Targets>
    
  6. Restart the agent:

    cd ORACLE_INSTANCE/EMAGENT/emagent_dir/bin 
    ./emctl start agent
    

To transform the Enterprise Manager server, make the following changes in the Administration Server domain directory:

Stop the Administration Server before making any changes.

  1. Set your directory to MW_HOME/user_projects/domains/domain_name/sysman/state.

  2. In the targets.xml file, located in MW_HOME/user_projects/domains/domain_name/sysman/state directory, modify the hostname from node1.example.com to cfcvip.example.com.

  3. Restart the Administration Server.

15.2.3.10 Transforming Web Tier Components and Clients

The Web tier is made up of two primary components, Oracle HTTP Server and Oracle Web Cache. The next two sections describe how to transform Oracle HTTP Server and Oracle Web Cache for Cold Failover Cluster.

15.2.3.10.1 Transforming Oracle HTTP Server

To transform Oracle HTTP Server for Cold Failover Cluster:

  1. In ORACLE_INSTANCE/config/OHS/component_name/httpd.conf, change the following attributes:

    Listen cfcvip.example.com:port #OHS_LISTEN_PORT
    Listen cfcvip.example.com:port #OHS_PROXY_PORT
    ServerName cfcvip.example.com
    
  2. In ORACLE_INSTANCE/config/OHS/component_name/admin.conf, change the following attributes:

    Listen cfcvip.example.com:port #OHS_LISTEN_PORT
    Listen cfcvip.example.com:port #OHS_ADMINISTRATOR_PORT
    ServerName cfcvip.example.com
    
  3. Restart Oracle HTTP Server:

    cd ORACLE_INSTANCE/bin
    ./opmnctl restartproc process-type=OHS
    

Also, perform a single sign-on registration, as described in Section 15.2.4.7, "Single Sign-On Reregistration (If required)."

Clients of Oracle HTTP Server

If an Oracle Web Cache instance routes to an Oracle HTTP Server that has been transformed to Cold Failover Cluster, change the following attributes in ORACLE_INSTANCE/config/WebCache/component_name/webcache.xml. Change node1.example.com, the address of the Oracle HTTP Server before transformation, to cfcvip.example.com:

<HOST ID="h1" NAME="cfcvip.example.com" PORT="8888" LOADLIMIT="100"
 OSSTATE="ON"/>
<HOST ID="h2" NAME="cfcvip.example.com" PORT="8890" LOADLIMIT="100" OSSTATE="ON"
 SSLENABLED="SSL"/>
15.2.3.10.2 Transforming Oracle Web Cache

To transform an Oracle Web Cache for Cold Failover Cluster:

  1. Set up an alias to the physical hostname on both nodes of the cluster in /etc/hosts.

    The alias maps to the IP address of the node; in this example, the alias name is wcprfx.example.com. On node Node1, the /etc/hosts entry would be:

    n.n.n.n node1 node1.example.com wcprfx wcprfx.example.com

    On the failover node Node2, the /etc/hosts entry would be:

    n.n.n.m node2 node2.example.com wcprfx wcprfx.example.com

  2. In ORACLE_INSTANCE/config/WebCache/component_name/webcache.xml:

    • Change node1.example.com to cfcvip.example.com, where node1.example.com is the host address Oracle Web Cache was installed on and listening on before transformation:

      SITE NAME="cfcvip.example.com"
      
    • Change the Virtual Host Name entries to be cfcvip.example.com for the SSL and non-SSL ports. For example:

      <HOST SSLENABLED="NONE" ISPROXY="NO" OSSTATE="ON" NUMRETRY="5"
      PINGINTERVAL="10" PINGURL="/" LOADLIMIT="100" PORT="8888"
      NAME="cfcvip.example.com" ID="h0"/>
      <HOST SSLENABLED="SSL" ISPROXY="NO" OSSTATE="ON" NUMRETRY="5"
      PINGINTERVAL="10" PINGURL="/" LOADLIMIT="100" PORT="8890"
      NAME="cfcvip.example.com" ID="h3"/>
      <VIRTUALHOSTMAP PORT="8094" NAME="cfcvip.example.com">
         <HOSTREF HOSTID="h3"/>
      </VIRTUALHOSTMAP>
      <VIRTUALHOSTMAP PORT="8090" NAME="cfcvip.example.com">
         <HOSTREF HOSTID="h0"/>
      </VIRTUALHOSTMAP> 
      
    • Change cache name entries to be based on wcprfx.example.com, where wcprfx.example.com is the alias created in /etc/hosts on all nodes of the cluster. For example:

      <CACHE WCDEBUGON="NO" CAPACITY="30" VOTES="1" INSTANCENAME="asinst_1"
      COMPONENTNAME="wc1" ORACLEINSTANCE="ORACLE_INSTANCE"
      HOSTNAME="wcprfx.example.com" ORACLEHOME="ORACLE_HOME"
      NAME="wcprfx.example.com-WebCache">
      
    • In the MULTIPORT section, change IPADDR from ANY to cfcvip.example.com for the following:

      PORTTYPE="NORM"
      SSLENABLED="SSL" PORTTYPE="NORM"
      PORTTYPE="ADMINISTRATION"
      PORTTYPE="INVALIDATION"
      PORTTYPE="STATISTICS"
      

      For example:

      <MULTIPORT>
         <LISTEN PORTTYPE="NORM" PORT="8090"
      IPADDR="cfcvip.example.com"/>
         <LISTEN SSLENABLED="SSL" PORTTYPE="NORM" PORT="8094"
      IPADDR="cfcvip.example.com">
             <WALLET>ORACLE_INSTANCE/config/WebCache/wc1/keystores/
      default</WALLET>
         </LISTEN>
         <LISTEN PORTTYPE="ADMINISTRATION" PORT="8091"
      IPADDR="cfcvip.example.com"/>
         <LISTEN PORTTYPE="INVALIDATION" PORT="8093" IPADDR="
      cfcvip.example.com"/>
         <LISTEN PORTTYPE="STATISTICS" PORT="8092"
      IPADDR="cfcvip.example.com"/>
      </MULTIPORT>
      
  3. Restart Oracle Web Cache:

    cd ORACLE_INSTANCE/bin
    ./opmnctl restartproc process-type=WebCache
    

15.2.4 Transforming Oracle Fusion Middleware Components


For a detailed explanation of the product components, see the appropriate component chapter in this guide.

15.2.4.1 Transforming Oracle Virtual Directory and Its Clients

This section describes how to transform Oracle Virtual Directory and its clients. It includes the following topics:

15.2.4.1.1 Transforming Oracle Virtual Directory

To transform an Oracle Virtual Directory server:

  1. In a text editor, open the listeners.os_xml file in the ORACLE_INSTANCE/config/OVD/componentname directory.

  2. Enter the following value to set the LDAP address to the virtual IP:

    <host>cfcvip.example.com</host>
    
  3. Restart the Oracle Virtual Directory server using opmnctl.

    For example:

    ORACLE_INSTANCE/bin/opmnctl stopproc ias-component=ovd1
    ORACLE_INSTANCE/bin/opmnctl startproc ias-component=ovd1
    
15.2.4.1.2 Generating a New Key for the Keystore

The administrator should generate a new key pair for the virtual hostname (self-signed or signed by a CA) and associate the certificate with the OVD listener configuration. The EM Agent must also trust this certificate: import it as a trusted certificate into the EM Agent's cwallet.sso. A keytool sketch follows.
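For example, the following minimal keytool sketch generates a self-signed key pair for the virtual hostname in a JKS keystore; the alias, key size, validity, file name, and password are placeholders:

ORACLE_HOME/jdk/jre/bin/keytool -genkeypair -keyalg RSA -keysize 2048 \
-alias ovd_cfc -dname "CN=cfcvip.example.com" -validity 365 \
-keystore OVD_KEYSTORE_FILE -storepass PASSWORD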

For more information, see Section 8.3.5.2, "Generating a New Key for the Keystore Using WLST".

If you select a different keystore or change the certificate in the keystore for the Admin Gateway Listener or the LDAP SSL Endpoint Listener, you must import the certificate into the Oracle Enterprise Manager Fusion Middleware Control Agent's wallet. If you do not import the certificate, Oracle Enterprise Manager Fusion Middleware Control cannot connect to Oracle Virtual Directory to retrieve performance metrics.

To import the certificate into the Oracle Enterprise Manager Fusion Middleware Control Agent's wallet:

  1. Export the Oracle Virtual Directory server certificate by executing the following command:

    ORACLE_HOME/jdk/jre/bin/keytool -exportcert \
    -keystore OVD_KEYSTORE_FILE -storepass PASSWORD \
    -alias OVD_SERVER_CERT_ALIAS -rfc \
    -file OVD_SERVER_CERT_FILE
    
  2. Add the Oracle Virtual Directory server certificate to the Oracle Enterprise Manager Fusion Middleware Control Agent's Wallet by executing the following command:

    ORACLE_COMMON_HOME/bin/orapki wallet add -wallet \
    $ORACLE_INSTANCE/EMAGENT/EMAGENT/sysman/config/monwallet \
    -trusted_cert -cert OVD_SERVER_CERT_FILE -pwd WALLET_PASSWORD
    
15.2.4.1.3 Transforming Oracle Virtual Directory Clients

All clients of Oracle Virtual Directory must use the virtual IP cfcvip.example.com to access Oracle Virtual Directory. For example, when using Oracle Directory Services Manager to administer a Cold Failover Cluster Oracle Virtual Directory instance, create a connection using cfcvip.example.com as the location of the Oracle Virtual Directory instance.

15.2.4.2 Transforming Oracle Directory Integration Platform and Oracle Directory Services Manager and Their Clients

This section describes how to transform Oracle Directory Integration Platform, Oracle Directory Services Manager, and their clients.

15.2.4.2.1 Transforming Oracle Directory Integration Platform and Oracle Directory Services Manager

Oracle Directory Integration Platform and Oracle Directory Services Manager are deployed to a Managed Server. The procedure for CFC transformation is to configure the Managed Server to which they are deployed to listen on the cfcvip.example.com virtual IP. Follow the steps in Section 15.2.3.6, "Transforming Oracle WebLogic Managed Servers" to configure the WLS_ODS managed server to listen on the cfcvip.example.com virtual IP.

15.2.4.2.2 Transforming Oracle Directory Integration Platform and Oracle Directory Services Manager Clients

Follow these steps to transform Oracle Directory Integration Platform and Oracle Directory Services Manager clients:

  1. Clients of Oracle Directory Integration Platform and Oracle Directory Services Manager must use the virtual IP cfcvip.example.com to access these applications.

  2. When Oracle HTTP Server is the front end for Oracle Directory Services Manager, the WebLogic configuration for Oracle Directory Services Manager must specify the virtual IP cfcvip.example.com as the address for the WLS_ODS Managed Server. To do this, change the WebLogic host configuration in the webserver proxy plugin configuration files for the mount points used by Oracle HTTP Server and Oracle Directory Services Manager. For example, use a text editor to make the following edits in the mod_wl_ohs.conf file:

    #Oracle Directory Services Manager
    <Location /odsm>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.example.com
        WebLogicPort port
    </Location>
    

15.2.4.3 Transforming Oracle Identity Federation and Its Client

This section describes how to transform Oracle Identity Federation and its clients.

15.2.4.3.1 Transforming Oracle Identity Federation

Oracle Identity Federation is a component that is deployed to a Managed Server. The procedure for Cold Failover Cluster transformation is to configure the Managed Server to which it is deployed to listen on the cfcvip.example.com virtual IP. Follow the steps in Section 15.2.3.6, "Transforming Oracle WebLogic Managed Servers" to configure the WLS_OIF Managed Server to listen on the cfcvip.example.com virtual IP. Since Oracle Identity Federation Cold Failover Cluster deployments are likely to be split into Service Provider and Identity Provider, more than one instance of WLS_OIF is likely to exist in a given deployment. Use the same Cold Failover Cluster procedure for both WLS_OIF instances.

After configuring the Managed Server to listen on the cfcvip.example.com virtual IP, log into the Oracle Enterprise Manager Fusion Middleware Control and perform these steps:

  1. Go to Farm > Identity and Access > OIF.

  2. In the right frame, go to Oracle Identity Federation > Administration and then make these changes:

    1. Server Properties: change the host to cfcvip.example.com

    2. Identity Provider > Common: change the providerId to cfcvip.example.com

    3. Service Provider > Common: change the providerId to cfcvip.example.com

    4. Data Stores: If LDAP is the data store, then replace the value of Connection URL for User Data Store and Federation Data Store with cfcvip.example.com

    5. Authentication Engines > LDAP Directory: Set the ConnectionURL to cfcvip.example.com

  3. Restart the Managed Server so that the metadata is regenerated.
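
    After the restart, you can confirm that the regenerated metadata references the virtual hostname. A sketch using curl, assuming the standard /fed/idp/metadata endpoint and a hypothetical port of 7499:

    curl http://cfcvip.example.com:7499/fed/idp/metadata | grep cfcvip.example.com
    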

15.2.4.3.2 Transforming Oracle Identity Federation Clients

Follow these steps to transform Oracle Identity Federation clients:

  1. Clients of Oracle Identity Federation must use the virtual IP cfcvip.example.com to access these applications.

  2. When Oracle HTTP Server is the front end for Oracle Identity Federation, the WebLogic configuration for Oracle Identity Federation must specify the virtual IP cfcvip.example.com as the address for the WLS_OIF Managed Server. To do this, change the WebLogic host configuration in the webserver proxy plugin configuration files for the mount points used by Oracle HTTP Server and Oracle Identity Federation. For example, use a text editor to make the following edits in the oif.conf file:

    #Oracle Identity Federation
    <Location /oif>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.example.com
        WebLogicPort port
    </Location>
    

15.2.4.4 Transforming Oracle Access Manager and Its Clients

This section describes how to transform Oracle Access Manager to work in a Cold Failover Cluster environment.

As with other managed servers created using the Configuration Wizard, a Cold Failover Cluster setup that requires the managed server to listen on a virtual IP can also be achieved during initial creation by specifying the virtual hostname (cfcvip.example.com) as the listen address of the managed server. In that case, the explicit transformation step is not needed.

15.2.4.4.1 Transforming Oracle Access Manager

Oracle Access Manager is deployed to a managed server (for example, WLS_OAM1) and the procedure for CFC transformation is to configure this managed server to listen on the virtual IP cfcvip.example.com. Follow the steps in Section 15.2.3.6, "Transforming Oracle WebLogic Managed Servers" to configure the WLS_OAM1 managed server to listen on the cfcvip.example.com virtual IP. All other requirements related to the placement of the Middleware Home and other related domain artifacts on a shared storage that can be failed over apply, as described in Section 15.2.1, "Cold Failover Cluster Requirements."

The Oracle Access Manager console is deployed as part of the Administration Server for the domain. For Cold Failover Cluster configuration of this console, the whole Administration Server must be configured as active-passive. The Administration Server can share the same virtual IP cfcvip.example.com, or you can configure it to fail over independently using a separate virtual IP and shared disk. If they share the same virtual IP, the Administration Server and the Oracle Access Manager managed server fail over as a unit.

15.2.4.4.2 Transforming Oracle Access Manager Clients

Follow these steps to transform Oracle Access Manager clients:

  1. Clients of Oracle Access Manager must use the virtual IP cfcvip.example.com to access these applications. Any wiring done with other components such as Oracle Identity Manager and Oracle Adaptive Access Manager should also use the virtual IP cfcvip.example.com to access the applications.

  2. When Oracle HTTP Server is the front end for Oracle Access Manager, the mod_wl_ohs configuration for Oracle Access Manager must specify the virtual IP cfcvip.example.com as the address for the Oracle Access Manager managed server. To do this, change the WebLogic host configuration in the webserver proxy plugin configuration files for the mount points used. For example, use a text editor to make the following edits in the mod_wl_ohs.conf file:

    #Oracle Access Manager
    <Location /oam>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.example.com
        WebLogicPort port
    </Location>
    
  3. When the Oracle Access Manager console, which is deployed to the Administration Server, is also configured active-passive and the Administration Server is front-ended by Oracle HTTP Server, you must change the WebLogic host configuration in the webserver proxy plugin configuration files. For example, use a text editor to make the following edits in the mod_wl_ohs.conf file:

    #Oracle Access Manager Admin Console deployed to the Admin Server
    <Location /oamconsole>
        SetHandler weblogic-handler
        WebLogicHost ADMINVHN
        WebLogicPort 7001
    </Location>
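
    After editing mod_wl_ohs.conf, restart Oracle HTTP Server so that the proxy changes take effect. For example, from the ORACLE_INSTANCE/bin directory:

    ./opmnctl restartproc process-type=OHS
    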
    

15.2.4.5 Transforming Oracle Adaptive Access Manager and Its Clients

This section describes how to transform Oracle Adaptive Access Manager to work in a Cold Failover Cluster environment.

As with other managed servers created using the Configuration Wizard, a Cold Failover Cluster setup that requires the managed server to listen on a virtual IP can also be achieved during initial creation by specifying the virtual hostname (cfcvip.example.com) as the listen address of the managed server. In that case, the explicit transformation step is not needed.

15.2.4.5.1 Transforming Oracle Adaptive Access Manager

Oracle Adaptive Access Manager is deployed to two managed servers (the OAAM Admin managed server and the OAAM Server managed server), both of which fail over as a single unit and therefore share the same virtual IP and the same shared storage. To convert them for Cold Failover Cluster, configure both managed servers to listen on the virtual IP cfcvip.example.com. Follow the steps in Section 15.2.3.6, "Transforming Oracle WebLogic Managed Servers" to configure these managed servers to listen on the cfcvip.example.com virtual IP. All other requirements related to the placement of the Middleware Home and other related domain artifacts on a shared storage that can be failed over apply, as described in Section 15.2.1, "Cold Failover Cluster Requirements."

15.2.4.5.2 Transforming Oracle Adaptive Access Manager Clients

Follow these steps to transform Oracle Adaptive Access Manager clients:

  1. Clients of Oracle Adaptive Access Manager must use the virtual IP cfcvip.example.com to access these applications. Any wiring done with other components such as Oracle Access Manager and Oracle Identity Manager should also use the virtual IP cfcvip.example.com to access the applications.

  2. When Oracle HTTP Server is the front end for Oracle Adaptive Access Manager, the mod_wl_ohs configuration for Oracle Adaptive Access Manager must specify the virtual IP cfcvip.example.com as the address for the Oracle Adaptive Access Manager managed servers. To do this, change the WebLogic host configuration in the webserver proxy plugin configuration files for the mount points used. For example, use a text editor to make the following edits in the mod_wl_ohs.conf file:

    #Oracle Adaptive Access Manager
    <Location /oaam_server>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.example.com
        WebLogicPort port
    </Location>
    
    #Oracle Adaptive Access Manager Admin Console
    <Location /oaam_admin>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.example.com
        WebLogicPort port
    </Location>
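
    Once Oracle HTTP Server is restarted, a simple header check through the virtual hostname verifies the proxy wiring. A sketch, assuming OHS listens on a hypothetical port of 7777:

    curl -I http://cfcvip.example.com:7777/oaam_server/
    curl -I http://cfcvip.example.com:7777/oaam_admin/
    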
    

15.2.4.6 Transforming Oracle Identity Manager and Its Clients

This section describes how to transform Oracle Identity Manager to work in a Cold Failover Cluster environment.

As with other managed servers created using the Configuration Wizard, a Cold Failover Cluster setup that requires the managed server to listen on a virtual IP can also be achieved during initial creation by specifying the virtual hostname (cfcvip.example.com) as the listen address of the managed server. In that case, the explicit transformation step is not needed.

15.2.4.6.1 Transforming Oracle Identity Manager

Oracle Identity Manager is deployed to managed servers. A typical Cold Failover Cluster deployment has the Oracle Identity Manager managed server and another managed server, hosting Oracle SOA and Oracle Web Services Manager, fail over as a single unit and therefore share the same virtual IP and the same shared storage. To convert both managed servers for Cold Failover Cluster, configure them to listen on the virtual IP cfcvip.example.com. Follow the steps in Section 15.2.3.6, "Transforming Oracle WebLogic Managed Servers" to configure these managed servers to listen on the cfcvip.example.com virtual IP. All other requirements related to the placement of the Middleware Home and other related domain artifacts on a shared storage that can be failed over apply, as described in Section 15.2.1, "Cold Failover Cluster Requirements."

15.2.4.6.2 Transforming Oracle Identity Manager Clients

Follow these steps to transform Oracle Identity Manager clients:

  1. Clients of Oracle Identity Manager must use the virtual IP cfcvip.example.com to access these applications. Any wiring done with other components such as Oracle Access Manager and Oracle Adaptive Access Manager should also use the virtual IP cfcvip.example.com to access the applications. Since SOA is also configured to fail over with Oracle Identity Manager, the wiring from Oracle Identity Manager to SOA should also use the common virtual IP.

  2. When Oracle HTTP Server is the front end for Oracle Identity Manager, the mod_wl_ohs configuration for Oracle Identity Manager must specify the virtual IP cfcvip.example.com as the address for the managed servers. To do this, change the WebLogic host configuration in the webserver proxy plugin configuration files for the mount points used. For example, use a text editor to make the following edits in the mod_wl_ohs.conf file or oim.conf file:

    #Oracle Identity Manager
    <Location /oim>
        SetHandler weblogic-handler
        WebLogicHost cfcvip.example.com
        WebLogicPort port
    </Location>
    

    Refer to Oracle Fusion Middleware System Administrator's Guide for Oracle Identity Manager for more information about making configuration changes in Oracle Identity Manager.

15.2.4.7 Single Sign-On Reregistration (If required)

Single Sign-On (SSO) reregistration typically applies only to Oracle Portal, Forms, Reports, and Discoverer. After the front end listening endpoint on Oracle HTTP Server for this tier changes to the Virtual IP, it becomes necessary to do SSO reregistration so that the URL to be protected is configured with the virtual IP.

Note:

For 11g, you must create a WebGate using the new VIP instead of reregistering.

To reregister SSO, perform these steps on the 10.1.x installation of Identity Management where the SSO server resides:

  1. Set the ORACLE_HOME variable to the SSO ORACLE_HOME location.

  2. Run ORACLE_HOME/sso/bin/ssoreg.sh with the following parameters:

    -site_name cfcvip.example.com:port
    -mod_osso_url http://cfcvip.example.com 
    -config_mod_osso TRUE
    -oracle_home_path ORACLE_HOME
    -config_file /tmp/osso.conf
    -admin_info cn=orcladmin
    -virtualhost
    -remote_midtier
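
    For example, a complete invocation with a hypothetical Oracle HTTP Server port of 7777 and Oracle home path might look like this:

    $ORACLE_HOME/sso/bin/ssoreg.sh -site_name cfcvip.example.com:7777 \
    -mod_osso_url http://cfcvip.example.com \
    -config_mod_osso TRUE \
    -oracle_home_path /u01/app/oracle/product/im_10g \
    -config_file /tmp/osso.conf \
    -admin_info cn=orcladmin \
    -virtualhost \
    -remote_midtier
    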
    
  3. Copy the /tmp/osso.conf file to the mid-tier home location:

    ORACLE_INSTANCE/config/OHS/ohs1

  4. Restart Oracle HTTP Server by running the following command from the ORACLE_INSTANCE/bin directory:

    ./opmnctl restartproc process-type=OHS
    
  5. Log into the SSO server through the following URL:

    http://sso.example.com/pls/orasso
    
  6. On the Administration page, select Administer Partner Applications, and then delete the entry for node1.example.com.

15.2.5 Additional Actions for Fusion Middleware Failover

In a Cold Failover Cluster environment, a failover node (node2.example.com) must be equivalent to the install machine (node1.example.com) in all respects. To make the failover node equivalent to the installation node, perform the following procedure on the failover instance:

For UNIX platforms, follow these steps:

  1. Fail over the Middleware Home from Node 1 (the installation node) to the failover node (Node 2), following the mount/unmount procedure described previously.

  2. As root, do the following:

    • Create an oraInst.loc file located in the /etc directory identical to the file on Node1.

    • Run the root.sh file located in the ORACLE_HOME directory on Node2, if it is required, and is available for the product suite.

  3. Create the oraInventory on the second node by running the attachHome.sh script located in the ORACLE_HOME/oui/bin directory, as shown in the sketch below.
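
    A minimal sketch of these equivalence steps on Node2, assuming a hypothetical inventory location and Oracle home path:

    # As root: replicate the inventory pointer from Node1
    cat > /etc/oraInst.loc <<EOF
    inventory_loc=/u01/app/oraInventory
    inst_group=oinstall
    EOF
    
    # As the software owner: attach the shared Oracle home to the local inventory
    cd /u01/app/oracle/middleware/oracle_common/oui/bin
    ./attachHome.sh
    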

15.2.6 Transforming an Oracle Database

In a typical Cold Failover Cluster deployment of Oracle Fusion Middleware, the database is also deployed as a cold failover cluster. This section describes how to transform a single instance Oracle database to a Cold Failover Cluster database. Perform this transformation before seeding the database using RCU and subsequent Fusion Middleware installations that use this seeded database.

To enable the database for Cold Failover Cluster:

  1. Change the listener configuration that the database instance uses in the listener.ora file.

    Ensure that the HOST name in the listener configuration has the value of the virtual hostname. In addition, ensure that no other process (Oracle or third party) uses the listener port.

    <listener_name>  =
     (DESCRIPTION_LIST =
       (DESCRIPTION =
             (ADDRESS = (PROTOCOL = TCP)(HOST = <virtual_hostname>)(PORT = port))
       )
    )
    

    For example:

    LISTENER_CFCDB =
     (DESCRIPTION_LIST =
       (DESCRIPTION =
             (ADDRESS = (PROTOCOL = TCP)(HOST = cfcdbhost.example.com)(PORT = 1521))
       )
     )
    
  2. Change the tnsnames.ora file.

    Change an existing TNS service alias entry or create a new one:

    <tns_alias_name> =
     (DESCRIPTION =
       (ADDRESS_LIST =
         (ADDRESS = (PROTOCOL = TCP)(HOST = <virtual_hostname>)(PORT = port))
       )
       (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = <db_service_name>)
          (INSTANCE_NAME = <db_instance_name>)
       )
     )
    
    

    For example:

    CFCDB =
     (DESCRIPTION =
       (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = cfcdbhost.example.com)(PORT = 1521))
       )
       (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = cfcdb)
          (INSTANCE_NAME = cfcdb)
       )
     )
    
    
  3. Change the server parameter file (SPFILE) to update the local_listener parameter of the instance.

    Log in as sysdba using SQL*Plus:

    SQL> alter system set local_listener='<tns_alias_name>' scope=both;
    

    For example:

    SQL> alter system set local_listener='CFCDB' scope=both;
    
  4. Shut down and restart the listener.

  5. Shut down and restart the database instance.

  6. Create the database service for the application server.

    Oracle recommends a dedicated service separate from the default database service. To create this service, execute the following SQL*Plus command:

    SQL> execute DBMS_SERVICE.CREATE_SERVICE
    ('<cfc_db_service_name>','<cfc_db_network_name>')
    

    For example:

    SQL> execute DBMS_SERVICE.CREATE_SERVICE
    ('cfcdb_asservice','cfcdb_asservice')
     
    

    To start the service, execute the following SQL*PLUS command:

    SQL> execute DBMS_SERVICE.START_SERVICE ('cfcdb_asservice')
    

    You can set additional parameters for this service depending on the needs of the installation. See the Oracle Database PL/SQL Packages and Types Reference for details about the DBMS_SERVICE package.
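
    To confirm that the listener is servicing the new database service, you can check the listener status. For example, using the listener name from the earlier example:

    lsnrctl status LISTENER_CFCDB
    

    The output should include an entry for the cfcdb_asservice service.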

15.2.6.1 Database Instance Considerations

On UNIX platforms, perform the following steps to make the failover node equivalent for the database instance:

  1. Manually fail over the Database Oracle Home from Node 1 (the installation node) to the failover node (Node 2), following the mount/unmount procedure described earlier.

  2. As root, do the following:

    • Create an oraInst.loc file located in the /etc directory identical to the file on Node1.

    • Create an oratab file located in the /etc directory, identical to the file on Node1.

    • Run the oracleRoot.sh file located in the ORACLE_HOME directory on Node2, if it is required, and is available for the product suite.

  3. Create the oraInventory on the second node by running the attachHome.sh script located in the ORACLE_HOME/oui/bin directory.

15.3 Oracle Fusion Middleware Cold Failover Cluster Example Topologies

This section shows sample Cold Failover Cluster topologies. Because many combinations of topologies are possible, these topologies are illustrative only. To achieve them, apply more than one of the transformation steps described earlier in this chapter.

This section includes the following topics:

  • Section 15.3.1, "Example Topology 1"

  • Section 15.3.2, "Example Topology 2"

  • Section 15.3.3, "Example Topology 3"

15.3.1 Example Topology 1

Figure 15-3 shows an Oracle WebCenter Portal Cold Failover Cluster deployment. Both the Administration Server and the WebCenter Managed Servers are in the domain and fail over as a unit. Therefore, they share the same virtual IP and are installed together on the same shared disk. An Oracle HTTP Server may front-end this topology. It is on a separate node in the example topology, but it can also be on the same node and be part of the Cold Failover Cluster deployment. In this example, the database is also on a separate node. However, it is equally likely that the database is on the same cluster and is also Cold Failover Cluster-based (using its own virtual IP and shared disk).

Figure 15-3 Cold Failover Cluster Example Topology 1


15.3.2 Example Topology 2

Figure 15-4 shows an example SOA Cold Failover Cluster deployment. In this example, only the SOA instance is deployed as Cold Failover Cluster, and the Administration Server is on a separate node. The database is also on a separate node in this example topology. Oracle HTTP Server in this case is part of the Cold Failover Cluster deployment, and part of the same failover unit as the SOA Managed Servers. Important variants of this topology include a Cold Failover Cluster Administration Server on the same hardware cluster. It may share the same virtual IP and shared disk as the SOA Managed Servers (SOA and Administration Server are part of the same failover unit) or use a separate virtual IP, and shared disk (Administration Server fails over independently). Similarly, depending on the machine capacity, the database instance can also reside on the same hardware cluster.

Figure 15-4 Cold Failover Cluster Example Topology 2


15.3.3 Example Topology 3

Figure 15-5 shows an Oracle Identity Management deployment. In this example topology, all components are on a two-node hardware cluster. Identity Management fails over as a unit, and both the Java EE components (Administration Server and WLS_ODS Managed Server) and system components are part of the same failover unit. They share the same virtual IP and shared disk (cfcvip1.example.com). The database is also on the same hardware cluster. It uses a different virtual IP, cfcvip2.example.com, and a different set of shared disks. During normal operations, the database runs on Node2 and the Identity Management stack runs on Node1. Each node acts as a backup for the other.

This topology is recommended for most Cold Failover Cluster deployments. The example shows Identity Management, but the same architecture applies to the Oracle SOA, Oracle WebCenter Portal, and Oracle Portal, Forms, Reports, and Discoverer suites. In the recommended architecture, Oracle Fusion Middleware runs on one node of the hardware cluster and the Oracle database runs on the other node; each node is a backup for the other. The Oracle Fusion Middleware instance and the database instance fail over independently of each other, using different shared disks and different VIPs. This architecture also ensures that the hardware cluster resources are used optimally.

Figure 15-5 Cold Failover Cluster Example Topology 3


15.4 Transforming the Administration Server in an Existing Domain for Cold Failover Cluster

This section describes the steps for transforming an Administration Server in an existing domain to Cold Failover Cluster.

Note:

After Administration Server transformation, client side changes for Administration Server and Enterprise Manager, as mentioned in Section 15.2.3.5, "Transforming the Administration Server for Cold Failover Cluster," may be required.

Assumptions

The procedures in this section assume that:

  • All Fusion Middleware components are in a nostage deployment.

  • The starting topology is an active-active cluster of the product suite (Node1 and Node2).

  • The Administration Server initially runs on Node1.

  • It shares the same domain home as the Managed Server on Node1.

  • The MW_HOME path is the same on both Node1 and Node2.

These procedures also apply to customer applications deployed to Oracle WebLogic cluster installations, if the assumptions are met.

Start Topology

Figure 15-6 shows an example start topology before transforming the Administration Server.

Figure 15-6 Cold Failover Cluster Example Start Topology


15.4.1 Destination Topologies

Figure 15-7 shows the possible destination topology after transforming the Administration Server for Cold Failover Cluster with the following characteristics:

  • The Administration Server domain home is moved to a shared disk that both Node1 and Node2 can mount, but that only one of the two nodes mounts at any given time.

  • It continues to use the original Middleware Home available on Node1 and Node2.

  • The Listen Address of the Administration Server is moved to a virtual IP.

Figure 15-7 Possible Destination Topologies


15.4.2 Cold Failover Cluster Transformation Procedure

To transform the Administration Server in an existing domain:

  1. Shut down the cluster of the component Managed Servers and Administration Server.

  2. Shut down the Node Manager process on each of the nodes, if it is running.

  3. Back up the entire domain.

  4. Provision the virtual IP on Node1.

    On UNIX, for example:

    /sbin/ifconfig eth0:1 IP_Address netmask netmask
    /sbin/arping -q -U -c 3 -I eth0 IP_Address
    

    Where IP_Address is the virtual IP address and netmask is the associated netmask.

    In the following example, the virtual IP 130.35.46.17 is enabled on the interface eth0:

    /sbin/ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17
    

    On Windows, for example:

    netsh interface ip add address "Local Area connection" IP_Address netmask
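
    Before starting the Administration Server, it can help to confirm that the virtual IP is live. A quick check on UNIX, assuming the example above and that cfcvip.example.com resolves to the virtual IP:

    /sbin/ifconfig eth0:1
    ping -c 3 cfcvip.example.com
    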
    
  5. Start the Administration Server from the local Domain Home.

    cd DOMAIN_HOME/bin
    ./startWebLogic.sh
    
  6. Transform the Administration Server instance to Cold Failover Cluster:

    Log into the Administration Console.

    Create a machine for the Virtual Host.

    1. Select Environment, and then Machines.

    2. In the Change Center, click Lock & Edit. Click New.

    3. In the Name field, enter cfcvip.example.com.

      Note:

      Keep the Listen Address field set to localhost; the CFC solution relies on this setting. Do not change it to the virtual IP_Address or any other value.

      The Administration Server requires a new Virtual Host Machine so that it can interact with multiple node managers, not just the local one. Each node manager on each host listens on all interfaces, both the real IP and the local host (127.0.0.1). The Administration Server uses the new Virtual Host Machine definition exclusively.

      The Virtual Host Machine must point to localhost because localhost is the relative internal address for whichever machine is active; it follows the Administration Server. The Administration Server moves from one host to the other but keeps the same virtual name and VIP. The Node Manager associated with the Administration Server also changes, because the Administration Server uses the localhost attribute in conjunction with the first host and then, after failover, in conjunction with the second host.

    4. Select the appropriate operating system and click OK.

    5. Select the machine you just created.

    6. Click the Servers tab then click Add.

    7. Select an existing server, and associate it with this machine.

    8. In the Select Server drop-down list, ensure AdminServer is selected.

    9. Click Finish then click Activate Changes.

    Configure the administration server to listen on cfcvip.example.com.

    1. Select Environment, and then Servers from the Domain Structure menu.

    2. In the Change Center, click Lock & Edit.

    3. Click on the Administration Server (AdminServer).

    4. Change the Listen Address to cfcvip.example.com, and click Save.

    Stop the Administration Server from the Administration Console.

    1. Select Environment, and then Servers from the Domain Structure menu.

    2. Click Control.

    3. Select AdminServer by selecting the checkbox next to it.

    4. Shut down AdminServer by selecting Force Shutdown Now from the Shutdown menu.

    5. Click Yes.

    Ensure that the VIP is enabled on the system and start the Administration Server from the command line:

    cd DOMAIN_HOME/bin
    ./startWebLogic.sh
    
  7. Validate the Administration Server by accessing the consoles on the virtual IP:

    • http://cfcvip.example.com:7001/console

    • http://cfcvip.example.com:7001/em

  8. Shut down the Administration Server on Node1 using the Administration Console.

  9. Ensure that the shared disk is provisioned and mounted on Node1.

  10. Pack the entire domain on Node1:

    cd ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=false -domain=/localdisk/user_projects/domains/domain_name
    -template=cfcdomaintemplate_all.jar -template_name=cfc_domain_template_all
    
  11. Unpack it on the shared disk:

    ./unpack.sh -domain=/shareddisk/user_projects/domains/domain_name
    -template=cfcdomaintemplate_all.jar 
    -app_dir=/shareddisk/user_projects/apps -server_start_mode=prod
    
  12. Product suite-specific changes:

    In the Oracle WebCenter Content Product Suite, change the Socket Host Filter to the virtual IP in the following files:

      • {domain}/ucm/ibr/config/config.cfg

      • {domain}/ucm/urm/config/config.cfg

      • {domain}/ucm/cs/config/config.cfg

    In the Oracle Identity Management Product Suite:

    • Back up the config.xml file, located in the /shareddisk/user_projects/domains/domain_name/config/ directory in this example.

    • Edit the config.xml file, located in the /shareddisk/user_projects/domains/domain_name/config directory in this example, and make the following changes to source-path:

      For dipapp, change source path to ORACLE_HOME/ldap/odi/dipapp/dipapps.ear.

      For example:

        <app-deployment>
          <name>DIP#11.1.1.2.0</name>
          <target>cluster_ods</target>
          <module-type>ear</module-type>
          <source-path>ORACLE_HOME/ldap/odi/dipapp/dipapps.ear</source-path>
          <security-dd-model>DDOnly</security-dd-model>
          <staging-mode>nostage</staging-mode>
        </app-deployment>
      

      For the ODSM application, change the source path to ORACLE_HOME/ldap/odsm/odsm.ear.

      For example:

        <app-deployment>
          <name>odsm#11.1.1.2.0</name>
          <target>cluster_ods</target>
          <module-type>ear</module-type>
          <source-path>ORACLE_HOME/ldap/odsm/odsm.ear</source-path>
          <security-dd-model>DDOnly</security-dd-model>
          <staging-mode>nostage</staging-mode>
        </app-deployment>
      
    • For Oracle Identity Federation (OIF):

      Change the source path of OIF-APP to ORACLE_HOME/fed/install/oif.ear:

        <app-deployment>
          <name>OIF#11.1.1.2.0</name>
          <target>cluster_oif</target>
          <module-type>ear</module-type>
          <source-path>ORACLE_HOME/fed/install/oif.ear</source-path>
          <security-dd-model>Advanced</security-dd-model>
          <staging-mode>nostage</staging-mode>
        </app-deployment>
      

      Change the source path of oif-libs to ORACLE_HOME/lib/java/shared/oracle.idm.oif/11.1.1.0.0/oif-libs.ear:

        <library>
          <name>oif-libs#11.1.1.2.0@11.1.1.2.0</name>
          <target>cluster_oif</target>
          <module-type>ear</module-type>
          <source-path>ORACLE_HOME/lib/java/shared/oracle.idm.oif/11.1.1.0.0/oif-libs.ear</source-path>
          <security-dd-model>DDOnly</security-dd-model>
        </library>
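
    To verify the source-path edits before restarting, a quick check against the domain configuration can help. For example:

    grep -n "source-path" /shareddisk/user_projects/domains/domain_name/config/config.xml
    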
      
  13. Start the Administration Server:

    cd /shareddisk/user_projects/domains/domain_name/bin
    ./startWebLogic.sh
    
  14. Validate the Administration Server by accessing the consoles on the virtual IP.

    • http://cfcvip.example.com:7001/console

    • http://cfcvip.example.com:7001/em

  15. Shut down the Administration Server.

  16. Perform the following on Node1:

    mv /localdisk/user_projects/domains/domain_name
    /localdisk/user_projects/domains/domain_name_old
    mv /localdisk/user_projects/applications/domain_name
    /localdisk/user_projects/applications/domain_name_old
    cd ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true -domain=/shareddisk/user_projects/domains/domain_name
    -template=cfcdomaintemplate_mngd.jar -template_name=cfc_domain_template_mngd
    ./unpack.sh -domain=/localdisk/user_projects/domains/domain_name
    -template=cfcdomaintemplate_mngd.jar
    

    Note:

    These commands assume an applications directory exists under user_projects.

  17. Copy the template to Node2:

    scp ORACLE_COMMON_HOME/common/bin/cfcdomaintemplate_mngd.jar
    Node2:ORACLE_COMMON_HOME/common/bin
    
  18. Log in to Node2, and unpack the template on Node2:

    mv /localdisk/user_projects/domains/domain_name
    /localdisk/user_projects/domains/domain_name_old
    mv /localdisk/user_projects/applications/domain_name
    /localdisk/user_projects/applications/domain_name_old
    cd ORACLE_COMMON_HOME/common/bin
    ./unpack.sh -domain=/localdisk/user_projects/domains/domain_name
    -template=cfcdomaintemplate_mngd.jar
    

    Note:

    These commands assume an applications directory exists under user_projects.

  19. For the Oracle Identity Management Suite, the following additional steps are required:

    For DIP:

    1. Locate the applications directory in the Oracle WebLogic Server domain directory on Node1:

      MW_HOME/user_projects/domains/IDMDomain/config/fmwconfig/servers/wls_ods1/applications
      
    2. Copy the applications directory and its contents on Node1 to the same location in the domain directory on Node2.

      scp -rp MW_HOME/user_projects/domains/IDMDomain/config/fmwconfig/servers
                     /wls_ods1/applications
           user@IDMHOST2:MW_HOME/user_projects/domains/IDMDomain/config/fmwconfig
                     /servers/wls_ods2/applications
      
      

    For OIF:

    1. Locate the applications directory in the Oracle WebLogic Server domain directory on Node1:

      MW_HOME/user_projects/domains/OIFDomain/config/fmwconfig/servers/wls_oif1/applications
      
    2. Copy the applications directory and its contents on Node1 to the same location in the domain directory on Node2.

      scp -rp MW_HOME/user_projects/domains/OIFDomain/config/fmwconfig/servers
                     /wls_oif1/applications
           user@IDMHOST2:MW_HOME/user_projects/domains/OIFDomain/config/fmwconfig
                     /servers/wls_oif2/applications
      
  20. Start the Administration Server:

    cd /shareddisk/user_projects/domains/domain_name/bin
    ./startWebLogic.sh
    
  21. Start the node manager (if used) on Node1 and Node2:

    cd WL_HOME/server/bin
    ./startNodeManager.sh
    
  22. Start the component Managed Servers on Node1 and Node2 from the Administration Server Console (if Node Manager is used) or from the command line.

  23. Validate the deployment using component-specific functional tests.

  24. Test Administration Server failover.

    Fail over the Administration Server manually to the second node (a condensed command sketch appears at the end of this section):

    1. Stop the Administration Server process (and any other process running out of a given Middleware Home).

    2. Unmount the shared storage from Node1 where the Middleware Home or domain directory exists.

    3. Mount the shared storage on Node2, following storage specific commands.

    4. Disable the virtual IP on Node1.

      On UNIX, for example:

      ifconfig interface:index down
      

      In the following example, the virtual IP is disabled on the interface eth0:

      ifconfig eth0:1 down
      

      On Windows, for example:

      netsh interface ip delete address interface_name addr=IP_Address
      

      Where IP_Address is the virtual IP address.

      In the following example, the virtual IP 130.35.46.17 is removed from the interface Local Area Connection:

      netsh interface ip delete address "Local Area connection" addr=130.35.46.17
      
    5. Enable the virtual IP on Node2.

    6. Start the Administration Server process using the following command:

      DOMAIN_HOME/bin/startWebLogic.sh
      

      Where DOMAIN_HOME is the location of your domain directory.

    Validate access to both the Administration Server and Oracle Enterprise Manager Administration Console.

    Validate with component-specific tests.

  25. After validation, fail back the Administration Server to the node where it normally runs (this could be Node1 or Node2) and resume normal operations.
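
As a reference for the failover test in step 24, the following is a condensed sketch of the manual sequence on UNIX. It assumes a hypothetical mount point of /shareddisk, the interface and VIP values from the earlier examples, and that your storage requires only simple mount/umount commands:

    # On the currently active node: stop the server, release the VIP and storage
    /shareddisk/user_projects/domains/domain_name/bin/stopWebLogic.sh
    /sbin/ifconfig eth0:1 down
    umount /shareddisk
    
    # On the node taking over: acquire the storage and VIP, then restart
    mount /shareddisk
    /sbin/ifconfig eth0:1 130.35.46.17 netmask 255.255.224.0
    /sbin/arping -q -U -c 3 -I eth0 130.35.46.17
    /shareddisk/user_projects/domains/domain_name/bin/startWebLogic.sh
    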