
16 Configuring Server Migration for Oracle Identity Manager

For this high availability topology, you must configure server migration for the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 Managed Servers. The WLS_OIM1 and WLS_SOA1 Managed Servers are configured to restart on OIMHOST2 should a failure occur, and the WLS_OIM2 and WLS_SOA2 Managed Servers are configured to restart on OIMHOST1. In this configuration, the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 servers listen on specific floating IPs that are failed over by WebLogic Server Migration, which enables a Managed Server to fail over to another node in the case of a server or process failure.

Configuring server migration for these Managed Servers consists of the steps described in the following sections:

16.1 Setting Up a User and Tablespace for the Server Migration Leasing Table

The first step is to set up a user and tablespace for the server migration leasing table:

Note:

If other servers in the same domain have already been configured with server migration, the same tablespace and data sources can be used. In that case, the data sources and multi data source for database leasing do not need to be re-created, but they must be retargeted to the clusters being configured with server migration.

  1. Create a tablespace called leasing. For example, log on to SQL*Plus as the sysdba user and run the following command:

    SQL> create tablespace leasing logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named leasing and assign to it the leasing tablespace:

    SQL> create user leasing identified by welcome1;
    SQL> grant create table to leasing;
    SQL> grant create session to leasing;
    SQL> alter user leasing default tablespace leasing;
    SQL> alter user leasing quota unlimited on LEASING;
    
  3. Create the leasing table using the leasing.ddl script:

    1. Copy the leasing.ddl file located in either the WL_HOME/server/db/oracle/817 or the WL_HOME/server/db/oracle/920 directory to your database node.

    2. Connect to the database as the leasing user.

    3. Run the leasing.ddl script in SQL*Plus (a quick verification sketch follows these steps):

      SQL> @Copy_Location/leasing.ddl;
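
To confirm that leasing.ddl ran successfully, you can list the leasing user's tables from SQL*Plus. This is a minimal sanity check; the exact table name created by the script depends on your WebLogic Server version (recent releases create a single leasing table, typically named ACTIVE):

    SQL> connect leasing/welcome1
    SQL> select table_name from user_tables;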
      

16.2 Creating a Multi Data Source Using the Oracle WebLogic Administration Console

The second step is to create a multi data source for the leasing table from the Oracle WebLogic Server Administration Console. You create a data source for each of the Oracle RAC database instances during the process of setting up the multi data source, both for these data sources and the global leasing multi data source. When you create the data sources, make sure that they use a non-XA driver, that their names follow the format MultiDS-rac0, MultiDS-rac1, and so on, and that they are targeted to both the OIM and SOA clusters.

Creating a Multi Data Source

Perform these steps to create a multi data source:

  1. From the Domain Structure window in the Oracle WebLogic Server Administration Console, expand the Services node.

  2. Click Data Sources. The Summary of JDBC Data Sources page is displayed.

  3. Click Lock and Edit.

  4. Click New Multi Data Source. The Create a New JDBC Multi Data Source page is displayed.

  5. Enter leasing as the name.

  6. Enter jdbc/leasing as the JNDI name.

  7. Select Failover as the algorithm (the default).

  8. Click Next.

  9. Select OIM_CLUSTER and SOA_CLUSTER as the targets.

  10. Click Next.

  11. Select the non-XA driver (the default).

  12. Click Next.

  13. Click Create New Data Source.

  14. Enter leasing-rac0 as the name. Enter jdbc/leasing-rac0 as the JNDI name. Enter oracle as the database type. For the driver type, select Oracle Driver (Thin) for Oracle RAC Service-Instance connections, Versions:10 and later.

    Note:

    When creating the multi data sources for the leasing table, enter names in the format of MultiDS-rac0, MultiDS-rac1, and so on.

  15. Click Next.

  16. Deselect Supports Global Transactions.

  17. Click Next.

  18. Enter the service name, database name, host name, port, user name, and password for your leasing schema (an example of the resulting connection URL appears after these steps).

  19. Click Next.

  20. Click Test Configuration and verify that the connection works.

  21. Click Next.

  22. Target the data source to the OIM_CLUSTER and SOA_CLUSTER.

  23. Select the data source you just created, for example leasing-rac0, and move it to the Chosen list.

  24. Click Create a New Data Source and repeat the preceding steps for the second instance of your Oracle RAC database, targeting it to OIM_CLUSTER and SOA_CLUSTER.

  25. Add the second data source to your multi data source.

  26. Click Activate Changes.
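
For reference, the Administration Console generates the data source connection URL from the values entered in step 18. The following is a sketch of the URL format used by the thin driver for Oracle RAC service-instance connections; the host, port, service name, and instance name shown are placeholders for your environment:

    jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=OIMDBHOST1-VIP)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=oimedg.example.com)(INSTANCE_NAME=oimdb1)))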

16.3 Editing Node Manager's Properties File

The third step is to edit Node Manager's properties file, nodemanager.properties. This must be done for the Node Managers on both nodes (OIMHOST1 and OIMHOST2) where server migration is being configured:

Interface=eth0
NetMask=255.255.255.0
UseMACBroadcast=true

• Interface: Specifies the interface name (eth0, for example) on which the floating IP addresses are enabled.

• NetMask: Specifies the net mask for the interface on which the floating IP addresses are enabled.

• UseMACBroadcast: Specifies whether to use the node's MAC address when sending ARP packets, that is, whether to use the -b flag in the arping command.

Verify in Node Manager's output (the shell where Node Manager is started) that these properties are being used; otherwise, problems may arise during migration. You should see something like this in Node Manager's output:

...
StateCheckInterval=500
Interface=eth0
NetMask=255.255.255.0
...

Note:

The following steps are not required if the server properties (start properties) have been properly set and Node Manager can start the servers remotely.

  1. Set the following property in the nodemanager.properties file:

    • StartScriptEnabled: Set this property to true. This is required to enable Node Manager to start the Managed Servers.

  2. Start Node Manager on OIMHOST1 and OIMHOST2 by running the startNodeManager.sh script, which is located in the WL_HOME/server/bin directory.

Note:

When running Node Manager from a shared storage installation, multiple nodes are started using the same nodemanager.properties file. However, each node may require different NetMask or Interface properties. In this case, specify individual parameters on a per-node basis using environment variables. For example, to use a different interface (eth3) in HOSTn, use the Interface environment variable as follows:

HOSTn> export JAVA_OPTIONS=-DInterface=eth3

and start Node Manager after the variable has been set in the shell.
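
Putting these pieces together, the following is a minimal sketch of starting Node Manager on a host that requires a non-default interface; the interface name eth3 is an example:

    HOSTn> export JAVA_OPTIONS=-DInterface=eth3
    HOSTn> cd WL_HOME/server/bin
    HOSTn> ./startNodeManager.sh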

16.4 Setting Environment and Superuser Privileges for the wlsifconfig.sh Script

This section is not required on Windows. On Linux and UNIX-based systems, the fourth step is to set environment and superuser privileges for the wlsifconfig.sh script:

  1. Ensure that your PATH environment variable includes the directories that contain these files (a combined example appears at the end of this section):

    Table 16-1 Files Required for the PATH Environment Variable

    File                   Located in this directory
    ----                   -------------------------
    wlsifconfig.sh         ORACLE_BASE/admin/domain_name/mserver/domain_name/bin/server_migration
    wlscontrol.sh          WL_HOME/common/bin
    nodemanager.domains    WL_HOME/common

  2. Grant sudo privileges for the wlsifconfig.sh script:

    • Configure sudo to work without a password prompt.

    • For security reasons, restrict sudo to the subset of commands required to run the wlsifconfig.sh script: grant sudo privilege to the WebLogic user oracle with no password restriction, and grant execute privilege only on the /sbin/ifconfig and /sbin/arping binaries.

    • Ensure that the script is executable by the WebLogic user oracle. The following is an example of an entry in /etc/sudoers that grants the oracle user sudo execution privilege for ifconfig and arping:

      oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
      

    Note:

    Ask the system administrator for the appropriate sudo and system rights to perform this step.
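
As a concrete illustration of this section, the following sketch appends the Table 16-1 directories to the PATH and then verifies the sudo grant. Replace ORACLE_BASE and WL_HOME with the actual paths for your environment; the sudo -l output shown is indicative and varies by sudo version:

    OIMHOSTn> export PATH=$PATH:ORACLE_BASE/admin/domain_name/mserver/domain_name/bin/server_migration
    OIMHOSTn> export PATH=$PATH:WL_HOME/common/bin:WL_HOME/common
    OIMHOSTn> sudo -l
    User oracle may run the following commands on this host:
        (root) NOPASSWD: /sbin/ifconfig, /sbin/arping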

16.5 Configuring Server Migration Targets

The fifth step is to configure server migration targets. You first assign all the available nodes for the cluster's members and then specify candidate machines (in order of preference) for each server that is configured with server migration. Follow these steps to configure migration for a cluster:

  1. Log in to the Oracle WebLogic Server Administration Console (http://Host:Admin_Port/console). The default Admin_Port is 7001.

  2. In the Domain Structure window, expand Environment and select Clusters. The Summary of Clusters page is displayed.

  3. Click the cluster for which you want to configure migration (OIM_CLUSTER) in the Name column of the table.

  4. Click the Migration tab.

  5. Click Lock and Edit.

  6. In the Available field, select the machines to which to allow migration and click the right arrow. In this case, select OIMHOST1 and OIMHOST2.

  7. Select the data source to be used for automatic migration. In this case, select the leasing data source.

  8. Click Save.

  9. Click Activate Changes.

  10. Repeat steps 2 through 9 for the SOA cluster.

  11. Set the candidate machines for server migration. You must perform this task for all of the Managed Servers as follows:

    1. In the Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.

      Tip:

      Click Customize this table in the Summary of Servers page and move Current Machine from the Available window to the Chosen window to view the machine on which each server is currently running. After an automatic migration, this can differ from the machine in the server's configuration.

    2. Select the server for which you want to configure migration.

    3. Click the Migration tab.

    4. In the Available field, located in the Migration Configuration section, select the machines to which to allow migration and click the right arrow. For WLS_OIM1, select OIMHOST2. For WLS_OIM2, select OIMHOST1.

    5. Select Automatic Server Migration Enabled. This enables Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Click Activate Changes.

    8. Repeat the previous steps for the WLS_SOA1 and WLS_SOA2 Managed Servers.

    9. Restart the WebLogic Administration Server, the Node Managers, and the servers for which server migration has been configured.

16.6 Testing the Server Migration

The final step is to test the server migration. Perform these steps to verify that server migration is working properly:

From OIMHOST1:

  1. Stop the WLS_OIM1 Managed Server. To do this, run this command:

    OIMHOST1> kill -9 pid
    

    where pid specifies the process ID of the Managed Server. You can identify the pid in the node by running this command:

    OIMHOST1> ps -ef | grep WLS_OIM1
    
  2. Watch the Node Manager console. You should see a message indicating that WLS_OIM1's floating IP has been disabled.

  3. Wait for Node Manager to try a second restart of WLS_OIM1. It waits for a fence period of 30 seconds before trying this restart.

  4. Once Node Manager restarts the server, stop it again. Node Manager should now log a message indicating that the server will not be restarted again locally.

From OIMHOST2:

  1. Watch the local Node Manager console. Thirty seconds after the last restart attempt of WLS_OIM1 on OIMHOST1, Node Manager on OIMHOST2 should log a message indicating that the floating IP for WLS_OIM1 is being brought up and that the server is being restarted on this node.

  2. Access the soa-infra console at the same IP address.
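
You can also confirm the failover at the operating system level. The following is a minimal sketch, assuming the floating IP is plumbed as an alias of eth0 on OIMHOST2 (your interface name and alias number may differ):

    OIMHOST2> /sbin/ifconfig
    # WLS_OIM1's floating IP should now appear on an OIMHOST2
    # interface alias, for example eth0:1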

Follow the previous steps to test server migration for the WLS_OIM2, WLS_SOA1, and WLS_SOA2 Managed Servers.

Table 16-2 shows the Managed Servers and the hosts they migrate to in case of a failure.

Table 16-2 WLS_OIM1, WLS_OIM2, WLS_SOA1, WLS_SOA2 Server Migration

Managed Server    Migrated From    Migrated To
--------------    -------------    -----------
WLS_OIM1          OIMHOST1         OIMHOST2
WLS_OIM2          OIMHOST2         OIMHOST1
WLS_SOA1          OIMHOST1         OIMHOST2
WLS_SOA2          OIMHOST2         OIMHOST1


Verification From the Administration Console

Migration can also be verified in the Administration Console:

  1. Log in to the Administration Console.

  2. Click the domain name in the Domain Structure window (left pane).

  3. Click the Monitoring tab and then the Migration subtab.

    The Migration Status table provides information on the status of the migration.

Note:

After a server is migrated, to fail it back to its original node/machine, stop the Managed Server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the Managed Server on the machine to which it was originally assigned.