Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management (Oracle Fusion Applications Edition)
11g Release 7 (11.1.7)

Part Number E21032-21

13 Configuring Server Migration for an Enterprise Deployment

Configuring server migration allows the SOA and OIM Managed Servers to be migrated from one host to another, so that if a node hosting one of the servers fails, the service can continue on another node. This chapter describes how to configure server migration for an Identity Management enterprise deployment.

This chapter contains the following sections:

  • Section 13.1, "Overview of Server Migration for an Enterprise Deployment"

  • Section 13.2, "Setting Up a User and Tablespace for the Server Migration Leasing Table"

  • Section 13.3, "Creating a Multi Data Source Using the Oracle WebLogic Administration Console"

  • Section 13.4, "Editing Node Manager's Properties File"

  • Section 13.5, "Setting Environment and Superuser Privileges for the wlsifconfig.sh Script"

  • Section 13.6, "Configuring Server Migration Targets"

  • Section 13.7, "Testing the Server Migration"

  • Section 13.8, "Backing Up the Server Migration Configuration"

13.1 Overview of Server Migration for an Enterprise Deployment

Configure server migration for the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 Managed Servers. The WLS_OIM1 and WLS_SOA1 Managed Servers are configured to restart on IDMHOST2 should a failure occur. The WLS_OIM2 and WLS_SOA2 Managed Servers are configured to restart on IDMHOST1 should a failure occur. The WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 servers listen on specific floating IPs that are failed over by WebLogic Server Migration.

Perform the steps in the following sections to configure server migration for the WLS_OIM1, WLS_SOA1, WLS_OIM2, and WLS_SOA2 Managed Servers.

13.2 Setting Up a User and Tablespace for the Server Migration Leasing Table

In this section, you set up a user and tablespace for the server migration leasing table:

Note:

If other servers in the same domain have already been configured with server migration, the same tablespace and data sources can be used. In that case, the data sources and multi data source for database leasing do not need to be re-created, but they must be retargeted to the clusters being configured with server migration.

  1. Create a tablespace called leasing. For example, log on to SQL*Plus as the sysdba user and run the following command:

    create tablespace leasing 
    logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' 
    size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named leasing and assign to it the leasing tablespace:

    create user leasing identified by password;
    grant create table to leasing;
    grant create session to leasing;
    alter user leasing default tablespace leasing;
    alter user leasing quota unlimited on LEASING;
    
  3. Create the leasing table using the leasing.ddl script:

    1. Copy the leasing.ddl file located in either of the following directories to your database node:

      WL_HOME/server/db/oracle/817
      WL_HOME/server/db/oracle/920
      
    2. Connect to the database as the leasing user.

    3. Run the leasing.ddl script in SQL*Plus:

      @Copy_Location/leasing.ddl;
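
As an optional sanity check (not part of the original steps), you can confirm that the script created the leasing table; in current WebLogic releases, leasing.ddl creates a table named ACTIVE. The connect string below is a placeholder for your environment, not a value from this guide:

    # Expect ACTIVE to appear in the output. Substitute your own host,
    # port, service name, and leasing password.
    sqlplus leasing/password@//DBHOST.mycompany.com:1521/servicename <<EOF
    select table_name from user_tables;
    EOF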
      

13.3 Creating a Multi Data Source Using the Oracle WebLogic Administration Console

The second step is to create a multi data source for the leasing table from the Oracle WebLogic Server Administration Console. (Console URLs are provided in Section 16.2, "About Identity Management Console URLs.") You create a data source for each of the Oracle RAC database instances during the process of setting up the multi data source, both for these data sources and the global leasing multi data source. When you create a data source:

  • Make sure that it is a non-XA data source.

  • Use names in the format MultiDS-rac0, MultiDS-rac1, and so on.

  • Do not select Supports Global Transactions.

  • Target the data source to both the oim_cluster and the soa_cluster.

Creating a Multi Data Source

Perform these steps to create a multi data source:

  1. From the Domain Structure window in the Oracle WebLogic Server Administration Console, expand the Services node.

  2. Click Data Sources. The Summary of JDBC Data Sources page is displayed.

  3. Click Lock and Edit.

  4. Click New Multi Data Source. The Create a New JDBC Multi Data Source page is displayed.

  5. Enter leasing as the name.

  6. Enter jdbc/leasing as the JNDI name.

  7. Select Failover as the algorithm (the default).

  8. Click Next.

  9. Select oim_cluster and soa_cluster as the targets.

  10. Click Next.

  11. Select the non-XA driver (the default).

  12. Click Next.

  13. Click Create New Data Source.

  14. Enter leasing-rac0 as the name. Enter jdbc/leasing-rac0 as the JNDI name. Enter oracle as the database type.

    Note:

    When creating the multi data sources for the leasing table, enter names in the format of MultiDS-rac0, MultiDS-rac1, and so on.

  15. Click Next.

  16. On JDBC Data Source Properties, select Database Driver: Oracle's Driver (Thin) for RAC Service-Instance connections.

  17. Deselect Supports Global Transactions.

  18. Click Next.

  19. Enter the service name, database name, host name, port, and password for your leasing schema.

  20. Click Next.

  21. Click Test Configuration and verify that the connection works.

  22. Click Next.

  23. Target the data source to the oim_cluster and soa_cluster.

  24. Click Finish.

  25. Select the data source you just created, for example leasing-rac0, and move it to the Chosen list on the right.

  26. Click Create a New Data Source and repeat steps 14 through 24 for the second instance of your Oracle RAC database (for example, leasing-rac1), targeting it to the oim_cluster and soa_cluster.

  27. Add the second data source to your multi data source.

  28. Click Activate Changes.
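
Optionally, you can confirm from IDMHOST1 and IDMHOST2 that each Oracle RAC instance accepts the leasing credentials before relying on the console's Test Configuration. The following is a minimal sketch; the host name, port, and service name are placeholders, not values from this guide:

    # Run once per RAC instance (for example, against the hosts used by
    # leasing-rac0 and leasing-rac1). Substitute your own connect data.
    sqlplus leasing/password@//RACHOST1-VIP.mycompany.com:1521/servicename <<EOF
    select instance_name from v\$instance;
    EOF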

13.4 Editing Node Manager's Properties File

In this section, you edit Node Manager's properties file. This must be done for the Node Managers on the nodes where the servers are running, IDMHOST1 and IDMHOST2.

The nodemanager.properties file is located in the following directory:

SHARED_CONFIG_DIR/nodemanager/hostname

where hostname is the name of the host on which Node Manager is running.

Add the following properties to enable server migration to work properly:

  • Interface=eth0

    Specifies the interface name for the floating IP (for example, eth0). Do not specify a sub-interface, such as eth0:1.

  • NetMask=255.255.255.0

    Specifies the net mask for the interface for the floating IP.

  • UseMACBroadcast=true

    Specifies whether to use a node's MAC address when sending ARP packets, that is, whether to use the -b flag in the arping command.

Verify in Node Manager's output (the shell where Node Manager is started) that these properties are being used; otherwise, problems may arise during migration. You should see something like this in Node Manager's output:

StateCheckInterval=500
eth0=*,NetMask=255.255.255.0
UseMACBroadcast=true

Notes:

  • LogToStderr must be set to true (in nodemanager.properties) in order for you to see the properties in the output.

  • The following steps are not required if the server properties (start properties) have been properly set and Node Manager can start the servers remotely.

  1. If you have not already done so, set the StartScriptEnabled property in the nodemanager.properties file to true. This is required for Node Manager to start the Managed Servers.

  2. Start Node Manager on IDMHOST1 and IDMHOST2 by running the startNodeManagerWrapper.sh script, which is located in the SHARED_CONFIG_DIR/nodemanager/hostname directory.

Note:

When running Node Manager from a shared storage installation, multiple nodes are started using the same nodemanager.properties file. However, each node may require different NetMask or Interface properties. In this case, specify individual parameters on a per-node basis using environment variables. For example, to use a different interface (eth3) in HOSTn, use the Interface environment variable by setting JAVA_OPTIONS to: -DInterface=eth3

Then start Node Manager.
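
For example, a minimal sketch (the interface name eth3 is illustrative; the wrapper script is the one named in this section):

    # Override the interface for this node only, then start Node Manager.
    export JAVA_OPTIONS="$JAVA_OPTIONS -DInterface=eth3"
    cd SHARED_CONFIG_DIR/nodemanager/hostname
    ./startNodeManagerWrapper.sh &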

13.5 Setting Environment and Superuser Privileges for the wlsifconfig.sh Script

On Linux, you must set environment and superuser privileges for the wlsifconfig.sh script:

Ensure that your PATH environment variable includes the files listed in Table 13-1.

Table 13-1 Files Required for the PATH Environment Variable

File                  Located in This Directory
--------------------  ---------------------------------------
wlsifconfig.sh        MSERVER_HOME/bin/server_migration
wlscontrol.sh         WL_HOME/common/bin
nodemanager.domains   SHARED_CONFIG_DIR/nodemanager/HostName
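
For example, the oracle user's shell profile on each host could extend PATH as follows. This is a sketch; substitute the actual values of MSERVER_HOME, WL_HOME, SHARED_CONFIG_DIR, and the host name for your environment.

    # Append the directories from Table 13-1, e.g. in ~/.bash_profile.
    export PATH="$PATH:MSERVER_HOME/bin/server_migration"
    export PATH="$PATH:WL_HOME/common/bin"
    export PATH="$PATH:SHARED_CONFIG_DIR/nodemanager/HostName"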


For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script. To set the superuser privileges:

Note:

Ask the system administrator for the appropriate sudo and system rights to perform this step.

Grant sudo privilege to the WebLogic user ('oracle') with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries. Make sure the wlsifconfig.sh script is executable by the WebLogic user ('oracle').

The following is an example of an entry inside /etc/sudoers that grants sudo execution privilege to oracle for the ifconfig and arping commands:

Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
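
To verify the grants, you can run the following as the oracle user (an optional check, not part of the original procedure). Neither command should prompt for a password.

    # Lists the sudo rules in effect for oracle; expect NOPASSWD entries
    # for /sbin/ifconfig and /sbin/arping.
    sudo -l
    # Should print the interface configuration without a password prompt.
    sudo /sbin/ifconfig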

13.6 Configuring Server Migration Targets

In this section, you configure server migration targets. Configuring Cluster Migration sets the DataSourceForAutomaticMigration property to true.

To configure migration in a cluster:

  1. Log in to the Oracle WebLogic Server Administration Console at: http://ADMIN.mycompany.com/console

  2. In the Domain Structure window, expand Environment and select Clusters. The Summary of Clusters page is displayed.

  3. Click the cluster for which you want to configure migration (oim_cluster) in the Name column of the table.

  4. Click the Migration tab.

  5. Click Lock and Edit.

  6. In the Candidate Machines for Migratable Servers field, select the machines to which to allow migration and click the right arrow. In this case, select IDMHOST1 and IDMHOST2.

  7. Select the data source to be used for automatic migration. In this case, select the leasing data source.

  8. Click Save.

  9. In the Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.

  10. Select the server for which you want to configure migration.

  11. Click the Migration tab.

  12. Select Automatic Server Migration Enabled and click Save.

  13. Click Activate Changes.

  14. Repeat steps 2 through 13 for the SOA cluster.

  15. Restart the WebLogic Administration Server, Node Managers, and the servers for which server migration has been configured, as described in Section 16.1, "Starting and Stopping Components."

Note:

If migration should be allowed only to specific machines, do not specify candidates for the cluster; instead, specify candidates on a server-by-server basis.

13.7 Testing the Server Migration

In this section, you test the server migration. Perform these steps to verify that server migration is working properly:

To test from IDMHOST1:

  1. Stop the WLS_OIM1 Managed Server. To do this, run this command:

    kill -9 pid
    

    where pid specifies the process ID of the Managed Server. You can identify the pid in the node by running this command:

    ps -ef | grep WLS_OIM1
    
  2. Watch the Node Manager console. You should see a message indicating that WLS_OIM1's floating IP has been disabled.

  3. Wait for Node Manager to try a second restart of WLS_OIM1. It waits for a fence period of 30 seconds before trying this restart.

  4. Once Node Manager restarts the server, stop it again. Node Manager should now log a message indicating that the server will not be restarted again locally.

To test from IDMHOST2:

  1. Watch the local Node Manager console. About 30 seconds after the last restart attempt of WLS_OIM1 on IDMHOST1, Node Manager on IDMHOST2 should report that the floating IP for WLS_OIM1 is being brought up and that the server is being restarted on this node.

  2. Access the OIM Console using the Virtual Host Name, for example: OIMVH1. (Console URLs are provided in Section 16.2, "About Identity Management Console URLs.")

Follow the previous steps to test server migration for the WLS_OIM2, WLS_SOA1, and WLS_SOA2 Managed Servers.
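
You can also confirm a failover at the operating-system level. This is an optional check; the virtual host name below is an example.

    # On the host that picked up the server (for example, IDMHOST2 for
    # WLS_OIM1): the floating IP should now appear in the interface list.
    /sbin/ifconfig
    # From any host: the virtual host name should respond on its new node.
    ping -c 3 OIMVH1.mycompany.com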

Table 13-2 shows the Managed Servers and the hosts they migrate to in case of a failure.

Table 13-2 Managed Server Migration

Managed Server   Migrated From   Migrated To
--------------   -------------   -----------
WLS_OIM1         IDMHOST1        IDMHOST2
WLS_OIM2         IDMHOST2        IDMHOST1
WLS_SOA1         IDMHOST1        IDMHOST2
WLS_SOA2         IDMHOST2        IDMHOST1


Verification From the Administration Console

Migration can also be verified in the Administration Console:

  1. Log in to the Administration Console. (Console URLs are provided in Section 16.2, "About Identity Management Console URLs.")

  2. Click IDMDomain in the left pane of the console.

  3. Click the Monitoring tab and then the Migration sub tab.

    The Migration Status table provides information on the status of the migration.

Note:

After a server is migrated, to fail it back to its original node/machine, stop the Managed Server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the Managed Server on the machine to which it was originally assigned.

13.8 Backing Up the Server Migration Configuration

Back up the database and the WebLogic domain, as described in Section 16.5.3, "Performing Backups During Installation and Configuration."