Oracle® Exalogic Elastic Cloud Enterprise Deployment Guide for Oracle SOA Suite
Release EL X2-2 and EL X3-2

E47690-01

13 Configuring Server Migration for an Exalogic Enterprise Deployment

Configuring server migration allows SOA managed servers to be migrated from one host to another, so that if a node hosting one of the servers fails, the service can continue on another node. This chapter describes how to configure server migration for an Oracle Fusion Middleware SOA Exalogic enterprise deployment.

This chapter contains the following sections:

  - Section 13.1, "Overview of Server Migration for an Exalogic Enterprise Deployment"
  - Section 13.2, "Setting Up a User and Tablespace for the Server Migration Leasing Table"
  - Section 13.3, "Creating a GridLink Data Source for Leasing Using the Oracle WebLogic Administration Console"
  - Section 13.4, "Editing Node Manager's Properties File"
  - Section 13.5, "Setting Environment and Superuser Privileges for the wlsifconfig.sh Script"
  - Section 13.6, "Configuring Server Migration Targets"
  - Section 13.7, "Testing the Server Migration"
  - Section 13.8, "Backing Up the Server Migration Configuration"

13.1 Overview of Server Migration for an Exalogic Enterprise Deployment

Configure server migration for the WLS_OSB1, WLS_SOA1, WLS_OSB2, and WLS_SOA2 Managed Servers. The WLS_OSB1 and WLS_SOA1 Managed Servers are configured to restart on SOAHOST2 should a failure occur. The WLS_OSB2 and WLS_SOA2 Managed Servers are configured to restart on SOAHOST1 should a failure occur. The WLS_OSB1, WLS_SOA1, WLS_OSB2, and WLS_SOA2 servers listen on specific floating IPs that are failed over by WebLogic Server Migration.

Perform the steps in the following sections to configure server migration for the WLS_OSB1, WLS_SOA1, WLS_OSB2, and WLS_SOA2 Managed Servers.

13.2 Setting Up a User and Tablespace for the Server Migration Leasing Table

In this section, you set up a user and tablespace for the server migration leasing table:

Note:

If other servers in the same domain have already been configured with server migration, the same tablespace and data sources can be used. In that case, the data sources and multi data source for database leasing do not need to be re-created, but they must be retargeted to the clusters being configured with server migration.

  1. Create a tablespace called leasing. For example, log on to SQL*Plus as the sysdba user and run the following command:

    create tablespace leasing 
    logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named leasing and assign to it the leasing tablespace:

    create user leasing identified by password;
    grant create table to leasing;
    grant create session to leasing;
    alter user leasing default tablespace leasing;
    alter user leasing quota unlimited on leasing;
    
  3. Create the leasing table using the leasing.ddl script:

    1. Copy the leasing.ddl file located in either of the following directories to your database node:

      WL_HOME/server/db/oracle/817
      WL_HOME/server/db/oracle/920
      
    2. Connect to the database as the leasing user.

    3. Run the leasing.ddl script in SQL*Plus:

      @Copy_Location/leasing.ddl;
      
    4. After the tool completes, enter the following at the SQL*Plus prompt:

      commit;
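As a quick sanity check after running leasing.ddl and committing, you can list the tables in the leasing schema. This query is illustrative and not part of the guide; in standard WebLogic Server releases, leasing.ddl creates the leasing table under the name ACTIVE, but verify the table name against the script shipped with your release.

```sql
-- Run in SQL*Plus while still connected as the leasing user
SELECT table_name FROM user_tables;
-- The list should include the table created by leasing.ddl (ACTIVE in
-- standard WebLogic Server releases).
```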
      

13.3 Creating a GridLink Data Source for Leasing Using the Oracle WebLogic Administration Console

Use Appendix D, "Creating a GridLink Data Source," to create a GridLink data source for the Leasing table using the Oracle WebLogic Administration Console.

For the Leasing table data source, use the following names:

13.4 Editing Node Manager's Properties File

In this section, you edit Node Manager's properties file. This must be done for the Node Managers on the nodes where the servers are running, SOAHOST1 and SOAHOST2.

The nodemanager.properties file is located in the following directory:

WL_HOME/common/nodemanager 

Verify in Node Manager's output (the shell where Node Manager is started) that these properties are being used; otherwise, problems may arise during migration. You should see something like this in Node Manager's output:

StateCheckInterval=500
bond0=*,NetMask=255.255.248.0
UseMACBroadcast=true

Note:

The following steps are not required if the server properties (start properties) have been properly set and Node Manager can start the servers remotely.

  1. If not done already, set the StartScriptEnabled property in the nodemanager.properties file to true. This is required to enable Node Manager to start the managed servers.

  2. Start Node Manager on SOAHOST1 and SOAHOST2 by running the startNodeManager.sh script, which is located in the following directory:

    /u02/private/oracle/config/nodemanager 
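Before starting Node Manager, it can help to confirm that nodemanager.properties carries the settings this section relies on. The following is a minimal sketch, not part of the guide: the function name check_nm_props is illustrative, and the commented call assumes WL_HOME is set in your environment and that the file is in the location given earlier.

```shell
#!/bin/sh
# Sketch: report whether nodemanager.properties defines the settings this
# section depends on. The helper name check_nm_props is illustrative only.
check_nm_props() {
  props="$1"   # full path to nodemanager.properties
  for key in StartScriptEnabled StateCheckInterval UseMACBroadcast; do
    if grep -q "^${key}=" "$props" 2>/dev/null; then
      # print the key together with its configured value
      echo "SET:$(grep "^${key}=" "$props")"
    else
      echo "UNSET:${key}"
    fi
  done
}

# Typical call, assuming WL_HOME is set in the environment:
# check_nm_props "$WL_HOME/common/nodemanager/nodemanager.properties"
```

A key reported as UNSET either needs to be added to the file or is inherited from a default; check the Node Manager output as described above to confirm the effective values.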
    

13.5 Setting Environment and Superuser Privileges for the wlsifconfig.sh Script

Set environment and superuser privileges for the wlsifconfig.sh script:

Ensure that your PATH environment variable includes the files listed in Table 13-1.

Table 13-1 Files Required for the PATH Environment Variable

File                  Located in this directory
wlsifconfig.sh        MSERVER_HOME/bin/server_migration
wlscontrol.sh         WL_HOME/common/bin
nodemanager.domains   WL_HOME/common/nodemanager


Grant sudo privilege to the WebLogic user (oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries. For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script. Also make sure the script is executable by the oracle user.

Note:

Ask the system administrator for the appropriate sudo and system rights to perform this step.

The following is an example of an entry inside /etc/sudoers that grants oracle sudo execution privilege for ifconfig and arping with no password restriction:

Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
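The PATH requirement from Table 13-1 can be checked with a small script before attempting a migration. This is a hedged sketch, not part of the guide: the function name check_migration_path is mine, and it walks PATH manually because nodemanager.domains is a data file rather than an executable, so tools like command -v would not find it.

```shell
#!/bin/sh
# Sketch: confirm that the files from Table 13-1 are resolvable through PATH.
# The helper name check_migration_path is illustrative only.
check_migration_path() {
  missing=0
  for f in wlsifconfig.sh wlscontrol.sh nodemanager.domains; do
    found=""
    # Walk PATH entries manually so that non-executable files such as
    # nodemanager.domains are also detected.
    oldifs=$IFS
    IFS=:
    for d in $PATH; do
      [ -r "$d/$f" ] && { found="$d/$f"; break; }
    done
    IFS=$oldifs
    if [ -n "$found" ]; then
      echo "FOUND:$f"
    else
      echo "MISSING:$f"
      missing=1
    fi
  done
  return $missing
}
```

If any file is reported MISSING, add its directory from Table 13-1 to PATH for the user that runs Node Manager.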

13.6 Configuring Server Migration Targets

In this section, you configure server migration targets for soa_cluster and osb_cluster. Configuring Cluster Migration sets the DataSourceForAutomaticMigration property to true.

To configure migration in a cluster:

  1. Log in to the Oracle WebLogic Server Administration Console at the URL listed in Section 8.18.2, "Validating Access through Oracle Traffic Director."

  2. In the Domain Structure window, expand Environment and select Clusters. The Summary of Clusters page is displayed.

  3. Click the cluster name for which you want to configure migration in the Name column of the table.

  4. Click the Migration tab.

  5. Click Lock and Edit.

  6. In the Available field, select the machines to which to allow migration, SOAHOST1 and SOAHOST2, and click the right arrow.

  7. Select the data source to be used for automatic migration. In this case, select the leasing data source.

  8. Click Save.

  9. Click Activate Changes.

  10. In the Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.

  11. Select the server for which you want to configure migration.

  12. Click the Migration tab.

  13. Select Automatic Server Migration Enabled and click Save.

  14. Click Activate Changes.

  15. In the Available field, select the machine to which to allow migration and click the right arrow. In this case, select SOAHOST1 and SOAHOST2.

  16. Restart the managed servers for which server migration has been configured as described in Section 8.5.3, "Starting the Administration Server on SOAHOST1."

Note:

If migration is to be allowed only to specific machines, do not specify candidates for the cluster; instead, specify candidates on a per-server basis.

13.7 Testing the Server Migration

In this section, you test the server migration. For example, to test migration for OSB servers:

To test from SOAHOST1:

  1. Stop the WLS_OSB1 Managed Server. To do this, run this command:

    kill -9 pid
    

    where pid specifies the process ID of the Managed Server. You can identify the pid in the node by running this command:

    ps -ef | grep WLS_OSB1
    
  2. Watch the Node Manager console. You should see a message indicating that WLS_OSB1's floating IP has been disabled.

  3. Wait for Node Manager to try a second restart of WLS_OSB1. It waits for a fence period of 30 seconds before trying this restart.

  4. Once Node Manager restarts the server, stop it again. Node Manager should now log a message indicating that the server will not be restarted again locally.
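The kill step above requires the Managed Server's process ID. The following is a small sketch of the ps/grep pipeline from step 1, wrapped in a helper; the function name server_pid is illustrative and not part of the guide, and the kill line is left commented so the sketch is safe to run.

```shell
#!/bin/sh
# Sketch: locate the PID of a Managed Server process by name so it can be
# stopped with kill -9 for the migration test.
server_pid() {
  # Match the pattern in the ps listing; "grep -v grep" drops the grep
  # process itself, and head keeps only the first match.
  ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | head -n 1
}

# Typical use for this test, run on the node hosting the server:
# pid=$(server_pid WLS_OSB1)
# [ -n "$pid" ] && kill -9 "$pid"
```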

To test from SOAHOST2:

  1. Watch the local Node Manager console. Thirty seconds after the last restart attempt of WLS_OSB1 on SOAHOST1, Node Manager on SOAHOST2 should indicate that the floating IP for WLS_OSB1 is being brought up and that the server is being restarted on this node.

  2. Access the OSB Console using the Virtual Host Name, for example:

    soahost1vhn1.mycompany.com/soa-infra/
    

Follow the previous steps to test server migration for the WLS_OSB2, WLS_SOA1, and WLS_SOA2 Managed Servers.

Table 13-2 shows the Managed Servers and the hosts they migrate to in case of a failure.

Table 13-2 Managed Server Migration

Managed Server   Migrated From   Migrated To
WLS_OSB1         SOAHOST1        SOAHOST2
WLS_OSB2         SOAHOST2        SOAHOST1
WLS_SOA1         SOAHOST1        SOAHOST2
WLS_SOA2         SOAHOST2        SOAHOST1


Verification From the Administration Console

Migration can also be verified in the Administration Console:

  1. Log in to the Administration Console.

  2. Click the domain name in the left pane of the console.

  3. Click the Monitoring tab and then the Migration sub tab.

    The Migration Status table provides information on the status of the migration.

Note:

After a server is migrated, to fail it back to its original node/machine, stop the Managed Server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the Managed Server on the machine to which it was originally assigned.

13.8 Backing Up the Server Migration Configuration

Back up the server migration configuration. For more information, see Section 14.8, "Backing Up the Oracle SOA Enterprise Deployment."