This chapter describes the procedures for configuring server migration for an enterprise deployment.
You can configure server migration for Oracle WebLogic Server Managed Servers. With server migration configured, a Managed Server can restart on a different host machine should a failure occur. The Managed Servers listen on specific floating IP addresses that are failed over by WebLogic Server.
Refer to each component's chapter for details on whether it uses or requires server migration.
The procedures described in this chapter must be performed for various components of the enterprise deployment topology outlined in Section 2.1.1, "Reference Topology Documented in the Guide." Variables are used in this chapter to distinguish between component-specific items:
WLS_SERVER1 and WLS_SERVER2 refer to the WebLogic Server Managed Servers for the enterprise deployment component.
HOST1 and HOST2 refer to the host machines for the enterprise deployment component.
CLUSTER refers to the cluster associated with the enterprise deployment component.
The values to be used for these variables are provided in the component-specific chapters in this EDG.
In this enterprise topology, you must configure server migration for the WLS_SERVER1 and WLS_SERVER2 Managed Servers. The WLS_SERVER1 Managed Server is configured to restart on HOST2 should a failure occur. The WLS_SERVER2 Managed Server is configured to restart on HOST1 should a failure occur. For this configuration, the WLS_SERVER1 and WLS_SERVER2 servers listen on specific floating IP addresses that are failed over by WebLogic Server migration.
Table 14-1 describes the steps for configuring server migration for the WebLogic Server Managed Servers.
Table 14-1 Steps for Configuring Server Migration

1. Set up a user, tablespace, and migration table: Set up a user and tablespace for the server migration leasing table using the create tablespace leasing command.

2. Create a GridLink data source for the leasing table: Create a GridLink data source for the leasing table from the Oracle WebLogic Server Administration Console. (With Oracle Database 10g, create a data source for each of the Oracle RAC database instances and a global leasing multi data source instead; see Appendix A, "Using Multi Data Sources with Oracle RAC.")

3. Specify Node Manager properties values for migration: Edit the property values in the nodemanager.properties file for each host, and verify the values.

4. Set the environment and specify superuser privileges for the oracle user: Add files to the PATH environment variable, and grant sudo privilege for the wlsifconfig.sh script.

5. Configure cluster migration: Assign available nodes as migration targets, and specify candidate machines for each server.

6. Test server migration: Verify server migration between hosts from Node Manager or the Administration Console.
The first step is to set up a user and tablespace for the server migration leasing table:
If other servers in the same domain have already been configured for server migration, you can use the same tablespace and data sources. In that case, the data sources and GridLink data source for the leasing database table do not need to be re-created, but they will have to be retargeted to the cluster being configured with server migration.
Create a tablespace called leasing. For example, log in to SQL*Plus as the sysdba user and run the following command:
SQL> create tablespace leasing logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local;
The database file location will vary depending on the type of storage and data file location used for the database.
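If you are unsure where the database keeps its data files, you can list the locations of the existing ones; this query is illustrative:

SQL> select name from v$datafile;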
Create a user named leasing, and assign to it the leasing tablespace:

SQL> create user leasing identified by password;
SQL> grant create table to leasing;
SQL> grant create session to leasing;
SQL> alter user leasing default tablespace leasing;
SQL> alter user leasing quota unlimited on leasing;
Copy the leasing.ddl file, located in either the WL_HOME/server/db/oracle/817 or the WL_HOME/server/db/oracle/920 directory, to your database node.
Connect to the database as the leasing user, and run the leasing.ddl script in SQL*Plus:
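For example, where copy_location is a placeholder for the directory to which you copied the file:

SQL> @copy_location/leasing.ddl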
The second step is to create a GridLink data source for the leasing table from the Oracle WebLogic Server Administration Console. To create the GridLink data source:
Log in to the WebLogic Server Administration Console.
If you have not already done so, in the Change Center, click Lock & Edit.
In the Domain Structure tree, expand Services, then select Data Sources.
On the Summary of Data Sources page, click New, select GridLink Data Source, and do the following:
Enter a logical name for the data source in the Name field. For example, Leasing.
Enter a name for JNDI. For example, jdbc/leasing.
For the Database Driver, select Oracle's Driver (Thin) for GridLink Connections; Versions: 11 and later.
In the Transaction Options page, clear Supports Global Transactions, and click Next.
In the GridLink Data Source Connection Properties Options screen, select Enter individual listener information, and click Next.
Enter the following connection properties:
Service Name: Enter the service name of the database in lowercase characters. For a GridLink data source, you must enter the Oracle RAC service name. For example: wccedg.mycompany.com
Host Name and Port: Enter the SCAN address and port for the RAC database being used. You can identify this address by querying the remote_listener parameter in the database:
SQL> show parameter remote_listener;

NAME             TYPE    VALUE
---------------- ------- ---------------------
remote_listener  string  db-scan.mycompany.com
For Oracle Database 11g Release 1 (11.1), use the virtual IP and port of each database instance listener; for example:
custdbhost1-vip.mycompany.com (port 1521)
For Oracle Database 10g, use multi data sources to connect to an Oracle RAC database. For information about configuring multi data sources, see Appendix A, "Using Multi Data Sources with Oracle RAC."
Port: Enter the port on which the database server listens for connection requests.
Database User Name: Enter leasing.
Password: Enter the password for the leasing user.
Confirm Password: Enter the password again and click Next.
On the Test GridLink Database Connection page, review the connection parameters and click Test All Listeners. Here is an example of a successful connection notification:
Connection test for jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan.mycompany.com) (PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=wccedg.mycompany.com))) succeeded.
In the ONS Client Configuration page, do the following:
Select FAN Enabled to subscribe to and process Oracle FAN events.
Enter the SCAN address for the RAC database and the ONS remote port as reported by the database (see the following example), and click Add:
[orcl@db-scan1 ~]$ srvctl config nodeapps -s
ONS exists: Local port 6100, remote port 6200, EM port 2016
For Oracle Database 11g Release 1 (11.1), use the host name and port of each database's ONS service; for example:
custdbhost1.mycompany.com (port 6200)
On the Test ONS Client Configuration page, review the connection parameters, and click Test All ONS Nodes.
Here is an example of a successful connection notification:
Connection test for db-scan.mycompany.com:6200 succeeded.
On the Select Targets page, select the clusters for which you are configuring server migration, IMG_Cluster and SOA_Cluster, with All Servers in the cluster selected.
Click Activate Changes.
The third step is to edit Node Manager's properties file. This must be done for the Node Managers on both nodes where server migration is being configured:
Interface=eth0
NetMask=255.255.255.0
UseMACBroadcast=true
Interface: This property specifies the interface name for the floating IP (for example, eth0).

Do not specify the sub-interface, such as eth0:1 or eth0:2. This interface is to be used without the :0 or :1 suffix. Node Manager's scripts traverse the different :X-enabled IPs to determine which to add or remove. For example, the valid values in Linux environments are eth0, eth1, eth2, eth3, and ethn, depending on the number of interfaces configured.
NetMask: This property specifies the net mask for the interface for the floating IP. The net mask should be the same as the net mask on the interface; 255.255.255.0 is used as an example in this document.
UseMACBroadcast: This property specifies whether or not to use a node's MAC address when sending ARP packets, that is, whether or not to use the -b flag in the arping command.
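For reference, a hypothetical broadcast invocation of the kind Node Manager's script issues when bringing up a floating IP; the interface and address shown are illustrative:

/sbin/arping -b -A -c 3 -I eth0 10.10.10.101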
Verify in Node Manager's output (shell where Node Manager is started) that these properties are being used, or problems may arise during migration. You should see something like this in Node Manager's output:
...
StateCheckInterval=500
Interface=eth0
NetMask=255.255.255.0
...
The following step is not required if the server properties (start properties) have been properly set and Node Manager can start the servers remotely.
Set the following property in the nodemanager.properties file:

StartScriptEnabled: Set this property to true. This is required for Node Manager to start the Managed Servers using start scripts.
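In nodemanager.properties, the setting is a single line:

StartScriptEnabled=true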
When you run Node Manager from a shared storage installation, multiple nodes are started using the same nodemanager.properties file. However, each node may require different NetMask or Interface properties. In this case, specify individual parameters on a per-node basis using environment variables. For example, to use a different interface (eth3) on HOSTn, set the Interface environment variable as shown in the sketch that follows.
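A minimal sketch, assuming the Node Manager start scripts read the interface override from a JVM_Interface environment variable (verify the variable name against your Node Manager scripts):

export JVM_Interface=eth3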
Start Node Manager after the variable has been set in the shell.
The fourth step is to set the environment and superuser privileges for the wlsifconfig.sh script (for the oracle user):
Ensure that your PATH environment variable includes the files listed in Table 14-2.
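For example, a sketch of the PATH setup for the oracle user; the directories shown for wlsifconfig.sh and wlscontrol.sh are illustrative assumptions and should be confirmed against Table 14-2:

export PATH=$PATH:$DOMAIN_HOME/bin/server_migration:$WL_HOME/common/bin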
Grant sudo privilege for the wlsifconfig.sh script, and configure sudo to work without a password prompt.

For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script. For example, perform these steps to set the superuser privileges for the wlsifconfig.sh script:
Grant sudo privilege to the WebLogic Server user (oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.
Make sure the script is executable by the WebLogic Server user (oracle). The following is an example of an entry inside /etc/sudoers granting sudo execution privilege for oracle over ifconfig and arping:
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
Ask the system administrator for the sudo and system privileges as appropriate for this step.
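To confirm that the sudoers entry took effect, the oracle user can list the commands sudo will run without a password; this check is illustrative, and the output format varies by sudo version:

[oracle@HOST1 ~]$ sudo -l
User oracle may run the following commands on this host:
    (root) NOPASSWD: /sbin/ifconfig, /sbin/arping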
Start Node Manager on HOST1 and HOST2 by running the startNodeManager.sh script, which is located in the WL_HOME/server/bin directory.
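For example, assuming the WL_HOME environment variable points to the WebLogic Server home on each host:

cd $WL_HOME/server/bin
./startNodeManager.sh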
The fifth step is to configure server migration targets. You first assign all the available nodes for the cluster's members and then specify candidate machines (in order of preference) for each server that is configured with server migration. Follow these steps to configure cluster migration:
Log in to the WebLogic Server Administration Console. (The Administration Server listen port is 7001 by default.)
In the Domain Structure tree on the left, expand Environment and select Clusters.
On the Summary of Clusters page, click the cluster for which you want to configure migration (CLUSTER) in the Name column of the table.
For the procedures in this document, configure server migration for the Oracle SOA Suite and Imaging clusters.
Open the Migration tab.
Click Lock & Edit.
In the Available field, select the machines to which to allow migration and click the right arrow. In this case, select HOST1 and HOST2.
Select the data source to be used for automatic migration. In this case, select the leasing data source.
Click Activate Changes.
Click Lock & Edit.
Set the candidate machines for server migration. You must perform this task for all of the Managed Servers as follows:
In the Domain Structure tree on the left of the WebLogic Server Administration Console, expand Environment and select Servers.
Select the server for which you want to configure migration.
For the procedures in this document, configure server migration for the Oracle SOA Suite and Imaging servers.
Open the Migration tab.
In the Available field, located in the Migration Configuration section, select the machines to which to allow migration and click the right arrow. For WLS_SERVER1, select HOST2; for WLS_SERVER2, select HOST1.
Select Automatic Server Migration Enabled. This enables Node Manager to start a failed server on the target node automatically.
Click Activate Changes.
Restart the Administration Server, the Node Managers, and the servers for which server migration has been configured.
The sixth and final step is to test the server migration. To verify that server migration is working properly:
Stop the WLS_SERVER1 Managed Server. To do this, run this command on HOST1:

kill -9 pid

where pid specifies the process ID of the Managed Server. You can identify the process ID in the node by running this command:

ps -ef | grep WLS_SERVER1
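The two commands can be combined into a single, hypothetical one-liner; make sure the grep pattern matches only the intended Managed Server process before using it:

kill -9 $(ps -ef | grep WLS_SERVER1 | grep -v grep | awk '{print $2}')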
Watch the Node Manager console. You should see a message indicating that WLS_SERVER1's floating IP has been disabled.
Wait for Node Manager to try a second restart of WLS_SERVER1. It waits for a fence period of 30 seconds before trying this restart.
Once Node Manager restarts the server, stop it again. Node Manager should now log a message indicating that the server will not be restarted again locally.
Watch the local Node Manager console. Thirty seconds after the last attempt to restart WLS_SERVER1 on node 1, Node Manager on node 2 should report that the floating IP for WLS_SERVER1 is being brought up and that the server is being restarted on that node.
Access your server's console at the same IP address.
Migration can also be verified in the Administration Console:
Log in to the Administration Console.
Click Domain on the left.
Click the Monitoring tab and then the Migration subtab.
The Migration Status table provides information on the status of the migration (Figure 14-1).
After a server is migrated, to fail it back to its original node or machine, stop the Managed Server from the WebLogic Server Administration Console and then start it again. The appropriate Node Manager will start the Managed Server on the machine to which it was originally assigned.