Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter Content 11g Release 1 (11.1.1) Part Number E15483-05
This chapter describes how to configure server migration in accordance with the EDG recommendations. It contains the following sections:
Section 14.2, "Setting Up a User and Tablespace for the Server Migration Leasing Table"
Section 14.3, "Creating a Multi-Data Source Using the Oracle WebLogic Server Administration Console"
Section 14.5, "Setting Environment and Superuser Privileges for the wlsifconfig.sh Script"
You can configure server migration for Oracle WebLogic Server managed servers. With server migration configured, should failure occur, each managed server can restart on a different host machine. The managed servers listen on specific floating IPs that are failed over by Oracle WebLogic Server.
The procedures described in this chapter must be performed for various components of the enterprise deployment topology outlined in Section 2.1.1, "Reference Topology Documented in the Guide." Variables are used in this chapter to distinguish between component-specific items:
WLS_SERVER1 and WLS_SERVER2 refer to the Oracle WebLogic Server managed servers for the enterprise deployment component.
HOST1 and HOST2 refer to the host machines for the enterprise deployment component.
CLUSTER refers to the cluster associated with the enterprise deployment component.
The values to be used for these variables are provided in the component-specific chapters in this EDG.
In this enterprise topology, you must configure server migration for the WLS_SERVER1 and WLS_SERVER2 managed servers. The WLS_SERVER1 managed server is configured to restart on HOST2 should a failure occur. The WLS_SERVER2 managed server is configured to restart on HOST1 should a failure occur. For this configuration, the WLS_SERVER1 and WLS_SERVER2 servers listen on specific floating IP addresses that are failed over by WebLogic Server migration.
Table 14-1 describes the steps for configuring server migration for the Oracle WebLogic Server managed servers.
Table 14-1 Steps for Configuring Server Migration
Step | Description | More Information |
---|---|---|
Set up a user, tablespace, and migration table | Create a leasing tablespace, user, and table. | Section 14.2, "Setting Up a User and Tablespace for the Server Migration Leasing Table" |
Create a multi-data source for the leasing table | Create a data source for each of the Oracle RAC database instances and the global leasing multi-data source. | Section 14.3, "Creating a Multi-Data Source Using the Oracle WebLogic Server Administration Console" |
Specify Node Manager properties values for migration | Edit the property values in the nodemanager.properties file for each host, and verify the values. | |
Set the environment and specify superuser privileges for the oracle user | Add files to the PATH environment variable, and grant sudo configuration for the wlsifconfig.sh script. | Section 14.5, "Setting Environment and Superuser Privileges for the wlsifconfig.sh Script" |
Configure cluster migration | Assign available nodes as migration targets, and specify candidate machines for each server. | |
Test server migration | Verify server migration between hosts from Node Manager or the Administration Console. | |
The first step is to set up a user and tablespace for the server migration leasing table.
Note:
If other servers in the same domain have already been configured with server migration, the same tablespace and data sources can be used. In that case, the data sources and multi-data source for database leasing do not need to be re-created, but they will have to be retargeted to the cluster being configured with server migration.
Create a tablespace called leasing. For example, log in to SQL*Plus as the sysdba user and run the following command:
create tablespace leasing logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local;
Note:
The database file location will vary depending on the type of storage and data file location used for the database.
Create a user named leasing and assign it the leasing tablespace:
create user leasing identified by welcome1;
grant create table to leasing;
grant create session to leasing;
alter user leasing default tablespace leasing;
alter user leasing quota unlimited on leasing;
Create the leasing table using the leasing.ddl script:
Copy the leasing.ddl file located in either the WL_HOME/server/db/oracle/817 or WL_HOME/server/db/oracle/920 directory to your database node.
Connect to the database as the leasing user.
Run the leasing.ddl script in SQL*Plus:
@Copy_Location/leasing.ddl;
The second step is to create a multi-data source for the leasing table from the Oracle WebLogic Server Administration Console. As part of setting up the multi-data source, you create a data source for each of the Oracle RAC database instances; the considerations below apply both to these individual data sources and to the global leasing multi-data source.
Note the following considerations when creating the data sources:
Make sure that this is a non-XA data source.
The names of the multi-data sources are in the format of MultiDS-rac0, MultiDS-rac1, and so on.
Use Oracle's Driver (Thin) Version 9.0.1, 9.2.0, 10, 11.
The data sources do not require support for global transactions. Therefore, do not use any type of distributed transaction emulation or participation algorithm for the data source: do not choose the Supports Global Transactions option, or its Logging Last Resource, Emulate Two-Phase Commit, or One-Phase Commit options. Also, specify a service name for your database.
Target these data sources to the cluster assigned to the enterprise deployment component (CLUSTER; see the component-specific chapters in this guide).
Make sure the initial connection pool capacity of the data sources is set to 0 (zero). To do this, select Services and then Data Sources. In the Summary of JDBC Data Sources screen, click the data source name in the list, then open the Connection Pool tab, and enter 0 (zero) in the Initial Capacity field. Click Save and Activate Changes.
Creating a Multi-Data Source
To create a multi-data source:
Log in to the Oracle WebLogic Server Administration Console.
In the Domain Structure area in the left, expand the Services node and then select the Data Sources node.
Click Lock & Edit.
On the Summary of JDBC Data Source page, click New and choose Multi Data Source from the list.
On the Create a New JDBC Data Source page, enter leasing as the name and jdbc/leasing as the JNDI name.
Select Failover as the algorithm (which is the default), and click Next.
Select the cluster assigned to the enterprise deployment component as the target, and click Next. (See the CLUSTER variable in the component-specific chapters in this guide.)
Select non-XA driver (which is the default), and click Next.
Click Create New Data Source.
Enter leasing-rac0 as the name and jdbc/leasing-rac0 as the JNDI name. Enter oracle as the database type. Click Next when you are ready.
Note:
When creating the multi-data sources for the leasing table, enter names in the format of MultiDS-rac0, MultiDS-rac1, and so on.
For the driver type, select Oracle driver (Thin) for RAC Service-Instance connections, Versions: 10 and later, and click Next.
Deselect Supports Global Transactions, and click Next.
Enter the service name, database name, host name, host port, database user name, and password for your leasing schema. Click Next when you are done.
Click Test Configuration and verify that the connection works. Click Next when you are done.
Target the data source to the cluster assigned to the enterprise deployment component (CLUSTER).
Click Create a New JDBC Multi Data Source.
Repeat steps 10 through 14 for the leasing-rac1 data source.
Target the second instance of your Oracle RAC database to the cluster assigned to the enterprise deployment component (CLUSTER).
Select the data sources that you just created (leasing-rac0 and leasing-rac1), and move them from the Available Data Sources box to the Chosen box.
Click Finish and Activate Changes.
Note:
Make sure the initial connection pool capacity of the data sources is set to 0 (zero). To do this, select Services and then Data Sources. In the Summary of JDBC Data Sources screen, click the data source name in the list, then open the Connection Pool tab, and enter 0 (zero) in the Initial Capacity field. Click Save and Activate Changes.
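When you enter the connection details for each leasing data source, the console builds a thin-driver URL that connects through a database service. A sketch of the resulting service-style URL follows; the host, port, and service name shown are hypothetical examples, not values from this guide:

```
jdbc:oracle:thin:@//dbhost1.example.com:1521/leasing.example.com
```

A service name (rather than a SID) is specified here, consistent with the considerations listed earlier in this section.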
The third step is to edit Node Manager's properties file. This must be done for the Node Managers on both nodes where server migration is being configured:
Interface=eth0
NetMask=255.255.255.0
UseMACBroadcast=true
Interface: This property specifies the interface name for the floating IP (for example, eth0). Do not specify the sub-interface, such as eth0:1 or eth0:2; the interface is to be used without the :0 or :1 suffix. Node Manager's scripts traverse the different :X-enabled IPs to determine which to add or remove. For example, the valid values in Linux environments are eth0, eth1, eth2, eth3, ethn, depending on the number of interfaces configured.
NetMask: This property specifies the net mask for the interface for the floating IP. The net mask should be the same as the net mask on the interface; 255.255.255.0 is used as an example in this document.
UseMACBroadcast: This property specifies whether or not to use a node's MAC address when sending ARP packets, that is, whether or not to use the -b flag in the arping command.
Verify in Node Manager's output (shell where Node Manager is started) that these properties are being used, or problems may arise during migration. You should see something like this in Node Manager's output:
...
StateCheckInterval=500
Interface=eth0
NetMask=255.255.255.0
...
Note:
The steps below are not required if the server properties (start properties) have been properly set and Node Manager can start the servers remotely.
Set the following property in the nodemanager.properties file:
StartScriptEnabled: Set this property to true. This is required for Node Manager to start the managed servers using start scripts.
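Taken together, a minimal nodemanager.properties fragment for this configuration might look like the following; eth0 and 255.255.255.0 are the example values used in this chapter, so substitute your own interface and net mask:

```
Interface=eth0
NetMask=255.255.255.0
UseMACBroadcast=true
StartScriptEnabled=true
```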
Start Node Manager on HOST1 and HOST2 by running the startNodeManager.sh script, which is located in the WL_HOME/server/bin directory.
Note:
When you run Node Manager from a shared storage installation, multiple nodes are started using the same nodemanager.properties file. However, each node may require different NetMask or Interface properties. In this case, specify individual parameters on a per-node basis using environment variables. For example, to use a different interface (eth3) in HOSTn, use the Interface environment variable as follows:
export JAVA_OPTIONS=-DInterface=eth3
and start Node Manager after the variable has been set in the shell.
The fourth step is to set environment and superuser privileges for the wlsifconfig.sh script (for the oracle user):
Ensure that your PATH environment variable includes the files listed in Table 14-2.
Grant sudo configuration for the wlsifconfig.sh script.
Configure sudo to work without a password prompt.
For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script. For example, perform these steps to set the environment and superuser privileges for the wlsifconfig.sh script:
Grant sudo privilege to the WebLogic user (oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.
Make sure the script is executable by the WebLogic user (oracle). The following is an example of an entry inside /etc/sudoers granting sudo execution privilege for oracle over ifconfig and arping:
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
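As a quick sanity check of the entry's format, a sketch like the following can be used; it only parses the example line above and is not a substitute for validating the file with visudo -c:

```shell
# Verify the example sudoers line names the oracle user, NOPASSWD, and both binaries
entry='oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping'
echo "$entry" | grep -q '^oracle ALL=NOPASSWD:' &&
  echo "$entry" | grep -q '/sbin/ifconfig' &&
  echo "$entry" | grep -q '/sbin/arping' &&
  echo "entry format ok"
```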
Note:
Ask the system administrator for the sudo and system rights as appropriate for this step.
The fifth step is to configure server migration targets. You first assign all the available nodes for the cluster's members and then specify candidate machines (in order of preference) for each server that is configured with server migration. Follow these steps to configure cluster migration:
Log in to the Oracle WebLogic Server Administration Console (http://Host:Admin_Port/console). Admin_Port is 7001 by default.
In the Domain Structure window, expand Environment and select Clusters.
On the Summary of Clusters page, click the cluster for which you want to configure migration (CLUSTER) in the Name column of the table.
Open the Migration tab.
Click Lock & Edit.
In the Available field, select the machines to which to allow migration and click the right arrow. In this case, select HOST1 and HOST2.
Select the data source to be used for automatic migration. In this case, select the leasing data source.
Click Save.
Click Activate Changes.
Click Lock & Edit.
Set the candidate machines for server migration. You must perform this task for all of the managed servers as follows:
In the Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.
Select the server for which you want to configure migration.
Open the Migration tab.
In the Available field, located in the Migration Configuration section, select the machines to which to allow migration and click the right arrow. For WLS_SERVER1, select HOST2. For WLS_SERVER2, select HOST1.
Select Automatic Server Migration Enabled. This enables Node Manager to start a failed server on the target node automatically.
Click Save.
Click Activate Changes.
Restart the administration server, node managers, and the servers for which server migration has been configured.
The sixth and final step is to test the server migration. To verify that server migration is working properly:
From HOST1:
Stop the WLS_SERVER1 managed server. To do this, run this command on HOST1:
kill -9 pid
where pid specifies the process ID of the managed server. You can identify the pid in the node by running this command:
ps -ef | grep WLS_SERVER1
Watch the Node Manager console. You should see a message indicating that WLS_SERVER1's floating IP has been disabled.
Wait for Node Manager to try a second restart of WLS_SERVER1. It waits for a fence period of 30 seconds before trying this restart.
Once Node Manager restarts the server, stop it again. Node Manager should now log a message indicating that the server will not be restarted again locally.
From HOST2:
Watch the local Node Manager console. Thirty seconds after the last restart attempt of WLS_SERVER1 on Node 1, Node Manager on Node 2 should report that the floating IP for WLS_SERVER1 is being brought up and that the server is being restarted on this node.
Access the soa-infra console at the same IP address.
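The PID lookup in the steps for HOST1 can be sketched in shell. The ps output line below is a hypothetical sample standing in for real ps -ef output on a live host:

```shell
# Extract the managed server's PID (second field of ps -ef output) for WLS_SERVER1
sample='oracle 12345 1 2 10:00 ? 00:01:23 /u01/jdk/bin/java -Dweblogic.Name=WLS_SERVER1'
pid=$(echo "$sample" | grep WLS_SERVER1 | grep -v grep | awk '{print $2}')
echo "$pid"   # prints 12345
# On a live host you would then run: kill -9 "$pid"
```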
Verification from the Administration Console
Migration can also be verified in the Administration Console:
Log in to the Administration Console.
Click Domain in the left pane of the console.
Open the Monitoring tab and then the Migration subtab.
The Migration Status table provides information on the status of the migration (Figure 14-1).
Figure 14-1 Migration Status Screen in the Administration Console
Note:
After a server is migrated, to fail it back to its original node or machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager will start the managed server on the machine to which it was originally assigned.