Oracle® Fusion Applications Enterprise Deployment Guide for Financials
11g Release 6 (11.1.6)

Part Number E27364-09

18 Setting Up Server Migration for an Enterprise Deployment

This chapter describes how to configure server migration according to enterprise deployment recommendations.

This chapter includes the following topics:

  • Section 18.1, "Prerequisite"

  • Section 18.2, "Migrating Oracle Fusion Applications Domains"

18.1 Prerequisite

Before migrating Oracle Fusion Applications domains, ensure that you have completed the steps in Section 16.1, "Enabling Virtual IPs on FINHOST1 and FINHOST2," Section 16.2, "Setting the Listen Address for soa_server1," and Section 16.3, "Setting the Listen Address for soa_server2" for all Managed Servers that are to be migrated.

Note:

This prerequisite does not apply to the Oracle Business Intelligence domain.

18.2 Migrating Oracle Fusion Applications Domains

The procedures in this section apply to the domains and applications described in the component-specific chapters of this guide.

18.2.1 About Configuring Server Migration

The procedures described in this chapter must be performed for various components of the enterprise deployment topology outlined in Section 2.1, "Overview of Reference Enterprise Deployment Topologies." Variables are used in this chapter to distinguish between component-specific items:

  • WLS_SERVER1 and WLS_SERVER2 refer to the managed WebLogic servers for the enterprise deployment component.

  • FINHOST1 and FINHOST2 refer to the host machines for the enterprise deployment component.

  • CLUSTER refers to the cluster associated with the enterprise deployment component.

The values to be used for these variables are provided in the component-specific chapters in this guide.

In this enterprise topology, you must configure server migration for the WLS_SERVER1 and WLS_SERVER2 Managed Servers. The WLS_SERVER1 Managed Server is configured to restart on FINHOST2 should a failure occur. The WLS_SERVER2 Managed Server is configured to restart on FINHOST1 should a failure occur. For this configuration, the WLS_SERVER1 and WLS_SERVER2 servers listen on specific floating IP addresses that are failed over by WebLogic Server migration. Configuring server migration for the WLS Managed Servers consists of the following steps:

  • Setting up a user and tablespace for the server migration leasing table (Section 18.2.2)

  • Creating a multi-data source using the Oracle WebLogic Server Administration Console (Section 18.2.3)

  • Editing Node Manager's properties file (Section 18.2.4)

  • Setting environment and superuser privileges for the wlsifconfig.sh script (Section 18.2.5)

  • Configuring server migration targets (Section 18.2.6)

  • Testing the server migration (Section 18.2.7)

18.2.2 Setting Up a User and Tablespace for the Server Migration Leasing Table

The first step is to set up a user and tablespace for the server migration leasing table.

Note:

If other servers in the same domain have already been configured with server migration, the same tablespace and data sources can be used. In that case, the data sources and multi-data source for database leasing do not need to be re-created, but they will have to be retargeted to the cluster being configured with server migration.

To set up a user and tablespace:

  1. Create a tablespace called 'leasing'. For example, log on to SQL*Plus as the sysdba user and run the following command:

    SQL> create tablespace leasing logging datafile 
    'DB_HOME/oradata/orcl/leasing.dbf'  
    size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named 'leasing' and assign to it the leasing tablespace:

    SQL> create user leasing identified by password;
    SQL> grant create table to leasing;
    SQL> grant create session to leasing;
    SQL> alter user leasing default tablespace leasing;
    SQL> alter user leasing quota unlimited on LEASING;
    
  3. Create the leasing table using the leasing.ddl script:

    1. Copy the leasing.ddl file located in either the ORACLE_BASE/products/fusionapps/wlserver_10.3/server/db/oracle/817 or the ORACLE_BASE/products/fusionapps/wlserver_10.3/server/db/oracle/920 directory to your database node.

    2. Connect to the database as the leasing user.

    3. Run the leasing.ddl script in SQL*Plus:

      SQL> @Copy_Location/leasing.ddl;
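
To confirm that the script ran successfully, you can list the tables owned by the leasing user. The following is a minimal sketch, assuming the shipped leasing.ddl creates the standard ACTIVE leasing table; the connect string is a placeholder for your own database host, port, and service name:

# Connect as the leasing user; dbhost, 1521, and orcl are placeholders.
sqlplus leasing/password@dbhost:1521/orcl <<'EOF'
-- The leasing table created by leasing.ddl (named ACTIVE) should be listed.
SELECT table_name FROM user_tables;
EXIT;
EOF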
      

18.2.3 Creating a Multi-Data Source Using the Oracle WebLogic Server Administration Console

The second step is to create a multi-data source for the leasing table from the Oracle WebLogic Server Administration Console. During this process you create a data source for each of the Oracle RAC database instances and then add those data sources to the global leasing multi-data source.

Please note the following considerations when creating a data source:

  • Make sure that this is a non-XA data source.

  • The names of the multi-data sources are in the format of <MultiDS>-rac0, <MultiDS>-rac1, and so on.

  • Use Oracle's Driver (Thin) Version 9.0.1, 9.2.0, 10, 11.

  • Use Supports Global Transactions, One-Phase Commit, and specify a service name for your database.

  • Target these data sources to the cluster assigned to the enterprise deployment component (CLUSTER; see the component-specific chapters in this guide).

Creating a Multi-Data Source

To create a multi-data source:

  1. In the Domain Structure window in the Oracle WebLogic Server Administration Console, click the Data Sources link.

  2. Click Lock & Edit.

  3. Select Multi Data Source from the New dropdown menu.

    The Create a New JDBC Multi Data Source page is displayed.

  4. Enter leasing as the name.

  5. Enter jdbc/leasing as the JNDI name.

  6. Select Failover as the algorithm (the default).

  7. Click Next.

  8. Select the cluster to be migrated. In this case, select the SOA cluster.

  9. Click Next.

  10. Select non-XA driver (the default).

  11. Click Next.

  12. Click Create a New Data Source.

  13. Enter leasing-rac0 as the name. Enter jdbc/leasing-rac0 as the JNDI name. Enter oracle as the database type. For the driver type, select Oracle Driver (Thin) for Oracle RAC Service-Instance connections, Versions 10 and later.

    Note:

    When creating the multi-data sources for the leasing table, enter names in the format of <MultiDS>-rac0, <MultiDS>-rac1, and so on.

  14. Click Next.

  15. Deselect Supports Global Transactions.

  16. Click Next.

  17. Enter the following for your leasing schema:

    • Service Name: The service name of the database.

    • Database Name: The Instance Name for the first instance of the Oracle RAC database.

    • Host Name: The name of the node that is running the database. For the Oracle RAC database, specify the first instance's VIP name or the node name as the host name.

    • Port: The port number for the database (1521).

    • Database User Name: Enter leasing.

    • Password: The leasing password.

  18. Click Next.

  19. Click Test Configuration and verify that the connection works. (A command-line connectivity check is sketched after this procedure.)

  20. Click Next.

  21. Target the data source to the cluster assigned to the enterprise deployment component (CLUSTER).

  22. Click Finish.

  23. Click Create a New Data Source for the second instance of your Oracle RAC database, repeating the preceding steps for that instance and targeting it to the cluster assigned to the enterprise deployment component (CLUSTER).

  24. Add leasing-rac0 and leasing-rac1 to your multi-data source.

  25. Make sure the initial connection-pool capacity of the data sources is set to 0 (zero). In the Data Sources screen, do the following:

    1. Select Services, then select Data Sources.

    2. Click the name of the data source, then click the Connection Pool tab.

    3. Enter 0 (zero) in the Initial Capacity field.

  26. Click Save, then click Activate Changes.
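
As a command-line counterpart to the Test Configuration check in Step 19, you can verify connectivity to each Oracle RAC instance from the hosts that run the Managed Servers. The following is a sketch with placeholder host names (finrac-vip1, finrac-vip2) and a placeholder service name (myservice); substitute your own values:

# Attempt a logon to each RAC instance as the leasing user (-L tries once, then exits).
for db in finrac-vip1 finrac-vip2; do
  echo "exit" | sqlplus -L leasing/password@${db}:1521/myservice > /dev/null \
    && echo "${db}: connection OK" \
    || echo "${db}: connection FAILED"
done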

18.2.4 Editing Node Manager's Properties File

The third step is to edit Node Manager's properties file, which is located at:

ORACLE_BASE/config/nodemanager/FINHOST1

ORACLE_BASE/config/nodemanager/FINHOST2

This must be done for the Node Managers on both nodes where server migration is being configured. For example:

Interface=eth0
NetMask=255.255.255.0
UseMACBroadcast=true
  • Interface: This property specifies the interface name for the floating IP (for example, eth0).

    Do not specify a sub-interface, such as eth0:1 or eth0:2; specify the base interface without the :X suffix. Node Manager's scripts traverse the :X-enabled sub-interfaces to determine which IP addresses to add or remove. For example, the valid values in Linux environments are eth0, eth1, eth2, and so on, depending on the number of interfaces configured.

  • NetMask: This property specifies the net mask for the interface for the floating IP. The net mask should be the same as the net mask on the interface; 255.255.255.0 is used as an example in this document.

  • UseMACBroadcast: This property specifies whether or not to use a node's MAC address when sending ARP packets, that is, whether or not to use the -b flag in the arping command.

Verify in Node Manager's output (the shell where Node Manager is started) that these properties are being used; otherwise, problems may arise during migration. You should see something like this in Node Manager's output:

...
StateCheckInterval=500
Interface=eth0
NetMask=255.255.255.0
...

Note:

The steps below are not required if the server properties (start properties) have been properly set and Node Manager can start the servers remotely.

  1. Set the following property in the nodemanager.properties file:

    • StartScriptEnabled: Set this property to 'true'. This is required for Node Manager to start the Managed Servers using start scripts.

  2. Restart Node Manager on FINHOST1 and FINHOST2 by running the startNodeManagerWrapper.sh script, which is located in the ORACLE_BASE/config/nodemanager/FINHOST1 and ORACLE_BASE/config/nodemanager/FINHOST2 directories.
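
Before restarting, you can confirm that the migration-related properties are present in each file. A quick sketch follows; ORACLE_BASE is the placeholder used throughout this guide, so substitute the real path:

# Check the Node Manager properties relevant to server migration (repeat for FINHOST2).
grep -E '^(Interface|NetMask|UseMACBroadcast|StartScriptEnabled)=' \
    ORACLE_BASE/config/nodemanager/FINHOST1/nodemanager.properties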

18.2.5 Setting Environment and Superuser Privileges for the wlsifconfig.sh Script

The fourth step is to set environment and superuser privileges for the wlsifconfig.sh script:

  1. Ensure that the PATH environment variable is set in the shell from which Node Manager is started, and that it includes the directories containing these files:

    Table 18-1 Files Required for the PATH Environment Variable

    File                 Located in this directory
    -------------------  ------------------------------------------------------------------------------------
    wlsifconfig.sh       /u02/local/oracle/config/domains/FINHOSTn/ManagedServer_Domain/bin/server_migration
    wlscontrol.sh        ORACLE_BASE/products/fusionapps/wlserver_10.3/common/bin
    nodemanager.domains  ORACLE_BASE/config/nodemanager/FINHOSTn


  2. Configure sudo for the wlsifconfig.sh script:

    • Configure sudo to work without a password prompt.

    • For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script. For example, perform these steps to set the environment and superuser privileges for the wlsifconfig.sh script:

      1. Grant sudo privilege to the WebLogic user ('oracle') with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.

      2. Make sure the script is executable by the WebLogic user ('oracle'). The following is an example of an entry inside /etc/sudoers that grants the oracle user passwordless sudo execution of the ifconfig and arping commands:

        oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
        

    Note:

    Ask the system administrator for the sudo and system rights as appropriate to this step.
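
You can verify the sudo configuration by running the following as the WebLogic user ('oracle'); the -n flag makes sudo fail rather than prompt if a password would be required:

# List the commands the oracle user is permitted to run through sudo.
sudo -l

# This should succeed without a password prompt.
sudo -n /sbin/ifconfig -a > /dev/null && echo "passwordless sudo for ifconfig: OK"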

18.2.6 Configuring Server Migration Targets

The fifth step is to configure server migration targets. You first assign all the available nodes for the cluster's members and then specify candidate machines (in order of preference) for each server that is configured with server migration.

Enterprise deployment recommends using cluster-based migration. Perform the following steps, including a step to enable automatic server migration (Step 10), to configure cluster-based migration for all Managed Servers in a cluster:

  1. Log in to the Oracle WebLogic Server Administration Console. For example, http://fininternal.mycompany.com:7777/console

  2. In the Domain Structure window, expand Environment and select Clusters. The Summary of Clusters page is displayed.

  3. Click the cluster for which you want to configure migration (CLUSTER) in the Name column of the table.

  4. Click the Migration tab.

  5. Click Lock & Edit.

  6. In the Available field, select the machine to which to allow migration and click the right arrow. In this case, select FINHOST1 and FINHOST2.

    Note:

    When there are three (3) hosts, for example FINHOST1, FINHOST2, and FINHOST3, select all three hosts.

  7. Select the data source to be used for automatic migration. In this case, select the leasing data source.

  8. Click Save.

  9. Click Activate Changes.

  10. Enable automatic server migration for all Managed Servers in the cluster. (You must perform this task for all of the Managed Servers.)

    Note:

    Although you are using cluster-based migration for the Managed Servers, you must perform this step (from the Migration tab) to enable automatic server migration for all the Managed Servers in the selected cluster.

    1. In the Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.

      Tip:

      Click Customize this table in the Summary of Servers page and move Current Machine from the Available window to the Chosen window to view the machine on which each server is currently running. The current machine differs from the configured machine when a server has been migrated automatically.

    2. Select the server for which you want to configure cluster-based migration.

    3. Click the Migration tab, and then click Lock & Edit.

    4. Select Automatic Server Migration Enabled. This enables Node Manager to start a failed server on the target node automatically.

    5. Click Save.

    6. Click Activate Changes.

    7. Restart the administration server, node managers, and the servers for which server migration has been configured.
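
After activating the changes, you can spot-check the resulting domain configuration from the command line. The following is a sketch, assuming the standard config/config.xml layout under the domain home and the element names the console generates for database leasing; if they differ in your release, grep for 'migration' instead:

# Migration settings for the cluster and its servers should now appear in config.xml.
grep -E 'migration-basis|data-source-for-automatic-migration|auto-migration-enabled' \
    DOMAIN_HOME/config/config.xml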

18.2.7 Testing the Server Migration

The sixth and final step is to test the server migration. Perform these steps to verify that server migration is working properly:

From FINHOST1:

  1. Stop the WLS_SERVER1 Managed Server. To do this, run this command:

    FINHOST1> kill -9 pid
    

    where pid specifies the process ID of the Managed Server. You can identify the pid in the node by running this command:

    FINHOST1> ps -ef | grep WLS_SERVER1 | grep DomainName_SOACluster
    
  2. Watch the Node Manager console. You should see a message indicating that WLS_SERVER1's floating IP has been disabled.

  3. Wait for Node Manager to try a second restart of WLS_SERVER1. It waits for a fence period of 10 seconds before trying this restart.

  4. After Node Manager restarts the server, stop it a few times. Node Manager should now log a message indicating that the server will not be restarted again locally.

From FINHOST2:

  1. Watch the local Node Manager console. Ten (10) seconds after the last attempt to restart WLS_SERVER1 on FINHOST1, Node Manager on FINHOST2 should indicate that the floating IP for WLS_SERVER1 is being brought up and that the server is being restarted on this node.

  2. As an example, for Oracle SOA Suite Managed Servers, access the soa-infra console at the same IP address.
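
You can also confirm at the operating-system level that the floating IP has moved. The following is a sketch; the address shown is a placeholder for WLS_SERVER1's floating IP:

# On FINHOST2, the floating IP should now be plumbed on a sub-interface (for example, eth0:1).
/sbin/ifconfig -a

# The floating IP (placeholder address) should respond from the new node.
ping -c 3 10.0.0.10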

Verification from the Administration Console

Migration can also be verified in the Administration Console:

  1. Log in to the Administration Console.

  2. Click the domain name in the Domain Structure window on the left.

  3. Click the Monitoring tab and then the Migration subtab.

    The Migration Status table, shown in Figure 18-1, provides information on the status of the migration.

    Figure 18-1 Migration Status Screen in the Administration Console


Note:

To complete server migration in a cluster, perform the same steps for the second, third, and subsequent Managed Servers.