14 Using Whole Server Migration and Service Migration in an Enterprise Deployment

The Oracle WebLogic Server migration framework supports Whole Server Migration and Service Migration. The following sections explain how these features can be used in an Oracle Fusion Middleware enterprise topology.

About Whole Server Migration and Automatic Service Migration in an Enterprise Deployment

Oracle WebLogic Server provides a migration framework that is an integral part of any highly available environment. The following sections provide more information about how this framework can be used effectively in an enterprise deployment.

Understanding the Difference between Whole Server and Service Migration

The Oracle WebLogic Server migration framework supports two distinct types of automatic migration:

  • Whole Server Migration, where the Managed Server instance is migrated to a different physical system upon failure.

    Whole server migration provides for the automatic restart of a server instance, with all of its services, on a different physical machine. When a failure occurs in a server that is part of a cluster configured with server migration, the server is restarted on one of the other machines that host members of the cluster.

    For this to happen, the servers must use a floating IP as the listen address, and the required resources (transaction logs and JMS persistent stores) must be available on the candidate machines.

    See Whole Server Migration in Administering Clusters for Oracle WebLogic Server.

  • Service Migration, where specific services are moved to a different Managed Server within the cluster.

    To understand service migration, it's important to understand pinned services.

    In a WebLogic Server cluster, most subsystem services are hosted homogeneously on all server instances in the cluster, enabling transparent failover from one server to another. In contrast, pinned services, such as JMS-related services, the JTA Transaction Recovery Service, and user-defined singleton services, are hosted on individual server instances within a cluster. For these services, the WebLogic Server migration framework supports failure recovery with service migration, as opposed to failover.

    See Understanding the Service Migration Framework in Administering Clusters for Oracle WebLogic Server.

Implications of Using Whole Server Migration or Service Migration in an Enterprise Deployment

When a server or service is started in another system, the required resources (such as services data and logs) must be available to both the original system and to the failover system; otherwise, the service cannot resume the same operations successfully on the failover system.

For this reason, both whole server and service migration require that all members of the cluster have access to the same transaction and JMS persistent stores (whether the persistent store is file-based or database-based).

This is another reason why shared storage is important in an enterprise deployment. When you properly configure shared storage, you ensure that in the event of a manual failover (Administration Server failover) or an automatic failover (whole server migration or service migration), both the original machine and the failover machine can access the same file store with no change in service.

In the case of an automatic service migration, when a pinned service needs to be resumed, the JMS and JTA logs that it was using before failover need to be accessible.

In addition to shared storage, Whole Server Migration requires the procurement and assignment of a virtual IP address (VIP). When a Managed Server fails over to another machine, the VIP is automatically reassigned to the new machine.

Note that service migration does not require a VIP.
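
As a quick sanity check before relying on migration, you can verify from each candidate host that the shared storage is mounted and writable. The following is a minimal sketch, assuming the persistent stores reside under a shared mount point such as /u01/oracle/runtime (an example path; substitute your own):

    # Confirm the shared mount is visible on this host
    df -h /u01/oracle/runtime
    # Confirm this host can write to the shared store
    touch /u01/oracle/runtime/.migration_check && rm /u01/oracle/runtime/.migration_check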

Understanding Which Products and Components Require Whole Server Migration and Service Migration

The following table lists the recommended best practice. It does not preclude you from using Whole Server Migration or Automatic Service Migration for those components that support it.

Component                                 Whole Server Migration (WSM)   Automatic Service Migration (ASM)
Oracle Business Intelligence Publisher    YES                            NO

Converting to Virtual IP Addresses in Preparation for Whole Server Migration

This section explains the steps necessary to configure each server, each machine, and each server’s Node Manager to listen on the appropriate Virtual IP Address.

If you have configured the BI Servers using the values for BIHOST1 and BIHOST2 (the physical hostnames), then you need to modify them to listen on the Virtual IP Addresses (BIHOST1VHN and BIHOST2VHN) mentioned in Table 5-1.

Enabling the Required Virtual IP Addresses on Each Host

Follow the steps outlined in Enabling the Required Virtual IP Addresses on Each Host to enable the virtual IP addresses (BIHOST1VHN and BIHOST2VHN) for each BI WLS server.
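
For reference, on Linux a virtual IP is typically enabled as a sub-interface of the public network interface and then announced with a gratuitous ARP. The following is a minimal sketch only, assuming BIHOST1VHN resolves to the example address 10.0.0.101 on interface eth0 with a /24 netmask:

    # As root on BIHOST1: bring up the virtual IP on a sub-interface
    /sbin/ifconfig eth0:1 10.0.0.101 netmask 255.255.255.0
    # Announce the new address to the subnet with gratuitous ARP
    /sbin/arping -q -U -c 3 -I eth0 10.0.0.101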

Editing the Listen Address for the Managed Servers

To edit the listen address:
  1. Sign in to the Oracle WebLogic Server Administration Console.
  2. In the Domain Structure window, expand Environment and select Servers. The Summary of Servers page appears.
  3. In the Name column of the table, select the server that you want to configure for migration (in this example, WLS_BI1).
  4. Click Lock & Edit.
  5. On the Configuration tab, under the General tab, change the Listen Address to the value of the VIP assigned to the WLS_BI1 server (BIHOST1VHN).
  6. Click Save. Repeat the steps for each of the WLS Servers that you want to configure for migration.
  7. Click Activate Changes.
  8. Restart the servers.

Editing the OHS Virtual Host Configuration Files

To modify the OHS virtual host configuration files to reflect the change from physical hostnames to virtual IPs:
  1. Change to the following directory on each OHS server:
    OHS_DOMAIN_HOME/config/fmwconfig/components/OHS/ohs1/moduleconf
  2. Edit the biinternal_vh.conf file and change the references of BIHOST1 to the value for BIHOST1VHN and the references of BIHOST2 to the value for BIHOST2VHN.
  3. Edit the bi_vh.conf file and make the same changes: BIHOST1 to BIHOST1VHN and BIHOST2 to BIHOST2VHN. (An illustrative before-and-after snippet follows these steps.)
  4. Restart the OHS servers.
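
For illustration, the change in each file amounts to swapping the physical hostnames for the virtual hostnames in the mod_wl_ohs routing directives. The following is a hedged before-and-after sketch, assuming the managed servers listen on port 9704 and a /bi Location block (your Location paths and ports may differ):

    # Before (physical hostnames)
    <Location /bi>
      WLSRequest ON
      WebLogicCluster BIHOST1:9704,BIHOST2:9704
    </Location>

    # After (virtual hostnames)
    <Location /bi>
      WLSRequest ON
      WebLogicCluster BIHOST1VHN:9704,BIHOST2VHN:9704
    </Location>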

Creating a GridLink Data Source for Leasing

Whole Server Migration and Automatic Service Migration require a data source for the leasing table, which is created automatically as part of the Oracle WebLogic Server schemas by the Repository Creation Utility (RCU).

Note:

To accomplish data source consolidation and reduce connection usage, you can reuse the WLSSchemaDatasource as is for database leasing. This data source is already configured with the FMW1221_WLS_RUNTIME schema, where the leasing table is stored.

For an enterprise deployment, you should create a GridLink data source:

  1. Log in to the Oracle WebLogic Server Administration Console.
  2. If you have not already done so, in the Change Center, click Lock & Edit.
  3. In the Domain Structure tree, expand Services, then select Data Sources.
  4. On the Summary of Data Sources page, click New and select GridLink Data Source, and enter the following:
    • Enter a logical name for the data source in the Name field. For example, Leasing.

    • Enter a name for JNDI. For example, jdbc/leasing.

    • For the Database Driver, select Oracle's Driver (Thin) for GridLink Connections, Versions: Any.

    • Click Next.

  5. In the Transaction Options page, clear the Supports Global Transactions check box, and then click Next.
  6. In the GridLink Data Source Connection Properties Options screen, select Enter individual listener information and click Next.
  7. Enter the following connection properties:
    • Service Name: Enter the service name of the database with lowercase characters. For a GridLink data source, you must enter the Oracle RAC service name. For example:

      biedg.example.com

    • Host Name and Port: Enter the SCAN address and port for the RAC database, separated by a colon. For example:

      db-scan.example.com:1521
      

      Click Add to add the host name and port to the list box below the field.

      Figure 14-1 Specifying SCAN Address for the RAC Database

      You can identify the SCAN address by querying the remote_listener parameter in the database:

      SQL>show parameter remote_listener;
      
      NAME                 TYPE        VALUE
      -------------------- ----------- ------------------------------
      remote_listener      string      db-scan.example.com

      Note:

      For Oracle Database 11g Release 1 (11.1), use the virtual IP and port of each database instance listener, for example:

      dbhost1-vip.mycompany.com (port 1521) 

      and

      dbhost2-vip.mycompany.com (1521)
      

      For Oracle Database 10g, use multi data sources to connect to an Oracle RAC database. For information about configuring multi data sources, see Using Multi Data Sources with Oracle RAC.

    • Database User Name: Enter the following:

      FMW1221_WLS_RUNTIME

      In this example, FMW1221 is the prefix you used when you created the schemas as you prepared to configure the initial enterprise deployment domain.

      Note that in previous versions of Oracle Fusion Middleware, you had to manually create a user and tablespace for the migration leasing table. In Fusion Middleware 12c (12.2.1), the leasing table is created automatically when you create the WLS schemas with the Repository Creation Utility (RCU).

    • Password: Enter the password you used when you created the WLS schema in RCU.

    • Confirm Password: Enter the password again and click Next.

  8. On the Test GridLink Database Connection page, review the connection parameters and click Test All Listeners.

    Here is an example of a successful connection notification:

    Connection test for jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan.example.com)
    (PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=biedg.example.com))) succeeded.
    

    Click Next.

  9. In the ONS Client Configuration page, do the following:
    • Select FAN Enabled to subscribe to and process Oracle FAN events.

    • Enter the SCAN address in the ONS Host and Port field, and then click Add.

      This value should be the ONS host and ONS remote port for the RAC database. To find the ONS remote port for the database, you can use the following command on the database host:

      [orcl@db-scan1 ~]$ srvctl config nodeapps -s
       
      ONS exists: Local port 6100, remote port 6200, EM port 2016
      
    • Click Next.

    Note:

    For Oracle Database 11g Release 1 (11.1), use the hostname and port of each database's ONS service, for example:

    custdbhost1.example.com (port 6200)
    

    and

    custdbhost2.example.com (6200)
    
  10. On the Test ONS Client Configuration page, review the connection parameters and click Test All ONS Nodes.

    Here is an example of a successful connection notification:

    Connection test for db-scan.example.com:6200 succeeded.

    Click Next.

  11. In the Select Targets page, select the cluster that you are configuring for Whole Server Migration or Automatic Service Migration, and then select All Servers in the cluster.
  12. Click Finish.
  13. Click Activate Changes.
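
Optionally, you can confirm that the leasing table is reachable through the new connection details before configuring migration. By default, WebLogic Server database leasing uses a table named ACTIVE, which RCU creates in the FMW1221_WLS_RUNTIME schema. A hedged check using SQL*Plus, reusing the example connect values above (the password is whatever you set in RCU):

    echo "DESCRIBE ACTIVE;" | sqlplus -s FMW1221_WLS_RUNTIME/password@//db-scan.example.com:1521/biedg.example.com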

Configuring Whole Server Migration for an Enterprise Deployment

After you have prepared your domain for whole server migration or automatic service migration, you can configure Whole Server Migration for specific Managed Servers within a cluster.

Note:

As mentioned earlier, for migration to work, servers must use a virtual hostname that maps to a floating IP as the listen address. You can specify the listen address directly in the Configuration Wizard or update it later in the Administration Console.

Editing the Node Manager's Properties File to Enable Whole Server Migration

Use this section to edit the Node Manager properties file on the two nodes where the servers are running.

  1. Locate and open the following file with a text editor:
    MSERVER_HOME/nodemanager/nodemanager.properties
    
  2. If not done already, set the StartScriptEnabled property in the nodemanager.properties file to true.

    This is required to enable Node Manager to start the managed servers.

  3. Add the following properties to the nodemanager.properties file to enable server migration to work properly:
    • Interface

      Interface=eth0
      

      This property specifies the interface name for the floating IP (eth0, for example).

      Note:

      Do not specify a subinterface, such as eth0:1 or eth0:2. Specify the interface name without the :0 or :1 suffix.

      The Node Manager's scripts traverse the different :X-enabled IPs to determine which to add or remove. For example, the valid values in Linux environments are eth0, eth1, eth2, and so on, depending on the number of interfaces configured.

    • NetMask

      NetMask=255.255.255.0
      

      This property specifies the net mask for the interface for the floating IP.

    • UseMACBroadcast

      UseMACBroadcast=true
      

      This property specifies whether to use a node's MAC address when sending ARP packets, that is, whether to use the -b flag in the arping command.

  4. Restart the Node Manager.
  5. Verify in the output of Node Manager (the shell where the Node Manager is started) that these properties are in use. Otherwise, problems may occur during migration. The output should be similar to the following:
    ...
    SecureListener=true
    LogCount=1
    eth0=*,NetMask=255.255.255.0
    ...
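
Taken together, the migration-related entries in nodemanager.properties on each node should resemble the following sketch (the interface and netmask values are the examples used above; adjust them to your environment):

    StartScriptEnabled=true
    Interface=eth0
    NetMask=255.255.255.0
    UseMACBroadcast=true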

Setting Environment and Superuser Privileges for the wlsifconfig.sh Script

Use this section to set the environment and superuser privileges for the wlsifconfig.sh script, which is used to transfer IP addresses from one machine to another during migration. It must be able to run ifconfig, which is generally only available to superusers.

For more information about the wlsifconfig.sh script, see Configuring Automatic Whole Server Migration in Administering Clusters for Oracle WebLogic Server.

Refer to the following sections for instructions on preparing your system to run the wlsifconfig.sh script.

Setting the PATH Environment Variable for the wlsifconfig.sh Script

Ensure that the directories containing the files listed in the following table are included in the PATH environment variable on each host computer.

File                   Directory Location
wlsifconfig.sh         MSERVER_HOME/bin/server_migration
wlscontrol.sh          WL_HOME/common/bin
nodemanager.domains    MSERVER_HOME/nodemanager
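
For example, you can append these directories to the PATH of the operating system user that runs Node Manager (a Bourne-shell sketch; substitute the actual values of MSERVER_HOME and WL_HOME):

    export PATH=$PATH:MSERVER_HOME/bin/server_migration:WL_HOME/common/bin:MSERVER_HOME/nodemanager
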
Granting Privileges to the wlsifconfig.sh Script

Grant sudo privilege to the operating system user (for example, oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.

Note:

For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script.

Ask the system administrator for the sudo and system rights as appropriate to perform this required configuration task.

The following is an example of an entry inside /etc/sudoers granting sudo execution privilege for oracle to run ifconfig and arping:

Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
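
After you update /etc/sudoers, you can verify the grants by listing the sudo privileges of the oracle user:

    # Run as oracle: list the sudo privileges granted to the current user
    sudo -l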

Configuring Server Migration Targets

To configure migration in a cluster:

  1. Sign in to the Oracle WebLogic Server Administration Console.

  2. In the Domain Structure window, expand Environment and select Clusters. The Summary of Clusters page is displayed.

  3. Click the cluster for which you want to configure migration in the Name column of the table.

  4. Click the Migration tab.

  5. Click Lock & Edit.

  6. Select Database as Migration Basis. From the drop-down list, select Leasing as Data Source For Automatic Migration.

  7. Under Candidate Machines For Migratable Server, in the Available field, select the Managed Servers in the cluster and click the right arrow to move them to Chosen.

  8. Click Save.

  9. Set the Candidate Machines for Server Migration. You must perform this task for all of the managed servers as follows:

    1. In Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.

    2. Select the server for which you want to configure migration.

    3. Click the Migration tab.

    4. Select Automatic Server Migration Enabled and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

      For information on targeting applications and resources, see Using Multi Data Sources with Oracle RAC.

    5. In the Available field, located in the Migration Configuration section, select the machines to which migration is allowed and click the right arrow.

      In this step, you are identifying the host to which the Managed Server should fail over if the current host is unavailable. For example, for the Managed Server on HOST1, select HOST2; for the Managed Server on HOST2, select HOST1.

    Tip:

    Click Customize this table in the Summary of Servers page, and move Current Machine from the Available window to the Chosen window to view the machine on which the server is currently running. After an automatic migration, this machine can differ from the one on which the server was originally configured.

  10. Click Activate Changes.

  11. Restart the Administration Server and the servers for which server migration has been configured.
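
After the restart, you can verify that the migratable servers are maintaining leases by querying the leasing table. A hedged check, reusing the example connection details from the data source section (ACTIVE is WebLogic Server's default leasing table name):

    echo "SELECT * FROM ACTIVE;" | sqlplus -s FMW1221_WLS_RUNTIME/password@//db-scan.example.com:1521/biedg.example.com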

Testing Whole Server Migration

Perform the steps in this section to verify that automatic whole server migration is working properly.

To test from Node 1:

  1. Stop the managed server process.

    kill -9 pid
    

    pid specifies the process ID of the managed server. You can identify the pid on the node by running a command such as ps -ef | grep WLS_BI1 (WLS_BI1 is the example managed server name used in this chapter).

  2. Watch the Node Manager console (the terminal window where you performed the kill command): you should see a message indicating that the managed server's floating IP has been disabled.

  3. Wait for the Node Manager to try a second restart of the Managed Server. Node Manager waits for a period of 30 seconds before trying this restart.

  4. After Node Manager restarts the server and before it reaches the RUNNING state, kill the associated process again.

    Node Manager should log a message indicating that the server will not be restarted again locally.

    Note:

    The number of local restart attempts is determined by the RestartMax parameter in the Node Manager configuration for the managed server. The default value is RestartMax=2.
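
While the test runs, you can also follow Node Manager's restart attempts in its log file on the node, in addition to the console output. The path below assumes the MSERVER_HOME Node Manager layout used earlier in this chapter:

    tail -f MSERVER_HOME/nodemanager/nodemanager.log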

To test from Node 2:

  1. Watch the local Node Manager console. Thirty seconds after the last restart attempt on Node 1, Node Manager on Node 2 should report that the floating IP for the managed server is being brought up and that the server is being restarted on this node.

  2. Access a product URL by using the same virtual IP address. If the request succeeds, then the migration was successful.
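
You can also confirm at the operating system level that the floating IP is now active on Node 2. A minimal check, reusing the example address 10.0.0.101 from the earlier VIP sketch:

    # The migrated floating IP should now be bound to an interface on Node 2
    /sbin/ifconfig | grep 10.0.0.101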

Verification From the Administration Console

You can also verify migration using the Oracle WebLogic Server Administration Console:

  1. Log in to the Administration Console.
  2. Click the domain name in the Domain Structure window.
  3. Click the Monitoring tab and then the Migration subtab.

    The Migration Status table provides information on the status of the migration.

Note:

After a server is migrated, to fail it back to its original machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the managed server on the machine to which it was originally assigned.