18 Using Whole Server Migration and Service Migration in an Enterprise Deployment

The following topics describe Oracle WebLogic Server Whole Server Migration and Oracle WebLogic Server Automatic Service Migration. They also explain how these features can be used in an Oracle Fusion Middleware enterprise topology.

18.1 About Whole Server Migration and Automatic Service Migration in an Enterprise Deployment

Oracle WebLogic Server provides a migration framework that is an integral part of any highly available environment. The following sections provide more information about how this framework can be used effectively in an enterprise deployment.

18.1.1 Understanding the Difference Between Whole Server and Service Migration

The Oracle WebLogic Server migration framework supports two distinct types of automatic migration:

  • Whole Server Migration, where the Managed Server instance is migrated to a different physical system upon failure.

    Whole server migration provides for the automatic restart of a server instance, with all of its services, on a different physical machine. When a failure occurs in a server that is part of a cluster that is configured with server migration, the server is restarted on any of the other machines that host members of the cluster.

    For this to happen, the servers need to use a floating IP as the listen address, and the required resources (transaction logs and JMS persistent stores) must be available on the candidate machines.

    For more information, see "Whole Server Migration" in Administering Clusters for Oracle WebLogic Server.

  • Service Migration, where specific services are moved to a different Managed Server within the cluster.

    To understand service migration, it is important to understand pinned services.

    In a WebLogic Server cluster, most subsystem services are hosted homogeneously on all server instances in the cluster, enabling transparent failover from one server to another. In contrast, pinned services, such as JMS-related services, the JTA Transaction Recovery Service, and user-defined singleton services, are hosted on individual server instances within a cluster. For these services, the WebLogic Server migration framework supports failure recovery with service migration, as opposed to failover.

    For more information, see "Understanding the Service Migration Framework" in Administering Clusters for Oracle WebLogic Server.

18.1.2 Implications of Using Whole Server Migration or Service Migration in an Enterprise Deployment

When a server or service is started in another system, the required resources (such as services data and logs) must be available to both the original system and to the failover system; otherwise, the service cannot resume the same operations successfully on the failover system.

For this reason, both whole server and service migration require that all members of the cluster have access to the same transaction and JMS persistent stores (whether the persistent store is file-based or database-based).

This is another reason why shared storage is important in an enterprise deployment. When you properly configure shared storage, you ensure that in the event of a manual failover (Administration Server failover) or an automatic failover (whole server migration or service migration), both the original machine and the failover machine can access the same file store with no change in service.

In the case of an automatic service migration, when a pinned service needs to be resumed, the JMS and JTA logs that it was using before failover need to be accessible.

In addition to shared storage, Whole Server Migration requires the procurement and assignment of a virtual IP address (VIP). When a Managed Server fails over to another machine, the VIP is automatically reassigned to the new machine.

Note that service migration does not require a VIP.
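Before you configure either type of migration, it can be useful to confirm these two requirements from the operating system. The following is a minimal sketch, assuming a hypothetical shared storage path, virtual host name, and interface; substitute the values for your environment:

# Run on each candidate machine: verify that the same persistent store
# location is visible (the path is an example shared storage mount point).
ls /u01/oracle/runtime/domains/wcc_domain/cluster/jms

# Verify that the floating IP resolves in DNS and is currently plumbed
# on only one machine (host name is an example).
nslookup wls-wcc-vip.example.com
/sbin/ifconfig -a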

18.1.3 Understanding Which Products and Components Require Whole Server Migration and Service Migration

The following table lists the recommended best practice for each component. It does not preclude you from using Whole Server Migration or Automatic Service Migration for those components that support it.

Component                   Whole Server Migration (WSM)   Automatic Service Migration (ASM)
Oracle WebCenter Content    NO                             YES
Oracle SOA Suite            NO                             YES
Oracle Enterprise Capture   NO                             YES

18.2 Creating a GridLink Data Source for Leasing

Both Whole Server Migration and Automatic Service Migration require a data source for the leasing table, which is created automatically as part of the Oracle WebLogic Server schemas by the Repository Creation Utility (RCU).

For an enterprise deployment, create a GridLink data source for the leasing table:

  1. Log in to the Oracle WebLogic Server Administration Console.
  2. If you have not already done so, in the Change Center, click Lock & Edit.
  3. In the Domain Structure tree, expand Services, then select Data Sources.
  4. On the Summary of Data Sources page, click New and select GridLink Data Source, and enter the following:
    • Enter a logical name for the data source in the Name field. For example, Leasing.

    • Enter the JNDI name for the data source. For example, jdbc/leasing.

    • For the Database Driver, select Oracle's Driver (Thin) for GridLink Connections Versions: Any.

    • Click Next.

  5. In the Transaction Options page, clear the Supports Global Transactions check box, and then click Next.
  6. In the GridLink Data Source Connection Properties Options screen, select Enter individual listener information and click Next.
  7. Enter the following connection properties:
    • Service Name: Enter the service name of the database with lowercase characters. For a GridLink data source, you must enter the Oracle RAC service name. For example:

      wccedg.example.com

    • Host Name and Port: Enter the SCAN address and port for the RAC database, separated by a colon. For example:

      db-scan.example.com:1521
      

      Click Add to add the host name and port to the list box below the field.

      You can identify the SCAN address by querying the remote_listener parameter in the database:

      SQL> show parameter remote_listener;

      NAME                TYPE        VALUE
      ------------------  ----------  --------------------
      remote_listener     string      db-scan.example.com
      

      Note:

      For Oracle Database 11g Release 1 (11.1), use the virtual IP and port of each database instance listener, for example:

      dbhost1-vip.mycompany.com (port 1521) 
      

      and

      dbhost2-vip.mycompany.com (port 1521)
      

      For Oracle Database 10g, use multi data sources to connect to an Oracle RAC database. For information about configuring multi data sources, see Using Multi Data Sources with Oracle RAC.

    • Database User Name: Enter the following:

      FMW1221_WLS_RUNTIME
      

      In this example, FMW1221 is the prefix you used when you created the schemas as you prepared to configure the initial enterprise deployment domain.

      Note that in previous versions of Oracle Fusion Middleware, you had to manually create a user and tablespace for the migration leasing table. In Fusion Middleware 12c (12.2.1), the leasing table is created automatically when you create the WLS schemas with the Repository Creation Utility (RCU). An optional query that verifies the leasing table appears at the end of this procedure.

    • Password: Enter the password you used when you created the WLS schema in RCU.

    • Confirm Password: Enter the password again and click Next.

  8. On the Test GridLink Database Connection page, review the connection parameters and click Test All Listeners.

    Here is an example of a successful connection notification:

    Connection test for jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan.example.com)
    (PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=wccedg.example.com))) succeeded.
    

    Click Next.

  9. In the ONS Client Configuration page, do the following:
    • Select FAN Enabled to subscribe to and process Oracle FAN events.

    • Enter the SCAN address and the ONS remote port in the ONS Host and Port field, and then click Add:

      This value should be the ONS host and ONS remote port for the RAC database. To find the ONS remote port for the database, you can use the following command on the database host:

      [orcl@db-scan1 ~]$ srvctl config nodeapps -s
       
      ONS exists: Local port 6100, remote port 6200, EM port 2016
      
    • Click Next.

    Note:

    For Oracle Database 11g Release 1 (11.1), use the hostname and port of each database's ONS service, for example:

    custdbhost1.example.com (port 6200)
    

    and

    custdbhost2.example.com (port 6200)
    
  10. On the Test ONS Client Configuration page, review the connection parameters and click Test All ONS Nodes.

    Here is an example of a successful connection notification:

    Connection test for db-scan.example.com:6200 succeeded.
    

    Click Next.

  11. In the Select Targets page, select the cluster that you are configuring for Whole Server Migration or Automatic Service Migration, and then select All Servers in the cluster.
  12. Click Finish.
  13. Click Activate Changes.
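Optionally, you can verify from SQL*Plus that the leasing table exists. This is a hedged sketch: it assumes the default leasing table name (ACTIVE) and the example schema prefix FMW1221 used earlier in this procedure:

SQL> SELECT table_name FROM all_tables
  2  WHERE owner = 'FMW1221_WLS_RUNTIME' AND table_name = 'ACTIVE';

TABLE_NAME
--------------
ACTIVE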

18.3 Configuring Whole Server Migration for an Enterprise Deployment

After you have prepared your domain for whole server migration or automatic service migration, you can then configure Whole Server Migration for specific Managed Servers within a cluster. See the following topics for more information.

18.3.1 Editing the Node Manager's Properties File to Enable Whole Server Migration

Use this section to edit the Node Manager properties file on the two nodes where the servers are running.

  1. Locate and open the following file with a text editor:
    MSERVER_HOME/nodemanager/nodemanager.properties
    
  2. If not done already, set the StartScriptEnabled property in the nodemanager.properties file to true.

    This is required to enable Node Manager to start the managed servers.

  3. Add the following properties to the nodemanager.properties file to enable server migration to work properly:
    • Interface

      Interface=eth0
      

      This property specifies the interface name for the floating IP (eth0, for example).

      Note:

      Do not specify a sub-interface, such as eth0:1 or eth0:2. Specify the interface name without the :0 or :1 suffix.

      The Node Manager's scripts traverse the different :X enabled IPs to determine which to add or remove. For example, valid values in Linux environments are eth0, eth1, eth2, and so on, depending on the number of interfaces configured.

    • NetMask

      NetMask=255.255.255.0
      

      This property specifies the net mask for the interface for the floating IP.

    • UseMACBroadcast

      UseMACBroadcast=true
      

      This property specifies whether or not to use a node's MAC address when sending ARP packets, that is, whether or not to use the -b flag in the arping command.

  4. Restart the Node Manager.
  5. Verify in the output of Node Manager (the shell where the Node Manager is started) that these properties are in use. Otherwise, problems may occur during migration. The output should be similar to the following:
    ...
    SecureListener=true
    LogCount=1
    eth0=*,NetMask=255.255.255.0
    ...
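Taken together, the additions from this section result in entries similar to the following in the nodemanager.properties file on each node. This is a consolidated example; the values shown are samples for this topology:

StartScriptEnabled=true
Interface=eth0
NetMask=255.255.255.0
UseMACBroadcast=true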
    

18.3.2 Setting Environment and Superuser Privileges for the wlsifconfig.sh Script

Use this section to set the environment and superuser privileges for the wlsifconfig.sh script, which is used to transfer IP addresses from one machine to another during migration. It must be able to run ifconfig, which is generally only available to superusers.

For more information about the wlsifconfig.sh script, see "Configuring Automatic Whole Server Migration" in Administering Clusters for Oracle WebLogic Server.

Refer to the following sections for instructions on preparing your system to run the wlsifconfig.sh script.

18.3.2.1 Setting the PATH Environment Variable for the wlsifconfig.sh Script

Ensure that the commands listed in the following table are included in the PATH environment variable for each host computer.

File                  Directory Location
wlsifconfig.sh        MSERVER_HOME/bin/server_migration
wlscontrol.sh         WL_HOME/common/bin
nodemanager.domains   MSERVER_HOME/nodemanager
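For example, you can extend the PATH in the shell profile of the operating system user that runs Node Manager. This is a sketch that assumes hypothetical MSERVER_HOME and WL_HOME locations; substitute your own directories:

# Example ~/.bash_profile additions for the oracle user (paths are samples).
export MSERVER_HOME=/u02/oracle/config/domains/wcc_domain
export WL_HOME=/u01/oracle/products/fmw/wlserver
export PATH=$PATH:$MSERVER_HOME/bin/server_migration:$WL_HOME/common/bin:$MSERVER_HOME/nodemanager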

18.3.2.2 Granting Privileges to the wlsifconfig.sh Script

Grant sudo privilege to the operating system user (for example, oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.

Note:

For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script.

Ask the system administrator for the sudo and system rights as appropriate to perform this required configuration task.

The following is an example of an entry inside /etc/sudoers granting sudo execution privilege for oracle to run ifconfig and arping:

Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
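After the entry is in place, you can verify the grant as the oracle user. The following check assumes the example entry above; the exact output format varies by sudo version:

[oracle@host1 ~]$ sudo -l
User oracle may run the following commands on this host:
    (root) NOPASSWD: /sbin/ifconfig, /sbin/arping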

18.3.3 Configuring Server Migration Targets

To configure migration in a cluster:

  1. Log in to the Oracle WebLogic Server Administration Console.

  2. In the Domain Structure window, expand Environment and select Clusters. The Summary of Clusters page appears.

  3. Click the cluster for which you want to configure migration in the Name column of the table.

  4. Click the Migration tab.

  5. Click Lock & Edit.

  6. Select Database as Migration Basis. From the drop-down list, select Leasing as Data Source For Automatic Migration.

  7. Under Candidate Machines For Migratable Server, in the Available field, select the Managed Servers in the cluster and click the right arrow to move them to Chosen.

  8. Select the Leasing data source that you created in Creating a GridLink Data Source for Leasing.

  9. Click Save.

  10. Set the Candidate Machines for Server Migration. You must perform this task for all of the managed servers as follows:

    1. In Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.

    2. Select the server for which you want to configure migration.

    3. Click the Migration tab.

    4. Select Automatic Server Migration Enabled and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

      For information on targeting applications and resources, see Using Multi Data Sources with Oracle RAC.

    5. In the Available field, located in the Migration Configuration section, select the machines to which to allow migration and click the right arrow.

      In this step, you are identifying the host to which the Managed Server should fail over if its current host is unavailable. For example, for the Managed Server on HOST1, select HOST2; for the Managed Server on HOST2, select HOST1.

    Tip:

    Click Customize this table on the Summary of Servers page, and move Current Machine from the Available window to the Chosen window to view the machine on which each server is currently running. After an automatic migration, the current machine can differ from the machine configured originally.

  11. Click Activate Changes.

  12. Restart the Administration Server and the servers for which server migration has been configured.
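Optionally, after the restart you can confirm the server migration settings from WLST. The following is a minimal read-only sketch, assuming example credentials, an example Administration Server URL, and a Managed Server named WLS_WCC1:

$ORACLE_HOME/oracle_common/common/bin/wlst.sh
connect('weblogic','<password>','t3://adminhost.example.com:7001')
cd('Servers/WLS_WCC1')
print cmo.isAutoMigrationEnabled()
print cmo.getCandidateMachines()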

18.3.4 Testing Whole Server Migration

Perform the steps in this section to verify that automatic whole server migration is working properly.

To test from Node 1:

  1. Stop the managed server process.

    kill -9 pid
    

    pid specifies the process ID of the managed server. You can identify the pid on the node by running the following command (a combined one-line variant appears after this procedure):

    ps -ef | grep WLS_WCC1
    
  2. Watch the Node Manager console (the terminal window where the Node Manager was started): you should see a message indicating that the managed server's floating IP has been disabled.

  3. Wait for the Node Manager to try a second restart of the Managed Server. Node Manager waits for a period of 30 seconds before trying this restart.

  4. After Node Manager restarts the server, and before the server reaches the RUNNING state, kill the associated process again.

    Node Manager should log a message indicating that the server will not be restarted again locally.

    Note:

    The number of restarts required is determined by the RestartMax parameter in the following configuration file:

    MSERVER_HOME/servers/WLS_WCC1/data/nodemanager/startup.properties
    

    The default value is RestartMax=2.
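Because this test kills the same Managed Server twice, it can be convenient to combine the process lookup and the kill into a single command. This is a sketch that assumes the example server name WLS_WCC1; verify that the grep matches only the intended process:

# Kill the WLS_WCC1 managed server process in one step.
kill -9 $(ps -ef | grep WLS_WCC1 | grep -v grep | awk '{print $2}')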

To test from Node 2:

  1. Watch the local Node Manager console. Approximately 30 seconds after the last restart attempt on Node 1, the Node Manager on Node 2 should indicate that the floating IP for the managed server is being brought up and that the server is being restarted on this node.

  2. Access a product URL by using the same IP address. If the URL request is successful, then the migration was successful.

Verification From the Administration Console

You can also verify migration using the Oracle WebLogic Server Administration Console:

  1. Log in to the Administration Console.
  2. In the Domain Structure pane, click the domain name.
  3. Click the Monitoring tab and then the Migration subtab.

    The Migration Status table provides information on the status of the migration.

Note:

After a server is migrated, to fail it back to its original machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the managed server on the machine to which it was originally assigned.

18.4 Configuring Automatic Service Migration in an Enterprise Deployment

To configure automatic service migration for specific services in an enterprise deployment, refer to the topics in this section.

18.4.1 Setting the Leasing Mechanism and Data Source for an Enterprise Deployment Cluster

Before you can configure automatic service migration, you must verify the leasing mechanism and data source that will be used by the automatic service migration feature:

Note:

The following procedure assumes you have already created the Leasing data source, as described in Creating a GridLink Data Source for Leasing.

  1. Log in to the Oracle WebLogic Server Administration Console.
  2. Click Lock & Edit.
  3. In the Domain Structure window, expand Environment and select Clusters.
    The Summary of Clusters page appears.
  4. In the Name column of the table, click the cluster for which you want to configure migration.
  5. Click the Migration tab.
  6. Verify that Database is selected in the Migration Basis drop-down menu.
  7. From the Data Source for Automatic Migration drop-down menu, select the Leasing data source that you created in Creating a GridLink Data Source for Leasing.
  8. Click Save.
  9. Click Activate Changes.
  10. Restart the Managed Servers.
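Equivalently, you can apply the same leasing settings with WLST instead of the Administration Console. The following is a hedged sketch, assuming example credentials, an example Administration Server URL, and a cluster named WCC_Cluster; verify the MBean names against your own domain:

connect('weblogic','<password>','t3://adminhost.example.com:7001')
edit()
startEdit()
cd('/Clusters/WCC_Cluster')
cmo.setMigrationBasis('database')
cmo.setDataSourceForAutomaticMigration(getMBean('/JDBCSystemResources/Leasing'))
save()
activate()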

18.4.2 Changing the Migration Settings for the Managed Servers in the Cluster

After you set the leasing mechanism and data source for the cluster, you can then enable automatic JTA migration for the Managed Servers that you want to configure for service migration. Note that this topic applies only if you are deploying JTA services as part of your enterprise deployment.

For example, this task is not required for Oracle WebCenter Content enterprise deployments.

To change the migration settings for the Managed Servers in each cluster:
  1. If you haven’t already, log in to the Administration Console, and click Lock & Edit.
  2. In the Domain Structure pane, expand the Environment node and then click Servers.
    The Summary of Servers page appears.
  3. Click the name of the server you want to modify in the Name column of the table.
    The settings page for the selected server appears and defaults to the Configuration tab.
  4. Click the Migration tab.
  5. From the JTA Migration Policy drop-down menu, select Failure Recovery.
  6. In the JTA Candidate Servers section of the page, select the Managed Servers in the Available list box, and then click the move button to move them into the Chosen list box.
  7. In the JMS Service Candidate Servers section of the page, select the Managed Servers in the Available list box, and then click the move button to move them into the Chosen list box.
  8. Click Save.
  9. Restart all the Managed Servers and the Administration Server.

18.4.3 About Selecting a Service Migration Policy

When you configure Automatic Service Migration, you select a Service Migration Policy for each cluster. This topic provides guidelines and considerations when selecting the Service Migration Policy.

For example, products or components that run singletons or use Path services can benefit from the Auto-Migrate Exactly-Once policy. With this policy, if at least one Managed Server in the candidate server list is running, the services hosted by the migratable target are active somewhere in the cluster, even if servers fail or are administratively shut down (either gracefully or forcibly). Note that this policy can cause multiple homogeneous services to end up on one server at startup.

When you use this policy, monitor the cluster startup to identify which services are running on each server. You can then perform a manual failback, if necessary, to place the system in a balanced configuration.

Other Fusion Middleware components are better suited for the Auto-Migrate Failure-Recovery Services policy. With this policy, the services hosted by the migratable target will start only if the migratable target's User Preferred Server (UPS) is started.

Based on these guidelines, you should use the Auto-Migrate Failure-Recovery Services policy for the clusters in an Oracle WebCenter Content enterprise deployment.

For more information, see "Policies for Manual and Automatic Service Migration" in Administering Clusters for Oracle WebLogic Server.

18.4.4 Setting the Service Migration Policy for Each Managed Server in the Cluster

After you modify the migration settings for each server in the cluster, you can then identify the services and set the migration policy for each Managed Server in the cluster, using the WebLogic Administration Console:
  1. If you haven’t already, log in to the Administration Console, and click Lock & Edit.
  2. In the Domain Structure pane, expand Environment, then expand Clusters, then select Migratable Targets.
  3. Click the name of the migratable target for the first Managed Server in the cluster.
  4. Click the Migration tab.
  5. From the Service Migration Policy drop-down menu, select the appropriate policy for the cluster.
  6. In the Constrained Candidate Servers section of the page, select both Managed Servers in the Available list box and move them to the Chosen list box.
  7. Click Save.
  8. Repeat steps 2 through 7 for each of the additional Managed Servers in the cluster.
  9. Activate the changes.
  10. Restart the Managed Servers in the cluster.
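If you prefer scripting, the same policy can be set with WLST. This is a hedged sketch, assuming an example migratable target name and the failure-recovery policy discussed earlier; verify the migratable target names in your domain before running it:

connect('weblogic','<password>','t3://adminhost.example.com:7001')
edit()
startEdit()
cd('/MigratableTargets/WLS_WCC1 (migratable)')
cmo.setMigrationPolicy('failure-recovery')
save()
activate()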

18.4.5 Restarting the Managed Servers and Validating Automatic Service Migration

After you configure automatic service migration for your cluster and Managed Servers, validate the configuration, as follows:
  1. If you haven’t already, log in to the Administration Console.
  2. In the Domain Structure pane, select Environment, then Clusters, and restart the cluster you just configured for automatic service migration.
  3. In the Domain Structure pane, expand Environment, and then expand Clusters.
  4. Click Migratable Targets.
  5. Click the Control tab.
    The console displays a list of migratable targets and their current hosting server.
  6. In the Migratable Targets table, select the row for one of the migratable targets.
  7. Note the value in the Current Hosting Server column.
  8. Use the operating system command line to stop the first Managed Server.

    Use the following command to kill the Managed Server process and simulate a crash scenario:

    kill -9 pid
    

    In this example, replace pid with the process ID (PID) of the Managed Server. You can identify the PID by running the following UNIX command:

    ps -ef | grep managed_server_name
    

    Note that the Managed Server might be configured to restart automatically after you initially kill the process. In this case, you must kill the second process by using the kill -9 command again.

  9. Watch the terminal window (or console) where the Node Manager is running.

    You should see a message indicating that the selected Managed Server has failed. The message will be similar to the following:

    <INFO> <domain_name> <server_name> 
    <The server 'server_name' with process id 4668 is no longer alive; waiting for the process to die.>
    <INFO> <domain_name> <server_name> 
    <Server failed so attempting to restart (restart count = 1)>.
    
  10. Return to the Oracle WebLogic Server Administration Console and refresh the table of migratable targets; verify that the migratable targets are transferred to the remaining, running Managed Server in the cluster:
    • Verify that the Current Hosting Server for the process you killed is now updated to show that it has been migrated to a different host.
    • Verify that the value in the Status of Last Migration column for the process is "Succeeded".
  11. Open and review the log files for the Managed Servers that are now hosting the services; look for any JTA or JMS errors.

    Note:

    For JMS tests, it is a good practice to get message counts from destinations and make sure that there are no stuck messages in any of the migratable targets:

    For example, for uniform distributed destinations (UDDs):

    1. Access the JMS Subdeployment module in the Administration Console:

      In the Domain Structure pane, select Services, then Messaging, and then JMS Modules.

    2. Click the JMS Module.

    3. Click the destination in the Summary of Resources table.

    4. Select the Monitoring tab, and review the Messages Total and Messages Pending values in the Destinations table.
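    Message counts can also be read from the destination runtime MBeans with WLST. This is a sketch under assumed names: the server, JMS server, and destination shown are examples, and the path under the domain runtime tree depends on your configuration:

      connect('weblogic','<password>','t3://adminhost.example.com:7001')
      domainRuntime()
      cd('ServerRuntimes/WLS_WCC1/JMSRuntime/WLS_WCC1.jms/JMSServers/WCCJMSServer_auto_1/Destinations')
      ls()
      cd('WCCJMSModule!WCCJMSServer_auto_1@DistributedQueue1')
      print cmo.getMessagesCurrentCount(), cmo.getMessagesPendingCount()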

18.4.6 Failing Back Services After Automatic Service Migration

When Automatic Service Migration occurs, Oracle WebLogic Server does not automatically fail back services to their original server when that server is back online and rejoins the cluster.

As a result, after Automatic Service Migration migrates specific JMS services to a backup server during a failover, it does not migrate the services back to the original server when that server is back online. Instead, you must migrate the services back to the original server manually.

To fail back a service to its original server, follow these steps:

  1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.

  2. In the Domain Structure tree, expand Environment, expand Clusters, and then select Migratable Targets.

  3. To migrate one or more migratable targets at once, on the Summary of Migratable Targets page:

    1. Click the Control tab.

    2. Use the check boxes to select one or more migratable targets to migrate.

    3. Click Migrate.

    4. Use the New hosting server drop-down to select the original Managed Server.

    5. Click OK.

      A request is submitted to migrate the JMS-related service and the configuration edit lock is released. In the Migratable Targets table, the Status of Last Migration column indicates whether the requested migration has succeeded or failed.

  4. To migrate a specific migratable target, on the Summary of Migratable Targets page:

    1. Select the migratable target to migrate.

    2. Click the Control tab.

    3. Reselect the migratable target to migrate.

    4. Click Migrate.

    5. Use the New hosting server drop-down to select a new server for the migratable target.

    6. Click OK.