Oracle WebLogic Server provides a migration framework that is an integral part of any highly available environment. The following sections provide more information about how this framework can be used effectively in an enterprise deployment.
The Oracle WebLogic Server migration framework supports two distinct types of automatic migration:
Whole Server Migration, where the Managed Server instance is migrated to a different physical system upon failure.
Whole server migration provides for the automatic restart of a server instance, with all its services, on a different physical machine. When a failure occurs in a server that is part of a cluster which is configured with server migration, the server is restarted on any of the other machines that host members of the cluster.
For this to happen, the servers must use a floating IP as the listen address, and the required resources (transaction logs and JMS persistent stores) must be available on the candidate machines.
For more information, see Whole Server Migration in Administering Clusters for Oracle WebLogic Server.
Service Migration, where specific services are moved to a different Managed Server within the cluster.
To understand service migration, it's important to understand pinned services.
In a WebLogic Server cluster, most subsystem services are hosted homogeneously on all server instances in the cluster, enabling transparent failover from one server to another. In contrast, pinned services, such as JMS-related services, the JTA Transaction Recovery Service, and user-defined singleton services, are hosted on individual server instances within a cluster. For these services, the WebLogic Server migration framework supports failure recovery with service migration, as opposed to failover.
For more information, see Understanding the Service Migration Framework in Administering Clusters for Oracle WebLogic Server.
When a server or service is started in another system, the required resources (such as services data and logs) must be available to both the original system and to the failover system; otherwise, the service cannot resume the same operations successfully on the failover system.
For this reason, both whole server and service migration require that all members of the cluster have access to the same transaction and JMS persistent stores (whether the persistent store is file-based or database-based).
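As a concrete illustration, a file-based persistent store for a migratable JMS server is configured with a directory on the shared mount so that every candidate machine can reach it. The element names below follow the standard WebLogic config.xml schema; the store name, directory path, and target are hypothetical examples.

```xml
<!-- Illustrative fragment from DOMAIN_HOME/config/config.xml; the store
     name, directory path, and target shown here are hypothetical -->
<file-store>
  <name>SOAJMSFileStore</name>
  <!-- Directory on shared storage, reachable from every cluster member -->
  <directory>/u01/oracle/runtime/domain_name/SOAJMSFileStore</directory>
  <target>WLS_SOA1 (migratable)</target>
</file-store>
```

Because the directory lives on shared storage rather than on a machine-local disk, the same store can be opened by whichever server the service migrates to.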
This is another reason why shared storage is important in an enterprise deployment. When you properly configure shared storage, you ensure that in the event of a manual failover (Administration Server failover) or an automatic failover (whole server migration or service migration), both the original machine and the failover machine can access the same file store with no change in service.
In the case of an automatic service migration, when a pinned service needs to be resumed, the JMS and JTA logs that it was using before failover need to be accessible.
In addition to shared storage, Whole Server Migration requires the procurement and assignment of a virtual IP address (VIP). When a Managed Server fails over to another machine, the VIP is automatically reassigned to the new machine.
Note that service migration does not require a VIP.
The following table summarizes the list of FMW products and components that benefit from use of a migration capability and indicates the best-practice recommendation for this release. Components listed as migratable can use either Whole Server or Automatic Service Migration.
Note that the table lists the recommended best practice. It does not preclude you from using Whole Server Migration or Automatic Service Migration for any component that supports it.
|Component|Whole Server Migration (WSM)|Automatic Service Migration (ASM)|
|---|---|---|
|Oracle Web Services Manager (OWSM)| | |
|Oracle WebCenter Portal| | |
|Oracle WebCenter Portal Portlets and Pagelet Producers| | |
|Oracle WebCenter Content| | |
|Oracle WebCenter Inbound Refinery| | |
|Oracle SOA Suite| | |
|Oracle Enterprise Scheduler| | |
Whole Server Migration and Automatic Service Migration require a data source for the leasing table, which is created automatically as part of the Oracle WebLogic Server schemas by the Repository Creation Utility (RCU).
For an enterprise deployment, you should create a GridLink data source:
Enter a logical name for the data source in the Name field. For example, Leasing.
Enter the JNDI name for the data source. For example, jdbc/leasing.
For the Database Driver, select Oracle's Driver (Thin) for GridLink Connections Versions: Any.
Service Name: Enter the service name of the database with lowercase characters. For a GridLink data source, you must enter the Oracle RAC service name. For example:
Host Name and Port: Enter the SCAN address and port for the RAC database, separated by a colon. For example:
Click Add to add the host name and port to the list box below the field.
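Taken together, the SCAN host, port, and RAC service name determine the GridLink connect string that WebLogic builds. The shell sketch below assembles the resulting URL from the example values used in this section; it is purely illustrative.

```shell
# Sketch: assemble the GridLink JDBC URL from the example SCAN host, port,
# and RAC service name used in this section (values are examples only).
scan_host=db-scan.example.com
scan_port=1521
service_name=wcpedg.example.com

url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=${scan_host})(PORT=${scan_port})))(CONNECT_DATA=(SERVICE_NAME=${service_name})))"
echo "$url"
```

This is the same URL shape that appears in the connection test notification later in this section.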
You can identify the SCAN address by querying the appropriate parameter in the database using the TCP Protocol:
SQL> show parameter remote_listener

NAME             TYPE    VALUE
---------------- ------- -------------------
remote_listener  string  db-scan.example.com
For Oracle Database 11g Release 1 (11.1), use the virtual IP and port of each database instance listener, for example:
dbhost1-vip.mycompany.com (port 1521)
For Oracle Database 10g, use multi data sources to connect to an Oracle RAC database. For information about configuring multi data sources see Using Multi Data Sources with Oracle RAC.
Database User Name: Enter the following:
In this example, FMW1221 is the prefix you used when you created the schemas as you prepared to configure the initial enterprise deployment domain.
Note that in previous versions of Oracle Fusion Middleware, you had to manually create a user and tablespace for the migration leasing table. In Fusion Middleware 12c (12.2.1), the leasing table is created automatically when you create the WLS schemas with the Repository Creation Utility (RCU).
Password: Enter the password you used when you created the WLS schema in RCU.
Confirm Password: Enter the password again and click Next.
Here is an example of a successful connection notification:
Connection test for jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=wcpedg.example.com))) succeeded.
Select FAN Enabled to subscribe to and process Oracle FAN events.
Enter the SCAN address in the ONS Host and Port field, and then click Add:
This value should be the ONS host and ONS remote port for the RAC database. To find the ONS remote port for the database, you can use the following command on the database host:
[orcl@db-scan1 ~]$ srvctl config nodeapps -s
ONS exists: Local port 6100, remote port 6200, EM port 2016
For Oracle Database 11g Release 1 (11.1), use the hostname and port of each database's ONS service, for example:
custdbhost1.example.com (port 6200)
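If you script this lookup, the remote port can be pulled from the srvctl output with a small helper. This helper is a hypothetical convenience for illustration, not part of any Oracle tooling.

```shell
# Hypothetical helper: extract the ONS remote port from the output of
# `srvctl config nodeapps -s` (sample line matches the example above).
ons_remote_port() {
  sed -n 's/.*remote port \([0-9][0-9]*\).*/\1/p'
}

echo "ONS exists: Local port 6100, remote port 6200, EM port 2016" | ons_remote_port
# prints 6200
```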
Here is an example of a successful connection notification:
Connection test for db-scan.example.com:6200 succeeded.
After you have prepared your domain for whole server migration or automatic service migration, you can configure Whole Server Migration for specific Managed Servers within a cluster.
Use this section to edit the Node Manager properties file on the two nodes where the servers are running.
Set the StartScriptEnabled property in the nodemanager.properties file to true. This is required to enable Node Manager to start the managed servers.

Add the following properties to the nodemanager.properties file to enable server migration to work properly:

Interface: This property specifies the interface name for the floating IP (eth0, for example). Do not specify the sub-interface, such as eth0:2. This interface is to be used without the :0 or :1 suffix; the Node Manager scripts traverse the different :X-enabled IPs to determine which to add or remove. For example, the valid values in Linux environments are eth0, eth1, eth2, and so on, depending on the number of interfaces configured.

NetMask: This property specifies the net mask for the interface for the floating IP.

UseMACBroadcast: This property specifies whether or not to use a node's MAC address when sending ARP packets, that is, whether or not to use the -b flag in the arping command.
...
SecureListener=true
LogCount=1
eth0=*,NetMask=255.255.255.0
...
Use this section to set the environment and superuser privileges for the wlsifconfig.sh script, which is used to transfer IP addresses from one machine to another during migration. The script must be able to run ifconfig, which is generally only available to superusers.
For more information about the
wlsifconfig.sh script, see Configuring Automatic Whole Server Migration in Administering Clusters for Oracle WebLogic Server.
Refer to the following sections for instructions on preparing your system to run the wlsifconfig.sh script.
Ensure that the commands listed in the following table are included in the PATH environment variable for each host computer.
Grant sudo privilege to the operating system user (for example, oracle) with no password restriction, and grant execute privilege on the /sbin/ifconfig and /sbin/arping binaries.

For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script.
Ask the system administrator for the sudo and system rights as appropriate to perform this required configuration task.
The following is an example of an entry inside /etc/sudoers granting sudo execution privilege to oracle for running ifconfig and arping:

Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
To configure migration in a cluster:
Log in to the Oracle WebLogic Server Administration Console.
In the Domain Structure window, expand Environment and select Clusters. The Summary of Clusters page appears.
Click the cluster for which you want to configure migration in the Name column of the table.
Click the Migration tab.
Click Lock & Edit.
Select Database as Migration Basis. From the drop-down list, select Leasing as Data Source For Automatic Migration.
Under Candidate Machines For Migratable Server, in the Available field, select the Managed Servers in the cluster and click the right arrow to move them to Chosen.
Select the Leasing data source that you created in Creating a GridLink Data Source for Leasing.
Set the Candidate Machines for Server Migration. You must perform this task for all of the managed servers as follows:
In Domain Structure window of the Oracle WebLogic Server Administration Console, expand Environment and select Servers.
Select the server for which you want to configure migration.
Click the Migration tab.
Select Automatic Server Migration Enabled and click Save.
This enables the Node Manager to start a failed server on the target node automatically.
For information on targeting applications and resources, see Using Multi Data Sources with Oracle RAC.
In the Available field, located in the Migration Configuration section, select the machines to which to allow migration and click the right arrow.
In this step, you are identifying the host to which the Managed Server should fail over if the current host is unavailable. For example, for the Managed Server on HOST1, select HOST2; for the Managed Server on HOST2, select HOST1.
Click Customize this table in the Summary of Servers page, and move Current Machine from the Available window to the Chosen window to view the machine on which the server is currently running. This can differ from the configured machine if the server has been migrated automatically.
Click Activate Changes.
Restart the Administration Server and the servers for which server migration has been configured.
Perform the steps in this section to verify that automatic whole server migration is working properly.
To test from Node 1:
Stop the managed server process.
kill -9 pid
pid specifies the process ID of the managed server. You can identify the pid in the node by running this command:
ps -ef | grep WLS_SOA1
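The lookup and kill can be combined in a small sketch. The grep pattern on the weblogic.Name system property and the sample ps line below are assumptions for illustration only.

```shell
# Hypothetical sketch: find the PID of a Managed Server from ps -ef output
# by matching its weblogic.Name system property (second column is the PID).
find_pid() {
  grep "weblogic.Name=$1" | awk '{print $2}'
}

# Sample ps -ef line for WLS_SOA1; in practice, pipe `ps -ef` into find_pid.
printf 'oracle 4668 1 2 10:00 ? 00:01:02 java -Dweblogic.Name=WLS_SOA1 weblogic.Server\n' \
  | find_pid WLS_SOA1
# prints 4668; the PID can then be passed to kill -9
```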
Watch the Node Manager console (the terminal window where you performed the kill command): you should see a message indicating that the managed server's floating IP has been disabled.
Wait for the Node Manager to try a second restart of the Managed Server. Node Manager waits for a period of 30 seconds before trying this restart.
After Node Manager restarts the server, and before it reaches the RUNNING state, kill the associated process again.
Node Manager should log a message indicating that the server will not be restarted again locally.
The number of restarts required is determined by the
RestartMax parameter in the following configuration file:
The default value is
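For reference, restart limits of this kind are kept in the server's Node Manager startup.properties file. The path, server name, and values below are illustrative assumptions, not taken from this document.

```properties
# Illustrative: DOMAIN_HOME/servers/WLS_SOA1/data/nodemanager/startup.properties
AutoRestart=true
RestartMax=2
RestartDelaySeconds=0
```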
To test from Node 2:
Watch the local Node Manager console. Approximately 30 seconds after the last restart attempt on Node 1, Node Manager on Node 2 should indicate that the floating IP for the managed server is being brought up and that the server is being restarted on this node.
Access a product URL that uses the same floating IP address. If the URL loads successfully, then the migration was successful.
Verification From the Administration Console
You can also verify migration using the Oracle WebLogic Server Administration Console:
The Migration Status table provides information on the status of the migration.
After a server is migrated, to fail it back to its original machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the managed server on the machine to which it was originally assigned.
You may need to configure automatic service migration for specific services in an enterprise deployment.
The following procedure assumes you have already created the Leasing data source, as described in Creating a GridLink Data Source for Leasing.
After you set the leasing mechanism and data source for the cluster, you can then enable automatic JTA migration for the Managed Servers that you want to configure for service migration. Note that this topic applies only if you are deploying JTA services as part of your enterprise deployment.
When you configure Automatic Service Migration, you select a Service Migration Policy for each cluster. This topic provides guidelines and considerations when selecting the Service Migration Policy.
For example, products or components running singletons or using Path services can benefit from the Auto-Migrate Exactly-Once policy. With this policy, if at least one Managed Server in the candidate server list is running, the services hosted by this migratable target will be active somewhere in the cluster if servers fail or are administratively shut down (either gracefully or forcibly). This can cause multiple homogeneous services to end up in one server on startup.
When you are using this policy, you should monitor the cluster startup to identify what services are running on each server. You can then perform a manual failback, if necessary, to place the system in a balanced configuration.
Other Fusion Middleware components are better suited for the Auto-Migrate Failure-Recovery Services policy.
Based on these guidelines, you should use the Auto-Migrate Failure-Recovery Services policy for the clusters in an Oracle WebCenter Content enterprise deployment.
For more information, see Policies for Manual and Automatic Service Migration in Administering Clusters for Oracle WebLogic Server.
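At the configuration level, the chosen policy is recorded on the migratable target. The fragment below assumes the standard config.xml representation of a migratable target; the target and cluster names are hypothetical.

```xml
<!-- Illustrative config.xml fragment; target and cluster names are
     hypothetical, and migration-policy holds the selected policy value -->
<migratable-target>
  <name>WLS_WCC1 (migratable)</name>
  <migration-policy>failure-recovery</migration-policy>
  <cluster>WCC_Cluster</cluster>
</migratable-target>
```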
Use the following command to end the Managed Server Process and simulate a crash scenario:
kill -9 pid
In this example, replace pid with the process ID (PID) of the Managed Server. You can identify the PID by running the following UNIX command:
ps -ef | grep managed_server_name
Note that the Managed Server might be configured to restart automatically after you initially kill the process. In this case, you must kill the second process using the kill -9 command again.
You should see a message indicating that the selected Managed Server has failed. The message will be similar to the following:
<INFO> <domain_name> <server_name> <The server 'server_name' with process id 4668 is no longer alive; waiting for the process to die.>
<INFO> <domain_name> <server_name> <Server failed during startup. It may be retried according to the auto restart configuration.>
<INFO> <domain_name> <server_name> <Server failed but will not be restarted because the maximum number of restart attempts has been exceeded.>
For JMS tests, it is a good practice to get message counts from destinations and make sure that there are no stuck messages in any of the migratable targets:
For example, for uniform distributed destinations (UDDs):
Access the JMS Subdeployment module in the Administration Console:
In the Domain Structure pane, select Services, then Messaging, and then JMS Modules.
Click the JMS Module.
Click the destination in the Summary of Resources table.
Select the Monitoring tab, and review the Messages Total and Messages Pending values in the Destinations table.
When Automatic Service Migration occurs, Oracle WebLogic Server does not support failing back services to their original server when a server is back online and rejoins the cluster.
As a result, after the Automatic Service Migration migrates specific JMS services to a backup server during a fail-over, it does not migrate the services back to the original server after the original server is back online. Instead, you must migrate the services back to the original server manually.
To fail back a service to its original server, follow these steps:
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the Domain Structure tree, expand Environment, expand Clusters, and then select Migratable Targets.
To migrate one or more migratable targets at once, on the Summary of Migratable Targets page:
Click the Control tab.
Use the check boxes to select one or more migratable targets to migrate.
Use the New hosting server drop-down to select the original Managed Server.
A request is submitted to migrate the JMS-related service and the configuration edit lock is released. In the Migratable Targets table, the Status of Last Migration column indicates whether the requested migration has succeeded or failed.
To migrate a specific migratable target, on the Summary of Migratable Targets page:
Select the migratable target to migrate.
Click the Control tab.
Reselect the migratable target to migrate.
Use the New hosting server drop-down to select a new server for the migratable target.