Using WebLogic Server Clusters


Migration

The following sections describe the different migration mechanisms supported by WebLogic Server:

These sections focus on migration of failed server instances and services. WebLogic Server also supports replication and failover at the application level. For more information, see Failover and Replication in a Cluster.

 


Understanding Server and Service Migration

Note: Server Migration is not supported on all platforms. See Server Migration in WebLogic Platform 9.2 Supported Configurations.

In a WebLogic Server cluster, most services are deployed homogeneously on all server instances in the cluster, enabling transparent failover from one server to another. In contrast, "pinned services" such as JMS and the JTA transaction recovery system are targeted at individual server instances within a cluster—for these services, WebLogic Server supports failure recovery with migration, as opposed to failover.

Migration in WebLogic Server is the process of moving a clustered WebLogic Server instance, or a component running on a clustered instance, to a different location in the event of failure. In the case of server migration, the server instance is migrated to a different physical machine upon failure. In the case of service-level migration, the services are moved to a different server instance within the cluster.

WebLogic Server provides a feature for making JMS and the JTA transaction system highly available: migratable servers. Migratable servers provide for both automatic and manual migration at the server level, rather than the service level.

Note: Server-level migration is an alternative to service-level migration. Service migration and server migration are not intended to be used in combination. If you migrate an individual service within your cluster, do not migrate an entire server instance.
Note: Server migration is only supported when you use the SSH version of Node Manager. Server migration is not supported on Windows.

When a migratable server becomes unavailable for any reason (for example, if it hangs, loses network connectivity, or its host machine fails), migration is automatic. Upon failure, a migratable server is automatically restarted on the same machine if possible. If the migratable server cannot be restarted on the machine where it failed, it is migrated to another machine. In addition, an administrator can manually initiate migration of a server instance.

 


Migration Terminology

The following terms apply to server and service migration:

 


Leasing

Leasing is the process WebLogic Server uses to manage services that are required to run on only one member of a cluster at a time. Leasing ensures exclusive ownership of a cluster-wide entity. Within a cluster, there is a single owner of a lease. Additionally, leases can fail over in case of server or cluster failure, which helps to avoid having a single point of failure.
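The leasing contract described above can be pictured with a small sketch. The following is illustrative Java only, not WebLogic's implementation; all class and member names are hypothetical. It models the three guarantees just listed: exclusive ownership, renewal by the current owner, and takeover of a lapsed lease.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch only -- not the WebLogic implementation.
// It shows the contract leasing provides: at most one live owner at a
// time, ownership that lapses unless renewed, and takeover of an
// expired lease by another cluster member.
public class LeaseSketch {
    // One lease record: who owns it and when that ownership lapses.
    private static final class Record {
        final String owner;
        final long expiresAtMillis;
        Record(String owner, long expiresAtMillis) {
            this.owner = owner;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final AtomicReference<Record> record = new AtomicReference<>();
    private final long leaseMillis;

    public LeaseSketch(long leaseMillis) { this.leaseMillis = leaseMillis; }

    // Try to claim the lease; succeeds only if it is unowned or expired.
    public boolean tryAcquire(String member, long nowMillis) {
        Record current = record.get();
        if (current != null && current.expiresAtMillis > nowMillis
                && !current.owner.equals(member)) {
            return false;                     // someone else holds a live lease
        }
        return record.compareAndSet(current,
                new Record(member, nowMillis + leaseMillis));
    }

    // Periodic renewal: only the current owner may extend the lease.
    public boolean renew(String member, long nowMillis) {
        Record current = record.get();
        if (current == null || !current.owner.equals(member)) return false;
        return record.compareAndSet(current,
                new Record(member, nowMillis + leaseMillis));
    }

    // The owner whose lease is still live at the given moment, if any.
    public String ownerAt(long nowMillis) {
        Record current = record.get();
        return (current != null && current.expiresAtMillis > nowMillis)
                ? current.owner : null;
    }
}
```

In the real product this record lives in a database table or in replicated memory, as described under Leasing Versions below.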

Features that Use Leasing

The following WebLogic Server features use leasing:

Note: Beyond basic configuration, most leasing functionality is handled internally by WebLogic Server.

Leasing Versions

WebLogic Server provides two separate implementations of the leasing functionality. Which one you use depends on your requirements and your environment.

Note: Within a WebLogic Server installation, you can only use one type of leasing. Although it is possible to implement multiple features that use leasing within your environment, each must use the same kind of leasing.

High-availability Database Leasing

In this version of leasing, lease information is maintained within a table in a high-availability database. A high-availability database is required to ensure that leasing information is always available. Each member of the cluster must be able to connect to the database in order to access leasing information.

This method of leasing is useful for customers who already have a high-availability database within their clustered environment. This method allows you to utilize leasing functionality without being required to use Node Manager to manage servers within your environment.

Note: If you have configured the Job Scheduler in a cluster, it does not function when the database is not configured for leasing.

The following steps outline the procedure for configuring your database for leasing:

  1. Configure the database for server migration. The information stored in the database is used to determine whether a server is running or needs to be migrated. For more information on leasing, see Leasing.

    Your database must be reliable; the server instances will only be as reliable as the database. For experimental purposes, a normal database will suffice. For a production environment, only high-availability databases are recommended. If the database goes down, all the migratable servers will shut themselves down.
  2. Create the leasing table in the database. This table is used to store the machine-server associations that enable server migration. The schema for this table is located in:

    <WEBLOGIC_HOME>/server/db/<dbname>/leasing.ddl

    where dbname is the name of the database vendor.

    Note: The leasing table should be stored in a highly available database. Migratable servers are only as reliable as the database used to store the leasing table.
  3. Set up and configure a data source. This data source should point to the database configured in the previous step.

    Note: XA data sources are not supported for server migration.

    For more information on creating a JDBC data source, see Configuring JDBC Data Sources in Configuring and Managing WebLogic JDBC.

Non-database Leasing

In the non-database version of leasing, WebLogic Server maintains leasing information in-memory. This removes the requirement of having a high-availability database to use features that require leasing.

One member of a cluster is chosen as the cluster leader and is responsible for maintaining the leasing information. The cluster leader is chosen based on the length of time that has passed since startup. The managed server that has been running the longest within a cluster is chosen as the cluster leader. Other cluster members communicate with this server to determine leasing information, however, the leasing table is replicated to other nodes of the cluster to provide failover.

Note: This version of leasing requires that you use Node Manager to control servers within the cluster. Node Manager should also be running on every machine hosting managed servers within the cluster. For more information, see Using Node Manager to Control Servers.
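The leader-selection rule just described (the managed server that has been running the longest becomes the cluster leader) amounts to picking the member with the earliest start time. The following is illustrative Java only, not a WebLogic API; the names are hypothetical.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch only -- not a WebLogic API. The longest-running
// cluster member, i.e. the one with the earliest start time, is the leader.
public class ClusterLeaderSketch {
    public record Member(String name, long startTimeMillis) {}

    // Returns the longest-running member (earliest starter), if any.
    public static Optional<Member> leaderOf(List<Member> members) {
        return members.stream()
                      .min(Comparator.comparingLong(Member::startTimeMillis));
    }
}
```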

 


Automatic Server Migration

Note: Server Migration is not supported on all platforms. See Server Migration in WebLogic Platform 9.2 Supported Configurations.

This section outlines the procedures for configuring server migration and provides a general discussion of how server migration functions within a WebLogic Server environment.

The following topics are covered:

Preparing for Automatic Server Migration

Before configuring server migration, be aware of the following requirements:

Configuring Automatic Server Migration

Before configuring server migration, ensure that your environment meets the requirements outlined in Preparing for Automatic Server Migration.

To configure server migration for a Managed Server within a cluster, perform the following tasks:

  1. Obtain floating IP addresses for each Managed Server that will have migration enabled.

    Each migratable server must be assigned a floating IP address that follows the server from one physical machine to another after migration. Any server that is assigned a floating IP address must also have AutoMigrationEnabled set to true.

    Note: The migratable IP address should not be present on the interface of any of the candidate machines before the migratable server is started.
  2. Configure Node Manager. Node Manager must be running and configured to allow server migration.

    Note: Server migration is only supported using the SSH version of Node Manager.

    For general information on using Node Manager in server migration, see Node Manager's Role in Server Migration. For general information on configuring Node Manager, see Using Node Manager to Control Servers.
  3. If you are using a database to manage leasing information, configure the database for server migration according to the procedures outlined in High-availability Database Leasing. For general information on leasing, see Leasing.
  4. If you are using a database to store leasing information, set up and configure a data source according to the procedures outlined in High-availability Database Leasing. Set DataSourceForAutomaticMigration to this data source in each cluster configuration.

    Note: XA data sources are not supported for server migration.

    For more information on creating a JDBC data source, see Configuring JDBC Data Sources in Configuring and Managing WebLogic JDBC.
  5. Ensure that the wlsifconfig.sh script is configured so that ifconfig has the proper permissions.

    This script is used to transfer IP addresses from one machine to another during migration, and it must be able to run ifconfig. If you invoke this script using sudo, you do not need to edit the script manually.

    This script is available in the $BEA_HOME/weblogic92/common/bin directory.
  6. Ensure that wlsifconfig.sh, wlscontrol.sh, and nodemanager.domains are included in your machines' PATH. The .sh files are located in $BEA_HOME/weblogic92/common/bin, and nodemanager.domains is located in $BEA_HOME/weblogic92/common/nodemanager.

    Depending on your default shell, you may need to edit the first line of these scripts.
  7. Ensure that the machines that host migratable servers trust each other. For server migration to occur, it must be possible to get to a shell prompt using 'ssh/rsh machine_A' from machine_B and vice versa without explicitly entering a username and password. Also, each machine must be able to connect to itself using SSH in the same way.

    Note: Ensure that your login scripts (.cshrc, .profile, .login, and so on) only echo messages if the shell is interactive. WebLogic Server uses an ssh command to log in and echo the contents of the server.state file; only the first line of this output is used to determine the server state.
  8. Set the candidate machines for server migration. Each server can have a different set of candidate machines, or they can all have the same set.
  9. Edit wlscontrol.sh and set the Interface variable to the name of your network interface.
  10. Restart the Administration Server.

Using High Availability Storage for State Data

The server migration process migrates services, but not the state information associated with work in process at the time of failure.

To ensure high availability, it is critical that such state information remains available to the server instance and the services it hosts after migration. Otherwise, data about the work in process at the time of failure may be lost. State information maintained by a migratable server, such as the data contained in transaction logs, should be stored in a shared storage system that is accessible to any potential machine to which a failed migratable server might be migrated. For highest reliability, use a shared storage solution that is itself highly available—for example, a storage area network (SAN).

In addition, if you are using a database to store leasing information, the lease table, described in the following sections, which is used to track the health and liveness of migratable servers, should also be stored in a high-availability database. For more information, see Leasing.

Server Migration Processes and Communications

The sections that follow describe key processes in a cluster that contains migratable servers:

Startup Process in a Cluster with Migratable Servers

Figure 7-1, Startup of Cluster with Migratable Servers, on page 7-11 illustrates the processing and communications that occur during startup of a cluster that contains migratable servers.

The example cluster contains two Managed Servers, both of which are migratable. The Administration Server and the two Managed Servers each run on different machines. A fourth machine is available as a backup—in the event that one of the migratable servers fails. Node Manager is running on the backup machine and on each machine with a running migratable server.

Figure 7-1 Startup of Cluster with Migratable Servers


These are the key steps that occur during startup of the cluster illustrated in Figure 7-1:

  1. The administrator starts up the cluster.
  2. The Administration Server invokes Node Manager on Machines B and C to start Managed Servers 1 and 2, respectively. See Administration Server's Role in Server Migration.
  3. The Node Manager on each machine starts up the Managed Server that runs there. See Node Manager's Role in Server Migration.
  4. Managed Servers 1 and 2 contact the Administration Server for their configuration. See Migratable Server Behavior in a Cluster.
  5. Managed Servers 1 and 2 cache the configuration with which they started up.
  6. Managed Servers 1 and 2 each obtain a migratable server lease in the lease table. Because Managed Server 1 starts up first, it also obtains a cluster master lease. See Cluster Master's Role in Server Migration.
  7. Managed Servers 1 and 2 periodically renew their leases in the lease table, proving their health and liveness.

Automatic Migration Process

Figure 7-2, Automatic Migration of a Failed Server, on page 7-13 illustrates the automatic migration process after the failure of the machine hosting Managed Server 2.

Figure 7-2 Automatic Migration of a Failed Server


  1. Machine C, which hosts Managed Server 2, fails.
  2. Upon its next periodic review of the lease table, the cluster master detects that Managed Server 2's lease has expired. See Cluster Master's Role in Server Migration.
  3. The cluster master tries to contact Node Manager on Machine C to restart Managed Server 2, but fails, because Machine C is unreachable.

    Note: If Managed Server 2's lease had expired because it was hung, and Machine C was reachable, the cluster master would use Node Manager to restart Managed Server 2 on Machine C.
  4. The cluster master contacts Node Manager on Machine D, which is configured as an available host for migratable servers in the cluster.
  5. Node Manager on Machine D starts Managed Server 2. See Node Manager's Role in Server Migration.
  6. Managed Server 2 starts up and contacts the Administration Server to obtain its configuration.
  7. Managed Server 2 caches the configuration it started up with.
  8. Managed Server 2 obtains a migratable server lease.
During migration, the clients of the Managed Server that is migrating may experience a brief interruption in service; it may be necessary to reconnect. On Solaris and Linux operating systems, this can be done using the ifconfig command. The clients of a migrated server do not need to know the particular machine to which it has migrated.

When a machine that previously hosted a server instance that was migrated becomes available again, the reversal of the migration process—migrating the server instance back to its original host machine—is known as failback. WebLogic Server does not automate the process of failback. An administrator can accomplish failback by manually restoring the server instance to its original host.

The general procedures for restoring a server to its original host are as follows:

The exact procedures you will follow depend on your server and network environment.

Manual Migration Process

Figure 7-3, Manual Server Migration, on page 7-15 illustrates what happens when an administrator manually migrates a migratable server.

Figure 7-3 Manual Server Migration


  1. An administrator uses the Administration Console to initiate the migration of Managed Server 2 from Machine C to Machine B.
  2. The Administration Server contacts Node Manager on Machine C. See Administration Server's Role in Server Migration.
  3. Node Manager on Machine C stops Managed Server 2.
  4. Managed Server 2 removes its row from the lease table.
  5. The Administration Server invokes Node Manager on Machine B.
  6. Node Manager on Machine B starts Managed Server 2.
  7. Managed Server 2 obtains its configuration from the Administration Server.
  8. Managed Server 2 caches the configuration it started up with.
  9. Managed Server 2 adds a row to the lease table.

Administration Server's Role in Server Migration

In a cluster that contains migratable servers, the Administration Server:

In addition, the Administration Server provides its regular domain management functionality, persisting configuration updates issued by an administrator, and providing a run-time view of the domain, including the migratable servers it contains.

Migratable Server Behavior in a Cluster

A migratable server is a clustered Managed Server that has been configured as migratable. These are the key behaviors of a migratable server:

Node Manager's Role in Server Migration

The use of Node Manager is required for server migration; it must run on each machine that hosts, or is intended to host, a migratable server.

Node Manager supports server migration in these ways:

Cluster Master's Role in Server Migration

In a cluster that contains migratable servers, one server instance acts as the cluster master. Its role is to orchestrate the server migration process. Any server instance in the cluster can serve as the cluster master. When you start a cluster that contains migratable servers, the first server to join the cluster becomes the cluster master and starts up the cluster manager service. If a cluster does not include at least one migratable server, it does not require a cluster master, and the cluster master service does not start up. In the absence of a cluster master, migratable servers can continue to operate, but server migration is not possible. These are the key functions of the cluster master:

 


JMS and JTA Service Migration

WebLogic Server supports service-level migration for JMS servers and the JTA transaction recovery service. These are referred to as migratable services, because you can move them from one server to another within a cluster.

Note: JMS also offers improved service continuity in the event of a single WebLogic Server failure by enabling you to configure multiple physical destinations (queues and topics) as part of a single distributed destination set.

WebLogic Server also supports migration at the server level—a complete server instance, and all of the services it hosts can be migrated to another machine, either automatically, or manually. This feature is described in Understanding Server and Service Migration.

Note: If you are using a database to maintain leasing information, the leasing table should be stored in a highly available database. Migratable servers are only as reliable as the database used to store the leasing table. For more information, see Leasing.

In a WebLogic Server cluster, most services are deployed homogeneously on all the server instances in the cluster, enabling transparent failover from one server to another. In contrast, singleton services, such as JMS and the JTA transaction recovery system, run only on one server in the cluster at any given time.

WebLogic Server allows the administrator to migrate singleton services from one server to another in the cluster, either in response to a server failure or as part of regularly-scheduled maintenance. This capability improves the availability of singleton services in a cluster, because those services can be quickly restarted on a redundant server should the host server fail.

How Migration of JMS and JTA Works

Clients access a migratable service in a cluster using a migration-aware RMI stub. The RMI stub keeps track of which server currently hosts the pinned service, and it directs client requests accordingly. For example, when a client first accesses a pinned service, the stub directs the client request to the server instance in the cluster that currently hosts the service. If the service migrates to a different WebLogic Server between subsequent client requests, the stub transparently redirects the request to the correct target server.

WebLogic Server implements a migration-aware RMI stub for JMS servers and the JTA transaction recovery service when those services reside in a cluster and are configured for migration.
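The stub behavior described above can be sketched as follows. This is illustrative Java only: WebLogic generates its migration-aware RMI stubs internally, and nothing below is a WebLogic API. The toy stub caches the server that currently hosts a pinned service and transparently re-resolves the host when the service has moved.

```java
import java.util.Map;

// Illustrative sketch only -- not a WebLogic API. The stub remembers
// which server currently hosts the pinned service and re-resolves the
// host from the cluster's service-location view when the cached entry
// goes stale (e.g., after the service migrates).
public class MigrationAwareStubSketch {
    // Stand-in for the cluster's current service-to-host mapping.
    private final Map<String, String> serviceLocations;
    private final String serviceName;
    private String cachedHost;            // last known host of the service

    public MigrationAwareStubSketch(String serviceName,
                                    Map<String, String> serviceLocations) {
        this.serviceName = serviceName;
        this.serviceLocations = serviceLocations;
    }

    // Route a request, refreshing the cached host if it is stale.
    public String invoke(String request) {
        String currentHost = serviceLocations.get(serviceName);
        if (currentHost == null) {
            throw new IllegalStateException(
                    serviceName + " is not active on any server");
        }
        if (!currentHost.equals(cachedHost)) {
            cachedHost = currentHost;     // transparent redirect after migration
        }
        return cachedHost + " handled " + request;
    }
}
```

The client code that holds the stub never changes: it keeps calling invoke, and the redirect happens inside the stub.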

Migrating a Service from an Unavailable Server

There are special considerations when you migrate a service from a server instance that has crashed or is unavailable to the Administration Server. If the Administration Server cannot reach the previously active host of the service at the time you perform the migration, that Managed Server's local configuration information will not be updated to reflect that it is no longer the active host for the service. In this situation, you must purge the unreachable Managed Server's local configuration cache before starting it again. This prevents the previous active host from re-activating at startup a service that has been migrated to another Managed Server.

Defining Migratable Target Servers in a Cluster

By default, WebLogic Server can migrate the JTA transaction recovery service or a JMS server to any other server in the cluster. You can optionally configure a list of servers in the cluster that can potentially host a pinned service. This list of servers is referred to as a migratable target, and it controls the servers to which you can migrate a service. In the case of JMS, the migratable target also defines the list of servers to which you can deploy a JMS server.

For example, the following figure shows a cluster of four servers. Servers A and B are configured as the migratable target for a JMS server in the cluster.

Figure 7-4 Migratable Target in Cluster


In the above example, the migratable target allows the administrator to migrate the pinned JMS server only from Server A to Server B, or vice versa. Similarly, when deploying the JMS server to the cluster, the administrator selects either Server A or B as the deployment target to enable migration for the service. (If the administrator does not use a migratable target, the JMS server can be deployed or migrated to any available server in the cluster.)

WebLogic Server enables you to create separate migratable targets for the JTA transaction recovery service and JMS servers. This allows you to always keep each service running on a different server in the cluster, if necessary. Conversely, you can configure the same selection of servers as the migratable target for both JTA and JMS, to ensure that the services remain co-located on the same server in the cluster.

 


Automatic Singleton Service Migration

Automatic singleton service migration allows the automatic health monitoring and migration of singleton services. A singleton service is a service operating within a cluster that is available on only one server at any given time. When a migratable service fails or becomes unavailable for any reason (for example, because of a bug in the service code, server failure, or network failure), it is deactivated at its current location and activated on a new server. The process of migrating these services to another server is handled via the migration master. See Migration Master on page 7-22.

WebLogic Server supports the automatic migration of user-defined singleton services.

Note: Although JMS and JTA are also singleton services that are available on only one node of a cluster at any time, they are not migrated automatically. They must be manually migrated. See How Migration of JMS and JTA Works on page 7-19.

Overview of Singleton Service Migration

This section provides an overview of how automatic singleton service migration is implemented in WebLogic Server.

Migration Master

The migration master is a lightweight singleton service that monitors other services that can be migrated automatically. The server that currently hosts the migration master is responsible for starting and stopping the migration tasks associated with each migratable service.

Note: Migratable services do not have to be deployed on the same server as the migration master, but they must be deployed within the same cluster.

The migration master functions similarly to the cluster master in that it is maintained by lease competition and runs on only one server at a time. Each server in a cluster continuously attempts to register the migration master lease. If the server currently hosting the migration master fails, the next server in the queue takes over the lease and begins hosting the migration master.

For more information on the cluster master, see Cluster Master's Role in Server Migration on page 7-18.

Note: The migration master and cluster master function independently and are not required to be hosted on the same server.

The server hosting the migration master maintains a record of all migrations performed, including the target name, source server, destination server, and the timestamp.

Migration Failure

If the migration of a singleton service fails on every candidate server within the cluster, the service is left deactivated. You can configure the number of times the migration master will iterate through the servers in the cluster.

Note: If you do not explicitly specify a list of candidate servers, the migration master will consider all of the cluster members as possible candidates for migration.

Implementing the Singleton Service Interface

Within an application, you can define a singleton service that can be used to perform tasks that you want to be executed on only one member of a cluster at any given time. The singleton service is implemented as a class within an application and is configured as part of the deployment descriptor on each server where the application is deployed. However, it is only active on one server at any time.

To create a singleton service within an application, you must create a class that, in addition to any tasks you wish the singleton service to perform, implements the weblogic.cluster.singleton.SingletonService interface.

The SingletonService interface contains two methods that are used in the process of migration: activate(), which is invoked when the service is started on its current host, and deactivate(), which is invoked when the service is stopped so that it can be migrated.

After you create a class that implements the SingletonService interface, you should ensure that this class is available in either the APP-INF/lib or APP-INF/classes directory when you create the .ear file for deployment.
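A minimal sketch of such a class follows. Because the real interface ships in weblogic.jar, the example declares a local stand-in with the same two lifecycle methods; a deployed application would implement weblogic.cluster.singleton.SingletonService instead, and the method bodies here are placeholders.

```java
// Local stand-in for weblogic.cluster.singleton.SingletonService, declared
// here only so the example is self-contained; a real application implements
// the interface shipped in weblogic.jar.
interface SingletonService {
    void activate();     // called when this server becomes the active host
    void deactivate();   // called before the service stops or migrates away
}

// Minimal sketch of an application-scoped singleton service.
public class MySingletonServiceImpl implements SingletonService {
    private volatile boolean active;

    @Override
    public void activate() {
        // Start the work that must run on exactly one cluster member,
        // e.g., kick off a polling thread or claim a scheduled task.
        active = true;
    }

    @Override
    public void deactivate() {
        // Release resources so another member can activate cleanly.
        active = false;
    }

    public boolean isActive() { return active; }
}
```

The class name used here matches the hypothetical mypackage.MySingletonServiceImpl referenced in the weblogic-application.xml example in the next section.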

Deploying and Configuring Automatic Service Migration

After you create an application that implements the SingletonService interface, you must perform the following steps to deploy it and run it as a singleton service:

  1. Deploy the application to a cluster.
  2. Define a singleton service within WebLogic Server.
  3. Configure the migration behavior of the singleton service.

The following sections outline these procedures in detail.

Deploying an Application as a Singleton Service

Although the singleton service will be active on only one cluster member at a time, you must deploy your application to every member of the cluster that will serve as a candidate target for the migrated singleton service.

After deploying the application to all of the candidate servers within the cluster, add the following entry to the weblogic-application.xml descriptor for each deployed instance of the application:

<weblogic-application>
...
   <singleton-service>
      <class-name>mypackage.MySingletonServiceImpl</class-name>
      <name>Appscoped_Singleton_Service</name>
   </singleton-service>
...
</weblogic-application>
Note: The <class-name> and <name> elements are required.

Defining a Singleton Service within WebLogic Server

After you have created and deployed your application, you must define a singleton service within WebLogic Server.

This singleton service object contains the following information:

The singleton service object functions as a link between the migration master and the deployed application code. The following excerpt from the cluster element of config.xml shows how a singleton service is defined:

<SingletonService
   Name="SingletonTestServiceName"
   ClassName="mycompany.myprogram.subpackage.SingletonTestServiceImpl"
   Cluster="mycluster"
/>

Configuring Automatic Service Migration

You can configure the behavior of automatic service migration using the following methods:

