Sun GlassFish Communications Server 2.0 High Availability Administration Guide

Working with Clusters

Procedure: To Create a Cluster

  1. In the tree component, select the Clusters node.

  2. On the Clusters page, click New.

    The Create Cluster page appears.

  3. In the Name field, type a name for the cluster.

    The name must:

    • Consist only of uppercase and lowercase letters, numbers, underscores, hyphens, and periods (.)

    • Be unique across all node agent names, server instance names, cluster names, and configuration names

    • Not be the name domain

  4. In the Configuration field, choose a configuration from the drop-down list.

    • To create a cluster that does not use a shared configuration, choose default-config.

      Leave the radio button labeled “Make a copy of the selected Configuration” selected. The copy of the default configuration will have the name cluster_name-config.

    • To create a cluster that uses a shared configuration, choose the configuration from the drop-down list.

      Select the radio button labeled “Reference the selected Configuration” to create a cluster that uses the specified existing shared configuration.

  5. Optionally, add server instances.

    You can also add server instances after the cluster is created.

    Server instances can reside on different machines. Every server instance must be associated with a node agent that can communicate with the DAS. Before you create server instances for the cluster, first create one or more node agents or node agent placeholders. See To Create a Node Agent Placeholder.

    To create server instances:

    1. In the Server Instances To Be Created area, click Add.

    2. Type a name for the instance in the Instance Name field.

    3. Choose a node agent from the Node Agent drop-down list.

  6. Click OK.

  7. Click OK on the Cluster Created Successfully page that appears.

Equivalent asadmin command

create-cluster
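
For example, a command of the following form creates a cluster named mycluster (a sample name); without further options, the cluster receives its own copy of default-config, named mycluster-config:

  asadmin create-cluster mycluster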

See Also

For more details on how to administer clusters, server instances, and node agents, see Deploying Node Agents.

Procedure: To Create Server Instances for a Cluster

Before You Begin

Before you can create server instances for a cluster, you must first create a node agent or node agent placeholder. See To Create a Node Agent Placeholder.

  1. In the tree component, expand the Clusters node.

  2. Select the node for the cluster.

  3. Click the Instances tab to bring up the Clustered Server Instances page.

  4. Click New to bring up the Create Clustered Server Instance page.

  5. In the Name field, type a name for the server instance.

  6. Choose a node agent from the Node Agents drop-down list.

  7. Click OK.

Equivalent asadmin command

create-instance
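
For example, the following command creates a clustered server instance named instance1 under a node agent named agent1 in the cluster mycluster (all names are samples):

  asadmin create-instance --nodeagent agent1 --cluster mycluster instance1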

Procedure: To Configure a Cluster

  1. In the tree component, expand the Clusters node.

  2. Select the node for the cluster.

    On the General Information page, you can perform these tasks:

    • Click Start Instances to start the clustered server instances.

    • Click Stop Instances to stop the clustered server instances.

    • Click Migrate EJB Timers to migrate the EJB timers from a stopped server instance to another server instance in the cluster.

Equivalent asadmin command

start-cluster, stop-cluster, migrate-timers
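
For example, the following commands start and then stop all server instances in a cluster named mycluster (a sample name):

  asadmin start-cluster mycluster
  asadmin stop-cluster mycluster

For an example of migrate-timers, see To Migrate EJB Timers.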

Procedure: To Start, Stop, and Delete Clustered Instances

  1. In the tree component, expand the Clusters node.

  2. Expand the node for the cluster that contains the server instance.

  3. Click the Instances tab to display the Clustered Server Instances page.

    On this page you can:

    • Select the checkbox for one or more instances and click Delete, Start, or Stop to perform that action on the selected server instances.

    • Click the name of the instance to bring up the General Information page.
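
These operations are also available from the command line. For example, the following commands stop and then delete a clustered server instance named instance1 (a sample name):

  asadmin stop-instance instance1
  asadmin delete-instance instance1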

Procedure: To Configure Server Instances in a Cluster

  1. In the tree component, expand the Clusters node.

  2. Expand the node for the cluster that contains the server instance.

  3. Select the server instance node.

  4. On the General Information page, you can:

    • Click Start Instance to start the instance.

    • Click Stop Instance to stop a running instance.

    • Click JNDI Browsing to browse the JNDI tree for a running instance.

    • Click View Log Files to open the server log viewer.

    • Click Rotate Log File to rotate the log file for the instance. This action schedules the log file for rotation. The actual rotation takes place the next time an entry is written to the log file.

    • Click Recover Transactions to recover incomplete transactions.

    • Click the Properties tab to modify the port numbers for the instance.

    • Click the Monitor tab to change monitoring properties.

Procedure: To Configure Applications for a Cluster

  1. In the tree component, expand the Clusters node.

  2. Select the node for the cluster.

  3. Click the Applications tab to bring up the Applications page.

    On this page, you can:

    • From the Deploy drop-down list, select a type of application to deploy. On the Deployment page that appears, specify the application.

    • From the Filter drop-down list, select a type of application to display in the list.

    • To edit an application, click the application name.

    • Select the checkbox next to an application and choose Enable or Disable to enable or disable the application for the cluster.
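
From the command line, you deploy an application to the cluster by naming the cluster as the target. For example, assuming a sample web application archive hello.war and a cluster named mycluster:

  asadmin deploy --target mycluster hello.war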

Procedure: To Configure Resources for a Cluster

  1. In the tree component, expand the Clusters node.

  2. Select the node for the cluster.

  3. Click the Resources tab to bring up the Resources page.

    On this page, you can:

    • Create a new resource for the cluster: from the New drop-down list, select a type of resource to create. Make sure to specify the cluster as a target when you create the resource.

    • Enable or Disable a resource globally: select the checkbox next to a resource and click Enable or Disable. This action does not remove the resource.

    • Display only resources of a particular type: from the Filter drop-down list, select a type of resource to display in the list.

    • Edit a resource: click the resource name.
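
From the command line, you create a resource for the cluster by naming the cluster as the target. For example, the following command creates a JDBC resource for the cluster mycluster, assuming that a connection pool named mypool already exists (all names are samples):

  asadmin create-jdbc-resource --connectionpoolid mypool --target mycluster jdbc/mydatasource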

Procedure: To Delete a Cluster

  1. In the tree component, select the Clusters node.

  2. On the Clusters page, select the checkbox next to the name of the cluster.

  3. Click Delete.

Equivalent asadmin command

delete-cluster
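
For example, the following command deletes a cluster named mycluster (a sample name); the cluster typically must contain no server instances when it is deleted:

  asadmin delete-cluster mycluster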

Procedure: To Migrate EJB Timers

If a server instance stops running abnormally or unexpectedly, you might need to move the EJB timers installed on that server instance to a running server instance in the cluster. To do so, perform these steps:

  1. In the tree component, expand the Clusters node.

  2. Select the node for the cluster.

  3. On the General Information page, click Migrate EJB Timers.

  4. On the Migrate EJB Timers page:

    1. From the Source drop-down list, choose the stopped server instance from which to migrate the timers.

    2. (Optional) From the Destination drop-down list, choose the running server instance to which to migrate the timers.

      If you leave this field empty, a running server instance will be randomly chosen.

    3. Click OK.

  5. Stop and restart the Destination server instance.

    If the source server instance is running, or if the destination server instance is not running, the Admin Console displays an error message.

Equivalent asadmin command

migrate-timers
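
For example, the following command migrates the timers from a stopped instance named instance1 to a running instance named instance2 (sample names). The --target option for specifying the destination is an assumption; check the asadmin help for migrate-timers on your installation:

  asadmin migrate-timers --target instance2 instance1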

Procedure: To Upgrade Components Without Loss of Service

In a clustered environment, a rolling upgrade redeploys an application with a minimal loss of service and sessions. A session can be any replicable artifact, such as an HTTP or SIP session.

You can use the load balancer and multiple clusters to upgrade components within the Communications Server without any loss of service. A component can, for example, be a JVM, the Communications Server, or a web application.

A rolling upgrade can take place under light to moderate load conditions. The procedure typically takes about 10 to 15 minutes per server instance.

Applications must be compatible across the upgrade: they must work correctly during the transition, when some server instances are running the old version and others the new one. The old and new versions must have the same shape of the Serializable classes that form object graphs stored in sessions (for example, the same non-transient instance variables). If the shape of these classes changes, the application developer must ensure that serialization still behaves correctly. If the application is not compatible across the upgrade, the cluster must be stopped for a full redeployment.

The Basic3pcc sample application includes an Ant target, do-rollingupgrade, which performs all the rolling upgrade steps for you. This sample application is included with the Communications Server in the as-install/samples/sipservlet/Basic3pcc directory. The Basic3pcc application and the Ant target are available only with the JAR installer of Communications Server.
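
Assuming that Ant is installed and on your path, you run the target from the sample's directory, where its build file resides:

  cd as-install/samples/sipservlet/Basic3pcc
  ant do-rollingupgrade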

The following procedure describes how to upgrade an application running on all instances of a cluster.

  1. Run the following commands on the converged load balancer configuration for the cluster:

    asadmin set domain.converged-lb-configs.clb_config_name.property.load-increase-factor=1

    asadmin set domain.converged-lb-configs.clb_config_name.property.load-factor-increase-period-in-seconds=0

  2. Set the value of the dynamic-reconfig attribute to false in the cluster.

  3. Redeploy a new version of the application.

    Because you have set the dynamic-reconfig attribute to false, the new version of the application will be loaded to the instance only when the instance restarts.

  4. Disable the instance from the converged load balancer by using the following asadmin command:

    asadmin disable-converged-lb-server instance_name

  5. Back up the current session store with the following command:

    asadmin backup-session-store instance_name

    By default, the session files are stored at instance-dir/rollingupgrade.

  6. Stop the instance with the following command:

    asadmin stop-instance instance_name

  7. Start the instance.

    asadmin start-instance instance_name

  8. Restore the session store.

    asadmin restore-session-store instance_name

  9. Enable the instance in the converged load balancer.

    asadmin enable-converged-lb-server instance_name

  10. Use the following command to get the latest version of the session store, which could have been updated by another instance accessing this session store.

    asadmin reconcile-session-store instance_name

  11. Repeat steps 4 through 10 for each remaining instance in the cluster.

  12. Set the value of the dynamic-reconfig attribute to true in the cluster.
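
As a consolidated sketch of the per-instance steps (4 through 10), the following commands upgrade a single instance named instance1 (a sample name):

  asadmin disable-converged-lb-server instance1
  asadmin backup-session-store instance1
  asadmin stop-instance instance1
  asadmin start-instance instance1
  asadmin restore-session-store instance1
  asadmin enable-converged-lb-server instance1
  asadmin reconcile-session-store instance1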