Oracle® Agile Engineering Data Management Administration Guide
Release e6.2.0.0
E52550-02

6 Cluster Setup for Servers

We support two general ways to deploy the Agile e6 software:

  1. A single J2EE server on a separate node, hosting WebLogic Server and all J2EE components for Agile e6, plus an additional set of servers hosting the native Agile e6 components.

  2. A set of servers, each of which hosts a complete set of all Agile e6 server components - both J2EE and native. The nodes are installed in a cluster, e.g. an NLB cluster, and load balancing tools are used to distribute the load across the servers.


Note:

For further information, please refer to the Architecture Guide for Agile e6.2.0.0.

6.1 One J2EE Server on a Separate Node

The server on which the Agile e6 J2EE components are installed - the WebLogic server - is separated from the server that hosts the native Agile e6 components - the EDM server.

We highly recommend setting up two WebLogic servers on separate nodes for failover. The server that runs from the beginning is called the main WebLogic node. The second WebLogic node remains inactive as a failover server until the main WebLogic server stops working.

6.1.1 Installation

  1. Install the WebLogic server nodes.


    Note:

    For further information, please refer to the chapter "Component Based Installation" in the Server Installation Guide on Windows and UNIX for Agile e6.2.0.0.

  2. After completing the installation, stop the failover application server on the failover node.


    Note:

    Make sure that it does not start automatically, e.g. through customized startup scripts.

  3. After completing the application server installation, install the EDM systems on separate nodes.

  4. Configure each of them to use the main application server node.

    Figure: Normal Case

6.1.2 Prepare Failover Configuration in Case of Errors

  1. After completing the installation, copy the application configuration file <ep_root>/init/<application-name>.xml for each EDM system and rename the copy, e.g. to <application-name>_failover.xml.

  2. Edit the renamed file and change the application server entry so that it points to the failover application server on the failover node.

    Example:

    <IPC AbsEciUrl="eci://localhost:19997" .... </IPC>

    Replace the host name in AbsEciUrl (localhost in this example) with the host name of the failover node, as shown in the sketch below.
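When several EDM systems have to be prepared, this copy-and-edit step can be scripted. The following Python sketch is illustrative only - the script name, function name, and command-line handling are assumptions, not part of the product:

    # prepare_failover.py - illustrative sketch, not part of Agile e6.
    # Copies <ep_root>/init/<application-name>.xml to
    # <application-name>_failover.xml and rewrites the AbsEciUrl host.
    import re
    import shutil
    import sys
    from pathlib import Path

    def prepare_failover(ep_root, app_name, failover_host):
        src = Path(ep_root) / "init" / (app_name + ".xml")
        dst = src.with_name(app_name + "_failover.xml")
        shutil.copyfile(src, dst)
        text = dst.read_text(encoding="utf-8")
        # Replace the host part of eci://<host>:<port>; the port is kept.
        text = re.sub(r'(AbsEciUrl="eci://)[^:"]+',
                      r"\g<1>" + failover_host, text)
        dst.write_text(text, encoding="utf-8")
        return dst

    if __name__ == "__main__":
        # e.g. python prepare_failover.py /opt/agile myapp failover-node01
        print(prepare_failover(sys.argv[1], sys.argv[2], sys.argv[3]))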
    

In Case of Errors:

  1. If the main application server stops and cannot be restarted, start the failover application server on the failover node.

  2. Switch to the failover configuration for each EDM system (a scripted variant is sketched after this procedure).

    1. Stop the EDM system.

    2. Rename the original configuration file <ep_root>/init/<application-name>.xml to e.g. <application-name>_org.xml.

    3. Rename the backup configuration file <application-name>_failover.xml to <application-name>.xml.


      Note:

      This file will now be used by the EDM system.

    4. Start the EDM system.

      Figure: Failover App Server

      Note:

      The switch to the failover application server is now complete.
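The file-rename sequence in the procedure above can also be scripted. The following Python sketch is illustrative only - stopping and starting the EDM system is site-specific and is therefore only marked with comments:

    # switch_to_failover.py - illustrative sketch, not part of Agile e6.
    import sys
    from pathlib import Path

    def switch_to_failover(ep_root, app_name):
        init_dir = Path(ep_root) / "init"
        active = init_dir / (app_name + ".xml")
        failover = init_dir / (app_name + "_failover.xml")
        backup = init_dir / (app_name + "_org.xml")
        # Stop the EDM system first (site-specific command, not shown).
        # Keep the original configuration as <application-name>_org.xml.
        active.rename(backup)
        # Activate the failover configuration under the original name.
        failover.rename(active)
        # Start the EDM system again (site-specific command, not shown).

    if __name__ == "__main__":
        switch_to_failover(sys.argv[1], sys.argv[2])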

6.2 Several Application Servers are Active

The second scenario is a load-balanced cluster with the native Agile e6 components and the WebLogic servers installed on all nodes. In this case, the Watchdog of the workflow services can only run on one application server. All other Agile e6 J2EE components use the load balancing feature, including the cache of the permission managers, which is synchronized across all cluster nodes. After a successful Agile e6 installation, the Watchdog therefore needs to be deactivated on all servers where it is not required, as described below.

Deactivation and Failover for the Workflow Watchdog:

  1. Install the WebLogic server and the EDM server on all nodes.


    Note:

    Further information about the installation can be found in the respective installation manuals for Agile e6.2.0.0.

  2. Define the node for the active workflow services.

  3. Edit the file "ABS_<application-name>.ini" on all other nodes.

  4. Search for the following entries (a scripted variant of this edit is sketched after this list):

    [ServiceManager\Nodes\localhost\Threads\K]
    className=com.agile.abs.workflow.watchdog.WatchdogService
    localThread=true
    

    And change it to (comment the entries out):

    #[ServiceManager\Nodes\localhost\Threads\K]
    #className=com.agile.abs.workflow.watchdog.WatchdogService
    #localThread=true
    
  5. Redeploy all changed applications.
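
Commenting these entries in and out on each node can be scripted. The following Python sketch is illustrative only - the script and function names are assumptions, and for brevity it matches the three lines literally instead of tracking ini sections:

    # toggle_watchdog.py - illustrative sketch, not part of Agile e6.
    import sys
    from pathlib import Path

    WATCHDOG_LINES = (
        "[ServiceManager\\Nodes\\localhost\\Threads\\K]",
        "className=com.agile.abs.workflow.watchdog.WatchdogService",
        "localThread=true",
    )

    def toggle_watchdog(ini_path, enable):
        # enable=False comments the Watchdog entries out,
        # enable=True removes the comment characters again.
        path = Path(ini_path)
        lines = []
        for line in path.read_text(encoding="utf-8").splitlines():
            stripped = line.lstrip("#")
            if stripped in WATCHDOG_LINES:
                lines.append(stripped if enable else "#" + stripped)
            else:
                lines.append(line)
        path.write_text("\n".join(lines) + "\n", encoding="utf-8")

    if __name__ == "__main__":
        # e.g. python toggle_watchdog.py ABS_myapp.ini disable
        toggle_watchdog(sys.argv[1], sys.argv[2] == "enable")

After running the script on a node, redeploy the changed application as described in step 5.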

In Case of Errors:

  1. If the main application server with the active workflow service stops and cannot be restarted, remove the comment characters in the file "ABS_<application-name>.ini" on one of the other nodes.

  2. Redeploy the changed application.


    Note:

    The switch to another application server with an active workflow service is now complete.
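
With the hypothetical toggle_watchdog helper sketched above, steps 1 and 2 on the chosen node reduce to one call followed by the redeployment ("ABS_myapp.ini" is a placeholder file name):

    # Re-activate the Watchdog on the chosen node, then redeploy.
    toggle_watchdog("ABS_myapp.ini", enable=True)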

6.2.1 Manage Synchronization between Permission Manager Instances

If several permission managers are used, their caches can be synchronized automatically after a defined time period. The synchronization parameters can be configured in "ABS_<application-name>.ini" for every node.

  • prmCache - activates the permission manager cache

    #activate permission manager cache
    prmCache=true
    

    Possible values:

    • missing entry = caching activated (default)

    • true = caching activated (default)

    • false = caching deactivated

    To enable caching, set prmCache=true. If the prmCache entry is missing from the "ABS_<application-name>.ini" file, the cache is activated by default. To disable caching, set prmCache=false.

  • prmPeriod - period to check/synchronize the cache of several permission managers

    #period to check/synchronize the cache of several permission managers
    prmPeriod=0:10
    

    prmPeriod defines the time between two checks of the prm time stamp on the database. If anything relevant to the permission manager happened during that period, the prm time stamp on the database was changed. The value is formatted in hours:minutes (format: hh:mm). The default is 10 minutes (0:10).


      Note:

      After the update, the changed application has to be redeployed.
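
The hh:mm format is easy to misread, so converting a planned prmPeriod value into minutes can be a useful sanity check. The following Python snippet is purely illustrative:

    # Convert a prmPeriod value (format hh:mm) into minutes.
    def prm_period_minutes(value):
        hours, minutes = value.split(":")
        return int(hours) * 60 + int(minutes)

    # The default 0:10 is 10 minutes; 1:30 would be 90 minutes.
    assert prm_period_minutes("0:10") == 10
    assert prm_period_minutes("1:30") == 90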