Configure Essbase Servers in a Failover Cluster

Active-passive failover solutions are common in Essbase 11g On-Premise deployments. Users migrating to Essbase 21c can also implement active-passive failover clusters for the Essbase Agent using WebLogic and a load balancer.

When configuring Essbase failover, the goals are to:
  • Set up failover (active-passive) mode for the Essbase Agent.
  • Set up active-active mode for the Essbase web interface, REST endpoints, and Provider Services. These always connect to the single active Essbase node; a probe sketch follows this list.
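For the load balancer to route around a failed node, it needs a way to tell which nodes are serving. Below is a minimal, hypothetical probe in Python. The host names and the /health path are assumptions for illustration, not documented Essbase 21c endpoints; substitute whatever check your deployment actually exposes.

```python
# Illustrative only: poll each Essbase node over HTTP so a load balancer
# (or an operator script) can see which web tiers are reachable.
# The node URLs and the /health path are placeholders, not real
# Essbase 21c endpoints.
import urllib.request

NODES = ["https://essbase-node1.example.com:9001",
         "https://essbase-node2.example.com:9001"]

def reachable(base_url: str, path: str = "/health", timeout: float = 5.0) -> bool:
    """Return True if the node answers the probe with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError, timeouts, refused connections
        return False

if __name__ == "__main__":
    for node in NODES:
        print(node, "up" if reachable(node) else "down")
```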

An active-passive Essbase cluster consists of two or more Essbase instances, one on each node, that share common storage for configuration and data. Storage is shared across the servers (for example, using a SAN), removing the need for the administrator to synchronize storage, as well as the constraint of read-only support. Essbase uses database tables to ensure that only one agent and its associated servers are active at a time, avoiding data corruption on writes; a minimal sketch of this leasing idea follows. During installation and configuration, a table is created to hold information about configuration and application data in the cluster.
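To make the lease-table idea concrete, here is a minimal sketch using SQLite so it runs anywhere. The table name, columns, and SQL are assumptions for illustration; Essbase's actual lease tables differ. The essential point is the atomic update that claims the lease only when it is free or expired, which is what keeps two agents from being active at once.

```python
# A minimal sketch of database-backed leasing (illustrative, not Essbase code).
import sqlite3
import time

def init(db: sqlite3.Connection) -> None:
    # A single-row table representing the agent lease; names are hypothetical.
    db.execute("""CREATE TABLE IF NOT EXISTS agent_lease (
                      id INTEGER PRIMARY KEY CHECK (id = 1),
                      owner TEXT,
                      expires_at REAL)""")
    db.execute("INSERT OR IGNORE INTO agent_lease VALUES (1, NULL, 0)")
    db.commit()

def try_acquire(db: sqlite3.Connection, node: str, ttl: float = 20.0) -> bool:
    """Claim the lease only if it is free, already ours, or expired."""
    now = time.time()
    cur = db.execute(
        "UPDATE agent_lease SET owner = ?, expires_at = ? "
        "WHERE id = 1 AND (owner IS NULL OR owner = ? OR expires_at < ?)",
        (node, now + ttl, node, now))
    db.commit()
    return cur.rowcount == 1  # exactly one node can win this update

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    init(db)
    print("node1 acquired:", try_acquire(db, "node1"))  # True
    print("node2 acquired:", try_acquire(db, "node2"))  # False while node1's lease is live
```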

In Essbase 11g On-Premise, Essbase failover is managed by an external process manager (OPMN). In Essbase 21c, the WebLogic architecture supports Essbase failover with a central leasing system: the Essbase instance that acquires the lease becomes the active node, while the other nodes wait in a loop, repeatedly trying to acquire the lease (see the standby-loop sketch below).
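The standby loop can be pictured as follows. This is a simplified model, not Essbase code: every node runs the same logic, but only the lease holder services requests, and the constants merely stand in for settings such as AGENTLEASEEXPIRATIONTIME.

```python
# Sketch of the active-passive standby loop (illustrative only).
import time

class LeaseStore:
    """Stand-in for the shared relational lease table."""
    def __init__(self):
        self.owner, self.expires_at = None, 0.0

    def try_acquire(self, node: str, ttl: float) -> bool:
        now = time.time()
        if self.owner in (None, node) or self.expires_at < now:
            self.owner, self.expires_at = node, now + ttl
            return True
        return False

LEASE_TTL = 20.0        # analogous in spirit to AGENTLEASEEXPIRATIONTIME
RENEWAL_INTERVAL = 5.0  # renew well before the lease can expire

def run_agent(store: LeaseStore, node: str, cycles: int) -> None:
    for _ in range(cycles):
        if store.try_acquire(node, LEASE_TTL):
            print(f"{node}: lease held, servicing Essbase requests")
        else:
            print(f"{node}: standby, waiting to acquire the lease")
        time.sleep(RENEWAL_INTERVAL)  # work or wait, then retry/renew

if __name__ == "__main__":
    store = LeaseStore()
    run_agent(store, "node1", cycles=1)  # node1 becomes the active node
    run_agent(store, "node2", cycles=1)  # node2 stays in standby
```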

The following comparison shows, component by component, how failover behaves in Essbase 11.1.2.4 and in Essbase 21c.

Single Node

Provider Services
  Essbase 11.1.2.4:
  • Provider Services runs on a single managed server, which is always active.
  • If a failure occurs, WebLogic Node Manager restarts the managed server.
  Essbase 21c: Same as 11.1.2.4.

Essbase Agent
  Essbase 11.1.2.4:
  • A single instance of the Essbase Agent process runs.
  • If a failure occurs, OPMN restarts the agent instance on the same node.
  Essbase 21c:
  • Essbase Java Agent runs on a single managed server, which is considered the active node.
  • If the managed server fails, Node Manager restarts it.

Essbase Application Server
  Essbase 11.1.2.4: If the Essbase application server fails, the Essbase Agent restarts it on the next server request.
  Essbase 21c: Same as 11.1.2.4.
Multi-Node (Active-Passive)

Provider Services
  Essbase 11.1.2.4:
  • Provider Services is deployed with each node in the cluster.
  • All the managed servers are up and running at the same time.
  • Provider Services cannot share sessions across the nodes.
  Essbase 21c: Same as 11.1.2.4.
Essbase Agent
  Essbase 11.1.2.4:
  • Failover support only; no load-balancing support for Essbase.
  • The Essbase life cycle is managed by OPMN.
  • OPMN-managed active-passive solution.
  • Shared ARBORPATH (NFS), or block storage mounted and unmounted by OPMN.
  • Whenever Essbase on the active node is unreachable (OPMN ping), OPMN restarts Essbase on a different node.
  • The newly launched Essbase instance updates the lease tables with its host details.
  • Existing Essbase applications that were running on the previous node are unloaded. Until the unload completes, the agent on the new node cannot launch those applications.
  • Because a new ESSBASE process must start on a different node, downtime can extend several seconds beyond AGENTLEASEEXPIRATIONTIME seconds.
  • OPMN runs the block storage unmount command on the previous active node (if the node is alive) and the mount command on the current active node.
  Essbase 21c:
  • Failover support only; no load-balancing support for Essbase.
  • The Essbase life cycle is managed by WebLogic, and Node Manager manages all the WebLogic instances.
  • Self-managed active-passive solution.
  • Shared Essbase applications directory (formerly ARBORPATH) on NFS, plus a shared relational database for the cluster.
  • Essbase Java Agent is deployed in the same managed server as Provider Services on every node. The agent instances use a leasing algorithm to ensure that only one node is active at any point in time. Although Essbase Java Agent is up and running on all the nodes, only one of them services requests; the remaining instances stay in standby mode and do not listen for Essbase requests.
  • Whenever the active node is unable to renew its lease, an Essbase Java Agent instance on a passive node is activated.
  • The newly activated Essbase instance updates the lease tables with its host details.
  • Existing Essbase applications that were running on the previous node are unloaded. Until the unload completes, the agent on the new node is not available for service.
  • On failover, the new Essbase Java Agent instance takes over immediately after AGENTLEASEEXPIRATIONTIME seconds.
  • Essbase Java Agent (within WebLogic) runs the block storage unmount command on the previous active node (if the node is alive and the lease was released gracefully) and the mount command on the current active node, as in the sketch below.
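As a rough illustration of the storage handoff during failover, the sketch below unmounts the shared block device on the previous node (best effort, and only if that node is reachable) before mounting it on the new active node. The device path, mount point, and use of ssh are assumptions for the sketch; Essbase performs this handoff internally.

```python
# Illustrative storage handoff during failover (not Essbase code).
import subprocess
from typing import Optional

DEVICE = "/dev/xvdb"          # placeholder shared block device
MOUNT_POINT = "/u01/essbase"  # placeholder Essbase applications directory

def unmount_on(previous_host: str) -> None:
    """Best-effort unmount on the old active node, if it is still alive."""
    subprocess.run(["ssh", previous_host, "umount", MOUNT_POINT], check=False)

def mount_locally() -> None:
    """Attach the shared device on the node that just acquired the lease."""
    subprocess.run(["mount", DEVICE, MOUNT_POINT], check=True)

def take_over(previous_host: Optional[str]) -> None:
    # Skip the remote unmount when the old node died ungracefully.
    if previous_host:
        unmount_on(previous_host)
    mount_locally()
```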
Essbase Application Server
  Essbase 11.1.2.4:
  • Restarted on the same system whenever there is a failure.
  • When the Essbase Agent fails or is stopped, the servers are shut down. Until the shutdown is complete, the same applications cannot be launched on the new active node.
  • Essbase server processes use lease tables.
  Essbase 21c: Same as 11.1.2.4, except for server-level leasing.