|Oracle8i Parallel Server Concepts
Release 2 (8.1.6)
Part Number A76968-01
This chapter describes the concepts and some of the "best practices" methodologies for using Oracle Parallel Server to implement high availability. This chapter includes the following topics:
Computing environments that are configured to provide nearly full-time availability are known as "high availability" systems. Such systems typically have redundant hardware and software that make the system available despite failures. Well-designed high availability systems avoid having single points-of-failure. Any hardware or software component that can fail has a redundant component of the same type.
When failures occur, a process known as "failover" moves processing performed by the failed component to the backup component. The failover process re-masters system-wide resources, recovers partial or failed transactions, and restores the system to normal, preferably within a matter of seconds. The more transparent that failover is to the users, the higher the availability of the system.
Oracle Parallel Servers are inherently high availability systems. The clusters that are typical of Oracle Parallel Server environments, as described in Chapter 2, can provide continuous service for both planned and unplanned outages.
You can classify systems and evaluate their expected availability by system type. Mission-critical and business-critical applications such as mail and internet servers generally require significantly greater availability than less critical applications. In addition, some systems have a "24 x 7" uptime requirement, while others, such as a stock market tracking system, have near-100% uptime requirements only for specific timeframes, such as when the stock market is open.
The software industry generally measures availability using two types of metrics:
For most failure scenarios, the industry focuses on MTTR issues and investigates how to optimally design systems to reduce it. MTBF is generally more applicable for hardware availability metrics; this chapter does not go into detail about MTBF. However, given that you can design Oracle Parallel Server clusters to avoid single points-of-failure, component failures do not necessarily result in application unavailability. Hence, Oracle Parallel Server can greatly increase the MTBF from an application availability standpoint.
Another commonly used metric is the "number of nines". For example, 526 minutes of system unavailability per year corresponds to 99.9%, or "3-nines", availability, while 5 minutes of system unavailability per year corresponds to 99.999%, or "5-nines", availability.
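As an illustrative check of this arithmetic (not part of the original text), the "nines" figures follow directly from the 525,600 minutes in a year and can be verified with a simple query:

```sql
-- Illustrative check of the "number of nines" arithmetic.
-- A year contains 365 * 24 * 60 = 525,600 minutes.
SELECT 100 * (1 - 526 / 525600) AS three_nines,  -- ~99.9
       100 * (1 - 5   / 525600) AS five_nines    -- ~99.999
FROM dual;
```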
It is difficult to consider "several nines of availability" without also describing the ground rules and strict processes for managing application environments, testing methodologies, and change management procedures. For these reasons, this chapter focuses on how Oracle Parallel Server can significantly reduce MTTR during failures. This contributes toward a more favorable "nines availability" for the entire system.
Downtime can be classified into two categories:
Scheduled maintenance, product upgrades, and application modifications are typically performed during these timeframes. End users generally do not have system access during these periods. Oracle Parallel Server provides many features to minimize the need for planned downtime to perform routine maintenance. These include failover to another node for system maintenance, online reorganization, online backups, and partitioned operations.
Unplanned downtime results when component and/or system failures result in a "down system". User error can also contribute to unplanned downtime. Such failures can be due to either hardware or software problems and can involve CPU, memory, operating system, the database, or the network.
As mentioned, a well designed Oracle Parallel Server system has redundant components that protect against most failures and that provide an environment without single points-of-failure. Working with your hardware vendor is key to building fully redundant cluster environments for Oracle Parallel Server.
High availability is the result of thorough planning and careful system design. You can conduct high availability planning at two levels:
System level planning involves:
High availability requires the timely processing of transactions in order for a system to be deemed completely "available". While this chapter does not provide extended capacity planning discussions, adequate system resources for managing application growth are important for maintaining availability.
If an application runs on a single symmetric multi-processing (SMP) machine with single instance Oracle, a natural growth path is to migrate this database to a larger SMP machine. However, depending on your hardware vendor's product line, this may not be an option.
Oracle Parallel Server allows you to add nodes to your system to increase capacity and handle application growth. You can do this online without stopping the database, and with minimal interference to existing client transactions.
Redundancy planning means duplicating system components such that no single component failure reduces system availability. Redundant components are often used in high-end SMP machines to protect against failures. For example, redundant power supplies and redundant cooling fans are not uncommon in high-end SMP server systems. A clustered Oracle Parallel Server environment can extend this redundant architecture to a higher level by creating complete redundancy such that there is no single point-of-failure.
Oracle Parallel Server builds higher levels of availability on top of the standard Oracle features. All single-instance high availability features, such as Fast-start Recovery and Online Reorganization, apply to Oracle Parallel Server as well. Fast-start Recovery can greatly reduce MTTR with minimal effect on online application performance. Online Reorganization reduces the duration of planned downtime. Operations that were formerly performed offline during maintenance periods can now be performed online while users update the underlying objects. Oracle Parallel Server preserves all these standard Oracle features.
In addition to all the regular Oracle features, Oracle Parallel Server exploits the redundancy provided by clustering to deliver availability with N-1 node failures in an N-node cluster. In other words, all users have access to all data as long as there is one available node in the cluster.
To configure Oracle Parallel Server for high availability, you must carefully consider the hardware and software component issues of your cluster as described under the following heading.
This section describes the high availability issues for the following cluster components:
As mentioned, Oracle Parallel Server environments are fully redundant because all nodes access all the disks comprising the database. The failure of one node does not affect another node's ability to process transactions. As long as the cluster has one surviving node, all database clients can process all transactions, subject of course to increased response times due to capacity constraints on the one node.
Interconnect redundancy is often overlooked in clustered systems. This is because the Mean Time To Fail (MTTF) is generally several years and therefore cluster interconnect redundancy may not be a high priority. Also, depending on the system and sophistication level, a redundant cluster interconnect could be cost prohibitive and have insufficient business justification.
However, a redundant cluster interconnect is an important aspect of a fully redundant cluster. Without it, a system is not truly free of single points-of-failure. Cluster interconnects can fail for a variety of reasons, not all of which are accounted for in manufacturer MTTF metrics. Interconnects can fail due to device malfunctions, such as an oscillator failure in an interconnect switch, or because of human error.
Oracle Parallel Server operates on a single image of the data; all nodes in the cluster access the same set of data files. Database administrators are encouraged to use hardware based mirroring to maintain redundant media. In this regard, Oracle Parallel Server is no different from single instance Oracle. Disk redundancy depends on the underlying hardware and software mirroring in use, such as RAID.
As mentioned earlier, Oracle Parallel Server environments have full node redundancy. Each node runs its own copy of the operating system. Hence, the same considerations about node redundancy also apply to the operating system. The cluster manager is an extension of the operating system. Since the Cluster Manager software is also installed on all the nodes of the cluster, full redundancy is assured.
In Oracle Parallel Server, the database binaries are installed on the local disks of each node and an instance runs on each node of the cluster. All instances have equal access to all data and can process any transactions. In this way, Oracle Parallel Server ensures full database software redundancy.
Oracle Parallel Server is primarily a single site, high availability solution. This means the nodes in the cluster generally exist within the same building, if not the same room. Thus, disaster planning can be critical. Disaster planning covers planning for fires, floods, hurricanes, earthquakes, terrorism, and so on. Depending on the mission criticality of your system and the propensity of your system's location for such disasters, disaster planning may be an important high availability component.
Oracle offers other solutions such as standby database and replication to facilitate more comprehensive disaster recovery planning. You can use these solutions with Oracle Parallel Server where one cluster hosts the primary database and another remote system or cluster hosts the disaster recovery database. Oracle Parallel Server is not required on either site.
Once you have carefully considered the system level issues, validate that the Oracle Parallel Server environment protects against potential failures. The following is a representative, though non-exhaustive, list of failure causes you can use for this process:
As discussed under "System Level Planning", Oracle Parallel Server environments protect against cluster component failures and software failures. However, media failures and human error may still cause system "downtime". Oracle Parallel Server, as with single instance Oracle, operates on one set of files. For this reason, you should adopt best practices to avoid media failures.
RAID-based redundancy practices avoid file loss but may not prevent rare cases of file corruption. Also, if you mistakenly drop a database object in an Oracle Parallel Server environment, you recover that object the same way you would in a single-instance database. These are the primary limitations in an otherwise very robust and highly available Oracle Parallel Server system.
Once you deploy your system, the key issue is the transparency of failover and its duration. The next section describes failover in more detail.
The following section describes the basics of failover and the various features Oracle Parallel Server offers to implement it in high availability systems. Topics in this section include:
Failover requires that highly available systems have accurate instance monitoring or heartbeat mechanisms. In addition to having this functionality for normal operations, the system must be able to quickly and accurately synchronize resources during failover.
The process of synchronizing, or "re-mastering", requires the graceful shutdown of the failing system as well as an accurate assumption of control of the resources that were mastered on that system. Accurate re-mastering also requires that the system have adequate information about resources across the cluster. This means your system must record resource information to remote nodes as well as locally. This makes the information needed for failover and recovery available to the recovering instances.
The duration of failover includes the time a system requires to remaster system-wide resources and recover from failures. The duration of the failover process can be a relatively short interval on certified platforms. For existing users, failover entails both server and client failover actions. For new users, failover only entails the server failover time.
It is important to hide system failures from database client connections. Such connections can include application users in client-server environments or mid-tier database clients in multi-tiered application environments. When database failures occur, clients should not notice a loss of connection. Properly configured failover mechanisms transparently reroute client sessions to an available node in the cluster. This capability in the Oracle database is referred to as "Transparent Application Failover".
Transparent Application Failover (TAF) enables an application user to automatically reconnect to a database if the connection breaks. Active transactions roll back, but the new database connection, made by way of a different node, is identical to the original. This is true regardless of how the connection was lost.
With Transparent Application Failover, a client sees no loss of connection as long as there is one instance left serving the application. The DBA controls which applications run on which instances and also creates a failover order for each application.
During normal client-server database operations, the client maintains a connection to the database so the client and server can communicate. If the server fails, so does the connection. The next time the client tries to use the connection, the client receives an error. At this point, the user must log in to the database again.
With Transparent Application Failover, however, Oracle automatically obtains a new connection to the database. This allows the user to continue working as if the original connection had never failed.
There are several elements associated with active database connections. These include:
Transparent Application Failover automatically restores some of these elements. Other elements, however, may need to be embedded in the application code to enable transparent application failover to recover the connection.
See Also :
Net8 Administrator's Guide for more information about configuring Transparent Application Failover.
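As a hedged illustration of what such a configuration can look like (the service name, host names, and parameter values below are hypothetical examples, not recommendations), a client-side tnsnames.ora entry enabling Transparent Application Failover might resemble the following sketch:

```
# Hypothetical client-side TAF entry; all names and values are examples.
# TYPE = SELECT replays in-progress queries after failover.
# METHOD = BASIC connects to the backup only when failover occurs;
# METHOD = PRECONNECT would open the backup connection in advance.
OPS.example.com =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = OPS)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 3))))
```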
While failing over client sessions during system failures is a strong benefit of Transparent Application Failover, there are other useful scenarios in which Transparent Application Failover improves system availability. These are:
It is sometimes necessary to take nodes out of service for maintenance and/or repair. For example, you may want to apply patch releases without interrupting service to application clients. By using the TRANSACTIONAL clause of the SHUTDOWN statement, a node may be taken out of service such that the shutdown event is deferred until all existing transactions complete. In this way client sessions may be migrated to another node of the cluster at transaction boundaries.
Also, after performing a transactional shutdown, new transactions that are submitted get routed to an alternate node in the cluster. A SHUTDOWN IMMEDIATE is performed on the node when all existing transactions complete.
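For example, a DBA draining a node before maintenance could issue the following from SQL*Plus or Server Manager while connected to the target instance (a sketch of the command described above):

```sql
-- Defer the shutdown until all in-flight transactions complete;
-- new transactions are routed to the other nodes of the cluster.
SHUTDOWN TRANSACTIONAL
```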
A database is available when it processes transactions in a timely manner. When the load exceeds a node's capacity, the client transaction response times are adversely affected and the database availability is compromised. It then becomes important to be able to manually migrate a group of client sessions to a less heavily loaded node to maintain response times for application availability.
When a connection is lost, you will see the following effects:
The important issue during failover operations is the extent to which the failure is masked from existing client connections.
At failover, in-progress queries are re-issued and processed from the beginning. This may extend the response time of the next query, particularly if the original query ran for a long time. With Transparent Application Failover, the failure is masked from query clients; an increased response time is the only observable effect. If the client query can be satisfied with data in the buffer cache of the surviving node to which the client reconnected, the increase is minimal. Using the PRECONNECT method in Transparent Application Failover further minimizes response time by saving the time needed to reconnect to a surviving instance.
If the client query cannot be satisfied with data in the buffer cache of the reconnect node, disk I/O is necessary to process the client query. However, server-side recovery must complete before access to the data files is allowed. The client transaction experiences a system pause until server-side recovery completes, unless server-side recovery has already completed.
You can also use a callback function to notify clients of the failover so that the clients do not misinterpret the delay for a failure. This prevents the clients from manually attempting to re-establish their connections.
For DML database clients that perform INSERT, UPDATE, and DELETE operations, provided the application code fully exploits the Oracle Call Interface (OCI) libraries, in-flight DML transactions on the failed instance may be restarted on a surviving instance without client knowledge. This achieves application failover without manual reconnects, but requires application-level coding: the code must handle certain Oracle error codes and perform a reconnect when those codes are returned.
If the application coding is not in place, INSERT, UPDATE, and DELETE operations on the failed instance return an unhandled Oracle error code and the transaction must be resubmitted for execution. Upon re-submission, Oracle routes the client connections to a surviving instance. The client transaction then experiences a system pause until server-side recovery completes.
Server-side failover in Oracle Parallel Server is different from regular, host-based failover solutions that are available on many server platforms.
Many operating system vendors and other cluster software vendors offer high availability application failover products. These failover solutions monitor application service(s) on a given primary cluster node. They then fail over such services to a secondary cluster node as needed. Host-based failover solutions generally have one active instance performing useful work for a given database application. The secondary node monitors the application service on the primary node and initiates failover when primary node service is unavailable.
Failover in host-based systems usually includes the following steps.
Oracle Parallel Server provides very fast server-side failover. This is accomplished by Oracle Parallel Server's concurrent, active-active architecture: multiple Oracle instances are concurrently active on multiple nodes and synchronize access to the same database. All nodes have concurrent ownership and access to all disks. When one node fails, all other nodes in the cluster maintain access to all the disks; there is no disk ownership to transfer, and database application binaries are already loaded into memory.
Depending on the size of the database, the duration of failover can vary. The larger the database, or the greater the size of its data files, the greater the relative failover benefit of using Oracle Parallel Server. This is because the transfer of disk ownership from the primary to the secondary instance in host-based failover environments is proportional to the number and size of the files that must be failed over. The additional cost of restarting the application and database binaries in host-based failover environments is independent of database size; it is proportional to the size of the application and database binaries and to the extent of the application initialization actions.
The preceding client failover discussion analyzed client failover behavior during failure scenarios. New client connections are routed to available nodes in the cluster, and certain existing client connections on the failed node can be configured to transparently fail over. However, before database clients can begin processing transactions on the available nodes, Oracle Parallel Server must complete its server-side recovery actions.
The recovery actions necessary after a node fails in Oracle Parallel Server include the following:
Oracle Parallel Server depends on the Cluster Manager software for failure detection because the Cluster Manager maintains the heartbeat functions. The time it takes for the Cluster Manager to detect that a node is no longer in operation is a function of a configurable heartbeat timeout parameter. You can configure this value on most systems; the default is typically one minute. This value is inversely related to the number of false alarms, or false failure detections: if the timeout is set too low, the cluster might incorrectly determine that a node has failed due to transient problems. When a failure is detected, cluster re-organization occurs.
When a node fails, Oracle must alter its cluster membership status. This is known as a "cluster re-organization" and it usually happens quickly; its duration is proportional to the number of surviving nodes in the cluster. Oracle Parallel Server is dependent on the Cluster Manager software for this information.
Oracle Parallel Server's Distributed Lock Manager provides the interfaces to the Cluster Manager software and exposes the cluster membership map to the Oracle instances when nodes are added to or deleted from the cluster. The Distributed Lock Manager's LMON process on each cluster node communicates with the Cluster Manager on its node and exposes that information to the local Oracle instance.
LMON also provides another useful function: when a node is no longer a member of the cluster, the surviving nodes do not see evidence of that node in the cluster, such as messages or writes to shared disk. These LMON-provided services are also referred to as Cluster Group Services (CGS). When a failure causes a change in a node's membership status within the cluster, LMON initiates the recovery actions that include re-mastering of PCM locks and Instance recovery.
At this stage, the Oracle Parallel Server environment is in a state of system pause, and most client transactions suspend until the necessary recovery actions are complete.
The database recovery steps necessary in Oracle Parallel Server include:
When an instance fails, PCM lock resources on the failed instance need to be re-mastered on the surviving cluster nodes. This is also referred to as Distributed Lock Manager database rebuild.
The time required for re-mastering of locks is a function of the number of PCM locks in the lock database. This number in turn depends upon the size of the buffer caches. Each distinct database block in a buffer cache requires one PCM lock if you use 1:1 releasable locking. However, in the case of releasable locks, the lock resources may have already been released and may not need to be re-mastered. When you use 1:N fixed locks, the number of locks is determined by the initialization parameters as each lock covers multiple blocks.
During this phase, all lock information is discarded and each surviving instance re-acquires all the locks it held at the time of the failure. The lock space is then distributed uniformly across the remaining n instances. For any lock request, there is a 1/n chance that the request will be satisfied locally and an (n-1)/n chance that the lock request will involve remote operations. With one surviving instance, all lock operations are satisfied locally.
Once re-mastering of the failed instance's PCM locks is complete, the in-flight transactions of the failed instance need to be cleaned up. This is known as instance recovery.
Instance recovery requires that an active Oracle Parallel Server instance detect the failure and perform the necessary recovery actions on behalf of the failed instance. The first Oracle Parallel Server instance that detects the failure, by way of its LMON process, assumes control of recovery by taking over the failed instance's redo log files and performing the instance recovery actions. This is why the redo log files must reside on a shared device, such as a shared raw logical volume or a cluster file system.
Instance recovery is complete when both cache recovery (replay of the online redo log files of the failed instance) and transaction recovery (rollback of all uncommitted transactions of the failed instance) are finished. Because transaction recovery may be performed in a deferred fashion, client transactions can begin processing as soon as cache recovery is complete.
Cache recovery requires Oracle to replay the online redo logs of the failed instance. Oracle performs cache recovery in parallel; that is, parallel threads of work replay the redo logs of the failed Oracle instance. It is important to keep the length of the redo log replay interval to a predictable duration. The Fast-start Recovery feature in Oracle8i provides this capability.
Fast-start Recovery uses a parameter called FAST_START_IO_TARGET to provide fine-grained control over the amount of redo log replay necessary for instance recovery. The amount of replay required is bounded by continuous checkpointing mechanisms that maintain a redo log tail of consistent size. Database clients are not necessarily aware of the server-side recovery actions; the only client experience is a brief system pause. Setting FAST_START_IO_TARGET appropriately is important for keeping the system pause interval within acceptable bounds during system failures.
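A minimal initialization-parameter sketch follows; the value shown is purely illustrative and not a recommendation for any particular workload:

```
# Bound the redo log tail so that roughly this many data block I/Os
# suffice for cache recovery, keeping the system pause predictable.
FAST_START_IO_TARGET = 10000
```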
Oracle provides non-blocking rollback capabilities, so full database access can start as soon as online log files are replayed. After cache recovery is complete, Oracle begins transaction recovery.
Transaction recovery comprises rolling back all uncommitted transactions of the failed Instance. These are "in-progress" transactions that did not commit and that Oracle needs to roll back.
Oracle8 Fast-start Rollback performs rollback in the background as a deferred process. Oracle uses multi-version read consistency technology to provide on-demand rollback of only the rows blocked by dead transactions, so new transactions can progress with minimal delay. Since new transactions do not have to wait for an entire dead transaction to be rolled back, long-running transactions no longer affect database recovery time.
Oracle8 Fast-start Rollback rolls back dead transactions in parallel using a recovery coordinator to spawn many recovery processes. Single instance Oracle rolls back dead transactions using the CPU of one node.
Oracle Parallel Server provides cluster-aware Fast-start Rollback capabilities that use all the CPU nodes of the cluster to perform parallel rollback operations. Each cluster node spawns a recovery coordinator and recovery processes to assist with parallel rollback operations. The Fast-start Rollback feature is thus "cluster aware" because the database is aware of and utilizes all cluster resources for parallel rollback operations.
While the default behavior is to defer transaction recovery, you may choose to configure your system so transaction recovery completes before allowing client transactions to progress. In this scenario, Oracle Parallel Server's ability to parallelize transaction recovery across multiple nodes is a more visible user benefit.
The following section discusses the following three high availability configurations provided by Oracle Parallel Server:
The N-node configuration is the default Oracle Parallel Server environment. Client transactions are processed on all nodes of the cluster, and client sessions may be load balanced at connect time. Response time is optimized for available cluster resources, such as CPU and memory, by distributing the load across cluster nodes to create a highly available environment.
In the event of a node failure, an Oracle Parallel Server instance on another node performs the necessary recovery actions as previously discussed. The database clients of the failed instance may be load balanced across the (N-1) surviving instances of the cluster. The increased load on each surviving instance can be kept to a minimum, and availability maintained, by keeping response times within acceptable bounds. In this configuration, the database application workload may be distributed across all nodes and therefore provides high utilization of cluster machine resources.
You can easily configure Oracle Parallel Server into a basic high availability configuration; the primary instance on one node accepts user connections while the secondary instance on the other node only accepts connections when the primary node fails. While you can configure this manually by controlling the routing of transactions to specific instances, Oracle Parallel Server provides the Primary/Secondary Instance feature to accomplish this.
You configure the Primary/Secondary Instance feature by setting the initialization parameter ACTIVE_INSTANCE_COUNT to 1. The instance that first mounts the database assumes the role of primary instance. The other instance assumes the role of secondary instance. If the primary instance fails, the secondary instance assumes the primary role. When the failed instance returns to active status, it assumes the secondary instance role.
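As a sketch, the relevant initialization parameter setting for a two-node Primary/Secondary configuration is simply the following; the comment wording is illustrative, with the value taken from the text above:

```
# Limit the number of instances accepting user connections to one;
# the first instance to mount the database becomes the primary.
ACTIVE_INSTANCE_COUNT = 1
```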
The secondary instance becomes the primary instance only after the Cluster Manager informs it about the failure of the primary instance but before Distributed Lock Manager reconfiguration and cache and transaction recovery begin. The redirection to the surviving instance happens transparently; application programming is not required. Only minor configuration changes to the client connect strings are required.
In the Primary/Secondary Instance configuration, both instances run concurrently, as in any N-node Oracle Parallel Server environment. However, database application users only connect to the designated primary instance. The primary node masters all Distributed Lock Manager locks. This minimizes communication between the nodes and provides performance levels almost comparable to a regular, single-node database.
The secondary instance may be utilized by specially configured clients, known as remote clients, for batch query reporting operations or database administration tasks. This enables some level of utilization of the second node. It may also help off-load CPU capacity from the primary instance and justify the investment in redundant nodes.
See Also :
Oracle8i Parallel Server Setup and Configuration Guide for information on configuring client connect strings.
The Primary/Secondary Instance feature works in both dedicated server and Multi-Threaded Server environments. However, it functions differently in each as described under the following headings.
As shown in Figure 9-1, dedicated server environments do not have cross-instance listener registration. Therefore, a connection request made to a specific instance's listener can only be connected to that instance's service. This behavior is similar to default N-node Oracle Parallel Server clusters in dedicated server environments.
When the primary instance fails, the re-connection request from the client is rejected by the failed instance's listener. The secondary instance performs recovery and becomes the primary instance. Upon resubmitting the client request, the client re-establishes the connection using the new primary instance's listener, which then connects the client to the new primary instance.
Oracle Parallel Server provides re-connection performance benefits when running in Multi-Threaded Server mode. This is accomplished by the cross-registration of all the dispatchers and listeners in the cluster.
In Primary/Secondary configurations, only the primary instance's dispatcher is registered with all listeners within the cluster, as shown in Step 1 of the figure. A client may connect to either listener (Step 2; only the connection to the primary node's listener is illustrated). The relevant listener then connects the client to the dispatcher, as shown in Step 3 (only the listener-to-dispatcher connection on the primary node is illustrated).
Specially configured clients can use the secondary instance for batch operations. For example, batch reporting tasks and index creation operations can be performed on the secondary instance.
See Also:
Oracle8i Parallel Server Setup and Configuration Guide for instructions on how to connect to secondary instances.
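A remote client's connect string for the secondary instance might name that instance explicitly; the guide cited here is authoritative, and the entry below is only a sketch with illustrative host, service, and instance names:

```
# Hypothetical tnsnames.ora entry for remote clients that should
# attach directly to the secondary instance for batch or reporting
# work; all names are illustrative.
OPS_SECONDARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ops2-node)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = OPS)(INSTANCE_NAME = OPS2))
  )
```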
In Figure 9-4, if the primary node fails, the dispatcher in the secondary instance registers with the listeners as shown in Step 1. When the client requests a reconnection to the database through either listener, that listener directs the request to the secondary instance's dispatcher.
There are several reasons for using this scenario instead of a default two-node configuration. The Primary/Secondary Instance feature provides:
Using the Primary/Secondary configuration is a gradual way to migrate your application environment to Oracle Parallel Server. Because all client transactions are performed on only one node at any given time, Oracle Parallel Server tuning issues are minimized. Troubleshooting system problems is also simplified, because you tune one node at a time rather than simultaneously tuning two or more nodes.
Because availability also depends on the Database Administrator's ability to manage, tune, and troubleshoot the environment, the Primary/Secondary configuration provides a gradual way to ease the database administration staff into using Oracle Parallel Server.
Applications may not scale beyond a single instance for several reasons. The most common is an application-level serialization point: when the application design causes it to bottleneck on a single application resource, the application cannot scale beyond the capacity of that resource.
There may be other reasons why an application cannot scale. Update-intensive and non-partitioned applications, for example, may not scale well because of Oracle Parallel Server's disk-based synchronization mechanisms; in such cases, the cost of synchronization, or block pinging, can be excessive. However, extensions to Oracle8i Cache Fusion technology make this issue one of diminishing importance.
User environments that fit into one of the above two categories may favor the Oracle Parallel Server Primary/Secondary Configuration.
Running Oracle Parallel Server in an N-node configuration makes optimal use of the cluster resources. However, as discussed previously, this is not always possible or advisable. On the other hand, the financial investment required to keep an idle node solely for failover is often prohibitive. These situations may instead be best suited to a shared high availability node configuration.
This type of configuration typically has several nodes, each running a separate application module or service, with all application services sharing one Oracle Parallel Server database. You can set up a separate, designated node as a failover node. Although an Oracle Parallel Server instance runs on that node, no users are directed to it during normal operation. If any one of the application nodes fails, the workload can be directed to the high availability node.
While this configuration is useful to consider for applications that must run on separate nodes, it works best if a middle-tier application or Transaction Processing Monitor can direct the appropriate application users to the appropriate nodes. Unlike the Primary/Secondary configuration, there is no database setup that automates the workload transition to the high availability node. The application or mid-tier software must direct the users of the failed application node to the designated high availability node, and must also control failing the users back once the failed node is operational. Failing back frees the failover node to process user work from subsequent node failures.
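The mid-tier routing responsibility described above can be sketched as a small routing table. This is a hypothetical illustration, not part of Oracle Parallel Server; the service and node names are invented.

```python
# Hypothetical sketch of mid-tier routing for the shared high
# availability node configuration: each application service normally
# maps to its own node, users of a failed node are directed to the
# designated HA node, and failing back frees the HA node again.

class Router:
    def __init__(self, service_nodes, ha_node):
        self.service_nodes = dict(service_nodes)  # service -> normal node
        self.ha_node = ha_node                    # designated failover node
        self.failed = set()                       # nodes currently down

    def route(self, service):
        """Return the node that should serve this service's users now."""
        node = self.service_nodes[service]
        return self.ha_node if node in self.failed else node

    def mark_failed(self, node):
        self.failed.add(node)      # direct this node's users to the HA node

    def mark_repaired(self, node):
        self.failed.discard(node)  # fail back, freeing the HA node
```

Because the database is shared, the HA node's instance can serve any redirected service; only the user-routing decision lives in the middle tier.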
In this configuration, application performance is maintained in the event of a failover. In the N-node cluster configuration, by contrast, application performance may degrade by 1/N because the same workload is redistributed over a smaller set of cluster nodes.
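The 1/N figure follows from simple arithmetic, sketched below for concreteness (the function names are illustrative):

```python
# Illustrates the 1/N capacity figure above: when one of N equally
# loaded nodes fails, the cluster loses 1/N of its capacity, and each
# surviving node must absorb a proportionally larger workload share.

def capacity_after_failure(n_nodes):
    """Fraction of original cluster capacity remaining after one node fails."""
    return (n_nodes - 1) / n_nodes

def per_node_load_increase(n_nodes):
    """Factor by which each surviving node's workload share grows."""
    return n_nodes / (n_nodes - 1)
```

For a four-node cluster, one failure leaves 75% of the capacity, and each survivor carries a third more load than before.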
Oracle Parallel Server on clustered systems provides a fully redundant environment that is extremely fault resilient. Central to this high availability model is the Oracle Parallel Server architecture: all cluster nodes have an active instance with equal access to all the data. If any node fails, all users retain access to all data through the surviving instances on the other cluster nodes. In-flight transactions on the failed node are recovered by the first node that detects the failure. In this way, Oracle Parallel Server minimizes interruption to end-user application availability.