Oracle® Solaris Cluster Data Services Developer's Guide


Updated: July 2014, E39646-01
 
 

Analyzing the Application for Suitability

The first step in creating a data service is to determine whether the target application satisfies the requirements for being made highly available or scalable. If the application fails to meet all requirements, you might be able to modify the application source code to make it highly available or scalable.

The list that follows summarizes the requirements for an application to be made highly available or scalable. If you need more detail or if you need to modify the application source code, see Appendix A, Sample Data Service Code Listings.


Note - A scalable service must meet all of the following conditions for high availability, as well as the additional criteria that follow the list.
  • Both network-aware (client-server model) and non-network-aware (client-less) applications are potential candidates for being made highly available or scalable in the Oracle Solaris Cluster environment. However, Oracle Solaris Cluster cannot provide enhanced availability in timesharing environments in which applications are run on a server that is accessed through telnet or rlogin.

  • The application must be crash tolerant. That is, the application must recover disk data (if necessary) when it is started after an unexpected failure of a node. Furthermore, the recovery time after a crash must be bounded. Crash tolerance is a prerequisite for making an application highly available because the ability to recover the disk and restart the application is a data integrity issue. The data service is not required to be able to recover connections.

  • The application must not depend upon the physical host name of the node on which it is running. See Host Names for additional information. A sketch of one way to avoid this dependency follows the list.

  • The application must operate correctly in environments in which multiple IP addresses are configured to go up. Examples include environments with multihomed hosts, in which the node is located on more than one public network, and environments with nodes on which multiple logical interfaces are configured to go up on one hardware interface. A sketch of a server that accepts connections on all configured addresses also follows the list.

  • To be highly available, the application data must be located on a highly available local file system. See Multihosted Data.

    If the application uses a hard-wired path name for the location of the data, you could change that path to a symbolic link that points to a location in the cluster file system, without changing application source code. See Using Symbolic Links for Multihosted Data Placement for additional information.

  • Application binaries and libraries can be located locally on each node or in the cluster file system. The advantage of being located in the cluster file system is that a single installation is sufficient.

  • The client should have some capacity to retry a query automatically if the first attempt times out. If the application and the protocol already handle a single server's crashing and rebooting, they can also handle the containing resource group's being failed over or switched over. See Client Retry for additional information. A sketch of a bounded client retry loop follows the list.

  • The application must not have UNIX® domain sockets or named pipes in the cluster file system.
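
To illustrate the host name requirement, the following minimal sketch reads the name that the service advertises from a hypothetical SERVICE_HOSTNAME configuration variable, which is assumed to be set to the logical host name when the service is started; it is not an Oracle Solaris Cluster interface. The physical name from gethostname(3C) is used only as a fallback.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /*
     * Sketch only: SERVICE_HOSTNAME is a hypothetical configuration
     * variable, not a cluster-defined property.  The application
     * advertises the logical host name that it is given instead of
     * the physical node name.
     */
    static const char *
    service_hostname(void)
    {
        static char physical[256];
        const char *configured = getenv("SERVICE_HOSTNAME");

        if (configured != NULL && configured[0] != '\0')
            return (configured);    /* logical host name from configuration */

        /* Fallback only: this ties the service to one physical node. */
        if (gethostname(physical, sizeof (physical)) == 0)
            return (physical);
        return ("localhost");
    }

    int
    main(void)
    {
        (void) printf("advertising host name: %s\n", service_hostname());
        return (0);
    }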
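
For the multiple-address requirement, one common approach, sketched below with a placeholder port number, is to bind the listening socket to INADDR_ANY so that connections arriving on any address that is configured on the node, including logical addresses brought up after the server starts, are accepted. This is one possible approach, not the only way to satisfy the requirement.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Sketch only: listen on all configured addresses (INADDR_ANY) so the
     * server also accepts connections on logical IP addresses that are
     * plumbed after it starts.  Port 12345 is a placeholder.
     */
    int
    main(void)
    {
        struct sockaddr_in addr;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0) {
            perror("socket");
            return (1);
        }

        (void) memset(&addr, 0, sizeof (addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* all local addresses */
        addr.sin_port = htons(12345);

        if (bind(s, (struct sockaddr *)&addr, sizeof (addr)) < 0 ||
            listen(s, 128) < 0) {
            perror("bind/listen");
            (void) close(s);
            return (1);
        }

        /* accept() loop omitted from this sketch. */
        (void) close(s);
        return (0);
    }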
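
The following sketch shows a bounded client retry loop for the client retry requirement. The send_query() function is hypothetical, and the retry count and delay are placeholders; the point is only that a client that retries after a timeout also rides out a failover or switchover, which looks to the client like a server crash followed by a reboot.

    #include <stdio.h>
    #include <unistd.h>

    /*
     * Sketch only: send_query() is a hypothetical application function
     * that returns 0 on success and -1 on a timeout or connection
     * failure.
     */
    extern int send_query(const char *request, char *reply, size_t replylen);

    int
    query_with_retry(const char *request, char *reply, size_t replylen)
    {
        int attempt;

        for (attempt = 0; attempt < 5; attempt++) {
            if (send_query(request, reply, replylen) == 0)
                return (0);     /* success */
            (void) sleep(10);   /* wait for the service to come back */
        }
        return (-1);            /* give up after a bounded number of retries */
    }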

Additionally, scalable services must meet the following requirements:

  • The application must have the ability to run multiple instances, all operating on the same application data in the cluster file system.

  • The application must provide data consistency for simultaneous access from multiple nodes.

  • The application must implement sufficient locking with a globally visible mechanism, such as the cluster file system. A sketch of such locking follows this list.
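
As one illustration of globally visible locking, the following sketch serializes updates to shared application data through fcntl(2) advisory record locking on a lock file. The path /global/appdata/app.lock is hypothetical; the file is assumed to reside on a file system that every instance can see.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /*
     * Sketch only: take an exclusive advisory lock on a lock file that is
     * visible to every instance, update the shared data, then release the
     * lock.  The lock-file path is a placeholder.
     */
    int
    main(void)
    {
        struct flock fl;
        int fd = open("/global/appdata/app.lock", O_RDWR | O_CREAT, 0644);

        if (fd < 0) {
            perror("open");
            return (1);
        }

        fl.l_type = F_WRLCK;                 /* exclusive lock */
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;                        /* lock the whole file */

        if (fcntl(fd, F_SETLKW, &fl) < 0) {  /* block until the lock is held */
            perror("fcntl");
            return (1);
        }

        /* ... update shared application data here ... */

        fl.l_type = F_UNLCK;
        (void) fcntl(fd, F_SETLK, &fl);
        (void) close(fd);
        return (0);
    }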

For a scalable service, application characteristics also determine the load-balancing policy. For example, the load-balancing policy Lb_weighted, which allows any instance to respond to client requests, does not work for an application that makes use of an in-memory cache on the server for client connections. In this case, specify a load-balancing policy that restricts a given client's traffic to one instance of the application. The load-balancing policies Lb_sticky and Lb_sticky_wild repeatedly send all requests by a client to the same application instance, where the requests can make use of an in-memory cache. Note that if requests come in from different clients, the RGM distributes the requests among the instances of the service. See Implementing a Failover Resource for more information about setting the load-balancing policy for scalable data services.