The first step in creating a data service is to determine whether the target application satisfies the requirements for being made highly available or scalable. If the application fails to meet all of the requirements, you might be able to modify the application source code so that it does.
The list that follows summarizes the requirements for an application to be made highly available or scalable. If you need more detail or if you need to modify the application source code, refer to Appendix B, Sample Data Service Code Listings.
A scalable service must meet all the following conditions for high availability as well as some additional criteria.
Both network-aware (client-server model) and non-network-aware (client-less) applications are potential candidates for being made highly available or scalable in the Sun Cluster environment. However, Sun Cluster cannot provide enhanced availability in time-sharing environments in which applications are run on a server that is accessed through telnet or rlogin.
The application must be crash tolerant. That is, it must recover disk data (if necessary) when it is started after an unexpected node death. Furthermore, the recovery time after a crash must be bounded. Crash tolerance is a prerequisite for making an application highly available, because the ability to recover the disk and restart the application is a data integrity issue. The data service is not required to be able to recover connections.
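One simple way to obtain crash tolerance with bounded recovery, sketched below under the assumption of a hypothetical fixed-length record file under /global/app, is to write each record synchronously and, at startup, to scan the file and discard at most one torn trailing record. This is only an illustration of the property, not a prescription for how a given application must store its data.

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

#define	RECLEN	128	/* fixed-length records keep recovery simple */

/* Append one record; O_DSYNC forces it to stable storage before write() returns. */
static int
append_record(const char *buf)
{
	int fd = open("/global/app/records",
	    O_WRONLY | O_APPEND | O_CREAT | O_DSYNC, 0644);
	ssize_t n;

	if (fd < 0)
		return (-1);
	n = write(fd, buf, RECLEN);
	(void) close(fd);
	return (n == RECLEN ? 0 : -1);
}

/* Count complete records at startup; a short trailing read is a torn record
 * left by an unexpected node death and is simply ignored. Recovery cost is
 * proportional to the file size, so it is bounded. */
static long
recover(void)
{
	char buf[RECLEN];
	long good = 0;
	int fd = open("/global/app/records", O_RDONLY);

	if (fd < 0)
		return (0);
	while (read(fd, buf, RECLEN) == RECLEN)
		good++;
	(void) close(fd);
	return (good);
}

int
main(void)
{
	char rec[RECLEN];

	(void) printf("recovered %ld complete records\n", recover());
	(void) memset(rec, 'x', RECLEN);
	return (append_record(rec) == 0 ? 0 : 1);
}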
The application must not depend upon the physical hostname of the node on which it is running. See Host Names for additional information.
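For example, the following minimal sketch takes the service's network name from a hypothetical APP_LOGICAL_HOST environment variable, which a start method could set from the resource configuration, rather than calling gethostname(3C). The application therefore does not tie itself to the physical hostname of the node on which it happens to run.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	/* APP_LOGICAL_HOST is a hypothetical variable set by the start
	 * method from the resource configuration, for example the logical
	 * hostname used by the resource group. */
	const char *host = getenv("APP_LOGICAL_HOST");
	char physical[256];

	if (host == NULL) {
		/* Falling back to the physical hostname ties the service to
		 * one node; a cluster-ready application avoids this path. */
		if (gethostname(physical, sizeof (physical)) != 0)
			return (1);
		host = physical;
	}
	(void) printf("advertising service on %s\n", host);
	return (0);
}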
The application must operate correctly in environments in which multiple IP addresses are configured up; for example, environments with multihomed hosts, in which the node is on more than one public network, and environments in which multiple logical interfaces are configured up on one hardware interface.
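A common way to satisfy this requirement is to bind the listening socket to INADDR_ANY instead of a single hard-coded address, so the service accepts connections on every address that is configured up on the node, including logical interfaces. The following sketch assumes an arbitrary port (8080) for illustration.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <stdio.h>

int
main(void)
{
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	(void) memset(&addr, 0, sizeof (addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(8080);			/* illustrative port */
	addr.sin_addr.s_addr = htonl(INADDR_ANY);	/* all configured addresses */

	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof (addr)) != 0 ||
	    listen(fd, 5) != 0) {
		perror("listen");
		return (1);
	}
	/* accept() loop omitted */
	return (0);
}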
To be highly available, the application data must reside in the cluster file system. See Multihosted Data for additional information.
If the application uses a hard-wired path name for the location of the data, you could change that path to a symbolic link that points to a location in the cluster file system, without changing application source code. See Using Symbolic Links for Multihosted Data Placement for additional information.
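The symbolic link itself is ordinarily created once, administratively. The following sketch shows the equivalent operation with symlink(2), using hypothetical paths: /etc/app/data stands for the application's hard-wired path, and /global/app/data stands for the location in the cluster file system.

#include <unistd.h>
#include <errno.h>
#include <stdio.h>

int
main(void)
{
	/* /etc/app/data is the path the application has hard-wired;
	 * /global/app/data is the assumed cluster file system location. */
	if (symlink("/global/app/data", "/etc/app/data") != 0 &&
	    errno != EEXIST) {
		perror("symlink");
		return (1);
	}
	return (0);
}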
Application binaries and libraries can reside locally on each node or on the cluster file system. The advantage of residing on the cluster file system is that a single installation is sufficient. The disadvantage is that rolling upgrade becomes an issue because the binaries are in use while the application is running under control of the RGM.
The client should have some capacity to retry a query automatically if the first attempt times out. If the application and protocol already handle the case of a single server crashing and rebooting, then they also will handle the case of the containing resource group being failed over or switched over. See Client Retry for additional information.
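The following client-side sketch illustrates one form of such a retry: the connection attempt is repeated a bounded number of times, with a pause between attempts, so a failover or switchover of the resource group looks to the client like a server crash and reboot. The host address, port, and retry parameters are assumptions chosen for illustration.

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

/* Try to connect a bounded number of times, pausing between attempts so
 * that the service has time to be restarted or failed over. */
static int
connect_with_retry(const char *ip, int port, int attempts, int wait_sec)
{
	struct sockaddr_in addr;
	int i, fd;

	(void) memset(&addr, 0, sizeof (addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(port);
	addr.sin_addr.s_addr = inet_addr(ip);

	for (i = 0; i < attempts; i++) {
		fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd >= 0 &&
		    connect(fd, (struct sockaddr *)&addr, sizeof (addr)) == 0)
			return (fd);		/* connected */
		if (fd >= 0)
			(void) close(fd);
		(void) sleep(wait_sec);
	}
	return (-1);				/* give up after bounded retries */
}

int
main(void)
{
	int fd = connect_with_retry("192.0.2.10", 8080, 5, 10);

	if (fd < 0) {
		(void) fprintf(stderr, "service not available\n");
		return (1);
	}
	(void) close(fd);
	return (0);
}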
The application must not have Unix domain sockets or named pipes in the cluster file system.
In addition, to be scalable, the application must have the ability to run multiple instances, all operating on the same application data in the cluster file system.
The application must provide data consistency for simultaneous access from multiple nodes.
The application must implement sufficient locking with a globally visible mechanism, such as the cluster file system.
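The following sketch illustrates one such mechanism, assuming that the cluster file system honors fcntl(2) record locks across nodes: each instance takes an exclusive lock on a lock file under a hypothetical /global/app path before it updates the shared data, so updates from instances on different nodes are serialized.

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int
main(void)
{
	struct flock fl;
	int fd = open("/global/app/data.lock", O_RDWR | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return (1);
	}

	(void) memset(&fl, 0, sizeof (fl));
	fl.l_type = F_WRLCK;		/* exclusive lock */
	fl.l_whence = SEEK_SET;
	fl.l_start = 0;
	fl.l_len = 0;			/* lock the whole file */

	if (fcntl(fd, F_SETLKW, &fl) != 0) {	/* block until granted */
		perror("fcntl");
		return (1);
	}

	/* ... update the shared application data here ... */

	fl.l_type = F_UNLCK;
	(void) fcntl(fd, F_SETLK, &fl);		/* release the lock */
	(void) close(fd);
	return (0);
}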
For a scalable service, application characteristics also determine the load-balancing policy. For example, the LB_WEIGHTED policy, which allows any instance to respond to client requests, does not work for an application that makes use of an in-memory cache on the server for client connections. In this case, you should specify a load-balancing policy that restricts a given client's traffic to one instance of the application. The LB_STICKY and LB_STICKY_WILD policies repeatedly send all requests from a client to the same application instance, where they can make use of the in-memory cache. Note that if multiple client requests come in from different clients, the RGM distributes the requests among the instances of the service. See Implementing a Scalable Resource for more information about setting the load-balancing policy for scalable data services.