Oracle® Solaris Cluster Data Services Developer's Guide

Requirements for Non-Cluster-Aware Applications

An ordinary, non-cluster-aware application must meet certain requirements to be a candidate for high availability (HA). The section Analyzing the Application for Suitability lists these requirements. This appendix provides additional details about several items in that list.

An application is made highly available by configuring its resources into resource groups. The application's data is placed on a highly available cluster file system, making the data accessible to a surviving server if one server fails. For information about cluster file systems, see the Oracle Solaris Cluster 4.3 Concepts Guide.

For access by clients on the network, a logical IP address is configured in logical host name resources that are contained in the same resource group as the data service resource. The data service resource and the network address resources fail over together, so network clients of the data service access the data service resource on its new host.

Multihosted Data

The highly available cluster file systems' devices are multihosted so that when a physical host crashes, one of the surviving hosts can access the device. For an application to be highly available, its data must be highly available. Therefore, the application's data must be located in file systems that can be accessed from multiple cluster nodes. Local file systems that you can make highly available with Oracle Solaris Cluster include UNIX File System (UFS), Quick File System (QFS), and Oracle Solaris ZFS.

The cluster file system is mounted on device groups that are created as independent entities. You can choose to use some device groups as mounted cluster file systems and others as raw devices for use with a data service, such as HA for Oracle.

An application might have command-line switches or configuration files that point to the location of the data files. If the application uses hard-wired path names, you might be able to change those path names to symbolic links that point to files in a cluster file system, without changing the application code. See Using Symbolic Links for Multihosted Data Placement for a more detailed discussion.

In the worst case, the application's source code must be modified to provide a mechanism for pointing to the actual data location. You could implement this mechanism by creating additional command-line arguments.
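
For example, the following minimal sketch adds a -d option to a hypothetical application so that its data directory is no longer hard-wired. The option name and the /var/myapp/data default are illustrative assumptions, not part of any Oracle Solaris Cluster interface.

    /*
     * Minimal sketch: accept a -d option so that the data location is
     * no longer hard-wired.  The option name and default path are
     * hypothetical.
     */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        const char *datadir = "/var/myapp/data"; /* former hard-wired default */
        int c;

        while ((c = getopt(argc, argv, "d:")) != -1) {
            if (c == 'd') {
                datadir = optarg; /* for example, a cluster file system path */
            } else {
                (void) fprintf(stderr, "usage: %s [-d datadir]\n", argv[0]);
                return (1);
            }
        }

        (void) printf("using data directory: %s\n", datadir);
        /* ... open data files relative to datadir ... */
        return (0);
    }

A start script or the cluster administrator could then supply a cluster file system path, such as /global/phys-schost-2/mydata, when the application is launched.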

The Oracle Solaris Cluster software supports the use of UNIX UFS and ZFS file systems and highly available raw devices that are configured in a volume manager. When installing and configuring the Oracle Solaris Cluster software, the cluster administrator must specify which disk resources to use for UFS or ZFS file systems and which to use for raw devices. Typically, raw devices are used only by database servers and multimedia servers.

Using Symbolic Links for Multihosted Data Placement

Occasionally, the path names of an application's data files are hard-wired, with no mechanism for overriding them. To avoid modifying the application code, you can sometimes use symbolic links.

For example, suppose the application names its data file with the hard-wired path name /etc/mydatafile. You can change that path from a file to a symbolic link that points to a file in one of the logical host's file systems. For example, you could make /etc/mydatafile a symbolic link to /global/phys-schost-2/mydatafile.
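
In code, the replacement might look like the following minimal sketch, which uses the example paths from this discussion. (An administrator could accomplish the same thing with the ln -s command.)

    /*
     * Minimal sketch: replace the application's hard-wired
     * /etc/mydatafile with a symbolic link into a cluster file system.
     */
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Remove any existing private copy; ignore "file not found". */
        (void) unlink("/etc/mydatafile");

        if (symlink("/global/phys-schost-2/mydatafile",
            "/etc/mydatafile") != 0) {
            perror("symlink");
            return (1);
        }
        return (0);
    }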

A problem can occur with this use of symbolic links if the application, or one of its administrative procedures, modifies the data file name as well as its contents. For example, suppose that the application performs an update by first creating a new temporary file, /etc/mydatafile.new, and then renaming the temporary file to the real file name by using the rename() system call (or the mv command). By creating the temporary file and renaming it to the real file name, the application attempts to ensure that its data file contents are always well formed.

Unfortunately, the rename() action destroys the symbolic link. The name /etc/mydatafile now refers to a regular file in the same file system as the /etc directory, not in the cluster file system. Because the /etc file system is private to each host, the data is not available after a failover or switchover.
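
The following minimal sketch reproduces that failure mode with the same example paths:

    /*
     * Minimal sketch of the failure mode described above.  The
     * rename() call removes the symbolic link named /etc/mydatafile
     * and installs the new regular file under that name, so the data
     * ends up in this node's private root file system instead of the
     * cluster file system.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd;

        fd = open("/etc/mydatafile.new", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return (1);
        }
        (void) write(fd, "updated contents\n", 17);
        (void) close(fd);

        if (rename("/etc/mydatafile.new", "/etc/mydatafile") != 0) {
            perror("rename");
            return (1);
        }
        return (0);
    }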

The underlying problem is that the existing application is not aware of symbolic links and was not written to handle them. To use symbolic links to redirect data access into the logical host's file systems, the application implementation must behave in a way that does not obliterate those links. Therefore, symbolic links are not a complete remedy for the problem of placing data in the cluster's file systems.
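
If the application can be modified, one symlink-aware approach (a sketch of a possible fix, not a prescribed Oracle Solaris Cluster technique) is to resolve the link before the update so that both the temporary file and the rename() stay within the cluster file system, leaving the link itself untouched:

    /*
     * Minimal sketch of a symlink-aware update: resolve
     * /etc/mydatafile to its real location first, then create the
     * temporary file and rename() it within the cluster file system.
     * The symbolic link in /etc remains intact.
     */
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        char real[PATH_MAX], tmp[PATH_MAX];
        int fd;

        /* Follows the link, for example to /global/phys-schost-2/mydatafile. */
        if (realpath("/etc/mydatafile", real) == NULL) {
            perror("realpath");
            return (1);
        }
        (void) snprintf(tmp, sizeof (tmp), "%s.new", real);

        fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return (1);
        }
        (void) write(fd, "updated contents\n", 17);
        (void) close(fd);

        /* Both names are in the cluster file system; the link survives. */
        if (rename(tmp, real) != 0) {
            perror("rename");
            return (1);
        }
        return (0);
    }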