Oracle Solaris Cluster Data Services Planning and Administration Guide (Oracle Solaris Cluster 4.1)
This section provides configuration guidelines for Oracle Solaris Cluster data services.
Identify requirements for all of the data services before you begin Oracle Solaris and Oracle Solaris Cluster installation. Failure to do so might result in installation errors that require that you completely reinstall the Oracle Solaris and Oracle Solaris Cluster software.
For example, the Oracle Data Guard option of Oracle Solaris Cluster Support for Oracle Real Application Clusters has special requirements for the hostnames that you use in the cluster. HA for SAP also has special requirements. You must accommodate these requirements before you install Oracle Solaris Cluster software because you cannot change hostnames after you install Oracle Solaris Cluster software.
You can install the application software and application configuration files in one of the following locations.
The local disks of each cluster node – Placing the software and configuration files on the individual cluster nodes has the advantage that you can later upgrade the application software without shutting down the service.
The disadvantage is that you then have several copies of the software and configuration files to maintain and administer.
The cluster file system – If you put the application binaries on the cluster file system, you have only one copy to maintain and manage. However, you must shut down the data service in the entire cluster to upgrade the application software. If you can spare a short period of downtime for upgrades, place a single copy of the application and configuration files on the cluster file system.
For information about how to create cluster file systems, see Planning Global Devices, Device Groups, and Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.
Highly available local file system – Using HAStoragePlus, you can integrate your local file system into the Oracle Solaris Cluster environment, making the local file system highly available. HAStoragePlus provides additional file system capabilities such as checks, mounts, and unmounts that enable Oracle Solaris Cluster to fail over local file systems. To fail over, the local file system must reside on global disk groups with affinity switchovers enabled.
For information about how to use the HAStoragePlus resource type, see Enabling Highly Available Local File Systems.
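As a brief sketch of the HAStoragePlus workflow described above (the resource-group name app-rg, the mount point /local/app-data, and the resource name hasp-rs are hypothetical examples, not values from this guide):

```shell
# Register the HAStoragePlus resource type (done once per cluster).
clresourcetype register SUNW.HAStoragePlus

# Create a failover resource group to contain the resource.
clresourcegroup create app-rg

# Create the HAStoragePlus resource for a local file system. The
# mount point must already be defined in /etc/vfstab on every node
# that can host the resource group.
clresource create -g app-rg -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/local/app-data hasp-rs

# Bring the resource group online, with resources enabled.
clresourcegroup online -M app-rg
```

See Enabling Highly Available Local File Systems for the authoritative procedure and the full set of HAStoragePlus properties.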
The nsswitch.conf file is the configuration file for name-service lookups. This file determines the following information:
The databases within the Oracle Solaris environment to use for name-service lookups
The order in which the databases are to be consulted
Some data services require that you direct “group” lookups to “files” first. For these data services, change the “group” line in the nsswitch.conf file so that the “files” entry is listed first. See the documentation for the data service that you plan to configure to determine whether you need to change the “group” line. The scinstall utility automatically configures the nsswitch.conf file for you. If you manually modify the nsswitch.conf file, you must export the new nsswitch configuration information.
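To illustrate the change described above: in Oracle Solaris 11, the nsswitch.conf file is generated from the svc:/system/name-service/switch SMF service, so the "group" lookup order is changed through that service rather than by editing the file directly. A sketch follows; the "nis" back end is an example and may differ in your configuration.

```shell
# Put "files" first for group lookups; "nis" here stands in for
# whatever secondary name service your site uses.
svccfg -s svc:/system/name-service/switch \
    setprop config/group = astring: '"files nis"'

# Refresh the service so that /etc/nsswitch.conf is regenerated.
svcadm refresh svc:/system/name-service/switch

# Verify the result; the group line should now list "files" first,
# for example:  group: files nis
grep '^group' /etc/nsswitch.conf
```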
Depending on the data service, you might need to configure the cluster file system to meet Oracle Solaris Cluster requirements. To determine whether any special considerations apply, see the documentation for the data service that you plan to configure.
For information about planning cluster file systems, see Planning Global Devices, Device Groups, and Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.
The resource type HAStoragePlus enables you to use a highly available local file system in an Oracle Solaris Cluster environment that is configured for failover. For information about setting up the HAStoragePlus resource type, see Enabling Highly Available Local File Systems.
The Service Management Facility (SMF) enables you to automatically start and restart SMF services during a node boot or service failure. This feature is similar to the Oracle Solaris Cluster Resource Group Manager (RGM), which facilitates high availability and scalability for cluster applications. SMF services and RGM features are complementary to each other.
Oracle Solaris Cluster includes three SMF proxy resource types that enable SMF services to run with Oracle Solaris Cluster in a failover, multi-master, or scalable configuration. The SMF proxy resource types enable you to encapsulate a set of interrelated SMF services into a single SMF proxy resource that Oracle Solaris Cluster manages. In this feature, SMF manages the availability of SMF services on a single node, while Oracle Solaris Cluster provides cluster-wide high availability and scalability of those services.
For information about how to encapsulate these services, see Enabling Oracle Solaris SMF Services to Run With Oracle Solaris Cluster.
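A minimal sketch of the failover case, assuming hypothetical names (smf-rg, smf-proxy-rs, and the path to the proxied-services file); the exact format of that file is documented in Enabling Oracle Solaris SMF Services to Run With Oracle Solaris Cluster:

```shell
# Register the failover variant of the SMF proxy resource type.
# SUNW.Proxy_SMF_multimaster and SUNW.Proxy_SMF_scalable are the
# other two variants.
clresourcetype register SUNW.Proxy_SMF_failover

# Create a failover resource group for the proxy resource.
clresourcegroup create smf-rg

# Proxied_service_instances names a file that lists the SMF service
# instances to encapsulate, one per line, each paired with the path
# to its manifest.
clresource create -g smf-rg -t SUNW.Proxy_SMF_failover \
    -p Proxied_service_instances=/opt/myapp/smf-services.txt \
    smf-proxy-rs

# Bring the resource group online.
clresourcegroup online -M smf-rg
```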
You might require Oracle Solaris Cluster to make an application other than NFS or DNS highly available when that application is integrated with the Oracle Solaris Service Management Facility (SMF). To ensure that Oracle Solaris Cluster can correctly restart or fail over the application after a failure, you must disable the application's SMF service instances as follows:
For any application other than NFS or DNS, disable the SMF service instance on all potential primary nodes for the Oracle Solaris Cluster resource that represents the application.
If multiple instances of the application share any component that you require Oracle Solaris Cluster to monitor, disable all service instances of the application. Examples of such components are daemons, file systems, and devices.
Note - If you do not disable the SMF service instances of the application, both the Solaris SMF and Oracle Solaris Cluster might attempt to control the startup and shutdown of the application. As a result, the behavior of the application might become unpredictable.
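The disabling step above can be sketched as follows; svc:/application/myapp:default is a hypothetical service FMRI standing in for your application's instance:

```shell
# Run on every node that is a potential primary for the resource
# that represents the application, so that only Oracle Solaris
# Cluster controls its startup and shutdown.
svcadm disable svc:/application/myapp:default

# Confirm that the instance is now disabled.
svcs -H -o state svc:/application/myapp:default
```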
For more information, see the following documentation: