
Planning and Administering Data Services for Oracle® Solaris Cluster 4.4


Updated: May 2019

Configuration Guidelines for Oracle Solaris Cluster Data Services

This section provides configuration guidelines for Oracle Solaris Cluster data services.

Identifying Data Service Special Requirements

Identify requirements for all of the data services before you begin Oracle Solaris OS and Oracle Solaris Cluster installation. Failure to do so might result in installation errors that require that you completely reinstall the Oracle Solaris OS and Oracle Solaris Cluster software.

For example, the Oracle Data Guard option of Oracle Solaris Cluster Support for Oracle Real Application Clusters has special requirements for the hostnames that you use in the cluster. HA for SAP also has special requirements. You must accommodate these requirements before you install Oracle Solaris Cluster software because you cannot change hostnames after you install Oracle Solaris Cluster software.


Note -  Some Oracle Solaris Cluster data services are not supported for use in x86-based clusters. For more information, see the release notes for your release of Oracle Solaris Cluster.

Using Immutable Zones

To ensure that write operations are allowed when you install an application or apply maintenance to it within a zone cluster that is configured as an immutable zone cluster, boot the zone cluster in write mode for the duration of the installation or maintenance period. After the application is installed or the maintenance has been applied, reboot the zone cluster as an immutable zone cluster.

  1. Create and configure a zone cluster as an immutable zone cluster.

  2. Before installing the application or applying maintenance, reboot the zone cluster with the -w option.

  3. Install the application, apply maintenance, and configure the Oracle Solaris Cluster data services.

  4. When you are finished, reboot the zone cluster back as an immutable zone cluster.
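The steps above can be sketched as the following command sequence. The zone cluster name myzc and the fixed-configuration MAC profile are illustrative; use the name and profile that apply to your configuration.

```shell
# 1. Configure the zone cluster as immutable by setting a file MAC profile
#    (name and profile value are illustrative).
clzc configure myzc
# clzc:myzc> set file-mac-profile=fixed-configuration
# clzc:myzc> commit
# clzc:myzc> exit

# 2. Reboot the zone cluster in write mode before installing or patching.
clzc reboot -w myzc

# 3. Install the application, apply maintenance, and configure the data
#    services from inside the zone cluster (for example, through zlogin).

# 4. Reboot the zone cluster normally to return it to immutable operation.
clzc reboot myzc
```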


Note -  If needed, datasets or file systems can be added to the zone cluster. Zone cluster nodes that are given additional datasets using clzc configure add dataset still have full control over those datasets even if the zone cluster is booted in read-only mode. Zone cluster nodes that are given additional file systems using clzc configure add fs have full control over those file systems, even if the zone cluster is booted in read-only mode, unless the file systems are set read-only. See Adding Local File Systems to a Specific Zone-Cluster Node in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.
Example 1   Adding a Local ZFS Dataset to an Immutable Zone Cluster

This example shows how to add a dataset locally to each zone cluster node for use by Oracle binaries. This addition gives the zone cluster nodes read-write access to the file system even if the zone cluster is booted in read-only mode.

  1. On each node of the global cluster, create the file system for Oracle binaries to be used in each node of the zone cluster.

    # zfs create orapool/immutablezc-oracle
  2. From one node of the global cluster, use clzc to add the dataset to each zone cluster node.

    root@gcnode1:~# clzc configure immutablezc
    clzc:immutablezc> select node physical-host=gcnode2
    clzc:immutablezc:node> add dataset
    clzc:immutablezc:node:dataset> set name=orapool/immutablezc-oracle
    clzc:immutablezc:node:dataset> end
    clzc:immutablezc:node> end
    clzc:immutablezc> select node physical-host=gcnode1
    clzc:immutablezc:node> add dataset
    clzc:immutablezc:node:dataset> set name=orapool/immutablezc-oracle
    clzc:immutablezc:node:dataset> end
    clzc:immutablezc:node> end
    clzc:immutablezc> exit
    root@gcnode1:~#
    #  zfs list | grep immutablezc-oracle
    orapool/immutablezc-oracle                                          31K  81.4G    31K  /orapool/immutablezc-oracle
    #
  3. Reboot the zone cluster in write mode.

    # clzc reboot -w immutablezc
    #  zfs list | grep immutablezc-oracle
    orapool/immutablezc-oracle                                          31K  81.4G    31K  /export/zones/immutablezc/root/immutablezc-oracle
    #
  4. From each node of the global cluster, log in to the zone cluster and set the dataset's mount point as needed.

    # zlogin immutablezc
    # zfs list | grep oracle
    immutablezc-oracle                      31K  81.4G    31K  /immutablezc-oracle
    #
    # zfs set mountpoint=/u01 immutablezc-oracle
    # zfs list | grep oracle
    immutablezc-oracle                      31K  81.4G    31K  /u01
    #

    From this point on, the zone cluster can be rebooted as an immutable zone cluster, and the zone cluster nodes will have read-write access to /u01.

Determining the Location of the Application Binaries

You can install the application software and application configuration files in one of the following locations:

  • The local disks of each cluster node – Placing the software and configuration files on the individual cluster nodes provides the advantage of upgrading application software later without shutting down the service.

    The disadvantage is that you then have several copies of the software and configuration files to maintain and administer.

  • The cluster file system – If you put the application binaries on the cluster file system, you have only one copy to maintain and manage. However, you must shut down the data service in the entire cluster to upgrade the application software. If you can spare a short period of downtime for upgrades, place a single copy of the application and configuration files on the cluster file system.

    For information about how to create cluster file systems, see Planning Global Devices, Device Groups, and Cluster File Systems in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.

  • Highly available local file system – Using HAStoragePlus, you can integrate your local file system into the Oracle Solaris Cluster environment, making the local file system highly available. HAStoragePlus provides additional file system capabilities such as checks, mounts, and unmounts that enable Oracle Solaris Cluster to fail over local file systems. To fail over, the local file system must reside on global disk groups with affinity switchovers enabled.

    For information about how to use the HAStoragePlus resource type, see Enabling Highly Available Local File Systems.
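As a minimal sketch of that registration, assuming a failover resource group named app-rg and a local file system mounted at /app-data (both names are illustrative, and the mount point must already be defined in /etc/vfstab on all nodes):

```shell
# Register the HAStoragePlus resource type (once per cluster).
clresourcetype register SUNW.HAStoragePlus

# Create a failover resource group (illustrative name).
clresourcegroup create app-rg

# Create an HAStoragePlus resource that manages the local file system.
clresource create -g app-rg -t SUNW.HAStoragePlus \
    -p FileSystemMountPoints=/app-data app-hasp-rs

# Bring the resource group online, enabling its resources and monitors.
clresourcegroup online -eM app-rg
```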

Verifying the nsswitch.conf File Contents

The nsswitch.conf file is the configuration file for name-service lookups. This file determines the following information:

  • The databases within the Oracle Solaris environment to use for name-service lookups

  • The order in which the databases are to be consulted

Some data services require that you direct group lookups to files first. For these data services, change the group line in the nsswitch.conf file so that the files entry is listed first. See the documentation for the data service that you plan to configure to determine whether you need to change the group line. The scinstall utility automatically configures the nsswitch.conf file for you. If you manually modify the nsswitch.conf file, you must export the new nsswitch configuration information.
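Because Oracle Solaris 11 manages the name-service switch through SMF and regenerates the nsswitch.conf file from the SMF repository, one way to put files first for group lookups is to set the SMF property directly. The value "files ldap" below is illustrative; keep whatever back ends your site already uses after files.

```shell
# Put "files" first for group lookups in the SMF name-service switch
# (the back ends after "files" are illustrative).
svccfg -s svc:/system/name-service/switch setprop config/group = astring: '"files ldap"'
svcadm refresh svc:/system/name-service/switch:default

# Verify the resulting group line in the generated file.
grep '^group' /etc/nsswitch.conf
```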

Planning the Cluster File System Configuration

Depending on the data service, you might need to configure the cluster file system to meet Oracle Solaris Cluster requirements. To determine whether any special considerations apply, see the documentation for the data service that you plan to configure.

For information about planning cluster file systems, see Planning Global Devices, Device Groups, and Cluster File Systems in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.

The resource type HAStoragePlus enables you to use a highly available local file system in an Oracle Solaris Cluster environment that is configured for failover. For information about setting up the HAStoragePlus resource type, see Enabling Highly Available Local File Systems.

Enabling Oracle Solaris SMF Services to Run Under the Control of Oracle Solaris Cluster

The Service Management Facility (SMF) enables you to automatically start and restart SMF services during a node boot or after a service failure. This feature is similar to the Oracle Solaris Cluster Resource Group Manager (RGM), which facilitates high availability and scalability for cluster applications. SMF services and RGM features complement each other.

Oracle Solaris Cluster includes three SMF proxy resource types that enable SMF services to run with Oracle Solaris Cluster in a failover, multi-master, or scalable configuration. The SMF proxy resource types enable you to encapsulate a set of interrelated SMF services into a single resource, an SMF proxy resource, that Oracle Solaris Cluster manages. In this feature, SMF manages the availability of SMF services on a single node, while Oracle Solaris Cluster provides cluster-wide high availability and scalability of those services.

For information about how to encapsulate these services, see Enabling Oracle Solaris SMF Services to Run With Oracle Solaris Cluster.
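A sketch of encapsulating one SMF service in a failover proxy resource follows. The resource group, resource, service FMRI, and file path are illustrative, and the Proxied_service_instances property name and its file format ("FMRI,manifest-path" per line) should be verified against the SMF proxy resource type documentation for your release.

```shell
# Register the failover SMF proxy resource type (once per cluster).
clresourcetype register SUNW.Proxy_SMF_failover

# List the SMF service instances to encapsulate, one per line, as
# "<FMRI>,<path-to-manifest>" (service and paths are illustrative).
cat > /var/tmp/myapp-svcs.txt <<'EOF'
svc:/application/myapp:default,/var/svc/manifest/application/myapp.xml
EOF

# Create the proxy resource in an existing failover resource group
# (group and resource names are illustrative).
clresource create -g myapp-rg -t SUNW.Proxy_SMF_failover \
    -p Proxied_service_instances=/var/tmp/myapp-svcs.txt myapp-smf-rs
```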

You might require Oracle Solaris Cluster to make highly available an application, other than NFS or DNS, that is integrated with the Oracle Solaris Service Management Facility (SMF). To ensure that Oracle Solaris Cluster can restart or fail over the application correctly after a failure, you must disable the application's SMF service instances as follows:

  • For any application other than NFS or DNS, disable the SMF service instance on all potential primary nodes for the Oracle Solaris Cluster resource that represents the application.

  • If multiple instances of the application share any component that you require Oracle Solaris Cluster to monitor, disable all service instances of the application. Examples of such components are daemons, file systems, and devices.


Note -  If you do not disable the SMF service instances of the application, both Oracle Solaris SMF and Oracle Solaris Cluster might attempt to control the startup and shutdown of the application. As a result, the behavior of the application might become unpredictable.
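For example, the following commands, run on every node that can host the application's resource, disable an application's SMF service instance so that only the RGM starts and stops it (the FMRI is illustrative):

```shell
# Disable the application's SMF service instance on this node
# (the FMRI is illustrative).
svcadm disable svc:/application/myapp:default

# Confirm that the instance is now disabled.
svcs -H -o state svc:/application/myapp:default
```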

For more information, see the following documentation: